
Key takeaways from KubeCon 2024 in Paris

KubeCon + CloudNativeCon Europe 2024 offered attendees a look into the growing popularity of AI but also covered key areas such as sustainability, observability and FinOps.

KubeCon + CloudNativeCon Europe 2024 attracted a record-breaking 12,000 attendees. Held in Paris in late March, the conference catered to Kubernetes-focused application developers and operations professionals. It underscored the growing convergence of AI and cloud-native technologies. However, key areas such as sustainability, observability and cost management -- FinOps -- also received significant attention.

Sustainability in cloud-native infrastructure

While the broader environmental, social and governance conversation has seen a shift in the past year, KubeCon underscored the continued importance of environmental sustainability, particularly relevant in the wake of AI's exponential growth. This surge in AI workloads creates an unprecedented demand for cloud resources and raises concerns about the tradeoff between achieving faster business value and maintaining operational and energy efficiency. In a keynote session, the panel discussed ways to optimize AI workloads on Kubernetes for better total cost of ownership, performance and sustainability. These include strategies such as using Arm processors for AI inferencing and optimizing the usage and performance of GPUs, which is especially important while GPUs remain scarce and expensive.

In one session, Deutsche Bahn's Gualter Barbas Baptista presented practical methods for empowering developers to make decisions that result in greener IT and lower energy consumption. The approach leverages Kubernetes' inherent green qualities, namely its fine-grained scalability, which can be further enhanced with vertical pod autoscalers for optimized resource allocation and with tools such as Kubecost for cost management. Baptista emphasized equipping developers with data on their code's environmental impact. This is achieved through monitoring tools such as Kepler, which gathers energy consumption metrics that are then visualized in Grafana dashboards.
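While the talk focused on the developer experience, the underlying plumbing is straightforward. Below is a minimal sketch of pulling Kepler's per-container energy counters out of Prometheus, the same data a Grafana dashboard would chart; the Prometheus URL, metric name and label are assumptions for illustration, not details from the session.

```python
# Sketch: pull per-namespace energy use from the Kepler metrics stored
# in Prometheus. The Prometheus URL, metric name and label are
# assumptions for illustration.
import requests

PROMETHEUS_URL = "http://prometheus.monitoring.svc:9090"  # placeholder

# Energy consumed over the last hour, summed by namespace;
# kepler_container_joules_total is Kepler's cumulative energy counter.
query = "sum by (container_namespace) (increase(kepler_container_joules_total[1h]))"

resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query", params={"query": query}, timeout=10
)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    namespace = series["metric"].get("container_namespace", "unknown")
    joules = float(series["value"][1])
    print(f"{namespace}: {joules / 3600:.2f} Wh over the last hour")
```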

While not mentioned in the talk, vendors StormForge, with Optimize Live, and NetApp, with Spot, had new tools on display aimed at optimizing the cost and sustainability of Kubernetes environments. I expect to see more of what I like to call operationalizing sustainability as we prepare for another Linux Foundation event, FinOps X, taking place in June in San Diego. That conference is focused on cost management and increasingly on carbon operations and sustainability.

Observability: Managing cloud-native complexity

The momentum in observability was evident at KubeCon 2024, where seemingly every vendor is growing at a high rate. The results of M&A activity -- Chronosphere/Calyptia, Cisco/Splunk, Cisco/Isovalent -- are coming to fruition, and significant funding rounds for Observe Inc., Honeycomb and Chronosphere highlight this momentum. Vendors dominated the show floor, showcasing offerings for managing cloud-native workloads and highlighting interoperability and integrations with other tools.

The adoption of OpenTelemetry -- actively championed by Splunk and others as a common standard for creating and collecting metrics, traces and logs in cloud-native environments -- is in full swing. Using OpenTelemetry for distributed tracing was a hot topic, as was Jaeger, an open source distributed tracing back end that follows requests as they hop between services and visualizes those request paths in an application.
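As a concrete illustration of that pairing (not taken from any session), here is a minimal sketch of instrumenting a Python service with the OpenTelemetry SDK and exporting spans over OTLP to a Jaeger collector; the service name and collector endpoint are placeholder assumptions.

```python
# Minimal OpenTelemetry tracing sketch: spans are exported over OTLP
# to a Jaeger collector. The service name and endpoint are placeholders.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Identify the service so Jaeger can group its traces.
provider = TracerProvider(
    resource=Resource.create({"service.name": "checkout-service"})
)
# Jaeger accepts OTLP natively; 4317 is the standard OTLP gRPC port.
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(endpoint="http://jaeger-collector:4317", insecure=True)
    )
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# Each unit of work gets a span; nested spans appear in the Jaeger UI
# as the request path through the application.
with tracer.start_as_current_span("handle-order") as span:
    span.set_attribute("order.items", 3)
    with tracer.start_as_current_span("charge-payment"):
        pass  # call the downstream payment service here
```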

The potential for observability in AI environments was also a main topic of discussion. Apple, a Cloud Native Computing Foundation member, delivered an excellent talk on “Intelligent Observability” and how observability is changing and needs to change with the advent of AI-enabled applications. The session offered three tenets for observability in the new environment. First, observability needs to be integrated into the AI model training process to collect metrics about training times, accuracy and resource consumption so that these can be optimized in the future.

Second, inference pipelines, the process of requesting and receiving answers from an AI model, should be measured with observability tools to capture metrics such as latency, accuracy and resource consumption.

Lastly, measuring the performance of the underlying infrastructure and its consumption of resources is important to detect problems sooner and to better optimize that performance. Currently, there are no standards for data collection and metrics for AI models. So, while these are great suggestions, firms that want to manage their AI applications like other applications will have to improvise for now. I expect observability vendors sitting on huge piles of cash to address these new needs shortly and standards to arise over time as more firms operationalize their AI investments.
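In the meantime, one way to improvise is to treat inference metrics like any other application metrics. The sketch below uses the general-purpose OpenTelemetry metrics API to record inference latency and request counts; the metric names, attributes and model call are illustrative assumptions, not an emerging standard.

```python
# Improvised sketch: record inference-pipeline metrics with the
# general-purpose OpenTelemetry metrics API. Metric and attribute
# names are illustrative assumptions, and a MeterProvider with an
# exporter is assumed to be configured elsewhere via the SDK.
import time
from opentelemetry import metrics

meter = metrics.get_meter("inference-pipeline")

latency_ms = meter.create_histogram(
    "inference.latency", unit="ms", description="End-to-end inference latency"
)
requests_total = meter.create_counter(
    "inference.requests", description="Inference requests served"
)

def run_inference(model, prompt):
    start = time.monotonic()
    result = model.generate(prompt)  # placeholder for the real model call
    elapsed_ms = (time.monotonic() - start) * 1000
    attrs = {"model.name": "demo-model", "model.version": "v1"}
    latency_ms.record(elapsed_ms, attributes=attrs)
    requests_total.add(1, attributes=attrs)
    return result
```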

Extended Berkeley Packet Filter (eBPF)-based approaches to observability also saw some traction. Groundcover, an Israeli startup, offers a budget-friendly approach to observability leveraging eBPF. In the observability context, eBPF enables deep visibility into the Linux kernel with minimal performance impact, no modification to the kernel and native Kubernetes integration, offering another, more granular approach to cloud-native observability.
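To show what that kernel-level visibility looks like in practice, the following generic sketch (not Groundcover's implementation) uses the BCC toolkit to attach an eBPF kprobe to the kernel's tcp_v4_connect function and count outbound TCP connections per process, without touching the kernel or the applications being observed.

```python
# Generic eBPF sketch using the BCC toolkit (assumed tooling, not
# Groundcover's product): a kprobe on the kernel's tcp_v4_connect
# counts outbound TCP connections per process, with no kernel
# modifications and no changes to the monitored applications.
import time
from bcc import BPF

program = r"""
BPF_HASH(connect_count, u32, u64);

int trace_connect(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;  // upper 32 bits hold the PID
    u64 zero = 0, *count;
    count = connect_count.lookup_or_try_init(&pid, &zero);
    if (count) {
        (*count)++;
    }
    return 0;
}
"""

b = BPF(text=program)
b.attach_kprobe(event="tcp_v4_connect", fn_name="trace_connect")

print("Counting outbound TCP connects per PID for 10 seconds...")
time.sleep(10)

for pid, count in b["connect_count"].items():
    print(f"pid {pid.value}: {count.value} connect() calls")
```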

Signs point to a greener, more observable cloud-native future

KubeCon 2024 in Paris was a window into the evolving cloud-native management landscape. While AI and its resource demands became a new and central focus this year, sustainability and observability are becoming more crucial. The emphasis on empowering developers with data-driven insights for greener coding and workload placement, along with the growing adoption of standards such as OpenTelemetry, signifies a maturing cloud-native ecosystem.

We are heading toward FinOps X in June, and it will be interesting to see how cost optimization strategies integrate with the growing focus on operationalizing sustainability. This convergence signals a future where environmental impact becomes a core metric alongside cost and performance in managing cloud-native deployments -- AI-enabled or not.

Jon Brown is a senior analyst at TechTarget's Enterprise Strategy Group, where he researches IT operations and sustainability in IT. Jon has more than 20 years of experience in IT product management and is a frequent speaker at industry events.
