Kubernetes made my life much, much worse

Kubernetes was supposed to make managing distributed applications and infrastructure easy. No more worrying about servers or resilience: just deploy code and let Kubernetes handle the operational complexities like scaling, failovers, and load balancing.

But the benefits don’t come for free. Kubernetes, alongside the distributed architecture of microservice-based applications, introduces complexity that makes debugging failures vastly more challenging.

SRE and DevOps teams need new approaches to identify and resolve production issues quickly. Here’s why – and what to do about it.

The curse of complexity

Kubernetes has become the de facto standard for container orchestration, adopted by 88% of companies in production environments according to the Cloud Native Computing Foundation.

The benefits are clear: it abstracts away infrastructure so you can easily deploy containerized applications and scale them on demand. Kubernetes takes care of scheduling containers across clusters, restarting failed containers, load balancing, secrets management, and more. As a platform, it provides greater flexibility, scalability, and resilience than traditional monolithic architectures.

But Kubernetes comes with challenges – with the complexity of troubleshooting failures frequently cited by users as a major pain point.

We’ve seen SREs spend days trying to hunt down the root cause of issues in a Kubernetes environment. A service is timing out – is it a network problem, an overloaded pod, or a software bug? Containers keep crashing – is there a configuration error, resource constraint, or faulty application code? Bugs that would take minutes to debug in a monolithic app can take days to track down in Kubernetes.

Common Kubernetes debugging hurdles include:

Lack of visibility

Kubernetes spreads workloads across the nodes in a cluster. A single service can have pods scattered across multiple nodes. This distributed nature means you lack visibility into the entire system from a single vantage point. Traditional monitoring gives you metrics for individual pods or services, not the correlations and interactions between them. It’s like debugging with blinders on.

Moreover, because the application is distributed, its behavior depends heavily on how Kubernetes decides to orchestrate it, which blurs the line between application issues and infrastructure issues.

For example, say a web service dependent on a cache service is failing. Metrics show the web pods are overloaded. But traditional monitoring won’t highlight that the cache service pods on different nodes are also degraded, causing increased load on the web tier.

Microservices architectures rely on small, preferably immutable units whose interactions flow through APIs. Combined with the way Kubernetes schedules those units, this means that without visibility across nodes, you quickly lose sight of the root cause.
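Even something as basic as seeing where a service’s pods actually landed can make the “scattered across nodes” problem concrete. Below is a minimal sketch using the official Kubernetes Python client; the production namespace and the app=web label selector are placeholders for illustration, not a prescribed setup.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (use load_incluster_config() inside a pod).
config.load_kube_config()
v1 = client.CoreV1Api()

# Hypothetical selector: pods belonging to the "web" service in the "production" namespace.
pods = v1.list_namespaced_pod(namespace="production", label_selector="app=web")

for pod in pods.items:
    restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
    print(
        f"{pod.metadata.name:40s} "
        f"node={pod.spec.node_name or 'unscheduled':20s} "
        f"phase={pod.status.phase} restarts={restarts}"
    )
```

A pod-to-node view like this won’t find a root cause on its own, but it often reveals patterns – all the restarts on one node, or the web pods colocated with a degraded dependency – that per-pod metrics hide.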

Ephemeral environments

Pods have short lifetimes. They are constantly rescheduled across nodes based on resource needs. New pods are spun up for deployments while old ones are terminated. Issues present in one pod instance quickly disappear as the pod is killed. 

This ephemeral nature makes reproducing and tracing failures feel like whack-a-mole. For instance, a bug that causes occasional crashes will disappear into thin air as the pod restarts. Without tracing data showing the crash starting from the root request, it’s virtually impossible to reconstruct and debug.
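One small mitigation: when a container crashes and restarts, the kubelet keeps the logs of the immediately previous instance, and you can grab them before the pod moves on. Here is a minimal sketch with the Kubernetes Python client; the pod, namespace, and container names are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Placeholder identifiers for illustration.
namespace, pod_name, container = "production", "web-7f9c6d5b4-abcde", "web"

# previous=True returns the logs of the last terminated instance of the container,
# which is often the only trace left of a crash once the pod restarts.
crash_logs = v1.read_namespaced_pod_log(
    name=pod_name,
    namespace=namespace,
    container=container,
    previous=True,
    tail_lines=200,
)
print(crash_logs)
```

This only recovers the most recent crash of a pod that still exists; once the pod is rescheduled or deleted, even that is gone – which is why tracing back to the originating request matters so much.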

Complex networking

Pods and services communicate across a complex network fabric created by Kubernetes networking components like CoreDNS, kube-proxy, CNI plugins, etc. Connectivity issues could stem from misconfigurations in this network fabric, routing issues, DNS failures, firewall rules, and more.

Because Kubernetes abstracts away much of the networking layer, traditional monitoring techniques are even harder to apply to it.

Debugging networking issues requires visibility into Kubernetes network traffic flows, topology, and configurations – data that traditional monitoring lacks. Without this context, network issues turn into dead ends.
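A common first check when a service seems unreachable is whether its Service object has any healthy endpoints behind it: a bad selector, failing readiness probes, or a broken CNI all show up as an empty or not-ready Endpoints list. A minimal sketch, with the service name and namespace assumed for illustration:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Hypothetical service to inspect.
namespace, service_name = "production", "cache"

endpoints = v1.read_namespaced_endpoints(name=service_name, namespace=namespace)

ready = [
    addr.ip
    for subset in (endpoints.subsets or [])
    for addr in (subset.addresses or [])
]
not_ready = [
    addr.ip
    for subset in (endpoints.subsets or [])
    for addr in (subset.not_ready_addresses or [])
]

print(f"{service_name}: {len(ready)} ready endpoints, {len(not_ready)} not ready")
if not ready:
    print("No ready endpoints - check the Service selector, readiness probes, and pod status.")
```

This doesn’t explain why traffic is failing, but it quickly separates “no backends at all” from routing, DNS, or policy problems further down the stack.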

Shared resources

In Kubernetes, workload containers share compute resources on nodes and storage resources on volumes. This sharing creates contention that can cause unexpected resource starvation or throttling issues.

Slowdowns caused by resource contention are hard to distinguish from things like memory leaks or scaling problems using traditional metrics. For example, high CPU usage in a container could stem from the application itself (say, a memory leak driving constant garbage collection) or from a “noisy neighbor” container on the same node hogging shared CPU.
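To judge whether a noisy neighbor is even plausible, it helps to compare how much CPU the pods on each node have requested against what the node can actually hand out. The sketch below sums CPU requests per node with the Kubernetes Python client; note that it looks at requests, not live usage, so it flags over-commitment rather than proving contention.

```python
from collections import defaultdict
from kubernetes import client, config

def cpu_to_millicores(quantity: str) -> int:
    """Convert a Kubernetes CPU quantity ('250m', '2') to millicores."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(float(quantity) * 1000)

config.load_kube_config()
v1 = client.CoreV1Api()

# Sum the CPU *requests* of all scheduled pods, grouped by node.
requested = defaultdict(int)
for pod in v1.list_pod_for_all_namespaces().items:
    if not pod.spec.node_name:
        continue
    for container in pod.spec.containers:
        requests = (container.resources and container.resources.requests) or {}
        requested[pod.spec.node_name] += cpu_to_millicores(requests.get("cpu", "0"))

# Compare against what each node can actually allocate.
for node in v1.list_node().items:
    allocatable = cpu_to_millicores(node.status.allocatable["cpu"])
    used = requested[node.metadata.name]
    pct = 100 * used // max(allocatable, 1)
    print(f"{node.metadata.name}: {used}m requested of {allocatable}m allocatable ({pct}%)")
```

A node whose requests sit near or above its allocatable CPU is a strong hint that throttling, rather than the application itself, explains the slowdown.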

Blackbox containers

Traditional debugging tools centered around logs, APM agents, or JMX don’t provide full visibility in Kubernetes. You can’t easily “SSH into a container” – they are isolated and ephemeral. Agents only give metrics from inside the container without kernel or host context.

This blackbox design means you have limited visibility into things like network calls between containers, host resource contention, and kernel exceptions.

Automation complexity

Kubernetes enables fully automated CI/CD deployments. But this automation also hinders debugging by making it hard to reconstruct past failures. It leaves little room for a human to manually recover state or diagnose an issue postmortem.

For example, rolling back to a previous version may be impossible if the deployment automation already terminated older pods. And the automation itself could be causing failures that are hard to detect. Without tracing showing the full deployment process, the root cause could stay hidden.
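Before assuming a rollback target still exists, you can check which revisions of a Deployment are still around: each revision is kept as a ReplicaSet annotated with deployment.kubernetes.io/revision, up to the Deployment’s revisionHistoryLimit. A minimal sketch, where the namespace and app=web label selector are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Hypothetical deployment to inspect.
namespace, selector = "production", "app=web"

replica_sets = apps.list_namespaced_replica_set(namespace=namespace, label_selector=selector)

# Sort by the revision number recorded by the Deployment controller.
for rs in sorted(
    replica_sets.items,
    key=lambda r: int(r.metadata.annotations.get("deployment.kubernetes.io/revision", 0)),
):
    revision = rs.metadata.annotations.get("deployment.kubernetes.io/revision", "?")
    image = rs.spec.template.spec.containers[0].image
    print(f"revision {revision}: {image} "
          f"(replicas={rs.status.replicas or 0}, ready={rs.status.ready_replicas or 0})")
```

If only the newest revision remains, or the older ReplicaSets have been garbage-collected, a quick “roll back and compare” is off the table and you’re left reconstructing the failed deployment from whatever traces exist.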

How Senser helps

The combination of these factors means debugging in Kubernetes requires new approaches. Traditional logging, tracing, and metrics tools designed for monoliths don’t provide the combined system-level and code-level observability needed. You end up trying to debug based on coarse alerts and surface-level symptoms.

This is where eBPF and machine learning (ML) come in. eBPF lets you deeply inspect your environment without high overhead, giving you both infrastructure-level and application-level visibility. ML helps you cut through the complexity, showing you precisely why things are failing and suggesting fixes.
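To make the eBPF part concrete, here is a toy sketch using the BCC Python bindings (which require root and kernel headers on the host). It attaches a kprobe to tcp_v4_connect and counts outbound TCP connection attempts per process – not Senser’s implementation, just an illustration of the kind of kernel-level signal you can collect without instrumenting the application containers.

```python
from time import sleep
from bcc import BPF  # BCC's Python bindings

# Kernel-side program: count tcp_v4_connect() calls per process ID.
bpf_text = r"""
#include <uapi/linux/ptrace.h>

BPF_HASH(connect_count, u32, u64);

int trace_connect(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    connect_count.increment(pid);
    return 0;
}
"""

b = BPF(text=bpf_text)
b.attach_kprobe(event="tcp_v4_connect", fn_name="trace_connect")

print("Counting outbound TCP connection attempts for 10s...")
sleep(10)

for pid, count in sorted(b["connect_count"].items(), key=lambda kv: -kv[1].value):
    print(f"pid {pid.value}: {count.value} connect() calls")
```

A production-grade platform collects far richer signals than this, but the principle is the same: observe from the kernel rather than from inside each container.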

With Senser, we leverage these technologies to offer a purpose-built solution for cloud-native analysis. Our platform auto-discovers your microservices and correlates metrics, traces, and logs to provide context for troubleshooting. ML detects anomalies, patterns, and interaction correlations that point to the root cause. You get precise answers instead of alert floods.

Kubernetes may have made our lives harder in some ways. But with the right solutions tailored to this new environment, teams can reap all the benefits of Kubernetes – without drowning in complexity when things go wrong.

Learn more about how Senser helps SRE and DevOps teams go from production chaos to intelligence.