Kubernetes v1.36: Mixed Version Proxy Reaches Beta – Smoother Upgrades Ahead

Kubernetes cluster upgrades just got a major safety boost. The Mixed Version Proxy (MVP), first introduced as an alpha feature in version 1.28, is graduating to beta with Kubernetes 1.36 and will be enabled by default. The feature addresses a critical pain point: when a request hits an older API server that doesn't recognize a new resource, it gets seamlessly forwarded to a peer server that does—instead of coming back with a misleading 404 error. Over the intervening releases, the team has refined MVP's architecture, moving from the StorageVersion API to Aggregated Discovery for better compatibility. In this Q&A, we'll break down what MVP is, how it works, and what it means for your upgrade strategy.

What is the Mixed Version Proxy and why was it created?

The Mixed Version Proxy (MVP) is a Kubernetes feature designed to make control plane upgrades safer and more reliable. During an upgrade, especially in a highly available setup, you often have API servers running different versions side by side. These servers may serve different sets of APIs (groups, versions, resources). Without MVP, if a client request lands on an API server that doesn't serve a requested resource—like a brand-new API version—that server would return a 404 Not Found. That's technically wrong because the resource does exist in the cluster, just on a different server. Such false negatives can trigger serious side effects, like mistaken garbage collection or blocked namespace deletions. MVP solves this by proxying the request from the server that cannot handle it to a peer that can, ensuring the client gets the correct response. Initially introduced as an alpha feature under the gate UnknownVersionInteroperabilityProxy, it has now matured to beta.

How does the Mixed Version Proxy work technically?

When a client sends a request to an API server that cannot serve the resource locally, MVP kicks in. The flow is straightforward:

  1. The client request arrives at API Server A (older or different version).
  2. Server A checks its discovery cache to find a peer that can serve the requested resource.
  3. It forwards the request to that peer (API Server B), adding the x-kubernetes-peer-proxied header.
  4. Server B processes the request locally and returns the response to Server A.
  5. Server A forwards the response back to the client.

This proxy layer is transparent to the client—no extra configuration is needed. The key enabler is the discovery cache, which is now populated using Aggregated Discovery (see next question). This ensures that even resources from custom resource definitions (CRDs) or aggregated APIs are correctly identified.
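
The five steps above can be sketched in a few lines. This is an illustrative model, not the actual kube-apiserver code: `LOCAL_RESOURCES`, `PEER_DISCOVERY_CACHE`, and `handle_request` are hypothetical stand-ins for the server's internal state, and the only detail carried over from the flow above is the `x-kubernetes-peer-proxied` header.

```python
# Illustrative sketch of the peer-proxy decision described above.
# All names here are hypothetical stand-ins, not upstream Kubernetes code.

PEER_PROXY_HEADER = "x-kubernetes-peer-proxied"

# Resources this API server ("Server A") can serve locally.
LOCAL_RESOURCES = {("apps/v1", "deployments"), ("v1", "pods")}

# Discovery cache: which peer serves which (group/version, resource).
PEER_DISCOVERY_CACHE = {
    ("networking.k8s.io/v1alpha2", "newthings"): "https://server-b:6443",
}

def handle_request(group_version, resource, headers):
    """Serve locally if possible; otherwise proxy to a capable peer."""
    if (group_version, resource) in LOCAL_RESOURCES:
        return {"status": 200, "served_by": "server-a"}
    peer = PEER_DISCOVERY_CACHE.get((group_version, resource))
    if peer is None:
        # No peer serves it either: this 404 is now genuinely correct.
        return {"status": 404, "served_by": "server-a"}
    if PEER_PROXY_HEADER in headers:
        # Request was already proxied once; don't bounce between peers.
        return {"status": 503, "served_by": "server-a"}
    # Forward to the peer, marking the request as proxied.
    forwarded_headers = dict(headers, **{PEER_PROXY_HEADER: "true"})
    return {"status": 200, "served_by": peer, "headers": forwarded_headers}
```

The header check mirrors the idea that a request should be proxied at most one hop; the exact status returned when a loop is detected is an assumption of this sketch, not something the article specifies.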

How did MVP evolve from Alpha to Beta?

The initial alpha implementation in Kubernetes 1.28 was a solid proof of concept, but it had limitations. The most significant was its reliance on the StorageVersion API to determine which peers served which resources. That approach didn't work for Custom Resource Definitions (CRDs) or aggregated APIs because the StorageVersion API didn't support them. For the beta graduation in 1.36, the team replaced StorageVersion with Aggregated Discovery. Now, each API server uses aggregated discovery data to dynamically understand the capabilities of its peers—CRDs included. This change not only broadens compatibility but also modernizes the architecture. Another gap noted in the 1.28 post was that while resource requests could be proxied, discovery requests still only reflected the local server's view. In beta, that's been addressed as well: the aggregated discovery information is shared among peers, so clients get a complete picture of available APIs regardless of which server they hit.
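
To make the discovery-based approach concrete, here is a minimal sketch of how a server could fold its peers' aggregated discovery documents into a lookup cache. The document shape is deliberately simplified (the real wire type is richer), and `build_peer_cache` is a hypothetical helper, not an upstream function.

```python
# Hypothetical sketch: build a (group/version, resource) -> peer map from
# simplified per-peer discovery documents. Because the data comes from
# discovery rather than the StorageVersion API, CRD-served resources show
# up here exactly like built-in ones.

def build_peer_cache(peer_discovery_docs):
    """peer_discovery_docs maps a peer address to its discovery document."""
    cache = {}
    for peer_addr, doc in peer_discovery_docs.items():
        for group in doc["groups"]:
            name = group["name"]  # empty string denotes the core group
            for version in group["versions"]:
                ver = version["version"]
                gv = f"{name}/{ver}" if name else ver
                for res in version["resources"]:
                    # First peer seen for a resource wins in this sketch.
                    cache.setdefault((gv, res), peer_addr)
    return cache
```

With this map in hand, the proxy step reduces to a dictionary lookup, which is why refreshing the cache as peers upgrade is the part that matters most.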

What are the benefits of MVP being enabled by default in 1.36?

Having MVP enabled by default in Kubernetes 1.36 means that cluster operators no longer need to manually flip a feature gate to get safer upgrades. The main benefits include:

  • Elimination of false 404 errors while the control plane runs mixed versions, preventing unintended side effects such as mistaken garbage collection or blocked namespace deletions.
  • Seamless client experience—applications and controllers simply continue to work without modification.
  • Zero configuration required for most clusters; the proxy logic is built into the API server.
  • Support for CRDs and aggregated APIs thanks to the switch to Aggregated Discovery.

For anyone managing a multi-version upgrade, especially in large environments with rolling updates, this default-on behavior reduces the risk of service disruption and debugging headaches.

Are there any considerations or caveats for users?

While MVP greatly improves upgrade safety, there are a few points to keep in mind:

  • Network latency: Proxying adds a small overhead because requests travel between API servers. In most clusters this is negligible, but operators of extremely latency-sensitive workloads should test the impact.
  • Single control plane nodes: MVP is primarily beneficial in highly available (multi-master) setups. If you have a single API server, version skew isn't an issue during upgrade.
  • Feature gate: In 1.36, MVP is enabled by default, but you can still disable it via the UnknownVersionInteroperabilityProxy feature gate if needed (e.g., for troubleshooting).
  • Audit logs: Proxied requests generate audit events on both the proxying and the target server. Ensure your audit policy handles this appropriately.
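
For reference, the relevant kube-apiserver flags look roughly like this. Treat it as a hedged example: flag availability and defaults can vary by release and distribution, so check your cluster's documentation before applying; the file paths and addresses below are placeholders.

```shell
# Explicitly disable MVP (it is on by default in this release):
kube-apiserver --feature-gates=UnknownVersionInteroperabilityProxy=false  # ...plus your usual flags

# Peer-connectivity flags introduced alongside the 1.28 alpha
# (placeholder values shown):
kube-apiserver \
  --peer-ca-file=/etc/kubernetes/pki/peer-ca.crt \
  --peer-advertise-ip=10.0.0.11 \
  --peer-advertise-port=6443  # ...plus your usual flags
```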

Overall, the benefits far outweigh the costs, and the Kubernetes community recommends leaving MVP enabled for all production clusters.

What's next for the Mixed Version Proxy?

With beta graduation in 1.36, the MVP feature is on track to become generally available (GA) in a future release. The team is focusing on performance optimizations and further hardening. Potential future enhancements include smarter caching of peer capabilities to reduce discovery overhead, and better integration with cluster autoscaling and upgrade tooling like kubeadm. Users are encouraged to test MVP in their own environments and provide feedback. As Kubernetes upgrades become more frequent and complex, features like MVP are essential to maintaining stability. You can follow KEP-4474 for the latest updates.
