Using KubeVirt with Cilium's kube-proxy Replacement Mode

March 30, 2025

A few months back I set up a new Kubernetes cluster for self-hosting and decided on Cilium as the CNI implementation, partly to learn more about Cilium (it’s a project with a rather large scope), but also because it has some neat features, including cluster-wide firewalling and Hubble, a UI for visualizing network connections within the cluster.

So far it’s worked well, but I recently ran into an issue while experimenting with KubeVirt. Cilium is installed as a replacement for kube-proxy, a mode that relies on a feature called “socket-LB” to transparently bypass Kubernetes service-level NAT: service addresses are translated at the socket level when a connection is made, rather than per-packet. Unfortunately, this feature breaks the cluster-level networking of KubeVirt virtual machines. As I understand it, traffic from a VM passes through the virt-launcher pod without making socket calls in the pod’s namespace, so service addresses are never translated; my VMs were able to access external IPs but not services within the cluster (including internal DNS or the public interface of exposed services). After some searching I was able to fix it with the following steps:

  1. Enable the hostNamespaceOnly option in Cilium; if you’re using the Helm chart, set the value:

    socketLB:
      hostNamespaceOnly: true
    

    If you’re installing by another method (e.g. Kustomize), this Helm value appears to map to the option bpf-lb-sock-hostns-only: "true" in the cilium-config ConfigMap (see the command sketch after this list).

    The option is rather hidden in the Cilium documentation on kube-proxy replacement mode, under the heading Socket LoadBalancer Bypass in Pod Namespace. I originally found it mentioned in a GitHub issue about networking not working with KubeVirt.

  2. Restart the Cilium pods! I spent quite some time wondering why the networking was still broken after applying the fix; it turns out the config isn’t reloaded until the Cilium pods are restarted. My assumption was that updating the Helm release would apply the change, but it seems to just update the ConfigMap (the restart commands are sketched after this list). After Cilium is restarted you can verify that the config is updated by running kubectl -n kube-system exec ds/cilium -- cilium-dbg status --verbose; under the KubeProxyReplacement Details header you’ll see Socket LB Coverage:

    [...]
    KubeProxyReplacement Details:
      Status:                 True
      Socket LB:              Enabled
      Socket LB Tracing:      Enabled
      Socket LB Coverage:     Hostns-only
    [...]
    
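To make step 1 concrete, here’s roughly what applying the setting looks like from the command line. This is a sketch: the release name cilium and the kube-system namespace are assumptions based on a typical install, so adjust both for yours.

    # Helm-based install: set the value on the existing release
    # (assumes a release named "cilium" in the kube-system namespace)
    helm upgrade cilium cilium/cilium -n kube-system \
      --reuse-values \
      --set socketLB.hostNamespaceOnly=true

    # Non-Helm install: set the equivalent key on the cilium-config
    # ConfigMap directly
    kubectl -n kube-system patch configmap cilium-config --type merge \
      -p '{"data":{"bpf-lb-sock-hostns-only":"true"}}'
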
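And the restart from step 2; a rolling restart of the agent DaemonSet is enough, since the pods only read the ConfigMap at startup:

    # Roll the Cilium agent pods so they pick up the new config,
    # then wait for the rollout to complete
    kubectl -n kube-system rollout restart ds/cilium
    kubectl -n kube-system rollout status ds/cilium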

Since socket-LB is designed to skip some per-packet processing, it’s likely that enabling the hostNamespaceOnly option comes at some efficiency cost for regular pod traffic; however, my cluster and use case are small enough that I don’t expect it to have a noticeable impact.
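
As a final check, you can confirm from inside a VM that cluster services are reachable again. A minimal sketch, assuming a VM named testvm and a guest image that ships the usual DNS tools:

    # Open a console on the VM (requires the virtctl CLI)
    virtctl console testvm

    # Inside the guest: cluster-internal DNS should resolve again
    nslookup kubernetes.default.svc.cluster.local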