Of course we cannot always share details about our work with customers, but it is nice to show our technical achievements and share some of our implemented solutions.
The default Ingress controller in a Kubernetes cluster deployed by Rancher/RKE is Nginx. It handles incoming HTTP/HTTPS connections, hence the name "Ingress". The configuration options for the Ingress (Nginx) are read from so-called config maps.
In Rancher these config maps can be seen and edited in the System project by selecting Resources -> Config. Under the ingress-nginx namespace, there should be a couple of config maps already defined - including the "nginx-configuration" config map, which is used to configure the Nginx Ingress:
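The same config maps can also be inspected on the command line. A minimal sketch using kubectl, assuming direct kubectl access to the downstream cluster:

$ kubectl -n ingress-nginx get configmaps
$ kubectl -n ingress-nginx get configmap nginx-configuration -o yaml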
A known problem, mentioned in Rancher issue #30127, is that the nginx-configuration config map loses all its entries after a Kubernetes update. The solution is to define all these Nginx config options directly in the cluster's YAML file (cluster.yaml).
In Rancher 2, this cluster.yaml can be edited by selecting the relevant cluster, opening the three-dot menu and clicking Edit:
Instead of using any of the form fields, click on the "Edit as YAML" button. This opens the cluster's YAML configuration inside the window. There are many configuration options available; relevant in our case are the Nginx options for the Ingress service. The following shows an example:
ingress:
  provider: nginx
  options:
    map-hash-bucket-size: "128"
    ssl-protocols: SSLv2
  extra_args:
    enable-ssl-passthrough: ""
If you're used to configuring Nginx, you basically use the Nginx configuration options inside the YAML's "options:" context. But be careful: the names sometimes differ. An overview of the available Nginx Ingress options helps to understand and find the differences.
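As an example of such a naming difference (a sketch; proxy-read-timeout is only used here for illustration), directives written with underscores in nginx.conf become dashed keys in the Ingress options:

# nginx.conf directive       ->  Ingress option key
map_hash_bucket_size 128;    ->  map-hash-bucket-size: "128"
proxy_read_timeout 60;       ->  proxy-read-timeout: "60"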
Adding the current keys and values from the nginx-configuration config map above into the cluster.yaml results in the following:
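A sketch of such a merged ingress section (the actual keys and values depend on the entries of your own nginx-configuration config map; proxy-read-timeout below is purely illustrative):

ingress:
  provider: nginx
  options:
    map-hash-bucket-size: "128"
    ssl-protocols: SSLv2
    proxy-read-timeout: "3600"
  extra_args:
    enable-ssl-passthrough: ""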
After Rancher 2.5.10 was updated to 2.5.12 and Kubernetes from 1.20.8 to 1.20.15, the Nginx Ingress once again started with the default settings - even though the Ingress options were set in the cluster.yaml.
The reason for this is a new bug in RKE, currently discussed in Rancher issue #36484 and RKE issue #2834.
What happened after the upgrade was a name change of the relevant config map:
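In short, the config map name changed as follows:

Rancher 2.5.10 / Kubernetes 1.20.8:   nginx-configuration
Rancher 2.5.12 / Kubernetes 1.20.15:  ingress-nginx-controller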
By taking a closer look at the "nginx-ingress-controller" daemon set, we can spot the difference in the configuration. In Rancher 2.5.10 with Kubernetes 1.20.8, the --configmap parameter points to nginx-configuration:
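This can be verified with kubectl (a sketch; the exact output formatting may differ):

$ kubectl -n ingress-nginx get ds nginx-ingress-controller -o yaml | grep configmap
        - --configmap=$(POD_NAMESPACE)/nginx-configuration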
But after the update to Rancher 2.5.12 and Kubernetes 1.20.15, this parameter has changed and is now looking for the "ingress-nginx-controller" config map:
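Verified the same way (again a sketch):

$ kubectl -n ingress-nginx get ds nginx-ingress-controller -o yaml | grep configmap
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller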
The problem with this? RKE, which is used to deploy and configure the downstream cluster from the Rancher/management cluster, still applies the Ingress options from the cluster.yaml to the old nginx-configuration config map.
This results in two config maps on an updated cluster:
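A sketch of how this looks with kubectl (the DATA and AGE values are illustrative):

$ kubectl -n ingress-nginx get configmaps
NAME                       DATA   AGE
ingress-nginx-controller   0      5d
nginx-configuration        5      2y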
The current workaround is to manually apply all the configs from nginx-configuration into the new ingress-nginx-controller config map - until this is fixed in RKE.
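A minimal sketch of that manual workaround, assuming the entries should be carried over one-to-one:

# show the entries still sitting in the old config map
$ kubectl -n ingress-nginx get configmap nginx-configuration -o yaml
# add them under the data section of the new config map
$ kubectl -n ingress-nginx edit configmap ingress-nginx-controller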
Although Kubernetes is currently called "the de-facto container infrastructure", it is anything but easy. The complexity adds additional problems and considerations. We at Infiniroot love to share our troubleshooting knowledge when we need to tackle certain issues - but we also know this is not for everyone ("it just needs to work"). So if you are looking for a managed and dedicated Kubernetes environment, managed by Rancher 2, with server location in Switzerland or even in your own on-premise data center, check out our Private Kubernetes Container Cloud Infrastructure service at Infiniroot.