How to configure jwilder/nginx-proxy, letsencrypt, and volumes in Kubernetes (Azure)



Still working my way through Docker Compose and Kubernetes config files and trying to map Docker Compose -> Kubernetes.

I’m currently looking at the proxy that sits in front of Kestrel, how I would set that up in Azure (using Kubernetes), and how some of the mount points work.

In particular, the volume mounts I’m confused about are the /etc/squidex ones for the nginx-proxy, and how I would replicate them in Azure, probably using an Azure storage account:

- /etc/squidex/nginx/vhost.d:/etc/nginx/vhost.d
- /etc/squidex/nginx/certs:/etc/nginx/certs:ro
- /etc/squidex/nginx/html:/usr/share/nginx/html
- /var/run/docker.sock:/tmp/docker.sock:ro

Another interesting thing is that the Kubernetes config uses neither jwilder/nginx-proxy nor letsencrypt. A Service of type LoadBalancer is configured, but that only tells Azure to provision a load balancer. Does the Azure load balancer replace jwilder/nginx-proxy here, so that the nginx proxy is not needed? Or can both be used in conjunction? And what about Let’s Encrypt: how do you configure that in Azure and in your k8s manifest file?
And again, I’m not sure how I would configure the volumes for letsencrypt:
- /etc/squidex/nginx/certs:/etc/nginx/certs:rw
- /var/run/docker.sock:/var/run/docker.sock:ro
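For reference, the Service of type LoadBalancer I mentioned looks roughly like this (the names and ports are placeholders, not the actual manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: squidex
spec:
  # "LoadBalancer" tells the cloud provider (Azure in this case)
  # to provision an external load balancer for this Service
  type: LoadBalancer
  selector:
    app: squidex
  ports:
    - port: 80
      targetPort: 80
```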



The proxy is not needed in Kubernetes. In the Docker Compose setup it solves three problems:

  1. Securing Kestrel. With .NET Core 2.1, Kestrel became production-ready, so this is not needed anymore.
  2. HTTPS termination. (This should also be supported by Kestrel, but I have not tested it yet.)
  3. Multiple applications listening on port 80.

The volumes in nginx are used to store the certificates and to give the nginx proxy access to the Docker daemon (the docker.sock mount). What the proxy does is watch all other containers: if a container has a VIRTUAL_HOST environment variable, the proxy listens for that host and forwards all requests to this container. The letsencrypt container works in a similar way and uses the LETSENCRYPT_XXX environment variables to issue the certificates.
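A minimal sketch of how those environment variables are wired up in Docker Compose (the domain and email are placeholders):

```yaml
version: '3'
services:
  squidex:
    image: squidex/squidex
    environment:
      # nginx-proxy watches containers via docker.sock and routes
      # requests for this hostname to the container
      - VIRTUAL_HOST=squidex.example.com
      # the letsencrypt companion container issues a certificate
      # for this hostname and stores it in the shared certs volume
      - LETSENCRYPT_HOST=squidex.example.com
      - LETSENCRYPT_EMAIL=admin@example.com
```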

In kubernetes it depends on your setup.

  1. An easy approach would be to use Cloudflare in front of your services and terminate HTTPS there. Of course you do not get end-to-end encryption then, but it is easy and might be good enough.

  2. Another approach is to use an ingress controller.

  3. The third approach is to enable HTTPS in Squidex directly and integrate Let’s Encrypt, but you would have to make a PR for that.
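For the ingress controller approach, an Ingress resource takes over the role that VIRTUAL_HOST plays in the Compose setup. A sketch, assuming an nginx ingress controller, a Service named squidex on port 80, and a cert-manager issuer named letsencrypt-prod (all of these names are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: squidex
  annotations:
    # tells cert-manager to obtain a certificate for the TLS hosts below
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - squidex.example.com
      secretName: squidex-tls  # cert-manager stores the certificate here
  rules:
    - host: squidex.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: squidex
                port:
                  number: 80
```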

Btw: Kubernetes is complicated and expensive, especially if you want failover and replication. If you do not need to scale, you can get a €10/month server with preinstalled Docker and set up Squidex within minutes. It is way cheaper. Or use our cloud, which is also cheaper ;).


For future readers, I’d like to add an additional alternative regarding Kubernetes:
Use the nginx-ingress Helm chart together with cert-manager. With these, the whole setup is very easy. :slight_smile:
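With cert-manager installed, the only extra resource you need is an issuer that points at the Let’s Encrypt ACME endpoint. A sketch (the issuer name and email are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # the production Let's Encrypt ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      # Secret where cert-manager stores the ACME account key
      name: letsencrypt-prod-account-key
    solvers:
      # answer HTTP-01 challenges through the nginx ingress controller
      - http01:
          ingress:
            class: nginx
```

Ingress resources annotated with `cert-manager.io/cluster-issuer: letsencrypt-prod` then get their certificates issued and renewed automatically.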


In addition to the above I would suggest checking

It basically walks you through creating the aforementioned ingress and cert-manager setup.