Docker: why one process per container does not always work

My “permanent” SSL certificates are expiring soon, so I decided to switch to letsencrypt. The easiest way to obtain their certificates is allegedly to use Certbot.

There is a nice docker image for Certbot, but even the authors caution against using it, unless you really know what you’re doing.

When certbot runs inside the web server’s container, it can do everything automatically. When it runs outside, a lot of things must be done by hand, but the biggest problems are fighting over who controls port 80, and web server restarts.

Certbot needs to listen for incoming HTTP requests in order to prove to letsencrypt that you own the domain you claim to own. When it runs inside, it modifies the server config and ensures the proper challenge files are served, so everything just works. When it runs outside, it cannot modify the config.
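To make the port-80 conflict concrete, here is roughly what running certbot from outside looks like in its “standalone” mode, where it spins up its own temporary web server to answer the challenge (example.com is a placeholder domain; the volume path is one common convention, not a requirement):

```shell
# Certbot in standalone mode binds port 80 itself to answer the
# HTTP-01 challenge. This fails if the real web server container
# is already publishing port 80 on the host.
docker run --rm \
  -p 80:80 \
  -v /etc/letsencrypt:/etc/letsencrypt \
  certbot/certbot certonly --standalone -d example.com
```

The --rm flag is exactly why the container is short-lived: it exists only for the duration of the challenge and is removed immediately afterwards.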

If we want to keep the main web site running while certbot is working, we somehow need to proxy incoming HTTP requests to certbot. This would not be very hard if certbot lived in a constantly running container, but in reality it is executed via a docker run --rm command, so its container is short-lived. Proxying to such a thing is probably possible, but not easy.
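For illustration, if certbot did live in a long-running container named, say, certbot on the same Docker network, the proxy rule on the web server side might look like this (a hypothetical nginx fragment; only the challenge path needs forwarding):

```nginx
# Forward ACME challenge requests to a hypothetical long-lived
# certbot container; everything else is served as usual.
location /.well-known/acme-challenge/ {
    proxy_pass http://certbot:80;
}
```

With a short-lived docker run --rm container there is no stable upstream to point proxy_pass at, which is the difficulty described above.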

Without the proxy, we must shut down the web server container while certbot is running, which would mean hard downtime for the web site’s users. If one merely changes the config, most web servers know how to do a graceful reload without refusing incoming connections, but if the web server container is stopped, no requests will be served until it is up again.
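The contrast between a graceful reload and a container restart can be sketched like this (webserver is a hypothetical container name running nginx):

```shell
# Graceful: nginx re-reads its config and swaps workers without
# dropping established or incoming connections.
docker exec webserver nginx -s reload

# Hard downtime: the whole container goes away, and nothing
# answers on port 80 until it is back up.
docker restart webserver
```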

So, as with the mail server, practical considerations forced me to give up on the “pure” solution and bake certbot into my web server container.
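A minimal sketch of what “baking certbot in” can look like, assuming an nginx base image and webroot-based renewal; the paths, schedule, and package names here are illustrative, not a recipe from the original setup:

```dockerfile
# Hypothetical: web server and certbot in one container, with
# renewal driven by cron inside that same container.
FROM nginx:stable
RUN apt-get update && apt-get install -y certbot cron \
    && rm -rf /var/lib/apt/lists/*
# Attempt renewal twice a day; on success, reload nginx so it
# picks up the new certificate without dropping connections.
RUN echo '0 3,15 * * * certbot renew --webroot -w /usr/share/nginx/html --deploy-hook "nginx -s reload"' \
    | crontab -
CMD cron && nginx -g 'daemon off;'
```

Since certbot now runs inside the container that owns port 80, the challenge files are served by the same nginx instance and no proxying or shutdown is needed.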

