Auto Deploy is a feature of VMware vSphere that enables booting VMware ESXi hosts directly from the network instead of a local storage device. The boot process starts via PXE, loading an agent which ultimately pulls ESXi images from the Auto Deploy HTTP service on your vCenter Server Appliance. VMware customers who use this deployment model tend to be on the larger side, so performance and scale are natural concerns.
The latest release of vSphere – version 6.5 – included quite a few enhancements to Auto Deploy, as well as to Host Profiles, that make this stateless option much more approachable and easier to operate. One nice improvement is the ability to easily configure reverse caching proxies to offload all of that HTTP traffic generated by booting hosts. It’s an optional architecture, but nice to have available when planning for large numbers of concurrent boots.
I wrote about configuring this feature on the vSphere Blog, but here I will explain how I made the Docker container running Nginx, so you can build your own, if you like, and feel more confident about the internals. Since the container is based on the official Nginx image, there is very little else that needs to be done.
Nginx Configuration File
This slim nginx.conf does nothing fancy, so there may be some room left to optimize, but it works on my machine™. Create a directory and save it as nginx.conf.template.simple wherever you’re going to run your docker client.
user www-data;
worker_processes 4;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    sendfile on;
    proxy_buffering on;
    proxy_cache_valid 200 1d;
    proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:15m max_size=1g inactive=24h;
    proxy_temp_path /var/www/cache/tmp;

    server {
        listen 80;

        location / {
            proxy_pass https://${AUTO_DEPLOY};
            keepalive_timeout 65;
            tcp_nodelay on;
            proxy_cache my-cache;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}

daemon off;
Dockerfile
The only other requirement to build your own image is to create a Dockerfile, in the same directory.
FROM nginx
COPY nginx.conf.template.simple /etc/nginx/nginx.conf.template
RUN mkdir -p /var/www/cache
CMD envsubst '$$AUTO_DEPLOY' < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf && nginx
Build and Run
Once the two files are in place, build it like any other docker image, for example:
$ docker build -t egray/auto_deploy_nginx .
Then, start up the container on a suitable Linux VM (Photon OS works perfectly) and try it out:
$ docker run --restart=always -p 5100:80 -d \
    -e AUTO_DEPLOY=10.197.34.22:6501 egray/auto_deploy_nginx
A nice thing about the official Nginx image is that the access and error logs are configured to go to stdout and stderr, so you can view them without much effort through the docker logs command.
Please note that this proof of concept is intended to get up and running quickly and, as a result, does not incorporate SSL certificates, so access to the proxy is not secured over HTTPS. For a production rollout, SSL may be an important consideration. Fortunately, this is easy to do: create certificate and key files, copy them into the container during the Docker build, and add the appropriate lines to the Nginx config file.
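As a sketch of those appropriate lines, an HTTPS server block might look like the following. The certificate and key paths are placeholders for files you would COPY into the image alongside the config template:

```nginx
# Hypothetical HTTPS listener; the cert and key paths are placeholders
# for files copied into the image during the Docker build.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/proxy.crt;
    ssl_certificate_key /etc/nginx/certs/proxy.key;

    location / {
        proxy_pass https://${AUTO_DEPLOY};
        proxy_cache my-cache;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

If you add this block, remember to publish the extra port when starting the container (for example, -p 5101:443 on the docker run command line).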
Auto Deploy is an innovative approach to infrastructure management, and this new enhanced reverse proxy ability can help with performance and scalability. If you try this out, let me know how it goes!