Server Distro-hopping with Docker

Linux users love to tinker. We love to experiment. We love to play. I have long been a distrohopper on my laptop, but it’s never been as easy on my home server. For my personal computer, I keep all my projects and shell config/scripts backed up on a git server, so usually all it takes before formatting with a new distro is to make sure I’ve pushed all of those repositories upstream. But on my home server, data and configuration are strewn across the system in /etc, /var, /usr, ~, and wherever else the package maintainers decided to stick things. This makes it basically impossible to quickly back up, reformat, and restore the system with a fresh installation.

That is, until containerization! Or VMs, I guess, but whatever. I’m new to this stuff, okay?

Docker all the things!

While I was experimenting with Docker for deploying a small Rails app, it occurred to me that containers would be awesome for running services on my home server. Containerization would give me better security, simplify installation and configuration, and let me wipe and reformat to my heart’s desire without having to reconfigure my various services. All I would have to do is get Docker installed, and my services would be back exactly as they were before! It would also let me run Arch on my server without the fear that constant updates would bork my services.

When your application is in a container, all the files it needs to run are within the container. This means that backing up your application and all its data is a simple matter of backing up the container’s files. This would make distrohopping on the server just as easy as on a desktop!
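
For instance, a backup of the OwnCloud setup described further down could look roughly like this. It’s only a sketch: the /containers and /mnt/backup-drive paths are placeholders for wherever you actually keep your compose files, host volumes, and backup disk.

#!/bin/sh
# Sketch of a backup: stop the service, archive its directory (compose file
# plus host volumes), then bring it back up. Paths are placeholders.
BACKUP_DEST=/mnt/backup-drive/containers
cd /containers/owncloud
docker-compose stop    # pause the app so nothing writes files mid-copy
tar -czf "$BACKUP_DEST/owncloud-$(date +%F).tar.gz" -C /containers owncloud
docker-compose start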

Using host volumes

While you certainly could back up the entire container filesystem, most applications keep their user data and configuration in a few specific directories. You can streamline the reload process by limiting your backups to just the files unique to your application instance. For example, the Ghost image that’s running this blog currently stores its application data in /var/lib/ghost. By mounting a host volume at that location in the container, you can persist your Ghost data on the host and not have to worry about the rest of the filesystem the container uses.
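
In practice that’s a single volume flag. A minimal sketch, using the official ghost image and a host directory of my choosing:

# Persist Ghost's data on the host; the rest of the container filesystem is
# disposable and comes back from the image. Host path is just an example.
docker run -d \
  --name blog \
  -v /containers/ghost/volumes/var/lib/ghost:/var/lib/ghost \
  ghost

Reinstall the OS, run the same command against the same host directory, and the blog comes back with all its posts and settings.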

Keeping track of it all

It’s a pain to try to remember the full set of options for running each application. Fortunately, docker-compose exists. It gives you a convenient way to configure your Docker containers so they always run the same way. I use it to mount volumes, set environment variables, and link containers together (such as linking a single database container to multiple application containers).
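
To see why this matters, here’s roughly what the OwnCloud container from the example further down would look like as a raw docker run command. The flags are an approximation of the compose file below, but the point stands: this is not something you want to retype from memory.

# Approximate docker run equivalent of the owncloud service defined later on.
docker run -d \
  --name owncloud_web_1 \
  --restart always \
  -p 80 -p 443 \
  --link mariadb_db_1:db \
  --link redis_cache_1:redis \
  -v /containers/owncloud/volumes/var/www/html/config:/var/www/html/config \
  -v /containers/owncloud/volumes/var/www/html/data:/var/www/html/data \
  -v /containers/owncloud/volumes/var/www/html/apps:/var/www/html/apps \
  -e VIRTUAL_HOST=my-owncloud-domain.example.com \
  owncloud

With docker-compose, all of that lives in a file under version control (well, minus the passwords), and bringing it up is one short command.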

Nginx

There’s one last project that absolutely rocks for tying it all together: nginx-proxy. This project runs nginx in a container and dynamically maintains a configuration for any Docker container running on the same host with the environment variable VIRTUAL_HOST defined, so you don’t have to worry about what IP address or host port your container is running on. This makes spinning containers up and down a cinch.
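
To give a feel for it, here’s a minimal sketch. The domain is a placeholder, and the single ghost container is just for illustration; my real setup goes through docker-compose as shown below.

# Run the proxy once, with read-only access to the Docker socket so it can
# watch containers come and go.
docker run -d -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

# Any container started with VIRTUAL_HOST set gets proxied automatically.
docker run -d -e VIRTUAL_HOST=blog.example.com ghost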

An example

I plan to host all of my actual configuration on GitHub at some point, but currently I have some passwords hard-coded into my docker-compose.yml files, so a polished-up example will have to suffice for now. This is (more or less) how I have OwnCloud configured with MariaDB and Redis, all behind nginx and all inside Docker containers:

# /containers/mariadb/docker-compose.yml
db:
  image: mariadb
  restart: always
  expose:
    - "3306" # available to linked containers
  ports:
    - "3306" # published to a random host port so I can connect to the DB from the host
  environment:
    MYSQL_ROOT_PASSWORD: *********
    MYSQL_USER: my_username
    MYSQL_PASSWORD: *********
  volumes:
    - ./volumes/var/lib/mysql:/var/lib/mysql
# /containers/redis/docker-compose.yml
cache:
  image: redis
  restart: always
  ports:
    - "6379"
# /containers/owncloud/docker-compose.yml
web:
  image: owncloud
  restart: always
  ports:
    - "80"
    - "443"
  external_links:
    - mariadb_db_1:db
    - redis_cache_1:redis
  volumes:
    - ./volumes/var/www/html/config:/var/www/html/config
    - ./volumes/var/www/html/data:/var/www/html/data
    - ./volumes/var/www/html/apps:/var/www/html/apps
  environment:
    VIRTUAL_HOST: my-owncloud-domain.example.com
  domainname: my-owncloud-domain.example.com
# /containers/nginx/docker-compose.yml
proxy:
  image: jwilder/nginx-proxy
  restart: always
  ports:
    - "80:80" # bind ports 80 and 443 directly to the host
    - "443:443"
  volumes:
    - ./volumes/etc/nginx/vhost.d:/etc/nginx/vhost.d:ro
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - /ssl/certs:/etc/nginx/certs
  environment:
    DEFAULT_HOST: my-default-domain.example.com

With those files in place, it’s a simple matter of starting up all those services with a few docker-compose up -d commands.
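
In case it isn’t obvious, those commands look something like this, assuming the directory layout from the comments above and starting the database and cache before the containers that link to them:

# Each service starts from its own directory; order matters because of the links.
cd /containers/mariadb  && docker-compose up -d
cd /containers/redis    && docker-compose up -d
cd /containers/owncloud && docker-compose up -d
cd /containers/nginx    && docker-compose up -d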

Conclusion

I’m excited about this setup. I’m running OwnCloud, Ghost, and GitLab on my server, and each one was quite easy to set up. I’m keeping my containers on a separate hard drive that I won’t format when I reinstall the OS, so I shouldn’t even need to back up and restore that data. Best of all, I’m running all of this on Arch, which as we all know is the most fun distro of them all :)
