the value of docker …
There was a fair amount of questioning of docker’s value, primarily in comparison to golden images or standard AMI builds. I think a couple of key points were missed. Here I submit my perception of docker’s benefits.
what is docker
Well, what is docker besides a “container”? I think of it as a tar ball. This is particularly noticeable when you export a docker image: it comes out as a tar ball. In essence, the contents of the exported image are just a bunch of files, which docker overlays on the underlying Linux OS via an overlay filesystem, in this case AUFS.
The container essentially uses this overlay and runs a command with a certain environment inside the docker image.
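This is easy to see from the command line. A quick sketch (the container name is just an example, and this assumes a running docker daemon with the busybox image available):

```shell
# Create a container from an image, then export its filesystem.
docker run --name example busybox true
docker export example > example.tar

# The export is a plain tar ball of the image's files: bin/, etc/, usr/, ...
tar -tf example.tar | head

# Clean up the example container.
docker rm example
```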
the developers don’t care about the OS
This was touched on during the show, but I feel there is more to it. I write code, but I also have a long history of being an operations guy. I’ve seen many cases where freshly developed code failed to run on production hosts. It’s the classic case of “it works on my machine”.
Therein lies the crux. When developing code, it’s sometimes necessary to install a gem, do some things with pip, or lean on an OS-provided library. Sure, everyone can be mandated to develop only on a sanctioned OS, but that feels counterproductive.
Using docker, the developer can build software on their choice of OS along with all the dependencies. Then at deployment time everything is exactly the same, since everything is in the docker image on which the container runs. The developer is both responsible for and able to ensure that all of the dependencies are met.
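As a sketch of what that looks like, here is a minimal Dockerfile; the base image, package names, and paths are all illustrative, not a prescription:

```dockerfile
# The developer picks the distro; it need not match the production host OS.
FROM ubuntu:12.04

# Every dependency is baked into the image rather than the host.
RUN apt-get update && apt-get install -y python-pip
RUN pip install flask

# The application ships together with its exact dependency set.
ADD . /app
CMD ["python", "/app/server.py"]
```

At deployment time, operations runs the image as-is; nothing installed above ever touches the host.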
why docker is positive for operations
Just as docker is a positive for developers, empowering them with the best OS for their container, it offers benefits to operations.
As an ops guy, I’ve often had to deal with requests for the newest version of python or a newer version of some other tool. Sometimes you get lucky and find packages in EPEL or can easily build them. Sometimes it becomes tricky with concerns over replacing or upgrading tools on which the underlying OS relies. At other times, the idea of gem, cpan and pip magic cluttering up a filesystem has made me shudder.
Docker again eliminates that concern. The underlying OS isn’t modified by anything that goes inside the container, and no extra work with virtualenv is required to get things into the right state.
Operations can keep deploying their chosen OS, and the developers can drop an image based on a different distro, with the right tools, on top of that. Operations can manage/patch the host OS as necessary. With docker the two don’t start conflicting with each other.
docker vs virtual machine images
In the show there was also talk about pushing new images around in a very lightweight fashion via an index. This works because the image is really just a filesystem, so it can be diffed easily against the previous version. Pushing a container image around then transfers only a diff rather than the entire image, which also makes it pretty easy to see what changed between versions.
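You can see the layers behind this with `docker history` (the image name here is hypothetical, and this assumes a docker daemon):

```shell
# Each build step becomes a layer; pushing an updated image only
# transfers the layers the remote index doesn't already have.
docker history my-app
```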
Docker containers also fire up much quicker than a virtual machine. A docker container is really just a process that sits in an isolated environment, so spawn time is pretty much identical to running the command directly. This makes it possible to spin up more instances very quickly, and to shut others down just as fast, without having to wait for OS boot time. This is likely not applicable to all environments and applications, but with the much lower overhead of a container versus a full OS it is worth some thought.
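A rough way to see this for yourself (assuming a docker daemon and the busybox image):

```shell
# Starting a container is just starting a process in isolation;
# this typically completes in well under a second, versus the
# tens of seconds a full OS boot would take.
time docker run busybox true
```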
where docker may not be ideal
Docker isn’t the best choice for data stores such as databases and filestores. Those generally need all the performance they can get, and running them inside a container would likely suffer due to the overlay filesystem. There are ways around this, though, by exposing the underlying host filesystem to the container. Nonetheless, for applications that aren’t ephemeral and have lots of data to hold on to, it still makes sense to just throw a (virtual) machine at them.
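The workaround looks something like this; the host path and image name are purely illustrative:

```shell
# Bind-mount a host directory into the container. Writes to /data
# go straight to the host filesystem, bypassing the overlay.
docker run -v /srv/pgdata:/data my-db-image
```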
how docker fits with configuration management tools
I think of docker images as ideally not requiring any configuration. That said, CM tools are useful in making sure things are right. While a Dockerfile is really just a shell script, there is nothing precluding running a puppet or chef command inside it. A great example of this is Deis, which essentially aims at being a Heroku-like platform by leveraging Chef and Docker.
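As a sketch of the idea, a Dockerfile build step can simply invoke the CM tool; the base image and manifest paths below are hypothetical:

```dockerfile
FROM ubuntu:12.04

# Nothing stops a Dockerfile step from running a CM tool;
# the image is configured at build time, not at deploy time.
RUN apt-get update && apt-get install -y puppet
ADD manifests /etc/puppet/manifests
RUN puppet apply /etc/puppet/manifests/site.pp
```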