Beware! Docker is a trap.
"Linux containers" is a new term for old technologies, namespaces and control groups, which have been used in production for perhaps more than a decade.
LXC is one way to access those kernel features in a boring way. OpenVZ and the newly hyped LXD are virtualization hypervisors based on Linux containers. systemd uses containers for all of its spawned services and can itself spawn containers (using systemd-nspawn). In other words, Docker did not invent containers.
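To see that namespaces are a plain kernel feature rather than anything Docker-specific, here is a minimal Linux-only sketch: every process, containerized or not, already lives inside a set of namespaces, visible as symlinks under /proc/self/ns.

```python
import os

# Every Linux process already belongs to a set of kernel namespaces.
# The kernel exposes them as symlinks under /proc/self/ns; container
# runtimes (Docker, rkt, runC, systemd-nspawn) merely create new ones.
for name in sorted(os.listdir("/proc/self/ns")):
    target = os.readlink(f"/proc/self/ns/{name}")
    print(name, "->", target)
```

Running this inside a container and on the host shows different namespace IDs for the same names, which is all the "container" boundary is.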
I was warned about this more than a year ago, but back then it was not that obvious.
What's wrong with Docker?
So many things, at every single level! But the real problem is that these problems are intentional, and they are not going to be fixed.
What are the alternatives?
CoreOS-backed appc and rkt (formerly Rocket), and the Linux Foundation's runC (under the Open Container Initiative).
But why? A Google point of view
Let me quote Tim Hockin, a Google engineer, from a Kubernetes project blog post. (Here Google is an honest third party: it would benefit from users renting Google cloud computing to run their code, whatever way they choose to ship it.)
Throughout this investigation Docker has made it clear that they’re not very open to ideas that deviate from their current course or that delegate control. This is very worrisome to us, since Kubernetes complements Docker and adds so much functionality, but exists outside of Docker itself.
The Docker church has its own plans and views, and anyone who does not agree with them will be crucified and burned. They try to control the world and do not accept delegation of control. Why are they doing this? Because they are building a platform and want to lock you into it. Docker is not dedicated to open source and free software: their Universal Control Plane, for example, is proprietary software, and they do not use a support-subscription model as RedHat and CoreOS do. By locking you into their platform they earn money, so they will not accept any enhancement that makes them lose or delegate control.
Just claims, aren't they?
Google has no reason to lie in this context, and that blog article cites specific issues raised by network vendors (not just Google):
This and other issues have been brought up to Docker developers by network vendors, and are usually closed as "working as intended" (libnetwork #139, libnetwork #486, libnetwork #514, libnetwork #865, docker #18864), even though they make non-Docker third-party systems more difficult to integrate with.
So Docker knows about these problems, because they engineered them to lock you in. They will not accept solutions. "Patches are not welcome."
Docker problems. A top Docker contributor's point of view
With respect to the previous Google blog post, and besides the previously mentioned libnetwork issues, we have an intentionally broken DNS:
Docker's networking model makes a lot of assumptions that aren’t valid for Kubernetes. In docker versions 1.8 and 1.9, it includes a fundamentally flawed implementation of "discovery" that results in corrupted /etc/hosts files in containers (docker #17190) — and this cannot be easily turned off. In version 1.10 Docker is planning to bundle a new DNS server, and it’s unclear whether this will be able to be turned off. Container-level naming is not the right abstraction for Kubernetes — we already have our own concepts of service naming, discovery, and binding, and we already have our own DNS schema and server (based on the well-established SkyDNS). The bundled solutions are not sufficient for our needs but are not disableable.
Those "assumptions" are intentional. They won't accept patches. You will find intentionally flawed model at any level in any component of docker.
Let's get back to the announcement of rkt (the Docker alternative) by one of the top contributors to Docker. In my humble opinion, the most important thing in it is the removal of the "manifesto" from Docker's README:
The Docker repository included a manifesto of what a standard container should be. This was a rally cry to the industry, and we quickly followed. Brandon Philips, co-founder/CTO of CoreOS, became a top Docker contributor, and now serves on the Docker governance board. CoreOS is one of the most widely used platforms for Docker containers, and ships releases to the community hours after they happen upstream. We thought Docker would become a simple unit that we can all agree on.
That's why they say:
Why not just fork Docker?
You can read comments about the CoreOS announcement here:
From a security and composability perspective, the Docker process model - where everything runs through a central daemon - is fundamentally flawed. To “fix” Docker would essentially mean a rewrite of the project, while inheriting all the baggage of the existing implementation.
More problems. RedHat's point of view
Fedora's upstreaming policy is why I love Fedora. The policy means they can't ship changes that are not approved by the upstream developer, and they need to include links to upstream tickets for any patch. By inspecting the Docker README in their package you will see the following RedHat patches that were not accepted by the Docker community, each followed by my phrasing of what it means in less technical terms:
- Red Hat Support wants to know the version of the rpm package that docker is running. .. to be reported in "docker info" Docker upstream was not interested in this patch.
- In other words, it seems Docker doesn't want you to get support from a third party (RedHat, for example).
- Current docker tests run totally on Ubuntu. .. run rhel7 test on rhel7 and fedora tests on fedora ..etc.
- In other words, Docker does not want to validate your build of Docker on your own distro.
- This patch allows the subscriptions to work inside of the container. Docker thought this was too RHEL specific so told us to carry a patch.
- Registry-related PRs: #11991, #10411, #14258
- This patch allows users to customize the default registries available. We believe this closely aligns with the way yum and apt currently work. This patch also allows customers to block images from registries. Some customers do not want software to be accidentally pulled and run on a machine. Some customers also have no access to the internet and want to setup private registries to handle their content.
- They are saying: you are not supposed to use any external registry; we will make your life harder if you try, we will make Docker display wrong results when you search, and we will even send your credentials (passwords) for one registry to the other registry.
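What the rejected registry patch enables can be sketched in a few lines: a configurable list of default registries plus a block list, similar in spirit to how yum and apt repositories are configured. The config format and function below are purely illustrative assumptions, not Docker's or RedHat's actual code.

```python
# Hypothetical sketch of configurable default registries with a
# block list, the behavior RedHat's rejected patch adds. Names and
# structure are invented for illustration.
CONFIG = {
    "registries": ["registry.internal.example.com", "docker.io"],
    "blocked":    ["docker.io"],   # e.g. an air-gapped or policy-bound site
}

def resolve_image(name, config):
    """Return (registry, image) pairs to try, skipping blocked registries."""
    candidates = []
    for reg in config["registries"]:
        if reg in config["blocked"]:
            continue                      # policy: never pull from here
        candidates.append((reg, name))
    return candidates

print(resolve_image("rhel7/rhel", CONFIG))
# only the internal registry remains once docker.io is blocked
```

With such a mechanism, a site with no internet access can point everything at a private registry, and accidental pulls from the public hub simply cannot happen.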
Aside from rejected patches and pull requests, you can follow several "fundamental flaws", for example in a blog post by the SELinux guru:
I have a co-worker who said: "Docker is about running random code downloaded from the Internet and running it as root."
which was a trigger for extending sVirt to support Docker, as a means of isolating Docker containers with SELinux.
- Docker runs everything as a child process of its privileged daemon; if this daemon dies (e.g. while updating the package, or as a result of a bug) everything collapses.
- reasoning: Docker wants everything to be managed by itself. rkt and runC can run as plain processes without any daemon.
- The Docker daemon runs as root and does not drop its privileges, and every container runs as root, whereas runC and rkt can be run as any user.
- Docker pulls random code from its registry, and this can't be disabled.
- reasoning: because Docker wants you to host your code there.
- Docker has its own planned, flawed discovery service that can't be turned off.
- Docker has its own planned (now existing) multi-host networking, and vendors of SDN (software-defined networking) are not welcome.
- Docker has flawed security at many levels, like trusting signed headers without checking the payload (a replay attack).
- Allowing anyone to talk to the Docker socket is equivalent to giving them root access. The Docker socket is flawed.
- Docker layers (on Debian and Ubuntu) rely on AUFS, which was rejected by the Linux kernel; they impose a limit on the number of layers, and there is no way to merge layers.
- Docker recommends Debian and uses Ubuntu for unit tests; others are not welcome.
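The "socket access equals root" point is easy to make concrete. The Docker Engine HTTP API's container-create endpoint accepts a bind-mount list; anyone who can POST to the socket can ask for the host's root filesystem inside a container they control. The sketch below only constructs such a request payload (it does not talk to any daemon), and the exact field shape is an assumption based on that API:

```python
import json

# Sketch of why docker.sock access is root-equivalent: this is the
# kind of body an attacker could POST to /containers/create. With the
# host's / bind-mounted at /host, chroot gives a root shell on the host.
# Payload shape assumed from the Docker Engine API; nothing is sent here.
payload = {
    "Image": "busybox",
    "Cmd": ["chroot", "/host", "sh"],      # a root shell on the host
    "HostConfig": {
        "Binds": ["/:/host"],              # mount host / at /host
        "Privileged": True,
    },
}
print(json.dumps(payload, indent=2))
```

This is why handing the socket to an unprivileged user, or mounting it into a container, silently grants full control of the machine.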