Why CoreOS and Docker: What You Need To Know

What is CoreOS

CoreOS is an extremely lightweight, stripped-down Linux distribution containing none of the extras associated with Ubuntu and the like. CoreOS is designed to let you build your stack the way you want it, without having to worry about dependencies or resolve package conflicts.

It is meant to provide the infrastructure behind clustered deployments, and many companies such as Facebook and Google use CoreOS to build the infrastructure behind their scalable and reliable services.

If you want easy application deployment, reliability, and insane scalability, CoreOS delivers.

Why CoreOS and Docker

CoreOS and Docker make provisioning Linux containers very, very easy. If you’re not sure what a Linux container is or how it differs from a virtual machine, check out our earlier article on virtualization. Containers provide a completely isolated environment for your applications and don’t require a hypervisor to chew up resources, meaning you can provision more containers and worry less about resource allocation.

Isolation means you don’t have to worry about extremely frustrating conflicts between applications, dependencies and other packages that can use up valuable time, and can instead focus on building infrastructure and deploying services.

Rather than having to upgrade or change packages to run multiple services on one server, each service operates in an environment that caters strictly to that service.

CoreOS comes with etcd, a highly available, distributed key-value store for configuration management and service discovery, as well as fleet, which uses etcd to provide a distributed init system.

These two services combine to provide a systemd that operates at the cluster level instead of on a per-host basis. When you combine this with Docker, you get a platform that enables quick and easy large-scale server deployments, something few other solutions can match.

Additionally, CoreOS allows you to do some useful things with containers and their applications while using fewer resources than other standard Linux distributions. You can, for example, ensure that backup images of containers are stored in separate locations in case of data loss, or set up self-configuring applications using CoreOS’ etcd tool. This allows applications to recover if any systems go offline, reducing or eliminating service outages.

Hosting Applications on Docker

Docker is packaged with CoreOS, and creates the Linux containers where your applications will live and operate. Each individual service should have its own container, and applications are started with fleet and connected using etcd.

Containers start in just a few milliseconds, and as soon as one is running it can use etcd to tell your proxy it is ready to receive traffic. Rather than running Chef on each individual VM, you can simply create a container and deploy it to as many CoreOS hosts as necessary, saving resources and allowing easy container management.
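As a rough sketch, that flow might look like the following (the host address, service name, and image are hypothetical, and the commands assume a CoreOS host with Docker and the etcd v2-era etcdctl available):

```shell
# Start the service container; startup takes milliseconds once the image is cached.
docker run -d --name web -p 80:80 nginx

# Announce readiness in etcd so a proxy can start routing traffic.
# The TTL makes the entry expire automatically if this host stops refreshing it.
etcdctl set --ttl 60 /services/web/host1 '10.0.0.1:80'
```

In a real deployment, a small "sidekick" unit would typically re-run the etcdctl line in a loop to keep the TTL fresh while the container is healthy.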

Why use full virtual machines and take up huge amounts of resources if you don’t have to?

Discovery and Configuration Management With etcd

etcd enables you to easily locate services and applications in your environment, and notifies you when things change. Discovery is a must-have feature of a complex clustered environment, as there is no practical way to track services and ensure availability without it.

With CoreOS and Docker, each application you run can reach the local etcd instance, and each CoreOS host provides an endpoint etcd can use for discovery or for recording configuration. Because etcd is replicated across your entire environment, any change you make is reflected across the whole cluster almost instantly. Discovery and configuration management with etcd give you the ability to add hosts, containers, and applications, or scale your services, quickly and effortlessly.
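For illustration, a basic discovery workflow with the etcdctl client might look like this (the keys and values here are hypothetical):

```shell
# Record where an instance of a service lives:
etcdctl set /services/webapp/instance1 '10.0.0.5:8080'

# Any other host in the cluster can look it up:
etcdctl get /services/webapp/instance1

# Block until the value changes, to react to configuration updates:
etcdctl watch /services/webapp/instance1
```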

Using fleet with systemd to Manage Clusters

systemd, a system and service management daemon, forms the core of CoreOS’ init system. systemd offers superior performance, especially at boot time, and a modern logging journal, making it the best fit for CoreOS’ distributed init system, fleet. Service dependency definition, launch-order scheduling, and service conflict resolution are a few of the things systemd brings to the table, and most systems administrators are probably already intimately familiar with it.

Fleet works by aggregating the systemd instances of each machine into a single cluster-wide init system, scheduling and starting services based on their configuration parameters. This means that if a machine fails, fleet can restart its services and applications on any other eligible host in your cluster.
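In practice, scheduling a service across the cluster is just a matter of handing fleet an ordinary systemd unit file (the unit name here is hypothetical, and the commands assume a working fleet cluster):

```shell
# Register the unit with the cluster and start it on an eligible machine:
fleetctl submit myapp.service
fleetctl start myapp.service

# See which machine fleet scheduled it on, and its current state:
fleetctl list-units
```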

High-availability services can also be managed by fleet, since it can ensure that a service’s containers are placed on different hosts or in different regions. No longer do you have to worry about losing a service or application because one or a few servers are down: fleet will simply restart your application in another container, and fleet’s store of configuration parameters means only hosts with no conflicting units or unmet prerequisites will be used to restart failed services.
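A fleet unit for such a service is an ordinary systemd unit plus an [X-Fleet] section; the Conflicts directive in this sketch keeps instances of the templated service off the same host (the service and image names are hypothetical):

```ini
# myapp@.service -- a fleet unit template; instances are myapp@1.service, myapp@2.service, ...
[Unit]
Description=My web app (instance %i)
After=docker.service
Requires=docker.service

[Service]
# Clean up any stale container from a previous run; "-" ignores failure.
ExecStartPre=-/usr/bin/docker rm -f myapp-%i
ExecStart=/usr/bin/docker run --name myapp-%i -p 80:80 myorg/myapp
ExecStop=/usr/bin/docker stop myapp-%i

[X-Fleet]
# Never schedule two instances of this template on the same machine.
Conflicts=myapp@*.service
```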

CoreOS and Security

Security is at the forefront of every IT professional’s mind, especially when it comes to infrastructure deployment. The CoreOS development team takes a reassuring approach to security, providing reliable, automatic updates to keep your systems safe. CoreOS uses both SSL and a metadata verification process to check patch integrity before applying updates.

It is still recommended that you run a firewall and block all disallowed traffic, but CoreOS does give you the ability to expose ports only on the containers that need them, and etcd even lets you automate this process for extra peace of mind.
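For example, when publishing a container you expose only the port the service actually needs, and everything else stays unreachable from outside the host (the image name is hypothetical):

```shell
# Only port 8080 is published; no other container port is reachable externally.
docker run -d --name api -p 8080:8080 myorg/api
```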


Whether or not you have a large clustered environment, CoreOS and Docker container provisioning can streamline the deployment of new applications and services. CoreOS will give you amazing performance with none of the resource-hogging services packaged with many other Linux distributions, while still providing the core operating system services you need.

Combining CoreOS with Docker will make your next server deployment far less work-intensive, and let you concentrate on the applications and services your customers use rather than on building and maintaining the infrastructure behind them.
