In the IT world, a software product generally goes through a software supply chain: all the processes and methods used to deliver that product to customers. It is a pipeline that typically starts with planning and development, passes through a variable set of other stages such as testing, building and quality assurance, and ends with deployment. Every stage requires libraries, configuration files and possibly other software, and it is not easy to reproduce the same environment at each step of the pipeline. Those differences can cause different behaviours: the software may run well during testing but crash in production.
Docker is a technology that lets you build, run and ship applications with their own environment by isolating them using the underlying functionalities of the Operating System. Each application has its own configuration files and dependencies and runs without interfering with other parts of the system. You can think of it as a new way to build software: we encapsulate all the components of an application in a standard box. Different applications have different content inside, but we want every box to be used the same way regardless of its content, so we add standard buttons on the outside (like run and stop) to operate the application inside. In the end, we get an encapsulated piece of software (a box) that can be moved and used the same way as every other one.
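As a minimal illustration of those standard buttons, the Docker CLI starts and stops any container the same way, whatever the image contains (a sketch assuming Docker is installed; `nginx` is only an example image):

```shell
# Every "box" is operated through the same standard interface,
# regardless of what is inside it. (Assumes Docker is installed;
# the nginx image is only an example.)
if command -v docker >/dev/null 2>&1; then
  docker run -d --name web nginx   # the "run" button: start the boxed app
  docker ps --filter name=web      # the app runs isolated from the rest
  docker stop web                  # the "stop" button: same for any app
  docker rm web                    # discard the container; the image remains
fi
DEMO_DONE=yes
```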
Of course, there are plenty of container technologies out there, but Docker is (in my view) the most complete: you will not need to learn anything else to build or scale your applications, since it covers the whole stack of tools we need to run, build and ship containerized applications at scale. Among the tools Docker provides:
The container runtime is the part responsible for running our containers. From a user's perspective it is completely transparent: you only need to talk to Docker, and it will be responsible for giving orders to containerd.
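You can observe that delegation by starting a container and looking for the containerd processes Docker hands it off to (a sketch assuming Docker is installed; the exact process names vary between versions):

```shell
# Docker is the front end; containerd does the actual running.
# (Assumes Docker is installed; output depends on the installed version.)
if command -v docker >/dev/null 2>&1; then
  docker run -d --name demo alpine sleep 30   # ask Docker to run a container
  ps -ef | grep [c]ontainerd                  # the runtime Docker delegates to
  docker rm -f demo                           # clean up
fi
RUNTIME_DEMO=yes
```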
We said that in the container world a piece of software is transformed into a kind of box; in Docker terminology, it is an image. So how do we move those images around? We use Docker registries. Different registry products offer different sets of features, but all of them let you store and download images. A registry is a place where we can publish images to be used later by our team, or by other users who may find them useful.
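In practice, moving an image around is a pull, tag, push cycle (a sketch assuming Docker is installed; `registry.example.com` is a hypothetical address standing in for whatever registry your team uses):

```shell
# Download an image, retarget it at another registry, and upload it.
# (Assumes Docker is installed and you are logged in to the target
# registry; registry.example.com is a hypothetical address.)
if command -v docker >/dev/null 2>&1; then
  docker pull alpine:3.19                                       # fetch from the default registry
  docker tag alpine:3.19 registry.example.com/team/alpine:3.19  # point the image at our registry
  docker push registry.example.com/team/alpine:3.19             # publish it for the team
fi
REGISTRY_DEMO=yes
```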
The default registry that Docker uses is Docker Hub, where you can find interesting images built by the community or by Docker itself.
Docker Swarm is the default orchestrator that comes prepackaged with Docker, so you won't need any additional installation to use it. It lets you manage a cluster of Docker Engines, which we call a swarm. In short, you connect multiple nodes running Docker to form a cluster; you can then deploy applications to the swarm, and it takes care of assigning tasks to the different nodes and keeping your application highly available.
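The basic swarm workflow can be sketched in a few commands (assumes Docker is installed; the service name, port and replica count are arbitrary examples):

```shell
# Turn a single engine into a one-node swarm, deploy a replicated
# service, then tear everything down. (Assumes Docker is installed;
# names and counts are arbitrary examples.)
if command -v docker >/dev/null 2>&1; then
  docker swarm init                              # this node becomes the manager
  docker service create --name web --replicas 3 -p 8080:80 nginx
  docker service ls                              # swarm spreads the 3 tasks over the nodes
  docker service rm web
  docker swarm leave --force                     # dismantle the swarm
fi
SWARM_DEMO=yes
```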
Kubernetes is another orchestrator that can be used with Docker Enterprise Edition.
With the Docker toolset, you will be focusing more on innovative solutions than on managing infrastructure and applications.
Start learning about Docker today without the need to install it: https://training.play-with-docker.com