Are you wondering what Docker can do for you, or how to get started with using it? A basic understanding of Docker is key to fast, efficient deployments on the Divio platform. This article aims to give you the basics to get you up and running quickly.
Michael Nicholson
Cloud Solution Engineer
Docker is a powerful containerisation platform that has revolutionised how developers create, deploy, and manage applications. By encapsulating applications in containers, Docker ensures consistency across multiple environments, making both the development and deployment processes more streamlined and predictable. This has led to Docker becoming a cornerstone in the world of software development.
This post delves into Docker's pivotal role in modern development, particularly for deployments on the Divio platform.
Understanding the “why” behind Docker's widespread adoption is key, especially for Divio platform users. Let's explore Docker's significance, its common use cases, and its necessity for Divio deployments.
Docker, with its containerisation technology, brings unparalleled advantages to software development, not least where application security is concerned. It enables applications to run in isolated environments, called containers, providing consistent behaviour across different platforms. This isolation not only enhances security but also ensures that applications are lightweight and portable.
This consistency and portability also makes it much easier to build CI/CD pipelines for your application, a feature Divio makes good use of with our automated deployments.
Docker supports a wide variety of use cases. For the new user though, the use case is likely to be one or more of the following:
Implementing a microservices architecture by enabling smaller, manageable chunks of functionality to be developed and deployed independently with little to zero downtime.
Allowing local development to use the same backing services (e.g. Postgres, Redis, a message broker) as a production environment without costly installs.
As previously mentioned, Docker also excels in Continuous Integration and Continuous Deployment (CI/CD), where its ability to create and manage containers quickly becomes invaluable.
For Divio platform users, Docker is more than a tool; it's a requirement. Building on Docker ensures that applications deployed on the Divio platform benefit from Docker's efficient management, security, and consistency, providing a seamless development and deployment experience.
The Docker Command-Line Interface (CLI) is a fundamental tool for interacting with Docker. Familiarity with the command line can help developers understand the basics of Docker and fully leverage its potential.
Basic commands include creating, starting, and stopping containers, as well as managing images and networks. These commands form the backbone of Docker operations, providing the control necessary to manage application environments effectively.
While the CLI offers a text-based interface, Docker Desktop provides a graphical interface, making it more accessible to those who prefer a visual approach. Portainer, another tool, offers a web-based UI for managing Docker environments, simplifying container orchestration for users of all skill levels.
Command format: docker create [OPTIONS] IMAGE [COMMAND] [ARG...]
Example:
docker create --tty --interactive ubuntu /bin/bash
creates a new Ubuntu container with interactive terminal access. This is a very common usage, and extremely useful for entering a container to run a few commands, examine the file system, and so on.
For convenience, the --tty --interactive flags can also be shortened to -it.
The create command can be useful when you need to create a container ahead of time, without running it, but you are more likely to use the docker container run command.
Note: If you do not already have the Ubuntu image locally, then it will be downloaded for you. All CLI commands will do this if they require an image that does not exist locally.
Command format: docker container run [OPTIONS] IMAGE [COMMAND] [ARG...]
Example:
docker container run -it ubuntu /bin/bash
will start an interactive Ubuntu bash shell (see the note on -it above).
This perhaps gives some idea of the power of Docker. If an OS provides an image on Docker Hub, you can run that OS locally, regardless of the actual OS on your computer.
Command format:
docker start [OPTIONS] CONTAINER [CONTAINER...]
docker stop [OPTIONS] CONTAINER [CONTAINER...]
Examples:
docker start my_custom_container
docker stop my_custom_container
These commands allow you to start and stop containers that already exist on your system.
Command format: docker container rm [OPTIONS] CONTAINER [CONTAINER...]
Cleaning up old containers is good practice as they can take up a lot of space on your system. See also the section on “Efficient resource management”.
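For example, to remove the container created earlier (the container name here follows the examples above; the --force flag stops a running container before removing it):

```shell
# Remove a stopped container by name
docker container rm my_custom_container

# Force-remove a container even if it is still running
docker container rm --force my_custom_container
```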
Command format: docker [image] pull [OPTIONS] NAME[:TAG|@DIGEST]
Examples:
docker pull ubuntu:latest
retrieves the latest Ubuntu image. This may give you a different image each time you use it, if new versions have been released.
docker pull ubuntu:jammy-20240111
pulls that specific Ubuntu image.
Pinning versions rather than using latest is usually a good idea. Otherwise, changes can be introduced each time a pull is done without you being aware of them.
Command format: docker [image] build [OPTIONS] PATH | URL | -
Example:
docker build -t my_image .
builds an image from a Dockerfile in the current directory and names it my_image.
docker build -t my_image:feb2024 .
builds the same image, calls it my_image, but also tags it as feb2024.
Command format: docker image ls [OPTIONS] [REPOSITORY[:TAG]]
Example:
docker image ls
lists all available images.
Command format: docker image rm [OPTIONS] IMAGE [IMAGE...]
Example:
docker image rm my_image
removes my_image.
Docker uses networks for communication both between containers and with external services. Containers must be connected to a Docker network. In simpler setups this is managed for you via a default bridge network, so you will not need to give much consideration to manually creating and managing networks.
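When you do need containers to talk to each other, a user-defined bridge network lets them resolve one another by name. As a sketch (the network and container names here are illustrative):

```shell
# Create a user-defined bridge network
docker network create my_app_net

# Attach two containers to it; each can reach the other by container name
docker run --detach --network my_app_net --name db postgres
docker run -it --network my_app_net --name app ubuntu /bin/bash
```

Inside the app container, the database is then reachable at the hostname db.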
Because containers are transient, and any data stored within a container will be lost whenever that container is recreated, Docker offers the concept of volumes. Mapping a volume into a container means that even when you rebuild your container, the important data will not be lost. This is used for things like file storage, databases etc.
Volume mapping can be done when running a container by setting the --volume flag. Creating named volumes can be good practice, so it is easy to identify which volume is for Postgres, which is for Redis, and so on.
Volume creation:
Command format: docker volume create [OPTIONS] [VOLUME]
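Putting the two together, a named volume can be created and then mounted into a container. The volume name below is illustrative; the mount path is the standard Postgres data directory:

```shell
# Create a named volume for database data
docker volume create pg_data

# Mount it into a Postgres container; the data survives container recreation
docker run --detach --volume pg_data:/var/lib/postgresql/data postgres
```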
Adopting best practices in Docker management is important for efficiency and security, as well as for keeping the demands on your system under control.
An efficient Dockerfile can reduce both build time and container size. Reducing build time matters when the container is used in CI/CD processes, and smaller images both speed up deployments and save disk space.
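One common optimisation is a multi-stage build, which keeps build tooling out of the final image. The sketch below assumes a Python application with a requirements.txt and a main.py; adapt the stages to your own stack. It also runs the application as a non-root user, which ties in with the security points below:

```dockerfile
# Build stage: install dependencies with the full toolchain available
FROM python:3.12 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Final stage: a slim base image with only what the app needs
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
# Run as a non-root user rather than root
RUN useradd --create-home appuser
USER appuser
CMD ["python", "main.py"]
```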
Security is a large topic, but there are a few basic things to bear in mind when Dockerising your application.
Regularly update images - Keep your Docker images updated to ensure you have the latest security patches, both for base images and any packages you may be using.
Manage user access - Control who can access and modify your Docker images and containers, and use a non-root user to run the application within your image.
Do not include secrets - Ensure there are no build secrets, API keys, etc. baked into the Dockerfile or the resulting image.
Docker can use a lot of system resources if left unmonitored.
Pruning regularly using docker system prune cleans up stopped containers, dangling images, unused networks, and build cache (see the docs for the --all and --volumes flags, which extend the cleanup to all unused images and to volumes).
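For example:

```shell
# Remove stopped containers, dangling images, unused networks and build cache
docker system prune

# A more aggressive cleanup: also remove all unused images and unused volumes
docker system prune --all --volumes
```

Both commands prompt for confirmation before deleting anything, so they are safe to try.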
Perform regular checks on the performance of Docker containers to identify inefficiencies, and monitor CI/CD usage (see the section above on optimising Dockerfiles).
This is a slightly more advanced topic. Tools like Kubernetes or k3s can help manage complex container setups, ensuring they run smoothly and efficiently.
Docker's role in the Divio ecosystem is pivotal. It's not just a tool but a foundational element that enhances the efficiency, consistency, and security of your applications. Whether you're a seasoned developer or new to the world of containerisation, Docker offers a streamlined, powerful platform for your development needs.
Divio provides built-in solutions for deploying Dockerised applications onto our platform. Check out the documentation to get started, or get in touch if you have more specific or complex needs and let us help you.
Learning about application or cloud development? We have a whole series of articles on our blog breaking down common concepts in a straightforward, accessible way. Check out our beginner’s guide to documentation, what is cloud security and what is a service level agreement as starting points.