Docker is a platform for developers and sysadmins to develop, deploy, and run applications with containers. A Docker image is a read-only template that packages software and its dependencies, together with the instructions needed to create a container, into a standardized unit that runs on the Docker platform.
In this post, we step through some of the best practices and common pitfalls we encountered while developing our first Dockerfile for Tomcat.
Let’s Get Started
Refreshing basic concepts about Docker Images and Dockerfiles
What are a Dockerfile and a Docker Image?
A Dockerfile is simply a text file that contains the set of instructions used to build a Docker image. A Docker image is a template from which running containers are started.
What is Docker Layer and Docker build cache?
Each layer in a Docker image corresponds to an instruction in its Dockerfile. Layers are also referred to as “build steps.”
Every time you build a Docker image, each build step is cached. Layers that have not changed are reused on subsequent rebuilds, which significantly improves build time.
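As an illustration of how the cache behaves, consider ordering instructions so that the slow, rarely-changing steps come first (the file names below are hypothetical, assuming a Maven project):

```dockerfile
# Copy only the build descriptor first, so the dependency layer
# is rebuilt only when pom.xml changes
COPY pom.xml .
RUN mvn -B dependency:go-offline

# Source changes invalidate only the layers from here down;
# everything above is served from the build cache
COPY src ./src
RUN mvn -B package -DskipTests
```

With this ordering, editing a source file skips the expensive dependency download entirely on rebuild.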
Concerns when building images
- Consistency: Writing our Dockerfiles consistently makes our images easier to maintain and reduces the time spent developing new releases of the image.
- Build Time: Especially when our builds are integrated into a Continuous Integration pipeline (CI), reducing the build time can significantly reduce our apps’ development cost.
- Image Size: Reduce the size of our images to improve the security, performance, efficiency, and maintainability of our containers.
- Security: Critical for production environments, securing our containers is very important to protect our applications from external threats and attacks.
You can find more in Docker’s official best-practices documentation.
Prerequisites: Docker Linter
A linter helps us detect syntax errors in our Dockerfile and gives suggestions based on best practices. We used OT-Dockerlinter, developed by our own team.
Case: Setting Up a Java Application on Tomcat Using Docker from Scratch
Being specific about your base image tag:
Maintained images usually have different tags, used to specify their different flavors. For our application, the maven image is used as the base image. Maven publishes multiple versions, including a slim flavor that contains only the minimal packages needed to run Maven (see its list of supported tags).
Following this approach, we used MVN_JDK_VERSION=3.6.0-jdk-11-slim to get the minimal set of packages.
This small but important change reduces the size of the whole image. And don’t worry: since we take the tag as a build argument, we can use any flavor as our requirements change.
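A sketch of how the tag can be parameterized with a build argument (the default value follows the version used in this post):

```dockerfile
# Default to the slim Maven flavor; override at build time if needed
ARG MVN_JDK_VERSION=3.6.0-jdk-11-slim
FROM maven:${MVN_JDK_VERSION} AS build
```

At build time, another flavor can be substituted without touching the Dockerfile, e.g. `docker build --build-arg MVN_JDK_VERSION=3.6.0-jdk-11 .`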
Using multi-stage builds to separate build and runtime environments:
To continue improving the efficiency, readability, and size of the image, we split the process into different stages:
- Build Stage: building the application from source code.
- Running Stage: copying the artifacts needed into the final image and running the application.
This is a short summary of what we have done :
Using maven:3.6.0-jdk-11-slim to build our application, we added AS build to name our first stage “build”. Then we used COPY --from=build to copy artifacts from that stage, so that only the artifacts needed to run the application end up in the minimal final image. This approach is extremely effective when building images for compiled applications.
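A minimal sketch of the two stages, assuming a standard Maven project layout and a WAR artifact (the paths, artifact name, and Tomcat tag are illustrative, not the exact ones from our build):

```dockerfile
# --- Build stage: compile the application with Maven ---
ARG MVN_JDK_VERSION=3.6.0-jdk-11-slim
FROM maven:${MVN_JDK_VERSION} AS build
WORKDIR /build
COPY pom.xml .
RUN mvn -B dependency:go-offline
COPY src ./src
RUN mvn -B package -DskipTests

# --- Running stage: only the artifact is carried over ---
FROM tomcat:9.0-jdk11-openjdk-slim
COPY --from=build /build/target/*.war /usr/local/tomcat/webapps/ROOT.war
```

Everything installed in the build stage (Maven, the JDK, the dependency cache) is discarded; the final image contains only Tomcat and the WAR.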
Using the Non-Root Approach to Enforce Container Security :
Non-root containers add an extra layer of security. To put this in perspective, ask yourself:
“Would we run any process or application as root on our own server?” The answer would be no, right? So why would we do so in our containers?
Running our containers as a non-root user prevents a compromised process from gaining root privileges on the container host. If we don’t, the repercussions can be severe once the application is deployed and made public: attackers can manipulate not just the application but the entire filesystem of the Docker container.
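A minimal sketch of the non-root approach, assuming Tomcat is installed under /opt/tomcat (the user and group names are our choice, not mandated by Tomcat):

```dockerfile
# Create an unprivileged user and hand it ownership of the install dir
RUN groupadd -r tomcat \
 && useradd -r -g tomcat -d /opt/tomcat -s /sbin/nologin tomcat \
 && chown -R tomcat:tomcat /opt/tomcat

# All subsequent instructions, and the container itself, run as tomcat
USER tomcat
```

Note that a non-root user cannot bind ports below 1024, which is one reason Tomcat’s default port 8080 works well here.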
Setting the WORKDIR instruction:
The default value for the working directory is /. However, unless we build FROM scratch, the base image we are using has likely already set it. It is good practice to set the WORKDIR instruction to match our application’s characteristics.
Our application server is installed under the directory /opt. Therefore, it makes sense to point the working directory there:
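In the Dockerfile this is a one-line change:

```dockerfile
# Subsequent RUN, COPY, CMD, and ENTRYPOINT instructions
# resolve relative paths against /opt
WORKDIR /opt
```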
And not only here: we have used WORKDIR in several other places in the Dockerfile as well.
Say Hello to BuildKit
Because our Dockerfile is a multi-stage build, BuildKit can bring in concurrency: it analyzes the Dockerfile, creates a graph of the dependencies between build steps, and uses it to determine which steps can be skipped, which can be executed in parallel, and which must run sequentially. See the BuildKit documentation to read about more of its features.
So while Maven builds the artifact in the first stage, our second stage, which downloads the Tomcat version given by the user’s input, can proceed in parallel.
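BuildKit can be enabled per build with an environment variable (Docker 18.09 or later; the image tag is illustrative):

```shell
# Enable BuildKit for a single build
DOCKER_BUILDKIT=1 docker build -t myapp:latest .
```

It can also be enabled permanently by adding `{ "features": { "buildkit": true } }` to /etc/docker/daemon.json and restarting the Docker daemon.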
Dockle – Container Image Linter for Security
Dockle performs multiple CIS Benchmark checks, as well as more generic checks that are considered recommended best practices. The Docker CIS Benchmark focuses on ensuring Docker container runtimes are configured as securely as possible, covering areas such as host configuration, Docker daemon configuration, container images, and build file configuration. There are other tools for similar security use cases, such as Trivy and DockerSlim; check their documentation for the latest CIS practice checks.
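Scanning a built image with Dockle is a single command (the image name here is illustrative):

```shell
# Scan the image; a non-zero exit code on findings makes this
# easy to wire into a CI pipeline as a gating step
dockle --exit-code 1 myapp:latest
```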
These were some of the practices we followed, but they don’t end here. There are many more, suggested by the Docker community, that can bring out the best in your Docker images, for example:
- Order the steps in the Dockerfile from least to most frequently changing content.
- Only install what you need. For example, use the `--no-install-recommends` flag with `apt-get install` to avoid pulling in unnecessary packages.
- Don’t install SSH or similar services that may expose your containers.
- Minimize the number of layers.
Try to follow best practices when writing a Dockerfile, and use linters. Include these checks in your workflow. Create and standardize organization-wide custom policies to make your workflow consistent and predictable. Add these checks to your CI/CD pipelines to enable and validate security best practices. Extend these practices beyond the Dockerfile, and implement them in each layer of your workflow.
Finally, if you have come across any other practices, please comment below. Thanks for reading !!
Blog Pundit: Naveen Verma and Sanjeev Pandey
Opstree is an End to End DevOps solution provider