
Discover Docker

Check

Checkpoint: call us to check your results (don’t stay blocked on a checkpoint if we are busy; we can check two or three checkpoints at the same time).

Question

A point to document in your report.

Tip

Interesting information.

Goals

Good practice

Do not forget to document what you do along the steps; the documentation you provide will be evaluated as your report. Create an appropriate file structure: one folder per image.

Target application

3-tier application:

  • HTTP server
  • Backend API
  • Database

For each of those applications, we will follow the same process: choose the appropriate docker base image, create and configure this image, put our application specifics inside and at some point have it running. Our final goal is to have a 3-tier web API running.

Base images

Database

Basics

We will use the image: postgres:17.2-alpine.

Let’s have a simple Postgres server running; here is what a minimal Dockerfile would look like:

FROM postgres:17.2-alpine

ENV POSTGRES_DB=db \
   POSTGRES_USER=usr \
   POSTGRES_PASSWORD=pwd

Build this image and start a container properly.

Your Postgres DB should be up and running. Check that everything is running smoothly with the docker command of your choice.
Don’t forget to name your docker image and container.
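For example, a minimal sketch (the my-database image and container names are assumptions, pick your own):

docker build -t my-database .
docker run -d --name my-database my-database
docker ps    # the container should be listed as "Up"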

Tip

If you have difficulties, go back to parts 2.3.3 Build the image and 2.3.4 Run your image in TD01 - Docker (TD 1 Discover Docker).

Re-run your database with adminer. Don't forget --network app-network to enable adminer/database communication. We use --network instead of --link because the latter is deprecated.

Tip

Don't forget to create your network:

    docker network create app-network

Also, does it seem right to have passwords written in plain text in a file? You may rather define those environment variables when running the image, using the -e flag.
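For instance, a minimal sketch, assuming the image is named my-database and app-network exists:

docker run -d \
  --name my-database \
  --network app-network \
  -e POSTGRES_DB=db \
  -e POSTGRES_USER=usr \
  -e POSTGRES_PASSWORD=pwd \
  my-database
# values passed with -e at run time override the ENV defaults baked into the image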

Question

1-1 For which reason is it better to run the container with a flag -e to give the environment variables rather than put them directly in the Dockerfile?

It would be nice to have our database structure initialized with the docker image, as well as some initial data. Any SQL scripts found in /docker-entrypoint-initdb.d are executed in alphabetical order, so let’s add a couple of scripts to our image:

Tip

Don't forget to restart the adminer:

    docker run \
    -p "8090:8080" \
    --net=app-network \
    --name=adminer \
    -d \
    adminer

Init database

01-CreateScheme.sql

CREATE TABLE public.departments
(
 id      SERIAL      PRIMARY KEY,
 name    VARCHAR(20) NOT NULL
);

CREATE TABLE public.students
(
 id              SERIAL      PRIMARY KEY,
 department_id   INT         NOT NULL REFERENCES departments (id),
 first_name      VARCHAR(20) NOT NULL,
 last_name       VARCHAR(20) NOT NULL
);

02-InsertData.sql

INSERT INTO departments (name) VALUES ('IRC');
INSERT INTO departments (name) VALUES ('ETI');
INSERT INTO departments (name) VALUES ('CGP');


INSERT INTO students (department_id, first_name, last_name) VALUES (1, 'Eli', 'Copter');
INSERT INTO students (department_id, first_name, last_name) VALUES (2, 'Emma', 'Carena');
INSERT INTO students (department_id, first_name, last_name) VALUES (2, 'Jack', 'Uzzi');
INSERT INTO students (department_id, first_name, last_name) VALUES (3, 'Aude', 'Javel');

Rebuild your image and check that your scripts have been executed at startup and that the data is present in your container.

Tip

When we talk about /docker-entrypoint-initdb.d, it means inside the container, so you have to COPY your local scripts directory's content into that container directory in your Dockerfile.
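For example, assuming the container is named my-database and the credentials above, you can check the data with psql inside the container:

docker exec -it my-database psql -U usr -d db -c "SELECT * FROM public.students;"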

Persist data

You may have noticed that if your database container gets destroyed, then all your data is reset; a database must persist its data durably. Use volumes to persist data on the host disk.

-v /my/own/datadir:/var/lib/postgresql/data

Check that data survives when your container gets destroyed.
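As a sketch (the host path and names are examples):

# start with a volume, then destroy the container
docker run -d --name my-database --network app-network \
  -v /my/own/datadir:/var/lib/postgresql/data my-database
docker rm -f my-database

# recreate it with the same volume: the data should still be there
docker run -d --name my-database --network app-network \
  -v /my/own/datadir:/var/lib/postgresql/data my-database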

Question

1-2 Why do we need a volume to be attached to our postgres container?

Question

1-3 Document your database container essentials: commands and Dockerfile.

Backend API

Basics

For starters, we will simply run a Java hello-world class in our containers; only afterwards will we run a jar. In both cases, choose the proper image, keeping in mind that we only need a Java runtime.

Here is a complex Java Hello World implementation:

Main.java

public class Main {

   public static void main(String[] args) {
       System.out.println("Hello World!");
   }
}

1- Compile with your target Java: javac Main.java.
2- Write the Dockerfile.

FROM   # TODO: choose a Java JRE
# TODO: add the compiled Java (aka bytecode, aka the .class file)
# TODO: run it with the “java Main” command

3- Now, to launch the app, do the same thing as in the previous Basics: build the image and run a container.

Here you have a first glimpse of your backend application.

In the next step we will simply enrich the build (using Maven instead of a minimalistic javac) and execute a jar instead of a simple .class.

→ If it’s a success, you must see “Hello World!” in your console.
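As a concrete sketch of steps 1 to 3 (the java-hello image name is an assumption):

javac Main.java                  # 1- compile locally
docker build -t java-hello .     # 2- build the image from your Dockerfile
docker run --rm java-hello       # 3- run it; it should print "Hello World!"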

Multistage build

In the previous section we were building Java code on our machine to have it running in a Docker container. Wouldn’t it be great to have Docker handle the build as well? You probably noticed that the default openjdk Docker images contain... well... a JDK! Create a multistage build using Docker’s multi-stage build feature.

Your Dockerfile should look like this:

FROM eclipse-temurin:21-jdk-alpine
# Build Main.java with JDK
# TODO : in next steps (not now)

FROM eclipse-temurin:21-jre-alpine
# Copy resource from previous stage
COPY --from=0 /usr/src/Main.class .
# Run java code with the JRE
# TODO : in next steps (not now)

Don’t fill in the Dockerfile now; we will do it in the next steps.

Backend simple API

We will deploy a Spring Boot application providing a simple API with a single greeting endpoint.

Create your Spring Boot application on: Spring Initializr.

Use the following config:

  • Project: Maven
  • Language: Java 21
  • Spring Boot: 3.4.2
  • Packaging: Jar
  • Dependencies: Spring Web

Generate the project and give it a simple GreetingController class:

package fr.takima.training.simpleapi.controller;

import org.springframework.web.bind.annotation.*;

import java.util.concurrent.atomic.AtomicLong;

@RestController
public class GreetingController {

   private static final String template = "Hello, %s!";
   private final AtomicLong counter = new AtomicLong();

   @GetMapping("/")
   public Greeting greeting(@RequestParam(value = "name", defaultValue = "World") String name) {
       return new Greeting(counter.incrementAndGet(), String.format(template, name));
   }

   record Greeting(long id, String content) {}

}

You can now build and start your application; of course, you will need Maven and a JDK 21.
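For reference, a local run without Docker could look like this (using the Spring Boot Maven plugin that Spring Initializr includes):

mvn spring-boot:run
# or build the jar, then run it
mvn package
java -jar target/*.jar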

How convenient would it be to have a virtual container to build and run our simplistic API?

Oh wait, we have Docker! Here is how you could build and run your application with Docker:

# Build stage
FROM eclipse-temurin:21-jdk-alpine AS myapp-build
ENV MYAPP_HOME=/opt/myapp
WORKDIR $MYAPP_HOME

RUN apk add --no-cache maven

COPY pom.xml .
COPY src ./src
RUN mvn package -DskipTests

# Run stage
FROM eclipse-temurin:21-jre-alpine
ENV MYAPP_HOME=/opt/myapp
WORKDIR $MYAPP_HOME
COPY --from=myapp-build $MYAPP_HOME/target/*.jar $MYAPP_HOME/myapp.jar

ENTRYPOINT ["java", "-jar", "myapp.jar"]

Question

1-4 Why do we need a multistage build? And explain each step of this Dockerfile.

Check

A working Spring Boot application with a simple HelloWorld endpoint.

Did you notice that Maven downloads all libraries on every image build?
You can contribute to saving the planet by caching libraries while the Maven pom file has not changed: copy pom.xml first and run the goal mvn dependency:go-offline before copying your sources, so the dependency layer stays in Docker's build cache.

Backend API

Let’s now build and run the backend API connected to the database. You can get the zipped source code here: simple-api. You can replace only your src directory and the pom.xml file with the ones available in the repository.

Adjust the configuration in simple-api/src/main/resources/application.yml (this is the application configuration). How can your backend application reach the database container? Either use the deprecated --link or, preferably, create a docker network.

Once everything is properly bound, you should be able to access your application’s API, for example on: /departments/IRC/students.

[
  {
    "id": 1,
    "firstname": "Eli",
    "lastname": "Copter",
    "department": {
      "id": 1,
      "name": "IRC"
    }
  }
]
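For instance, assuming you published the backend’s port 8080 to your host (this depends on your run command), you can hit the endpoint with curl:

curl http://localhost:8080/departments/IRC/students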

Explore your API's other endpoints; have a look at the controllers in the source code.

Check

A simple web API on top of your database.

HTTP server

Basics

Choose an appropriate base image.

Create a simple landing page, index.html, and put it inside your container.

It should be enough for now: start your container and check that everything is working as expected.

Here are commands that you may want to try to do so:

  • docker stats
  • docker inspect
  • docker logs
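A minimal sketch, assuming an Apache httpd base image, a my-apache name, and port 80 published (all assumptions):

docker build -t my-apache .
docker run -d --name my-apache -p 80:80 my-apache
docker logs my-apache     # Apache startup logs
curl http://localhost     # should return your index.html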

Configuration

You are using the default Apache configuration, and it will be enough for now; to use your own, copy it into your image.

Use docker exec to retrieve the default configuration file /usr/local/apache2/conf/httpd.conf from your running container.

Note

You can also use docker cp.
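For example, assuming the container is named my-apache, either approach works:

docker exec my-apache cat /usr/local/apache2/conf/httpd.conf > httpd.conf
# or
docker cp my-apache:/usr/local/apache2/conf/httpd.conf ./httpd.conf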

Reverse proxy

We will configure the HTTP server as a simple reverse proxy in front of our application. Such a server could be used to deliver a front-end application, to configure SSL, or to handle load balancing, so it can be quite useful, even though in our case we will keep things simple.

Here is the documentation: Reverse Proxy.

Add the following to the configuration, and you should be all set:

# Load the proxy modules before the Proxy* directives are used
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyPass / http://YOUR_BACKEND_LINK:8080/
    ProxyPassReverse / http://YOUR_BACKEND_LINK:8080/
</VirtualHost>
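Once the image is rebuilt with this configuration and the container restarted, a quick check from the host (assuming port 80 is published) could be:

curl http://localhost/
# should now return the backend’s greeting JSON instead of the Apache landing page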

Question

1-5 Why do we need a reverse proxy?

Check

Checkpoint: a working application through a reverse proxy.

Docker-compose

1- Install docker-compose if the docker compose command does not work.

You may have noticed that it can be quite painful to manually orchestrate the start, stop, and rebuild of our containers. Thankfully, a useful tool called docker-compose comes in handy in those situations.

2- Let’s create a docker-compose.yml file with the following structure to define and drive our containers:

services:
    backend:
        build:
        #TODO
        networks:
        #TODO
        depends_on:
        #TODO

    database:
        build:
        #TODO
        networks:
        #TODO

    httpd:
        build:
        #TODO
        ports:
        #TODO
        networks:
        #TODO
        depends_on:
        #TODO

networks:
  #TODO

volumes:
  #TODO

docker-compose will handle the three containers for us.

The file above is a basic example of the structure; you need to add more parameters and think about the cleanest, most optimized approach, as you would in a company (for example: environment variables, volumes, restart policies, and process segregation).

Once your containers are orchestrated as services by docker-compose, you should have a perfectly running application; make sure you can access your API on localhost.
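A few everyday commands, as a sketch:

docker compose up -d --build     # (re)build images and start all services
docker compose ps                # list services and their status
docker compose logs -f backend   # follow the logs of one service
docker compose down              # stop and remove containers and networks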

Note

The ports of both your backend and database should not be opened to your host machine.

Question

1-6 Why is docker-compose so important?

Question

1-7 Document docker-compose most important commands.

Question

1-8 Document your docker-compose file.

Check

A working 3-tier application running with docker-compose.

Publish

Your docker images are stored locally; let’s publish them so they can be used by other team members or on other machines.

You will need a Docker Hub account.

1- Connect to your freshly created account with docker login.

2- Tag your image. So far we have only been using the latest tag; now that we want to publish our image, let’s add some meaningful version information.

docker tag my-database USERNAME/my-database:1.0

3- Then push your image to dockerhub:

docker push USERNAME/my-database:1.0  

Docker Hub is not the only docker image registry; you can also self-host your images (this is the choice of most companies).

Once you publish your images to Docker Hub, you will see them in your account; having some documentation for your images would be quite useful if you want to use them later.
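Anyone on the team (or any other machine) can now retrieve the published image:

docker pull USERNAME/my-database:1.0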

Question

1-9 Document your publication commands and your published images on Docker Hub.

Question

1-10 Why do we put our images into an online repo?

© Takima 2025