Docker Setup & Container Registry

Docker Compose

This application consists of a few services, a database, and a message broker. Each one runs in its own container, and they all need to communicate with each other. I chose Docker Compose because it provides a clean, easy-to-understand configuration file that can build and run the whole application with minimal effort.

Docker Compose also makes it simple to control startup order. For example, my main service waits until the database is fully up and ready, which can take a few seconds. Managing that kind of dependency manually would be much more complicated with several services dependent on each other.
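Compose can express this kind of ordering declaratively with a healthcheck plus a `depends_on` condition, instead of (or alongside) a wait script. Below is a minimal sketch; the `/health` endpoint and the presence of `curl` in the service image are assumptions, not part of my actual setup.

```yaml
# Sketch: make the UI wait for the backend to be healthy, not just started.
# Assumes the backend image contains curl and exposes a /health endpoint.
services:
  scheduleservice_app:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 5s
      timeout: 3s
      retries: 10

  scheduleui_app:
    depends_on:
      scheduleservice_app:
        condition: service_healthy   # wait for the healthcheck to pass
```

With `condition: service_healthy`, Compose delays starting the dependent container until the healthcheck succeeds, which covers the same gap a wait-for-it script does.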

Overall, Docker Compose makes local development straightforward. I can run everything together, test changes quickly, and trust that the setup will behave the same way on my server. This helps ensure everything is configured correctly and reduces the chances of running into container interaction issues later on.
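The day-to-day loop this enables is just a handful of commands. These are standard Docker Compose invocations; the service name matches the snippet further below.

```shell
# Build images and start the whole stack in the background
docker compose up --build -d

# Tail logs for a single service while testing changes
docker compose logs -f scheduleui_app

# Tear everything down (add -v to also remove volumes)
docker compose down
```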

Below is a small snippet from the Docker Compose file for one of my services. In local development, the service is built directly from the project folder; in production, the Compose file instead uses a pre-built image pulled from DigitalOcean Container Registry. The registry setup is described below, but this shows how the configuration differs between local and production.


# =========================
# Blazor WebApp
# =========================
scheduleui_app:
  build:
    context: ./Schedule.UI
    dockerfile: Dockerfile
  image: scheduleui:latest
  container_name: scheduleui
  ports:
    - "5200:5200"
  environment:
    - ASPNETCORE_ENVIRONMENT=Production
    - ASPNETCORE_URLS=http://+:5200
  networks:
    - internal_network
  depends_on:
    - scheduleservice_app
  restart: unless-stopped
  entrypoint: >
      /bin/sh -c "
        /wait-for-it.sh scheduleservice:5000 --timeout=90 --strict -- \
        dotnet Schedule.UI.dll
      "
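For comparison, the production version of the same service swaps the `build` section for an `image` reference pointing at the registry. This is a sketch; the registry path is a placeholder for my actual DOCR registry name, and the rest of the service definition stays the same as above.

```yaml
# =========================
# Blazor WebApp (production sketch)
# =========================
scheduleui_app:
  # No build section: the server only pulls, never builds.
  # <registry-name> is a placeholder for the actual DOCR registry.
  image: registry.digitalocean.com/<registry-name>/scheduleui:latest
  container_name: scheduleui
  restart: unless-stopped
```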
                            

Docker Container Registry (DOCR)

Originally, my plan was to push the project files to the server and build the Docker images directly there. That didn’t work well: the server is one of the cheaper options, and its hardware simply couldn’t keep up with image builds. I also didn’t want to spend a fortune on this project.

To solve that, I switched to using DigitalOcean Container Registry (DOCR). Now I build the images in a GitHub Actions workflow and push them to the registry. When I deploy the application, the server simply pulls the images from DOCR and runs them, which is much faster and more reliable.
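On the server side, a deployment then boils down to authenticating against DOCR and pulling. This is a sketch of that flow using DigitalOcean's `doctl` CLI; it assumes `doctl` is installed and authenticated on the server, and that the production Compose file references registry images.

```shell
# Log the local Docker daemon in to the DigitalOcean Container Registry
doctl registry login

# Pull the latest images referenced by the Compose file
docker compose pull

# Restart services with the freshly pulled images
docker compose up -d
```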

Each service has its own dedicated GitHub workflow, so I can update images independently whenever something changes. Here is one of the workflows:


name: Build and push UI to DOCR

on:
  workflow_dispatch:

jobs:
  build-and-push-service:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to DigitalOcean Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ secrets.REGISTRY_NAME }}
          username: ${{ secrets.REGISTRY_USER_NAME }}
          password: ${{ secrets.REGISTRY_TOKEN }}

      - name: Build and Push UI Image
        uses: docker/build-push-action@v5
        with:
          context: ./Schedule.UI
          file: ./Schedule.UI/Dockerfile
          push: true
          tags: |
            ${{ secrets.REGISTRY_NAME }}/scheduleui:latest
            ${{ secrets.REGISTRY_NAME }}/scheduleui:${{ github.sha }}
                            
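Because the workflow is triggered by `workflow_dispatch`, it can be started from the Actions tab in GitHub, or from the command line with the `gh` CLI. A sketch, assuming the workflow name from the file above:

```shell
# Trigger the build-and-push workflow manually
gh workflow run "Build and push UI to DOCR"

# Follow the run's progress in the terminal
gh run watch
```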