Containers
Notes
- Removes the local machine as a variable so that others can replicate the local / repository contexts
- If there are multiple containers, compose them so that databases and third-party services like background workers are included in this ecosystem of containers, an orchestration
- Run an orchestration on the local machine without relying on the machine's locally installed services
Some additional thoughts
Re: the local / repo context, viewers have to reconstruct the local context by following documentation instructions (that may or may not work). In contrast, a composed orchestration of containers (via a compose.yml) encapsulates ostensibly running, interrelated code, and it's a matter of switching it on or off.
The repository context deals with reproducing the visual / organizational element of the code with no guarantee of reproducibility, since the underlying local machines may differ. The container context deals with packaging code for a more robust means of reproduction in the live site context.
Post-Setup
By this time, I should already be able to run a web server, employ background tasks, and see how these operate with either postgres or sqlite as the database. These are services that operate in the local context. Since deployment means transferring my local context to a remote one, how can I be certain that the conditions in the remote context will be fit to run my desired services?
The answer is containers which, in essence, create an exact replica of the local context in the remote one.
Here, I'll implement a local container by moving all relevant files into this single context.
Prior to doing so, ensure Docker is installed and running locally.
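A quick way to confirm both conditions from the shell:
docker --version   # the client is installed
docker info        # the daemon is running and reachable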
Local files
- Should exclude all non-essential files presently found inside /src; this includes:
    - */.sqlite-*
    - */.db*
    - staticfiles/
    - mediafiles/
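One way to enforce these exclusions is a .dockerignore at the build context root. This is only a sketch based on the patterns above, not a file the project necessarily ships:
# .dockerignore — hypothetical sketch based on the exclusions above; adjust to the
# project's actual file names and locations
**/*.sqlite-*
**/*.db*
**/staticfiles
**/mediafiles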
How this document is structured
We'll first try to do things manually to see how compose.yml makes this process easier.
Where to run commands
Based on the project structure above, make sure to be in the <root> directory, i.e. where start-django was cloned.
Dockerfiles
There are two Dockerfiles that are preconfigured under /deploy/pg and /deploy/sq.
- Prepare packages that will be used to set up litestream and compile sqlite from source.
- See the latest sqlite version. Supply the most recent version and the relevant extensions to use. At the time of this writing: 3.41.2 with the JSON1 + FTS5 extensions.
- See the latest litestream release, for use as sqlite backup and recovery. Supply the most recent version. At the time of this writing: 0.3.9.
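A sketch of how those versions might surface in the sqlite-based Dockerfile, assuming (hypothetically) that they are exposed as build arguments rather than hardcoded:
# hypothetical version pins; the actual /deploy/sq Dockerfile may declare these differently
ARG SQLITE_VERSION=3.41.2
ARG LITESTREAM_VERSION=0.3.9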
- Why opt/? See some context. Why /src? We've placed all relevant files inside this directory, including the requirements.txt.
- Copies the /src folder of the present local build context to the container's WORKDIR, which was just set to /opt/src.
- Presumes that poetry export -f requirements.txt --without-hashes --output src/requirements.txt has previously been run from the project's root directory.
- Makes the two files executable but does not run them. Note that the run_cmd needs to be filled in via either compose.yml or fly.toml, since it can be either run.sh or web.sh.
# syntax=docker/dockerfile:1.2
FROM python:3.11-slim-bullseye
ENV PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1 \
PIP_DISABLE_PIP_VERSION_CHECK=1
# psycopg (1)
RUN apt-get update && apt-get install -y libpq-dev gcc && rm -rf /var/lib/apt/lists/*
# same
WORKDIR /opt/src
COPY /src .
RUN pip install -r requirements.txt
# make executable (2)
ARG run_cmd
RUN chmod +x /opt/src/scripts/worker.sh /opt/src/scripts/${run_cmd}
- Needed for psycopg to use Postgres.
- Note that the run_cmd needs to be filled in via either compose.yml or fly.toml, since it can be either run.sh or web.sh.
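As an illustration only, a compose service could supply that build argument roughly like this (the service name and paths here are assumptions, not the project's actual compose file):
# hypothetical fragment of a compose file supplying the run_cmd build argument
services:
  django:
    build:
      context: .
      dockerfile: deploy/pg/Dockerfile
      args:
        run_cmd: run.sh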
Entrypoint/CMD
The included Dockerfiles do not contain a CMD / entrypoint script. To run the containers built from the images, I must add an --entrypoint flag or set the entrypoint in a compose.yml.tpl. Note, however, that both Dockerfiles make a variable argument executable alongside worker.sh.
I can replace this variable argument with either run.sh or web.sh depending on the context. So I'll use run.sh with the compose.yml.tpl for testing inside the container. Then I can use web.sh as a build argument with fly.toml later during deployment.
Entrypoints
Dockerfiles contain instructions, but the ones above do not run anything yet. That is handled by the entrypoint scripts.
#!/bin/bash
set -e
python manage.py migrate
echo "Run worker."
python manage.py run_huey # (1)
- The background process worker, which will only work if the required environment variables are set.
#!/bin/bash
set -e
python manage.py collectstatic --noinput # (1)
python manage.py compress --force # (2)
python manage.py migrate # (3)
# (4) (5)
gunicorn config.wsgi:application \
--bind 0.0.0.0:8080 \
--workers=2 \
--capture-output \
--enable-stdio-inheritance
- Collect static files into /opt/src/staticfiles
- Compress content from /opt/src/staticfiles into /static/CACHE
- Ensure all migrations are applied to the database
- Serve with gunicorn for production (https://docs.gunicorn.org/en/latest/run.html) rather than python manage.py runserver; config.wsgi:application refers to the application exposed in Django's /src/config/wsgi.py. See the Django and server discussion.
- 0.0.0.0 is included in config.settings.ALLOWED_HOSTS; 8080 will be the exposed port.
#!/bin/bash
set -e
# (1)
echo "Static files management."
python manage.py collectstatic --noinput
python manage.py compress --force
python manage.py migrate
# (2)
echo "Gunicorn server."
gunicorn config.wsgi:application \
--bind 0.0.0.0:"$PORT" \
--worker-tmp-dir /dev/shm \
--workers=2 \
--capture-output \
--enable-stdio-inheritance
- Note the similar setup to run.sh, with the addition of the bound port :${PORT}, which maps to the fly.toml service's internal port; see fly.toml's PORT.
Dockerfile + Entrypoint
- The Dockerfile referenced contains the instruction COPY /src ., i.e. copy /src into the working directory. Since I'm at the root directory when running docker build, the local context is . and I'm copying a portion of this local context, i.e. ./src, to the container.
- Translates to: build the container, tag it with sq, and use the sqlite-based Dockerfile indicated (a sketch of such a command follows this list).
- Uses the entrypoint script inside the container just built and runs the same, exposing port 8080.
- The entrypoint location is based on the contents found inside the docker container. Since the WORKDIR is /opt/src, the path to the script is simply scripts/run.sh.
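A rough, hypothetical reconstruction of the build/run pair those annotations describe (the tag, Dockerfile path, and build argument are assumptions based on the notes above):
docker build -f deploy/sq/Dockerfile --build-arg run_cmd=run.sh -t sq .
docker run --rm -p 8080:8080 --entrypoint scripts/run.sh sq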
- The Dockerfile referenced contains the instruction COPY /src ., i.e. copy /src into the working directory. Since I'm at the root directory when running docker build, the local context is . and I'm copying a portion of this local context, i.e. ./src, to the container.
- Note that the database credentials employed are those created during local development. This instance of postgres sits on the dev machine and not in a separate docker container. To reach the dev machine from the Docker container, use host.docker.internal as the host (a sketch follows this list).
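A hypothetical pg counterpart, again only a sketch; the environment variable name and credentials are placeholders, not the project's actual settings:
docker build -f deploy/pg/Dockerfile --build-arg run_cmd=run.sh -t pg .
docker run --rm -p 8080:8080 \
  -e DATABASE_URL=postgres://user:password@host.docker.internal:5432/dbname \
  --entrypoint scripts/run.sh pg
On Linux, reaching the host via host.docker.internal may additionally require --add-host=host.docker.internal:host-gateway on the docker run command.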
Docker Compose
The above configurations for each Dockerfile + database pairing can become unwieldy.
Note that though I've gotten a container running for the Django web service, I still haven't implemented redis, huey, and postgres as separate containers. Re: postgres, I've used the local version on my device but haven't created a separate container for it.
This is where the compose.yml becomes handy. I'm able to attach profiles, in this case pg and sq, so that a single docker-compose --profile invocation (see the just debug_up recipe below) orchestrates multiple running services: django, redis, huey, and the database, whether sqlite or postgres, taking into account the "depends_on" field of each service.
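A stripped-down sketch of what such a profiled compose file could look like; the service names follow the prose above, but the images, build details, and profile wiring are assumptions rather than the project's actual compose.debug.yml:
# hypothetical, abbreviated compose sketch illustrating profiles and depends_on;
# the real file also wires in env files, volumes, and the run_cmd build argument
services:
  redis:
    image: redis:7
  db:
    image: postgres:15
    profiles: ["pg"]
  django:
    build: .
    profiles: ["pg"]   # a sibling sqlite-flavored service would carry the sq profile
    depends_on:
      - redis
      - db
  huey:
    build: .
    depends_on:
      - redis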
See the full compose.debug.yml, which can also be invoked via a just command shortcut: just debug_up (assumes 1Password secret reference usage).
Command Runner
Requires: 1password-based secret references
Container Debug
just debug_up <target>
poetry export -f requirements.txt \
--without-hashes \
--output src/requirements.txt # (1)
op inject -i ./deploy/env.common.tpl -o ./deploy/.env.debug # (2)
cp ./deploy/compose.debug.yml compose.yml # (3)
docker-compose --file compose.yml \
--profile {{target}} up \
--build # (4)
- The Dockerfile referenced in the compose.yml will pip install /src/requirements.txt, so poetry export ensures that what is installed is always what's declared in pyproject.toml.
- The secrets stored in the env template are 1Password secret references, so op inject resolves them into an actual .env file (./deploy/.env.debug); ensure that compose.debug.yml reads from this generated file. The .env file will contain secrets, hence it's critical that it be prefixed .env so that it is always matched by .gitignore and never committed.
- The compose.yml file needs to be in the root directory since the build context is . and the Dockerfile copies from /src.
- The <target> argument refers to a profile declared in a compose.yml file.
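For example, using the pg profile declared above:
just debug_up pg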
Specific Compose Up
just up <folder>
poetry export -f requirements.txt \
--without-hashes \
--output src/requirements.txt # (1)
op inject -i ./deploy/{{folder}}/env.tpl -o ./deploy/{{folder}}/.env # (2)
cp ./deploy/{{folder}}/compose.yml compose.yml # (3)
docker-compose up --build
- The Dockerfile referenced in the compose.yml will pip install /src/requirements.txt, so poetry export ensures that what is installed is always what's declared in pyproject.toml.
- The secrets stored in {{folder}}/env.tpl are 1Password secret references, so op inject resolves them into an actual .env file ({{folder}}/.env); ensure that {{folder}}/compose.yml reads from this generated file. The .env file will contain secrets, hence it's critical that it be named .env so that it is always matched by .gitignore and never committed.
- The compose.yml file needs to be in the root directory since the build context is . and the Dockerfile copies from /src.
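For example, to bring up the Postgres pairing (assuming the pg folder under /deploy):
just up pg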