Build a dockerized Django app on Google Cloud Run with CI/CD
Hi Django devs,
In the last year, I have become very interested in building web apps with Django and figuring out the best way to develop and deploy them to production. It turns out I have built myself a nice solution, which I can replicate across all my projects. Let me walk you through how to do this from scratch.
I worked my way through a lot of little tricks and settings I will detail here. Hopefully that will help folks out there, as I had to do quite a bit of research, trial and error to get to this stage.
1. Requirements
- Django app
- MySQL database
- Local development: Docker-based, to avoid local environment issues and to share a common environment between developers
- Cloud deployment in Google Cloud Platform… simply because I love GCP
- Select a service that is cheap and auto-scalable
- Build some initial level of Continuous Integration and Deployment (CI/CD)
2. Assumptions
- You already have created a Google Cloud Platform project, for which you are the owner.
- You have basic knowledge of Django, Docker and docker-compose
3. Architecture
Databases
I have decided not to host MySQL locally. I would rather have a cloud-based MySQL instance that can be shared across developers, which I can then also use as a cloud staging database.
Local development
We use Docker and git for source control. I personally use VS Code, which has a nice Docker extension to manage docker-compose, networks, logs, shells…
Fortunately, GCP has a nice way of accessing Cloud SQL remotely and securely: Cloud SQL Auth proxy. We will use the containerized version of the proxy.
Cloud services
So where should my container run? There are a few options to be considered:
- Google App Engine. You can run a Django container on App Engine, but I could not find a compelling reason to go through that pain. I could be wrong.
- Google Kubernetes Engine. This is overkill for a simple web app in my view. I don't need fine-grained control over cluster nodes, and Kubernetes can become very pricey since some VMs are always on.
- Google Cloud Run. It offers a great mix: container-based, scales automatically (down to zero), billed by CPU-seconds of usage, and purpose-built for web apps.
So my choice is Cloud Run!
4. Development setup
4.1. Project file structure
Here is the file structure I will use, so that docker-compose can manage all services together. I will only cover the Django web service, but you could have other services, such as functions and scripts, that you want docker-compose to manage as well.
git/
|-- gcp/
|-- web/
| |-- docker/ Docker specific files
| | |-- docker-entrypoint.sh
| | |-- nginx.conf
| | |-- requirements.txt Python pip libs
| | |-- supervisord.conf
| | |-- supervisord.dev.conf
| |-- djangoprj/ Django project
| |-- djangoprj_web/ Django web app
| |-- Dockerfile
| |-- cloudbuild.yaml GCP build file
| |-- manage.py
|-- other_services/
|-- docker-compose.yml
4.2. MySQL setup
First, we set up a dev Cloud SQL instance in GCP; a gcloud equivalent is sketched after the settings list.
- Instance settings
Name: "djangoprj-dev"
Machine type: select cheap shared cores for your dev instance.
Connections: Public IP, so the dev instance can be accessed remotely.
Backups: disabled for the dev instance.
Charset: utf8mb4 (makes my life easier for compatibility with icons/emojis that MySQL's plain utf8 does not support…)
- Note the connection name: djangoprj:europe-west1:djangoprj-dev
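If you prefer the command line, here is a rough gcloud equivalent of creating that instance. Treat it as a sketch: the MySQL version and machine tier are my assumptions, not part of the original setup.
# Hedged sketch: create the dev Cloud SQL instance from the CLI
# (MySQL version and tier are assumptions; adjust to your needs)
$ gcloud sql instances create djangoprj-dev \
    --database-version=MYSQL_8_0 \
    --tier=db-f1-micro \
    --region=europe-west1 \
    --root-password=<your root password>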
Then, we need to set up credential files for the SQL proxy.
- Go to the GCP Service Accounts page
- Create a new service account
Name: for example cloud-sql-proxy-dev@
Roles: Cloud SQL Client / Editor / Admin
- Next to the new account, select "Manage keys"
Create a new JSON key
Save it; you won't be able to download it again. This will go into the /gcp folder.
Finally, activate the Cloud SQL Admin API.
- Use the search bar to find it in the Marketplace and enable it (a CLI sketch of these steps follows).
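For reference, the same service account, key and API steps as a CLI sketch; the account name and key file name are just the ones used in this article, and the Editor/Admin roles are optional extras.
# Hedged sketch of the service account setup from the CLI ("djangoprj" is the project ID used here)
$ gcloud iam service-accounts create cloud-sql-proxy-dev
# Grant the Cloud SQL Client role (Editor/Admin are optional)
$ gcloud projects add-iam-policy-binding djangoprj \
    --member="serviceAccount:cloud-sql-proxy-dev@djangoprj.iam.gserviceaccount.com" \
    --role="roles/cloudsql.client"
# Create and download a JSON key into the /gcp folder
$ gcloud iam service-accounts keys create gcp/djangoprj-43535312.json \
    --iam-account=cloud-sql-proxy-dev@djangoprj.iam.gserviceaccount.com
# Enable the Cloud SQL Admin API
$ gcloud services enable sqladmin.googleapis.com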
4.3. Initial Django setup
On my local Ubuntu Linux machine:
$ cd git
$ mkdir web
$ cd web
$ sudo pip3 install Django
You could initially install Django with Docker without a local install, but this works for me.
Create django main project
$ sudo django-admin startproject djangoprj .
$ sudo chown -R $USER:$USER .
Create django app
$ ./manage.py startapp djangoprj_web
Update the Django settings.py
Read some environment variables (add import os at the top if it's not there)
import os

DEBUG = int(os.environ.get("DEBUG", default=1))
ALLOWED_HOSTS = os.environ.get('ALLOWED_HOSTS').split(" ") if os.environ.get('ALLOWED_HOSTS') else ['localhost', '127.0.0.1']
ENV = os.environ.get('ENV')
Add your app
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'djangoprj_web'
]
Set up MySQL
DATABASES = {
'default': {
'HOST': os.environ['PMA_HOST'],
'ENGINE': 'django.db.backends.mysql',
'USER': os.environ['MYSQL_USER'],
'PASSWORD': os.environ['MYSQL_PASSWORD'],
'NAME': os.environ['MYSQL_DB'],
'OPTIONS': {'charset': 'utf8mb4'},
},
}
Static files
STATIC_URL = '/static/'
STATIC_ROOT = 'djangoprj/static/'
4.4. Docker image setup
Before we can run the Django project in a container, we build a Docker image that will host our app.
Container components:
- requirements.txt. Python pip dependencies
- docker-entrypoint.sh. Container entry point
- supervisord.conf. Manage applications: nginx & django
- nginx.conf. As we don't want the Django app to serve HTTP directly, we run nginx in front of uwsgi in the same container
Let’s go through these one by one.
requirements.txt
Django==3.1
djangorestframework
mysqlclient>=1.4.5
supervisor
tzdata
uwsgi
docker-entrypoint.sh
#!/bin/bash

# Apply database migrations
python manage.py migrate

# Collect static files only in DEV
if [ "$ENV" == "dev" ]
then
    python manage.py collectstatic --noinput
fi

# Launch supervisor
/usr/local/bin/supervisord
supervisord.conf
[supervisord]
nodaemon=true

[program:uwsgi]
command=uwsgi --socket /djangoprj.sock --module djangoprj.wsgi --chmod-socket=666
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
supervisord.dev.conf (just change this line to enable auto-reload of the server when the code changes)
command=uwsgi --socket /djangoprj.sock --module djangoprj.wsgi --chmod-socket=666 --py-autoreload=1
nginx.conf
# the upstream component nginx needs to connect to
upstream django {
    server unix:/djangoprj.sock; # for a file socket
}

server {
    # the port your site will be served on
    listen 8000;

    # the domain name it will serve for
    server_name djangoprj.com dev.djangoprj.com; # substitute your machine's IP address or FQDN

    charset utf-8;

    # set gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;

    # max upload size
    client_max_body_size 5M; # adjust to taste

    # Django static files
    location /static {
        alias /code/djangoprj/static;
    }

    # Finally, send all the rest to the Django server.
    location / {
        uwsgi_pass django;
        include /etc/nginx/uwsgi_params; # the uwsgi_params file you installed
    }
}
Dockerfile
FROM python:3

EXPOSE 8000

# Nginx
RUN apt-get update
RUN apt-get install -y net-tools nginx
RUN apt-get install -y gettext
RUN rm /etc/nginx/sites-enabled/default
COPY docker/nginx.conf /etc/nginx/sites-enabled

# Supervisord
COPY docker/supervisord.conf /etc/supervisord.conf

ENV PYTHONUNBUFFERED 1

# Python deps
RUN mkdir /code
WORKDIR /code
COPY docker/requirements.txt /code/
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . /code/

# Timezone sync
RUN echo "Europe/Brussels" > /etc/timezone
RUN dpkg-reconfigure -f noninteractive tzdata

# Pass the version and branch name as environment variables
ARG version=dev
ENV VERSION=${version}
ARG branch=dev
ENV BRANCH=${branch}

RUN ["chmod", "+x", "/code/docker/docker-entrypoint.sh"]
ENTRYPOINT ["/code/docker/docker-entrypoint.sh"]
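Before wiring this into docker-compose, you can optionally check that the image builds on its own (the tag name here is arbitrary):
# Optional sanity check, run from the git/ root
$ docker build -t djangoprj-web ./web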
4.5. Spin up these containers locally
Create the docker-compose.yml file
version: '3'

services:
  sql_proxy:
    image: gcr.io/cloudsql-docker/gce-proxy:1.16
    ports:
      - "127.0.0.1:3306:3306"
    command:
      - "/cloud_sql_proxy"
      - "-instances=djangoprj:europe-west1:djangoprj-dev=tcp:0.0.0.0:3306"
      - "-credential_file=/root/keys/keyfile.json"
    volumes:
      - ./gcp/djangoprj-43535312.json:/root/keys/keyfile.json:ro
    networks:
      - some-net

  web:
    build:
      context: ./web/
    ports:
      - "8000:8000"
    volumes:
      - ./web:/code
      - ./web/docker/nginx.conf:/etc/nginx/sites-enabled/nginx.conf
      - ./web/docker/supervisord.dev.conf:/etc/supervisord.conf
      - ./web/gcp:/code/gcp
    environment:
      MYSQL_USER: web
      MYSQL_PASSWORD: <your password>
      MYSQL_DB: djangoprj_dev
      PMA_HOST: sql_proxy
      DEBUG: 1
      ALLOWED_HOSTS: localhost 127.0.0.1 dev.djangoprj.com web
      ENV: dev
    depends_on:
      - sql_proxy
    networks:
      - some-net

  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - "127.0.0.1:8083:80"
    environment:
      PMA_HOST: sql_proxy
    depends_on:
      - sql_proxy
    networks:
      - some-net

networks:
  some-net:
    driver: bridge
Spin up our containers using docker-compose
$ docker-compose up
(I use the VS Code Docker extension for that, it's really cool)
If all goes well, you should be able to access your app at http://127.0.0.1:8000
You will also be able to access phpmyadmin, for troubleshooting purposes, at http://127.0.0.1:8083/ (use the root account and password you set in Cloud SQL when creating the instance).
4.6. Create the database
Create a new database. I use phpmyadmin for that (a gcloud alternative is sketched after the list).
- Create a database called djangoprj_dev
- Create a user called web, with full access to djangoprj_dev
- Copy your DB name, user and password into the docker-compose.yml environment variables (if not done already)
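If you would rather skip phpmyadmin for this step, a rough gcloud equivalent is sketched below. Note that a user created this way is not scoped to a single database, so tightening its privileges to djangoprj_dev still needs a GRANT from a SQL client.
# Hedged sketch: create the dev database and user from the CLI
$ gcloud sql databases create djangoprj_dev --instance=djangoprj-dev --charset=utf8mb4
$ gcloud sql users create web --instance=djangoprj-dev --password=<your password>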
4.7. Initialize Django
Django database first migration. This is a tricky part.
In order to avoid a huge pain if you later want to use a custom User model, you should configure a custom model before you do anything else. Trust me, or Google it…
In models.py
from django.contrib.auth.models import AbstractUser


class User(AbstractUser):
    def __str__(self):
        return self.username
In settings.py
# Custom User model
AUTH_USER_MODEL = 'djangoprj_web.User'
In admin.py
from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from .models import User


# Register your models here.
class UserAdmin(UserAdmin):
    # The fields to be used in displaying the User model.
    # These override the definitions on the base UserAdmin
    # that reference specific fields on auth.User.
    list_display = ('id', 'username', 'email', 'is_superuser', 'is_staff',
                    'first_name', 'last_name', 'date_joined', 'last_login')


admin.site.register(User, UserAdmin)
Connect to your local container to run the first migration.
$ docker exec -it djangoprj_web_1 bash
# ./manage.py makemigrations
# ./manage.py migrate
Create superuser
# ./manage.py createsuperuser
Congrats, you should now have your app rolling at http://127.0.0.1:8000, the admin portal at http://127.0.0.1:8000/admin and phpmyadmin at http://127.0.0.1:8083.
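A quick optional smoke test from the host terminal:
# Both should return an HTTP status line
$ curl -I http://127.0.0.1:8000
$ curl -I http://127.0.0.1:8083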
Now let’s move all this into the cloud…
5. Cloud deployment
So now that we have the Django app running locally, I want to
- Control sources with git
- Automatically build and deploy my app to the staging instance
5.1. Git source control
Since I am using GCP, I will also use the GCP Cloud Source Repositories git service, but you could use any other provider such as GitHub or Bitbucket.
I create a new repo under the same GCP project, follow the instructions and commit the code under /djangoprj.
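For reference, roughly what that looks like from the CLI, assuming the repo is called djangoprj and your gcloud credentials are already configured as a git credential helper:
# Hedged sketch: create a Cloud Source Repository and push the existing code
$ gcloud source repos create djangoprj
$ git remote add google https://source.developers.google.com/p/djangoprj/r/djangoprj
$ git push --all google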
5.2. Cloud Build
In order to deploy your containers in GCP, you will need to build and store them in the cloud.
For this, we can leverage Cloud Build triggers, which can be activated by git commit events.
Start by enabling the Cloud Build API in the console.
Then create a new Trigger
- Name: web-commit-develop
- Event: push to branch
- Source: your git source repo
- Branch: ^develop$ (in my case)
- Apply an “included files filter” to build only on commits to your django web app: “web/**”
- Configuration
Type: Cloud Build configuration file
Location: Repository
File location: "web/cloudbuild.yaml"
- Advanced: substitution variables
_CLOUD_RUN_SERVICE: "web-dev". This tells Cloud Build which Cloud Run service to deploy the container to.
Create the cloudbuild.yaml file
steps:
# Build the container image
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/djangoprj/web-server-$BRANCH_NAME', './web/']
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/djangoprj/web-server-$BRANCH_NAME']
# Deploy container image to Cloud Run
- name: 'gcr.io/cloud-builders/gcloud'
  id: 'Deploying image to Cloud Run'
  args: ['run', 'deploy', '${_CLOUD_RUN_SERVICE}', '--image', 'gcr.io/djangoprj/web-server-$BRANCH_NAME', '--region', 'europe-west1', '--platform', 'managed', '--allow-unauthenticated']
Now, when you commit to the develop branch under /web, it will trigger a build in GCP Cloud Build. You can monitor the build progress and logs in the Cloud Build history, and find the resulting images in the Container Registry.
The first build will fail, since Cloud Build does not have permission to deploy services to Cloud Run. Go to the IAM page and edit the permissions of your <project number>@cloudbuild.gserviceaccount.com account, adding these roles (a gcloud equivalent is sketched below):
- Cloud Run Admin
- Service Account User
Manually run your trigger again from the trigger page.
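For reference, the same role grants from the CLI (replace <project number> with your numeric GCP project number):
# Hedged sketch: grant Cloud Build permission to deploy to Cloud Run
$ gcloud projects add-iam-policy-binding djangoprj \
    --member="serviceAccount:<project number>@cloudbuild.gserviceaccount.com" \
    --role="roles/run.admin"
$ gcloud projects add-iam-policy-binding djangoprj \
    --member="serviceAccount:<project number>@cloudbuild.gserviceaccount.com" \
    --role="roles/iam.serviceAccountUser"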
5.3. Cloud Run deployment
The previous trigger created a Cloud Run service named "web-dev", but its deployment will fail because some options have not yet been passed to Cloud Run. Let's go and tweak a few of them.
Go to the Cloud Run console for the "web-dev" service. Click on "Edit and deploy new revision". Update the following options:
- Container
Container port: 8000
Maximum requests per container: 5, to keep things small in dev mode
- Variables
MYSQL_USER: web
MYSQL_PASSWORD: <your web user password>
MYSQL_DB: djangoprj_dev
PMA_HOST: /cloudsql/djangoprj:europe-west1:djangoprj-dev (this was hard to figure out… the part after /cloudsql/ is your SQL connection name)
DEBUG: 1
ALLOWED_HOSTS: (keep empty for now)
ENV: cloud-dev
- Connections
Add the Cloud SQL connection
Hit Deploy. This will deploy your container and give you the service URL, which you can see on the Cloud Run page. In my case, it's something like https://web-dev-fds56fd-ew.a.run.app
You can now repeat this and add this variable:
- ALLOWED_HOSTS : "web-dev-fds56fd-ew.a.run.app dev.djangoprj.com" (drop the https:// prefix and leave a space between each URL)
Congrats, you now have your cloud-dev instance running at https://web-dev-fds56fd-ew.a.run.app/
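Most of the console tweaks above can also be applied with gcloud. The values below are the ones from this article; ALLOWED_HOSTS is left out because it contains spaces and is easier to add from the console once you know the run.app URL.
# Hedged sketch: configure the web-dev service from the CLI
$ gcloud run services update web-dev \
    --region=europe-west1 \
    --platform=managed \
    --port=8000 \
    --concurrency=5 \
    --add-cloudsql-instances=djangoprj:europe-west1:djangoprj-dev \
    --set-env-vars="MYSQL_USER=web,MYSQL_PASSWORD=<your password>,MYSQL_DB=djangoprj_dev,PMA_HOST=/cloudsql/djangoprj:europe-west1:djangoprj-dev,DEBUG=1,ENV=cloud-dev"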
5.4. Map your custom domain
You probably have your own domain name that you want to use and map to your cloud instance. Go to the Cloud Run dashboard. Click on “Manage Custom Domains”. Follow instructions to add a custom mapping. I map dev.djangoprj.com to my Cloud Run web-dev service.
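The same mapping can be created from the CLI; domain mappings were still under the beta command group at the time of writing, so the exact invocation may differ with your gcloud version.
# Hedged sketch: map the custom dev domain to the Cloud Run service
$ gcloud beta run domain-mappings create \
    --service=web-dev \
    --domain=dev.djangoprj.com \
    --region=europe-west1 \
    --platform=managed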
5.5. Production instances
You can replicate these steps for your production instance:
- Create a Cloud SQL prod instance, with more compute power, replication and backups…
- Create a trigger on your master branch. I disable this one so that I only run it manually from the GCP console.
- Create a prod Cloud Run service. Give it more instances, and possibly more memory. Link it to the prod SQL connection and adapt the variables to match the prod environment. (A gcloud sketch follows.)
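As an illustration only, bumping the prod service's resources could look like this with gcloud; the service name web-prod and the numbers are assumptions, not values from this setup.
# Hedged sketch: give a (hypothetical) prod service more headroom
$ gcloud run services update web-prod \
    --region=europe-west1 \
    --platform=managed \
    --memory=512Mi \
    --max-instances=10 \
    --concurrency=80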
6. Conclusions
I think this is a nice, scalable and automated setup. It has worked fine for me on a few projects, and I'm looking forward to seeing how it performs under higher loads.
In the process, I have also gone down a few other rabbit holes which I may cover in future articles, such as:
- Supporting HTTPS with nginx in local dev (some APIs require HTTPS even for testing)
- Django social authentication
- Django REST Framework (DRF)
- Django multi-domain management in a single app
- Google Cloud Scheduler triggering Google Cloud Functions
- Google Cloud Pub/Sub message broker
I welcome feedback and suggestions to improve this setup. I hope this will save you a few hours if you are trying to build Django on GCP.
Cheers.