My Synology disk crashed and so did my Docker setup. Basically, the CI/CD pipeline for my programs no longer existed. The wonderful thing about an awful crash like this is that I could rethink my setup. The result is what I would call “a poor man’s CI/CD”. It’s just Git, Docker, Docker Compose and Cron. It is easy to set up and it might be all you need.

The idea

We’ll create a scheduled bash script that will use Git to pull the source code to our Synology NAS. The script will build a Docker image and deploy it using Docker Compose.

Diagram of the setup: a scheduled bash script pulls the source code using Git, builds an image with Docker and deploys it with Docker Compose.
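
In essence, the scheduled script boils down to something like this (a simplified sketch; the actual run.sh in step 2 adds change detection so it only rebuilds when needed):

git pull                 # CI: get the latest source code
docker build -t hello .  # CI: test, build and package the image
docker-compose up -d     # CD: (re)deploy the container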

Demo

To show how it works, I’ve set up a public repository on GitHub: synology-ci-cd-nodejs-demo. It is a simple Node.js application that runs on port 3000 on your NAS and returns a Hello World message with the server time.

Let’s get it up and running in a Docker container on the NAS. I executed the following lines of code:

# clone the repository
git clone "https://github.com/KeesCBakker/synology-ci-cd-nodejs-demo.git"

# navigate to the created directory
cd synology-ci-cd-nodejs-demo
ls

# run the CI/CD of the container
bash run.sh

# check what's going on
curl "http://127.0.0.1:3000"

Let’s dive into the inner workings of this setup.

1. Setup Docker+Git on your Synology NAS

First, we need to make some changes to our Synology NAS setup.

Packages

Go to the Package Center of your Synology NAS and install the Docker and Git Server packages.

Projects directory

We need a directory into which we will pull the repository. Open up File Station and create a directory named projects somewhere. In my case it lives in the root folder of my volume.
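
If you prefer the command line, the same directory can be created over SSH (enabled in the next step); /volume1 is an assumption, adjust it to your own volume:

# create the projects directory in the root of the first volume
mkdir -p /volume1/projects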

SSH

Next, make sure you have SSH enabled:

  1. Open up Control Panel.
  2. Search for ssh and click Terminal & SNMP.
  3. Click the Enable SSH service checkbox.
  4. Click the Apply button.

Now that SSH is enabled, we need to set up your profile in order for the git command to work. Open your profile in Nano with:

nano ~/.profile

Add the following line of code:

export PATH="$PATH:/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin:/usr/local/sbin:/usr/local/bin"

Press ctrl+x to exit the Nano editor. Choose y to save the file. Now exit your SSH session. Next time you SSH into your NAS, the profile is applied.
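
If you’d rather skip the editor, the same change can be made with a one-liner that appends the export from above and reloads the profile for the current session:

# append the PATH export to the profile and apply it right away
echo 'export PATH="$PATH:/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin:/usr/local/sbin:/usr/local/bin"' >> ~/.profile
. ~/.profile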

2. Prepare your repository

This CI/CD method works with 3 files that need to be added to the repository. Together these files create the CI/CD pipeline.

Build+test: Dockerfile

The first file is the Dockerfile. It contains all the information to test, build and package your application into a production container. The demo Dockerfile uses a multi-stage build for the Node.js application.
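
The actual file is in the demo repository; purely as an illustration, a multi-stage Node.js Dockerfile could look something like this (the entry point and the npm test script are assumptions, not the demo’s real contents):

# build stage: install all dependencies and run the tests
FROM node:lts-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm test

# production stage: fresh install with only production dependencies
FROM node:lts-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
# index.js is an assumed entry point; the demo's file layout may differ
COPY --from=build /app/index.js ./
EXPOSE 3000
CMD ["node", "index.js"]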

Deploy: docker-compose.yaml

The second file is the docker-compose.yaml. It stores everything needed to run the container on your NAS. It contains information on volumes that need to be mapped, ports that should be exposed and the name of the image. More on Docker Compose can be found here.

This is what I used for the demo file:

version: '3'
services:
  web:
    image: hello:latest
    restart: always
    expose:
      - 3000
    ports:
      - 3000:3000

The image here is tagged as hello and the service that will be started is called web. We’ll need these in the next step.
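These two names come back in the run script of the next step: the tag is what docker build produces and the service is what Docker Compose starts. For example, from the repository directory:

docker build -t hello .        # build the image the compose file refers to
docker-compose up -d web       # start only the web service, detached
docker-compose stop -t 10 web  # stop it with a 10 second grace period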

Glue: run.sh

The run.sh script glues everything together. This diagram shows what happens in this script:

Flow diagram of the steps the run.sh script will execute.

I’ve converted the diagram above into a bash script. We need the tag and service values from the previous step.

#!/bin/bash
tag="hello" # tag of your Docker image
service="web" # docker-compose service to start

stop_timeout=10
need_build=false
need_start=false
full_docker_name="${tag}_${service}_1" # container name to look for in docker ps
set -e

function echo_title {
  echo ""
  echo "$1" | sed -r 's/./-/g'
  echo "$1"
  echo "$1" | sed -r 's/./-/g'
  echo ""
}

# go to the directory the script lives in
pushd "$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

if [ "$1" = "--force" ] || [ "$1" == "-f" ] ; then
  need_pull=true
else
  need_pull=$(git fetch --dry-run 2>&1)
fi

if [ -n "$need_pull" ] ; then
  echo_title "PULLING LATEST SOURCE CODE"
  git reset --hard
  git pull
  need_build=true
else
  image_exists=$(docker images | grep $tag || true)
  if [ -z "$image_exists" ] ; then
    need_build=true
  fi
fi

if [ "$need_build" = true ] ; then
  echo_title "BUILDING CONTAINER"
  docker build -t "$tag" .
  echo_title "STOPPING RUNNING CONTAINER"
  docker-compose stop -t $stop_timeout
  need_start=true
else
  is_running=$(docker ps | grep $full_docker_name || true)
  if [ -z "$is_running" ] ; then
    need_start=true
  fi
fi

if [ "$need_start" = true ] ; then
  echo_title "STARTING CONTAINER"
  docker-compose up -d $service
  printf "\nContainer is up and running.\n\n"
else
  echo "No changes found. Container is already running."
fi

So what about --force? You might want to change the run script and do a manual pull to get the changes in. If you then run the script, it will think that nothing has changed (you just pulled the source yourself). To get around this, run bash run.sh --force and a rebuild and redeploy will be forced.

3. Git Tokens

The demo shows how to use a public repository. Your personal repositories will not be publicly accessible, so you’ll have to provide some credentials to access them. You could set up a secure SSH connection between your NAS and your source control provider, but I went the easy route and used a simple HTTPS clone with a special token.

GitHub: Personal Access Token

The special token we’ll need is called a Personal Access Token in GitHub. To get one, do this:

  1. Click on your avatar and select Settings
  2. Click on Developer Settings
  3. Click on Personal access tokens
  4. Click on Generate a new token
  5. Enter a Note
  6. Scroll down and click Generate token
  7. Now copy the token.

More on Personal Access Tokens here.

BitBucket: App Password

The special token we’ll need is called an App Password in BitBucket. You can only use them programmatically; you can’t log in with them. To get one, do this:

  1. Login to https://bitbucket.org/
  2. Click on your avatar and select Bitbucket Settings
  3. Under Access Management, click on App passwords
  4. Click Create app password
  5. Enter a Label
  6. In the section Repositories, check the Read option.
  7. Click Create.
  8. Now copy the new app password.

More on App Passwords here.
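
With an app password, cloning over HTTPS works the same way as for GitHub (shown in the Pull section below); the workspace and repository names here are placeholders:

# clone a private Bitbucket repository with an app password (all values are placeholders)
git clone "https://username:app-password@bitbucket.org/your-workspace/your-repo.git"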

GitLab?

I have no experience using GitLab, but this approach should work there too.

Pull

We can use the special token to pull the source from source control:

git clone "username:[email protected]://github.com/KeesCBakker/synology-ci-cd-nodejs-demo.git"

Your token and username are saved in plain text with the repo. After we’ve pulled the repository, it can be turned into a running container by executing the script:
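
The remote URL, token included, ends up in the repository’s Git configuration, so you can check what exactly is stored:

# show the remote URL (the token is part of it)
git remote get-url origin

# or inspect the raw config file
cat .git/config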

bash run.sh

4. Scheduling on your Synology: the C in CI/CD

Now that we’ve set up our repository and downloaded it to our NAS, let’s automate the process by scheduling the run script to run every 5 minutes:

  1. Open the Control Panel and click Task Scheduler
  2. In the top bar, click Create > Scheduled Task > User-defined script
  3. Enter a name in the Task field, something like “CI_CD Synology NodeJs Demo”
  4. Click on the Schedule tab
  5. Select in Run on the following days the option Daily
  6. Set under Time the First run time to 00:00, the Frequency to Every 5 minutes and the Last run time to 23:55
  7. Click on the Task Settings tab
  8. Check Send run details by email
  9. Enter your email address
  10. Enter the following in the User-defined script field: bash /{path-to-your-projects-dir}/{name-of-your-repo}/run.sh
  11. Click the OK button
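
For the curious: DSM’s Task Scheduler is cron under the hood, and the generated entry in /etc/crontab will look roughly like this (the path is an example, use your own projects directory):

# run the pipeline every 5 minutes as root
*/5  *  *  *  *  root  bash /volume1/projects/synology-ci-cd-nodejs-demo/run.sh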

A new task is created. Select the task and hit Run. The task should now be triggered and send you an email with the result. Check whether the result is correct.

Now, you might only want to have an email if stuff fails. You can configure this on the Task Settings tab (check the Send run details only when the script terminates abnormally box).

And now… take a 🍺, because you’ve just set up a Docker CI/CD on your Synology NAS! “Proost!”, as we say in the Netherlands!

Conclusion

We’ve seen that it is easy to create a basic CI/CD on your Synology NAS using Git, Docker, Docker Compose and Cron (the system behind the scheduling). Any questions or troubles? Just post them under this article.

F.A.Q.

Sometimes you might run into something unexpected. Here is a list of stuff I ran into; it might help you:

Q: Do I need a container repository like Docker Hub?
A: No. The Docker images are built and cached on your Synology NAS. No image will leave your NAS.
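
You can verify this on the NAS itself; after a build the image simply shows up in the local cache:

# list the locally cached demo image
docker images | grep hello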

Q: Which branch is used? Can I change the branch?
A: In this case: a commit to master will trigger your CI/CD pipeline. You can easily switch by checking out a different branch: git checkout {name}. The script will only pull the changes for the branch it is on.
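
For example, to run the demo from a develop branch instead (the branch name is just an example):

# switch the checkout on the NAS to another branch and force a redeploy
git checkout develop
bash run.sh --force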

Q: I’m getting a Current status: 128 (Interrupted) without any other information in the mail from my scheduler. What’s wrong?
A: The script needs to be executed from the right location. Check if the following line is present in your run.sh script:

pushd "$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

Q: I’m getting a Bind for 0.0.0.0:3000 failed: port is already allocated error. Why?
A: There’s another container running on port 3000. You can check with docker ps | grep 3000 which container is running.

Q: I’ve made some changes, but they are not picked up by the container. Now what?
A: Do the following: git pull; bash run.sh -f.