My Synology disk crashed and so did my Docker setup. Basically, the CI/CD pipeline for my programs no longer existed. Let’s rethink the setup: I need something that is less complex and easier to restore after a crash. The result is what I would call “a poor man’s CI/CD”. It’s just Git, Docker, Docker Compose and Cron. It is easy to set up and it might be all you need.

  1. Intro
  2. The idea
  3. Demo
  4. (1) Setup Docker+Git on your Synology NAS
    1. Packages
    2. Projects directory
    3. SSH
  5. (2) Prepare your repository
    1. Build+test: Dockerfile
    2. Deploy: docker-compose.yaml
    3. Glue: run.sh
    4. --force
    5. --cleanup
    6. --fullcleanup
  6. (3) Git Tokens
    1. GitHub: Personal Access Token
    2. BitBucket: App Password
    3. GitLab?
    4. Pull
  7. (4) Scheduling on your Synology: the C in CI/CD
  8. Conclusion / notes
  9. F.A.Q.
  10. Improvements

The idea

We’ll create a scheduled bash script that will use Git to pull the source code to our Synology NAS. The script will build a Docker image and deploy it using Docker Compose.

Diagram of the setup: a scheduled bash script pulls the source code with Git, builds an image with Docker and deploys it with Docker Compose.

Demo

To show how it works, I’ve set up a public repository at GitHub: synology-ci-cd-nodejs-demo. It is a simple Node.js application that runs on port 3000 on your NAS and returns a Hello World message with the server time.

Let’s get it active on the NAS in a Docker container:

An overview of how the process works. Note: the cleanup step is not shown here.

I executed the following lines of code:

# clone the repository
git clone "https://github.com/KeesCBakker/synology-ci-cd-nodejs-demo.git"

# navigate to the created directory
cd synology-ci-cd-nodejs-demo
ls

# run the CI/CD of the container
bash run.sh

# check what's going on
curl "http://127.0.0.1:3000"

Let’s dive into the inner workings of this setup.

1. Setup Docker+Git on your Synology NAS

First, we need to make some changes to our Synology NAS setup.

Packages

Go to the Package Center of your Synology NAS and install the following packages:

  • Docker
  • Git Server (this provides the git command on the NAS)

Projects directory

We need a directory into which we will pull the repository. Open up the File Station and create a directory named projects somewhere. In my case it lives in the root folder of my volume.
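
If you prefer the command line, you could also create this directory over SSH once that is enabled in the next step; /volume1 is an assumption for the name of your volume:

# create the projects directory in the root of the volume
mkdir -p /volume1/projects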

SSH

Next, make sure you have SSH enabled:

  1. Open up Control Panel.
  2. Search for ssh and click Terminal & SNMP
  3. Click the Enable SSH service checkbox.
  4. Click the Apply button.

Now that SSH is enabled, we need to set up your profile in order for the git command to work. Open your profile in Nano with:

nano ~/.profile

Add the following line of code:

export PATH="$PATH:/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin:/usr/local/sbin:/usr/local/bin"

Press ctrl+x to exit the Nano editor. Choose y to save the file. Now exit your SSH session. Next time you SSH into your NAS, the profile is applied.
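
The next time you SSH in, you can quickly check that the tools resolve:

# both should now be found without typing the full path
which git
which docker
git --version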

2. Prepare your repository

This CI/CD method works with three files that need to be added to the repository. Together these files form the CI/CD pipeline.

Build+test: Dockerfile

The first file is the Dockerfile. It contains all the information to test, build and package your application into a production container. The demo Dockerfile uses a multi-stage build for the Node.js application.

To aid in clean-up I tag all the created images (test, build and production) with the same label. We can use this label to remove the dangling images later:

# mark it with a label, so we can remove dangling images
LABEL cicd="hello"
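
The real Dockerfile lives in the demo repository. As a rough sketch only (the base images, stage names, test command and start file below are assumptions, not the demo’s exact content), a multi-stage build with the label could look like this:

# build/test stage: install dependencies and run the tests
FROM node:lts AS build
LABEL cicd="hello"
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm test

# production stage: only ship what is needed to run the app
FROM node:lts-slim
LABEL cicd="hello"
WORKDIR /app
COPY --from=build /app .
EXPOSE 3000
CMD ["node", "index.js"]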

Deploy: docker-compose.yaml

The second file is the docker-compose.yaml. It stores everything needed to run the container on your NAS. It contains information on volumes that need to be mapped, ports that should be exposed and the name of the image. More on Docker Compose can be found here.

This is what I used for the demo file:

version: '3'
services:
  web:
    image: hello:latest
    restart: always
    expose:
      - 3000
    ports:
      - 3000:3000

The image here is tagged as hello and the service that will be started is called web. We’ll need these in the next step.
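
Once the pipeline has run, you can do a quick sanity check from the repository directory to see whether the web service is up:

# show the state of the web service defined in docker-compose.yaml
docker-compose ps web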

Glue: run.sh

The run.sh script glues everything together. This diagram shows what happens in this script:

Flow diagram showing the steps the run.sh script executes.

I’ve converted the diagram above into a bash script. We need the tag and service values from the previous step.

#!/bin/bash
tag="hello" # tag of your container
service="web" # docker-compose section to start

stop_timeout=10
need_build=false
need_start=false
need_cleanup=false
# default docker-compose container name: <project>_<service>_1
# (adjust if your compose project name differs from the tag)
full_docker_name="${tag}_${service}_1"
option1="$1"
option2="$2"
set -e;

function echo_title {
  echo ""
  echo "$1" | sed -r 's/./-/g'
  echo "$1"
  echo "$1" | sed -r 's/./-/g'
  echo ""
}

function has_option {
  if [ "$option1" == "$1" ] || [ "$option2" == "$1" ] ||
     [ "$option1" == "$2" ] || [ "$option2" == "$2" ] ; then
    echo "true"
  else
    echo "false"
  fi
}

# goto script directory
pushd "$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

if [ $(has_option "--force" "-f") == "true" ] ; then
  need_pull=true
else
  need_pull=$(git fetch --dry-run 2>&1)
fi

if [ -n "$need_pull" ] ; then
  echo_title "PULLING LATEST SOURCE CODE"
  git reset --hard
  git pull
  need_build=true
  git log --pretty=oneline -1
else
  image_exists=$(docker images | grep $tag || true)
  if [ -z "$image_exists" ] ; then
    need_build=true
  fi
fi

if [ "$need_build" = true ] ; then
  echo_title "BUILDING CONTAINER"
  docker build -t "$tag" .
  echo_title "STOPPING RUNNING CONTAINER"
  docker-compose stop -t $stop_timeout
  need_start=true
else
  is_running=$(docker ps | grep $full_docker_name || true)
  if [ -z "$is_running" ] ; then
    need_start=true
  fi
fi

if [ "$need_start" = true ] ; then
  echo_title "STARTING CONTAINER"
  docker-compose up -d $service
  printf "\nContainer is up and running.\n\n"
  need_cleanup=true
else
  echo "No changes found. Container is already running."
fi

if [ "$need_cleanup" = true ] ; then

  if [ $(has_option "--full_cleanup" "-fcu") == "true" ] ; then
    echo_title "CLEAN-UP"
    docker image prune --force
    printf "\nImages have been cleaned up. CI/CD finished.\n\n"
  elif [ $(has_option "--cleanup" "-cu") == "true" ] ; then
    echo_title "CLEAN-UP"
    docker image prune --force --filter "label=cicd=$tag"
    printf "\nImages have been cleaned up. CI/CD finished.\n\n"
  fi

fi

--force

So what about --force? You might want to change the run script itself and do a pull to get the changes in. If you then run the script, it will think nothing has changed (you just pulled the source) and skip the rebuild. To get around this, just do a bash run.sh --force and a rebuild and redeploy will be enforced.

--cleanup

Docker will create many image layers and cache them all. This might clutter up your system. If you want to remove the cached images, use the bash run.sh --cleanup option. In the example code this cleans up 12.93 MB. Note: this will make your builds a bit slower, as no cache is used.

--fullcleanup

If you want to clean up even more images, use bash run.sh --fullcleanup. This will make your CI/CD process a lot slower, as even more images are removed. In the example code this cleaned up 955.6 MB.
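
Not sure whether you need a cleanup at all? Docker can report how much space your images, containers and build cache are using:

# disk usage of images, containers, local volumes and build cache
docker system df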

3. Git Tokens

The demo shows how to use a public repository. Your personal repositories will not be publicly accessible, so you’ll have to provide some credentials to access them. You could set up a secure SSH connection between your NAS and your source control provider. I went the easy route and used a simple HTTPS clone with a special token.

GitHub: Personal Access Token

The special token we’ll need is called a Personal Access Token in GitHub. To get one, do this:

  1. Click on your avatar and select Settings
  2. Click on Developer Settings
  3. Click on Personal access tokens
  4. Click on Generate a new token
  5. Enter a Note
  6. Scroll down and click Generate token
  7. Now copy the token.

More on Personal Access Tokens here.

BitBucket: App Password

The special token we’ll need is called an App Password in BitBucket. You can only use them programmatically; you can’t log in with them. To get one, do this:

  1. Login to https://bitbucket.org/
  2. Click on your avatar and select Bitbucket Settings
  3. Under Access Management, click on App passwords
  4. Click Create app password
  5. Enter a Label
  6. In the section Repositories, check the Read option.
  7. Click Create.
  8. Now copy the new app password.

More on App Passwords here.

GitLab?

I have no experience with GitLab, but this approach should work there as well.

Pull

We can use the special token to pull the source from source control:

git clone "username:[email protected]://github.com/KeesCBakker/synology-ci-cd-nodejs-demo.git"

Your username and token are saved in plain text with the repo. After we’ve pulled the repository, we can turn it into a running container by executing the script:

bash run.sh
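
If you ever rotate the token, you can update the credentials embedded in the clone without cloning again; username and newtoken below are placeholders:

# point the existing clone at a URL with the new token
git remote set-url origin "https://username:newtoken@github.com/KeesCBakker/synology-ci-cd-nodejs-demo.git"

# verify which remote URL is stored
git remote -v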

4. Scheduling on your Synology: the C in CI/CD

Now that we’ve set up our repository and cloned it to our NAS, let’s automate the process by scheduling the run script every 5 minutes:

  1. Open the Control Panel
  2. In the System section, click Task Scheduler
  3. In the top bar, click Create > Scheduled Task > User-defined script
  4. Enter a name in the Task field, something like “CI_CD Synology NodeJs Demo”
  5. Click on the Schedule tab
  6. Under Run on the following days, select the Daily option
  7. Under Time, set the First run time to 00:00, the Frequency to Every 5 minutes and the Last run time to 23:55
  8. Click on the Task Settings tab
  9. Check Send run details by email
  10. Enter your email address
  11. Enter the following in the User-defined script field: bash /{path-to-your-projects-dir}/{name-of-your-repo}/run.sh (see the example after this list)
    Note: you can specify --fullcleanup if you want to clean up all dangling images.
  12. Click the OK button
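
As an example, a filled-in user-defined script line could look like this; the /volume1 path and repository name are assumptions for your setup:

# run the pipeline and prune dangling images labeled for this project
bash /volume1/projects/synology-ci-cd-nodejs-demo/run.sh --cleanup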

A new task is created. Select the task and hit Run. The task should now be triggered and send you an email with the result. Check that the result is correct.

Now, you might only want to have an email if stuff fails. You can configure this on the Task Settings tab (check the Send run details only when the script terminates abnormally box).

And now… take a 🍻, because you’ve just setup a Docker CI/CD on your Synology NAS! “Proost!”, as we say in the Netherlands!

Conclusion / notes

We’ve seen that it is easy to create a basic CI/CD on your Synology NAS using Git, Docker, Docker Compose and Cron (the system behind the scheduling).

To be honest, I don’t use the scheduling in my setup. My software does not change often, so for me it is enough to SSH from Visual Studio Code into my NAS and trigger the run script to deploy my code. It saves some CPU cycles, as the Task Scheduler does not have to poll my code every 5 minutes. An alternative could be a webhook, but that would make the integration more complex.
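
Such a manual trigger from a terminal could look like this; the user, host and path are assumptions, and your SSH user needs permission to run docker:

# pull, rebuild and redeploy on the NAS in one go
ssh youruser@your-nas "bash /volume1/projects/synology-ci-cd-nodejs-demo/run.sh --force"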

Any questions or troubles? Just post them in the comments under this article.

F.A.Q.

Sometimes you might run into something unexpected. Here is a list of the things I ran into; it might help you:

  • Do I need a container repository like Docker Hub?
    No. The docker images are built and cached on your Synology NAS. No container will leave your NAS.
  • Which branch is used? Can I change the branch?
    In this case, a commit to master will trigger your CI/CD pipeline. You can select a different branch by checking it out: git checkout {name}. The script will only pull changes for the branch it is on.
  • I’m getting a Current status: 128 (Interrupted) without any other information in the mail from my scheduler. What’s wrong?
    The script needs to be executed from the right location. Check if the following line is present in your run.sh script:
pushd "$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
  • I’m getting a Bind for 0.0.0.0:3000 failed: port is already allocated error. Why?
    There’s another container running on port 3000. You can check with docker ps | grep 3000 which container is running.
  • I’ve made some changes, but they are not picked up by the container. Now what?
    Do the following: git pull; bash run.sh --force
  • My NAS is filling up, what can I do?
    Do a manual clean-up, like this: docker image prune --force --all. More info can be found here.

Improvements

2020-10-11: Now shows the last commit message after git pull
2020-10-10: Added some notes on the Task Scheduler and fixed the 🍻 emoji.
2020-09-24: Added the --cleanup and --fullcleanup options to clean up your NAS. Also added a F.A.Q. entry for this.