My Synology disk crashed and so did my Docker setup. Basically, the CI/CD pipeline for my programs no longer existed. Let's rethink the setup: I need something that is less complex and easier to restore after a crash. The result is what I'd call "a poor man's CI/CD": just Git, Docker, Docker Compose and Cron. It is easy to set up and it might be all you need.
- Intro
- The idea
- Demo
- (1.) Setup Docker+Git on your Synology NAS
- (2.) Prepare your repository
- (3.) Git Tokens
- (4.) Scheduling on your Synology: the C in CI/CD
- Conclusion / notes
- F.A.Q.
- Changelog
- Comments
The idea
We'll create a scheduled bash script that will use Git to pull the source code to our Synology NAS. The script will build a Docker image and deploy it using Docker Compose.
Demo
To show how it works, I've set up a public repository at GitHub: synology-ci-cd-nodejs-demo. It is a simple Node.js application that runs on port 3000 on your NAS and returns a Hello World message with the server time.
Let's get it active on the NAS in a Docker container:

I executed the following lines of code:
# clone the repository
git clone "https://github.com/KeesCBakker/synology-ci-cd-nodejs-demo.git"
# navigate to the created directory
cd synology-ci-cd-nodejs-demo
ls
# run the CI/CD of the container
bash run.sh
# check what's going on
curl "http://127.0.0.1:3000"
Let's dive into the inner workings of this setup.
1. Setup Docker+Git on your Synology NAS
First, we need to make some changes to our Synology NAS setup.
Packages
Go to the Package Center of your Synology NAS and install the following packages:
- Docker
- Git Server (don't worry, we're not going to host git)
- Nano (optional, if you don't want to use Vim)
Projects directory
We need a directory in which we will pull the repository. Open up the File Station and create a directory named projects somewhere. In my case it runs on the root folder of my drive.
SSH
Next, make sure you have SSH enabled:
- Open up Control Panel.
- Search for ssh and click Terminal & SNMP.
- Click the Enable SSH service checkbox.
- Click the Apply button.
Now that SSH is enabled, we need to set up your profile in order for the git command to work. Open your profile in Nano with:
nano ~/.profile
Add the following line of code:
export PATH="$PATH:/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin:/usr/local/sbin:/usr/local/bin"
Press ctrl+x to exit the Nano editor. Choose y to save the file. Now exit your SSH session. Next time you SSH into your NAS, the profile is applied.
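As a side note, the export above blindly appends the directories every time the profile is sourced. A slightly more defensive variant (my own sketch, not part of the original setup) guards against adding them twice:

```shell
# Append the Synology binary paths to PATH, but only if they are not there yet.
# The guard is my own addition; the path list itself is from the article.
SYNO_PATHS="/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin:/usr/local/sbin:/usr/local/bin"
case ":$PATH:" in
  *":/usr/syno/bin:"*) : ;;                     # already present, do nothing
  *) export PATH="$PATH:$SYNO_PATHS" ;;
esac
# Verify the Synology directories are now on the PATH
echo "$PATH" | grep -q "/usr/syno/bin" && echo "syno paths active"
```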
2. Prepare your repository
This CI/CD method works with 3 files that need to be added to the repository. Together these files form the CI/CD pipeline.
Build+test: Dockerfile
The first file is the Dockerfile. It contains all the information to test, build and package your application into a production container. The demo Dockerfile uses a multi-stage build for the Node.js application:
# test using the latest node container
FROM node:latest AS ci
# mark it with a label, so we can remove dangling images
LABEL cicd="hello"
WORKDIR /app
COPY package.json .
COPY package-lock.json .
COPY lib ./lib
COPY test ./test
RUN npm ci --development
# test
RUN npm test
# get production modules
RUN rm -rf node_modules && npm ci --production
# This is our runtime container that will end up
# running on the device.
FROM node:alpine
# mark it with a label, so we can remove dangling images
LABEL cicd="hello"
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --production
COPY lib ./lib
# Launch our App.
CMD ["node", "lib/app.js"]
To aid in clean-up, I mark all created images (test, build and production) with the same label. We use this label to remove dangling images:
# mark it with a label, so we can remove dangling images
LABEL cicd="hello"
Deploy: docker-compose.yaml
The second file is the docker-compose.yaml. It stores everything needed to run the container on your NAS. It contains information on volumes that need to be mapped, ports that should be exposed and the name of the image. More on Docker Compose can be found here.
This is the docker-compose.yaml file I use in the demo:
version: "3.9"
services:
  web:
    restart: always
    expose:
      - 3000
    ports:
      - 3000:3000
    image: hello:latest
    build: .
Glue: run.sh
The run.sh script glues everything together. This diagram shows what happens in this script:
I've converted the diagram above into a bash script. We need the tag and service values from the previous steps.
#!/bin/bash
export DOCKER_SCAN_SUGGEST=false

stop_timeout=10
need_build=false
need_start=false
option1="$1"
option2="$2"

set -e

function echo_title {
    line=$(echo "$1" | sed -r 's/./-/g')
    printf "\n$line\n$1\n$line\n\n"
}

function has_option {
    if [ "$option1" == "$1" ] || [ "$option2" == "$1" ] ||
       [ "$option1" == "$2" ] || [ "$option2" == "$2" ]; then
        echo "true"
    else
        echo "false"
    fi
}

# goto script directory
pushd "$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" > /dev/null

tag=$(grep -oP 'cicd="\K\w+' Dockerfile | tail -1)
if [ -z "$tag" ]; then
    printf "\nNo cicd LABEL found in Dockerfile.\n\n"
    exit 1
fi

if [ "$(has_option "--force" "-f")" == "true" ]; then
    need_pull=true
else
    need_pull=$(git fetch --dry-run 2>&1)
fi

if [ -n "$need_pull" ]; then
    echo_title "PULLING LATEST SOURCE CODE"
    git reset --hard
    git pull
    git log --pretty=oneline -1
    need_build=true
elif [ -z "$(docker images | grep "$tag" || true)" ]; then
    need_build=true
fi

status=$(docker-compose ps --status running -q)

if [ "$need_build" == true ]; then
    if [ ! -z "$status" ]; then
        echo_title "STOPPING RUNNING CONTAINER"
        docker-compose stop -t $stop_timeout
    fi
    need_start=true
elif [ -z "$status" ]; then
    need_start=true
fi

if [ "$need_start" == false ]; then
    printf "\nNo changes found. Container is already running.\n"
elif [ "$need_build" == true ]; then
    echo_title "BUILDING & STARTING CONTAINER"
    docker-compose up -d --build
else
    echo_title "STARTING CONTAINER"
    docker-compose up -d
fi

if [ "$(has_option "--full_cleanup" "-fcu")" == "true" ]; then
    echo_title "FULL CLEAN-UP"
    docker image prune --force
elif [ "$(has_option "--cleanup" "-cu")" == "true" ]; then
    echo_title "CLEAN-UP"
    docker image prune --force --filter "label=cicd=$tag"
fi

echo ""
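A quick word on the tag extraction: it relies on grep's PCRE mode (-P), where \K discards the already-matched prefix, leaving only the label value. A throwaway demonstration (the file name and label value are just examples):

```shell
# Recreate the extraction run.sh performs, against a minimal two-line Dockerfile
printf 'FROM node:alpine\nLABEL cicd="hello"\n' > /tmp/Dockerfile.demo
tag=$(grep -oP 'cicd="\K\w+' /tmp/Dockerfile.demo | tail -1)
echo "$tag"   # the extracted label value: hello
```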
--force
So what about --force? Suppose you change the run script itself and do a pull to get those changes in. If you then run the script, it will think nothing has changed (you just pulled the source). To circumvent this, run bash run.sh --force and a rebuild and redeploy will be forced.
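The change detection that --force bypasses is the git fetch --dry-run line: it prints pending ref updates when the remote has new commits, and nothing when you're up to date. Here is a self-contained sketch with throwaway local repositories (all names are made up):

```shell
# Simulate run.sh's change detection with two clones of a local bare repository.
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"

git clone -q "$tmp/origin.git" "$tmp/nas" 2>/dev/null     # the "NAS" clone
(cd "$tmp/nas" \
  && git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "first" \
  && git push -q origin HEAD)

git clone -q "$tmp/origin.git" "$tmp/dev" 2>/dev/null     # a "developer" clone
(cd "$tmp/dev" \
  && git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "second" \
  && git push -q origin HEAD)

# Back on the "NAS": the dry-run fetch now reports the pending update
need_pull=$(cd "$tmp/nas" && git fetch --dry-run 2>&1)
[ -n "$need_pull" ] && echo "changes detected"
```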
--cleanup
Docker creates many layers and caches them all, which can clutter up your system. If you want to remove the cached images, use the bash run.sh --cleanup option. In the example code this cleans up 12.93MB. Note: this will make your setup a bit slower, as no cache is used.
--full_cleanup
If you want to clean up even more images, use bash run.sh --full_cleanup. This will make your CI/CD process a lot slower, as even more images are removed. In our example code this cleaned up 955.6MB.
3. Git Tokens
The demo shows how to use a public repository. Your personal repositories will not be publicly accessible, so you'll have to provide credentials to access them. You could set up a secure SSH connection between your NAS and your source control provider; I went the easy route and used a simple HTTPS clone with a special token.
GitHub: Personal Access Token
The special token we'll need is called a Personal Access Token in GitHub. To get one, do this:
- Click on your avatar and select Settings
- Click on Developer Settings
- Click on Personal access tokens
- Click on Generate a new token
- Enter a Note
- Scroll down and click Generate token
- Now copy the token.
More on Personal Access Tokens here.
BitBucket: App Password
The special token we'll need is called an App Password in BitBucket. You can only use them programmatically; you can't log in with them. To get one, do this:
- Login to https://bitbucket.org/
- Click on your avatar and select Bitbucket Settings
- Under Access Management, click on App passwords
- Click Create app password
- Enter a Label
- In the section Repositories, check the Read option.
- Click Create.
- Now copy the new app password.
GitLab?
I have no experience with GitLab, but its Personal Access Tokens should work the same way.
Pull
We can use the special token to pull the source from source control:
git clone "https://username:token@github.com/KeesCBakker/synology-ci-cd-nodejs-demo.git"
Note that your username and token are saved in plain text with the repo. After we've pulled the repository, we can turn it into a running container by executing the script:
bash run.sh
This will execute the script and run your container 🥳.
Update Token
If you need to update your token, you can do the following:
git remote set-url origin "https://username:token@github.com/KeesCBakker/synology-ci-cd-nodejs-demo.git"
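If you want to see what set-url does before touching your real checkout, here's a dry run against a throwaway repository (the user name and tokens are placeholders):

```shell
# Rotate the token embedded in the remote URL of a dummy repository
tmp=$(mktemp -d)
git init -q "$tmp/demo" && cd "$tmp/demo"
git remote add origin "https://username:OLD_TOKEN@github.com/KeesCBakker/synology-ci-cd-nodejs-demo.git"
git remote set-url origin "https://username:NEW_TOKEN@github.com/KeesCBakker/synology-ci-cd-nodejs-demo.git"
git remote get-url origin   # now reports the URL with the new token
```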
4. Scheduling on your Synology: the C in CI/CD
Now that we've set up our repository and downloaded it to our NAS, let's automate the process by scheduling the run script to execute every 5 minutes:
- Open the Control Panel
- Click in the section System on Task Scheduler
- Click in the top bar Create > Scheduled Task > User-defined script
- Enter a name in the Task field, something like "CI_CD Synology NodeJs Demo"
- Click on the Schedule tab
- Select in Run on the following days the option Daily
- Set under Time the First run time to 00:00, the Frequency to Every 5 minutes and the Last run time to 23:55
- Click on the Task Settings tab
- Check Send run details by email
- Enter your email address
- Enter the following into User-defined script:
bash /{path-to-your-projects-dir}/{name-of-your-repo}/run.sh
Note: you can specify --full_cleanup if you want to clean up all dangling images.
- Click the OK button
A new task is created. Select the task and hit Run. The task should now be triggered and send you an email with the result. Check that the result is correct.
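For reference, what the GUI steps above produce is essentially a cron entry (DSM manages these in /etc/crontab; don't edit that file by hand, use the GUI). A hand-written equivalent would look like this, with a placeholder path:

```shell
# Crontab-style equivalent of the scheduled task above, shown as a comment
# because the Task Scheduler manages the real entry itself:
#   */5 * * * * root bash /volume1/projects/{name-of-your-repo}/run.sh
```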
Now, you might only want to have an email if stuff fails. You can configure this on the Task Settings tab (check the Send run details only when the script terminates abnormally box).
And now... take a 🍻, because you've just set up a Docker CI/CD on your Synology NAS! "Proost!", as we say in the Netherlands!
Conclusion / notes
We've seen that it is easy to create a basic CI/CD on your Synology NAS using Git, Docker, Docker Compose and Cron (the system behind the scheduling).
To be honest, I don't use the scheduling in my setup. My software does not change often, so for me it is enough to SSH from Visual Studio Code into my NAS and trigger the run script to deploy my code. It saves some CPU cycles, as the Task Scheduler does not have to poll my code every 5 minutes. An alternative could be a webhook, but that would make the integration more complex.
Any questions or troubles? Just post them in the comments under this article.
F.A.Q.
Sometimes you might run into something unexpected. Here is a list of issues I ran into; it might help you:
- Do I need a container repository like Docker Hub?
No. The Docker images are built and cached on your Synology NAS. No container will leave your NAS.
- Which branch is used? Can I change the branch?
In this case a commit to master will trigger your CI/CD pipeline. You can easily select a different branch by checking it out: git checkout {name}. The script will only pull the changes for the branch it is on.
- I'm getting a Current status: 128 (Interrupted) without any other information in the mail from my scheduler. What's wrong?
The script needs to be executed from the right location. Check if the following line is present in your run.sh script:
pushd "$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
- I'm getting a Bind for 0.0.0.0:3000 failed: port is already allocated error. Why?
There's another container running on port 3000. You can check which container is running with docker ps | grep 3000.
- I've made some changes, but they are not picked up by the container. Now what?
Do the following: git pull; bash run.sh --force
- My NAS is filling up, what can I do?
Do a manual clean-up, like this: docker image prune --force --all
More info can be found here.
Changelog
2022-01-04: Added the Update Token section.
2021-12-14: The run.sh script now uses the docker-compose API for the build action as well. The docker-compose.yaml now uses the Dockerfile as build context.
2021-08-25: The run.sh script will now launch the entire docker-compose stack and not only the individual service. The Dockerfile will not reuse the node_modules anymore.
2021-06-26: Added code for Dockerfile.
2020-10-31: Silenced the output of the pushd in the run.sh script. 🤫
2020-10-11: Now shows the last commit message after git pull.
2020-10-10: Added some notes on the Task Scheduler and fixed the 🍻 emoji.
2020-09-24: Added the --cleanup and --full_cleanup options to clean up your NAS. Also added a FAQ for this.
2019-11-15: Initial article.
Comments
Wow, this is amazing, it works! Thanks for the hard work!