[This is part 8/8 of the GitLab Continuous Integration series]
Objective
In testing microservices I imagine three incremental scenarios:
- The first runs on a developer's machine: mostly unit tests with mock-ups that can run from the shell or an IDE.
- The next one adds a few services, like a database. It might still be local (the developer's machine).
- Finally, a full app test with deployment to QA servers or a Kubernetes cluster.
This post is about scenario #2. We will use a NodeJS app that runs as a web server. The steps to automate are:
- When the App changes (git push), the GitLab Server will build an image.
- It will push it to our local (GitLab) container registry.
- It will create a 'kind' Kubernetes cluster on the local PC. I use kind, but there are also Minikube, k3d and MicroK8s.
- Deploy the application and configure an Ingress controller so we can access it on localhost port 80.
Notes:
- Scenario #3 can be implemented using the previous IaC post and pipeline tags for a deployment branch. There are lots of tutorials; check the reference section at the end and the last two posts of this series.
- You'll need on your PC: Python 3 (Anaconda), Ansible, kind and a GitLab Runner.
I. Prepare your PC
The preferred method Ansible uses to access computers is SSH, so we need an SSH server on our PC.
Install OpenSSH:
- Open a terminal and:
# Install
$ sudo apt-get install openssh-server
# Enable service
$ sudo systemctl enable ssh
- Make sure you complete a login into your own PC at least once, so the credential files get generated:
$ ssh your-username@your.local.ip
II. Project Set Up
We'll perform two steps on our GitLab Server (or the code repository of your preference):
Step 2.1 Create the repository
- Access our GitLab Server. In the previous posts of this series the example set up uses:
http://gitlab.example.com
and logs in as devguy/Clave123.
- Click create project (the plus sign at the top) and select 'blank project'. I'm naming mine CICDtest with public access. Click create.
- Get the URL to access the project. In our case the string can be copied from the project page:
http://devguy:Clave123@gitlab.example.com/devguy/CICDtest.git
Step 2.2 Create Server variables
We will use a few CI/CD variables for our Ansible scripts. To create them:
- Access our GitLab Server
- Go to Project > CICDtest, then Settings > CI/CD, and expand Variables.
- Enter the values below.
- Note: some variables can't be masked because their values don't meet the regular expression requirements (see GitLab's documentation about masked variables).
- Click the 'Add Variable' button.
The variables used in our script hold values we already know or that you'll get from a terminal on your PC:
- Create two variables with the GitLab project's login 'user' and 'password' (replace with your values):
- Key: USERNAME
Value: devguy
Type: Variable
Environment scope: All (default)
Protect variable: Checked
Mask variable: Unchecked
- Key: USERPASSWORD
Value: Clave123
Environment scope: All
Protect variable: Unchecked
Mask variable: Unchecked
- Create two variables for your user name and IP on the PC. Get the IP from the command:
ifconfig
- Key: SERVER_USER
Value: <your user name>
Environment scope: All
Protect variable: Unchecked
Mask variable: Unchecked
- Key: SERVER_IP
Value: <your.ip.value>
Environment scope: All
Protect variable: Unchecked
Mask variable: Unchecked
- For the SSH login
You are going to store the SSH private key in a GitLab CI/CD variable, so that the pipeline can use the key to log in to the PC. To get the SSH private key use:
$ cat ~/.ssh/id_rsa
Copy the complete output to your clipboard:
-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----
- Key: SSH_PRIVATE_KEY
Value: Paste your SSH private key from your clipboard (including a line break at the end).
Environment scope: All
Type: Variable (it can also be 'File')
Protect variable: Checked
Mask variable: Unchecked
This is the key the pipeline reads as $SSH_PRIVATE_KEY in its 'before_script'. If you select type 'File' instead, a file containing the private key will be created on the runner for each CI/CD job and its path (not its contents) will be stored in the $SSH_PRIVATE_KEY environment variable, so the 'before_script' has to load it a little differently (see the sketch below).
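If you go with the 'File' type, a minimal sketch of how the job's 'before_script' would load the key (the rest of the job stays as shown later in the pipeline):
before_script:
  - eval $(ssh-agent -s)
  # With a File-type variable, $SSH_PRIVATE_KEY holds the path of a temporary
  # file on the runner, so we tighten its permissions and pass the path to
  # ssh-add instead of piping the key's contents.
  - chmod 400 "$SSH_PRIVATE_KEY"
  - ssh-add "$SSH_PRIVATE_KEY"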
The next one is a long string, as it holds the host keys of the server(s) you SSH into:
- Key: SSH_KNOWN_HOSTS
Value: can be either the content of the known_hosts file or the output of ssh-keyscan:
# 1. content of the file
$ cat ~/.ssh/known_hosts
# 2. the command ssh-keyscan
$ ssh-keyscan <IPs to log into>
<<output>>
Environment scope: All
Protect variable: Checked
Mask variable: Unchecked
III. Get a copy of the code
Three steps are needed to get the code ready:
Step 3.1 Get a copy of the example
Create a folder on the PC (use your own path; it doesn't affect the setup, just substitute it in a couple of commands):
$ mkdir ~/Desarrollo/ci_cd/Ansible/106_Complete_CI-CD
Get the 'CICDtest.zip' file into that folder and unzip it.
Step 3.2 Make it into a local repository
Get into the working folder
$ cd ~/Desarrollo/ci_cd/Ansible/106_Complete_CI-CD/CICDtest
and initialize it as a git repository
$ git init
$ git add .
If, like me (still in December 2021), you get a 'master' branch from the git initialization, you might want to rename it to the more sensible 'main':
# Check the branch name
$ git branch -a
* master
# You can rename it with
$ git branch -m master main
# Check the name again
$ git branch -a
* main
$ git status
On branch main
nothing to commit, working tree clean
Set the remote GitLab repository and the git credentials. I use my URL here (from step 2.1, with the user and password), together with 'devguy' and 'devguy@gitlab.example.com':
$ git remote add origin http://devguy:Clave123@gitlab.example.com/devguy/CICDtest.git
$ git config --global user.name "devguy"
$ git config --global user.email "devguy@gitlab.example.com"
Step 3.3 Modify values to match your PC
- Update the 'IP' and 'username' in hosts.ini:
vi hosts.ini
192.168.1.120 ansible_user=fjmartinez
- In the Kubernetes deployment: you might need to modify
./files/deployment_cluster.yaml
as it references the image by its registry/project path (see the sketch below).
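For orientation, here is a minimal sketch of what deployment_cluster.yaml could contain. The object names, labels and namespace are illustrative assumptions (the file in the zip may differ); the image line is the one that has to match your registry/project path:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javatest
  namespace: testing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: javatest
  template:
    metadata:
      labels:
        app: javatest
    spec:
      containers:
        - name: javatest
          # The reference you may need to change to your own registry/project path
          image: registry.example.com:5050/devguy/cicdtest/javatest:v1
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: javatest-service
  namespace: testing
spec:
  selector:
    app: javatest
  ports:
    - port: 8080
      targetPort: 8080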
IV. Review the Code
About organization
- App: The app is just a single file. It's a NodeJS script that starts a server listening on port 8080 (exposed through the Ingress on port 80); when a request is received at '/', it prints os.hostname().
- files: Contains the Ansible playbooks (yml), the yaml manifests for the cluster deployment (Kubernetes deployment files) and the kind cluster config.
The pipeline is explained in 3 parts, in the order they were tested. Then they were merged and some improvements were made.
Part 4.1. Build the container
The first part of the script is the 'build' stage. It uses a Docker-in-Docker service so the job can build an image, tagged with the complete path of our GitLab project registry.
After a successful build we push the image to the GitLab registry. For that we provide the credentials in a 'before_script' command. Notice the use of our first variables ($USERNAME and $USERPASSWORD) so we avoid writing them in the script, where they would end up in the logs.
.gitlab-ci.yml
[Part 1]
stages:
  - build
variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
build:
  stage: build
  image: docker:latest
  services:
    - name: docker:dind
      alias: docker
  before_script:
    - echo $USERPASSWORD | docker login registry.example.com:5050 -u $USERNAME --password-stdin
  script:
    - docker build -t registry.example.com:5050/devguy/cicdtest/javatest:v1 app
    - docker push registry.example.com:5050/devguy/cicdtest/javatest:v1
Dockerfile
FROM node:7
ADD app.js /app.js
ENTRYPOINT ["node", "app.js"]
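As a side note: if the Container Registry is enabled on your GitLab instance, the hard-coded registry paths and credentials in the build job could be replaced with GitLab's predefined CI/CD variables. A sketch of that variant (not the script used in this series):
build:
  stage: build
  image: docker:latest
  services:
    - name: docker:dind
      alias: docker
  before_script:
    # CI_REGISTRY, CI_REGISTRY_USER and CI_REGISTRY_PASSWORD are provided by GitLab
    - echo "$CI_REGISTRY_PASSWORD" | docker login "$CI_REGISTRY" -u "$CI_REGISTRY_USER" --password-stdin
  script:
    # CI_REGISTRY_IMAGE points at this project's container registry path
    - docker build -t "$CI_REGISTRY_IMAGE/javatest:v1" app
    - docker push "$CI_REGISTRY_IMAGE/javatest:v1"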
Part 4.2. Create the cluster
The next step in the pipeline is the 'test' stage. We need a Kubernetes cluster. In some examples it already exists, since creating one takes time; if you can keep one running all the time, remove this job (part 1 of this step). But I chose to illustrate IaC, so we will create one.
But testing on our own PC ended up being more complicated than setting things up on other machines:
- Caveat #1: Local SSH Access
Running Ansible from a Docker runner works using a local connection. But a non-login, non-interactive SSH session does not run the .bashrc setup script, so no PATH is set and none of the software (kind, ansible-playbook, python) is found.
To work around that, the 'before_script' copies the SSH data the runner needs to SSH into the PC, and the ssh command exports the PATH explicitly. Tip: check your machine's application paths with the commands 'which kind', 'which python3', etc.
The part of this approach that is not cool is that the YAML configuration file for the cluster is read from the PC, not from the repository. I tried different combinations of inventory/users/connection but to no avail. One option that should work is using a service with an image that contains the software we need; I'll update this post when I do.
- Caveat #2: Repository authorization
Kubernetes can provide credentials to access a private registry (like our GitLab registry) using a docker-registry secret, referenced in the deployment spec as 'imagePullSecrets:' (see the sketch after the kind config below).
But I am using an 'insecure' (HTTP) server (which has proven to be more complicated than a secure one) to host our project's images, and that approach won't work in kind. We need to modify the kind cluster configuration to patch containerd with a registry mirror. From this:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
…
To this (in the ./files/kind-config.yaml file):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.example.com:5050"]
      endpoint = ["http://registry.example.com:5050"]
...
If you are using Minikube or k3s, a similar configuration can be made.
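For reference, the standard imagePullSecrets approach mentioned in Caveat #2 looks roughly like this (a sketch: 'regcred' is an assumed secret name, created beforehand with 'kubectl create secret docker-registry'):
# Excerpt of a pod template spec that references registry credentials
spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - name: javatest
      image: registry.example.com:5050/devguy/cicdtest/javatest:v1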
The part of the script that creates the KinD cluster is:
.gitlab-ci.yml
[Part 2]
stages:
  - test
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
cluster:
  stage: test
  script:
    - ssh $SERVER_USER@$SERVER_IP 'export PATH="$HOME/go:$HOME/go/bin/:/usr/local/go/bin:$HOME/anaconda3/bin:$PATH"; ~/anaconda3/bin/ansible-playbook -i localhost, ~/Desarrollo/ci_cd/Ansible/106_Complete_CI-CD/CICDtest/files/create_kind_cluster.yml'
Note: the 'localhost,' notation (note the trailing comma) makes Ansible treat it as an inline list of host names instead of an inventory file.
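The create_kind_cluster.yml playbook itself is not reproduced in this post. A minimal sketch of what it could look like (the cluster name matches the one deleted in the Clean Up section; everything else is an assumption):
---
# Creates a local kind cluster using the config that patches containerd
# with the registry mirror from Part 4.2
- name: Create the kind cluster for CI
  hosts: localhost
  connection: local
  tasks:
    - name: Create the cluster from the config file in ./files
      command: >
        kind create cluster
        --name ci-cluster
        --config {{ playbook_dir }}/kind-config.yaml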
Part 4.3. Deploy the App
The next step is the deployment of the app using the Ansible Kubernetes collection. Using standard kubectl deployment files we will: create a namespace (mandatory for the collection), start an Ingress controller, start the App deployment (a yaml that creates a pod and a service), wait until the Ingress controller is ready to serve requests, and lastly deploy the routing rule. Note that if you change the App, you might just need to update 'deployment_cluster.yaml'.
This is the complete file:
deployment_in_kind.yml
---
# This playbook deploys an App from a local repository into a Kubernetes cluster
- name: Ansible App Deployment
  hosts: localhost
  collections:
    - kubernetes.core
  tasks:
    - name: Create a namespace (requirement for the collection)
      k8s:
        name: testing
        api_version: v1
        kind: Namespace
        state: present
    - name: Deploy the Ingress controller
      k8s:
        state: present
        kubeconfig: "~/.kube/config"
        src: "nginx-deployment.yaml"
    - name: Deploy the App
      k8s:
        state: present
        kubeconfig: "~/.kube/config"
        src: "deployment_cluster.yaml"
    - name: Wait until the Ingress controller is ready to serve requests
      k8s_info:
        kind: Pod
        wait: yes
        wait_timeout: 500
        wait_condition:
          type: Ready
          status: "True"
        namespace: ingress-nginx
        label_selectors:
          - app.kubernetes.io/component=controller
    - name: Deploy the routing rule
      k8s:
        state: present
        kubeconfig: "~/.kube/config"
        src: "using_ingress.yaml"
And the third part of the pipeline code is:
.gitlab-ci.yml
[Part 3]
stages:
  - deploy
deploy_app:
  stage: deploy
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - ssh $SERVER_USER@$SERVER_IP 'export PATH="$HOME/go:$HOME/go/bin/:/usr/local/go/bin:$HOME/anaconda3/bin:$PATH"; cd ~/Desarrollo/ci_cd/Ansible/106_Complete_CI-CD/CICDtest/; ~/anaconda3/bin/ansible-playbook -i hosts.ini ./files/deployment_in_kind.yml'
Note: If you want detailed logging from Ansible, add the -v, -vv or -vvv parameter to the ansible-playbook command, for example: ansible-playbook -vvv -i hosts.ini ... If you want debug logging of the pipeline script itself, set the CI/CD variable CI_DEBUG_TRACE: "true"
In this command the inventory is the hosts.ini file, which lists an IP, a user and some variables:
192.168.1.120 ansible_user=fjmartinez
[all:vars]
ansible_python_interpreter=/usr/bin/python3.8
ansible_connection=local
The Complete Pipeline
If we assemble the parts, the complete .gitlab-ci.yml is:
stages:
  - build
  - test
variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
build:
  stage: build
  image: docker:latest
  services:
    - name: docker:dind
      alias: docker
  before_script:
    - echo $USERPASSWORD | docker login registry.example.com:5050 -u $USERNAME --password-stdin
  script:
    - docker build -t registry.example.com:5050/devguy/cicdtest/javatest:v1 app
    - docker push registry.example.com:5050/devguy/cicdtest/javatest:v1
cluster:
  stage: test
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - ssh $SERVER_USER@$SERVER_IP 'export PATH="$HOME/go:$HOME/go/bin/:/usr/local/go/bin:$HOME/anaconda3/bin:$PATH"; ~/anaconda3/bin/ansible-playbook -i localhost, ~/Desarrollo/ci_cd/Ansible/106_Complete_CI-CD/CICDtest/files/create_kind_cluster.yml'
    - ssh $SERVER_USER@$SERVER_IP 'export PATH="$HOME/go:$HOME/go/bin/:/usr/local/go/bin:$HOME/anaconda3/bin:$PATH"; cd ~/Desarrollo/ci_cd/Ansible/106_Complete_CI-CD/CICDtest/; ~/anaconda3/bin/ansible-playbook -i hosts.ini ./files/deployment_in_kind.yml'
V. Update the Code in the remote repository
You have made modifications to several files, so commit them and push them to our GitLab code repository:
$ git commit -a -m "GitLab CI CD test 1.0"
$ git push origin main
The pipeline will be triggered automatically.
VI. Result
Step 6.1 Pipeline
The pipeline page (CI/CD -> Pipelines) shows the result of the two 'stages' in our script with the 'Passed' icon and tooltip.

Click on them if you want to read the logs generated by the runner and Ansible.
Step 6.2 Execution
Open a browser to see our app and go to http://localhost/ (the Ingress controller listens on port 80).
That will generate a request to '/' and the app will show text similar to this:

That's it!
The next steps to upgrade this script would be to add a 'task' that checks whether the cluster already exists (as it is, the pipeline fails when it does) and, in that case, just cleans up the objects; a sketch of that check follows. Or add a 'job' to the 'test' stage that executes a test suite/script (like Selenium) and deletes the cluster at the end of the pipeline.
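A sketch of what that existence check could look like as extra tasks in create_kind_cluster.yml (task names and the 'ci-cluster' name are assumptions):
- name: List the existing kind clusters
  command: kind get clusters
  register: kind_clusters
  changed_when: false

- name: Create the cluster only if it does not exist yet
  command: kind create cluster --name ci-cluster --config {{ playbook_dir }}/kind-config.yaml
  when: "'ci-cluster' not in kind_clusters.stdout_lines"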
VII. Clean Up
To clean up, you can delete the objects in the cluster with:
$ kubectl delete -f ./files/using_ingress.yaml &&
kubectl delete -f ./files/nginx-deployment.yaml &&
kubectl delete -f ./files/deployment_cluster.yaml &&
kubectl delete namespace testing
Or just delete the cluster:
$ kind delete cluster --name "ci-cluster"
References
- Mike Nöthiger – How To Set Up a Continuous Deployment Pipeline with GitLab CI/CD on Ubuntu 18.04
- Adam Rush – Running KinD in GitLab CI on Kubernetes
- Tron Hindenes
- Mike Nöthiger – GitLab CI/CD for a React Native App
- Jehad Nasser – How to configure Gitlab-CI to Auto-deploy your App via SSH
- Gerardo Ocampos – CI/CD | Automatiza tus despliegues con GitLab y Ansible [Spanish]