GitLab & Ansible CI/CD on k3s

This is another post in the series that was not planned for.

It shows the automation of an App [1] build, its upload to a local repository, and its deployment to a ‘k3s Kubernetes cluster’ [2]. The last stage, the deployment to the Kubernetes cluster, differs from the GitLab-recommended approach: no agent, Helm, or special charts that GitLab provides for Kubernetes deployments are used. The deployment is done with a ‘hack’, as several forum posts suggest. Since it doesn’t scale, an article using the agent should be posted here ‘soon’.

> Notes:
>
> [1] The app is a PHP/Angular/Bootstrap frontend and a Redis database with replication enabled.
>
> [2] You can check the cluster set up steps in a previous post: k3s Raspberry Pi Kubernetes Cluster.

I. GitLab Project Set Up

We will create a project and declare some variables used in our ‘gitlab-ci’ pipeline.

Step 1.1. Make the repository public

  1. Access the GitLab Server. I show examples using my local instance:
    http://example.gitlab.com.
    I log in as devguy/Clave123 (use your own credentials and write them down, as some files will use them).
  2. Create a project
    Click ‘New project’ button. Select ‘Create blank project’
    I’m naming this one: guestbook
    With public access. No need for a Readme.
    Create it.

Step 1.2. Add variables

We will use a few CI/CD variables to share our credentials with the docker runner (also created previously) so it can SSH into our machines. To create them:

  • Access our GitLab Server
  • Go to Project > guestbook
    then Settings > CI/CD
  • Scroll to the Variables section and click the Expand button
  • Fill the data (see below for each of them) and click ‘Add Variable’ button.
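If you prefer the terminal over the UI, the same variables can be created through GitLab’s REST API (documented endpoint POST /projects/:id/variables). This is only a sketch: the token and project id are placeholders, and the command is printed rather than executed.

```shell
# Build (but don't run) the API call that would create the USERNAME variable.
# '<token>' and PROJECT_ID are hypothetical placeholders - substitute yours.
GITLAB_API="http://gitlab.example.com/api/v4"
PROJECT_ID=1   # hypothetical project id
cmd="curl --request POST --header 'PRIVATE-TOKEN: <token>' $GITLAB_API/projects/$PROJECT_ID/variables --form key=USERNAME --form value=devguy"
echo "$cmd"
```

Repeat per variable; the UI route below is equivalent and what this post uses.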

The variables used in our gitlab-ci.yaml hold values we already know from our project or can get from a console terminal in your controlling PC:

1. Create user credential variables for the GitLab project (replace with your values):

  • Key: USERNAME
    Value: devguy
    Type: Variable
    Environment scope: All (default)
    Protect variable: Checked
    Mask variable: Unchecked
  • key: USERPASSWORD
    value: Clave123
    Type: Variable
    Environment scope: All
    Protect: uncheck
    Mask: uncheck

2. Create a variable for your PC user that Ansible will use to SSH log in

  • key: SERVER_USER
    value: fjmartinez <- The user who runs Ansible on your PC
    Type: Variable
    Environment scope: All
    Protect: uncheck
    Mask: uncheck
  • key: SERVER_IP
    value: 192.168.1.120 <- You can use the ifconfig command
    Type: Variable
    Environment scope: All
    Protect: uncheck
    Mask: uncheck

3. For the SSH login of the docker runner into the Kubernetes nodes (Raspberry Pi)

You are going to store the PC SSH private key in a GitLab CI/CD variable. To get the SSH private key use:

$ cat ~/.ssh/id_rsa

Copy the complete output to your clipboard:

-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----
  • Key: SSH_PRIVATE_KEY
    Value: Paste your SSH private key from your clipboard (including a line break at the end).
    Environment Scope: All
    Type: Variable -> ‘File’ makes it easier to pass to a container, but I don’t need that for this example.
    Protect variable: Unchecked
    Mask variable: Uncheck
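A common pitfall: if the pasted value loses its trailing newline, the later ssh-add step fails with an “invalid format” error. A quick sanity check, using a throwaway stand-in file in place of ~/.ssh/id_rsa:

```shell
# Stand-in key file (fabricated content); the newline check is what matters.
printf -- '-----BEGIN RSA PRIVATE KEY-----\nMIIB...\n-----END RSA PRIVATE KEY-----\n' > /tmp/demo_key
# tail -c1 grabs the last byte; wc -l is 1 only if that byte is a newline.
if [ "$(tail -c1 /tmp/demo_key | wc -l)" -eq 1 ]; then
  result="key ends with a newline: OK"
else
  result="missing trailing newline; ssh-add will reject it"
fi
echo "$result"
```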

The next one is a big string, as it holds the host keys of the machines you have SSH’d into:

  • Key: SSH_KNOWN_HOSTS
    Value: copy the output of (IPs of the PC and the two Raspberries):
  $ ssh-keyscan 192.168.1.120 192.168.1.223 192.168.1.224
  <<output>>

Type: Variable
Environment Scope: All
Protect variable: Unchecked
Mask variable: Uncheck
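Each line ssh-keyscan prints has the form `host key-type base64-key`. A fabricated stand-in shows the shape of what the variable will hold:

```shell
# Fabricated known_hosts-style line (real keys are much longer).
line='192.168.1.120 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA...'
host=$(echo "$line" | awk '{print $1}')
keytype=$(echo "$line" | awk '{print $2}')
echo "host: $host, type: $keytype"
```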


II. Get the code

The code in this post is borrowed from Google Cloud’s tutorial “Create a guestbook with Redis and PHP”. I don’t use their images because they are amd64; I need to create new ones that can run on a Raspberry Pi Kubernetes cluster (arm64 processor architecture).

My files are here. Then run the following commands:

Step 2.1 Set the local repository

# Use your local path
$ mkdir ~/Desarrollo/ci_cd/Ansible/107_Redis_CICD

# Download the archive and untar/gunzip it
$ curl 

# Get into the code folder (use your path to 'guestbook')
$ cd ~/Desarrollo/ci_cd/Ansible/107_Redis_CICD/guestbook

Step 2.2 Initialize the folder as repository with the proper branch

Initialize it as a git repository

$ git init
$ git add .

In Ubuntu 20.10 (December 2021) I get a ‘master’ branch from git initialization. I rename it to ‘main’:

# Check the branch name
$ git branch -a
* master

# If no branch is listed, create it with a commit; that is normal. If master is listed, skip this command
$ git commit -a -m "GitLab CI CD & k3s pre branch"

# If master exists, rename it to main
$ git branch -m master main

# Check the name again and the status
$ git branch --list
* main

$ git status
On branch main
nothing to commit, working tree clean

Set the remote GitLab repository credentials. I write my URL (from Step 1.1, with the user and password), using ‘devguy’ and ‘devguy@gitlab.example.com’ (replace with your values):

$ git remote add origin http://devguy:Clave123@gitlab.example.com/devguy/guestbook.git
$ git config --global user.name "devguy"
$ git config --global user.email "devguy@gitlab.example.com"
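Embedding the password in the remote URL is fine for a lab, but it lands in `.git/config` in plain text. An alternative sketch using git’s credential store (still plain text in ~/.git-credentials, but it keeps the password out of the remote URL and your shell history):

```shell
# Alternative: clean remote URL + stored credentials (a sketch).
git config --global credential.helper store
# With this set you could then use a password-less remote, e.g.:
#   git remote add origin http://gitlab.example.com/devguy/guestbook.git
# git prompts once on the first push and caches the credentials.
helper=$(git config --global credential.helper)
echo "credential.helper=$helper"
```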

Step 2.3 Modify the Kubernetes files

The ‘image’ field in ./files/frontend.yaml and ./files/redis-follower.yaml will need to be updated to the project repository name, and so will the path to your controlling machine’s ‘kubeconfig’ in ./files/guestbook_deployment_playbook.yaml.
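The edit itself is a one-liner per file. A sketch using sed on a throwaway copy (the registry path shown is this tutorial’s; substitute yours, and run against the real ./files/*.yaml once you are happy with the pattern):

```shell
# Throwaway stand-in for ./files/frontend.yaml so the command is safe to try.
mkdir -p /tmp/files
printf 'image: "old-registry/frontend:v1"\n' > /tmp/files/frontend.yaml
# Rewrite the image reference in place to point at the project registry.
sed -i 's|image: .*|image: "registry.example.com:5050/devguy/guestbook/frontend:multiarch"|' /tmp/files/frontend.yaml
cat /tmp/files/frontend.yaml
```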

Step 2.4 Modify the GitLab pipeline

Check the following overview of the pipeline to see which lines you need to modify as you set your own path names and variables.


III. Buildx

We need ‘arm64’ images for our Raspberry Pi and ‘amd64’ images for our 386/intel/amd machines. To let docker build multiplatform images we use the ‘experimental’ feature docker buildx, which extends the docker build command with the support provided by the Moby BuildKit builder toolkit. BuildKit is designed to work well for building for multiple platforms, not only for the architecture and operating system of the user invoking the build.

In the script we also use buildx ls to list all available builders, and the docker buildx create command with the --use parameter so the new builder becomes the current builder instance, pointing to a docker context or endpoint (where context is the name of a context from docker context ls; by default, the current Docker configuration is used to determine the context/endpoint value). You can build buildx from source or copy a binary; I show the first option in the script.

When you use this image in docker run or docker service, Docker picks the correct image based on the node’s platform. You can also set the --platform flag to specify the target platform for the build output (for example, linux/amd64, linux/arm64 or darwin/amd64).


IV. Code overview

In this gitlab-ci.yml script we can see that three stages will be used. The top three variables are the ones the ‘docker runner’ needs to connect, use the correct filesystem, and push to our HTTP ‘ssh-less’ repository. The last two are the platforms the docker images will be built for and the local registry our project uses (update the last one if you use a different project name):

stages:
  - buildx
  - package
  - test

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  # Docker platforms to build
  TARGETPLATFORM: 'linux/amd64,linux/arm64'
  REGISTRY_IMAGE: 'registry.example.com:5050/devguy/guestbook/'

In the ‘buildx’ stage the runner is first instructed to use an image that contains ‘git’ so we can clone the ‘buildx’ git repository. Then an image will be built and stored as an artifact. For that build we need the ‘services’ provided by a docker daemon (docker-in-docker, dind).

buildx:
  image: docker:20.10-git
  stage: buildx
  variables:
    GIT_STRATEGY: none
  artifacts:
    paths:
      - buildx
    expire_in: 1 hour
  services:
    - docker:20.10-dind
  script:
    - 'git clone https://github.com/docker/buildx ./docker-buildx'
    - 'DOCKER_BUILDKIT=1 docker build --platform=local -o . ./docker-buildx'

The ‘package’ stage will build the multiplatform images. I use docker 20.10 (the recommended minimum for buildx is 19.03) both for the runner image and for the dind service:

package_follower:
  stage: package
  image: docker:20.10
  services:
    - docker:dind
    - name: docker:dind
      command: ["--experimental"]

In ‘before_script’ we set another environment variable to enable experimental features, copy our buildx artifact into a new folder (cli-plugins), create a new builder instance, and set it as the current one. We run the multiarch/qemu-user-static docker image to emulate an ARM64 environment so that platform can be built. Finally, we log into our local GitLab project’s container registry using the server variables previously defined in the GitLab project.

before_script:
    # buildx-artifact setup
    - export DOCKER_CLI_EXPERIMENTAL=enabled
    - mkdir -p ~/.docker/cli-plugins
    - mv buildx ~/.docker/cli-plugins/docker-buildx
    - docker buildx create --driver-opt network=host --config buildkit-config.toml --use
    # Start arm emulator
    - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
    # repository login
    - 'echo $USERPASSWORD | docker login registry.example.com:5050 -u $USERNAME --password-stdin'

As the ‘script’ we create two images (Follower and Frontend), indicating the tag to be used, and in the same command push them to our local repository. Notice that I use ‘variable substitution’ with “{}”: it helps the parser when concatenation with other text might confuse it.

There are two folders in the example. One is for the ‘Follower’ and contains the Dockerfile and the shell command to start Redis with ‘--replicaof’ (pointing to the leader). The replicas will automatically reconnect to the master every time the link breaks, and will attempt to be an exact copy of it regardless of what happens to the master.

Note: This is a modification made to the original code: “Starting with Redis version 5, the Redis project no longer uses the word slave. Please use the new command REPLICAOF. The command SLAVEOF will continue to work for backward compatibility”. In the run.sh the following change was made:

# Old code
redis-server --slaveof ${REDIS_LEADER_SERVICE_HOST} 6379
# New code:
redis-server --replicaof ${REDIS_LEADER_SERVICE_HOST} 6379

The second folder is the ‘Frontend’, containing the PHP app with an HTML page, controller, composer (managing dependencies) and PHP code to write to the database, plus the Dockerfile. It is an HTML-PHP front end configured to communicate with either the Redis follower or leader Services, depending on whether the request is a read or a write. The frontend exposes a JSON interface and serves a jQuery-Ajax-based UX.

script:
    - 'docker buildx build --push --platform $TARGETPLATFORM --tag ${REGISTRY_IMAGE}follower:multiarch follower'
    - 'docker buildx build --push --platform $TARGETPLATFORM --tag ${REGISTRY_IMAGE}frontend:multiarch frontend'
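The braces in ${REGISTRY_IMAGE} are what make the concatenation above unambiguous; without them the shell would look for a variable literally named REGISTRY_IMAGEfollower. A quick demo with a stand-in value of the pipeline variable:

```shell
# Stand-in for the pipeline's REGISTRY_IMAGE variable.
REGISTRY_IMAGE='registry.example.com:5050/devguy/guestbook/'
tag="${REGISTRY_IMAGE}follower:multiarch"
echo "$tag"
# Without braces, "$REGISTRY_IMAGEfollower" would expand the (unset)
# variable REGISTRY_IMAGEfollower to an empty string instead.
```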

In the ‘deploy’ stage we first provide the SSH login data that the Ansible container in our ‘Docker runner’ needs to access the Ansible files on the developer machine and make the App deployment.

Note: We could also have copied the Pis’ credentials and the kubeconfig file, and used an Ansible image with the openshift, requests and kubernetes Python modules that are needed to deploy to Kubernetes, ending up with cleaner code. But that seems like a lot of work we can avoid 🙂

cluster:
  stage: deploy
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
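The `tr -d '\r'` is there because values pasted into the GitLab UI from some editors carry Windows-style carriage returns, which corrupt the key. A small demo of the cleanup:

```shell
# Simulate a pasted value with CRLF line endings, then strip the CRs.
dirty=$(printf 'line1\r\nline2')
clean=$(printf '%s' "$dirty" | tr -d '\r')
# od -c renders a carriage return as '\r', so grep can detect leftovers.
printf '%s' "$clean" | od -c | grep -q '\\r' && result="CR still present" || result="CRs stripped"
echo "$result"
```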

In the last stage, the script will execute ‘guestbook_deployment_playbook.yaml’ as an SSH command on the developer’s machine. In it we set the folder where the files are located and the PATH so the ansible-playbook executable can be found.

  script:
    - ssh $SERVER_USER@$SERVER_IP
      'export PATH="$HOME/.local/bin:$PATH";
      cd ~/Desarrollo/ci_cd/Ansible/107_Redis_CICD/guestbook;
      ansible-playbook
      --connection=local
      -i localhost,
      ./files/guestbook_deployment_playbook.yaml'

That playbook will run using ‘localhost,’ (notice the comma, which makes the inventory a list rather than a file, as we have used before). We don’t need the Raspberries’ inventory for the Kubernetes deployment. Each of the three Kubernetes deployments contains a pod and its service definition:

  • ./files/redis-leader-deployment.yaml First a namespace (testing) is created, as it is a requisite of the k8s collection. Then a Redis database will be downloaded from hub.docker. The version is updated to 6.2.6/arm from redis:6.0.5 in the original example, which was arm/v5.
    spec:
      containers:
      - name: leader
        image: "docker.io/redis:6.2.6"
  • ./files/redis-follower-deployment.yaml This will download from our local repository a Redis container with the initial parameter to run as a copy of the main Redis database, with two replicas.
  • ./files/frontend-deployment.yaml This will also download the app from our GitLab server’s repository, with three replicas running on port 80 at our cluster-defined URL.
  • ./files/route.yaml Will create the ingress route so the frontend service can be accessed from the outside.

The final .gitlab-ci.yml file is:

stages:
  - buildx
  - package
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  # Docker platforms to build
  TARGETPLATFORM: 'linux/amd64,linux/arm64'
  REGISTRY_IMAGE: 'registry.example.com:5050/devguy/guestbook/'

buildx:
  image: docker:20.10-git
  stage: buildx
  variables:
    GIT_STRATEGY: none
  artifacts:
    paths:
    - buildx
    expire_in: 1 hour
  services:
    - docker:20.10-dind
  script:
    - 'git clone https://github.com/docker/buildx ./docker-buildx'
    - 'DOCKER_BUILDKIT=1 docker build --platform=local -o . ./docker-buildx'

packager:
  stage: package
  image: docker:20.10
  services:
    - docker:dind
    - name: docker:dind
      command: ["--experimental"]
  before_script:
    # buildx-artifact setup
    - export DOCKER_CLI_EXPERIMENTAL=enabled
    - mkdir -p ~/.docker/cli-plugins
    - mv buildx ~/.docker/cli-plugins/docker-buildx
    - docker buildx create --driver-opt network=host --config buildkit-config.toml --use
    # Start arm emulator
    - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
    # repository login
    - 'echo $USERPASSWORD | docker login registry.example.com:5050 -u $USERNAME --password-stdin'
  script:
    - 'docker buildx build --push --platform $TARGETPLATFORM --tag ${REGISTRY_IMAGE}follower:multiarch follower'
    - 'docker buildx build --push --platform $TARGETPLATFORM --tag ${REGISTRY_IMAGE}frontend:multiarch frontend'
      
cluster:
  stage: deploy
  image: mullnerz/ansible-playbook
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    # Update Python
    - apk add --no-cache python3 py3-pip
    - python3 -m pip install --upgrade pip
    - python3 -m pip install openshift requests kubernetes
  script:
    - ssh $SERVER_USER@$SERVER_IP
      'export PATH="$HOME/.local/bin:$PATH";
      cd ~/Desarrollo/ci_cd/Ansible/107_Redis_CICD/guestbook;
      ansible-playbook
      --connection=local
      -i localhost,
      ./files/guestbook_deployment_playbook.yaml'

V. Update the Code in the remote repository

You have made modifications to several files, so save them and then push them into our GitLab code repository:

$ cd ~/Desarrollo/ci_cd/Ansible/107_Redis_CICD/guestbook

$ git commit -a -m "GitLab CI CD & k3s test 1.0"
$ git push origin main

An automatic run of the pipeline will be triggered.


VI. Test it


After 18 minutes you can open a browser at:

http://192.168.1.223

Try adding some ‘guestbook’ entries by typing in a message, then click the ‘Submit’ button. The message you typed is sent and stored in the Redis database, then read from the Redis replicas and listed at the bottom.


VII. SCALE

Scale the Web Frontend

You can scale up or down as needed because your servers are defined as microservices.

  1. Run the following command to scale up the number of frontend Pods:
    $ kr scale deployment frontend --replicas=5
  2. Query the list of Pods to verify the number of frontend Pods running:
    $ kr get pods
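`kr` is a shorthand used throughout this series; it is presumably an alias or shell function along these lines (an assumption, adjust to your own setup):

```shell
# Hypothetical definition of the 'kr' shorthand: kubectl scoped to the
# 'testing' namespace that this deployment uses. Defining the function does
# not require kubectl to be installed; invoking it does.
kr() { kubectl -n testing "$@"; }
type kr >/dev/null && echo "kr defined"
```

With that in place, `kr get pods` is equivalent to `kubectl -n testing get pods`.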

VIII. Clean Up

To manually delete the Kubernetes objects we deployed for the App:

$ cd ~/Desarrollo/ci_cd/Ansible/107_Redis_CICD/guestbook

$ kr delete -f ./files/frontend.yaml
$ kr delete -f ./files/redis-follower.yaml
$ kr delete -f ./files/redis-leader.yaml
$ kr delete namespace testing

$ kr get pods -n testing

Reference

Create a ‘kind’ Kubernetes cluster and deploy an App using a GitLab pipeline

[This is part 8/8 on GitLab Continuous Integration series]

Objective

In testing microservices I imagine three incremental scenarios:

  1. The first runs in a developer’s machine, mostly unit tests with mock-ups that can run from shell or IDE.
  2. Next one adds a few services like a database. It might be local (developer machine).
  3. Finally, a full app test with deployment to QA servers or Kubernetes cluster.

This post is about scenario #2. We will use a NodeJS app that runs in a web server. The steps to automate are:

  • When the App changes (git push), the GitLab Server will build an image.
  • It will upload it to our local (GitLab) repository.
  • It will create a ‘kind’ Kubernetes cluster in the local PC. I use KIND but there is also Minikube, k3d or MicroK8s.
  • Deploy the application and configure an Ingress controller so we can access it on localhost port 80.

Notes:

  • Scenario #3 can be implemented using the previous IaC post and tags in the pipeline for a deployment branch. There are lots of tutorials; check the reference section at the end, and the last two posts of this series.
  • You’ll need on your PC: Python 3 (Anaconda), Ansible, kind and a GitLab runner.

I. Prepare your PC

The preferred method Ansible uses to access computers is SSH. We need it on our PC.

Install OpenSSH:

  1. Open a terminal and:
# Install
$ sudo apt-get install openssh-server

# Enable service
$ sudo systemctl enable ssh
  2. Make sure you complete the login process into your PC to generate the credential files:
$ ssh your-username@your.local.ip

II. Project Set Up

We’ll perform two steps in our GitLab Server (or the code repository of your preference). For our GitLab Server the steps are:

Step 2.1 Create the repository

  1. Access our GitLab Server. I have shown examples of our set up using:
    http://example.gitlab.com
    Log in as devguy/Clave123.
  2. Click create project (plus sign at top). Select ‘blank project’.
    I’m naming mine CICDtest, with public access.
    Click create
  3. Get the url to access the project. In our case the string can be copied from the page:
    http://devguy:Clave123@gitlab.example.com/devguy/CICDtest.git

Step 2.2 Create Server variables

We will use a few CI/CD variables for our Ansible scripts. To create them:

  • Access our GitLab Server
  • Go to Project > CICDtest
    then Settings > CI/CD
    expand Variables
  • Enter the values below
  • Note: Some variables can’t be masked because they don’t meet the regular expression requirements (see GitLab’s documentation about masked variables).
  • Click ‘Add Variable’ button.

The variables used in our script will hold values we already know or you’ll get from a console terminal in your PC:

  1. Create two variables with the Gitlab project’s login ‘user’ and ‘password’ (replace your values):
  • Key: USERNAME
    Value: devguy
    Type: Variable
    Environment scope: All (default)
    Protect variable: Checked
    Mask variable: Unchecked
  • key: USERPASSWORD
    value: Clave123
    Environment scope: All
    Protect: uncheck
    Mask: uncheck
  2. Create a variable for your user and IP on the PC. Get the IP from the command ifconfig
  • key: SERVER_USER
    value: <your user name>
    Environment scope: All
    Protect: uncheck
    Mask: uncheck
  • key: SERVER_IP
    value: <your.ip.value>
    Environment scope: All
    Protect: uncheck
    Mask: uncheck
  3. For the SSH login
    You are going to store the SSH private key in a GitLab CI/CD file variable, so that the pipeline can use the key to log in to the server. To get the SSH private key use:
$ cat ~/.ssh/id_rsa

Copy the complete output to your clipboard:

-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----

  • Key: ID_RSA
    Value: Paste your SSH private key from your clipboard (including a line break at the end).
    Environment Scope: All
    Type: File (a plain ‘Variable’ also works)
    Protect variable: Checked
    Mask variable: Uncheck
    If you select ‘File’, a file containing the private key will be created on the runner for each CI/CD job and its path will be stored in the $ID_RSA environment variable.
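Since a File-type variable resolves to a path rather than the key itself, $ID_RSA is consumed differently from a plain variable. A sketch with a stand-in file (the path and contents are fabricated for the demo):

```shell
# With Type=File, GitLab writes the value to a temp file on the runner and
# puts its *path* in $ID_RSA, so jobs use it as a file, e.g.:
#   ssh -i "$ID_RSA" user@host ...
ID_RSA=/tmp/id_rsa_demo            # stand-in for the runner-created path
printf 'dummy-key-material\n' > "$ID_RSA"
test -f "$ID_RSA" && echo "ID_RSA points to a file: $ID_RSA"
```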

The next one is a big string, as it holds the host keys of the server you have SSH’d into:

  • Key: SSH_KNOWN_HOSTS
    Value: can be the output of:
# 1. content of the file
$ cat ~/.ssh/known_hosts
# 2. the command ssh-keyscan
$ ssh-keyscan <IPs to log into>
<<output>>

Environment Scope: All
Protect variable: Checked
Mask variable: Uncheck


III. Get a copy of the code

Four steps are needed to get the code ready.

Step 3.1 Get a copy of the example

Create a folder on the PC (use your own PATH; it doesn’t affect the setup, just substitute it in a couple of commands):

$ mkdir ~/Desarrollo/ci_cd/Ansible/106_Complete_CI-CD

Get the ‘CICDtest.zip‘ file into that folder and untar/ungzip it.

Step 3.2 Make it into a local repository

Get into the working folder

$ cd ~/Desarrollo/ci_cd/Ansible/106_Complete_CI-CD/CICDtest

and initialize it as a git repository

$ git init
$ git add .

If, like me (still in December 2021), you get a ‘master’ branch from git initialization, you might want to rename it to a more sensible ‘main’ with:

# Check the branch name
$ git branch -a
* master

# You can rename it with
$ git branch -m master main

# Check the name again
$ git branch -a
* main

$ git status
On branch main
nothing to commit, working tree clean

Set the remote GitLab repository credentials. I write here my URL (from Step 2.1, with the user and password), using ‘devguy’ and ‘devguy@gitlab.example.com’:

$ git remote add origin http://devguy:Clave123@gitlab.example.com/devguy/CICDtest.git
$ git config --global user.name "devguy"
$ git config --global user.email "devguy@gitlab.example.com"

Step 3.3 Modify values accordingly [on your PC]

  1. Update your ‘IP’ and ‘username’ with: vi hosts.ini
192.168.1.120    ansible_user=fjmartinez
  2. In the Kubernetes deployment: you might need to modify ./files/deployment_cluster.yaml, as it references the repository name.

IV. Review the Code

About organization

  • App: The app is just a single file. It’s a NodeJS server that listens on port 8080; when a request is received at ‘/’ (reached on port 80 from outside), it prints os.hostname
  • files: Contains the yml Ansible files, yaml files for the cluster deployment (Kubernetes deployment files) and kind cluster config.
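For orientation, app.js is probably little more than the following; this is a hypothetical reconstruction written to a temp file, and the real file in CICDtest.zip may differ:

```shell
# Sketch of the single-file NodeJS app (assumed structure, not the actual file).
cat > /tmp/app.js <<'EOF'
const http = require('http');
const os = require('os');
// Answer every request with the container's hostname; listens on 8080.
http.createServer((req, res) => {
  res.end('You have hit ' + os.hostname() + '\n');
}).listen(8080);
EOF
wc -l < /tmp/app.js
```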

The pipeline is explained in 3 parts, in the order they were tested. Then they were combined and some improvements were made.

Part 4.1. Build the container

The first part of the script is the ‘build’ stage. The corresponding part of the pipeline uses the services of a docker container, which allows us to build an image tagged with the complete path of our GitLab project.

After a successful build we want to push the image to the GitLab repository. For that we need to provide the credentials in a ‘before_script’ command. Notice the use of our first variable ($USERPASSWORD), so we avoid writing it in the script, which would end up in the logs.

.gitlab-ci.yml [Part 1]

stages:
  - build

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
    - name: docker:dind
      alias: docker
  before_script:
    - echo $USERPASSWORD | docker login registry.example.com:5050 -u $USERNAME --password-stdin
  script:
    - docker build -t registry.example.com:5050/devguy/cicdtest/javatest:v1 app
    - docker push registry.example.com:5050/devguy/cicdtest/javatest:v1

Dockerfile

FROM node:7
ADD app.js /app.js
ENTRYPOINT ["node", "app.js"]

Part 4.2. Create the cluster

The next step in the pipeline is the ‘test’ stage. We need a Kubernetes cluster. In some examples it already exists, as creating one takes time. If you can keep one running all the time, remove this job (part 1 of the step). But I chose to illustrate IaC, so we will create one.

But testing on our PC turned out to be more complicated than setting up other machines:

  • Caveat #1: Local SSH Access
    Running Ansible from a Docker runner works using a local connection. But executing the .bashrc setup script in a non-login shell exits early, so no PATH is set and none of the software (kind, ansible-playbook, python) is found.
    To provide that information, in the ‘before_script’ we copy the SSH data the runner needs to SSH into the PC. Tip: check your machine’s app paths with the commands which kind, which python3, etc.
    The not-so-cool part of this approach is that the YML configuration file for the cluster is read from the PC, not the repository. I tried different configurations of inventory/users/connection, but to no avail. One option that should work is using a service with an image that contains the software we need. I’ll update this.
  • Caveat #2: Repository authorization
    Kubernetes can provide credentials to access a private repository, or our GitLab repository, using a ‘docker-secret’ referenced in the deployment spec as ‘imagePullSecrets:’.
    But I am using an ‘insecure’ server (which has proven to be more complicated than a secure one) to host our project’s repository, and the Kubernetes approach won’t work in kind. We need to modify the ‘kind’ cluster configuration to enable a plugin. From this:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
…

To this (in the ./files/kind-config.yaml file)

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.example.com:5050"]
    endpoint = ["http://registry.example.com:5050"]
...

If you are using minikube or k3s, a similar configuration can be made.

The part of the script that creates the KinD cluster is:

.gitlab-ci.yml [Part 2]

stages:
  - test
  
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts

cluster:
  stage: test
  script:
    - ssh $SERVER_USER@$SERVER_IP 'export PATH="$HOME/go:$HOME/go/bin/:/usr/local/go/bin:$HOME/anaconda3/bin:$PATH"; ~/anaconda3/bin/ansible-playbook -i localhost, ~/Desarrollo/ci_cd/Ansible/106_Complete_CI-CD/CICDtest/files/create_kind_cluster.yml'

Note: You can see the ‘localhost,’ notation as a name list (use of the comma) here.

Part 4.3. Deploy the App

The next step is the deployment of the app using the Ansible Kubernetes collection. Using a standard kubectl deployment file we will: create a namespace (mandatory for the collection), start an Ingress controller, start the App deployment (yaml that starts a pod and a service), wait until the Ingress controller is ready to serve requests, and lastly deploy the routing rule. Note that if you change the App, you might just need to update ‘deployment_cluster.yaml’.

This is the complete file:

deployment_in_kind.yaml

---
# This playbook deploys an App from a local repository into a Kubernetes cluster

- name: Ansible App Deployment
  hosts: localhost
  collections:
    - kubernetes.core
    
  tasks:
  - name: Create a namespace (requirement for the collection)
    k8s:
      name: testing
      api_version: v1
      kind: Namespace
      state: present

  - name: Deploy the Ingress controller
    k8s:
      state: present
      kubeconfig: "~/.kube/config"
      src: "nginx-deployment.yaml"
      
  - name: Deploy the App
    k8s:
      state: present
      kubeconfig: "~/.kube/config"
      src: "deployment_cluster.yaml"        

  - name: Wait until the Ingress controller is ready to serve requests
    k8s_info:
      kind: Pod
      wait: yes
      wait_timeout: 500
      wait_condition:
        type: Ready
        status: "True"
      namespace: ingress-nginx
      label_selectors:
        app.kubernetes.io/component=controller

  - name: Deploy the routing rule
    k8s:
      state: present
      kubeconfig: "~/.kube/config"
      src: "using_ingress.yaml"

And the third part of the pipeline code is:

.gitlab-ci.yml [Part 3]

stages:
  - deploy

deploy_app:
  stage: deploy

  before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts

  script:
  - ssh $SERVER_USER@$SERVER_IP 'export PATH="$HOME/go:$HOME/go/bin/:/usr/local/go/bin:$HOME/anaconda3/bin:$PATH"; cd ~/Desarrollo/ci_cd/Ansible/106_Complete_CI-CD/CICDtest/; ~/anaconda3/bin/ansible-playbook -i hosts.ini ./files/deployment_in_kind.yml'

Note: If you want detailed logging from Ansible, add the -v, -vv or -vvv parameter, like this: ansible-playbook -vvv -i hosts, . If you want logging from the script, use CI_DEBUG_TRACE: "true"

In this command the inventory does reference the hosts.ini file, as it lists an IP, a user and vars:

192.168.1.120    ansible_user=fjmartinez

[all:vars]
ansible_python_interpreter=/usr/bin/python3.8
ansible_connection=local

The Complete Pipeline

If we assemble the script, the complete .gitlab-ci.yml is:

stages:
  - build
  - test

variables:
  DOCKER_HOST: tcp://docker:2375/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
    - name: docker:dind
      alias: docker
  before_script:
    - echo $USERPASSWORD | docker login registry.example.com:5050 -u devguy --password-stdin
  script:
    - docker build -t registry.example.com:5050/devguy/cicdtest/javatest:v1 app
    - docker push registry.example.com:5050/devguy/cicdtest/javatest:v1

cluster:
  stage: test
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - ssh $SERVER_USER@$SERVER_IP 'export PATH="$HOME/go:$HOME/go/bin/:/usr/local/go/bin:$HOME/anaconda3/bin:$PATH"; ~/anaconda3/bin/ansible-playbook -i localhost, ~/Desarrollo/ci_cd/Ansible/106_Complete_CI-CD/CICDtest/files/create_kind_cluster.yml'
    - ssh $SERVER_USER@$SERVER_IP 'export PATH="$HOME/go:$HOME/go/bin/:/usr/local/go/bin:$HOME/anaconda3/bin:$PATH"; cd ~/Desarrollo/ci_cd/Ansible/106_Complete_CI-CD/CICDtest/; ~/anaconda3/bin/ansible-playbook -i hosts.ini ./files/deployment_in_kind.yml'

V. Update the Code in the remote repository

We have modified several files, so save them and push them to our GitLab code repository:

$ git commit -a -m "GitLab CI CD test 1.0"
$ git push origin main

This push automatically triggers the pipeline.
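Conversely, GitLab skips the pipeline for commits whose message contains a [skip ci] (or [ci skip]) marker, which is handy for documentation-only changes. A self-contained demo in a throwaway repository:

```shell
# Create a throwaway repo and make a docs-only commit; when pushed to
# GitLab, the [skip ci] marker prevents a pipeline from starting.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "devguy@gitlab.example.com"
git config user.name "devguy"
echo "typo fixed" > README.md
git add README.md
git commit -q -m "Fix typo in README [skip ci]"
git log -1 --format=%s   # prints: Fix typo in README [skip ci]
```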


VI. Result

Step 6.1 Pipeline

The pipeline page (CI/CD -> Pipelines) shows the result of the two ‘stages’ in our script with the icon and tooltip: Passed

Click on them if you want to read the logs generated by the runner and Ansible.

Step 6.2 Execution

To see our app, open a browser and type:

http://localhost

That will generate a request to ‘/’ and the app will show a text similar to this:

That's it!

The next steps to upgrade this script would be to add a ‘task’ that checks whether the cluster already exists (as it is, the pipeline fails on a second run) and, if so, just cleans up the objects. Another option is to add a ‘job’ to the ‘test’ stage that executes a test suite/script (like Selenium) and deletes the cluster at the end of the pipeline.
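That existence check could be sketched as a pair of Ansible tasks at the top of the cluster-creation playbook. The module usage below is illustrative, not taken from the post's actual files:

```yaml
# Hypothetical guard: create the kind cluster only when it is absent
- name: List existing kind clusters
  command: kind get clusters
  register: kind_clusters
  changed_when: false

- name: Create the cluster if it does not exist
  command: kind create cluster --name ci-cluster
  when: "'ci-cluster' not in kind_clusters.stdout_lines"
```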


VII. Clean Up

To clean up, you can delete the objects in the cluster with:

$ kubectl delete -f ./files/using_ingress.yaml &&
  kubectl delete -f ./files/nginx-deployment.yaml &&
  kubectl delete -f ./files/deployment_cluster.yaml &&
  kubectl delete namespace testing

Or just delete the cluster

$ kind delete cluster --name "ci-cluster"

References

Run a CD pipeline for MariaDB and Web App.

This post is an add-on to the series: I tried to do a “LAMP automated deployment from a GitLab repository” using Ansible, but the information I found online was either too difficult to understand or outdated.

Let's start with a LAMP Ansible playbook example from the official example repository and modify it so we can upload it to a GitLab repository and execute the deployment, setting up a web server on one machine and a MariaDB server on another (two Raspberry Pi's).

There are two things that we have to consider:

ONE
GitLab currently doesn’t have built-in support for managing SSH keys in a build environment (where the GitLab Runner runs). The most widely supported method is to inject an SSH key into our .gitlab-ci.yml.

  • A common way to make secrets available to containers is using a volume mounted into the container at runtime. Volumes can be added to the executor through the GitLab Runner configuration.
  • But the most widely supported method is to inject the SSH key into the build environment (in the .gitlab-ci.yml). This solution works with any type of executor.
    We will supply the information from Secure Variables.

TWO
The Docker image we configured in the runner is a basic one. The appropriate approach is to use an image that already has the necessary software: Ansible, Python and the MySQL utilities to configure the database.

I browsed what was available on hub.docker.com. This one from mullnerz has Ansible 2.9 and Python 3.8, which is good enough for our project, as shown by:

$ docker run --rm --name ansible mullnerz/ansible-playbook ansible --version
ansible 2.9.6
python version = 3.8.2 (default, Feb 29 2020, 17:03:31) [GCC 9.2.0]

I. Create a GitLab Project

We need a repository for our code on the GitLab Server. Let's create one:

  • Browse to our local
    http://gitlab.example.com
    or your own GitLab/GitHub code service.
    We use devguy/Clave123 to log in (credentials shown for reference, as they appear in the repository URL below)
  • Click on the blue button on the right: ‘New project’ > Create blank project
    Project name: infra
    Visibility level: Public
    Create readme file: no
    Click on ‘Create project’ button
  • We can get access to it opening the url:
    http://gitlab.example.com/devguy/infra

II. Check The Runner

We previously set up a Docker executor to run our CI/CD jobs. To verify that the Docker runner is still running as we configured it:

[On the PC]

$ sudo gitlab-runner verify
Runtime platform arch=amd64 os=linux pid=50507 revision=7a6612da version=13.12.0
Running in system-mode.
Verifying runner... is alive runner=ejJb1t1N

It seems OK. Let's continue.


III. Set The Log In Keys

Gitlab writes that: “When your CI/CD jobs run inside Docker containers and you want to deploy your code in a private server, you need a way to access it. In this case, you can use an SSH key pair.”

Create two new CI/CD variables.

  1. Access our GitLab Server in
    http://example.gitlab.com
    log in as devguy/Clave123
    (These are the URL and credentials used in previous posts)
    Go to Project > infra
    Then Settings > CI/CD
    Expand Variables, and click ‘Add Variable’ button
  2. As Key enter the name SSH_PRIVATE_KEY
    and in the Value field paste the content of your private key
    To get the private key you can use the following command (copy the whole answer):
    $ cat ~/.ssh/id_rsa
    -----BEGIN OPENSSH PRIVATE KEY-----

    -----END OPENSSH PRIVATE KEY-----
    In all variables set as Type: Variable
    Environment scope: all
    In Flags uncheck Protect and Mask
    Click Create variable button
  3. The second will hold the keys of all your hosts:
    key: SSH_KNOWN_HOSTS
    value: output of the command ssh-keyscan <IPS-list>
    (It will be a big string. Copy the whole answer):
    $ ssh-keyscan 192.168.1.223 192.168.1.224
    <<output>>
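To sanity-check the variable's content later, ssh-keygen -F can tell you whether a given host appears in a known_hosts file. The key below is a throwaway generated only for the demonstration:

```shell
# Generate a throwaway host key and register it for one of our IPs
tmp=$(mktemp -d)
ssh-keygen -t ed25519 -f "$tmp/hostkey" -N '' -q
printf '192.168.1.223 %s\n' "$(cut -d' ' -f1,2 "$tmp/hostkey.pub")" > "$tmp/known_hosts"
# -F looks the host up; exit status 0 means it was found
ssh-keygen -F 192.168.1.223 -f "$tmp/known_hosts"
```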

IV. Modifying the Code

We need to create one file and modify an existing one.

ONE
Create the .gitlab-ci.yml pipeline file (the official usage example is here).

$ cd <your-root-dir>/Ansible/105_IaC_with_GitLab/Infra
​
$ vi .gitlab-ci.yml

Add as content:

image:
    name: mullnerz/ansible-playbook
​
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
​
deploy_app:
  stage: deploy
  script:
    - 'ansible-playbook -i hosts site.yml'

Notes to the before_script section:

  1. Install ssh-agent if not already installed
  2. Run ssh-agent (inside the build environment)
  3. Add the SSH key stored in the SSH_PRIVATE_KEY variable to the agent store. tr is used to fix line endings, which makes ed25519 keys work without extra base64 encoding. The ssh-add - command does not display the value (see https://gitlab.com/gitlab-examples/ssh-private-key/issues/1#note_48526556)
  4. Create the SSH directory and give it the right permissions
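The tr -d '\r' trick in step 3 can be demonstrated in isolation; a value pasted from a Windows editor may carry CRLF line endings that ssh-add rejects:

```shell
# Simulate a variable polluted with Windows CRLF line endings
crlf_value=$(printf 'BEGIN KEY\r\nEND KEY')
# Strip the carriage returns exactly as the before_script does
clean_value=$(printf '%s' "$crlf_value" | tr -d '\r')
printf '%s\n' "$clean_value"
```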

TWO
The executor image runs Python 3.8, but the MySQL module used in the Ansible example (mysql-connector-python) targets Python 2.7, so we need to replace it with python3-mysqldb.
Edit the file: ./Ansible/IaC_with_GitLab/infra/roles/db/tasks/main.yml

- name: Install MariaDB packages
  apt:
    name:
    - 'python3-mysqldb'
    - 'mariadb-server'
    state: present

V. Upload the Code Sources

[In the PC]

$ mkdir ~/Ansible/IaC_with_GitLab/infra
​
$ cd ~/Ansible/IaC_with_GitLab/infra
​
$ git config --global user.name "devguy"
$ git config --global user.email "devguy@gitlab.example.com"
​
# Start a politically correct init repository (no master)
$ git init
$ git remote add main http://devguy:Clave123@gitlab.example.com/devguy/infra.git
​
# Add the files
$ git add .
$ git commit -m "Initial commit"
$ git ls-tree -r HEAD

Push the local branch to the remote repository (note the remote was added under the name main):

$ git push main main
Enumerating objects: 38, done.
Counting objects: 100% (38/38), done.
Delta compression using up to 4 threads
Compressing objects: 100% (26/26), done.
Writing objects: 100% (38/38), 5.52 KiB | 807.00 KiB/s, done.
Total 38 (delta 0), reused 0 (delta 0), pack-reused 0
remote:
remote: To create a merge request for master, visit:
remote: http://gitlab.example.com/devguy/infra/-/merge_requests/new?merge_request%5Bsource_branch%5D=master
remote:
To http://gitlab.example.com/devguy/infra.git
* [new branch] master -> master

$ git status
On branch main
nothing to commit, working tree clean
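Note the mismatch in the push output above: it shows master -> master because, on that git version, git init still defaulted to a master branch even though the remote was named main. With git 2.28 or newer, the initial branch name can be chosen explicitly (the throwaway directory below is only for illustration):

```shell
# Create a repository whose first branch really is 'main' (git >= 2.28)
demo=$(mktemp -d)
git init -q -b main "$demo/infra-demo"
cd "$demo/infra-demo"
git symbolic-ref --short HEAD   # prints: main
```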

If a typo was made and a file needs updating, the following sequence can be used:

# Make the editing
$ vi .gitlab-ci.yml
# Update the remote repository
$ git add .gitlab-ci.yml
$ git commit -m "Adding IaC CI pipeline"
$ git push main main

VI. Automatic Pipeline Execution

The output on the GitLab Server is:

Running with gitlab-runner 13.12.0 (7a6612da)
on docker-runner ejJb1t1N
Preparing the "docker" executor 00:02
Using Docker executor with image mullnerz/ansible-playbook …
...
Getting source from Git repository 00:02
...
$ which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )
/usr/bin/ssh-agent
$ eval $(ssh-agent -s)
Agent pid 11
$ echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
Identity added: (stdin) (xxxxx@pcxxx)
$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
$ echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
$ chmod 644 ~/.ssh/known_hosts
$ ansible-playbook -i hosts site.yml
PLAY [apply common configuration to all nodes] *********************************
TASK [Gathering Facts] *********************************************************
ok: [192.168.1.223]
ok: [192.168.1.224]
...
PLAY RECAP *********************************************************************
192.168.1.223 : ok=14 changed=4 unreachable=0 failed=0 skipped=0    rescued=0 ignored=0   
192.168.1.224 : ok=16 changed=2 unreachable=0 failed=0 skipped=0    rescued=0 ignored=0   
Job succeeded

VII. Test

Test in a browser

http://192.168.1.223/index.html

Hello World! My App deployed via Ansible V6. 

and

http://192.168.1.223/index.php

Homepage
Hello, World! I am a web server configured using Ansible and I am : kmaster
List of Databases:
foodb information_schema mysql performance_schema

VIII. Clean Up (Manual)

There is an Ansible playbook to clean up our servers:

$ cd ~/Ansible/IaC_with_GitLab/infra
​
$ ansible-playbook -i hosts reset.yml

These are notes to myself, but I hope they can be of use to other curious people.
The code pruned for the Raspberry Pi’s is here.