Circle CI/CD

Creating a simple CircleCI and Docker setup. With focus on Kubernetes and Helm.

In my company, we are using AKS (Azure Kubernetes Service). With every git commit to the master branch, a new Docker image is built and pushed to the repository by CircleCI. It is automatically deployed to the dev cluster. At a controlled release moment, we are then able to release all the necessary Docker images with their particular versions in one go to the production cluster using Helm.

The project is quite big and all the connected parts are quite complicated in their setup. That’s why I wanted to create my own small project and learn how to set up a similar CI/CD (continuous integration / continuous deployment) pipeline.

I did so in June 2020. Link to the project above. Journey description below.

What to push

The first decision was what to build and push. I made a simple website with the static site generator Hugo. It was the first time I made a website with Hugo, so this was in itself a very good learning experience. I stuck literally to the “quick start” guide so as not to lose too much time on this step, since the page itself is obviously not the goal of this project. But in general, I can only recommend Hugo.
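
For orientation, the quick start boils down to a handful of commands. This is from memory rather than the guide verbatim, and it skips the theme setup the guide walks you through:

hugo new site mysite && cd mysite
hugo new posts/hello.md
hugo server   # local preview with live reload
hugo          # builds the static site into ./public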

Script before circle

Obviously I wanted to use the same technologies that we are using on the company project. I was eager to get into CircleCI, but I quickly realized that this is not the right approach. It turns out that the best approach for me was to leave the CI as the absolute last step. First, make a long list of all the CLI commands that you find useful. I put most of them in the readme.md of the project. This is not “best practice” for a readme, but well, it was meant to be just my personal learning project and I wanted to have the commands visible. Only once you can do everything with your CLI commands from beginning to end does it make sense to put it all in a CI config.

Docker

So, first thing, I made my first successful Docker image build. After running it with a port forward, seeing that the Dockerfile picked up the correct content, and watching the page load in the browser, I have to say it felt good.

build image:
docker build -t radimj/repo1 .

run and forward port to localhost:
docker run -d -p 80:80 radimj/repo1
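
For reference, the Dockerfile behind such a build can be tiny. The following is a sketch rather than the project’s actual file; it assumes Hugo’s generated output sits in its default ./public folder:

FROM nginx:alpine
# copy the static site generated by "hugo" into nginx's web root
COPY public/ /usr/share/nginx/html/
EXPOSE 80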

Docker repository

Next step: where to push the image? Unfortunately, you can’t treat Docker images like ISOs or packages, as the word “image” would suggest. There are commands like “save” and “export” (you can look up the details yourself), but the main takeaway is that Docker images are not meant to be sent around as packaged files. You have to use a repository. I saw guides for creating local repositories, but again, this is not the main purpose Docker images exist for. You simply should use a remote repository. You can store them on Azure too, but I didn’t have a free account there, so I just went with the simplest solution and made a private DockerHub account.

Another slightly unexpected part of the process is that you don’t add the repository through some “add repo” command or similar; you have to docker login. This hashes the login details into ~/.docker/config.json. As far as I know, that is the only place where you can find out what repository your Docker is using. After that, you have to make sure your image name (tag) corresponds to your account and repository name, or else Docker wouldn’t know where in your repository to add the image. After that, a simple push should work.

push to docker hub:
docker push radimj/repo1
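
For completeness, the login and tagging steps mentioned above might look like this. These are standard Docker commands rather than an excerpt from the project:

docker login                   # stores hashed credentials in ~/.docker/config.json
docker tag repo1 radimj/repo1  # only needed if the image was built under a different name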

I’m noticing I’m being unnecessarily detailed, and this project had so many small issues that this would end up being a little book. I will be more concise below.

CircleCI

Since I didn’t have a remote Kubernetes cluster to push to, I was satisfied if my CI would just build the image and push it to the repository. The details of the CI config are in /.circleci/config.yml, which you can find in the project. I kept it as simple as possible. A few notable points, followed by a stripped-down config sketch after the list:

  • Base image
    It takes a moment to choose a fitting base Docker image that is used for the whole CI process.
  • Docker
    Use Docker with “- setup_remote_docker”. This creates a special environment that the CI uses to work with Docker.
  • CLI tool
    CircleCI has a CLI tool that lets you run the build locally. This is good for speeding up the creation of the config, but don’t rely on it too much. For example, the “setup_remote_docker” command didn’t work for me with it, and I had to use “sudo” for the Docker commands when running the CLI locally.
  • SSH into the build
    When something breaks only remotely and you don’t know why, a last resort is SSHing into the CI build. This was an interesting experience. It took a moment to set up, but if you just follow the guide in the docs, you should be fine. I managed to get into the Docker image that was sitting broken in the remote CI build, where I found what was missing.
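
Putting those points together, a stripped-down config.yml might look roughly like this sketch. It is not the project’s actual config; DOCKER_USER and DOCKER_PASS are assumed environment variables set in the CircleCI project settings:

version: 2.1
jobs:
  build-and-push:
    docker:
      - image: cimg/base:stable       # base image for the whole CI process
    steps:
      - checkout
      - setup_remote_docker           # special environment for docker commands
      - run:
          name: Build image
          command: docker build -t radimj/repo1 .
      - run:
          name: Push image
          command: |
            echo "$DOCKER_PASS" | docker login --username "$DOCKER_USER" --password-stdin
            docker push radimj/repo1
workflows:
  main:
    jobs:
      - build-and-push:
          filters:
            branches:
              only: master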

Being able to set up CircleCI was an empowering experience. CI really gives you the feeling of “set up once, never touch again”. When you first write it, the “never touch again” feels very good. But after some time, when you forget why you added this or that line… “never touch again” can become scary 😂

Kubernetes

Kubernetes can be easily tested with Minikube or similar tools that run a single-node cluster on your local machine. Setting up a production-ready Kubernetes cluster on a private server is much harder than I expected. It is not like installing nginx on a machine and expecting it to work. I read there are ways to do it, but for now that is beyond me. Production-ready Kubernetes clusters are best obtained from established providers online. After realizing this, being able to use something like Minikube (or others) is a really amazing thing, since it behaves like a real cluster and is perfect for learning and basic testing.
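
Getting a local cluster up is as simple as this (standard Minikube commands, not specific to this project):

minikube start      # spins up the single-node cluster
kubectl get nodes   # should list one node named "minikube"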

In Kubernetes, everything is about deployment setups within .yaml files. You can write them by hand, or export them from a running deployment / service / pod with --output yaml. The Kubernetes documentation is quite good, and every single one of my steps here is better explained there. So I will not be rewriting the docs here. Rather, I will show you the process of how I put brick on brick with small commands to get to a better understanding of the result.

First, you need to set up a secret in kubernetes that will be used for authorization when pulling docker images from repos. I did it from the docker login file:

kubectl create secret generic regcred \
    --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
    --type=kubernetes.io/dockerconfigjson

That can be tested by running a single pod with the secret:

kubectl run repo1 --overrides='{ "spec": { "imagePullSecrets": [{"name": "regcred"}] } }' --image=radimj/repo1 --port=80

The “overrides” functionality makes more sense once you already know what a standard yaml config looks like. It is basically just editing a default one.
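
For comparison, the same pod written as a plain manifest might look roughly like the following sketch; the field that the override injects is imagePullSecrets:

apiVersion: v1
kind: Pod
metadata:
  name: repo1
spec:
  containers:
    - name: repo1
      image: radimj/repo1
      ports:
        - containerPort: 80
  imagePullSecrets:
    - name: regcred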

The following commands give you a way to look into your pod. In my pod, there is nginx exposing my website on port 80.

forward or expose pod:
kubectl port-forward repo1 8080:80
kubectl expose pod repo1 --type="NodePort"

Deployments and services. I will not give examples of how to create them or explain in detail what they are; consult the official docs for details. But in essence, deployments are a bundle of pods, and services are configs for how these resources are exposed on ports. They are created in a similar way to pods, ideally all with yaml files. Once you have them running, you can observe them with these commands.

expose deployment:
kubectl expose deployment circle-deployment --type=LoadBalancer

nodeport in:
kubectl get svc
kubectl describe service repo1

exposed on:
$(minikube ip):<nodePort>

Kubernetes Yaml management:

create new yaml file from:
kubectl get (deploy / svc / pod) -o yaml

run pod / deployment from yaml:
kubectl apply --filename private_deploy.yaml

revert (delete) from yaml:
kubectl delete -f private_deploy.yaml

This is a roundabout way to get to your yaml file, but a good learning experience.
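
Concretely, the export step for the deployment used above might look like this; the redirect into a file is the only addition:

kubectl get deployment circle-deployment -o yaml > private_deploy.yaml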

JSON parsing

One of the most surprising discoveries on this project was JQ. When you work with Kubernetes or Azure, you quickly get used to large JSON outputs in your terminal. Both tools have built-in ways to make this more manageable. In Kubernetes, you can query most outputs with --output jsonpath=""; JSONPath is one of the query languages for parsing JSON. Azure uses “JMESPath”, which is not the same. Azure also uses --output table heavily to make things more readable. If you write scripts and you are working with only one tool, then it is probably advisable to use that tool’s built-in query language.
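
To make the contrast concrete, here are the two built-in styles side by side. These are illustrative commands, not taken from the project:

list pod names with JSONPath:
kubectl get pods --output jsonpath="{.items[*].metadata.name}"

list AKS cluster names with JMESPath:
az aks list --query "[].name" --output table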

But let me tell you, learning just one way that sits forever in your system, one that you can use for any JSON string that enters your command line, is a great thing! With JQ I was able to do things like:

display docker login secret from kubectl:

kubectl get secret regcred \
    --output jsonpath="{.data.\.dockerconfigjson}" | \
    base64 --decode | \
    jq ".auths | map(.auth)[0]" -r | \
    base64 --decode

You can see the use of both “jsonpath” and “jq” here for comparison. The output tells you exactly what login your kubectl is using. It is also good for understanding how secrets are stored in Kubernetes.

Helm

Again, I won’t go into details; this post is long enough. But simply put, Helm works with bundles of Kubernetes yaml config files, also called “manifests”. With Helm, you can create files that hold variable values and distribute these variables through your manifests. This creates a setup where you can have a hundred manifests with thousands of lines, but if they all run together in one setup, you can just put variable names into all of them and then adjust each value in just one place, making releases of big projects manageable. This bundle of manifests is called a “chart” in Helm. You can also have full repositories of charts, but I will not be getting into that here.
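
As a minimal sketch of how the substitution works (assumed file contents, not the actual chart from this project):

values.yaml:
image:
  repository: radimj/repo1
  tag: "0.2.0"

templates/deployment.yaml (fragment):
    containers:
      - name: repo1
        # the template pulls both values from values.yaml at install time
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"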

lint chart:
helm lint circle-chart/

In the example below, I am packaging the chart folder into a package. This is useful for distribution, but not necessary if you are running it from just one place. You can simply call the commands on the chart folder as well.

build package:
helm package circle-chart/

install (the release name should match the package name; both the packaged chart and the chart folder work):
helm install circle circle-0.2.0.tgz
helm install circle circle-chart/

check release:
helm ls

uninstall totally:
helm uninstall circle

rolling update:

The following command changes only the necessary pods; it does not update images tagged “latest”:

helm upgrade --install circle circle-chart/

If you want to force a change of “latest” tags, the old way was:

helm upgrade --install --recreate-pods circle circle-chart/

The new way is to add a random annotation in the pod template metadata:

      annotations:
        # creates a random 5-character string, causing the pods to be recreated
        rollme: {{ randAlphaNum 5 | quote }}

Conclusion

Setting up a CI/CD flow is not a new thing. Jenkins was released in 2011, which is like a millennium ago in the tech world. The concept of container images is not new either, but it is undeniable that Docker containers are riding a big wave right now, and related solutions like Kubernetes and Helm are being pulled along. In this project, I’m showing that anything, when taken piece by piece, can be learned. And why not learn the basics of the biggest wave in the current tech ocean?

Radim Janoš
Programmer
