CI/CD Pipeline for Maintaining a Stable and Customizable Kubernetes

I am an intern in the Global Infrastructure group in the Austin, TX office this summer. Our group is responsible for cloud and infrastructure engineering projects that allow Viasat’s development teams to move fast and deploy software efficiently. Containers and container technologies are largely responsible for that, and they have exploded in popularity over the last few years.

Containers are a form of packaging that abstracts applications from the environment in which they run. But when you start to run a large number of containers in production, you have to handle the underlying complexity: maintaining individual machines, ensuring uptime, and moving resources around. That is where Kubernetes comes in.

What is Kubernetes?

Kubernetes is an open-source, production-scale container orchestration tool that automates the deployment, scaling, and operation of application containers across clusters of hosts. It can run any containerized application, and it has become the industry standard for deploying containers into production.

So what is the problem then? It sounds like Kubernetes is the perfect solution!

Well, as great as Kubernetes is, using it in certain environments raises unique concerns. Open-source projects are nice because they do not cost money, but when there are bugs, there is no vendor you can call, complain to, or pay to fix them. Kubernetes ships a release about once a quarter, with maintenance releases every couple of weeks, and Viasat cannot wait that long for patches when we are running Kubernetes on important workloads. Additionally, Viasat may want to extend the project without losing the benefit of constant improvements from the open-source contributors. Kubernetes in its stock form is therefore not always sufficient for our needs, so it is valuable for Viasat to maintain a stable, continuously updated copy of the orchestration tool.

The Art of CI/CD

Problems like these are why the practice of continuous integration/continuous deployment (or CI/CD) has become so popular. CI/CD pipelines increase an organization’s ability to deliver applications and services at high velocity by integrating code changes frequently and reliably, which lets the organization compete more effectively in the market.

By creating a CI/CD pipeline that maintains a stable version of Kubernetes, we can give Viasat a competitive advantage by removing the overhead that comes with confidently deploying Kubernetes into real-world environments.

Architecture of the Pipeline


  • We maintain Viasat’s version of Kubernetes on GitHub. We call this copy Viasat Hyperkube; its image is built and pushed automatically on each update, as we will illustrate.
  • Jenkins is an open-source automation server often used for CI/CD; its Pipeline feature can be leveraged to automate complicated tasks.
  • Docker is an open-source software containerization tool. An image is a read-only snapshot from which containers are started, and a Docker registry is a repository that stores your Docker images.

Our cloning pipeline runs first. It fetches changes from the upstream Kubernetes repository and attempts to merge them into the Viasat Hyperkube repository. If a merge conflict arises, the conflicting changes are reported to the administrator along with a series of steps to resolve them. We deliberately do not automate this part of the process, because a Viasat employee needs to decide which changes should persist and which should be discarded. This is a multi-branch pipeline, so the above process runs for each configured branch.
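The merge step of the cloning pipeline can be sketched roughly as follows. This is a minimal illustration, assuming the checkout has an "upstream" remote pointing at the Kubernetes project; the function name and remote layout are ours, not Viasat's actual configuration.

```shell
# Try to merge upstream changes for one branch into the local fork checkout.
# Returns 0 on a clean merge; on a conflict, aborts the merge and returns 1
# so the pipeline can notify an administrator rather than guess at a fix.
sync_branch() {
    branch="$1"
    git fetch upstream "$branch" || return 1
    if git merge --no-edit "upstream/$branch"; then
        return 0
    fi
    git merge --abort
    echo "Merge conflict on $branch -- administrator must resolve" >&2
    return 1
}
```

In the multi-branch pipeline this would run once per configured branch, with a clean merge followed by a push to the Viasat Hyperkube remote and a conflict followed by the administrator notification.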

Our build pipeline runs after the clone pipeline terminates. It compares the most recent commit ID on the remote Viasat Hyperkube repository with the most recent commit ID processed locally on one of our Jenkins agents (an AWS EC2 instance). If the commit IDs differ, the remaining steps in the pipeline are triggered; otherwise the build is aborted, since there is no reason to rebuild a commit we already have an image for. From there, the pipeline clones the latest version of Viasat Hyperkube locally and builds the binary (by binary, we simply mean a form of Kubernetes that is ready to use). The binary is then packaged as a Docker image and given a tag.
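The change check at the top of the build pipeline boils down to comparing two commit IDs. A hedged sketch, with a hypothetical marker file standing in for wherever the agent records the last processed commit:

```shell
# Succeeds (exit 0) when the remote branch head differs from the commit ID
# recorded after the last successful build, i.e. when a new build is needed.
needs_build() {
    repo_url="$1"; branch="$2"; marker_file="$3"
    remote_head=$(git ls-remote "$repo_url" "refs/heads/$branch" | cut -f1)
    last_built=$(cat "$marker_file" 2>/dev/null || true)
    [ -n "$remote_head" ] && [ "$remote_head" != "$last_built" ]
}

# The rest of the pipeline would then do, roughly (variables illustrative):
#   if needs_build "$HYPERKUBE_REPO" "$BRANCH" "$MARKER_FILE"; then
#       clone the repository, build the binary, and run
#       docker build / docker tag on the result
#   fi
```

`git ls-remote` lets the agent make this decision without cloning anything, which keeps the no-op case cheap.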

At this point, the build pipeline triggers the test pipeline to run end-to-end conformance tests on the image. The results of these tests tell us whether the image is stable enough to confidently deploy into production. Control is then given back to the build pipeline. If the image passed all of the end-to-end tests, it is pushed to our Docker registry, which maintains a handful of the most recent built images of Viasat Hyperkube that passed all of the tests. If any tests fail, the Test Results Analyzer plugin for Jenkins can be used to diagnose the failures. Along with that, we keep several of the most recent test logs, containing any error messages, in AWS S3 to assist in resolving test failures. The administrator receives a notification from Jenkins stating whether the image passed all the tests and whether it was pushed to the registry (with a URL), along with the test log as an attachment.
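Keeping only "a handful of the most recent" images (and logs) implies a small retention step at the end of a successful run. One illustrative way to express the policy, with the tag format and the actual registry/S3 deletion calls left as assumptions:

```shell
# Given one image tag per line on stdin, in a sortable form such as a build
# number or a date, print the tags that fall outside the N most recent.
# A real pipeline would then delete those tags from the Docker registry and
# the matching test logs from S3; those calls are omitted here.
tags_to_prune() {
    keep="$1"
    sort -r | tail -n +"$((keep + 1))"
}
```

For example, `printf '103\n101\n102\n' | tags_to_prune 2` keeps the two newest builds and prints `101` as the tag to delete.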

Day-to-Day Usage

With our pipeline, involvement from Viasat employees is largely removed (apart from manually resolving merge conflicts). Options such as which branches of Kubernetes to clone and how many stable images to store in our registry are parameterized and can be modified at any time. A Viasat employee can come into the office in the morning and find a stable version of Viasat’s Kubernetes ready to deploy into production!


Through this project, we learned how to use a handful of cutting-edge technologies as part of our summer internship here at Viasat. Not only are Docker, Jenkins, and Kubernetes highly sought-after skills, but their applications are nearly endless.

But we also learned a lot about the type of company we want to work for after graduation. We really appreciate the level of autonomy granted to us, ranging from managing our own working hours to making our own technical decisions within our project. Each time our supervisor Piotr Siwczak made a suggestion to our team about how to tackle a particular problem, he would immediately follow it up with “But if you think that there is a better way, you should definitely try that.” This idea of blazing your own trail within the company is a philosophy well-fostered at Viasat, and I would encourage you to come experience it for yourself.

So if you are looking to intern somewhere where you can grow professionally, fly to California for a hackathon and social events with all expenses paid, present your impactful work to executives, pick up cutting-edge tech skills… I could go on.

Check out: Viasat Internships

Note: Austin worked in the Austin, TX office alongside Bhavani Balasubramanyam and Mittal Jethwa. Bhavani is a second-year Masters student at Arizona State University, Mittal is a second-year Masters student at San Diego State University, and Austin is a junior at Rice University.

