KubeVela bridges the gap between applications and infrastructure, enabling easy delivery and management of application code. Compared to raw Kubernetes objects, the Application in KubeVela abstracts and simplifies the configurations developers care about, and leaves complex infrastructure capabilities and orchestration details to platform engineers. The KubeVela apiserver further exposes HTTP interfaces, which help developers deploy applications even without Kubernetes cluster access.
This article uses Jenkins, a popular continuous integration tool, as the basis for a brief introduction to building a GitOps-based continuous delivery highway for applications.
As an application developer, you care mainly about whether your application functions correctly and whether development is convenient. Several system components on this highway will help you achieve that.
- First, you need a git repo to hold the application code, test code, and a YAML file declaring your KubeVela Application.
- Second, you need a continuous integration tool to automate integration tests, build container images, and push the images to an image repo.
- Finally, you need to have a Kubernetes cluster and install KubeVela in it, with its apiserver function enabled.
Currently, access management for the KubeVela apiserver is under construction; you will need to configure apiserver access in later versions of KubeVela (after v1.1).
In this article, we adopt GitHub as the git repo, Jenkins as the CI tool, and DockerHub as the image repo. We use a simple HTTP server written in Go as the example. The whole continuous delivery process is shown below. On this highway, developers only need to care about developing the application and managing its code versions with Git; the highway runs integration tests and deploys the application into the target Kubernetes cluster automatically.
This article takes Jenkins as the CI tool; developers can choose other CI tools such as Travis or GitHub Actions.
First, you need to set up Jenkins to run the CI pipelines. Refer to the official docs for the installation and initialization of Jenkins.
Notice that since the CI pipeline in this example is based on Docker and GitHub, you need to install the related plugins in Jenkins (Dashboard > Manage Jenkins > Manage Plugins), including Pipeline, HTTP Request Plugin, Docker Pipeline, and Docker Plugin.
Besides, you need to configure a Docker environment for Jenkins to use (Dashboard > Manage Jenkins > Configure System > Docker Builder). If Docker is already installed on the Jenkins host, you can set the Docker URL to the local daemon socket (typically unix:///var/run/docker.sock).
Since the Docker image will be pushed to the image repo while the CI pipeline runs, you also need to store the image repo account in Jenkins Credentials (Dashboard > Manage Jenkins > Manage Credentials > Add Credentials), such as the DockerHub username and password.
This example uses GitHub as the git repo; developers can switch to other repos on demand, such as GitLab.
To enable Jenkins to retrieve GitHub updates and write pipeline status back to GitHub, you need to execute the following two steps in GitHub.
- Configure a Personal Access Token. Make sure to check repo:status to get permission to write commit statuses.
Then fill the Personal Access Token from GitHub into a Jenkins Credential (with the Secret text type).
Finally, go to Dashboard > Manage Jenkins > Configure System > GitHub in Jenkins and click Add GitHub Server to fill the newly created credential in. You can click Test connection to check if the configuration is correct.
- Add a webhook to the GitHub code repo settings, filling in the Jenkins webhook address, for example http://my-jenkins.example.com/github-webhook/ . In this way, all push events in this code repo will be pushed to Jenkins.
You need to install KubeVela in your Kubernetes cluster and enable the apiserver function. Refer to official doc for details.
We use a simple HTTP server as the example. Here, we declare a constant named VERSION and print it when the HTTP service is accessed. A simple test is also set up, which can be used to validate the format of VERSION.
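A minimal sketch of such a server might look like the following (the port number and response text are assumptions for illustration; only the VERSION constant itself comes from the article):

```go
package main

import (
	"fmt"
	"net/http"
)

// VERSION is the constant the delivery process bumps on each release
// (e.g. 0.1.1 -> 0.1.2); the HTTP service prints it on every request.
const VERSION = "0.1.1"

// versionMessage builds the response body; it is a separate function
// so the format can be unit-tested without starting a server.
func versionMessage() string {
	return fmt.Sprintf("Version: %s", VERSION)
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, versionMessage())
	})
	// Port 8088 is an arbitrary choice for this sketch.
	http.ListenAndServe(":8088", nil)
}
```

Printing the version on every request is what later lets us observe which replica served a request during the canary rollout.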
To build a container image for the HTTP server and publish it as a KubeVela Application into Kubernetes, we also need two more files in the code repo: a Dockerfile and app.yaml. They describe how the container image is built and configure the KubeVela Application, respectively.
In app.yaml, we declare that the application should contain 5 replicas and expose the service through Ingress.
The labels trait is used to tag the Application's Pods with the current git commit id. The delivery pipeline in Jenkins injects GIT_COMMIT into it and submits the Application configuration to the KubeVela apiserver, which triggers an update of the Application. The application first updates 2 replicas, then pauses and waits for manual approval. After the developer confirms the change is valid, the remaining 3 replicas are updated. This canary release process is configured by the rollout trait declared in the Application.
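Put together, the traits described above might look roughly like this in app.yaml (field names follow the KubeVela v1.1 labels and rollout trait schemas; treat the exact keys, component properties, and image name as assumptions and check the trait definitions installed in your cluster):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: cicd-demo-app
  namespace: kubevela-demo-namespace
spec:
  components:
    - name: cicd-demo-app
      type: webservice
      properties:
        image: my-dockerhub-user/cicd-demo:latest  # replaced by the pipeline
      traits:
        - type: labels
          properties:
            jenkins-build-commit: GIT_COMMIT       # injected by the pipeline
        - type: rollout
          properties:
            targetSize: 5         # 5 replicas in total
            rolloutBatches:
              - replicas: 2       # first batch, then pause
              - replicas: 3       # released after manual approval
            batchPartition: 0     # deleting this confirms the release
```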
In this article, we set up two pipelines in Jenkins. One is the test pipeline, which runs tests against the application code. The other is the delivery pipeline, which builds container images, uploads them to the image repo, and then updates the application configuration.
Create a new pipeline in Jenkins. Set Build Triggers as GitHub hook trigger for GITScm polling.
This pipeline first uses a golang image as the execution environment. It then checks out the dev branch of the target GitHub repo, indicating that the pipeline is triggered by push events to the dev branch. The pipeline status is written back to GitHub after execution finishes.
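The kind of check this test pipeline runs might look like the following hypothetical helper; the article only states that the test validates the format of the VERSION constant, so the pattern and names here are assumptions:

```go
package main

import (
	"fmt"
	"regexp"
)

// semverPattern matches a plain MAJOR.MINOR.PATCH version string such
// as "0.1.1"; anything else (e.g. "Bad Version Number") is rejected.
var semverPattern = regexp.MustCompile(`^\d+\.\d+\.\d+$`)

// validVersion reports whether v has the expected version format.
func validVersion(v string) bool {
	return semverPattern.MatchString(v)
}

func main() {
	fmt.Println(validVersion("0.1.1"))              // true
	fmt.Println(validVersion("Bad Version Number")) // false
}
```

Failing this check in the test pipeline is exactly what produces the red commit status we will see later in the walkthrough.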
The delivery pipeline, similar to the test pipeline, first pulls the code in the prod branch of the git repo. It then uses Docker to build the image and push it to the remote image repo (here we use DockerHub; the withRegistry function takes the image repo location and the Credential ID of the repo as parameters). After the image is built, the pipeline converts the Application YAML file into a JSON file, with GIT_COMMIT injected. Finally, the pipeline sends a POST request to the KubeVela apiserver (http://18.104.22.168/ here) to create or update the target application.
Currently, the KubeVela apiserver takes JSON objects as input, which is why we do the extra conversion in the delivery pipeline. In the future, the KubeVela apiserver will further improve and simplify this interaction, and admission management will be added to address security.
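Sketched in Go rather than the pipeline's shell steps, the convert-and-submit logic looks roughly like this (the apiserver route and payload shape are assumptions; consult the apiserver API of your KubeVela version for the real interface):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// applicationJSON marshals the (already parsed) Application object
// into the JSON body the apiserver expects.
func applicationJSON(app map[string]interface{}) ([]byte, error) {
	return json.Marshal(app)
}

// postApplication submits the Application to the KubeVela apiserver.
// The route below is illustrative only, not the documented API.
func postApplication(apiserver string, app map[string]interface{}) error {
	body, err := applicationJSON(app)
	if err != nil {
		return err
	}
	resp, err := http.Post(
		apiserver+"/v1/namespaces/kubevela-demo-namespace/applications",
		"application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("apiserver returned %s", resp.Status)
	}
	return nil
}

func main() {
	app := map[string]interface{}{"name": "cicd-demo-app"}
	b, err := applicationJSON(app)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // prints {"name":"cicd-demo-app"}
	// postApplication("http://18.104.22.168", app) would then submit it.
}
```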
In this case, we create an application named cicd-demo-app in the Namespace kubevela-demo-namespace. Note that the Namespace needs to be created in Kubernetes in advance; the KubeVela apiserver will simplify this in a later version.
After finishing the configuration process described above, the whole process of continuous delivery has already been set up. Let's check how it works.
First, we set the VERSION constant to an invalid value, Bad Version Number. Then we submit this change to the dev branch. We can see that the test pipeline in Jenkins is triggered and the failure status is written back to GitHub.
We edit the VERSION constant to 0.1.1 again and resubmit it. Now we see the test pipeline execute successfully, with the commit in GitHub marked as succeeded.
Then we issue a Pull Request to merge the dev branch into the prod branch. The Jenkins delivery pipeline is triggered once the Pull Request is accepted. After execution finishes, the latest commit in the prod branch is also marked as succeeded.
As shown above, the target application is successfully accepted by the KubeVela apiserver, and the related resources are created by the KubeVela controller. The current replica count of the Deployment is 2. After deleting batchPartition: 0 from the rollout trait of the application, which confirms the current release, the Deployment replica count is updated to 5. Now we can access the domain configured in the Ingress and get the current version number.
Repeat the steps above: upgrade the version number to 0.1.2 and let both the test pipeline and the delivery pipeline finish. Then we will see a version change in the Deployment managed by the target application: the replica count of the old Deployment decreases from 5 to 3, while the new one holds 2 replicas at this moment. If we access the service now, we sometimes get the old version number and sometimes the new one. This is because during the rolling update of the application, replicas of both the new and old versions exist, and incoming traffic is dispatched to both. We can therefore observe the two different versions at the same time.
After confirming the new service functions correctly, we can remove batchPartition: 0 as described above to complete the whole canary release.
In summary, we executed the whole continuous delivery process successfully. Throughout, developers can easily update and deploy their applications with the help of KubeVela and Jenkins. Besides, developers can use their favorite tools at each stage, such as substituting GitHub with GitLab, or using Travis CI instead of Jenkins.
Readers might also notice that this process can not only upgrade the application service but also change the deployment plan by editing app.yaml, such as scaling up or adding sidecar containers, which works like classic push-style GitOps. For more about KubeVela and GitOps, refer to the related case studies.