
KubeVela v1.3 released, CNCF's Next Generation of Cloud Native Application Delivery Platform

KubeVela Community

KubeVela Team

Thanks to contributions from hundreds of developers in the KubeVela community and around 500 PRs from more than 30 contributors, KubeVela v1.3 is officially released. Compared with v1.2, released three months ago, this version delivers a large number of new features across three areas: the OAM engine (Vela Core), the GUI dashboard (VelaUX), and the addon ecosystem. These features grew out of in-depth practice by end users such as Alibaba, LINE, China Merchants Bank, and iQiyi, and have now become part of the KubeVela project for everyone to use out of the box.

Pain Points of Application Delivery#

So, what challenges have we encountered in cloud-native application delivery?

Hybrid cloud and multi-cluster are the new norm#

On one hand, as the services of global cloud providers mature, most enterprises now build infrastructure primarily on cloud providers, with self-built infrastructure as a supplement. More and more enterprises can directly enjoy the convenience brought by the development of cloud technology, use the elasticity of the cloud, and reduce the cost of self-built infrastructure. Enterprises need a standardized application delivery layer that covers containers, cloud services, and various self-built services in a unified manner, so as to easily achieve interoperability across clouds and reduce the risks of tedious application migration, making the move to the cloud worry-free.

On the other hand, for security concerns such as infrastructure stability and multi-environment isolation, and due to limits on the size a single Kubernetes cluster can reach, more and more enterprises are adopting multiple Kubernetes clusters to manage container workloads. How to manage and orchestrate container applications at the multi-cluster level, solving problems such as scheduling, dependencies, versioning, and gray release, while still giving business developers a low-threshold experience, is a big challenge.

Hybrid-cloud and multi-cluster delivery in the modern sense, then, involves not only multiple Kubernetes clusters, but also diverse workloads and DevOps capabilities for managing cloud services, SaaS, and self-built services.

How to pick from 1,000+ technologies in the cloud-native era#

Take the open-source projects in the CNCF ecosystem as an example: they now number more than 1,000. For teams of different scales, industries, and technical backgrounds, R&D teams appear to be doing similar application delivery and management, yet as requirements and usage scenarios change, hugely different technology stacks emerge. This implies a very large learning cost and a high threshold for integration and migration. Meanwhile, the thousands of CNCF projects keep tempting us to integrate new ones, add new features, and better accomplish business goals. The era of the static technology stack is long gone.

Figure 1. CNCF landscape

Next-generation application delivery and management require flexible assembly capabilities: starting from a minimal capability set, new functions can be added at a small cost according to the team's needs, without significantly enlarging the platform. Traditional PaaS solutions built around one fixed set of experiences have proven unable to keep up with a team's changing scenarios as its product evolves.

The next step for DevOps: delivering and managing applications on diverse infrastructures#

For more than a decade, DevOps technology has been evolving to increase productivity. The production process of business applications has also changed greatly: from the traditional cycle of coding, testing, packaging, deployment, maintenance, and observation, to one where ever-stronger cloud infrastructure means various API-based SaaS services directly become an integral part of the application. With the diversification of development languages, deployment environments, and components, the traditional DevOps toolchain is gradually unable to cope, while the complexity of user needs grows exponentially.

DevOps lives on, but it needs different solutions. For modern application delivery and management, the pursuit is the same: reduce human input as much as possible and become more intelligent. The new generation of DevOps technology needs easier-to-use integration capabilities, service mesh capabilities, and management capabilities that unify observation and maintenance. At the same time, the tools must stay simple and easy to use, with the complexity kept inside the platform. Enterprises can then combine their own business needs, connect new architectures with legacy systems, and assemble a platform solution that fits their team, avoiding a new platform that becomes a burden on business developers or the enterprise.

The Path of KubeVela Lies Ahead#

To build the next-generation application delivery platform, here is what we do:

Figure 2. Overview of the OAM/KubeVela ecosystem

OAM (Open Application Model): a methodology evolving through fast-paced practice#

Based on the internal practices of Alibaba and Microsoft, we launched OAM, a brand-new application model, in 2019. Its core idea is separation of concerns: through the unified abstractions of components and traits, it standardizes business development in the cloud-native era, makes collaboration between development and DevOps teams more efficient, and avoids the complexity caused by differences between infrastructures. We then released KubeVela as a standardized implementation of the OAM model, helping companies adopt OAM quickly while ensuring that OAM-compliant applications can run anywhere. In short, OAM describes the complete shape of a modern application declaratively, while KubeVela runs it toward the final state OAM declares. Through a reconcile loop oriented to this final state, the two jointly ensure the consistency and correctness of application delivery.
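For a concrete picture, here is a minimal sketch of an OAM application in KubeVela's YAML format (the component name and image are hypothetical; webservice and scaler are the built-in definitions used throughout this post):

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: minimal-app
spec:
  components:
    - name: web               # what to run: the developer's concern
      type: webservice
      properties:
        image: nginx:1.21
        port: 80
      traits:                 # how to operate it: the platform team's concern
        - type: scaler
          properties:
            replicas: 2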

Recently, Google published a paper on its internal infrastructure practice, "Prodspec and Annealing". Its design concepts and practices are strikingly similar to those of OAM and KubeVela, showing that enterprises around the world share the same vision for delivering cloud-native applications. The paper also reconfirms the direction of the standardized model behind KubeVela. In the future, we will continue to advance the OAM model based on the community's practice with KubeVela, and keep distilling best practices into methodology.

A universal hybrid environment and multi-cluster delivery control plane#

The kernel of KubeVela exists in the form of a CRD controller, so it integrates easily with the Kubernetes ecosystem, and the OAM model is compatible with the Kubernetes API. Beyond the abstraction and orchestration capabilities of the OAM model, KubeVela's microkernel is also a natural application delivery control plane designed for multi-cluster and hybrid-cloud environments. This means KubeVela can seamlessly connect diverse workloads such as cloud resources and containers, and orchestrate and deliver them across different clouds and clusters.

Beyond basic orchestration, one core feature of KubeVela is that it lets users customize the delivery workflow. Workflow steps include deploying components to clusters, setting up manual approvals, sending notifications, and more. When workflow execution reaches a stable state (such as waiting for manual approval), KubeVela automatically maintains that state. Through the CUE-based configuration language, you can also integrate any IaC-style process, such as a Kubernetes CRD, a SaaS API, a Terraform module, or an image script. KubeVela's IaC extensibility lets it integrate Kubernetes ecosystem technologies at a very low cost, so platform builders can quickly incorporate them into their own PaaS or delivery systems, and other ecosystem capabilities can be standardized for enterprise users through the same extensibility.
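As a sketch, a customized delivery workflow declared inside an Application might look like this (step and policy names are hypothetical; deploy and suspend are the step types used throughout this post):

workflow:
  steps:
    - name: deploy-test
      type: deploy
      properties:
        policies: ["topology-test"]    # a topology policy defined elsewhere in the app
    - name: manual-gate
      type: suspend                    # pause here until a human resumes the workflow
    - name: deploy-prod
      type: deploy
      properties:
        policies: ["topology-prod"]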

Beyond the advanced model and extensible kernel, we have also heard many calls from the community for an out-of-the-box product that makes KubeVela easier to use. Since version 1.2 the community has invested in the GUI dashboard (VelaUX) project which, based on KubeVela's microkernel and the OAM model, creates a delivery platform for CI/CD scenarios. We hope enterprises can adopt VelaUX swiftly to meet today's business needs, with a robust, extensible foundation for the businesses of tomorrow.

Figure 3. Product architecture of KubeVela

Around this path, in version 1.3, the community brought the following updates:

Enhancement as a Kubernetes Multi-Cluster Control Plane#

Switch to multi-cluster seamlessly, with no migration#

After an enterprise completes its application transformation to a cloud-native architecture, does it still need to transform its configuration when switching to multi-cluster deployment? The answer is no.

KubeVela is built on a multi-cluster foundation from the start. As shown in Figure 4, this application YAML describes an application whose Nginx component will be published to all clusters labeled region=hangzhou. For the same application description, we only need to name the target clusters in a Policy, or filter a set of clusters by label.

Figure 4. OAM application: selecting deployment clusters
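Since Figure 4 is not reproduced here, a sketch of such a label-based policy (names hypothetical) looks like:

policies:
  - type: topology
    name: hangzhou-clusters
    properties:
      clusterLabelSelector:     # select every managed cluster carrying this label
        region: hangzhou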

Of course, the application description in Figure 4 is entirely based on the OAM specification. If your current application is defined with Kubernetes native resources, don't worry: we support a smooth transition from it. Figure 5 below, "Referencing Kubernetes resources for multi-cluster deployment," describes an application whose component references a Secret that exists in the control-plane cluster and publishes it to all clusters labeled region=hangzhou.

Figure 5. Referencing Kubernetes native resources
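A sketch of such a component, using the ref-objects component type available in recent KubeVela releases (the Secret name is hypothetical):

components:
  - name: shared-credential
    type: ref-objects            # reference an existing object instead of creating a new one
    properties:
      objects:
        - resource: secret       # the Secret already exists in the control-plane cluster
          name: image-credential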

In addition to multi-cluster deployment of applications, referencing Kubernetes objects can also be used in scenarios such as multi-cluster replication of existing resources, cluster data backup, etc.

Handling multi-cluster differences#

Although an application is described in a unified OAM model, its deployment may differ between clusters: different regions may use different environment variables and image registries; different clusters may deploy different components; or one component may be deployed in multiple clusters for high availability. For such requirements we provide differentiated configuration through policies, as shown in Figure 6. The first and second policies, of type topology, define target strategies in two ways; the third deploys only the specified components; the fourth deploys two selected components and overrides the image configuration of one of them.

Figure 6. Differentiated configuration across clusters

KubeVela supports flexible differentiation policies, configurable through component properties, traits, and other forms. As shown above, the third policy selects components, and the fourth overrides an image version. Note that no target is specified when describing the difference: a differentiated configuration is applied flexibly by combining it with a target policy in the workflow steps.
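As a sketch, an override policy of this kind (component and image names hypothetical) might look like:

policies:
  - type: override
    name: override-image
    properties:
      components:
        - name: nginx
          properties:
            image: nginx:1.20    # applied only where a workflow step combines this policy with a target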

Configure a multi-cluster delivery process#

The delivery process to different target clusters is controlled and described by a workflow. As shown in Figure 7, two deploy steps are used, each adopting its own target policy and differentiation policy. Policies only need to be defined atomically; combining them in workflow steps flexibly meets the requirements of different scenarios.

Figure 7. Customizing the multi-cluster delivery process

There are many more usages for delivery workflow, including multi-cluster canary release, manual approval, precise release control, etc.

Version control, safe and traceable#

The description of a complex application changes constantly under agile development. To make releases safe, we need the ability to roll an application back to a previous correct state, during or after a release. Therefore, we introduced a more robust versioning mechanism in this release.

Figure 8. Querying an application's historical versions

We can query every past version of an application, including its release time and whether it succeeded. We can compare changes between versions, and when a release fails, quickly roll back based on the snapshot rendered by the last successful version. After releasing a new version that fails, you don't need to change the configuration source; you can re-release directly from a historical version. This version-control mechanism embodies the centralized idea of application configuration management: the complete application description is rendered uniformly, then checked, stored, and distributed.
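Each release is recorded as an ApplicationRevision resource on the control plane, so the history can be inspected with plain kubectl (the application name here is hypothetical):

# List the recorded revisions of an application
kubectl get applicationrevisions.core.oam.dev -n default
# Inspect the full snapshot a revision was rendered from
kubectl get applicationrevisions.core.oam.dev example-app-v1 -n default -o yaml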

See more Vela Core usages#

VelaUX Introduces Multi-Tenancy Isolation and User Authentication#

Multi-tenancy and isolation for enterprises#

In VelaUX we introduce the concept of a Project, which separates tenants for safety and scopes application delivery targets, environments, team members, permissions, and so on. Figure 9 shows the project list page, where administrators can create projects according to team needs and allocate the corresponding resources. This capability becomes very important when multiple teams or project groups in an enterprise publish their business applications on the same VelaUX platform.

Figure 9. Project management page

Open Authentication & RBAC#

As a vital platform, user authentication is one of the basic capabilities it must possess. Since version 1.3, we support both user authentication and RBAC-based authorization.

We believe most enterprises have built a unified authentication platform (OAuth or LDAP). Therefore VelaUX integrates single sign-on through Dex first, supporting LDAP, OIDC, GitLab/GitHub, and other authentication methods, and positioning VelaUX as one of the portals behind that sign-on. Of course, if your team does not need unified authentication, we also provide basic local user management.

Figure 10. Local user management
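If you take the SSO path, Dex ships as a community addon (see the addon updates below); enabling it is typically a one-liner, assuming the addon name as published in the community catalog:

vela addon enable dex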

For authorization, we use the RBAC model. Still, plain RBAC cannot handle more precise permission-control scenarios, such as granting the operation rights of a single application to specific users. We therefore inherit the design concept of IAM and expand permissions into policies composed of resource + action + condition + behavior. The authorization system (front-end UI and back-end API) implements policy-oriented, fine-grained checks. As for granting, the current version only ships some built-in standard permission policies; subsequent versions will provide the ability to create custom permissions.

At the same time, we have seen that some large enterprises have built independent IAM platforms. The RBAC data model of VelaUX is the same as that of common IAM platforms, so users who wish to connect VelaUX to their self-built IAM can extend it seamlessly.

More secure centralized DevOps#

Application delivery inevitably involves some operations-oriented configuration management, and in multi-cluster scenarios the need is especially prominent: authentication for private image registries, credentials for Helm repositories, SSL certificates, and so on. We need to manage the validity of these configurations uniformly and synchronize them securely to wherever they are needed, preferably without business developers being aware of it.

In version 1.3 we introduced an integrated configuration-management module in VelaUX. Under the hood it also uses component templates and the application resource-distribution pipeline to manage and distribute configurations, currently stored and distributed as Secrets. The configuration lifecycle is independent of business applications, and the distribution process is maintained independently per project. Administrators only need to fill in configuration information according to the configuration template.

Figure 11. Integrated configuration management

Various addons provide different configuration types, and users can define more types according to their needs and manage them uniformly. Business-level configuration management is also on the community's roadmap.

See more VelaUX usages#

Introducing version control to the Addon ecosystem#

The addon feature, introduced in version 1.2, provides a specification for extension plug-ins plus installation and operation management capabilities. The community extends KubeVela's ecosystem by building different addons. As the plug-ins and the framework keep iterating, version-compatibility problems gradually emerge, and we urgently need a version-management mechanism.

  • Addon version distribution: the community's official addons are developed and managed on GitHub. Besides the version of the integrated third-party product, each addon also contains Definitions and other configurations, so after each release the addon is packaged under its own version number and its history is preserved. We also reuse the Helm Chart distribution API specification to distribute addons.

Controllable multi-cluster Addon installation#

Some addons need to be installed in sub-clusters, such as the FluxCD addon shown in Figure 12, which provides Helm Chart rendering and deployment capabilities. In the past this installation was distributed to all sub-clusters, but community feedback showed that not every addon is needed in every cluster. We need a differential mechanism to install extensions into specified clusters on demand.

Figure 12. Addon configuration

Users can specify the clusters to deploy to when enabling an addon, and the system will deploy the addon according to that configuration.

New members to Addon ecosystem#

While the framework's capabilities keep iterating, the community's existing addons are also continuously being added to and upgraded. Cloud-service support has increased to seven vendors, and addons for AI training and serving, Kruise Rollout, Dex, and more have been added. The Helm Chart addon and the OCM cluster-management addon have also been updated for a better user experience.

More Addon usages#

Recent roadmap#

As the KubeVela core becomes more and more stable, its extensibility is gradually being unleashed. The community accelerated through versions 1.2 and 1.3, and going forward we will iterate new versions on a two-month cycle. The next release, 1.4, will add the following features:

  • Observability: Provide a complete observability solution around logs, metrics, and traces, provide out-of-the-box observability of the KubeVela system, allow custom observability configuration, and integrate existing observability components or cloud resources.
  • Offline installation: Provide relatively complete offline installation tools and solutions to facilitate more users to use KubeVela in an offline environment.
  • Multi-cluster permission management: Provides in-depth permission management capabilities for Kubernetes multi-cluster.
  • More out-of-the-box Addon capabilities.

The KubeVela community is looking forward to your joining to build an easy-to-use and standardized next-generation cloud-native application delivery and management platform!

Easily Manage your Application Shipment With Differentiated Configuration in Multi-Cluster

Wei Duan

KubeVela Team

In today's multi-cluster business scenarios, we often encounter these typical requirements: distributing to multiple specific clusters, grouping distributions according to business needs, and differentiated configuration across clusters.

KubeVela v1.3 iterates on the previous multi-cluster capabilities. This article shows how to use them for swift multi-cluster deployment and management, addressing all of these needs.

Before Starting#

  1. Prepare a Kubernetes cluster as the control plane of KubeVela.
  2. Make sure KubeVela v1.3 and KubeVela CLI v1.3.0 have been installed successfully.
  3. Have the kubeconfig files of the sub-clusters you want to manage ready. We will take three clusters named beijing-1, beijing-2 and us-west-1 as examples.
  4. Download the Multi-Cluster-Demo examples to follow along with KubeVela's multi-cluster capabilities.

Distribute to Multiple Specified Clusters#

Distributing to multiple specified clusters is the most basic multi-cluster management operation. In KubeVela you implement it with a policy of type topology, listing the clusters in its clusters attribute, an array.

First, make sure your kubeconfig points to the control-plane cluster, then use vela cluster join to add the three clusters beijing-1, beijing-2 and us-west-1:

➜ vela cluster join beijing-1.kubeconfig --name beijing-1
➜ vela cluster join beijing-2.kubeconfig --name beijing-2
➜ vela cluster join us-west-1.kubeconfig --name us-west-1
➜ vela cluster list
CLUSTER TYPE ENDPOINT ACCEPTED LABELS
beijing-1 X509Certificate https://47.95.22.71:6443 true
beijing-2 X509Certificate https://47.93.117.83:6443 true
us-west-1 X509Certificate https://47.88.31.118:6443 true

Then open multi-cluster-demo and look into basic.yaml:

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: example-app
  namespace: default
spec:
  components:
    - name: hello-world-server
      type: webservice
      properties:
        image: oamdev/hello-world
        port: 8000
      traits:
        - type: scaler
          properties:
            replicas: 3
        - type: gateway
          properties:
            domain: testsvc-mc.example.com
            # If the sub-clusters run Kubernetes below v1.20, add: classInSpec: true
            http:
              "/": 8000
  policies:
    - type: topology
      name: beijing-clusters
      properties:
        clusters: ["beijing-1","beijing-2"]

As you can see, this app uses a component of type webservice and, through the topology policy, distributes a 3-replica Deployment to each of the beijing-1 and beijing-2 clusters.

Please note that resources can only be distributed into a managed cluster if it contains a namespace with exactly the same name as on the control plane. Since every cluster has the default namespace out of the box, we don't need to worry in this case. But suppose we change the namespace in basic.yaml to multi-cluster; we will receive an error:

...
Status: runningWorkflow
Workflow:
mode: DAG
finished: false
Suspend: false
Terminated: false
Steps
- id:9fierfkhsc
name:deploy-beijing-clusters
type:deploy
phase:failed
message:step deploy: step deploy: run step(provider=oam,do=components-apply): Found 1 errors. [(failed to apply component beijing-1-multi-cluster-0: HandleComponentsRevision: failed to create componentrevision beijing-1/multi-cluster/hello-world-server-v1: namespaces "multi-cluster" not found)]
Services:
...

In future versions of KubeVela, we plan to support a comprehensive authentication system that makes it quick, convenient, and secure to create namespaces in managed clusters through the hub cluster.
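For now, the namespace can be created in each sub-cluster directly with its own kubeconfig (file names follow the examples above):

kubectl --kubeconfig beijing-1.kubeconfig create namespace multi-cluster
kubectl --kubeconfig beijing-2.kubeconfig create namespace multi-cluster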

After creating the sub-clusters' namespaces, come back to the control-plane cluster to create the application and ship out the resources:

➜ vela up -f basic.yaml
Applying an application in vela K8s object format...
"patching object" name="example-app" resource="core.oam.dev/v1beta1, Kind=Application"
✅ App has been deployed 🚀🚀🚀
Port forward: vela port-forward example-app
SSH: vela exec example-app
Logging: vela logs example-app
App status: vela status example-app
Service status: vela status example-app --svc hello-world-server

We use vela status <App Name> to view detailed information about this app:

➜ vela status example-app
About:
Name: example-app
Namespace: default
Created at: 2022-03-25 17:42:33 +0800 CST
Status: running
Workflow:
mode: DAG
finished: true
Suspend: false
Terminated: false
Steps
- id:wftf9d4exj
name:deploy-beijing-clusters
type:deploy
phase:succeeded
message:
Services:
- Name: hello-world-server
Cluster: beijing-1 Namespace: default
Type: webservice
Healthy Ready:3/3
Traits:
✅ scaler ✅ gateway: Visiting URL: testsvc-mc.example.com, IP: 60.205.222.30
- Name: hello-world-server
Cluster: beijing-2 Namespace: default
Type: webservice
Healthy Ready:3/3
Traits:
✅ scaler ✅ gateway: Visiting URL: testsvc-mc.example.com, IP: 182.92.222.128

Both beijing-1 and beijing-2 have received the corresponding resources and display external access IPs, so you can make the service public for your users.

Use Cluster Labels to Do Grouping#

Beyond this basic need, we often encounter additional situations: deploying across regions to certain clusters, targeting a specific cloud provider's clusters, and so on. To achieve such goals, the labels feature can be used.

Suppose the us-west-1 cluster comes from AWS and we must additionally deploy to AWS clusters. You can use the vela cluster labels add command to tag the cluster. If more AWS clusters such as us-west-2 are added later, they will be handled as well once labeled:

➜ ~ vela cluster labels add us-west-1 provider=AWS
Successfully update labels for cluster us-west-1 (type: X509Certificate).
provider=AWS
➜ ~ vela cluster list
CLUSTER TYPE ENDPOINT ACCEPTED LABELS
beijing-1 X509Certificate https://47.95.22.71:6443 true
beijing-2 X509Certificate https://47.93.117.83:6443 true
us-west-1 X509Certificate https://47.88.31.118:6443 true provider=AWS

Next we update the basic.yaml to add an application policy topology-aws:

...
  policies:
    - type: topology
      name: beijing-clusters
      properties:
        clusters: ["beijing-1","beijing-2"]
    - type: topology
      name: topology-aws
      properties:
        clusterLabelSelector:
          provider: AWS

To save time, please deploy intermediate.yaml directly:

➜ ~ vela up -f intermediate.yaml

Review the status of the application again:

➜ vela status example-app
...
- Name: hello-world-server
Cluster: us-west-1 Namespace: default
Type: webservice
Healthy Ready:3/3
Traits:
✅ scaler ✅ gateway: Visiting URL: testsvc-mc.example.com, IP: 192.168.40.10

Differentiated Configuration#

Apart from the above scenarios, we often have further strategic needs, such as high availability with 5 replicas. In this case, use the override policy:

...
        clusterLabelSelector:
          provider: AWS
    - type: override
      name: override-high-availability
      properties:
        components:
          - type: webservice
            traits:
              - type: scaler
                properties:
                  replicas: 5

At the same time, we want only the AWS clusters to get this high availability, and KubeVela's workflow can give us a hand. We use the following workflow: it first distributes the app to the Beijing clusters via the beijing-clusters policy in the deploy-beijing step, then distributes 5 replicas to the clusters labeled AWS:

...
                properties:
                  replicas: 5
  workflow:
    steps:
      - type: deploy
        name: deploy-beijing
        properties:
          policies: ["beijing-clusters"]
      - type: deploy
        name: deploy-aws
        properties:
          policies: ["override-high-availability","topology-aws"]

Then we attach the above policy and workflow to intermediate.yaml, producing advanced.yaml:

...
  policies:
    - type: topology
      name: beijing-clusters
      properties:
        clusters: ["beijing-1","beijing-2"]
    - type: topology
      name: topology-aws
      properties:
        clusterLabelSelector:
          provider: AWS
    - type: override
      name: override-high-availability
      properties:
        components:
          - type: webservice
            traits:
              - type: scaler
                properties:
                  replicas: 5
  workflow:
    steps:
      - type: deploy
        name: deploy-beijing
        properties:
          policies: ["beijing-clusters"]
      - type: deploy
        name: deploy-aws
        properties:
          policies: ["override-high-availability","topology-aws"]

Then deploy it, view the status of the application:

➜ vela up -f advanced.yaml
Applying an application in vela K8s object format...
"patching object" name="example-app" resource="core.oam.dev/v1beta1, Kind=Application"
✅ App has been deployed 🚀🚀🚀
Port forward: vela port-forward example-app
SSH: vela exec example-app
Logging: vela logs example-app
App status: vela status example-app
Service status: vela status example-app --svc hello-world-server
➜ vela status example-app
...
- Name: hello-world-server
Cluster: us-west-1 Namespace: default
Type: webservice
Healthy Ready:5/5
Traits:
✅ scaler ✅ gateway: Visiting URL: testsvc-mc.example.com, IP: 192.168.40.10

That's all we'd like to share with you this time. Thank you for reading and trying it out!

We invite you to explore KubeVela v1.3 further to meet more complex business requirements, for example digging deeper into differentiated configuration with the override policy to override either all resources of one type or only certain specific components.

China Merchants Bank's Practice on Offline Installation with KubeVela

Xiangbo Ma

(Cloud platform development team)

The cloud platform development team of China Merchants Bank has been trialing KubeVela internally since 2021, aiming to use it to enhance our primary application delivery and management capabilities. Because of the specific security requirements of the financial industry, network controls are relatively strict: our intranet cannot pull from Docker Hub directly, and no Helm chart repository is reachable either. Therefore, to land KubeVela on the intranet, a complete offline installation is required.

This article takes KubeVela v1.2.5 as an example and introduces our offline installation practice, to help other users complete KubeVela deployments in offline environments more easily.

KubeVela Offline Installation Solution#

We divide the offline installation of KubeVela into three parts: Vela CLI, Vela Core, and addons. Each part mainly involves loading the relevant Docker images and Helm packages, which greatly speeds up the deployment process in an offline environment.

Before doing so, please ensure that your Kubernetes cluster version is >= v1.19 and < v1.22. KubeVela as a control plane only relies on Kubernetes, which can come from any product or cloud provider. You can also use Kind or Minikube to deploy KubeVela locally.

Vela CLI Offline Installation#

  • First, download the vela binary release you want from the KubeVela release page
  • Unzip binary files and configure the appropriate environment variables in $PATH
    • Unzip binary file
      • tar -zxvf vela-v1.2.5-linux-amd64.tar.gz
      • mv ./linux-amd64/vela /usr/local/bin/vela
    • Set environment variables
      • vi /etc/profile
      • export PATH="$PATH:/usr/local/bin"
      • source /etc/profile
    • Verify the installation of Vela CLI through vela version
CLI Version: v1.2.5
Core Version:
GitRevision: git-ef80b66
GolangVersion: go1.17.7
  • At this point, Vela CLI has been deployed offline!

Vela Core Offline Installation#

  • Before deploying Vela Core offline, you first need to install Helm in the offline environment; its version must be v3.2.0+
  • Prepare the Docker images. Vela Core's deployment mainly involves 5 images. Download them from Docker Hub on a machine with internet access, then load them into the offline environment
    • Pull the image from Docker Hub
      • docker pull oamdev/vela-core:v1.2.5
      • docker pull oamdev/cluster-gateway:v1.1.7
      • docker pull oamdev/kube-webhook-certgen:v2.3
      • docker pull oamdev/alpine-k8s:1.18.2
      • docker pull oamdev/hello-world:v1
    • Save image to local disks
      • docker save -o vela-core.tar oamdev/vela-core:v1.2.5
      • docker save -o cluster-gateway.tar oamdev/cluster-gateway:v1.1.7
      • docker save -o kube-webhook-certgen.tar oamdev/kube-webhook-certgen:v2.3
      • docker save -o alpine-k8s.tar oamdev/alpine-k8s:1.18.2
      • docker save -o hello-world.tar oamdev/hello-world:v1
    • Re-load the images in the offline environment
      • docker load -i vela-core.tar
      • docker load -i cluster-gateway.tar
      • docker load -i kube-webhook-certgen.tar
      • docker load -i alpine-k8s.tar
      • docker load -i hello-world.tar
  • Download the KubeVela Core source, copy it to the offline environment, and repackage it with Helm
    • Repackage the KubeVela source code and install the chart package to the control cluster offline
      • helm package kubevela/charts/vela-core --destination kubevela/charts
      • helm install --create-namespace -n vela-system kubevela kubevela/charts/vela-core-0.1.0.tgz --wait
    • Check the output
KubeVela control plane has been successfully set up on your cluster.
  • At this point, Vela Core has been deployed offline!

Addon Offline Installation#

  • First download the Catalog Source and copy it to the offline environment
  • Here we take VelaUX, one of the many addons, as an example. First prepare its Docker images: VelaUX mainly involves 2 images. Download them from Docker Hub on a machine with internet access, then load them into the offline environment
    • Pull the image from Docker Hub
      • docker pull oamdev/vela-apiserver:v1.2.5
      • docker pull oamdev/velaux:v1.2.5
    • Save image to local disks
      • docker save -o vela-apiserver.tar oamdev/vela-apiserver:v1.2.5
      • docker save -o velaux.tar oamdev/velaux:v1.2.5
    • Re-load the images in the offline environment
      • docker load -i vela-apiserver.tar
      • docker load -i velaux.tar
  • Install VelaUX
    • Install VelaUX via Vela CLI
      • vela addon enable catalog-master/addons/velaux
    • Check the output
Addon: velaux enabled Successfully.
  • If the cluster has a Route controller or an Nginx Ingress Controller installed and linked with an available domain, you can create an external route to make VelaUX accessible. Here we present an OpenShift Route as an example; you can also choose Ingress if you wish
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: velaux-route
  namespace: vela-system
spec:
  host: velaux.xxx.xxx.cn
  port:
    targetPort: 80
  to:
    kind: Service
    name: velaux
    weight: 100
  wildcardPolicy: None
  • Check the installation
curl -I -m 10 -o /dev/null -s -w %{http_code} http://velaux.xxx.xxx.cn/applications
  • At this point, VelaUX has been deployed offline! For the offline deployment of other addons, go to the corresponding directory of the Catalog Source and repeat the steps above, and you will have all your addons deployed offline.

Summary#

During offline deployment, we also tried exporting the resources generated by Vela Core and the addons as YAML files after deploying on the internet-connected side, then re-applying them in the offline environment. But because of the many kinds of resources involved and the additional authorization issues to resolve, that approach is far more cumbersome.

We hope this offline deployment practice helps you build a complete KubeVela setup in an offline environment much faster. Offline installation is a pain point for many developers, and the KubeVela community is introducing the brand-new velad, a fully offline, highly reliable installation tool. Velad automates many steps in one go, such as preparing clusters, downloading and packaging images, and installation. It supports even more: on a Linux machine (such as an Alibaba Cloud ECS) it can locally spin up a cluster and install Vela Core; and when running a KubeVela control plane, you don't have to worry about losing data if the machine behind it shuts down accidentally, because velad can store all control-plane data in an external database (such as MySQL deployed on another ECS).

In the versions to come, China Merchants Bank will increase its investment in the KubeVela open-source community, actively building enterprise-grade capabilities: multi-cluster enhancements, offline deployment, and application-level observability. We will also contribute the financial industry's user scenarios and business needs, driving the cloud-native ecosystem toward an easier and more efficient application-management experience. Last but not least, we welcome community members to join us on this journey.

Use Nocalhost and KubeVela for cloud debugging and multi-cluster hybrid cloud deployment

Tianxin Dong and Yicai Yu

KubeVela and Nocalhost team

With the rapid development of cloud native, how can we use the cloud to empower business development? When launching applications, how can developers easily develop and debug applications in a multi-cluster, hybrid-cloud environment? And during deployment, how can we make application rollouts sufficiently verified and reliable?

These crucial issues urgently need to be resolved.

In this article, we will use KubeVela and Nocalhost to provide a solution for cloud debugging and multi-cluster hybrid cloud deployment.

When a new application is developed and launched, we want the results debugged in the local IDE to be consistent with the final deployed state in the cloud. Such consistency gives us the greatest confidence in deployment and lets us iterate application updates in an efficient, agile way like GitOps: when new code is pushed to the code repository, the applications in the environment are automatically updated in real time.

Based on KubeVela and Nocalhost, the deployment flow looks like this:


Use KubeVela to create an application and deploy it to the test environment, then pause. Use Nocalhost to debug the application in the cloud. After debugging, push the debugged code to the code repository, deploy with KubeVela using GitOps, and after verification in the test environment, update it to the production environment.

In the following, we will introduce how to use KubeVela and Nocalhost for cloud debugging and multi-cluster hybrid cloud deployment.

What is KubeVela#

KubeVela is an easy-to-use and highly scalable application delivery platform built on Kubernetes and OAM. Its core capability is to allow developers to easily and quickly define and deliver modern microservice applications on Kubernetes without knowing any details related to Kubernetes itself.

KubeVela also provides VelaUX, which visualizes the entire application delivery process, making application management easier.

KubeVela provides the following capabilities in this scenario:

  1. Full GitOps capabilities:
  • KubeVela supports GitOps in both pull mode and push mode: we only need to push updated code to the code repository, and KubeVela automatically re-deploys applications based on the latest code. This article uses GitOps in push mode; for pull-mode support, you can check this article.
  2. Powerful workflow capabilities, including cross-environment (cluster) deployment, approval, and notification:
  • With its workflow capabilities, KubeVela can easily deploy applications across environments, and lets users add workflow steps such as manual approval and message notification.
  3. Application abstraction capabilities that let developers understand, use, and customize infrastructure capabilities easily:
  • KubeVela follows OAM and provides a set of simple, easy-to-use application abstractions, making it easy for developers to understand applications and customize infrastructure capabilities. A simple application divides into three parts: components, traits, and workflow. In this article's example, the component is a simple front-end application; in the traits we bind the Nocalhost trait to the component so it can be debugged in the cloud; in the workflow we first deploy the component to the test environment and automatically suspend, then deploy to production once manual verification and approval pass.

What is Nocalhost#

Nocalhost is a tool that allows developers to develop applications directly within a Kubernetes cluster.

The core capability of Nocalhost is its IDE plugins (for VSCode and JetBrains), which switch remote workloads into development mode. In development mode, the container's image is replaced with a development image containing development tools (e.g. JDK, Go, Python environments). As developers write code locally, any changes are synchronized to the remote development container in real time, and the application is updated immediately (depending on the application's hot-reload mechanism, or by re-running the application); the development container inherits all of the original workload's configuration (ConfigMap, Secret, Volume, Env, etc.).

Nocalhost also provides: debugging and HotReload for VSCode and JetBrains IDEs; a terminal into the development container from the IDE, for an experience consistent with local development; and namespace-isolated development spaces, including mesh development spaces. In addition, Nocalhost provides a server side to help enterprises manage Kubernetes applications, developers, and development spaces, so that enterprises can manage their development and testing environments in a unified way.

When developing Kubernetes applications with Nocalhost, image building, pushing new image versions, and waiting for the cluster to schedule Pods are all eliminated, reducing the code/test/debug cycle from minutes to seconds.

Debug application in the cloud#

Let's take a simple front-end application as an example. First, we use VelaUX to deploy it to multiple delivery targets.

If you don't know how to enable KubeVela's VelaUX addon, please check the official documentation.

Use VelaUX to deploy application#

Create an environment in VelaUX; each environment can have multiple delivery targets. Let's take an environment containing a test and a production delivery target as an example.

First, create two delivery targets, one for test and one for production. The delivery targets here deliver resources to the test and prod namespaces of the local cluster, respectively. You can also add new clusters for deployment through VelaUX's cluster management capabilities.


After creating the delivery targets, create a new environment which contains these two delivery targets.


Then, create a new application for cloud debugging. This front-end application exposes its service on port 80, so we open port 80 for the application.


After the application is created, it comes with a default workflow that deploys the application to both delivery targets. But we don't want the un-debugged app deployed straight to the production target, so we edit this default workflow: add a suspend step between the two deploy steps. This way, after deploying to the test target, the workflow suspends and waits for the user to debug and verify before continuing on to production.
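Expressed as application YAML, the edited workflow corresponds roughly to the following sketch (step and policy names are hypothetical; deploy and suspend are standard step types):

workflow:
  steps:
    - name: deploy-test
      type: deploy
      properties:
        policies: ["target-test"]
    - name: approve
      type: suspend                  # wait here until debugging is verified
    - name: deploy-prod
      type: deploy
      properties:
        policies: ["target-prod"]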


After completing these configurations, let's add a Nocalhost Trait for this application for cloud debugging.

Let's go over a few parameters of the Nocalhost trait in detail:


There are two types of commands, Debug and Run. During development, right-clicking Remote Debug or Remote Run in the plugin runs the corresponding command in the remote Pod. Since we are using a front-end application here, we set the command to yarn serve.



Image here refers to the debug image. Nocalhost provides default images for five languages (go/java/python/ruby/node). You can use a built-in image by filling in the language name, or fill in a full image name to use a custom image. Turning on HotReload enables hot reloading, so you can see the effect directly after modifying the code. PortForward forwards the cloud application's port 80 to local port 8080.


In the Sync section, you can set type to sendReceive (two-way sync) or send (one-way send). After completing the configuration, deploy the app. As you can see, the application automatically suspends after it is deployed to the test target.


At this point, open the Nocalhost plugin in VSCode or a JetBrains IDE; you can see our deployed application under the test namespace. Click the hammer button next to the application to enter debug mode:


After entering Nocalhost debug mode, you can see that the terminal in the IDE has been replaced by the terminal of the container. With the ls command, you can see all the files in the container.


Right-click the application in Nocalhost, and you can choose to enter Remote Debug or Remote Run mode. These two entries automatically execute the Debug and Run commands we configured earlier.


After entering Debug mode, we can see that our cloud application is forwarded to the local port 8080:


Open the local browser and you can see that the version of the front-end application we are currently deploying is v1.0.0:


Now, we can modify the code in the local IDE to change the version to v2.0.0:


In the previous Nocalhost configuration, we have enabled hot reloading. Therefore, if we refresh the local 8080 port page again, we can see that the application version has become v2.0.0:


Now, we can terminate Nocalhost's debug mode and push the debugged code to the code repository.


Multi-Environment Publishing with GitOps#

After we finish debugging, the application in the environment is still the previous v1.0.0 version. So how do we update the applications in the environment?

During the entire cloud debugging process, we only modified the source code. Therefore, we can use GitOps, with code as the update source, to update the application in the environment.

Looking at the applications deployed in VelaUX, you can see that each application will have a default Trigger:


Click Manual Trigger to view the details. VelaUX provides a webhook URL for each application: request this address with the fields that need updating (such as image), and the application is updated quickly and easily. (Note: since the address needs to be exposed externally, you need to use a LoadBalancer or another method to expose the VelaUX service when deploying VelaUX.)


In Curl Command, an example is also provided. Let's parse the request body in detail:

{
  // Required: the update information carried by this trigger
  "upgrade": {
    // The application name is the key
    "<application-name>": {
      // The values to update; this content is patched onto the application
      "image": "<image-name>"
    }
  },
  // Optional: the code information carried by this trigger
  "codeInfo": {
    "commit": "<commit-id>",
    "branch": "<branch>",
    "user": "<user>"
  }
}

upgrade is the update information carried by this trigger; the content under <application-name> is patched onto the application. Updating image is the default recommendation, but you can also extend the fields here to update other properties of the application.

codeInfo is code information, which can be optionally carried, such as commit ID, branch, committer, etc. Generally, these values can be specified by using variable substitution in the CI system.

When our updated code is merged into the code repository, we can add a new step in CI to integrate the code repository with VelaUX. Taking GitLab CI as an example, the following step can be added:

webhook-request:
  stage: request
  before_script:
    - apk add --update curl && rm -rf /var/cache/apk/*
  script:
    - |
      curl -X POST -H "Content-Type: application/json" -d '{"upgrade":{"'"$APP_NAME"'":{"image":"'"$BUILD_IMAGE"'"}},"codeInfo":{"user":"'"$CI_COMMIT_AUTHOR"'","commit":"'"$CI_COMMIT_SHA"'","branch":"'"$CI_COMMIT_BRANCH"'"}}' $WEBHOOK_URL

After the configuration is complete, when the code is updated, the CI will be automatically triggered and the corresponding application in VelaUX will be updated.


When the image is updated, check the application page again, and you can see that the application in the test environment is now version v2.0.0.

After verification in the test delivery target, we can click Continue in the workflow to deploy the latest version of the application to the production delivery target.


Checking the application in the production environment, you can see that the latest v2.0.0 version is already live:


To recap: through KubeVela, we first used Nocalhost for cloud debugging in the test environment; after the verification passed, we updated the code, used GitOps to complete the deployment update, and then promoted the application to the production environment, completing an application's journey from deployment to launch.

Summary#

With KubeVela + Nocalhost, cloud debugging in the development environment is convenient, and updating and deploying to production after testing is easy, making the entire development and release process stable and reliable.

Machine Learning Practice with KubeVela

Tianxin Dong

KubeVela team

With machine learning going viral, AI engineers not only need to train and debug their models, but also need to deploy them online to verify how they behave (though sometimes this part of the work is done by AI platform engineers). This is tedious and draining for AI engineers.

In the cloud-native era, model training and model serving are usually performed on the cloud as well. Doing so improves not only scalability but also resource utilization. This is very effective for machine-learning scenarios that consume a lot of computing resources.

But cloud-native techniques are often difficult for AI engineers to use. The concept of cloud native has grown more complex over time: even to deploy a simple model serving on a cloud-native architecture, an AI engineer may need to learn several additional concepts: Deployment, Service, Ingress, and so on.

As a simple, easy-to-use, and highly extensible cloud-native application management tool, KubeVela lets developers quickly and easily define and deliver applications on Kubernetes without knowing any details of the underlying cloud-native infrastructure. KubeVela's rich extensibility now reaches into AI addons, providing functions such as model training, model serving, and A/B testing, covering the basic needs of AI engineers and helping them train and serve models quickly in a cloud-native environment.

This article focuses on how to use KubeVela's AI addons to help engineers complete model training and model serving more easily.

KubeVela AI Addon#

The KubeVela AI addon is split in two: model training and model serving. The model-training addon is based on KubeFlow's training-operator and supports distributed training in frameworks such as TensorFlow, PyTorch, and MXNet. The model-serving addon is based on Seldon Core, which makes it easy to start a model serving from a trained model, and also supports advanced functions such as traffic distribution and A/B testing.


Through the KubeVela AI addons, the deployment of model training and serving tasks is significantly simplified. At the same time, training and serving can be combined with KubeVela's own workflow, multi-cluster, and other capabilities to deliver production-grade services.

Note: you can find all source code and YAML files in KubeVela Samples. If you want to use the model pretrained in this example, style-model.yaml and color-model.yaml in that folder will copy the model into the PVC.

Model Training#

First enable the two addons for model training and model serving.

vela addon enable model-training
vela addon enable model-serving

Model training includes two component types, model-training and jupyter-notebook, and model serving includes the model-serving component type. The specific parameters of these three components can be viewed through the vela show command.

You can also read KubeVela AI Addon Documentation for more information.

vela show model-training
vela show jupyter-notebook
vela show model-serving

Let's train a simple model using the TensorFlow framework that turns gray images into colored ones. Deploy the following YAML file:

Note: The source code for model training comes from: emilwallner/Coloring-greyscale-images

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: training-serving
  namespace: default
spec:
  components:
    # Train the model
    - name: demo-training
      type: model-training
      properties:
        # Image containing the model's training code
        image: fogdong/train-color:v1
        # Framework used for model training
        framework: tensorflow
        # Declare storage to persist the model. The cluster's default storage class is used to create the PVC
        storage:
          - name: "my-pvc"
            mountPath: "/model"

At this point, KubeVela will pull up a TFJob for model training.
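Since the training addon builds on training-operator, the job can be watched through its TFJob CRD (a quick check, assuming the default namespace used above):

# List TFJobs created by the model-training component
kubectl get tfjobs -n default
# Follow the training pods as they run
kubectl get pods -n default --watch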

It's hard to see what's going on just by training the model. Let's modify this YAML file and add the model serving after the model training step. Also, since the model serving directly starts the model, and the model's input and output are not intuitive (ndarrays or Tensors), we deploy a test service that calls the model serving and converts the result into an image.

Deploy the following YAML file:

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: training-serving
  namespace: default
spec:
  components:
    # Train the model
    - name: demo-training
      type: model-training
      properties:
        image: fogdong/train-color:v1
        framework: tensorflow
        storage:
          - name: "my-pvc"
            mountPath: "/model"
    # Start the model serving
    - name: demo-serving
      type: model-serving
      # The model serving will start after model training is complete
      dependsOn:
        - demo-training
      properties:
        # The protocol used by the model serving; can be left blank to use Seldon's own protocol by default
        protocol: tensorflow
        predictors:
          - name: model
            # The number of replicas for the model serving
            replicas: 1
            graph:
              # Model name
              name: my-model
              # Model framework
              implementation: tensorflow
              # Model address; the previous step saved the trained model into the my-pvc PVC, so specify it via pvc://my-pvc
              modelUri: pvc://my-pvc
    # Test the model serving
    - name: demo-rest-serving
      type: webservice
      # The test service will start after the model serving is ready
      dependsOn:
        - demo-serving
      properties:
        image: fogdong/color-serving:v1
        # Use LoadBalancer to expose an external address for easy access
        exposeType: LoadBalancer
        env:
          - name: URL
            # The address of the model serving
            value: http://ambassador.vela-system.svc.cluster.local/seldon/default/demo-serving/v1/models/my-model:predict
        ports:
          # Port of the test service
          - port: 3333
            expose: true

After deployment, check the status of the application with vela ls:

$ vela ls
APP               COMPONENT             TYPE            TRAITS  PHASE    HEALTHY  STATUS         CREATED-TIME
training-serving  demo-training         model-training          running  healthy  Job Succeeded  2022-03-02 17:26:40 +0800 CST
                  ├─ demo-serving       model-serving           running  healthy  Available      2022-03-02 17:26:40 +0800 CST
                  └─ demo-rest-serving  webservice              running  healthy  Ready:1/1      2022-03-02 17:26:40 +0800 CST

As you can see, the application has started normally. Use vela status <app-name> --endpoint to view the service address of the application.

$ vela status training-serving --endpoint
+---------+-----------------------------------+---------------------------------------------------+
| CLUSTER | REF(KIND/NAMESPACE/NAME) | ENDPOINT |
+---------+-----------------------------------+---------------------------------------------------+
| | Service/default/demo-rest-serving | tcp://47.251.10.177:3333 |
| | Service/vela-system/ambassador | http://47.251.36.228/seldon/default/demo-serving |
| | Service/vela-system/ambassador | https://47.251.36.228/seldon/default/demo-serving |
+---------+-----------------------------------+---------------------------------------------------+

The application has three service addresses: the first is the address of our test service, and the second and third are the addresses of the underlying model serving.

We can call the test service to see the effect of the model: the test service reads the image content, converts it into a Tensor, requests the model serving, and finally converts the Tensor returned by the model serving back into an image.

We choose a black-and-white photo of a woman as input:

alt

After the request, you can see that a color image is output:

alt

Model Serving: Canary Testing#

In addition to launching the model serving directly, we can also use multiple versions of a model within one model serving and assign different traffic weights to them for canary testing.

Deploy the following YAML; you can see that both the v1 and v2 versions of the model are each given 50% of the traffic. Again, we deploy a test service behind the model serving:

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: color-serving
  namespace: default
spec:
  components:
    - name: color-model-serving
      type: model-serving
      properties:
        protocol: tensorflow
        predictors:
          - name: model1
            replicas: 1
            # the v1 version of the model gets 50% of the traffic
            traffic: 50
            graph:
              name: my-model
              implementation: tensorflow
              # Model address. The v1 model is stored under /model/v1 in the color-model PVC, so the model address is specified as pvc://color-model/model/v1
              modelUri: pvc://color-model/model/v1
          - name: model2
            replicas: 1
            # the v2 version of the model gets 50% of the traffic
            traffic: 50
            graph:
              name: my-model
              implementation: tensorflow
              # Model address. The v2 model is stored under /model/v2 in the color-model PVC, so the model address is specified as pvc://color-model/model/v2
              modelUri: pvc://color-model/model/v2
    - name: color-rest-serving
      type: webservice
      dependsOn:
        - color-model-serving
      properties:
        image: fogdong/color-serving:v1
        exposeType: LoadBalancer
        env:
          - name: URL
            value: http://ambassador.vela-system.svc.cluster.local/seldon/default/color-model-serving/v1/models/my-model:predict
        ports:
          - port: 3333
            expose: true

When the model deployment is complete, use vela status <app-name> --endpoint to view the address of the model serving:

$ vela status color-serving --endpoint
+---------+------------------------------------+----------------------------------------------------------+
| CLUSTER | REF(KIND/NAMESPACE/NAME) | ENDPOINT |
+---------+------------------------------------+----------------------------------------------------------+
| | Service/vela-system/ambassador | http://47.251.36.228/seldon/default/color-model-serving |
| | Service/vela-system/ambassador | https://47.251.36.228/seldon/default/color-model-serving |
| | Service/default/color-rest-serving | tcp://47.89.194.94:3333 |
+---------+------------------------------------+----------------------------------------------------------+

Request the model with a black and white city image:

alt

As you can see, the result of the first request is as follows. While the sky and ground are rendered in color, the city itself is black and white:

alt

Request again, and you can see that this time the sky, ground, and city are all rendered in color:

alt

By distributing traffic across different versions of the model, we can better evaluate the model results.

Model Serving: A/B Testing#

For a black-and-white image, we can turn it into color with the colorization model. Alternatively, we can restyle the original image by uploading an additional style image.

Do our users prefer colorized images or restyled ones? We can explore this question by conducting A/B testing.

Deploy the following YAML. By setting customRouting, requests carrying the header style: transfer are forwarded to the style-transfer model, while that model shares the same address as the colorization model.

Note: The model for style transfer comes from TensorFlow Hub

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: color-style-ab-serving
  namespace: default
spec:
  components:
    - name: color-ab-serving
      type: model-serving
      properties:
        protocol: tensorflow
        predictors:
          - name: model1
            replicas: 1
            graph:
              name: my-model
              implementation: tensorflow
              modelUri: pvc://color-model/model/v2
    - name: style-ab-serving
      type: model-serving
      properties:
        protocol: tensorflow
        # The style-transfer model takes a long time; set a timeout so that requests are not cut off
        timeout: "10000"
        customRouting:
          # Specify a custom header
          header: "style: transfer"
          # Specify a custom route
          serviceName: "color-ab-serving"
        predictors:
          - name: model2
            replicas: 1
            graph:
              name: my-model
              implementation: tensorflow
              modelUri: pvc://style-model/model
    - name: ab-rest-serving
      type: webservice
      dependsOn:
        - color-ab-serving
        - style-ab-serving
      properties:
        image: fogdong/style-serving:v1
        exposeType: LoadBalancer
        env:
          - name: URL
            value: http://ambassador.vela-system.svc.cluster.local/seldon/default/color-ab-serving/v1/models/my-model:predict
        ports:
          - port: 3333
            expose: true

After successful deployment, view the address of the model serving through vela status <app-name> --endpoint:

$ vela status color-style-ab-serving --endpoint
+---------+---------------------------------+-------------------------------------------------------+
| CLUSTER | REF(KIND/NAMESPACE/NAME) | ENDPOINT |
+---------+---------------------------------+-------------------------------------------------------+
| | Service/vela-system/ambassador | http://47.251.36.228/seldon/default/color-ab-serving |
| | Service/vela-system/ambassador | https://47.251.36.228/seldon/default/color-ab-serving |
| | Service/vela-system/ambassador | http://47.251.36.228/seldon/default/style-ab-serving |
| | Service/vela-system/ambassador | https://47.251.36.228/seldon/default/style-ab-serving |
| | Service/default/ab-rest-serving | tcp://47.251.5.97:3333 |
+---------+---------------------------------+-------------------------------------------------------+

In this application, each of the two model servings has two addresses, but the addresses of the second one, style-ab-serving, are invalid, because that model serving is already routed to the address of color-ab-serving. Again, we see how it works by requesting the test service.

First, without the header, the image changes from black and white to color:

alt

Let's add an image of an ocean wave as the style image:

alt

We add the style: transfer header to this request, and you can see that the city is rendered in the style of the wave:

alt

We can also use an ink painting as the style image:

alt

This time, the city is rendered in an ink-painting style:

alt

Summary#

The KubeVela AI addon helps you perform model training and model serving more conveniently.

In addition, we can deliver the tested model to different environments through KubeVela's multi-environment feature, realizing flexible deployment of the model.

Generate top 50 popular resources of AWS using 100 lines of code

KubeVela currently supports AWS, Azure, GCP, AliCloud, Tencent Cloud, Baidu Cloud, UCloud and other cloud vendors, and provides a quick and easy command line tool to introduce cloud resources from cloud providers. But supporting cloud resources one provider at a time, one resource at a time, is not conducive to quickly satisfying users' needs. This article provides a solution that quickly introduces the top 50 most popular AWS cloud resources in less than 100 lines of code.

We also expect users to be inspired by this article to contribute cloud resources for other cloud providers.

Where are the most popular cloud resources on AWS?#

The official Terraform website provides Terraform modules for each cloud provider, for example, AWS cloud resource Terraform modules. The cloud resources are sorted by popularity of usage (downloads); for example, AWS VPC has 18.7 million downloads.

Through a simple analysis, we found that the data for the top 50 popular Terraform modules for AWS can be obtained by requesting https://registry.terraform.io/v2/modules?filter%5Bprovider%5D=aws&include=latest-version&page%5Bsize%5D=50&page%5Bnumber%5D=1.
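If you want to eyeball that data before writing any code, a quick sketch with curl and jq (both assumed to be installed) prints each module's name and download count:

curl -s "https://registry.terraform.io/v2/modules?filter%5Bprovider%5D=aws&include=latest-version&page%5Bsize%5D=50&page%5Bnumber%5D=1" \
  | jq -r '.data[].attributes | "\(.name): \(.downloads)"'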

Prerequisites#

The code accepts two parameters:

  • the provider name
  • the URL of the Terraform modules JSON API corresponding to the provider

For AWS, the provider name should be "aws", and the corresponding Terraform modules URL is the Terraform Modules JSON API (searching the top 50 popular resources for provider aws in the Terraform Registry).

You need to make sure the provider name (aws) and the modules link are correct before executing the code.

Executing the code#

Then you can quickly bring in the top 50 most popular AWS cloud resources in bulk with the following 100 lines of code (file name: gen.go).

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"os/exec"
	"path/filepath"
	"strings"

	"github.com/pkg/errors"
)

type TFDownload struct {
	Data     []DataItem     `json:"data"`
	Included []IncludedItem `json:"included"`
}

type IncludedItem struct {
	Id         string     `json:"id"`
	Attributes Attributes `json:"attributes"`
}

type DataItem struct {
	Attributes    Attributes    `json:"attributes"`
	Relationships Relationships `json:"relationships"`
}

type Relationships struct {
	LatestVersion RelationshipLatestVersion `json:"latest-version"`
}

type RelationshipLatestVersion struct {
	Data RelationshipData `json:"data"`
}

type RelationshipData struct {
	Id string `json:"id"`
}

var errNoVariables = errors.New("failed to find main.tf or variables.tf in Terraform configurations")

type Attributes struct {
	Name        string `json:"name"`
	Downloads   int    `json:"downloads"`
	Source      string `json:"source"`
	Description string `json:"description"`
	Verified    bool   `json:"verified"`
}

func main() {
	// Two arguments are required: the provider name and the modules URL.
	if len(os.Args) < 3 {
		fmt.Println("Please provide the cloud provider name and an official Terraform modules URL")
		os.Exit(1)
	}
	providerName := os.Args[1]
	terraformModulesUrl := os.Args[2]

	resp, err := http.Get(terraformModulesUrl)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}

	var modules TFDownload
	if err := json.Unmarshal(body, &modules); err != nil {
		fmt.Println(err.Error())
		os.Exit(1)
	}

	// Recreate the output directory named after the provider.
	if _, err = os.Stat(providerName); err == nil {
		if err := os.RemoveAll(providerName); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("Successfully deleted existed directory %s\n", providerName)
	}
	if _, err = os.Stat(providerName); os.IsNotExist(err) {
		if err := os.Mkdir(providerName, 0755); err != nil && !os.IsExist(err) {
			log.Fatal(err)
		}
		fmt.Printf("Successfully created directory %s\n", providerName)
	}

	for _, module := range modules.Data {
		// Look up the description from the matching latest-version element in `included`.
		var description string
		for _, attr := range modules.Included {
			if module.Relationships.LatestVersion.Data.Id == attr.Id {
				description = attr.Attributes.Description
			}
		}
		if description == "" {
			description = strings.ToUpper(providerName) + " " + strings.Title(module.Attributes.Name)
		}
		outputFile := fmt.Sprintf("%s/terraform-%s-%s.yaml", providerName, providerName, module.Attributes.Name)
		if _, err := os.Stat(outputFile); !os.IsNotExist(err) {
			continue
		}
		// Skip the resources KubeVela already supports.
		if providerName == "aws" && (module.Attributes.Name == "rds" || module.Attributes.Name == "s3-bucket" ||
			module.Attributes.Name == "subnet" || module.Attributes.Name == "vpc") {
			continue
		}
		if err := generateDefinition(providerName, module.Attributes.Name, module.Attributes.Source, "", description); err != nil {
			fmt.Println(err.Error())
			os.Exit(1)
		}
	}
}

func generateDefinition(provider, name, gitURL, path, description string) error {
	defYaml := filepath.Join(provider, fmt.Sprintf("terraform-%s-%s.yaml", provider, name))
	cmd := fmt.Sprintf("vela def init %s --type component --provider %s --git %s.git --desc \"%s\" -o %s",
		name, provider, gitURL, description, defYaml)
	if path != "" {
		cmd = fmt.Sprintf("%s --path %s", cmd, path)
	}
	fmt.Println(cmd)
	stdout, err := exec.Command("bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return errors.Wrap(err, string(stdout))
	}
	fmt.Println(string(stdout))
	return nil
}

Execute the following command:

go run gen.go aws "https://registry.terraform.io/v2/modules?filter%5Bprovider%5D=aws&include=latest-version&page%5Bsize%5D=50&page%5Bnumber%5D=1"

Explanation for the code#

Unmarshal the json data for the resources#

Access the URL passed in by the user and parse the returned JSON data into Go structs.

The JSON returned for the resources has the following format.

{
  "data": [
    {
      "type": "modules",
      "id": "23",
      "attributes": {
        "downloads": 18440513,
        "full-name": "terraform-aws-modules/vpc/aws",
        "name": "vpc",
        "namespace": "terraform-aws-modules",
        "owner-name": "",
        "provider-logo-url": "/images/providers/aws.png",
        "provider-name": "aws",
        "source": "https://github.com/terraform-aws-modules/terraform-aws-vpc",
        "verified": true
      },
      "relationships": {
        "latest-version": {
          "data": {
            "id": "142143",
            "type": "module-versions"
          }
        }
      },
      "links": {
        "self": "/v2/modules/23"
      }
    },
    ...
  ],
  "included": [
    {
      "type": "module-versions",
      "id": "36806",
      "attributes": {
        "created-at": "2020-01-03T11:35:36Z",
        "description": "Terraform module Terraform module for creating AWS IAM Roles with heredocs",
        "downloads": 260030,
        "published-at": "2020-02-06T06:26:08Z",
        "source": "",
        "tag": "v2.0.0",
        "updated-at": "2022-02-22T00:45:44Z",
        "version": "2.0.0"
      },
      "links": {
        "self": "/v2/module-versions/36806"
      }
    },
    ...
  ],
  ...
}

In the JSON data corresponding to the modules, we only care about two keys:

  • data: a list containing the names and attributes of the modules
  • included: information about the specific (latest) version of each module

In this case, for each module element in data, we resolve its attributes and the id of its latest-version relationship; for each module-version element in included, we resolve its attributes and id.

The attributes are further resolved into the following five fields:

  • Name
  • Downloads
  • Source
  • Description
  • Verified

The Go structure is named TFDownload. The http library fetches the JSON data, which is then parsed into the Terraform modules structure via json.Unmarshal.

Generating ComponentDefinitions in Batch#

  1. Creating the directory

After parsing, create a new folder in the current directory and name it <provider name>.

Iterate through the parsed data, and for each module element, perform the following operations to generate the corresponding definition and documentation for it.

  2. Generating the definition files

Generate each definition file by reading the corresponding information from the module's GitHub repository using the following vela command.

vela def init {ModuleName} --type component --provider {providerName} --git {gitURL} --desc {description} -o {yamlFileName}

Several items in the command are filled in from the parsed Module structure:

  • gitURL: {Module.Attributes.Source}.git
  • description: if there is an element in Included whose id matches relationship.latest-version.id, use the description from that Included element; otherwise set the description to providerName + ModuleName.
  • yamlFileName: terraform-{providerName}-{Module.Attributes.Name}.yaml
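For example, for the vpc module from the JSON sample above, the generated command would look roughly like this (illustrative; "AWS Vpc" is the fallback description the code produces when no matching Included element is found):

vela def init vpc --type component --provider aws \
  --git https://github.com/terraform-aws-modules/terraform-aws-vpc.git \
  --desc "AWS Vpc" \
  -o aws/terraform-aws-vpc.yaml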

Have a try?#

There are also a number of cloud providers that offer a wealth of Terraform modules, such as

GCP: https://registry.terraform.io/namespaces/terraform-google-modules

Alibaba Cloud: https://registry.terraform.io/namespaces/terraform-alicloud-modules

Do you want to extend cloud resources for your current or favorite cloud provider for KubeVela as well?

KubeVela v1.2 - Focused on Developer Experience, Simplified Multi-Cluster Application Delivery

As cloud-native technology continues to grow, more and more infrastructure capabilities are becoming standardized PaaS or SaaS products. Nowadays you no longer need a whole team to build a product, because there are so many services that can take on roles across software development, testing and infrastructure operations. Driven by the culture of agile development and cloud-native technologies, more and more roles are shifting left to developers, e.g. testing, monitoring and security. As emphasized by the DevOps concept, monitoring, security and operations work can be done in the development phase via open source projects and cloud services. Nonetheless, this also creates huge challenges for developers, as they might lack control of the diverse products and complex APIs. Not only do they have to make choices, but they also need to understand and coordinate complex, heterogeneous infrastructure capabilities in order to satisfy the fast-changing requirements of the business.

This complexity and uncertainty has undoubtedly worsened the developer experience, reducing the delivery efficiency of business systems and increasing operational risks. The tenet of developer experience is simplicity and efficiency, and both developers and enterprises have to choose better developer tools and platforms to achieve this goal. This is the focus of KubeVela v1.2 and the upcoming releases: building a modern platform based on cloud-native technologies that covers development, delivery, and operations. We can see from the following diagram of the KubeVela architecture that developers only need to focus on the applications themselves, and use differentiated operational and delivery capabilities around those applications.

image.png pic 1. KubeVela Architecture

OAM & KubeVela History#

Let's look back at the history of OAM and KubeVela to understand how it took this shape:

  • The birth and growth of OAM (Open Application Model)

To create simplicity in a complex world, the first problem we need to solve is how to make standard abstractions. OAM creatively proposes two kinds of separation: separation between application and resources, and separation between development and operations (in an ideal world, operations can be fully automated). It is a cloud-native application specification with an everything-as-a-service, fully modular design. The spec has been getting traction among major vendors all over the world since it was announced, because we all share a common goal: to reduce the learning curve and provide lego-style application assembly for developers.

  • v1.0 release of KubeVela, bringing the OAM spec implementation

With the application specification as the guidance, advanced community users can create their own tools to build practical solutions, but it remained inaccessible to most developers. KubeVela was born as the community's standard implementation to solve this problem. It absorbs the good parts from the latest Kubernetes community developments and provides automated, idempotent, reliable application rollout controllers. With these features, KubeVela empowers developers to quickly deploy OAM-compliant applications.

  • v1.1 release of KubeVela, providing delivery workflows and making multi-cluster rollout controlled and simplified

As more and more enterprises adopt the cloud, hybrid and distributed cloud will certainly become the future norm. KubeVela has been designed and built on hybrid cloud infrastructure as a modern application management system. We anticipate that the architecture of modern enterprise applications will be heterogeneous, considering factors such as availability, performance and data security. In KubeVela 1.1, we added a new feature for programmable delivery workflows. It natively fits the multi-cluster architecture to provide modern multi-cluster application rollout.

As of 2022, on the road to serving developers, KubeVela has entered its fourth phase: empowering developers to do multi-cluster rollouts far more easily. Below we dissect the changes:

Core Features in v1.2 Release#

The new GUI project: VelaUX#

The best way to reduce the developer learning curve is to provide an easy-to-use UI console. Since its inception, the KubeVela community has been asking for a UI, and with the v1.2 release it has finally arrived. The GUI helps developers organize and compose heterogeneous applications in a standard way, which in turn helps them analyze and discover business obstacles more quickly.

VelaUX is the frontend project of KubeVela with an extensible core design. It introduces a low-code experience, with drag-and-drop forms that take user input based on dynamic components. To achieve this, we designed the frontend description spec UISchema working with X-Definition, and the multi-dimensional query language VelaQL. This design lays the foundation for KubeVela's heterogeneous application delivery architecture.

From the GUI, users can manage addons, connect Kubernetes clusters, distribute delivery targets, set up environments, deploy all kinds of apps, monitor runtime status, and achieve full lifecycle management of application delivery.

image.png pic 2. KubeVela Application Dashboard

For the new terms in GUI, please refer to Core Concepts documentation to learn more details.

Unified Multi-Cluster Control#

KubeVela manages any number of Kubernetes clusters and cloud vendor services in one big, unified infrastructure pool. From that pool, developers can set up different environments based on business requirements, workflow policies, team collaboration needs, etc., carving separate environment workspaces out of the big infrastructure resource pool (see the CLI sketch below). One application can be deployed into multiple environments, and environments are isolated from each other in both management and runtime.
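For example, joining runtime clusters into this unified pool is done with the vela CLI (a sketch; the cluster name and kubeconfig path are illustrative):

vela cluster join ./kubeconfig-prod.yaml --name cluster-prod
vela cluster list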

image.png pic 3. KubeVela Application Status

As shown above, an application can be deployed to default environments and other custom environments such as test or prod. Each environment can include multiple delivery targets. Each delivery target indicates an independent, separate Kubernetes cluster.

Heterogeneous Application Architecture#

In the cloud-native landscape, we have many options for building application delivery solutions. Based on Kubernetes, we can use mature technologies like Helm charts to deliver middleware and third-party open source software, deliver enterprise business applications via container images, or use OpenYurt to deliver and manage edge applications. Based on open cloud-service technologies, we can deliver middleware such as databases, message queues and caches, including operational features like logging and monitoring.

With so many options, KubeVela adopts OAM as the standard application definition to manage heterogeneous application architectures uniformly. KubeVela provides a highly extensible delivery core engine. Users can use built-in capabilities or install more plugins to extend the platform, and manage application deliveries in a consistent way. On top of KubeVela, what users see is a modular, everything-as-a-service control plane.

image.png pic 4. Cloud Resources Deploy

As shown above, users can conveniently deliver cloud resources from the application management page. Developers can read the following docs to understand the full delivery process for a heterogeneous application architecture:

  1. Deliver Docker Image
  2. Deliver Helm Chart
  3. Deliver Kubernetes Resources
  4. Deliver cloud resources

Extension System#

KubeVela has been designed as an extensible system from the very beginning. The aforementioned heterogeneous application architecture is achieved via KubeVela's extension system: it can be extended through standard interfaces to plug in as many capabilities as you want. This matches the differentiated requirements of enterprises while reducing the cognitive burden of learning new things. KubeVela's extension points include component types, operational traits, workflow types, delivery policies, etc. In the current release, we added the addon management system; an addon packages extension capabilities for easy distribution.

image.png pic 5. KubeVela Addons

Currently we provide an official catalog with pre-packaged addons, shown above. Meanwhile, in the experimental catalog repo, we collaborate with community users to create more capabilities.

By now, KubeVela has grown into an application delivery platform that serves developers directly. Which enterprise scenarios can KubeVela be used for? Below we list a couple of common ones:

Enterprise Software Delivery Solutions#

Multi-Cluster DevOps#

Today, enterprise software delivery often looks like the following diagram: compute resources from cloud vendors serve the demo and production environments, while an in-house server farm serves the development and testing environments. If any business application has multi-region disaster recovery requirements, the production environment can span multiple regions or even clouds.

image.png pic 6. DevOps Pipeline

A basic DevOps workflow includes code hosting and the CI/CD process. KubeVela can provide support for the CD process. For enterprises, the practical steps are:

  1. Prepare local and cloud resources according to real needs. Make sure local and cloud resources are connected on the same network plane for unified resource management.
  2. Deploy KubeVela into the production environment and ensure its accessibility.
  3. Install a DevOps toolchain like GitLab, Jenkins, or Sonar via KubeVela. Usually the accessibility of code hosting and the developer toolchain is critical, so we should deploy them to production environments, unless your local clusters can guarantee accessibility and you want the business code to stay in the local environment, in which case you can deploy them to local clusters.
  4. Set up local development environments via KubeVela and deploy testing middleware locally. Set up cloud middleware in the production environments.
  5. Set up business-code CI pipelines via Jenkins that generate Docker images and hand them to KubeVela for multi-environment deployment. This makes up an end-to-end application delivery workflow.

Using the KubeVela multi-cluster DevOps solution provides the following advantages:

  1. Developers do not need to know any Kubernetes knowledge to achieve heterogeneous cloud-native application delivery.
  2. Unified multi-cluster, multi-environment management in a single control plane. Natively deploy multi-cluster applications.
  3. Unified application management mode, regardless of business applications or developer toolchain.
  4. Flexible workflows help enterprises glue various software delivery processes into a single workflow.

Unified Management of Heterogeneous Environments#

Different enterprises face different infrastructure and business problems and requirements. On the infrastructure side, an enterprise could build an in-house private cloud, buy some public cloud resources, and own some edge devices. On the business side, variance in scale and requirements leads to multi-cloud, multi-region application architectures, while some legacy systems remain. On the developer side, software development needs various environments such as development, testing, staging and production. On the management side, different business teams need isolation from each other, while some business applications need to connect to each other.

In the past, it was very easy for different business teams inside an enterprise to become fragmented in their toolchains, technical architecture and business management. We take this into account while being innovative in technology: KubeVela brings a new solution that pursues unified management and an extensible architecture with good compatibility.

  • On the infrastructure side, we support different API formats including Kubernetes API, cloud APIs, and custom APIs to model all kinds of the infrastructure.
  • On the business architecture side, the application model is open and platform agnostic. KubeVela provides the ability to connect and empower businesses.
  • On the developer toolchain side, there might be different toolchains and artifacts in an enterprise. KubeVela provides the extension mechanism and standard models to combine different kinds of artifacts into a standardized delivery workflow. Its standards shift left and empower enterprises to unify toolchain management: you don't need to care whether you are using GitLab or Jenkins, because KubeVela can integrate them both.
  • On the operations side, operational capabilities and toolchain solutions can be unified under KubeVela standards within the enterprise. Moreover, community operational capabilities can be shared and reused easily via KubeVela extensions.

Thus, KubeVela can connect the different stages inside an enterprise and unify all capabilities in a single platform. It is a practical and future-proof solution.

Enterprise Internal Application Platform#

Many enterprises that have enough development capacity choose to build internal application platforms, mainly because they can customize the platform to fit their use cases. In the past, many PaaS platforms were born out of Cloud Foundry. We all know that one-size-fits-all application platforms will not satisfy every enterprise. If the application package format and delivery workflow can be standardized inside an enterprise, then all users need to do is fill in the image name. However, in traditional PaaS platforms, developers have to understand a bunch of so-called general concepts. For example, if an enterprise wants to deploy AI applications, whose architecture differs somewhat from ordinary ones, then an AI PaaS has to be created, and the enterprise has to pay more fees and learn more concepts.

Therefore, when general products can't satisfy their needs, enterprises will consider developing one on their own. But building an internal platform from scratch takes enormous resources, sometimes surpassing the investment in their core business. This is not a feasible path.

With the above introduction, are you more familiar with the motivations and history of KubeVela? No single product is a silver bullet, but our goal is to create a standardized model that empowers more enterprises and developers to participate in building simple and efficient developer tools. KubeVela is still in an early development phase, and we hope you will join us to develop it together. We want to thank the 100+ contributors who have contributed to KubeVela.

Join the Community#

Collaborate on OAM Specification#

The OAM spec is the cornerstone of modern application platform architecture. Currently, the OAM spec is driven forward by the KubeVela implementation, while the spec itself does not depend on KubeVela. We highly encourage cloud vendors, platform builders, and end users to join us in defining the OAM spec together. We highly appreciate that vendors like Tencent, China Telecom and China Unicom have supported the OAM spec and started collaborative work. Everyone and every organization is welcome to share ideas, suggestions, and thoughts.

Go to the Community repo.

Collaborate on Addon ecosystem#

As mentioned above, we have created the addon extension system. We encourage community developers to contribute their tools and share their thoughts.

Contribute Cloud Resources#

KubeVela integrates Terraform modules via the Terraform controller to extend cloud resources. We have supported several cloud resources, and encourage community developers and cloud providers to contribute more.

Go to contribute cloud resources.

Provide Your Feedback#

We highly welcome everyone to participate in the KubeVela community discussion whether you want to know more or contribute code!

Go to Community repo.

KubeVela is a CNCF sandbox project. Learn more by reading the official documentation

Using GitOps + KubeVela for Application Continuous Delivery

Tianxin Dong

Tianxin Dong

KubeVela Team

KubeVela is a simple, easy-to-use, and highly extensible cloud-native application platform. It can make developers deliver microservices applications easily, without knowing Kubernetes details.

KubeVela is based on the OAM model, which naturally solves the orchestration problems of complex resources. This means that KubeVela can manage complex, large-scale applications with GitOps, keeping delivery manageable even as team and system size grow and system complexity increases.

What is GitOps#

GitOps is a modern way to do continuous delivery. Its core idea is to have a Git repository that contains the environment and application configurations, plus an automated process that syncs these configurations to the cluster.

By changing the files in the repository, developers can apply changes to the applications automatically. The benefits of applying GitOps include:

  • Increased productivity. Continuous delivery speeds up deployment.
  • A lower barrier for developers to deploy. By pushing code instead of container configuration, developers can easily deploy to Kubernetes without knowing its internal implementation.
  • Traceable change records. Managing the cluster with Git makes every change traceable, enhancing the audit trail.
  • Recoverable clusters, via Git's rollback and branching.

GitOps with KubeVela#

KubeVela, as a declarative application delivery control plane, can be naturally used in a GitOps approach, and this provides the following extra benefits to end users alongside the usual GitOps benefits:

  • application delivery workflow (CD pipeline)
    • i.e. KubeVela supports pipeline style application delivery process in GitOps, instead of simply declaring final status;
  • handling deployment dependencies and designing topologies (DAG);
  • unified higher level abstraction atop various GitOps tools' primitives;
  • declare, provision and consume cloud resources in unified application definition;
  • various out-of-box deployment strategies (Canary, Blue-Green ...);
  • various out-of-box hybrid/multi-cloud deployment policies (placement rule, cluster selectors etc.);
  • Kustomize-style patch for multi-env deployment without the need to learn Kustomize at all;
  • ... and much more.

In this section, we will introduce steps of using KubeVela directly in GitOps approach.

GitOps workflow#

The GitOps workflow is divided into CI and CD:

  • CI (Continuous Integration): Continuous integration builds code and images, and pushes images to the registry. There are many CI tools like GitHub Actions, Travis, Jenkins and so on. In this article, we use GitHub Actions for CI; you can also use other CI tools, since KubeVela can hook up with the CI process of any tool around GitOps.
  • CD (Continuous Delivery): Continuous delivery automatically updates the configuration in the cluster, for example, updating the cluster to the latest images in the registry.
    • Currently there are two main CD modes:
      • Push-based: Push-mode CD is mainly accomplished by configuring the CI pipeline. In this way, the access key of the cluster is shared with CI so that the CI pipeline can push changes to the cluster. For this mode, please refer to our previous blog post: Using Jenkins + KubeVela for Application Continuous Delivery.
      • Pull-based: Pull-mode CD listens for changes to the repository (code repository or configuration repository) from within the cluster and synchronizes those changes to the cluster. The cluster actively pulls updates, thus avoiding the problem of exposing secret keys. This article introduces using KubeVela with GitOps in pull mode.

This article will separate into two perspectives:

  1. For platform administrators/SREs, they can update the config in Git repo. It will trigger automated re-deployment.

  2. For developers, they can update the app source code and then push it to Git. This will trigger building the latest image and re-deployment.

For platform administrators/SREs#

Platform administrators/SREs prepare the Git repo for operational config. Every config change will be traceable through it. KubeVela watches the repo and applies changes to the clusters.

alt

Setup Config Repository#

The configuration files are from the Example Repo.

In this example, we will deploy an application and a database, the application uses the database to store data.

The structure of the config repository looks like below:

  • The clusters/ contains the GitOps config. It will command KubeVela to watch the specified repo and apply latest changes.
  • The apps/ contains the Application yaml for deploying the user-facing app.
  • The infrastructure/ contains infrastructure tools, i.e. MySQL database.
├── apps
│   └── my-app.yaml
├── clusters
│   ├── apps.yaml
│   └── infra.yaml
└── infrastructure
    └── mysql.yaml

KubeVela recommends using the directory structure above to manage your GitOps repository. clusters/ holds the KubeVela GitOps configurations that need to be applied to the cluster manually, apps/ holds your applications and infrastructure/ holds your base configurations. By separating applications from base configurations, you can manage your deployment environment more reasonably and isolate application changes.

Directory clusters/#

The clusters/ directory contains the initial configuration for KubeVela GitOps.

Below is what clusters/infra.yaml looks like:

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: infra
spec:
  components:
    - name: database-config
      type: kustomize
      properties:
        repoType: git
        # replace it with your repo url
        url: https://github.com/FogDong/KubeVela-GitOps-Infra-Demo
        # replace it with your git secret if it's a private repo
        # secretRef: git-secret
        # the pull interval time, set to 10m since the infrastructure is steady
        pullInterval: 10m
        git:
          # the branch name
          branch: main
        # the path to sync
        path: ./infrastructure

apps.yaml and infra.yaml in clusters/ are similar; their only difference is that they watch different directories. In apps.yaml, the properties.path will be ./apps.
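For reference, here is a minimal sketch of what clusters/apps.yaml looks like under that assumption (the url should point at your own config repo):

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: apps
spec:
  components:
    - name: apps
      type: kustomize
      properties:
        repoType: git
        # replace it with your repo url
        url: https://github.com/FogDong/KubeVela-GitOps-Infra-Demo
        pullInterval: 10m
        git:
          branch: main
        # watch the apps directory instead of infrastructure
        path: ./apps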

Apply the files in clusters/ manually. They will sync the files in the infrastructure/ and apps/ directories of the Git repo.

Directory apps/#

The file in apps/ is a simple application with database information and an Ingress. The app serves an HTTP service and connects to a MySQL database. At the / path, it displays the version in the code; at the /db path, it lists the data in the database.

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: my-app
  namespace: default
spec:
  components:
    - name: my-server
      type: webservice
      properties:
        image: <your image address> # {"$imagepolicy": "default:apps"}
        port: 8088
        env:
          - name: DB_HOST
            value: mysql-cluster-mysql.default.svc.cluster.local:3306
          - name: DB_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mysql-secret
                key: ROOT_PASSWORD
      traits:
        - type: ingress
          properties:
            domain: testsvc.example.com
            http:
              /: 8088

This is an Application with an ingress trait bound to it. In this way, the underlying Deployment, Service, and Ingress are brought together in a single file, making it easier to manage the application.

Directory infrastructure/#

The infrastructure/ directory contains the config of infrastructure such as the database. In the following, we will use the MySQL operator to deploy a MySQL cluster.

Notice that there must be a secret in your cluster with MySQL password specified in key ROOT_PASSWORD.
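If the secret does not exist yet, a minimal sketch of creating it (the password value is a placeholder):

kubectl create secret generic mysql-secret --from-literal=ROOT_PASSWORD=<your-password>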

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: mysql
  namespace: default
spec:
  components:
    - name: mysql-controller
      type: helm
      properties:
        repoType: helm
        url: https://presslabs.github.io/charts
        chart: mysql-operator
        version: "0.4.0"
    - name: mysql-cluster
      type: raw
      dependsOn:
        - mysql-controller
      properties:
        apiVersion: mysql.presslabs.org/v1alpha1
        kind: MysqlCluster
        metadata:
          name: mysql-cluster
        spec:
          replicas: 1
          # replace it with your secret
          secretName: mysql-secret

This Application uses component dependencies (dependsOn): the MySQL controller is deployed first, and once the controller is running, the MySQL cluster is deployed.

Apply the files in clusters/#

After storing the files above in the Git config repo, we need to apply the GitOps config files in clusters/ manually.

First, apply clusters/infra.yaml to the cluster. We can see that the MySQL in infrastructure/ is automatically deployed:

kubectl apply -f clusters/infra.yaml
$ vela ls
APP COMPONENT TYPE TRAITS PHASE HEALTHY STATUS CREATED-TIME
infra database-config kustomize running healthy 2021-09-26 20:48:09 +0800 CST
mysql mysql-controller helm running healthy 2021-09-26 20:48:11 +0800 CST
└─ mysql-cluster raw running healthy 2021-09-26 20:48:11 +0800 CST

Apply clusters/apps.yaml to the cluster. We can see that the application in apps/ is automatically deployed:

kubectl apply -f clusters/apps.yaml
$ vela ls
APP COMPONENT TYPE TRAITS PHASE HEALTHY STATUS CREATED-TIME
apps apps kustomize running healthy 2021-09-27 16:55:53 +0800 CST
infra database-config kustomize running healthy 2021-09-26 20:48:09 +0800 CST
my-app my-server webservice ingress running healthy 2021-09-27 16:55:55 +0800 CST
mysql mysql-controller helm running healthy 2021-09-26 20:48:11 +0800 CST
└─ mysql-cluster raw running healthy 2021-09-26 20:48:11 +0800 CST

By deploying the KubeVela GitOps config files, we have automatically applied the application and database to the cluster.

curl the Ingress of the app; we can see that the current version is 0.1.5 and the application is connected to the database successfully:

$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
my-server <none> testsvc.example.com <ingress-ip> 80 162m
$ curl -H "Host:testsvc.example.com" http://<ingress-ip>
Version: 0.1.5
$ curl -H "Host:testsvc.example.com" http://<ingress-ip>/db
User: KubeVela
Description: It's a test user

Modify the config for GitOps trigger#

After the first deployment, we can modify the files in config repo to update the applications in the cluster.

Modify the domain of the application's Ingress:

...
traits:
  - type: ingress
    properties:
      domain: kubevela.example.com
      http:
        /: 8089

Check the Ingress in the cluster after a while:

$ kubectl get ingress
NAME        CLASS    HOSTS                  ADDRESS        PORTS   AGE
my-server   <none>   kubevela.example.com   <ingress-ip>   80      162m

The host of the Ingress has been updated successfully!

In this way, we can edit the files in the Git repo to update the cluster.

For developers#

Developers write the application source code and push it to a Git repo (the app repo). Once the app repo is updated, CI builds a new image and pushes it to the image registry. KubeVela watches the image registry, updates the image in the config repo, and finally applies the config to the cluster.

In this way, the configuration in the cluster is updated automatically when the code is updated.

alt

Setup App Code Repository#

Setup a Git repository with source code and Dockerfile.

The app serves an HTTP service and connects to a MySQL database. At the / path, it displays the version in the code; at the /db path, it lists the data in the database.

http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
	_, _ = fmt.Fprintf(w, "Version: %s\n", VERSION)
})
http.HandleFunc("/db", func(w http.ResponseWriter, r *http.Request) {
	rows, err := db.Query("select * from userinfo;")
	if err != nil {
		_, _ = fmt.Fprintf(w, "Error: %v\n", err)
	}
	for rows.Next() {
		var username string
		var desc string
		err = rows.Scan(&username, &desc)
		if err != nil {
			_, _ = fmt.Fprintf(w, "Scan Error: %v\n", err)
		}
		_, _ = fmt.Fprintf(w, "User: %s \nDescription: %s\n\n", username, desc)
	}
})
if err := http.ListenAndServe(":8088", nil); err != nil {
	panic(err.Error())
}

In this tutorial, we will setup a CI pipeline using GitHub Actions to build the image and push it to a registry. The code and configuration files are from the Example Repo.

Create Git Secret for KubeVela committing to Config Repo#

After the new image is pushed to the image registry, KubeVela will be notified and update the Application file in the Git repository and the cluster. Therefore, we need a secret with Git information for KubeVela to commit to the Git repository. Fill the following YAML file with your credentials and apply it to the cluster:

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: kubernetes.io/basic-auth
stringData:
  username: <your username>
  password: <your password>

Setup Config Repository#

The configuration repository is almost the same as before; you only need to add the image registry config to the file. For more details, please refer to the Example Repository.

Add the config of image registry in clusters/apps.yaml, it listens for image updates in the image registry:

...
imageRepository:
  image: <your image>
  # if it's a private image registry, use `kubectl create secret docker-registry` to create the secret
  # secretRef: imagesecret
  filterTags:
    # filter the image tag
    pattern: '^master-[a-f0-9]+-(?P<ts>[0-9]+)'
    extract: '$ts'
  # use the policy to sort the latest image tag and update
  policy:
    numerical:
      order: asc
  # add more commit message
  commitMessage: "Image: {{range .Updated.Images}}{{println .}}{{end}}"

Modify the image field in apps/my-app.yaml and add the annotation # {"$imagepolicy": "default:apps"}. Note that KubeVela can only modify the image field if this annotation is placed after the field. default:apps is the namespace:name of the GitOps config file above.

spec:
  components:
    - name: my-server
      type: webservice
      properties:
        image: ghcr.io/fogdong/test-fog:master-cba5605f-1632714412 # {"$imagepolicy": "default:apps"}

After applying the files in clusters/ to the cluster, we can update the application by modifying the code.

Modify the code#

Change the Version to 0.1.6 and modify the data in database:

const VERSION = "0.1.6"

...

func InsertInitData(db *sql.DB) {
	stmt, err := db.Prepare(insertInitData)
	if err != nil {
		panic(err)
	}
	defer stmt.Close()

	_, err = stmt.Exec("KubeVela2", "It's another test user")
	if err != nil {
		panic(err)
	}
}

Commit the change to the Git repository. We can see that our CI pipeline has built the image and pushed it to the image registry.

KubeVela will listen to the image registry and update the apps/my-app.yaml in Git Repository with the latest image tag.

We can see that there is a commit from kubevelabot; the commit message always carries the prefix Update image automatically. You can use a format like {{range .Updated.Images}}{{println .}}{{end}} in the commitMessage field to append the image name to the commit message.

alt

Note that if you want to put the code and config in the same repository, you need to filter out commits from KubeVela in the CI configuration, as below, to avoid repeated pipeline builds.

jobs:
  publish:
    if: "!contains(github.event.head_commit.message, 'Update image automatically')"

Re-check the Application in the cluster, and we can see that the image of my-app has been updated after a while.

KubeVela polls the latest information from the code and image repo periodically (at an interval that can be customized):

  • When the Application file in the Git repository is updated, KubeVela will update the Application in the cluster based on the latest configuration.
  • When a new tag is added to the image registry, KubeVela will filter out the latest tag based on your policy and update it to Git repository. When the files in the repository are updated, KubeVela repeats the first step and updates the files in the cluster, thus achieving automatic deployment.

We can curl to Ingress to see the current version and data:

$ kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
my-server <none> kubevela.example.com <ingress-ip> 80 162m
$ curl -H "Host:kubevela.example.com" http://<ingress-ip>
Version: 0.1.6
$ curl -H "Host:kubevela.example.com" http://<ingress-ip>/db
User: KubeVela
Description: It's a test user
User: KubeVela2
Description: It's another test user

The Version has been updated successfully! Now we're done with everything from changing the code to automatically applying to the cluster.

Summary#

For platform admins/SREs, operating the application and infrastructure means updating the config repo. KubeVela synchronizes the config to the cluster, simplifying the deployment process.

For end users/developers, writing source code and pushing it to Git triggers re-deployment: CI builds the image, and KubeVela then updates the image field and applies the deployment config.

By integrating with GitOps, KubeVela helps users speed up deployment and simplify continuous deployment.

KubeVela Releases 1.1, Reaching New Peaks in Cloud-Native Continuous Delivery

Overview#

Initialized by Alibaba and currently a CNCF sandbox project, KubeVela is a modern application platform that focuses on modeling the delivery workflow of microservices on top of Kubernetes, Terraform, Flux Helm controller and beyond. This brings strong added value to the existing GitOps and IaC primitives with battle-tested application delivery practices including deployment pipelines, cross-environment promotion, manual approval, canary rollout and notification.

This is the first open source project in CNCF that focuses on the full lifecycle continuous delivery experience, from abstraction and rendering to orchestration and deployment. It reminds us of Spinnaker, but is designed to be simpler, cloud native, usable with any CI pipeline and easily extended.

Introduction#

Kubernetes has made it easy to build application deployment infrastructure, whether on cloud, on-prem, or in IoT environments. But there are still two problems for developers managing micro-service applications. First, developers just want to deploy, but delivering an application with low-level infrastructure/orchestrator primitives is too much for them: it's very hard to keep up with all these details, and they need a simpler abstraction to "just deploy". Second, an application delivery workflow is a basic need for "just deploy", but it is inherently out of scope for Kubernetes itself, and the existing workflow addons/projects are too generic, going way beyond delivering applications. These problems make continuous delivery complex and unscalable even with the help of Kubernetes. GitOps can help in the deployment phase, but lacks the capabilities of abstracting, rendering, and orchestration. This results in low SDO (software delivery and operation) performance and burnout of DevOps engineers. In the worst case, it could cause production outages if users make unsafe operations due to the complexity.

The latest DORA survey [1] shows that organizations adopting continuous delivery are more likely to have processes that are high quality, low-risk, and cost-effective. The question, though, is how to make it focused and easy to practice. Hence, KubeVela introduces the Open Application Model (OAM), a higher-level abstraction for modeling application delivery workflows in an app-centric, consistent and declarative approach. This empowers developers to continuously verify and deploy their applications with confidence, standing on the shoulders of Kubernetes control theory, GitOps, IaC and beyond.

The latest KubeVela 1.1 release is a major milestone bringing more continuous delivery features. Highlights:

  • Multi-environment, multi-cluster rollout: KubeVela allows users to define the environments and the clusters to which application components are deployed or promoted. This makes it easier for users to manage multi-stage application rollouts. For example, users can deploy applications to a test environment and then promote them to the production environment.
  • Canary rollout and approval gate: Application delivery is a procedural workflow that takes multiple steps. KubeVela provides such a workflow on top of Kubernetes. Users can use KubeVela to build canary rollout, approval gate, and notification pipelines to deliver applications confidently. Moreover, the workflow model is declarative and extensible, and workflow steps can be stored in Git to simplify management.
  • Addon management: All KubeVela capabilities (e.g. Helm chart deployment) are pluggable. They are managed as addons [2]. KubeVela provides a simple experience via CLI/UI to discover, install, and uninstall addons (see the CLI sketch after this list). There is an official addon registry, and users can also bring their own addon registries.
  • Cloud Resources: Users can enable the Terraform addon on KubeVela to deploy cloud resources using the same abstraction used to deploy applications. This enables cooperative delivery of an application and its dependencies, including databases, Redis, message queues, etc. With KubeVela, users don't need to switch to another interface to manage middleware. This provides a unified experience and aligns with the upcoming trends in the CNCF Cooperative Delivery Working Group [3].
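For example, addon management from the CLI looks like this (a sketch; the terraform addon is one of the addons in the official registry):

vela addon list
vela addon enable terraform
vela addon disable terraform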

That's the overview of the KubeVela 1.1 release. In the following sections, we provide deep dives and examples of the new features.

Multi-Environment, Multi-Cluster Rollout#

Users often need to deploy applications across clusters in different regions, and may have a test environment to run automated tests before deploying to the production environment. However, it remains a mystery to many users how to do multi-environment, multi-cluster application rollout on Kubernetes.

KubeVela 1.1 introduces multi-environment, multi-cluster rollout. It integrates the Open Cluster Management and Karmada projects to handle multi-cluster management. On top of that, it provides the EnvBinding policy to define per-environment config patches and placement decisions. Here is an example of an EnvBinding policy:

policies:
  - name: example-multi-env-policy
    type: env-binding
    properties:
      envs:
        - name: staging
          placement: # selecting the cluster to deploy to
            clusterSelector:
              name: cluster-staging
          selector: # selecting which component to use
            components:
              - hello-world-server
        - name: prod
          placement:
            clusterSelector:
              name: cluster-prod
          patch: # overlay patch on above components
            components:
              - name: hello-world-server
                type: webservice
                traits:
                  - type: scaler
                    properties:
                      replicas: 3

Below is a demo for a multi-stage application rollout from Staging to Production. The local cluster serves as the control plane and the rest two are the runtime clusters.

Note that all the resources and statuses are aggregated and abstracted in KubeVela Applications. Should any problem happen, KubeVela will pinpoint the problematic resources for users. This results in faster recovery time and more manageable delivery.

Canary Rollout, Approval, Notification#

Can you build a canary rollout pipeline in 5 minutes? Ask Kubernetes users and they would tell you it is not even enough time to learn a single Istio concept. We believe that, as a developer, you should not need to master Istio to build a canary rollout pipeline. KubeVela abstracts away the low-level details and provides a simple solution as follows.

First, installing Istio is made easy via KubeVela addons:

vela addon enable istio

Then, users just need to define how many batches for the rollout:

traits:
  - type: rollout
    properties:
      targetSize: 100
      rolloutBatches:
        - replicas: 10
        - replicas: 90

Finally, define the workflow of canary, approval, and notification:

workflow:
  steps:
    - name: rollout-1st-batch
      type: canary-rollout
      properties:
        # just upgrade first batch of component
        batchPartition: 0
        traffic:
          weightedTargets:
            - revision: reviews-v1
              weight: 90 # 90% to the old version
            - revision: reviews-v2
              weight: 10 # 10% to the new version
    - name: approval-gate
      type: suspend
    - name: rollout-rest
      type: canary-rollout
      properties:
        batchPartition: 1
        traffic:
          weightedTargets:
            - revision: reviews-v2
              weight: 100 # 100% shift to new version
    - name: send-msg
      type: webhook-notification
      properties:
        slack:
          url: <your slack webhook url>
          text: "rollout finished"

Here is a full demo:

What Comes Next#

In this KubeVela release we have built the cornerstone for continuous delivery on Kubernetes. For the upcoming release, our major theme will be improving the user experience. We will release a dashboard that takes the user experience to another level. Besides that, we will keep improving our CLI tools, debuggability, and observability. This will ensure our users can self-serve to not only deploy and manage applications, but also debug and analyze the delivery pipelines.

For more project roadmap information, please see the KubeVela Roadmap.

Join the Community#

KubeVela is a community-driven, open-source project. Dozens of leading enterprises have adopted KubeVela in production, including Alibaba, Tencent, ByteDance, and XPeng Motors. You are welcome to join the community. Here are the next steps:

References#

[1] DORA full report: https://cloud.google.com/blog/products/devops-sre/announcing-dora-2021-accelerate-state-of-devops-report
[2] KubeVela Addon: https://github.com/kubevela/catalog/tree/master/addons/example
[3] Cooperative Delivery Charter: https://github.com/cncf/tag-app-delivery/blob/master/cooperative-delivery-wg/charter.md

Using Jenkins + KubeVela for Application Continuous Delivery

Da Yin, Yang Song

KubeVela Team

KubeVela bridges the gap between applications and infrastructure, enabling easy delivery and management of application code. Compared to raw Kubernetes objects, the Application in KubeVela better abstracts and simplifies the configurations that developers care about, and leaves complex infrastructure capabilities and orchestration details to platform engineers. The KubeVela apiserver further exposes HTTP interfaces, which help developers deploy applications even without Kubernetes cluster access.

This article uses Jenkins, a popular continuous integration tool, as the basis and gives a brief introduction to building a GitOps-based continuous delivery highway for applications.

Continuous Delivery Highway#

As an application developer, you mostly care about whether your application functions correctly and whether development is convenient. Several system components on this highway will help you achieve that.

  1. First, you need a git repo to hold the program code, test code, and a YAML file declaring your KubeVela application.
  2. Second, you need a continuous integration tool to automate the integration testing of code, build container images, and push them to the image repo.
  3. Finally, you need a Kubernetes cluster with KubeVela installed and its apiserver function enabled.

Currently, access management for the KubeVela apiserver is under construction. You will need to configure apiserver access in later versions of KubeVela (after v1.1).

In this article, we adopt GitHub as the git repo, Jenkins as the CI tool, and DockerHub as the image repo. We use a simple HTTP server written in Go as the example. The whole continuous delivery process is shown below. On this highway, developers only need to care about developing the application and managing code versions with Git; the highway runs integration tests and deploys applications into the target Kubernetes cluster automatically.

arch

Set-up Environment#

Jenkins#

This article takes Jenkins as the CI tool. Developers can choose other CI tools such as Travis CI or GitHub Actions.

First you need to set up Jenkins to run CI pipelines. Refer to the official docs for the installation and initialization of Jenkins.

Notice that since the CI pipeline in this example is based on Docker and GitHub, you need to install the related plugins in Jenkins (Dashboard > Manage Jenkins > Manage Plugins), including Pipeline, HTTP Request Plugin, Docker Pipeline, and Docker Plugin.

Besides, you need to configure the Docker environment for Jenkins to use (Dashboard > Manage Jenkins > Configure System > Docker Builder). If Docker is already installed, you can set the Docker URL to unix:///var/run/docker.sock.

Since Docker images will be pushed to the image repo while the CI pipelines run, you also need to store the image repo account in Jenkins Credentials (Dashboard > Manage Jenkins > Manage Credentials > Add Credentials), such as the DockerHub username and password.

jenkins-credential

GitHub#

This example uses GitHub as the git repo. Developers can switch to other repos on demand, such as GitLab.

To enable Jenkins to retrieve GitHub updates and write pipeline statuses back to GitHub, you need to complete the following two steps in GitHub.

  1. Configure a Personal Access Token. Make sure to check repo:status to grant permission for writing commit statuses.

github-pat

Then store the GitHub Personal Access Token in Jenkins Credentials (with the Secret text type).

jenkins-secret-text

Finally, go to Dashboard > Manage Jenkins > Configure System > GitHub in Jenkins, click Add GitHub Server, and fill in the newly created credential. You can click Test connection to check whether the configuration is correct.

jenkins-github

  2. Add a webhook to the GitHub code repo settings, filling in the Jenkins webhook address, for example http://my-jenkins.example.com/github-webhook/ . In this way, all Push events in this code repo will be pushed to Jenkins.

github-webhook

KubeVela#

You need to install KubeVela in your Kubernetes cluster and enable the apiserver function. Refer to the official docs for details.

Composing Applications#

We use a simple HTTP server as the example. We declare a constant named VERSION and print it when the HTTP service is accessed. A simple test is also set up to validate the format of VERSION.

// main.go
package main

import (
    "fmt"
    "net/http"
)

const VERSION = "0.1.0-v1alpha1"

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        _, _ = fmt.Fprintf(w, "Version: %s\n", VERSION)
    })
    if err := http.ListenAndServe(":8088", nil); err != nil {
        println(err.Error())
    }
}

// main_test.go
package main

import (
    "regexp"
    "testing"
)

const verRegex string = `^v?([0-9]+)(\.[0-9]+)?(\.[0-9]+)?` +
    `(-([0-9A-Za-z\-]+(\.[0-9A-Za-z\-]+)*))?` +
    `(\+([0-9A-Za-z\-]+(\.[0-9A-Za-z\-]+)*))?$`

func TestVersion(t *testing.T) {
    if ok, _ := regexp.MatchString(verRegex, VERSION); !ok {
        t.Fatalf("invalid version: %s", VERSION)
    }
}

To build a container image for the HTTP server and publish it as a KubeVela Application into Kubernetes, we need two more files in the code repo: Dockerfile and app.yaml. They describe how the container image is built and configure the KubeVela Application, respectively.

# Dockerfile
FROM golang:1.13-rc-alpine3.10 as builder
WORKDIR /app
COPY main.go .
RUN go build -o kubevela-demo-cicd-app main.go
FROM alpine:3.10
WORKDIR /app
COPY --from=builder /app/kubevela-demo-cicd-app /app/kubevela-demo-cicd-app
ENTRYPOINT ./kubevela-demo-cicd-app
EXPOSE 8088

In app.yaml, we declare that the application should contain 5 replicas and expose its service through Ingress. The labels trait tags the Application's Pods with the current git commit id; the delivery pipeline in Jenkins will replace the GIT_COMMIT placeholder with the real commit id and submit the Application configuration to the KubeVela apiserver, which triggers the update of the Application. The application updates 2 replicas first, then pauses and waits for manual approval. After the developer confirms the change is valid, the remaining 3 replicas are updated. This canary release behavior is configured by the rollout trait declared in the Application.

# app.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: kubevela-demo-app
spec:
  components:
    - name: kubevela-demo-app-web
      type: webservice
      properties:
        image: somefive/kubevela-demo-cicd-app
        imagePullPolicy: Always
        port: 8080
      traits:
        - type: rollout
          properties:
            rolloutBatches:
              - replicas: 2
              - replicas: 3
            batchPartition: 0
            targetSize: 5
        - type: labels
          properties:
            jenkins-build-commit: GIT_COMMIT
        - type: ingress
          properties:
            domain: kubevela-demo-cicd-app.cf7c0ed25b151437ebe1ef58efc29bca4.us-west-1.alicontainer.com
            http:
              "/": 8088

Configure CI pipelines#

In this article, we set up two pipelines in Jenkins. One is the test pipeline, which runs tests against the application code. The other is the delivery pipeline, which builds the container image, uploads it to the image repo, and then updates the application configuration.

Test Pipeline#

Create a new pipeline in Jenkins. Set Build Triggers as GitHub hook trigger for GITScm polling.

test-pipeline-create test-pipeline-config

This pipeline uses the golang image as its execution environment. It checks out the dev branch of the target GitHub repo, which means the pipeline is triggered by push events to the dev branch. The pipeline status is written back to GitHub after execution finishes.

void setBuildStatus(String message, String state) {
    step([
        $class: "GitHubCommitStatusSetter",
        reposSource: [$class: "ManuallyEnteredRepositorySource", url: "https://github.com/Somefive/KubeVela-demo-CICD-app"],
        contextSource: [$class: "ManuallyEnteredCommitContextSource", context: "ci/jenkins/test-status"],
        errorHandlers: [[$class: "ChangingBuildStatusErrorHandler", result: "UNSTABLE"]],
        statusResultSource: [$class: "ConditionalStatusResultSource", results: [[$class: "AnyBuildResult", message: message, state: state]]]
    ]);
}

pipeline {
    agent {
        docker { image 'golang:1.13-rc-alpine3.10' }
    }
    stages {
        stage('Prepare') {
            steps {
                script {
                    def checkout = git branch: 'dev', url: 'https://github.com/Somefive/KubeVela-demo-CICD-app.git'
                    env.GIT_COMMIT = checkout.GIT_COMMIT
                    env.GIT_BRANCH = checkout.GIT_BRANCH
                    echo "env.GIT_BRANCH=${env.GIT_BRANCH},env.GIT_COMMIT=${env.GIT_COMMIT}"
                }
                setBuildStatus("Test running", "PENDING");
            }
        }
        stage('Test') {
            steps {
                sh 'CGO_ENABLED=0 GOCACHE=$(pwd)/.cache go test *.go'
            }
        }
    }
    post {
        success {
            setBuildStatus("Test success", "SUCCESS");
        }
        failure {
            setBuildStatus("Test failed", "FAILURE");
        }
    }
}

Delivery Pipeline#

The delivery pipeline, similar to the test pipeline, first pulls the code from the prod branch of the git repo. Then it uses Docker to build the image and push it to the remote image repo (here we use DockerHub; the withRegistry function takes the image repo location and the Credential ID of the repo as parameters). After the image is built, the pipeline converts the Application YAML into JSON, with GIT_COMMIT injected. Finally, the pipeline sends a POST request to the KubeVela apiserver (http://47.88.24.19/ in this example) to create or update the target application.

Currently, the KubeVela apiserver takes JSON objects as input, therefore we do an extra conversion in the delivery pipeline. In the future, the KubeVela apiserver will further improve and simplify this interaction process. Admission management will be added as well to address security issues.

In this case we will create an application named cicd-demo-app in the Namespace kubevela-demo-namespace. Notice that the Namespace needs to be created in Kubernetes in advance, as shown below (the KubeVela apiserver will simplify this in later versions).
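
For completeness, a minimal manifest for preparing that namespace could look like this (apply it with kubectl apply -f before running the pipeline):

# namespace.yaml -- the target namespace must exist before the pipeline posts the application
apiVersion: v1
kind: Namespace
metadata:
  name: kubevela-demo-namespace

The full delivery pipeline is as follows: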

void setBuildStatus(String message, String state) {
    step([
        $class: "GitHubCommitStatusSetter",
        reposSource: [$class: "ManuallyEnteredRepositorySource", url: "https://github.com/Somefive/KubeVela-demo-CICD-app"],
        contextSource: [$class: "ManuallyEnteredCommitContextSource", context: "ci/jenkins/deploy-status"],
        errorHandlers: [[$class: "ChangingBuildStatusErrorHandler", result: "UNSTABLE"]],
        statusResultSource: [$class: "ConditionalStatusResultSource", results: [[$class: "AnyBuildResult", message: message, state: state]]]
    ]);
}

pipeline {
    agent any
    stages {
        stage('Prepare') {
            steps {
                script {
                    def checkout = git branch: 'prod', url: 'https://github.com/Somefive/KubeVela-demo-CICD-app.git'
                    env.GIT_COMMIT = checkout.GIT_COMMIT
                    env.GIT_BRANCH = checkout.GIT_BRANCH
                    echo "env.GIT_BRANCH=${env.GIT_BRANCH},env.GIT_COMMIT=${env.GIT_COMMIT}"
                    setBuildStatus("Deploy running", "PENDING");
                }
            }
        }
        stage('Build') {
            steps {
                script {
                    docker.withRegistry("https://registry.hub.docker.com", "DockerHubCredential") {
                        def customImage = docker.build("somefive/kubevela-demo-cicd-app")
                        customImage.push()
                    }
                }
            }
        }
        stage('Deploy') {
            steps {
                // download yq for the YAML-to-JSON conversion
                sh 'wget -q "https://github.com/mikefarah/yq/releases/download/v4.12.1/yq_linux_amd64"'
                sh 'chmod +x yq_linux_amd64'
                script {
                    // convert the Application YAML to JSON and inject the real commit id
                    def app = sh (
                        script: "./yq_linux_amd64 eval -o=json '.spec' app.yaml | sed -e 's/GIT_COMMIT/$GIT_COMMIT/g'",
                        returnStdout: true
                    )
                    echo "app: ${app}"
                    // POST the application spec to the KubeVela apiserver
                    def response = httpRequest acceptType: 'APPLICATION_JSON', contentType: 'APPLICATION_JSON', httpMode: 'POST', requestBody: app, url: "http://47.88.24.19/v1/namespaces/kubevela-demo-namespace/applications/cicd-demo-app"
                    println('Status: ' + response.status)
                    println('Response: ' + response.content)
                }
            }
        }
    }
    post {
        success {
            setBuildStatus("Deploy success", "SUCCESS");
        }
        failure {
            setBuildStatus("Deploy failed", "FAILURE");
        }
    }
}

NOTE: the deploy stage above is written for KubeVela v1.1. The apiserver interaction method is updated in KubeVela v1.2, leveraging VelaUX (the UI dashboard) and webhook triggers. If you are using KubeVela v1.2.0+, you should refer to the latest documents.

Demonstration#

After finishing the configuration above, the whole continuous delivery process is set up. Let's check how it works.

pipeline-overview

First, we set the VERSION constant in main.go to Bad Version Number, an invalid version string:

const VERSION = "Bad Version Number"

Then we push this change to the dev branch. We can see that the test pipeline in Jenkins is triggered and the failure status is written back to GitHub.

test-pipeline-fail test-github-fail

We change VERSION to 0.1.1 and resubmit. Now the test pipeline executes successfully, and the commit in GitHub is marked as succeeded.

test-pipeline-success test-github-success

Then we issue a Pull Request to merge the dev branch into the prod branch.

pull-request

The Jenkins delivery pipeline is triggered once the Pull Request is accepted. After execution finishes, the latest commit in the prod branch is also marked as succeeded.

deploy-pipeline-success deploy-github-success

$ kubectl get app -n kubevela-demo-namespace
NAME                     COMPONENT               TYPE         PHASE     HEALTHY   STATUS   AGE
kubevela-demo-cicd-app   kubevela-demo-app-web   webservice   running   true               112s
$ kubectl get deployment -n kubevela-demo-namespace
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
kubevela-demo-app-web-v1   2/2     2            2           2m1s

As shown above, the target application is successfully accepted by the KubeVela apiserver, and the related resources are created by the KubeVela controller. The current replica count of the Deployment is 2. After deleting batchPartition: 0 from the rollout trait of the application, which confirms the current release, the Deployment replicas are scaled up to 5 (the edited trait is sketched below). Now we can access the domain configured in the Ingress and get the current version number.
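
Concretely, after the edit the rollout trait in the Application is identical to the one in app.yaml minus the batchPartition line, which lets the remaining batch proceed:

# the rollout trait after removing batchPartition: 0
traits:
  - type: rollout
    properties:
      rolloutBatches:
        - replicas: 2
        - replicas: 3
      targetSize: 5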

$ kubectl edit app -n kubevela-demo-namespace
application.core.oam.dev/kubevela-demo-cicd-app edited
$ kubectl get deployment -n kubevela-demo-namespace -w
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
kubevela-demo-app-web-v1   4/5     5            4           3m39s
kubevela-demo-app-web-v1   5/5     5            5           3m39s
kubevela-demo-app-web-v1   5/5     5            5           3m40s
kubevela-demo-app-web-v1   5/5     5            5           3m40s
$ curl http://kubevela-demo-cicd-app.cf7c0ed25b151437ebe1ef58efc29bca4.us-west-1.alicontainer.com/
Version: 0.1.1

Repeat the steps above: upgrade the version number to 0.1.2 and let both the test pipeline and the delivery pipeline finish. Then we will see a new version of the Deployment managed by the target application. The replica count of the old Deployment decreases from 5 to 3, while the new one holds 2 replicas at this moment. If we access the service now, we sometimes get the old version number and sometimes the new one. This is because both new-version and old-version replicas exist during the rolling update, and incoming traffic is dispatched to replicas of both versions. Therefore we can observe two different versions at the same time.

$ kubectl get deployment -n kubevela-demo-namespace -w
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
kubevela-demo-app-web-v1   5/5     5            5           11m
kubevela-demo-app-web-v2   0/0     0            0           0s
kubevela-demo-app-web-v2   0/0     0            0           0s
kubevela-demo-app-web-v2   0/0     0            0           0s
kubevela-demo-app-web-v1   5/5     5            5           12m
kubevela-demo-app-web-v2   0/0     0            0           0s
kubevela-demo-app-web-v2   0/0     0            0           0s
kubevela-demo-app-web-v2   0/0     0            0           0s
kubevela-demo-app-web-v2   0/0     0            0           0s
kubevela-demo-app-web-v2   0/2     0            0           0s
kubevela-demo-app-web-v2   0/2     0            0           0s
kubevela-demo-app-web-v2   0/2     0            0           0s
kubevela-demo-app-web-v2   0/2     2            0           0s
kubevela-demo-app-web-v1   5/5     5            5           12m
kubevela-demo-app-web-v2   1/2     2            1           2s
kubevela-demo-app-web-v2   2/2     2            2           2s
kubevela-demo-app-web-v1   5/3     5            5           13m
kubevela-demo-app-web-v1   5/3     5            5           13m
kubevela-demo-app-web-v1   3/3     3            3           13m
$ curl http://kubevela-demo-cicd-app.cf7c0ed25b151437ebe1ef58efc29bca4.us-west-1.alicontainer.com/
Version: 0.1.1
$ curl http://kubevela-demo-cicd-app.cf7c0ed25b151437ebe1ef58efc29bca4.us-west-1.alicontainer.com/
Version: 0.1.1
$ curl http://kubevela-demo-cicd-app.cf7c0ed25b151437ebe1ef58efc29bca4.us-west-1.alicontainer.com/
Version: 0.1.1
$ curl http://kubevela-demo-cicd-app.cf7c0ed25b151437ebe1ef58efc29bca4.us-west-1.alicontainer.com/
Version: 0.1.1
$ curl http://kubevela-demo-cicd-app.cf7c0ed25b151437ebe1ef58efc29bca4.us-west-1.alicontainer.com/
Version: 0.1.2
$ curl http://kubevela-demo-cicd-app.cf7c0ed25b151437ebe1ef58efc29bca4.us-west-1.alicontainer.com/
Version: 0.1.2
$ curl http://kubevela-demo-cicd-app.cf7c0ed25b151437ebe1ef58efc29bca4.us-west-1.alicontainer.com/
Version: 0.1.2
$ curl http://kubevela-demo-cicd-app.cf7c0ed25b151437ebe1ef58efc29bca4.us-west-1.alicontainer.com/
Version: 0.1.2
$ curl http://kubevela-demo-cicd-app.cf7c0ed25b151437ebe1ef58efc29bca4.us-west-1.alicontainer.com/
Version: 0.1.1

After confirming the new version is functioning correctly, we can remove batchPartition: 0 as described above to complete the whole canary release process.

$ kubectl get deployment -n kubevela-demo-namespace -w
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
kubevela-demo-app-web-v1   3/3     3            3           18m
kubevela-demo-app-web-v2   2/2     2            2           5m24s
kubevela-demo-app-web-v2   2/5     2            2           5m36s
kubevela-demo-app-web-v2   2/5     2            2           5m37s
kubevela-demo-app-web-v2   2/5     2            2           5m37s
kubevela-demo-app-web-v2   2/5     5            2           5m37s
kubevela-demo-app-web-v2   3/5     5            3           5m38s
kubevela-demo-app-web-v2   4/5     5            4           5m38s
kubevela-demo-app-web-v2   5/5     5            5           5m39s
kubevela-demo-app-web-v1   3/0     3            3           18m
kubevela-demo-app-web-v1   3/0     3            3           18m
kubevela-demo-app-web-v1   0/0     0            0           18m
kubevela-demo-app-web-v1   0/0     0            0           18m
kubevela-demo-app-web-v2   5/5     5            5           5m41s
kubevela-demo-app-web-v2   5/5     5            5           5m41s
kubevela-demo-app-web-v1   0/0     0            0           18m

Conclusion#

In summary, we executed the whole continuous delivery process successfully. With the help of KubeVela and Jenkins, developers can easily update and deploy their applications. Besides, developers can use their favorite tools at different stages, such as substituting GitLab for GitHub, or Travis CI for Jenkins.

Readers might also notice that this process can not only upgrade the application service but also change the deployment plan by editing app.yaml, such as scaling up or adding sidecars, which works like classic push-style GitOps. For more KubeVela GitOps content, refer to other related case studies.