
10 posts tagged with "use-case"


· 15 min read
Fog Dong

Serverless Application Engine (SAE) is a Kubernetes-based cloud product that combines the Serverless architecture and the microservice model. As a fast-iterating cloud product, it has encountered many challenges in the process of rapid development. How can we solve these challenges in the booming cloud-native era and perform reliable, fast architecture upgrades? The SAE team and the KubeVela community worked closely to address these challenges and came up with a replicable open-source solution, KubeVela Workflow.

This article describes how to use KubeVela Workflow to upgrade the architecture of SAE and interprets multiple practice scenarios.

Challenges in the Serverless Era

SAE is an application hosting platform for business application architecture and microservices. It is a Kubernetes-based cloud product that combines the Serverless architecture and the microservice model.

[Image: SAE architecture diagram]

As shown in the preceding architecture diagram, SAE users can host multiple types of applications on SAE. At the underlying layer of SAE, the Java business layer processes the relevant business logic and interacts with Kubernetes resources. At the bottom, it relies on highly available, O&M-free, and pay-as-you-go elastic resource pools.

In this architecture, SAE mainly relies on its Java business layer to provide capabilities to users. This makes it easy for users to deploy applications, but it also brings some challenges.

With the continuous development of Serverless, SAE has encountered three major challenges:

  1. SAE engineers have to perform some complex, non-standardized operations. How can we automate these complex operations to reduce development effort?
  2. With the development of the business, SAE's application delivery capability is favored by a large number of users. User growth also brings efficiency challenges: how can we optimize the existing delivery capability and improve efficiency under high concurrency?
  3. As the Serverless architecture continues to be adopted by enterprises, numerous cloud vendors are making their product systems serverless. In such a wave, how can SAE quickly connect with internal Serverless capabilities and reduce development costs while launching new features?

The preceding three challenges show that SAE needs some kind of orchestration engine to upgrade the delivery process, integrate with internal capabilities, and automate operations.

So, what requirements does this orchestration engine need to meet to solve these challenges?

  1. High Scalability: The steps in the process need to be highly scalable. Only in this way can the originally non-standardized and complex operations be standardized into steps. Meanwhile, the steps can be combined with the process control capability of the orchestration engine, thus reducing the consumption of development resources.
  2. Lightweight and Efficient: This orchestration engine must be efficient and production-ready to meet the high-concurrency requirements of SAE in large-scale user scenarios.
  3. Strong Integration and Process Control Capabilities: This orchestration engine needs to quickly integrate with the atomic capabilities within the company. If the glue code that connects upstream and downstream capabilities can be turned into processes in the orchestration engine, development costs can be reduced.

Based on these challenges and considerations, the SAE team and the KubeVela community have conducted in-depth cooperation and launched the KubeVela Workflow project as an orchestration engine.

Why Use KubeVela Workflow?

Thanks to the booming cloud-native ecosystem, there are already many mature workflow projects in the community (such as Tekton, Argo, etc.). There are also some orchestration engines within Alibaba Cloud. So, why reinvent the wheel instead of using existing technology?

This is because KubeVela Workflow has a fundamental difference in design. The steps in the workflow are designed for the cloud-native IaC system and support abstract encapsulation and reuse, which means that you can use atomic capabilities like a function call in every step, instead of just creating pods or containers.

[Image: step types and WorkflowStepDefinitions in KubeVela Workflow]

In KubeVela Workflow, each step has a step type, and each step type corresponds to a resource called WorkflowStepDefinition. You can use the CUE language (an IaC language and a superset of JSON) to write the step definition, or directly use step types already defined by the community.

You can simply regard a WorkflowStepDefinition as a function declaration. Each time a new step type is defined, a new function is defined. Just as a function requires input parameters, so does a step definition: in the step definition, you can declare the input parameters required for this step type in the parameter field. When the workflow runs, the workflow controller executes the CUE code in the corresponding step definition with the actual parameter values provided by the user, just as it would execute your custom function.
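
For example, a minimal, hypothetical step definition might look like the following. The say-hello name and the greeting logic are purely illustrative; op.#Log is one of the operations provided by the vela/op package:

apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
  name: say-hello
  namespace: vela-system
spec:
  schematic:
    cue:
      template: |
        import "vela/op"

        // The "function body": record a message in the step logs
        log: op.#Log & {
          data: {message: "Hello, \(parameter.who)"}
        }
        // The "function signature": users provide these values when using the step type
        parameter: {
          who: *"world" | string
        }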

With such a layer of abstraction over the steps, huge possibilities open up:

  • If you want to customize a step type (just like writing a new function), you can import official code packages in the step definition, thus introducing other atomic capabilities into the step, including HTTP requests, CRUD operations on resources in multiple clusters, conditional waiting, etc. With such a programmable step type, you can easily integrate with any system. For example, in the SAE scenario, the step definitions are integrated with other internal atomic capabilities (such as MSE, ACR, ALB, and SLS), and the workflow orchestration capability is then used to control the process.

[Image: step definitions integrating internal atomic capabilities such as MSE, ACR, ALB, and SLS]

  • If you only want to use defined steps, just like calling a packaged third-party function, the only thing you need to care about is your input parameters and the step type you want. For example, in a typical image-building scenario, first specify the step type as build-push-image, and then specify your input parameters: the code source and branch to build from, the name of the image to build, and the secret of the image registry to push to.
apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: build-push-image
  namespace: default
spec:
  workflowSpec:
    steps:
      - name: build-push
        type: build-push-image
        properties:
          context:
            git: github.com/FogDong/simple-web-demo
            branch: main
          image: fogdong/simple-web-demo:v1
          credentials:
            image:
              name: image-secret

In such an architecture, the abstraction of steps brings infinite possibilities to the workflow. When you need to add a step to the workflow, you no longer need to compile, build, and package the business code and then use a pod to execute it. You only need to modify the configuration code in the step definition and combine it with the workflow engine's orchestration and control capabilities to integrate new features.

This is also the main reason why SAE chose KubeVela Workflow. Based on this scalability, we can fully leverage the power of the ecosystem and accelerate product upgrades.

Cases

Next, let's go deeper into the use cases in SAE.

Case 1: Automated Operations

The first scenario is an automated operations scenario for SREs in SAE.

In SAE, we write and update some base images for users. We need to preload these images to multiple clusters in different regions to provide a better experience for users who use these base images.

The original operation process was very complicated. It involved building images and pushing them across multiple regions using ACR, as well as creating image cache templates and managing those image caches. The regions include Shanghai, US West, Shenzhen, and Singapore. These operations are non-standardized and time-consuming: when an SRE pushes these images from local clusters to overseas clusters, the push is likely to fail because of network problems, so SREs have to spend energy on operations that could otherwise be automated.

This is exactly the kind of scenario KubeVela Workflow is suited for: each of these operations can be converted into a step in the workflow programmatically, so that the steps can be orchestrated and reused. In addition, KubeVela Workflow provides a visual dashboard. SREs only need to configure a pipeline template once and can then automate the process, either by triggering execution directly or by entering specific runtime parameters each time the execution is triggered.

The simplified steps are as follows (a sketch of the corresponding workflow appears after the list):

  1. Use the HTTP request step type to build an image by calling the ACR service and pass the image ID to the next step through inputs/outputs. In this step definition, you need to wait until the ACR build is done before ending the execution of the current step.
  2. If the first step fails, execute the error-handling step.
  3. If the first step succeeds, use the HTTP request step to build the image cache; at the same time, the ACR service's logs are used as the log source of the current step, so you can view the logs of the step directly in the dashboard to troubleshoot problems.
  4. Use a step group with the deploy step type to preload images for clusters in the China (Shanghai) region and the US (West) region. The multi-cluster management and control capabilities of KubeVela Workflow are used to distribute the ImagePullJob workload to multiple clusters to preload images.
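
The following is a simplified sketch of such a workflow, intended only to show the shape of the orchestration. The ACR endpoint, webhook URL, cluster name, and the ImagePullJob spec are hypothetical; request, notification, step-group, and apply-object are step types provided by the community:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: preload-base-image
  namespace: default
spec:
  workflowSpec:
    steps:
      - name: build
        type: request                  # HTTP request to the (hypothetical) ACR build service
        properties:
          url: https://acr.example.com/api/build
          method: POST
          body:
            image: sae-base-image:v1
        # the returned image ID can be passed to later steps via outputs/inputs (omitted here)
      - name: notify-failure
        type: notification             # error-handling step, runs only if the build step fails
        if: status.build.phase == "failed"
        properties:
          slack:
            url:
              value: https://hooks.slack.com/services/xxx   # hypothetical webhook
            message:
              text: "Base image build failed"
      - name: preload
        type: step-group               # distribute ImagePullJob to multiple clusters
        if: status.build.phase == "succeeded"
        subSteps:
          - name: preload-shanghai
            type: apply-object
            properties:
              cluster: cn-shanghai     # hypothetical cluster name; a US (West) sub-step looks the same
              value:
                apiVersion: apps.kruise.io/v1alpha1
                kind: ImagePullJob
                metadata:
                  name: preload-base-image
                spec:
                  image: sae-base-image:v1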

[Image: the automated image build and preload workflow]

In the preceding process, if you do not use KubeVela Workflow, you may need to write a pile of business code to connect multiple services and clusters. Take the last step, distributing the ImagePullJob workload to multiple clusters, as an example: not only do you need to manage the configuration of multiple clusters, you also need to watch the status of the workload (a CRD) until it becomes Ready before proceeding to the next step. This process corresponds to a simple Kubernetes Operator's reconcile logic: first create or update a resource; if the status of the resource is as expected, end the reconcile; if not, continue to wait.

Do we need to implement an Operator for every new resource we manage in our operations? Is there a more convenient way to free us from complicated Operator development?

It is precisely because of the programmability of steps in KubeVela Workflow that it can completely cover these operation and resource management needs in SAE scenarios, helping engineers reduce manual effort. Similar to the logic above, the corresponding step definition in KubeVela Workflow is very simple. No matter what kind of resource (or even an HTTP interface request) is involved, it can be covered by a step template like the following:

template: {
  // First, read resources from the specified cluster
  read: op.#Read & {
    value: {
      apiVersion: parameter.apiVersion
      kind:       parameter.kind
      metadata: {
        name:      parameter.name
        namespace: parameter.namespace
      }
    }
    cluster: parameter.cluster
  }
  // Second, wait until the resource is Ready. Otherwise, the step keeps waiting
  wait: op.#ConditionalWait & {
    continue: read.value.status != _|_ && read.value.status.phase == "Ready"
  }
  // Third (optional), if the resource is Ready, then...
  // Custom Logic...

  // Users must input the defined parameters when using the step type
  parameter: {
    apiVersion: string
    kind:       string
    name:       string
    namespace:  *context.namespace | string
    cluster:    *"" | string
  }
}

In the current case, this corresponds to:

  • First: Read the status of ImagePullJob in a specified cluster, such as the cluster in the China (Shanghai) region.
  • Second: If the ImagePullJob is ready and the image has been preloaded, continue to execute.
  • Third (optional): After the ImagePullJob is ready, clean up the ImagePullJob in the cluster.

In this way, no matter how many regional clusters or new resource types are added in subsequent O&M scenarios, you can let KubeVela Workflow manage the clusters' KubeConfigs and use the defined step types with different cluster names or resource types, achieving a simplified Kubernetes Operator reconcile process and reducing development costs.
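
For example, assuming the step definition above is registered as a step type named read-until-ready (the name, namespace, and cluster below are illustrative), waiting for the ImagePullJob in the Shanghai cluster becomes a single step in the workflow:

- name: wait-imagepulljob-shanghai
  type: read-until-ready               # hypothetical name for the step type defined above
  properties:
    apiVersion: apps.kruise.io/v1alpha1
    kind: ImagePullJob
    name: preload-base-image
    namespace: sae-system              # hypothetical namespace
    cluster: cn-shanghai               # hypothetical cluster name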

Case 2: Optimize the Existing Delivery Process

In addition to automating internal O&M operations, upgrading the original SAE product architecture to improve delivery efficiency for users is another important reason for choosing KubeVela Workflow.

Original Architecture

In the delivery scenario of SAE, a delivery process corresponds to a series of tasks, such as: initializing the environment, building images, releasing in batches, etc. This series of tasks corresponds to the SAE Tasks in the figure below.

In SAE's original architecture, these tasks are sequentially handed to the Java Executor, which executes the business logic, such as creating resources in Kubernetes and synchronizing the status of the current task to the MySQL database.

After the current task is completed, the Java Executor gets the next task from SAE's original orchestration engine; at the same time, the orchestration engine continuously puts new tasks into the task list.

[Image: SAE's original task orchestration architecture]

The biggest problem with this old architecture is the polling: the Java Executor polls the SAE task list every second to check whether there are new tasks; likewise, after the Java Executor creates Kubernetes resources, it attempts to get the status of the resources from the cluster every second.

The original architecture of SAE is not the controller model in the Kubernetes ecosystem, but the polling model. If the orchestration engine layer is upgraded to the controller model for event watching, it can better integrate with the entire Kubernetes ecosystem and improve efficiency.

[Image: upgrading from the polling model to the controller model]

However, the business logic is deeply coupled in the original architecture. If a traditional container-based cloud-native workflow were used, SAE would need to package the original business logic into images, then maintain and update a large number of images, which is not a sustainable path. We hope the upgraded workflow engine can be easily integrated with the task orchestration layer, the business execution layer, and the Kubernetes cluster.

New Architecture

With the high scalability of KubeVela Workflow, SAE engineers do not need to repackage the original capabilities into images or make large-scale modifications.

[Image: the new delivery architecture based on KubeVela Workflow]

The new process is shown in the preceding figure. After a delivery plan is created on the SAE product side, the business side writes the model to the database, converts the model, and generates a KubeVela Workflow, which corresponds to the YAML on the right side of the figure.

At the same time, SAE's original Java Executor now provides the original business capabilities as microservice APIs. When KubeVela Workflow is executed, each step is IaC-based, with the underlying implementation in the CUE language. Some steps call SAE's business microservice APIs, while others interact directly with the underlying Kubernetes resources. Data can be passed between steps, and if a call returns an error, the conditional judgment of steps can be used to handle it.
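
As a rough sketch of this pattern, a step definition wrapping one of SAE's business microservice APIs might look like the following. The step type name, service URL, and request fields are hypothetical; op.#HTTPDo and op.#Fail are operations provided by the vela/op package:

apiVersion: core.oam.dev/v1beta1
kind: WorkflowStepDefinition
metadata:
  name: sae-release-batch              # hypothetical step type name
  namespace: vela-system
spec:
  schematic:
    cue:
      template: |
        import (
          "vela/op"
          "encoding/json"
        )

        // Call SAE's business microservice API (hypothetical endpoint)
        release: op.#HTTPDo & {
          method: "POST"
          url:    "http://sae-java-executor.sae-system:8080/api/v1/release"
          request: body: json.Marshal({
            appId: parameter.appId
            batch: parameter.batch
          })
        }
        // Fail the step if the microservice does not return 200
        if release.response.statusCode != 200 {
          fail: op.#Fail & {
            message: "SAE release API returned \(release.response.statusCode)"
          }
        }
        parameter: {
          appId: string
          batch: int
        }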

Such an optimization is scalable, fully reuses the Kubernetes ecosystem, extends workflow processes and atomic capabilities, and is oriented toward the final state. This combination of scalability and process control covers the original business functions while reducing the amount of development work. At the same time, status update latency drops from the minute level to the millisecond level, and the development time for a new capability drops from about a day to about an hour, while YAML-level descriptive ability is retained.

Case 3: Launch New Features Quickly

In addition to automated O&M and upgrading the original architecture, what else can KubeVela Workflow provide?

The reusability of steps and the ease of integration with other ecosystems bring SAE more than just an architecture upgrade: by shifting from writing business code to orchestrating different steps, new product features can be launched quickly!

SAE has accumulated a solid Java foundation and supports a wealth of functions, such as single-batch release, multi-batch release, and canary release for Java microservices. However, as the customer base grows, some customers have put forward new requirements: they want canary releases that split north-south traffic for multi-language applications.

There are also many mature open-source products for canary release, such as Kruise Rollout. After investigation, SAE engineers found that Kruise Rollout can be used to implement the canary release capability and can cooperate with Alibaba Cloud's internal ingress controller (ALB) to split traffic.

[Image: canary release architecture integrating ALB, Kruise Rollout, and SAE components]

Such a solution is shown in the architecture diagram above. SAE dispatches a KubeVela Workflow whose steps integrate with Alibaba Cloud ALB, the open-source Kruise Rollout, and SAE's business components at the same time. Batch management is completed in the steps, enabling the rapid launch of the feature.

In fact, after adopting KubeVela Workflow, it is no longer necessary to write new business code for this feature. Only a new step type for updating canary batches needs to be written.

Thanks to the programmability of step types, we can easily use different patch policies in the definition to update the release batches of Rollout objects in different clusters. Moreover, the step type of every step in a workflow is reusable. This means that when you develop a new step type, you are laying the foundation for the next new feature. This reuse allows you to launch features quickly and reduce development costs.
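
For example, once such a step type exists (here called rollout-next-batch, a hypothetical name), the release workflow only needs to orchestrate it together with the built-in suspend step for manual confirmation between batches:

workflow:
  steps:
    - name: canary-batch-1
      type: rollout-next-batch         # hypothetical custom step type that patches the Rollout to the next batch
      properties:
        batch: 1
        cluster: cn-shanghai           # hypothetical cluster name
    - name: manual-approval
      type: suspend                    # built-in step: pause until a human confirms the canary result
    - name: canary-batch-2
      type: rollout-next-batch
      properties:
        batch: 2
        cluster: cn-shanghai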

Summary

After SAE upgraded its architecture with KubeVela Workflow, it improved delivery efficiency and reduced development costs. Building on its underlying Serverless infrastructure, SAE can give full play to the product's advantages in application hosting.

In addition, the KubeVela Workflow architecture upgrade in SAE is a replicable open-source solution. The community provides more than 50 built-in step types, including image building, image pushing, image scanning, multi-cluster deployment, managing infrastructure with Terraform, conditional waiting, and message notification, to help you set up CI/CD easily.

[Image: built-in step types provided by the KubeVela community]

You can refer to the following documents for more usage scenarios:

The End

You can learn more about KubeVela and the OAM project through the following materials:

· 14 min read
Qiao Zhongpei

This article will focus on KubeVela and OpenYurt (two open-source projects of CNCF) and introduce the solution of cloud-edge collaboration in a practical Helm application delivery scenario.

Background

With the popularization of the Internet of Everything scenario, the computing power of edge devices is increasing. It is a new technological challenge to use the advantages of cloud computing to meet complex and diversified edge application scenarios and extend cloud-native technology to the end and edge. Cloud-Edge Collaboration is becoming a new technological focus. This article will focus on KubeVela and OpenYurt (two open-source projects of CNCF) and introduce the solution of cloud-edge collaboration in a practical Helm application delivery scenario.

OpenYurt focuses on extending Kubernetes to edge computing in a non-intrusive manner. Based on the container orchestration and scheduling capabilities of native Kubernetes, OpenYurt integrates edge computing power into the Kubernetes infrastructure for unified management. It provides capabilities (such as edge autonomy, efficient O&M channels, unitized edge management, edge traffic topology, secure containers, and edge Serverless/FaaS) and support for heterogeneous resources. In short, OpenYurt builds a unified infrastructure for cloud-edge collaboration in a Kubernetes-native manner.

Incubated from the OAM model, KubeVela focuses on helping enterprises build unified application delivery and management capabilities. It shields developers from the complexity of the underlying infrastructure and provides flexible scaling capabilities. It also provides out-of-the-box microservice container management, cloud resource management, versioning and canary release, scaling, observability, resource dependency orchestration and data delivery, multi-cluster deployment, CI integration, and GitOps. This maximizes developers' R&D efficiency through self-service application management and meets the extensibility demands of the platform's long-term evolution.

OpenYurt + KubeVela - What Problems Can be Solved?

As mentioned before, OpenYurt supports the access of edge nodes, allowing users to manage edge nodes by operating native Kubernetes. "Edge nodes" represent computing resources closer to users, such as virtual machines or physical servers in a nearby data center. After you add them through OpenYurt, these edge nodes are converted into nodes that can be used in Kubernetes. OpenYurt uses NodePool to describe a group of edge nodes in the same region. With basic resource management in place, we have the following core requirements for orchestrating and deploying applications to different NodePools in a cluster.

  1. Unified Configuration: If you manually modify each resource to be distributed, there is a lot of manual intervention, which is prone to errors. We need a unified way to configure parameters, which not only facilitates batch management operations but also allows integration with security and audit systems to meet enterprise risk-control and compliance requirements.
  2. Differential Deployment: Most workloads deployed to different NodePools have the same attributes, but there are still personalized configuration differences. The key is how to set the parameters related to node selection. For example, NodeSelector can instruct the Kubernetes scheduler to schedule workloads to different NodePools.
  3. Scalability: The cloud-native ecosystem is booming. Both workload types and different O&M functions are growing. We need the overall application architecture to be scalable so that we can fully benefit from the dividends of the cloud-native ecosystem and meet business needs flexibly.

KubeVela and OpenYurt can complement each other at the application layer to meet the preceding three core requirements. Next, I will show these functions with the operation process.

Deploy an Application to the Edge

We will use the Ingress controller as an example to show how to use KubeVela to deploy applications to the edge. We want to deploy the Nginx Ingress controller to multiple NodePools so that the services provided by a specified NodePool can be accessed through the edge Ingress, and an Ingress is handled only by the Ingress controller in its own NodePool.

The cluster in the schematic diagram contains two NodePools: Beijing and Shanghai. The networks between them are not interconnected. We want to deploy an Nginx Ingress Controller, which can act as the network traffic ingress for each NodePool. A client close to Beijing can access the services provided in the Beijing NodePool by accessing the Ingress Controller of the Beijing NodePool and does not access the services provided in the Shanghai NodePool.

NodePool Structure

Basic Environment of Demo

We will use a Kubernetes cluster to simulate the edge scenario. The cluster has three nodes, and their roles are:

  • Node 1: master node, cloud node
  • Node 2: worker node, edge node, in NodePool Beijing
  • Node 3: worker node, edge node, in NodePool Shanghai

Preparations

1. Install YurtAppManager

YurtAppManager is the core component of OpenYurt. It provides the NodePool CRD and its controller. There are other components in OpenYurt, but we only need YurtAppManager in this tutorial.

git clone https://github.com/openyurtio/yurt-app-manager
cd yurt-app-manager && helm install yurt-app-manager -n kube-system ./charts/yurt-app-manager/

2. Install KubeVela and Enable the FluxCD Addon

Install the Vela command-line tool and install KubeVela in the cluster:

curl -fsSl https://kubevela.net/script/install.sh | bash
vela install

We want to reuse the mature Helm charts provided by the community, so we use Helm-type components to install the Nginx Ingress Controller. In KubeVela's microkernel design, the Helm component is provided by the FluxCD addon. Enable the FluxCD addon as follows.

vela addon enable fluxcd

3. Prepare NodePool

Create two NodePools: Beijing and Shanghai. Dividing NodePools by region is a common pattern in actual edge scenarios. Different groups of nodes often have obvious isolation attributes (such as network disconnection, no resource sharing, resource heterogeneity, and application independence). This is the origin of the NodePool concept. In OpenYurt, features (such as NodePools and service topologies) are used to help users deal with the preceding issues. In today's example, we will use NodePools to describe and manage nodes:

kubectl apply -f - <<EOF
apiVersion: apps.openyurt.io/v1beta1
kind: NodePool
metadata:
  name: beijing
spec:
  type: Edge
  annotations:
    apps.openyurt.io/example: test-beijing
  taints:
    - key: apps.openyurt.io/example
      value: beijing
      effect: NoSchedule
---
apiVersion: apps.openyurt.io/v1beta1
kind: NodePool
metadata:
  name: shanghai
spec:
  type: Edge
  annotations:
    apps.openyurt.io/example: test-shanghai
  taints:
    - key: apps.openyurt.io/example
      value: shanghai
      effect: NoSchedule
EOF

Add the edge nodes to their respective NodePools. Please refer to how OpenYurt nodes are added for more details on joining edge nodes.

kubectl label node <node2> apps.openyurt.io/desired-nodepool=beijing
kubectl label node <node3> apps.openyurt.io/desired-nodepool=shanghai
kubectl get nodepool

Expected Output

NAME       TYPE   READYNODES   NOTREADYNODES   AGE
beijing    Edge   1            0               6m2s
shanghai   Edge   1            0               6m1s

Deploy Edge Applications in Batches

Before we get into the details, let's look at how KubeVela describes and deploys applications to the edge. With the following application, we can deploy multiple Nginx Ingress Controllers to their respective edge NodePools. Using the same application to uniformly configure the Nginx Ingress eliminates duplication and reduces the management burden. It also facilitates unified operations on the components in the cluster, such as releases and other O&M tasks.

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: edge-ingress
spec:
  components:
    - name: ingress-nginx
      type: helm
      properties:
        chart: ingress-nginx
        url: https://kubernetes.github.io/ingress-nginx
        repoType: helm
        version: 4.3.0
        values:
          controller:
            service:
              type: NodePort
            admissionWebhooks:
              enabled: false
      traits:
        - type: edge-nginx
  policies:
    - name: replication
      type: replication
      properties:
        selector: ["ingress-nginx"]
        keys: ["beijing", "shanghai"]
  workflow:
    steps:
      - name: deploy
        type: deploy
        properties:
          policies: ["replication"]

A KubeVela application has three parts.

  1. Helm Type Component: It describes the version of the Helm package that we want to install into the cluster. In addition, we attach a trait to this component: edge-nginx. We'll show the details of this trait later; you can regard it as a patch that contains the properties for different NodePools.
  2. The Replication Policy: It describes how to replicate components to different NodePools. The selector field is used to select the components to be replicated, and the keys field turns one component into two components with different keys ("beijing" and "shanghai").
  3. Deploy Workflow Step: It describes how to deploy the application. It references the replication policy to perform the replication work.
note
  1. If you want this application to work properly, please apply the edge-ingress trait described below in the cluster first.
  2. Deploy is a KubeVela built-in workflow step. It can also be used with the override and topology policies in multi-cluster scenarios, as sketched below.
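
As a rough illustration of that combination (the cluster name and replica count here are illustrative), a topology policy selects the target clusters, an override policy adjusts the Helm values for those targets, and the deploy step references both:

policies:
  - name: target-beijing
    type: topology                   # select the clusters to deploy to
    properties:
      clusters: ["beijing"]
  - name: override-replicas
    type: override                   # adjust component properties for the selected targets
    properties:
      components:
        - name: ingress-nginx
          properties:
            values:
              controller:
                replicaCount: 2
workflow:
  steps:
    - name: deploy-beijing
      type: deploy
      properties:
        policies: ["target-beijing", "override-replicas"]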

Now, we can send the application to the cluster.

vela up -f app.yaml

Check the application status and resources created by KubeVela.

vela status edge-ingress --tree --detail

Expected Output

CLUSTER   NAMESPACE   RESOURCE                                  STATUS    APPLY_TIME            DETAIL
local  ─── default ─┬─ HelmRelease/ingress-nginx-beijing        updated   2022-11-02 12:00:24   Ready: True
                    │                                                                           Status: Release reconciliation succeeded  Age: 153m
                    ├─ HelmRelease/ingress-nginx-shanghai       updated   2022-11-02 12:00:24   Ready: True
                    │                                                                           Status: Release reconciliation succeeded  Age: 153m
                    └─ HelmRepository/ingress-nginx             updated   2022-11-02 12:00:24   URL: https://kubernetes.github.io/ingress-nginx  Age: 153m
                                                                                                Ready: True
                                                                                                Status: stored artifact for revision '7bce426c58aee962d479ca84e5cfc6931c19c8995e31638668cb958d4a3486c2'

The Vela CLI can stand at a higher level to uniformly show the health status of applications. When needed, it can also help you penetrate an application and directly access the underlying workloads, and it offers rich observability and debugging capabilities. For example, you can print an application's logs through vela logs, forward the application's ports locally through vela port-forward, or use the vela exec command to go deep into a container at the edge and run shell commands to troubleshoot problems.

If you need an intuitive understanding of the application, the KubeVela community provides a web console addon, VelaUX. You can view the detailed resource topology with the VelaUX addon.

vela addon enable velaux

Visit the Resource Topology page of VelaUX:

Resource Topology in VelaUX

As you can see, KubeVela creates two HelmReleases to deliver the Nginx Ingress Controller Helm chart to the two NodePools. The HelmRelease resources are processed by the FluxCD addon enabled earlier, and Nginx Ingress is installed in the two NodePools of the cluster. Run the following command to check whether the pods of the Ingress controller are created in the beijing NodePool. The same is true for the shanghai NodePool.

$ kubectl get node -l apps.openyurt.io/nodepool=beijing
NAME                      STATUS   ROLES    AGE   VERSION
iz0xi0r2pe51he3z8pz1ekz   Ready    <none>   23h   v1.24.7+k3s1

$ kubectl get pod ingress-nginx-beijing-controller-c4c7cbf64-xthlp -oyaml | grep iz0xi0r2pe51he3z8pz1ekz
  nodeName: iz0xi0r2pe51he3z8pz1ekz

Differentiated Deployment

How can a differentiated deployment of the same component be implemented during KubeVela application delivery? Let's continue to learn about the trait and policy that support this application. As mentioned above, we use KubeVela's built-in replication policy in the workflow and attach a custom edge-nginx trait to the ingress-nginx component.

  • The replication policy splits the component into two components with different context.replicaKey values.
  • The edge-nginx trait uses different context.replicaKey values to deliver Helm charts with different configurations to the cluster, so that the two Nginx Ingress Controllers run in different NodePools and watch Ingress resources with different ingressClasses. It patches the Helm chart values, modifying the fields related to node selection, affinity, and ingressClass.
  • Different patch strategies are used when patching different fields. For example, the retainKeys strategy overwrites the original value, while the jsonMergePatch strategy merges with the original value.
"edge-nginx": {
type: "trait"
annotations: {}
attributes: {
podDisruptive: true
appliesToWorkloads: ["helm"]
}
}
template: {
patch: {
// +patchStrategy=retainKeys
metadata: {
name: "\(context.name)-\(context.replicaKey)"
}
// +patchStrategy=jsonMergePatch
spec: values: {
ingressClassByName: true
controller: {
ingressClassResource: {
name: "nginx-" + context.replicaKey
controllerValue: "openyurt.io/" + context.replicaKey
}
_selector
}
defaultBackend: {
_selector
}
}
}
_selector: {
tolerations: [
{
key: "apps.openyurt.io/example"
operator: "Equal"
value: context.replicaKey
},
]
nodeSelector: {
"apps.openyurt.io/nodepool": context.replicaKey
}
}
parameter: null
}

Let More Types of Applications Go to the Edge

As you can see, we only customized a trait of about 40 lines and made full use of KubeVela's built-in capabilities to deploy Nginx Ingress to different NodePools. With the prosperous cloud-native ecosystem and the cloud-edge collaboration trend, more applications are likely to be deployed at the edge. When a new application needs to be deployed in an edge NodePool in a new scenario, there is no need to worry: KubeVela makes it easy to extend a new edge application deployment trait following this pattern, without writing code.

For example, suppose we also want to deploy an implementation of the Gateway API, a recent focus of evolution in the Kubernetes community, to the edge, so that edge NodePools gain stronger and more extensible ways to expose services and can use role-based network APIs on edge nodes. For this scenario, we can easily complete the deployment task based on the preceding extension method. You only need to define a new trait, as shown below:

"gateway-nginx": {
type: "trait"
annotations: {}
attributes: {
podDisruptive: true
appliesToWorkloads: ["helm"]
}
}

template: {
patch: {
// +patchStrategy=retainKeys
metadata: {
name: "\(context.name)-\(context.replicaKey)"
}
// +patchStrategy=jsonMergePatch
spec: values: {
_selector
fullnameOverride: "nginx-gateway-nginx-" + context.replicaKey
gatewayClass: {
name: "nginx" + context.replicaKey
controllerName: "k8s-gateway-nginx.nginx.org/nginx-gateway-nginx-controller-" + context.replicaKey
}
}
}
_selector: {
tolerations: [
{
key: "apps.openyurt.io/example"
operator: "Equal"
value: context.replicaKey
},
]
nodeSelector: {
"apps.openyurt.io/nodepool": context.replicaKey
}
}
parameter: null
}

This trait is similar to the one used to deploy the Nginx Ingress earlier: we make similar patches to the Nginx Gateway chart values, including node selection, affinity, and resource names. The difference from the former trait is that this one specifies the gatewayClass instead of the ingressClass. Please see the GitHub repository for more information about the trait and application files. By customizing such a trait, we extend the cluster's ability to deploy a new type of application to the edge. Even if we cannot predict the application deployment requirements that the development of edge computing will bring in the future, at least we can adapt to new scenarios through this kind of scalability.

How KubeVela Solves Edge Deployment Puzzles

Let's review how KubeVela addressed the key issues raised at the beginning of the article.

  1. Unified Configuration: A component is used to describe the common properties of the ingress-nginx Helm chart to be deployed, such as the Helm repository, chart name, version, and other unified configurations.
  2. Differentiated Deployment: KubeVela uses a user-defined trait to describe the differences in the Helm configurations sent to different NodePools. This trait can be reused to deploy the same Helm chart repeatedly.
  3. Scalability: KubeVela is programmable for common workloads (such as Deployment/StatefulSet) and other packaging methods (such as Helm or Kustomize). Only a few lines of code are required to push a new application to the edge.

This benefits from the powerful functions provided by KubeVela in the field of application delivery and management. In addition to solving application definition, delivery, O&M, and observability issues within a single cluster, KubeVela natively supports application release and management in multi-cluster mode. There is currently no fixed Kubernetes deployment mode suitable for edge computing scenarios; whether the architecture is a single cluster plus edge NodePools or multiple edge clusters, KubeVela can manage the application tasks.

With OpenYurt and KubeVela, cloud-edge applications are deployed in a unified manner and share the same abstraction, O&M, and observability, which avoids a fragmented experience across different scenarios. In addition, both cloud applications and edge applications can take advantage of KubeVela's best practices from the continuously integrated cloud-native ecosystem in the form of addons. In the future, the KubeVela community will continue to enrich the out-of-the-box system addons and deliver better and easier-to-use application delivery and management capabilities.

If you want to know more about the capabilities of application deployment and management, you can read KubeVela’s official documents. If you want to know the latest development in the KubeVela community, you are welcome to participate in the discussion! If you are interested in OpenYurt, you are welcome to join the OpenYurt community!

You can learn more about KubeVela and the OAM project through the following materials:

· 13 min read
Yike Wang

Background

Helm is a client-side application packaging and deployment tool widely used in the cloud-native field. Its simple design and ease of use have been recognized by users and have formed an ecosystem of their own. Up to now, thousands of applications have been packaged as Helm Charts. Helm's design concept is very concise and can be summarized in the following two aspects:

  1. Packaging and templating complex Kubernetes APIs, then abstracting and simplifying them into a small number of parameters

  2. Giving application lifecycle solutions: production, upload (hosting), versioning, distribution (discovery), and deployment

· 8 min read
Jianbo Sun

Helm Charts are so popular that you can find almost 10K different pieces of software packaged in this way. In today's multi-cluster/hybrid-cloud business environment, we often encounter these typical requirements: distributing to multiple specific clusters, distributing to specific groups according to business needs, and differentiated configurations for multiple clusters.

In this blog, we'll introduce how to use KubeVela to do multi-cluster delivery for Helm Charts.

If you don't have multiple clusters, don't worry; we'll introduce everything from scratch, with only Docker or a Linux system required. You can also refer to the basic Helm chart delivery in a single cluster.

· 7 min read
Wei Duan

In today's multi-cluster business scenarios, we often encounter these typical requirements: distributing to multiple specific clusters, distributing to specific groups according to business needs, and differentiated configurations for multiple clusters.

KubeVela v1.3 iterates on the previous multi-cluster functions. This article will show how to use it for swift multi-cluster deployment and management to address all your concerns.

· 6 min read
Xiangbo Ma

The cloud platform development team of China Merchants Bank has been trying out KubeVela internally since 2021 and aims to use it to enhance our primary application delivery and management capabilities. Due to the specific security concerns of the financial and insurance industry, network control measures are relatively strict: our intranet cannot directly pull images from Docker Hub, and no Helm chart source is available either. Therefore, in order to adopt KubeVela on the intranet, a complete offline installation is required.

This article takes KubeVela v1.2.5 as an example and introduces our offline installation practice to help other users complete KubeVela's deployment in an offline environment more easily.

· 10 min read
Tianxin Dong and Yicai Yu

With the rapid development of cloud-native, how can we use the cloud to empower business development? When launching applications, how can cloud developers easily develop and debug applications in a multi-cluster and hybrid-cloud environment? In the deployment process, how can we make application deployment sufficiently verified and reliable?

These crucial issues urgently need to be resolved.

In this article, we will use KubeVela and Nocalhost to provide a solution for cloud debugging and multi-cluster hybrid cloud deployment.

When a new application needs to be developed and launched, we hope that the results of debugging in the local IDE can be consistent with the final deployment state in the cloud. Such consistency gives us the greatest confidence in deployment and allows us to iterate on application updates in a more efficient and agile way, like GitOps: when new code is pushed to the code repository, the applications in the environment are automatically updated in real time.

Based on KubeVela and Nocalhost, we can deploy applications as shown below:

[Image: cloud debugging and deployment workflow with KubeVela and Nocalhost]

As shown in the figure: use KubeVela to create an application, deploy it to the test environment, and pause it; use Nocalhost to debug the application in the cloud; after debugging, push the debugged code to the code repository, deploy it with KubeVela using GitOps, and update it to the production environment after verification in the test environment.

In the following, we will introduce how to use KubeVela and Nocalhost for cloud debugging and multi-cluster hybrid cloud deployment.

· 10 min read
Tianxin Dong

Against the backdrop of machine learning going viral, AI engineers not only need to train and debug their models but also need to deploy them online to verify how they perform (although sometimes this part of the work is done by AI platform engineers). This is very tedious and draining for AI engineers.

In the cloud-native era, model training and model serving are also usually performed on the cloud. Doing so not only improves scalability but also improves resource utilization. This is very effective for machine learning scenarios that consume a lot of computing resources.

But it is often difficult for AI engineers to use cloud-native techniques. The concepts of cloud native have become more complex over time. Even to deploy a simple model serving service on a cloud-native architecture, AI engineers may need to learn several additional concepts: Deployment, Service, Ingress, etc.

As a simple, easy-to-use, and highly scalable cloud-native application management tool, KubeVela enables developers to quickly and easily define and deliver applications on Kubernetes without knowing any details of the underlying cloud-native infrastructure. KubeVela's rich extensibility extends to AI addons, which provide functions such as model training, model serving, and A/B testing, covering the basic needs of AI engineers and helping them quickly conduct model training and model serving in a cloud-native environment.

This article mainly focuses on how to use KubeVela's AI addon to help engineers complete model training and model serving more easily.

· 13 min read
Tianxin Dong

KubeVela is a simple, easy-to-use, and highly extensible cloud-native application platform. It enables developers to deliver microservice applications easily, without knowing the details of Kubernetes.

KubeVela is based on the OAM model, which naturally solves the orchestration problems of complex resources. It means that KubeVela can manage complex, large-scale applications with GitOps, which becomes increasingly important as teams and systems grow in size and complexity.

· 12 min read
Da Yin, Yang Song

KubeVela bridges the gap between applications and infrastructures, enabling easy delivery and management of development code. Compared to Kubernetes objects, KubeVela's Application better abstracts and simplifies the configurations that developers care about and leaves complex infrastructure capabilities and orchestration details to platform engineers. The KubeVela apiserver further exposes HTTP interfaces, which help developers deploy applications even without Kubernetes cluster access.

This article will use Jenkins, a popular continuous integration tool, as the basis and give a brief introduction to how to build a GitOps-based highway for continuous application delivery.