
27 posts tagged with "KubeVela"


· 8 min read
Junyu Liu

Hello, I am Junyu Liu (GitHub: iyear), currently a sophomore majoring in software engineering. In this blog post, I will share my experiences as a Linux Foundation Mentorship mentee: from applying for the project to becoming part of the community.

In the spring of 2023, I was accepted into LFX Mentorship as a CNCF mentee on the KubeVela project. In this project, I am responsible for developing a Golang-based generator of CUE code and documentation from scratch, laying the foundation for the infrastructure behind KubeVela's future extensibility.

What is LFX Mentorship?

LFX Mentorship

LFX Mentorship is a remote learning program that provides 12 weeks of learning opportunities for open-source contributors. Mentees are guided by dedicated mentors (usually maintainers of the projects), who help them contribute to the community and its projects.

Many open-source organizations and foundations use LFX Mentorship to announce projects and recruit students for development and contributions. I focus on the cloud-native field, so when CNCF opened its spring projects on LFX in February 2023, I began to explore and apply for the ones that interested me.

What is KubeVela?

KubeVela

KubeVela is a modern software delivery and management control plane. The goal is to make deploying and operating applications across today's hybrid, multi-cloud environments easier, faster and more reliable.

KubeVela originated from the Open Application Model (OAM), a Kubernetes-based application model jointly launched by Alibaba Cloud and Microsoft at the end of 2019. Through continuous evolution and iteration, it has incorporated a large amount of feedback and contributions from the open-source community (especially participants from Microsoft, ByteDance, 4Paradigm, Tencent, and Full Truck Alliance). In 2020, it was officially introduced to the open-source community under the name "KubeVela" at KubeCon North America.

The KubeVela project has been developing rapidly, and its community growth trend is shown below:

KubeVela Community

It is worth mentioning that in March 2023, KubeVela was promoted to a CNCF incubating project, further proving its stability and flexibility in production environments.

Project Details

Project Name: Support auto generation of CUE schema and docs from Go struct

Project Description: In KubeVela's provider system, we can use our defined Go functions in CUE schema. The Go providers usually have a parameter and return. Fields in Go providers are the same as fields in CUE schema, so it is possible and important to support automatic generation of CUE schemas and documents from Go structs.

Project Outcome: Auto-generators of CUE schemas and docs from Go structs; the capabilities should be wrapped in a vela CLI command.

Project Mentors: Fog Dong, Da Yin

Project Link: https://mentorship.lfx.linuxfoundation.org/project/85f61cae-02d7-4931-8d87-d3da3128060e

Application and Development

When browsing through the project list, KubeVela quickly became one of my candidates. Before diving into cloud-native technologies, I had come across the KubeVela project and attempted to understand its concepts and working principles, but with my limited expertise at the time I only scratched the surface. Becoming familiar with KubeVela through a small, well-scoped entry point therefore seemed like the best contribution path for me. Additionally, metaprogramming and code generation are important techniques in Golang, and I wanted to treat this project as an opportunity for hands-on practice with them.

The project involves a core part of KubeVela: CUE. This was the first concept I needed to understand. Through the official KubeVela documentation and CUE issues, I learned that CUE is a language designed for configuration, with advantages over comparable languages in programmability, automation, and integration with Golang. In turn, as KubeVela evolves, it continuously provides practical use cases and feedback for CUE.
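To give a flavor of that Golang integration, here is a minimal, self-contained sketch (my own illustration using the official cuelang.org/go module, not KubeVela code) that compiles a CUE schema from Go, validates a concrete value against it, and reads the result back:

package main

import (
	"fmt"

	"cuelang.org/go/cue"
	"cuelang.org/go/cue/cuecontext"
)

func main() {
	ctx := cuecontext.New()

	// A schema constraint and a concrete value; CUE unifies and validates them.
	v := ctx.CompileString(`
		replicas: int & >=1 & <=5
		replicas: 3
	`)
	if err := v.Err(); err != nil {
		panic(err)
	}

	replicas, err := v.LookupPath(cue.ParsePath("replicas")).Int64()
	if err != nil {
		panic(err)
	}
	fmt.Println("replicas:", replicas) // prints: replicas: 3
}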

After contacting the mentor, my initial goal was to create a demo as a showcase. The core of the entire project lies in the conversion between the Golang AST and the CUE AST. I first found a snippet of code I could learn from; after thoroughly understanding it, I extracted the core parts, made modifications and adaptations, and implemented the struct conversion for the demo:

DEMO

Writing the demo gave me a clearer understanding of the overall project targets. As the top-level language for users and platform developers, CUE needs to interact extensively with Golang, serving as an intermediary that connects and controls cloud platforms. In many scenarios, CUE must stay consistent with the Golang code, or errors creep into the intermediate conversion. Maintaining that consistency by hand is time-consuming and labor-intensive, and issues only surface at runtime, which can impact the stability of production environments. The aim of this project is to solve this problem by making the Golang code the single source of truth and ensuring overall configuration consistency through static code generation.
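To make the conversion concrete, here is a heavily simplified, self-contained sketch of the technique: parsing a Go struct with go/parser and printing a CUE schema for it. This is my own illustration, not the actual cuegen code; it maps only a handful of basic types and ignores tags and nesting:

package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// toCUE maps a few Go type expressions to CUE equivalents; anything
// unrecognized falls back to "_", CUE's top value.
func toCUE(expr ast.Expr) string {
	switch t := expr.(type) {
	case *ast.Ident:
		switch t.Name {
		case "string", "bool", "int":
			return t.Name
		case "float64":
			return "number"
		}
		return "#" + t.Name // assume a named type declared elsewhere
	case *ast.ArrayType:
		return "[..." + toCUE(t.Elt) + "]"
	}
	return "_"
}

func main() {
	src := `package demo

type Input struct {
	Name     string
	Replicas int
	Tags     []string
}`
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "demo.go", src, parser.ParseComments)
	if err != nil {
		panic(err)
	}
	// Walk the Go AST and emit a CUE definition for every struct type.
	ast.Inspect(f, func(n ast.Node) bool {
		ts, ok := n.(*ast.TypeSpec)
		if !ok {
			return true
		}
		st, ok := ts.Type.(*ast.StructType)
		if !ok {
			return true
		}
		fmt.Printf("#%s: {\n", ts.Name.Name)
		for _, field := range st.Fields.List {
			for _, name := range field.Names {
				fmt.Printf("\t%s: %s\n", name.Name, toCUE(field.Type))
			}
		}
		fmt.Println("}")
		return false
	})
}

The real generator additionally handles struct tags, comments, nested structs, and special-case types such as the unstructured.Unstructured mapping shown later in this post.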

In the project description issue, the mentor provided an example of generating providers, and everything became clear. I divided the CUE generator into three layers. The bottom layer is responsible for the basic, core AST conversions. The middle layer reads specific kinds of Golang code (providers, policies, etc.), extracts information from them, and writes it into CUE files. The top layer exposes the generation capability to users and developers as a CLI, allowing them to quickly generate CUE and documentation. When more CUE formats need to be supported in the future, the underlying transformation capabilities can easily be reused.
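Purely as an illustration of this layering (the names below are hypothetical, not cuegen's actual API), the bottom and middle layers can be pictured as two small Go interfaces, with the CLI layer simply wiring a Generator into a vela subcommand:

package cuegen

import (
	goast "go/ast"

	cueast "cuelang.org/go/cue/ast"
)

// Converter is the bottom layer: pure Go-AST-to-CUE-AST conversion.
type Converter interface {
	Convert(s *goast.StructType) (*cueast.StructLit, error)
}

// Generator is the middle layer: it understands one flavor of Go source
// (providers, policies, ...) and produces a complete CUE file.
type Generator interface {
	Generate(goFile string) (cueSrc []byte, err error)
}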

After further communication with the mentor, I added support for struct tags and comments and summarized some ideas into the proposal. After a series of iterations and discussions, the project has taken shape, and I am honored to have been accepted as a mentee in the LFX Mentorship program.

Acceptance

Following the initial design and demo, the formal development process went relatively smoothly, with most of the communication focused on user experience and detailed design.

The first pull request (PR) received extensive and valuable reviews because it was not split into smaller parts, and it took 50 comments before it was finally merged. Since the initial code was written rather casually, I also focused on refactoring parts of it to make it clearer and more robust.

From the first PR in February to the fifteenth PR at the end of May, the project was essentially completed, with all the code merged into the main branch. It has also passed two mentor evaluations, and I am about to graduate from my first LFX Mentorship project. 👏

End-Term Evaluation

Project Outcomes

Over the past three months of development, the project has primarily produced three capabilities and two CLIs, with test coverage exceeding 90%.

The core capabilities of the project are located in the references/cuegen directory. It implements the basic functionality of converting Go AST to CUE AST and is accompanied by a README to provide developers with specific conversion rules. The code for the middle layer is placed in the references/cuegen/generators directory, and generators for the provider format have been implemented so far. The documentation generation component is located in references/docgen/provider.go.

The project has added two CLI subcommands, namely vela def gen-cue and vela def gen-doc. The former generates CUE files in the corresponding format from Go code, exposing the capabilities of the middle layer as a CLI, while the latter generates documentation for CUE.

Since vela def gen-cue only supports one file at a time, a shell script was written to enable batch generation by traversing directories: #6009
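The actual batch script is the shell script attached to that PR; purely as an illustration of the same traversal idea, a hypothetical directory-walking driver could be written in Go like this (the -t provider flag matches the usage shown below; the file layout is an assumption):

package main

import (
	"fmt"
	"io/fs"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	root := os.Args[1] // directory tree containing provider .go files
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() || !strings.HasSuffix(path, ".go") {
			return err
		}
		// vela def gen-cue writes the generated CUE to stdout.
		out, cmdErr := exec.Command("vela", "def", "gen-cue", "-t", "provider", path).Output()
		if cmdErr != nil {
			return fmt.Errorf("%s: %w", path, cmdErr)
		}
		// Write the result next to the source file as a .cue file.
		return os.WriteFile(strings.TrimSuffix(path, ".go")+".cue", out, 0o644)
	})
	if err != nil {
		panic(err)
	}
}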

Taking the code snippet from kubevela/pkg/providers/kube as an example, let's perform the transformation and verification.

First, convert kube.go to kube.cue:

$ vela def gen-cue \
    -t provider \
    --types *k8s.io/apimachinery/pkg/apis/meta/v1/unstructured.Unstructured=ellipsis \
    --types *k8s.io/apimachinery/pkg/apis/meta/v1/unstructured.UnstructuredList=ellipsis \
    kube.go > kube.cue

Then, convert kube.cue to kube.md:

$ vela def gen-doc -t provider kube.cue > kube.md

The final result is as follows:

Final Generated Result

Future Outlook

Although the expected outcomes of the LFX Mentorship project have been fully achieved, this is only the first step for cuegen, and its derivative work will play an important role in the future development of KubeVela. For example, based on cuegen we can automate the generation of policy rules that are currently maintained by hand; we can migrate and validate the existing providers in kubevela/pkg; and we can develop scaffolding tools for user-defined providers. All of these rely on the capabilities of cuegen, and they will be the key areas of my future work in the community.

In addition to work in the cuegen ecosystem, I will also delve into other aspects of KubeVela, such as gaining in-depth familiarity with OAM production practices and the user community, and exploring possibilities for new features by reading the source code of the Workflow component. I have also applied to become a KubeVela Reviewer, aiming to contribute to the project's code-quality control.

This was my first participation in the LFX Mentorship program, and throughout the three months of communication and collaboration, both mentors provided me with a great deal of help and guidance on details and decision-making. We also held online meetings to run a complete demonstration of the functionality and discuss the future direction of the community.

Open source is a process driven by interest and self-motivation. Developers can continuously improve themselves through their experiences in different communities and grow together with them. Getting started simply takes the courage to make the first move: try reading the source code of projects that interest you. For students, participating in open source is above all a learning process, and each step brings different rewards and insights. I am very grateful to have encountered the KubeVela community through LFX Mentorship, and I look forward to deepening my involvement in and contributions to the community in the future!

· 10 min read
Fog Dong

ChatGPT is taking the tech industry by storm, thanks to its unparalleled natural language processing capabilities. As a powerful AI language model, it has the ability to understand and generate human-like responses, revolutionizing communication in various industries. From streamlining customer service chatbots to enabling seamless language translation tools, ChatGPT has already proved its mettle in creating innovative solutions that improve efficiency and user experience.

Now the question is, can we leverage ChatGPT to transform the way we deliver applications? With the integration of ChatGPT into DevOps workflows, we are witnessing the possible emergence of a new era of automation called PromptOps. This advancement in AIOps technology is revolutionizing the way businesses operate, allowing for faster and more efficient application delivery.

In this article, we will explore how to integrate ChatGPT into your DevOps workflow to deliver applications.

Integrate ChatGPT into Your DevOps Workflow

When it comes to integrating ChatGPT into DevOps workflows, many developers face the challenge of managing extra resources and writing complicated shell scripts. However, there is a better way: KubeVela Workflow. This open-source cloud-native workflow project offers a streamlined solution that eliminates the need for extra pods or complex scripting.

In KubeVela Workflow, every step has a type that can be easily abstracted and reused. Step types are programmed in the CUE language, which makes them easy to customize and lets you call atomic capabilities like functions in every step. Importantly, with atomic capabilities such as HTTP requests already available, it is possible to integrate ChatGPT in just five minutes by writing a new step type.

Check out the Installation Guide to get started with KubeVela Workflow. The complete code for this chat-gpt step type is available on GitHub.

Now that we've chosen the right tool, let's see what ChatGPT can do for delivery.

Case 1: Diagnose the resources

It's quite common in the DevOps world to encounter problems like "I don't know why the pod is not running" or "I don't know why the service is not available". In this case, we can use ChatGPT to diagnose the resource.

For example, in our workflow we can apply a Deployment with an invalid image in the first step. Since the deployment will never become ready, we add a timeout to the step so the workflow doesn't get stuck there. We then pass the unhealthy resource deployed in the first step to the second step, which uses the chat-gpt step type to diagnose the resource and determine the issue. Note that the second step is only executed if the first one fails.

The process of diagnosing the resource in the workflow

The complete workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt-diagnose
  namespace: default
spec:
  workflowSpec:
    steps:
      # Apply an invalid deployment with a timeout
      - name: apply
        type: apply-deployment
        timeout: 3s
        properties:
          image: invalid
        # output the resource to the next step
        outputs:
          - name: resource
            valueFrom: output.value

      # Use chat-gpt to diagnose the resource
      - name: chat-diagnose
        # only execute this step if the `apply` step fails
        if: status.apply.failed
        type: chat-gpt
        # use the resource as inputs and pass it to prompt.content
        inputs:
          - from: resource
            parameterKey: prompt.content
        properties:
          token:
            value: <your token>
          prompt:
            type: diagnose

Apply this Workflow and check the result: the first step fails because of the timeout, then the second step is executed and ChatGPT's answer is shown in the log:

vela workflow logs chat-gpt-diagnose

The logs of the diagnose step

Visualize in the dashboard

If you want to visualize the process and the result in the dashboard, it's time to enable the [velaux](https://kubevela.io/docs/reference/addons/velaux#install) addon.

vela addon enable velaux

Copy all the steps in the above yaml to create a pipeline.

Create the pipeline in VelaUX

Run this pipeline, and you can check out the failed reason analyzed by ChatGPT in the logs of the second step.

Run the pipeline in VelaUX

Write the chat-gpt step from scratch

How was this chat-gpt step type written? Is it easy to write a step type like this yourself? Let's walk through it.

We first define what this step type needs from the user: the user's token for ChatGPT and the resource to diagnose. For other parameters, such as the model or the request timeout, we can set default values with * as below:

parameter: {
	token: value: string
	// +usage=the model name
	model: *"gpt-3.5-turbo" | string
	// +usage=the prompt to use
	prompt: {
		type:    *"diagnose" | string
		lang:    *"English" | string
		content: {...}
	}
	timeout: *"30s" | string
}

Let's complete the step type by writing its logic. We first import the vela/op package, whose op.#HTTPDo capability sends the request to the ChatGPT API. If the request fails, the step should fail with op.#Fail. We also set the step's log data to ChatGPT's answer. The complete step type is shown below:

// import packages
import (
	"vela/op"
	"encoding/json"
)

// this is the name of the step type
"chat-gpt": {
	description: "Send request to chat-gpt"
	type:        "workflow-step"
}

// this is the logic of the step type
template: {
	// send http request to chat gpt
	http: op.#HTTPDo & {
		method: "POST"
		url:    "https://api.openai.com/v1/chat/completions"
		request: {
			timeout: parameter.timeout
			body: json.Marshal({
				model: parameter.model
				messages: [{
					if parameter.prompt.type == "diagnose" {
						content: """
							You are a professional kubernetes administrator.
							Carefully read the provided information, being certain to spell out the diagnosis & reasoning, and don't skip any steps.
							Answer in \(parameter.prompt.lang).
							---
							\(json.Marshal(parameter.prompt.content))
							---
							What is wrong with this object and how to fix it?
							"""
					}
					role: "user"
				}]
			})
			header: {
				"Content-Type":  "application/json"
				"Authorization": "Bearer \(parameter.token.value)"
			}
		}
	}

	response: json.Unmarshal(http.response.body)

	fail: op.#Steps & {
		if http.response.statusCode >= 400 {
			requestFail: op.#Fail & {
				message: "\(http.response.statusCode): failed to request: \(response.error.message)"
			}
		}
	}
	result: response.choices[0].message.content
	log: op.#Log & {
		data: result
	}
	parameter: {
		token: value: string
		// +usage=the model name
		model: *"gpt-3.5-turbo" | string
		// +usage=the prompt to use
		prompt: {
			type:    *"diagnose" | string
			lang:    *"English" | string
			content: {...}
		}
		timeout: *"30s" | string
	}
}

That's it! Apply this step type, and we can use it in our Workflow as shown above.

vela def apply chat-gpt.cue
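To see what the step does under the hood, here is a standalone Go sketch of the same HTTP exchange with the OpenAI chat completions endpoint. This is my own illustration, independent of KubeVela; the OPENAI_API_KEY environment variable and the example prompt are assumptions:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type chatRequest struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

type chatResponse struct {
	Choices []struct {
		Message message `json:"message"`
	} `json:"choices"`
}

func main() {
	// The same request body the CUE step builds with json.Marshal.
	body, _ := json.Marshal(chatRequest{
		Model: "gpt-3.5-turbo",
		Messages: []message{{
			Role: "user",
			Content: "You are a professional kubernetes administrator.\n" +
				"What is wrong with this object and how to fix it?\n" +
				`{"kind":"Deployment","spec":{"template":{"spec":{"containers":[{"image":"invalid"}]}}}}`,
		}},
	})

	req, err := http.NewRequest("POST", "https://api.openai.com/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out chatResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Choices[0].Message.Content) // the diagnosis, as in the step's log
}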

Case 2: Audit the resource

Now that ChatGPT is our Kubernetes expert and can diagnose resources, can it also give us security advice? Definitely: it's just a matter of prompts. Let's modify the step type from the previous case to add an audit feature. We add a new prompt type, audit, and pass the resource to the prompt. You can check out the whole step type on GitHub.

In the Workflow, we can apply a Deployment with the nginx image and pass it to the second step, which uses the audit prompt to audit the resource.

The process of auditing the resource in the workflow

The complete Workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt-audit
  namespace: default
spec:
  workflowSpec:
    steps:
      - name: apply
        type: apply-deployment
        # output the resource to the next step
        outputs:
          - name: resource
            valueFrom: output.value
        properties:
          image: nginx

      - name: chat-audit
        type: chat-gpt
        # use the resource as inputs and pass it to prompt.content
        inputs:
          - from: resource
            parameterKey: prompt.content
        properties:
          token:
            value: <your token>
          prompt:
            type: audit


Use Diagnose & Audit in one Workflow

Now that we can both diagnose and audit resources, we can use the two capabilities in one Workflow, with if conditions controlling which steps execute: if the apply step fails, diagnose the resource; if it succeeds, audit it.

Use diagnose & audit in one workflow

The complete Workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt
  namespace: default
spec:
  workflowSpec:
    steps:
      - name: apply
        type: apply-deployment
        outputs:
          - name: resource
            valueFrom: output.value
        properties:
          image: nginx

      # if the apply step fails, then diagnose the resource
      - name: chat-diagnose
        if: status.apply.failed
        type: chat-gpt
        inputs:
          - from: resource
            parameterKey: prompt.content
        properties:
          token:
            value: <your token>
          prompt:
            type: diagnose

      # if the apply step succeeds, then audit the resource
      - name: chat-audit
        if: status.apply.succeeded
        type: chat-gpt
        inputs:
          - from: resource
            parameterKey: prompt.content
        properties:
          token:
            value: <your token>
          prompt:
            type: audit

Case 3: Use ChatGPT as a quality gate

If we want to apply resources to a production environment, can we have ChatGPT rate the quality of the resource first and only apply it to production if the quality is high enough? Absolutely!

Note that to make the score evaluated by ChatGPT more convincing, it is better to pass metrics rather than the raw resource in this case.

Let's write our Workflow. KubeVela Workflow can apply resources to multiple clusters. The first step applies the Deployment to the test environment. The second step asks ChatGPT to rate the quality of the resource. If the quality is high enough, the resource is then applied to the production environment.

The process of using quality gate in workflow

The complete Workflow is shown below:

apiVersion: core.oam.dev/v1alpha1
kind: WorkflowRun
metadata:
  name: chat-gpt-quality-gate
  namespace: default
spec:
  workflowSpec:
    steps:
      # apply the resource to the test environment
      - name: apply
        type: apply-deployment
        # output the resource to the next step
        outputs:
          - name: resource
            valueFrom: output.value
        properties:
          image: nginx
          cluster: test

      - name: chat-quality-check
        # this step will always be executed
        if: always
        type: chat-gpt
        # get the inputs from resource and pass it to the prompt.content
        inputs:
          - from: resource
            parameterKey: prompt.content
        # output the score of ChatGPT and use strconv.Atoi to convert the score string to int
        outputs:
          - name: chat-result
            valueFrom: |
              import "strconv"
              strconv.Atoi(result)
        properties:
          token:
            value: <your token>
          prompt:
            type: quality-gate

      # if the score is higher than 60, then apply the resource to the production environment
      - name: apply-production
        type: apply-deployment
        # get the score from chat-result
        inputs:
          - from: chat-result
        # check if the score is higher than 60
        if: inputs["chat-result"] > 60
        properties:
          image: nginx
          cluster: prod

Apply this Workflow and we can see that if the score is higher than 60, then the resource will be applied to the production environment.

In the End

ChatGPT brings imagination to the world of Kubernetes. Diagnosing, auditing, and rating are just the beginning. In the new AI era, the most precious thing is an idea. What do you want to do with ChatGPT? Share your insights with us in the KubeVela Community.

· 15 min read
Jianbo Sun

KubeVela 1.7 has been officially released for some time now, and in the meantime KubeVela has been promoted to a CNCF incubating project, marking a new milestone. Version 1.7 is itself a turning point: KubeVela has focused on the design of an extensible system from the beginning, and as demand for new core controller functionality has gradually converged, more resources have been freed up to focus on user experience, ease of use, and performance. In this article, we highlight the prominent features of version 1.7, such as workload takeover and performance optimization.

Taking Over Your Existing Workloads

Taking over existing workloads has long been a highly requested feature in the community, with a clear scenario: existing workloads should migrate naturally into the OAM standard and be managed uniformly by KubeVela's application delivery control plane. The takeover feature also lets these workloads reuse VelaUX's UI console functions, including a series of O&M features, workflow steps, and a rich addon ecosystem. In version 1.7, we officially released this feature. Before diving into the specific operation details, let's first get a basic understanding of how it works.

"read-only" and "take-over" policy

To meet the needs of different usage scenarios, KubeVela provides two modes of unified management. One is the "read-only" mode, suited to systems that already have an internally built platform which retains primary control over existing workloads; the new KubeVela-based platform can only observe these applications in a read-only manner. The other is the "take-over" mode, suited to users who want to migrate their workloads directly into the KubeVela system and achieve complete unified management.

· 7 min read
CNCF

Originally posted on the CNCF blog.

The CNCF Technical Oversight Committee (TOC) has voted to accept KubeVela as a CNCF incubating project.

KubeVela is an application delivery engine built with the Kubernetes control plane that makes deploying and operating applications across hybrid and multi-cloud environments easier, faster, and more reliable. KubeVela can orchestrate, deploy, and operate application components and cloud resources with a workflow-based application delivery model. The application delivery abstraction of KubeVela is powered by the Open Application Model (OAM).


· 15 min read
Fog Dong

Serverless Application Engine (SAE) is a Kubernetes-based cloud product that combines the Serverless architecture with the microservice model. As a rapidly iterating cloud product, it has encountered many challenges during its fast growth. How can these challenges be solved in the booming cloud-native era, and how can the architecture be upgraded reliably and quickly? The SAE team worked closely with the KubeVela community to address these challenges and came up with a replicable open-source solution: KubeVela Workflow.

This article describes how to use KubeVela Workflow to upgrade the architecture of SAE and interprets multiple practice scenarios.

· 14 min read
Qiao Zhongpei

This article will focus on KubeVela and OpenYurt (two open-source projects of CNCF) and introduce the solution of cloud-edge collaboration in a practical Helm application delivery scenario.

Background

With the popularization of the Internet of Everything, the computing power of edge devices keeps increasing. Using the advantages of cloud computing to meet complex and diversified edge application scenarios, and extending cloud-native technology to the end and the edge, has become a new technological challenge, and cloud-edge collaboration is becoming a new technological focus.

OpenYurt focuses on extending Kubernetes to edge computing in a non-intrusive manner. Based on the container orchestration and scheduling capabilities of native Kubernetes, OpenYurt integrates edge computing power into the Kubernetes infrastructure for unified management. It provides capabilities (such as edge autonomy, efficient O&M channels, unitized edge management, edge traffic topology, secure containers, and edge Serverless/FaaS) and support for heterogeneous resources. In short, OpenYurt builds a unified infrastructure for cloud-edge collaboration in a Kubernetes-native manner.

Incubated from the OAM model, KubeVela focuses on helping enterprises build unified application delivery and management capabilities. It shields developers from the complexity of the underlying infrastructure and provides flexible scaling capabilities, along with out-of-the-box microservice container management, cloud resource management, versioning and canary release, scaling, observability, resource dependency orchestration and data delivery, multi-cluster support, CI integration, and GitOps. This maximizes developer productivity through self-service application management while meeting the extensibility demands of the platform's long-term evolution.

OpenYurt + KubeVela - What Problems Can be Solved?

As mentioned before, OpenYurt supports the access of edge nodes, allowing users to manage them through native Kubernetes operations. "Edge nodes" here represent computing resources closer to users, such as virtual machines or physical servers in a nearby data center. After they are added through OpenYurt, these edge nodes become nodes that can be used in Kubernetes. OpenYurt uses NodePool to describe a group of edge nodes in the same region. With basic resource management in place, the core requirement becomes how to orchestrate and deploy applications to different NodePools in a cluster.

· 5 min read
Gokhan Karadas

This document explains the integration of KubeVela and ArgoCD. There are two approaches to integrating this flow, and this doc lays out the pros and cons of each. Before diving into the details, let's describe KubeVela and ArgoCD.

KubeVela is a modern software delivery platform that makes deploying and operating applications across multiple environments easier, faster, and more reliable.

KubeVela is infrastructure agnostic and application-centric. It allows you to build robust software and deliver it anywhere! KubeVela provides an Open Application Model (OAM) based abstraction for shipping applications and any resource across multiple environments.

Open Application Model (OAM) is a set of standard yet higher-level abstractions for modeling cloud-native applications on top of today’s hybrid and multi-cloud environments. You can find more conceptual details here.

· 12 min read
Da Yin

Since the Open Application Model was invented in 2020, KubeVela has gone through dozens of releases and evolved advanced features for modern application delivery. Recently, KubeVela was proposed as a CNCF incubating project and delivered several public talks in the community. As a memorandum, this article looks back at the starting points and gives a comprehensive introduction to the state of KubeVela in 2022.

What is KubeVela?

KubeVela is a modern software platform that makes delivering and operating applications across today's hybrid, multi-cloud environments easier, faster and more reliable. It has three main features:

  • Infrastructure agnostic: KubeVela can deploy your cloud-native application to various destinations, such as Kubernetes multi-clusters, cloud-provider runtimes (like Alibaba Cloud, AWS, or Azure), and edge devices.
  • Programmable: KubeVela has abstraction layers for modeling applications and the delivery process. These layers let users build higher-level, reusable modules for application delivery in a programmable way and integrate arbitrary third-party projects (like FluxCD, Crossplane, Istio, and Prometheus) into the KubeVela system.
  • Application-centric: Rich tools and ecosystems are designed around KubeVela applications, adding extra capabilities for delivering and operating them, including CLI, UI, GitOps, observability, and more.

KubeVela covers the whole lifecycle of applications, including both the Day-1 delivery and Day-2 operating stages. It can connect with a wide range of Continuous Integration tools, such as Jenkins or GitLab CI, and helps users deliver and operate applications across hybrid environments.

· 6 min read
Daniel Higuero

Application Delivery on Kubernetes

The cloud-native landscape is formed by a fast-growing ecosystem of tools with the aim of improving the development of modern applications in a cloud environment. Kubernetes has become the de facto standard to deploy enterprise workloads by improving development speed, and accommodating the needs of a dynamic environment.

Kubernetes offers a comprehensive set of entities that enables any application, regardless of complexity, to be deployed onto it. This, however, has a significant impact on adoption: Kubernetes is becoming as complex as it is powerful, which translates into a steep learning curve for newcomers to the ecosystem. This has generated a new trend focused on providing developers with tools that improve their day-to-day activities without losing the capabilities of the underlying system.

· 12 min read
Jianbo Sun

The community released KubeVela's new milestone version, v1.6, during the 2022 Apsara Conference. This release marks a qualitative change in KubeVela from application delivery to application management, and sets a precedent in the industry for building an application platform that integrates delivery and management on top of an extensible model.