
· 8 min read
Junyu Liu

Hello, I am Junyu Liu (GitHub: iyear), currently a sophomore majoring in software engineering. In this blog post, I will share my experiences as a Linux Foundation Mentorship mentee: from applying for the project to becoming part of the community.

In the spring of 2023, I was accepted through LFX Mentorship as a CNCF mentee on the KubeVela project. In this project, I am responsible for developing a CUE code and documentation generator in Go from scratch, laying the infrastructure groundwork for KubeVela's future extensibility.

· 10 min read
Fog Dong

ChatGPT is taking the tech industry by storm, thanks to its unparalleled natural language processing capabilities. As a powerful AI language model, it has the ability to understand and generate human-like responses, revolutionizing communication in various industries. From streamlining customer service chatbots to enabling seamless language translation tools, ChatGPT has already proved its mettle in creating innovative solutions that improve efficiency and user experience.

Now the question is, can we leverage ChatGPT to transform the way we deliver applications? With the integration of ChatGPT into DevOps workflows, we may be witnessing the emergence of a new era of automation: PromptOps. This advance in AIOps technology is changing the way businesses operate, allowing for faster and more efficient application delivery.

In this article, we will explore how to integrate ChatGPT into your DevOps workflow to deliver applications.

· 15 min read
Jianbo Sun

KubeVela 1.7 has been officially released for some time, and during this period KubeVela was promoted to a CNCF incubating project, marking a new milestone. Version 1.7 is itself a turning point: KubeVela has focused on an extensible system design from the beginning, and as the demand for core controller functionality has gradually converged, more resources are now free to focus on user experience, ease of use, and performance. In this article, we will highlight the prominent features of version 1.7, such as workload takeover and performance optimization.

· 7 min read
CNCF

Originally posted on CNCF.

The CNCF Technical Oversight Committee (TOC) has voted to accept KubeVela as a CNCF incubating project.

KubeVela is an application delivery engine built with the Kubernetes control plane that makes deploying and operating applications across hybrid and multi-cloud environments easier, faster, and more reliable. KubeVela can orchestrate, deploy, and operate application components and cloud resources with a workflow-based application delivery model. The application delivery abstraction of KubeVela is powered by the Open Application Model (OAM).
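
To make that delivery model concrete, here is a minimal, hypothetical sketch in Go that declares a one-component OAM Application and hands it to the Kubernetes control plane for KubeVela's controller to reconcile. It assumes a reachable kubeconfig, a cluster with KubeVela installed, and KubeVela's built-in webservice component type; names such as hello-world and frontend are illustrative only.

```go
// Hypothetical sketch: declare a one-component OAM Application and submit it
// to the cluster. Assumes KubeVela is installed and a kubeconfig is reachable.
package main

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
)

func main() {
	// The Application is a regular Kubernetes custom resource, so it can be
	// built as an unstructured object without importing KubeVela's Go types.
	app := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "core.oam.dev/v1beta1",
		"kind":       "Application",
		"metadata": map[string]interface{}{
			"name":      "hello-world", // illustrative name
			"namespace": "default",
		},
		"spec": map[string]interface{}{
			"components": []interface{}{
				map[string]interface{}{
					"name": "frontend",
					// "webservice" is one of KubeVela's built-in component types.
					"type":       "webservice",
					"properties": map[string]interface{}{"image": "nginx:1.25"},
				},
			},
		},
	}}

	// Submit it through the Kubernetes API; KubeVela's controller takes over
	// the workflow-based delivery from here.
	c, err := client.New(config.GetConfigOrDie(), client.Options{})
	if err != nil {
		panic(err)
	}
	if err := c.Create(context.Background(), app); err != nil {
		panic(err)
	}
}
```

In practice the same object is typically written as YAML and applied with vela up or kubectl apply; the Go form just makes explicit that the whole model lives in the Kubernetes control plane.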


· 15 min read
Fog Dong

Serverless Application Engine (SAE) is a Kubernetes-based cloud product that combines the Serverless architecture and the microservice model. As a fast-iterating cloud product, it has encountered many challenges during its rapid development. How can these challenges be solved in the booming cloud-native era, with reliable and fast architecture upgrades? The SAE team and the KubeVela community worked closely to address these challenges and came up with a replicable open-source solution: KubeVela Workflow.

This article describes how to use KubeVela Workflow to upgrade the architecture of SAE and interprets multiple practice scenarios.

· 14 min read
Qiao Zhongpei

This article focuses on KubeVela and OpenYurt (two CNCF open-source projects) and introduces a cloud-edge collaboration solution in a practical Helm application delivery scenario.

Background

With the popularization of the Internet of Everything, the computing power of edge devices keeps growing. Leveraging the advantages of cloud computing to meet complex and diversified edge application scenarios and extend cloud-native technology to the edge is a new technical challenge, and cloud-edge collaboration is becoming a new technological focus.

OpenYurt focuses on extending Kubernetes to edge computing in a non-intrusive manner. Based on the container orchestration and scheduling capabilities of native Kubernetes, OpenYurt integrates edge computing power into the Kubernetes infrastructure for unified management. It provides capabilities (such as edge autonomy, efficient O&M channels, unitized edge management, edge traffic topology, secure containers, and edge Serverless/FaaS) and support for heterogeneous resources. In short, OpenYurt builds a unified infrastructure for cloud-edge collaboration in a Kubernetes-native manner.

Built on the OAM model, KubeVela focuses on helping enterprises build unified application delivery and management capabilities. It shields developers from the complexity of the underlying infrastructure while providing flexible extensibility. Out of the box, it offers microservice container management, cloud resource management, versioned and canary releases, scaling, observability, resource dependency orchestration and data passing, multi-cluster delivery, CI integration, and GitOps. This maximizes developer productivity through self-service application management and also meets the extensibility demands of the platform's long-term evolution.

OpenYurt + KubeVela - What Problems Can be Solved?

As mentioned before, OpenYurt supports onboarding edge nodes, allowing users to manage them through native Kubernetes operations. Here, "edge nodes" represent computing resources closer to users, such as virtual machines or physical servers in a nearby data center. After they are added through OpenYurt, these edge nodes become regular nodes that can be used in Kubernetes. OpenYurt uses NodePool to describe a group of edge nodes in the same region. Once basic resource management is in place, the core requirement becomes how to orchestrate and deploy applications to different NodePools in the cluster.
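
As a rough illustration of that requirement (not taken from the article), the sketch below uses plain client-go to pin a Deployment's pods to a single NodePool via a nodeSelector. The label key apps.openyurt.io/nodepool and the pool name beijing are assumptions about how OpenYurt labels pool member nodes, so verify them against your OpenYurt version; the full article addresses the same need declaratively with KubeVela rather than hand-written client code.

```go
// Rough illustration only: schedule a Deployment onto nodes of a single
// OpenYurt NodePool. The label key below is an assumption about how OpenYurt
// marks pool members; check it against your OpenYurt version.
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Load the local kubeconfig and build a typed clientset.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"app": "edge-nginx"}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "edge-nginx", Namespace: "default"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(2),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Assumed NodePool membership label; "beijing" is a sample pool name.
					NodeSelector: map[string]string{"apps.openyurt.io/nodepool": "beijing"},
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "nginx:1.25",
					}},
				},
			},
		},
	}

	// Create the Deployment; the scheduler then places its pods only on the pool's nodes.
	_, err = cs.AppsV1().Deployments("default").Create(
		context.Background(), dep, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```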

· 5 min read
Gokhan Karadas

This document aims to explain the integration of KubeVela and ArgoCD. There are two approaches to integrating this flow, and this doc explains the pros and cons of each. Before diving deep into the details, let's briefly describe KubeVela and ArgoCD.

KubeVela is a modern software delivery platform that makes deploying and operating applications across multiple environments easier, faster, and more reliable.

KubeVela is infrastructure-agnostic and application-centric. It allows you to build robust software and deliver it anywhere! KubeVela provides an Open Application Model (OAM) based abstraction approach to ship applications and any resource across multiple environments.

Open Application Model (OAM) is a set of standard yet higher-level abstractions for modeling cloud-native applications on top of today’s hybrid and multi-cloud environments. You can find more conceptual details here.

· 12 min read
Da Yin

Since the Open Application Model was invented in 2020, KubeVela has gone through dozens of releases and evolved advanced features for modern application delivery. Recently, KubeVela has applied to become a CNCF incubating project and has given several public talks in the community. As a memorandum, this article looks back at the starting points and gives a comprehensive introduction to the state of KubeVela in 2022.

What is KubeVela?

KubeVela is a modern software platform that makes delivering and operating applications across today's hybrid, multi-cloud environments easier, faster and more reliable. It has three main features:

  • Infrastructure agnostic: KubeVela is able to deploy your cloud-native application to various destinations, such as Kubernetes multi-clusters, cloud provider runtimes (like Alibaba Cloud, AWS, or Azure), and edge devices.
  • Programmable: KubeVela has abstraction layers for modeling applications and the delivery process. These layers allow users to build higher-level reusable modules for application delivery in a programmable way and integrate arbitrary third-party projects (like FluxCD, Crossplane, Istio, Prometheus) into the KubeVela system.
  • Application-centric: There are rich tools and ecosystems designed around KubeVela applications, which add extra capabilities for delivering and operating them, including CLI, UI, GitOps, Observability, etc.

KubeVela cares about the whole lifecycle of applications, including both the Day-1 delivery and Day-2 operating stages. It can connect with a wide range of Continuous Integration tools, like Jenkins or GitLab CI, and help users deliver and operate applications across hybrid environments.

· 6 min read
Daniel Higuero

Application Delivery on Kubernetes

The cloud-native landscape is formed by a fast-growing ecosystem of tools aimed at improving the development of modern applications in a cloud environment. Kubernetes has become the de facto standard for deploying enterprise workloads, improving development speed and accommodating the needs of a dynamic environment.

Kubernetes offers a comprehensive set of entities that enables any application, regardless of its complexity, to be deployed onto it. However, this has a significant impact on its adoption: Kubernetes is becoming as complex as it is powerful, which translates into a steep learning curve for newcomers to the ecosystem. This has generated a new trend focused on providing developers with tools that improve their day-to-day activities without losing the capabilities of the underlying system.

· 12 min read
Jianbo Sun

The community released the new milestone version v1.6 of KubeVela during the 2022 Apsara Conference. This release marks a qualitative shift for KubeVela from application delivery to application management. It also sets an industry precedent: building an application platform that integrates delivery and management on top of an extensible model.