5 posts tagged with "release-note"

KubeVela 1.4 Released: Making Application Delivery Safe, Foolproof, and Transparent

KubeVela is a modern software delivery control plane. Its goal is to make application deployment and O&M simpler, more agile, and more reliable in today's hybrid multi-cloud environments. Since the release of version 1.1, the KubeVela architecture has naturally solved enterprises' delivery problems in hybrid multi-cloud environments and, thanks to the OAM model, has provided ample extensibility, which has won the favor of many enterprise developers and accelerated KubeVela's iteration.

In version 1.2, we released an out-of-the-box visual console that lets end users publish and manage diverse workloads through a UI. Version 1.3 improved the extension system built around the OAM model and provided rich addon capabilities, along with many enterprise-grade features such as LDAP authentication, making enterprise integration easier. You can obtain more than 30 addons from the KubeVela community addon registry, covering well-known CNCF projects (such as argocd, istio, and traefik), middleware (such as Flink and MySQL), and hundreds of cloud vendor resources.

In version 1.4, we focused on making application delivery safe, foolproof, and transparent. We added core capabilities including multi-cluster authentication and authorization, resource topology visualization, and one-click installation of the control plane. We comprehensively strengthened delivery security in multi-tenancy scenarios, improved the consistency of the application development and delivery experience, and made the delivery process more transparent.

Core Features#

Out-of-the-Box Authentication and Authorization, Connecting to Kubernetes RBAC and Naturally Supporting Multiple Clusters#

After solving the challenges of architecture upgrades and extensibility, we noticed that the security of application delivery is an urgent, industry-wide problem. We have seen many security risks in real-world use cases:

  • With traditional CI/CD, many users embed the admin credentials of the production cluster directly into CI environment variables, with at most the crudest separation of which clusters a pipeline may deliver to. CI systems are heavily used for development and testing, so it is easy to introduce uncontrolled risks: once the CI system is attacked or a mis-operation occurs, this combination of centralized credentials and coarse-grained permissions can lead to huge damage.

  • A large number of CRD controllers rely on admin permissions to operate on cluster resources and place no constraints on API access. Kubernetes has rich RBAC capabilities, but because permission management has a high learning threshold (and is orthogonal to implementing the actual feature), most users simply take the default configuration into production. Highly flexible controllers (such as those that can distribute a Helm Chart) easily become targets for attackers, for example by embedding YAML in a chart that steals Secrets from other namespaces.

KubeVela 1.4 adds authentication and authorization capabilities that naturally support hybrid multi-cluster environments. A platform administrator can compose any API permissions at fine granularity, connect them to the Kubernetes RBAC system, grant these permission modules to developer users, and strictly restrict their scope. Administrators can also simply use the permission modules preset on the KubeVela platform, for example granting a user read-only access to a specific namespace of a specific cluster, which lowers the learning cost and mental burden. For users coming through the UI, the system completes the underlying authorization automatically and strictly validates the resource scope and types available to the project, so business-layer RBAC and the underlying Kubernetes RBAC system are connected and work together to provide security from the outside in, without permission amplification at any link.


Specifically, after the platform administrator authorizes a user, the user's request goes through several stages:

  1. First, KubeVela's webhook intercepts the user's request and records the user's identity information (ServiceAccount) on the Application object.

  2. When the KubeVela controller executes the Application's deployment plan, it uses the Kubernetes impersonation mechanism to act as the corresponding user, so resources are applied with that user's permissions.

  3. The KubeVela multi-cluster module (ClusterGateway) forwards the impersonated identity to the sub-cluster, whose Kubernetes APIServer authenticates and authorizes the request against the permissions that were created in that sub-cluster by the KubeVela authorization process.

In short, KubeVela's multi-cluster authentication and authorization ensure that each end user's permissions are strictly restricted and never amplified by the delivery system. At the same time, KubeVela itself follows the principle of least privilege, and the overall user experience stays simple.
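
To make this concrete, the sketch below uses plain Kubernetes RBAC objects of the kind that end up governing an impersonated request inside a sub-cluster. It is a minimal illustration, not the exact objects KubeVela generates: the namespace, role name, and user are placeholders.

# Illustrative only: a namespace-scoped, read-only permission module a platform
# administrator might grant to a developer. When the KubeVela controller applies
# resources, it impersonates the end user (here "alice"), so the sub-cluster
# APIServer evaluates the request against bindings like these.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: demo-readonly            # placeholder name
  namespace: project-demo        # placeholder namespace
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: demo-readonly-binding
  namespace: project-demo
subjects:
  - kind: User
    name: alice                  # the impersonated end user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: demo-readonly
  apiGroup: rbac.authorization.k8s.io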

Please read the official Permission Authentication and Authorization to learn more about the operating mechanisms behind them.

Lightweight and Convenient Application Development Control Plane Offers a Consistent Experience for Local Development and Production Deployment#

With the prosperity of the ecosystem, we have seen more developers begin to pay attention to cloud-native technology, but they often don't know how to get started. The following are the main reasons:

  • The application development environment is inconsistent with the production environment, so the experience differs. Cloud-native is a technology trend that emerged over the last five or six years and has developed rapidly, but most companies are still accustomed to building internal platforms that shield the underlying technologies. As a result, even when business developers learn cloud-native technology, it is hard to practice it in their daily work; at best they have to re-adapt to different APIs and configuration, and a consistent experience is out of reach.

  • Deploying and using Kubernetes-centric cloud-native technology is complicated. Buying hosts from a cloud vendor just to get started is expensive, and even after spending a lot of effort deploying a usable local environment, it is still difficult to wire the many cloud-native technologies together into a complete CI/CD process. That requires a great deal of operations knowledge that ordinary developers normally do not need and rarely get the chance to use.

We have also observed in the community that more companies are realizing that self-built platforms cannot keep up with the development of the community ecosystem; they hope to provide a consistent experience through KubeVela and the OAM model without losing the extensibility of the ecosystem. However, since KubeVela's control plane relies on Kubernetes, the threshold for getting started is still not low. The community has been thinking about this problem and concluded that we need a tool with these characteristics:

  • Relies only on a container runtime (such as Docker) to deploy and run, so every developer can easily obtain and use it.

  • Offers a consistent experience between local development and production, with reusable configuration, and can simulate a hybrid multi-cluster environment.

  • Ships as a single binary that supports offline deployment, with environment initialization taking no longer than drinking a glass of water (three minutes).

After several months of incubation, we can finally release this tool in 1.4: VelaD, where D stands for Daemon and Developer. It runs KubeVela on a single machine without relying on any existing Kubernetes cluster and, together with KubeVela, works as a lightweight application development control plane, giving developers an integrated development, testing, and delivery experience and simplifying the deployment and management of cloud-native applications.

You can use the Demo documentation to install and try this tool to learn more about the implementation details. It only takes three minutes to initialize the installation.


Show Resource Topology and Status to Make the Delivery Process Transparent#

Another major requirement in application delivery is transparency of the resource delivery process. For example, many community users like to use a Helm Chart to package many complex YAML files together, but once something goes wrong during deployment, even a small issue is hard to troubleshoot because the chart as a whole is a black box: the underlying storage may not be provisioned properly, an associated resource may not be created, or a low-level configuration may be wrong. With so many kinds of resources, especially in modern hybrid multi-cluster environments, extracting useful information from them and solving the problem is a real challenge.

In version 1.4, we added resource topology queries to improve KubeVela's application-centric delivery experience. When developers initiate application delivery, they only need to care about simple, consistent APIs; when they need to troubleshoot or follow the delivery process, they can use the resource topology feature to quickly see the orchestration relationships of resources across clusters, from the application all the way down to the running status of Pod instances, with relationships discovered automatically, even for complex and otherwise black-box Helm Charts.

resource graph

Take the application shown in the preceding figure as an example: a Redis cluster delivered through a Helm Chart package. The first layer of the graph is the application, the second layer is the cluster, and the third layer is the resources directly rendered by the application; the deeper layers are the associated lower-level resources, tracked according to each resource type.

Users can use the graph to observe the derived resources and their status throughout the delivery process; abnormal resources are highlighted in yellow or red along with the specific reason. By comparison, the application shown in the following figure is a basic webservice delivered to two clusters: developers can see that it creates Deployment and Service resources in both clusters, and that the Deployment in the ask-hongkong cluster is shown in yellow because its Pod instances have not fully started yet.

multiple-cluster-graph

This feature also lets you search, filter, and query by cluster and component, helping developers quickly locate problems and understand the delivery status of the underlying application with very little effort.

Please read the official blog Visualize the Topological Relationship of Multi-cluster Resources to learn more about the operation mechanism behind them.

Other Key Changes#

In addition to the headline features and the addon ecosystem, version 1.4 also enhances core capabilities such as workflow:

  • You can configure rules to ignore specific fields when maintaining application state, which lets KubeVela work alongside other controllers that mutate the same resources, such as HPA and Istio (see the sketch after this list).

  • Application resource recycling supports rules based on resource type. Currently, rules can be set by component name, component type, trait type, and resource type.

  • Workflows support sub-steps. Sub-steps support concurrent execution, which accelerates the delivery of resources in multi-cluster high availability scenarios.

  • You can pause a workflow step for a certain period. After that, the workflow automatically continues.

  • Resource deployment and recycling can follow the component dependency rules, so resources are deployed and recycled in order.

  • Workflow steps support conditional judgment. Currently the if: always rule is supported, which means the step is executed under any circumstances, making it possible, for example, to send a notification even when deployment fails.

  • You can set the deployment scope of traits (O&M features) independently of the deployment status of their components; for example, a trait can be deployed only in the control cluster.
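
As a rough illustration of several of the items above, the sketch below combines a garbage-collect policy scoped by trait type, a workflow with concurrently executed sub-steps, a timed suspend, and an if: always notification. It is a sketch only: the cluster names, webhook URL, and exact field layouts are assumptions based on our reading of the documentation, so check the official references before relying on them.

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: workflow-sample              # placeholder application name
spec:
  components:
    - name: web
      type: webservice
      properties:
        image: oamdev/hello-world    # placeholder image
  policies:
    - name: keep-exposed-resources
      type: garbage-collect          # recycling rules scoped by trait type
      properties:
        rules:
          - selector:
              traitTypes: ["expose"]
            strategy: never
    - name: topology-hangzhou        # placeholder target clusters
      type: topology
      properties:
        clusters: ["hangzhou-1"]
    - name: topology-beijing
      type: topology
      properties:
        clusters: ["beijing-1"]
  workflow:
    steps:
      - name: deploy-regions
        type: step-group             # the sub-steps below run concurrently
        subSteps:
          - name: deploy-hangzhou
            type: deploy
            properties:
              policies: ["topology-hangzhou"]
          - name: deploy-beijing
            type: deploy
            properties:
              policies: ["topology-beijing"]
      - name: cool-down
        type: suspend                # pause, then resume automatically
        properties:
          duration: "5m"
      - name: notify
        type: notification
        if: always                   # runs even if a previous step failed
        properties:
          slack:
            url:
              value: https://hooks.slack.com/services/example   # placeholder webhook
            message:
              text: "delivery finished"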

Thanks to the continued contributions of more than 30 organizations and individuals in China and internationally (including Alibaba Cloud, China Merchants Bank, and Napptive), more than 200 features and fixes were completed in just two months, making this an excellent iteration.

Please see the release details for more information.

Addon Ecosystem#

Thanks to the addon system improvements in 1.3, the addon ecosystem is also expanding rapidly:

  • The fluxcd addon is updated to support OCI registries and lets you select different values files of a chart during deployment.

  • The cert-manager addon is added to automatically manage Kubernetes certificates.

  • The flink-kubernetes-operator addon is added to deliver Flink workloads.

  • The kruise-rollout addon is added to support various release policies (such as canary release).

  • The pyroscope addon is added to support continuous performance tuning.

  • The traefik addon is added to support configuring an API gateway.

  • The vegeta addon is added to support automated stress testing of workloads.

  • The argocd addon is added to support ArgoCD-based Helm delivery and GitOps.

  • The Dapr addon is added to support the O&M capabilities of Dapr subscription and publishing.

  • The istio addon is added to support Istio-based gateway capabilities and traffic canaries.

  • The mysql-operator addon is added to support the deployment of highly available distributed MySQL databases.

Developers are welcome to participate in the community and create addons to extend KubeVela's capabilities.


How Can You Participate in the Community?#


KubeVela is an open-source project hosted by the CNCF and operated bilingually and internationally, with more than 300 contributors worldwide, more than 40 community members and maintainers, and more than 4000 community participants contributing code, documentation, and discussion.

If you are interested in open source, we welcome you to join the KubeVela community. You can learn about the ways to participate through the KubeVela community developer documentation, and community engineers will guide you along the way.

Recent Planning#

KubeVela will continue to evolve around an iterative cycle of two months. We will focus on these three dimensions in the next release:

  • Observability will provide end-to-end rich application insights around logs, metrics, and tracing dimensions to lay a solid foundation for the stability and intelligence of application delivery.

  • Workflow Delivery Capabilities will provide richer frameworks and integration capabilities, including custom step timeout, context information-based condition judgment, and branch workflow, and connect CI/CD, providing users with richer use cases and scenarios.

  • Application (including addon) management capabilities: disabling and restarting applications, and importing, exporting, and uploading applications to an application marketplace.

If you want to learn more about planning and become a contributor or partner, you can contact us by participating in community communication. We are looking forward to hearing from you!

KubeVela v1.3 released, CNCF's Next Generation of Cloud Native Application Delivery Platform

KubeVela Community

KubeVela Team

Thanks to contributions from hundreds of developers in the KubeVela community and around 500 PRs from more than 30 contributors, KubeVela version 1.3 is officially released. Compared to v1.2, released three months ago, this version provides a large number of new features in three areas: the OAM engine (Vela Core), the GUI dashboard (VelaUX), and the addon ecosystem. These features grew out of the in-depth practice of many end users, such as Alibaba, LINE, China Merchants Bank, and iQiyi, and have finally become part of the KubeVela project that everyone can use out of the box.

Pain Points of Application Delivery#

So, what challenges have we encountered in cloud-native application delivery?

Hybrid cloud and multi-cluster are the new norm#

On one hand, as the services of global cloud providers mature, most enterprises now build infrastructure primarily on cloud providers, with self-built infrastructure as a supplement. More and more enterprises can directly enjoy the convenience brought by the development of cloud technology, use the elasticity of the cloud, and reduce the cost of self-built infrastructure. What enterprises need is a standardized application delivery layer that can uniformly cover containers, cloud services, and various self-built services, so they can easily achieve cloud-to-cloud interoperability, reduce the risks of tedious application migration, and move to the cloud without worry.

On the other hand, for security reasons such as infrastructure stability and multi-environment isolation, and because of limits on how large a single Kubernetes cluster can grow, more and more enterprises are adopting multiple Kubernetes clusters to manage container workloads. How to manage and orchestrate container applications at the multi-cluster level, solving problems such as scheduling, dependencies, versioning, and canary releases, while still giving business developers a low-threshold experience, is a big challenge.

Clearly, the hybrid-cloud and multi-cluster scenarios involved in modern application delivery are not just multiple Kubernetes clusters; they also include diverse workloads and the DevOps capabilities for managing cloud services, SaaS, and self-built services.

How to pick from the 1,000+ technologies of the cloud-native era#

Take the open-source projects that have joined the CNCF landscape as an example: they now number more than 1,000. Teams of different scales, industries, and technical backgrounds may appear to be doing similar application delivery and management, but as requirements and usage scenarios change, their technology stacks diverge enormously, which implies a very high learning cost and a high threshold for integration and migration. Meanwhile, the CNCF's thousands of ecosystem projects keep tempting us to integrate new projects and add new features to better accomplish business goals. The era of a static technology stack is long gone.

Figure 1. CNCF landscape

Next-generation application delivery and management require flexible assembly capabilities: starting from a minimal capability set, a team should be able to add new functions at a small cost without significantly enlarging the platform. Traditional PaaS solutions built around a single, fixed experience have proven unable to meet a team's changing needs as its products evolve.

The next step for DevOps: delivering and managing applications on diverse infrastructures#

For more than a decade, DevOps technology has been evolving to increase productivity. The production process of business applications has also changed greatly: beyond the traditional cycle of coding, testing, packaging, deployment, maintenance, and observation, the continuous enhancement of cloud infrastructure means that various API-based SaaS services have directly become an integral part of applications. With diverse development languages, deployment environments, and components, the traditional DevOps toolchain is gradually unable to cope, while the complexity of user needs grows exponentially.

DevOps endures, but it needs different solutions. For modern application delivery and management, the pursuit is still the same: reduce human effort as much as possible and become more intelligent. The new generation of DevOps technology needs easier-to-use integration capabilities, service mesh capabilities, and management capabilities that combine observation and maintenance. At the same time, the tools need to be simple and easy to use, with the complexity kept inside the platform. Enterprises can then combine their business needs, work with both new architectures and legacy systems, and assemble a platform solution that suits their team, so the new platform does not become a burden for business developers or the enterprise.

The Path of KubeVela Lies Ahead#

To build the next-generation application delivery platform, here is what we do:

Figure 2. Overview of the OAM/KubeVela ecosystem

OAM (Open Application Model): an evolving methodology shaped by fast-paced practice#

Based on the internal practical experience of Alibaba and Microsoft, we launched OAM, a brand-new application model and concept, in 2019. Its core idea is separation of concerns: through the unified abstractions of components and traits, it standardizes business R&D in the cloud-native era, makes collaboration between development and DevOps teams more efficient, and avoids the complexity caused by differences between infrastructures. We then released KubeVela as the standardized implementation of the OAM model to help companies adopt OAM quickly while ensuring that OAM-compliant applications can run anywhere. In short, OAM describes the complete makeup of a modern application in a declarative way, and KubeVela runs it toward the declared final state; through the reconcile loop oriented to that final state, the two jointly ensure the consistency and correctness of application delivery.

Recently, Google published a paper, "Prodspec and Annealing," describing the results of its internal work on infrastructure. Its design concept and practice are strikingly similar to those of OAM and KubeVela: enterprises around the globe share the same vision for delivering cloud-native applications, and the paper reconfirms the soundness of a standardized model and of KubeVela. In the future, we will continue to advance the OAM model based on the community's practice with KubeVela and continue to distill best practices into the methodology.

A universal hybrid environment and multi-cluster delivery control plane#

The kernel of KubeVela exists in the form of a CRD Controller, which can be easily integrated with the Kubernetes ecosystem, and the OAM model is also compatible with the Kubernetes API. In addition to the abstraction and orchestration capabilities of the OAM model, KubeVela's microkernel is also a natural application delivery control plane designed for multi-cluster and hybrid cloud environments. This also means that KubeVela can seamlessly connect diverse workloads such as cloud resources and containers, and orchestrate and deliver them in different clouds and clusters.

In addition to the basic orchestration capabilities, one core feature of KubeVela is that it allows users to customize the delivery workflow. Workflow steps include deploying components to clusters, manual approval, sending notifications, and so on. When workflow execution reaches a stable state (such as waiting for manual approval), KubeVela automatically maintains that state. Through the CUE-based configuration language, you can also integrate any IaC-style process, such as a Kubernetes CRD, a SaaS API, a Terraform module, or an image script. KubeVela's IaC extensibility lets it integrate the Kubernetes ecosystem at a very low cost, so platform builders can quickly incorporate it into their own PaaS or delivery systems, and other ecosystem capabilities can be standardized for enterprise users through the same extensibility.

In addition to the advanced model and extensible kernel, we have heard many calls from the community for an out-of-the-box product that makes using KubeVela easier. Since version 1.2, the community has invested in the GUI dashboard (VelaUX) project, which runs on top of KubeVela's microkernel and the OAM model and provides a delivery platform for CI/CD scenarios. We hope enterprises can adopt VelaUX swiftly to meet current business needs while keeping a robust, extensible foundation for future ones.

Figure 3. Product architecture of KubeVela

Around this path, in version 1.3, the community brought the following updates:

Enhancement as a Kubernetes Multi-Cluster Control Plane#

Switch to multi-cluster seamlessly, with no migration#

After an enterprise has moved its applications to a cloud-native architecture, does it still need to transform its configuration when switching to multi-cluster deployment? The answer is no.

KubeVela is built on a multi-cluster foundation by nature. As shown in Figure 4, the application YAML describes an application with an Nginx component that will be published to all clusters labeled region=hangzhou. For the same application description, we only need to specify the names of the target clusters in a policy, or select a set of clusters by label.

Figure 4. OAM application - selecting deployment clusters
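
Since Figure 4 is an image, here is a rough textual sketch of what such an application looks like; the component and policy names are illustrative, and the exact fields in the figure may differ.

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: basic-topology               # placeholder name
spec:
  components:
    - name: nginx-basic
      type: webservice
      properties:
        image: nginx
  policies:
    - name: topology-hangzhou-clusters
      type: topology
      properties:
        clusterLabelSelector:        # deliver to every cluster labeled region=hangzhou
          region: hangzhou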

Of course, the application description shown in Figure 4 is entirely based on the OAM specification. If your current application is already defined as native Kubernetes resources, don't worry: we support a smooth transition. Figure 5, "Referencing Kubernetes resources for multi-cluster deployment," describes an application whose component references a Secret that already exists in the control cluster and publishes it to all clusters labeled region=hangzhou.

Figure 5. Referencing native Kubernetes resources

Beyond multi-cluster deployment of applications, referencing Kubernetes objects can also be used in scenarios such as replicating existing resources across clusters, backing up cluster data, and so on.
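
A rough sketch of this referencing pattern follows; the ref-objects field layout reflects our reading of the docs, and the Secret name is a placeholder.

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: ref-secret-sample            # placeholder name
spec:
  components:
    - name: shared-credential
      type: ref-objects              # reference an object that already exists in the control cluster
      properties:
        objects:
          - apiVersion: v1
            kind: Secret
            name: image-credential   # placeholder Secret name
  policies:
    - name: topology-hangzhou-clusters
      type: topology
      properties:
        clusterLabelSelector:
          region: hangzhou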

Handling multi-cluster differences#

Although the application is described with a unified OAM model, its deployment may differ across clusters: different regions may use different environment variables and image registries, different clusters may deploy different subsets of components, or a component may be deployed to multiple clusters for high availability. For such requirements we provide policies for differentiated configuration, as shown in Figure 6 below. The first and second policies, both of type topology, define the deployment targets in two different ways; the third selects only the specified components; the fourth selects two kinds of components and overrides the image configuration of one of them.

Figure 6. Differentiated configuration for multiple clusters

KubeVela supports flexible differentiated-configuration policies, which can patch component properties, traits, and other fields. As shown in the figure above, the third policy describes component selection and the fourth describes an image-version difference. Note that no target is specified when describing the difference: the differentiated configuration is applied flexibly by combining policies in the workflow steps.
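
The fragment below sketches roughly how the third and fourth kinds of policy in Figure 6 can be written: one override policy that narrows the component selection and one that patches the image of a single component. The names and values are illustrative, not the exact content of the figure.

policies:
  - name: topology-hangzhou          # a target policy, as in the first two policies of Figure 6
    type: topology
    properties:
      clusterLabelSelector:
        region: hangzhou
  - name: frontend-only
    type: override                   # deploy only the selected components
    properties:
      selector: ["frontend"]
  - name: legacy-image
    type: override                   # patch one component's image; note that no target is specified here
    properties:
      components:
        - name: frontend
          properties:
            image: nginx:1.20        # placeholder image tag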

Configure a multi-cluster delivery process#

The application delivery process to different target clusters is controllable and described by a workflow. As shown in Figure 7, the two deploy steps each target one cluster and combine a target policy with a differentiation policy. Policies only need to be defined atomically; they can then be combined flexibly in workflow steps to meet the requirements of different scenarios.

Figure 7. Customizing the multi-cluster delivery process

There are many more usages for delivery workflow, including multi-cluster canary release, manual approval, precise release control, etc.
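
Continuing the sketch above, a workflow along these lines combines the atomic policies into ordered deploy steps with a manual-approval gate in between; the step names are illustrative, and topology-beijing stands for a second target policy defined the same way as topology-hangzhou.

workflow:
  steps:
    - name: deploy-hangzhou
      type: deploy
      properties:
        policies: ["topology-hangzhou", "frontend-only"]
    - name: manual-approval
      type: suspend                  # wait for a human to resume the workflow
    - name: deploy-beijing
      type: deploy
      properties:
        policies: ["topology-beijing", "legacy-image"]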

Version control, safe and traceable#

With agile development, the description of a complex application changes all the time. To keep releases safe, we need the ability to roll an application back to a previous correct state, both during and after a release. Therefore, this version introduces a more robust versioning mechanism.

Figure 8. Querying the historical versions of an application

We can query every past version of an application, including its release time and whether it succeeded. We can compare the changes between versions and, when a release fails, quickly roll back based on the snapshot rendered by the last successful version. If a new release fails, you do not need to change the configuration source; you can re-release directly from a historical version. The version control mechanism embodies the centralized idea of application configuration management: the complete application description is rendered uniformly, then checked, stored, and distributed.

See more Vela Core usages#

VelaUX Introduces Multi-Tenancy Isolation and User Authentication#

Multi-tenancy and isolation for enterprises#

In VelaUX, we introduce the concept of a Project, which separates tenants for safety and covers application delivery targets, environments, team members, permissions, and more. Figure 9 shows the project list page, where project administrators can create projects according to their teams' needs and allocate the corresponding resources. This capability becomes very important when multiple teams or project groups in an enterprise publish their business applications on the same VelaUX platform.

Figure 9. Project management page

Open Authentication & RBAC#

User authentication is a basic capability that any serious platform must have. Since version 1.3, we support user authentication and RBAC-based authorization.

We believe most enterprises have built a unified authentication platform (OAuth or LDAP). Therefore, VelaUX integrates single sign-on through Dex first, supporting LDAP, OIDC, GitLab/GitHub, and other authentication methods, so VelaUX can act as one of the portals behind that unified entry. Of course, if your team does not need unified authentication, we also provide basic local user authentication.

Figure 10. Local user management

For authorization, we use the RBAC model, but plain RBAC cannot handle more precise permission-control scenarios, such as granting the operation rights of a single application to specific users. We therefore borrow the design of IAM and expand permissions into policies composed of resource + action + condition + behavior. The authorization system (front-end UI and back-end API) implements policy-based, fine-grained access control. For now, the current version only ships some built-in standard permission policies; subsequent versions will provide the ability to create custom permissions.

At the same time, we have also seen that some large enterprises have built independent IAM platforms. The RBAC data model of VelaUX is the same as that of common IAM platforms. Therefore, users who wish to connect VelaUX to their self-built IAM can extend seamlessly.

More secure centralized DevOps#

Application delivery inevitably involves some O&M configuration management, and in multi-cluster scenarios the need is especially prominent: credentials for private image registries, credentials for Helm repositories, SSL certificates, and so on. We need to manage the validity of these configurations uniformly and synchronize them securely to wherever they are needed, preferably without business developers having to be aware of it.

In version 1.3, we introduced an integrated configuration management module in VelaUX. Under the hood it also uses component templates and the application resource distribution pipeline to manage and distribute configurations; currently, Secrets are used for configuration storage and distribution. The configuration lifecycle is independent of business applications, and the distribution process is maintained independently in each project. Administrator users only need to fill in the configuration information according to the configuration template.

Figure 11. Integrated configuration management

Various addons provide different configuration types, and users can define more according to their needs and manage them uniformly. Business-level configuration management is also on the community's roadmap.

See more VelaUX usages#

Introducing version control in Addon ecosystem#

The addon feature was introduced in version 1.2, providing a specification for extension plug-ins along with installation and O&M management capabilities; the community extends KubeVela's ecosystem by building addons. As both the addons and the framework keep iterating, version compatibility gradually becomes a problem, and a version management mechanism is urgently needed.

  • Addon version distribution: We develop and manage the community's official addons on GitHub. Besides the version of the integrated third-party product, each addon also includes Definitions and other configurations, so each addon release is packaged with its own version number and its history is preserved. We also reuse the Helm Chart distribution API specification to distribute addons.

Multi-cluster Addon controllable installation#

Some addons need to be installed in sub-clusters, such as the FluxCD addon shown in Figure 12, which provides Helm Chart rendering and deployment capabilities. In the past this kind of addon was distributed to all sub-clusters, but community feedback showed that not every addon is needed in every cluster, so we need a differentiated mechanism to install extensions in specified clusters on demand.

Figure 12. Addon configuration

The user can specify the cluster to be deployed when enabling Addon, and the system will deploy the Addon according to the user's configuration.

New members to Addon ecosystem#

While the framework's capabilities iterate, the existing addons in the community are also continuously being added to and upgraded. At the cloud-service level, the number of supported vendors has grown to seven, and addons for AI training and serving, Kruise Rollout, Dex, and more have been added. The Helm Chart addon and the OCM cluster management addon have also been updated to improve the user experience.

More Addon usages#

Recent roadmap#

As the KubeVela core becomes more and more stable, its extensibility is gradually being unleashed, and the community's pace accelerated across the 1.2 and 1.3 releases. Going forward, we will iterate on new versions in a two-month cycle. The next release, 1.4, will add the following features:

  • Observability: Provide a complete observability solution around logs, metrics, and traces, provide out-of-the-box observability of the KubeVela system, allow custom observability configuration, and integrate existing observability components or cloud resources.
  • Offline installation: Provide relatively complete offline installation tools and solutions to facilitate more users to use KubeVela in an offline environment.
  • Multi-cluster permission management: Provides in-depth permission management capabilities for Kubernetes multi-cluster.
  • More out-of-the-box Addon capabilities.

The KubeVela community is looking forward to your joining to build an easy-to-use and standardized next-generation cloud-native application delivery and management platform!

KubeVela v1.2 - Focused on Developer Experience, Simplified Multi-Cluster Application Delivery

As cloud-native technology continues to grow, more and more infrastructure capabilities are becoming standardized PaaS or SaaS products. Nowadays you no longer need a whole team to build a product, because so many services can take on roles from software development and testing to infrastructure operations. Driven by the culture of agile development and cloud-native technology, more and more responsibilities, such as testing, monitoring, and security, are shifting left to developers; as DevOps emphasizes, monitoring, security, and operations work can be handled in the development phase via open-source projects and cloud services. Nonetheless, this also creates huge challenges for developers, who may lack control over diverse products and complex APIs. Not only do they have to make choices, but they also need to understand and coordinate complex, heterogeneous infrastructure capabilities to satisfy the fast-changing requirements of the business.

This complexity and uncertainty has undoubtedly worsened the developer experience, reducing the delivery efficiency of business systems and increasing operational risks. The tenet of developer experience is simplicity and efficiency, and both developers and enterprises have to choose better developer tools and platforms to achieve it. That is the focus of KubeVela v1.2 and the upcoming releases: building a modern platform based on cloud-native technology that covers development, delivery, and operations. As the following diagram of the KubeVela architecture shows, developers only need to focus on the application itself and use the differentiated operational and delivery capabilities built around it.

pic 1. KubeVela Architecture

OAM & KubeVela History#

Let's look back at the history of OAM and KubeVela to understand how it took this shape:

  • OAM (Open Application Model): birth and growth

To create simplicity in a complex world, the first problem to solve is how to build standard abstractions. OAM creatively proposes two separations: between applications and resources, and between development and operations (in an ideal world, operations can be fully automated). It is a cloud-native application specification with an everything-as-a-service, fully modular design. The spec has gained traction among major vendors all over the world since it was announced, because we all share a common goal: reducing the learning curve and giving developers Lego-style application composition.

  • v1.0 release of KubeVela, bringing the OAM spec implementation

With the application specification as guidance, advanced community users can build their own tools and practical solutions, but that remains inaccessible to most developers. KubeVela was born as the community's standard implementation to solve this problem. It absorbs the best parts of the latest Kubernetes community developments and provides automated, idempotent, and reliable application rollout controllers. With these features, KubeVela empowers developers to quickly deploy OAM-compliant applications.

  • v1.1 release of KubeVela, provides delivery workflow, making multi-cluster rollout controlled and simplified

As more and more enterprises adopt the cloud, hybrid and distributed cloud will certainly become the norm. KubeVela is designed and built on hybrid-cloud infrastructure as a modern application management system. We anticipate that the architecture of modern enterprise applications will be heterogeneous, considering availability, performance, data security, and other factors. KubeVela 1.1 added programmable delivery workflows, which fit the multi-cluster architecture natively and provide modern multi-cluster application rollout.

By 2022, on the road to serving developers, KubeVela has entered its fourth phase: empowering developers to do multi-cluster rollout far more easily. Let's dissect the changes:

Core Features in v1.2 Release#

The new GUI project: VelaUX#

Providing an easy-to-use UI console is the best way to reduce the developer learning curve. The KubeVela community has been asking for a UI since the project's inception, and with the v1.2 release it has finally arrived. A GUI helps developers organize and compose heterogeneous applications in a standard way, which in turn helps them analyze and discover business obstacles more quickly.

VelaUX is the front-end project of KubeVela, with an extensible core design. It introduces a low-code experience: drag-and-drop forms that take user input are generated from dynamic components. To achieve this, we designed UISchema, a front-end description spec that works with X-Definitions, and VelaQL, a multi-dimensional query language. This design lays the foundation for KubeVela's heterogeneous application delivery architecture.

From GUI, users can manage addons, connect Kubernetes clusters, distribute delivery targets, set up environments and deploy all kinds of apps, monitor runtime status, achieve full lifecycle management of application delivery.

pic 2. KubeVela Application Dashboard

For the new terms in GUI, please refer to Core Concepts documentation to learn more details.

Unified Multi-Cluster Control#

KubeVela manages any number of Kubernetes clusters and cloud vendor services as one big, unified infrastructure pool. From that pool, developers can set up different environments based on business requirements, workflow policies, team collaboration needs, and so on, carving separate environment workspaces out of the shared infrastructure. One application can be deployed into multiple environments, and environments are isolated from each other in both management and runtime.

pic 3. KubeVela Application Status

As shown above, an application can be deployed to default environments and other custom environments such as test or prod. Each environment can include multiple delivery targets. Each delivery target indicates an independent, separate Kubernetes cluster.

Heterogeneous Application Architecture#

Cloud-native technology gives us many options for building application delivery solutions. On Kubernetes, we can use mature technologies like Helm Charts to deliver middleware and third-party open-source software, deliver enterprise business applications as container images, and use OpenYurt to deliver and manage edge applications. With the open APIs of cloud services, we can deliver middleware such as databases, message queues, and caches, together with operational capabilities like logging and monitoring.

With so many options, KubeVela adopts OAM as the standard application definition to manage heterogeneous application architectures uniformly. KubeVela provides a highly extensible delivery engine: users can rely on built-in capabilities or install more addons to extend the platform, and manage application delivery in a consistent way. On top of KubeVela, what users see is a modular, everything-as-a-service control plane.

pic 4. Cloud Resources Deploy

As shown above, users can conveniently deliver cloud resources from the application management page. Developers can read the following docs to understand the full delivery process for a heterogeneous application architecture (a minimal sketch of the first case follows the list):

  1. Deliver Docker Image
  2. Deliver Helm Chart
  3. Deliver Kubernetes Resources
  4. Deliver cloud resources
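
For the first case, delivering a container image, a minimal application typically looks like the sketch below; the names, image, and port are placeholders.

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: first-app                  # placeholder name
spec:
  components:
    - name: express-server
      type: webservice             # deliver a plain Docker image as a web service
      properties:
        image: oamdev/hello-world  # placeholder image
        port: 8000
      traits:
        - type: scaler             # an operational trait attached to the component
          properties:
            replicas: 2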

Extension System#

KubeVela has been designed as an extensible system from the very beginning. The heterogeneous application architecture described above is achieved through KubeVela's extension system: standard interfaces let you plug in as many capabilities as you want, matching the differentiated requirements of enterprises while reducing the cognitive burden of learning new things. KubeVela's extension points include component types, operational traits, workflow step types, delivery policies, and more. This release adds the addon management system; an addon packages extension capabilities for easy distribution.

pic 5. KubeVela Addons

Currently we provide an official catalog of pre-packaged addons, shown above. Meanwhile, in the experimental catalog repo, we collaborate with community users to create more capabilities.

By now, KubeVela has grown into an application delivery platform that serves developers directly. Which enterprise scenarios can KubeVela be used for? Below we list a couple of common ones:

Enterprise Software Delivery Solutions#

Multi-Cluster DevOps#

Today, software delivery in many enterprises looks like the following diagram: compute resources from cloud vendors are used for the demo and production environments, while an in-house server farm is used for the development and testing environments. If a business application has multi-region disaster-recovery requirements, the production environment may even span multiple regions or clouds.

pic 6. DevOps Pipeline

A basic DevOps workflow includes code hosting and the CI/CD process; KubeVela provides support for the CD part. For enterprises, the practical steps are:

  1. Prepare local and cloud resources according to real needs, and make sure they are connected on the same network plane for unified resource management.
  2. Deploy KubeVela into the production environment and ensure it is reachable.
  3. Install the DevOps toolchain, such as GitLab, Jenkins, and Sonar, via KubeVela. The availability of code hosting and the developer toolchain is usually critical, so they should be deployed to the production environment; only if your local clusters can guarantee availability, and you are comfortable keeping the business code in the local environment, should you deploy them to local clusters.
  4. Set up local development environments via KubeVela and deploy test middleware locally; set up cloud middleware in the production environment.
  5. Set up CI pipelines for the business code via Jenkins, generate Docker images, and hand them to KubeVela for multi-environment deployment. Together this forms an end-to-end application delivery workflow.

Using KubeVela multi-cluster DevOps solution will provide the following advantages:

  1. Developers do not need to know any Kubernetes knowledge to achieve heterogeneous cloud-native application delivery.
  2. Unified multi-cluster, multi-environment management in a single control plane. Natively deploy multi-cluster applications.
  3. Unified application management mode, regardless of business applications or developer toolchain.
  4. Flexible workflow to help enterprises to glue various software delivery processes in a single workflow.

Unified Management of Heterogeneous Environments#

Different enterprises face different infrastructure and business problems and requirements. On the infrastructure side, an enterprise may build an in-house private cloud, buy some public cloud resources, and own some edge devices. On the business side, differences in scale and requirements lead to multi-cloud, multi-region application architectures while some legacy systems are kept around. On the developer side, software development needs various environments such as development, testing, staging, and production. On the management side, different business teams need isolation from each other while still allowing connections between certain business applications.

In the past, different business teams inside an enterprise easily became fragmented in their toolchains, technical architectures, and business management. We take this into account while innovating: KubeVela brings a solution that pursues unified management and an extensible, compatible architecture.

  • On the infrastructure side, we support different API formats including Kubernetes API, cloud APIs, and custom APIs to model all kinds of the infrastructure.
  • On the business architecture side, the application model is open and platform agnostic. KubeVela provides the ability to connect and empower businesses.
  • On the developer toolchain side, there may be different toolchains and artifacts inside an enterprise. KubeVela provides the extension mechanism and standard models to combine different kinds of artifacts into a standardized delivery workflow. Its standards shift left and empower enterprises to unify toolchain management: you don't need to worry about whether you are using GitLab or Jenkins, because KubeVela can integrate both.
  • On the operations side, the operational capabilities and toolchain solutions can be unified under KubeVela standards in the enterprises. Moreover, the community operational capabilities can be shared and reused easily via KubeVela extensions.

Thus, KubeVela can be used to connect different stages inside the enterprises, and unify all capabilities in a single platform. It is a practical and future-proof solution.

Enterprise Internal Application Platform#

Many enterprises with enough engineering capacity choose to build internal application platforms, mainly because they can customize the platform for their own use cases. In the past, many PaaS platforms were born out of Cloud Foundry, but we all know that a one-size-fits-all application platform will not satisfy every enterprise. If the application package format and delivery workflow can be standardized inside the enterprise, then all a user needs to do is fill in an image name; in traditional PaaS platforms, however, developers have to understand a pile of so-called general concepts. And if an enterprise wants to deploy AI applications, whose architecture differs somewhat, it would have to build an AI-specific PaaS, paying more and learning yet more concepts.

Therefore, when general-purpose products cannot satisfy their needs, enterprises consider developing their own. But building an internal platform from scratch takes enormous resources, sometimes exceeding the investment in the core business itself, which is not a feasible path.

With the introduction above, are you more familiar with the motivations and history of KubeVela? No single product is a silver bullet, but our goal is to create a standardized model that empowers more enterprises and developers to join the path toward simple and efficient developer tools. KubeVela is still in an early phase of development, and we hope you will join us in building it. We want to thank the 100+ contributors who have contributed to KubeVela.

Join the Community#

Collaborate on OAM Specification#

The OAM spec is the cornerstone of a modern application platform architecture. Currently, its evolution is driven by the practice of the KubeVela implementation, although the spec itself does not depend on KubeVela. We highly encourage cloud vendors, platform builders, and end users to join us in defining the OAM spec together, and we greatly appreciate that vendors such as Tencent, China Telecom, and China Unicom have supported the OAM spec and started collaborating. Everyone is welcome to share ideas, suggestions, and thoughts.

Go to the Community repo.

Collaborate on Addon ecosystem#

As mentioned above, we have created the addon extension system, and encourage community developers to contribute your tools, and share your thoughts.

Contribute Cloud Resources#

KubeVela integrates Terraform modules through the Terraform controller to support cloud resources. Several cloud resources are already supported, and we encourage community developers and cloud providers to contribute more.

Go to contribute cloud resource.

Provide Your Feedback#

We highly welcome everyone to participate in the KubeVela community discussion whether you want to know more or contribute code!

Go to Community repo.

KubeVela is a CNCF sandbox project. Learn more by reading the official documentation

KubeVela Releases 1.1, Reaching New Peaks in Cloud-Native Continuous Delivery

Overview#

Initialized by Alibaba and currently a CNCF sandbox project, KubeVela is a modern application platform that focuses on modeling the delivery workflow of microservices on top of Kubernetes, Terraform, the Flux Helm controller, and beyond. It adds strong value to the existing GitOps and IaC primitives with battle-tested application delivery practices, including deployment pipelines, cross-environment promotion, manual approval, canary rollout, notification, and more.

It is the first open-source project in the CNCF that focuses on the full-lifecycle continuous delivery experience, from abstraction and rendering to orchestration and deployment. It may remind you of Spinnaker, but it is designed to be simpler, cloud native, usable with any CI pipeline, and easily extended.

Introduction#

Kubernetes has made it easy to build application deployment infrastructure, whether on the cloud, on-prem, or in IoT environments. But developers managing microservice applications still face two problems. First, developers just want to deploy, and delivering applications with low-level infrastructure and orchestrator primitives is too much for them; it is very hard to keep up with all the details, and they need a simpler abstraction to "just deploy". Second, an application delivery workflow is a basic need for "just deploy", but it is inherently out of scope for Kubernetes itself, and the existing workflow addons and projects are too generic, going far beyond delivering applications. These problems make continuous delivery complex and unscalable even with the help of Kubernetes. GitOps can help in the deployment phase but lacks the capabilities of abstraction, rendering, and orchestration. The result is low SDO (software delivery and operational) performance and burnout among DevOps engineers; in the worst case, the complexity leads to unsafe operations and production outages.

The latest DORA survey [1] shows that organizations adopting continuous delivery are more likely to have processes that are high quality, low risk, and cost-effective. The question is how to make continuous delivery more focused and easier to practice. Hence, KubeVela introduces the Open Application Model (OAM), a higher-level abstraction for modeling the application delivery workflow in an app-centric, consistent, and declarative way. This empowers developers to continuously verify and deploy their applications with confidence, standing on the shoulders of Kubernetes control theory, GitOps, IaC, and beyond.

KubeVela latest 1.1 release is a major milestone bringing more continuous delivery features. It highlights:

  • Multi-environment, multi-cluster rollout: KubeVela lets users define the environments and clusters that application components are deployed to or promoted across, which makes multi-stage rollout easier to manage. For example, users can deploy applications to a test environment and then promote them to the production environment.
  • Canary rollout and approval gate: Application delivery is a procedural workflow that takes multiple steps. KubeVela provides such workflow on top of Kubernetes. By default, Users can use KubeVela to build canary rollout, approval gate, notification pipelines to deliver applications confidently. Moreover, the workflow model is declarative and extensible. Workflow steps can be stored in Git to simplify management.
  • Addon management: All KubeVela capabilities (e.g. Helm chart deployment) are pluggable. They are managed as addons [2]. KubeVela provides simple experience via CLI/UI to discover, install, uninstall addons. There is an community addon registry. Users can also bring their own addon registries.
  • Cloud Resource: Users can enable Terraform addon on KubeVela to deploy cloud resources using the same abstraction to deploy applications. This enables cooperative delivery of application and its dependencies. That includes databases, redis, message queues, etc. By using KubeVela, users don't need to switch over to another interface to manage middlewares. This provides unified experience and aligns better with the upcoming trends in CNCF Cooperative-Delivery Working Group [3].
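As a quick illustration of that addon workflow, the commands below discover and enable an addon from a registry; the fluxcd addon name is an example, and the exact flags may differ slightly between CLI versions.

vela addon list                  # discover addons available in the configured registries
vela addon enable fluxcd         # enable an addon, e.g. Helm chart delivery via FluxCD
vela addon disable fluxcd        # remove it when no longer needed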

That concludes the introduction to the KubeVela 1.1 release. In the following sections, we provide deep dives and examples of the new features.

Multi-Environment, Multi-Cluster Rollout#

Users often need to deploy applications across clusters in different regions. Additionally, they usually have a test environment to run automated tests before deploying to the production environment. However, how to do multi-environment, multi-cluster application rollout on Kubernetes remains mysterious to many users.

KubeVela 1.1 introduces multi-environment, multi-cluster rollout. It integrates the Open Cluster Management and Karmada projects to handle multi-cluster management. Based on that, it provides the EnvBinding policy to define per-environment config patches and placement decisions. Here is an example of an EnvBinding policy:

policies:
  - name: example-multi-env-policy
    type: env-binding
    properties:
      envs:
        - name: staging
          placement: # selecting the cluster to deploy to
            clusterSelector:
              name: cluster-staging
          selector: # selecting which component to use
            components:
              - hello-world-server
        - name: prod
          placement:
            clusterSelector:
              name: cluster-prod
          patch: # overlay patch on above components
            components:
              - name: hello-world-server
                type: webservice
                traits:
                  - type: scaler
                    properties:
                      replicas: 3

Below is a demo of a multi-stage application rollout from Staging to Production. The local cluster serves as the control plane, and the other two are the runtime clusters.

Note that all the resources and statuses are aggregated and abstracted in the KubeVela Application. If any problem occurs, KubeVela pinpoints the problematic resources for users. This results in faster recovery time and more manageable delivery.
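In practice, that aggregated view is one CLI call away; the application name below is illustrative.

vela status example-app          # print workflow progress and the health of each managed resource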

Canary Rollout, Approval, Notification#

Can you build a canary rollout pipeline in 5 minutes? Ask Kubernetes users and they will tell you that five minutes is not even enough to learn a single Istio concept. We believe that, as a developer, you should not need to master Istio to build a canary rollout pipeline. KubeVela abstracts away the low-level details and provides a simple solution as follows.

First, installing Istio is made easy via KubeVela addons:

vela addon enable istio

Then, users just need to define how many batches the rollout takes:

traits:
  - type: rollout
    properties:
      targetSize: 100
      rolloutBatches:
        - replicas: 10
        - replicas: 90

Finally, define the workflow of canary, approval, and notification:

workflow:
  steps:
    - name: rollout-1st-batch
      type: canary-rollout
      properties:
        # just upgrade first batch of component
        batchPartition: 0
        traffic:
          weightedTargets:
            - revision: reviews-v1
              weight: 90 # 90% to the old version
            - revision: reviews-v2
              weight: 10 # 10% to the new version
    - name: approval-gate
      type: suspend
    - name: rollout-rest
      type: canary-rollout
      properties:
        batchPartition: 1
        traffic:
          weightedTargets:
            - revision: reviews-v2
              weight: 100 # 100% shift to new version
    - name: send-msg
      type: webhook-notification
      properties:
        slack:
          url: <your slack webhook url>
          text: "rollout finished"
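When the workflow pauses at the approval-gate step above, an approver resumes it from the CLI. The sketch below assumes a recent vela CLI and an application named reviews.

vela workflow resume reviews     # approve the suspended step and continue the remaining batches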

Here is a full demo:

What Comes Next#

In this KubeVela release we have built the cornerstone for continuous delivery on Kubernetes. For the upcoming release, our major theme will be improving user experience. We will release a dashboard that takes the user experience to another level. Besides that, we will keep improving our CLI tools, debuggability, and observability. This will ensure our users can self-serve not only to deploy and manage applications, but also to debug and analyze their delivery pipelines.

For more project roadmap information, please see the KubeVela Roadmap.

Join the Community#

KubeVela is a community-driven, open-source project. Dozens of leading enterprises have adopted KubeVela in production, including Alibaba, Tencent, ByteDance, and XPeng Motors. You are welcome to join the community. Here are the next steps:

References#

[1] DORA full report: https://cloud.google.com/blog/products/devops-sre/announcing-dora-2021-accelerate-state-of-devops-report
[2] KubeVela Addon: https://github.com/kubevela/catalog/tree/master/addons/example
[3] Cooperative Delivery Charter: https://github.com/cncf/tag-app-delivery/blob/master/cooperative-delivery-wg/charter.md

KubeVela - The Extensible App Platform Based on Open Application Model and Kubernetes

Lei Zhang and Fei Guo

CNCF TOC Member/Kubernetes

7 Dec 2020 12:33pm


Last month at KubeCon+CloudNativeCon 2020, the Open Application Model (OAM) community launched KubeVela, an easy-to-use yet highly extensible application platform based on OAM and Kubernetes.

For developers, KubeVela is an easy-to-use tool that enables you to describe and ship applications to Kubernetes with minimal effort, yet for platform builders, KubeVela serves as a framework that empowers them to create developer-facing yet fully extensible platforms with ease.

The trend of cloud native technology is moving towards pursuing consistent application delivery across clouds and on-premises infrastructures, using Kubernetes as the common abstraction layer. Kubernetes, although excellent at abstracting low-level infrastructure details, does introduce extra complexity to application developers, namely understanding the concepts of pods, port exposing, privilege escalation, resource claims, CRDs, and so on. We've seen how this nontrivial learning curve and the lack of developer-facing abstraction have impacted user experience, slowed down productivity, and led to unexpected errors or misconfigurations in production.

Abstracting Kubernetes to serve developers' requirements is a highly opinionated process, and the resulting abstractions only make sense if the decision-makers are the platform builders. Unfortunately, platform builders today face a dilemma: there is no tool or framework for them to easily build or extend such abstractions.

Thus, many platforms today introduce restricted abstractions and add-on mechanisms despite the extensibility of Kubernetes. This makes it almost impossible to extend such platforms to meet developers' requirements or cover wider scenarios.

In the end, developers complain that those platforms are too rigid and slow to respond to feature requests or improvements. The platform builders do want to help, but the engineering effort is daunting: any simple API change in the platform can easily become a marathon negotiation around the opinionated abstraction design.

Introducing KubeVela#

With KubeVela, we aim to solve these two challenges in an approach that separates concerns of developers and platform builders.

For developers, KubeVela is an easy-to-use yet extensible tool that enables you to describe and deploy microservice applications with minimal effort. Instead of managing a handful of Kubernetes YAML files, a simple docker-compose-style appfile is all you need.

A Sample Appfile#

In this example, we create a vela.yaml along with your app. This file describes how to build the image, how to deploy it to Kubernetes, how to access the application, and how the system scales it automatically.

name: testapp
services:
  express-server:
    image: oamdev/testapp:v1
    build:
      docker:
        file: Dockerfile
        context: .
    cmd: ["node", "server.js"]
    port: 8080
    cpu: "0.01"
    route:
      domain: example.com
      rules:
        - path: /testapp
          rewriteTarget: /
    autoscale:
      min: 1
      max: 4
      cpuPercent: 5

Just run $ vela up, and your app will be alive on https://example.com/testapp.

Behind the Appfile#

The appfile in KubeVela does not have a fixed schema specification; instead, what you can define in this file is determined by which workload types and traits are available on your platform. These are two core concepts from OAM:

  • Workload type, which declares the characteristics that runtime infrastructure should take into account in application deployment. In the sample above, it defines a “Web Service” workload named express-server as part of your application.
  • Trait, which represents the operational configurations attached to an instance of a workload type. Traits augment a workload type instance with operational features. In the sample above, it defines a route trait to access the application and an autoscale trait for a CPU-based horizontal autoscaling policy.

Whenever a new workload type or trait is added, it becomes immediately available to declare in the appfile. Say a new trait named metrics is added; developers can check the schema of this trait by simply running $ vela show metrics and then define it in the previous sample appfile:

name: testapp
services:
  express-server:
    type: webservice
    image: oamdev/testapp:v1
    build:
      docker:
        file: Dockerfile
        context: .
    cmd: ["node", "server.js"]
    port: 8080
    cpu: "0.01"
    route:
      domain: example.com
      rules:
        - path: /testapp
          rewriteTarget: /
    autoscale:
      min: 1
      max: 4
      cpuPercent: 5
    metrics:
      port: 8080
      path: "/metrics"
      scheme: "http"
      enabled: true

Vela Up#

The vela up command deploys the application defined in the appfile to Kubernetes. After deployment, you can use vela status to check how to access your application via the route trait declared in the appfile.

Apps deployed with KubeVela receive a URL (and versioned pre-release URLs) with a valid TLS certificate automatically generated via cert-manager. KubeVela also provides a set of commands (e.g. vela logs, vela exec) to best support managing your application without becoming a Kubernetes expert. Learn more about vela up and the appfile.
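Putting these together, a typical loop after editing the appfile might look like the following; the application name comes from the sample above, and the exact flags may vary by version.

$ vela up                        # deploy or update the application described in vela.yaml
$ vela status testapp            # check deployment status and the access URL from the route trait
$ vela logs testapp              # stream logs from the running service
$ vela exec testapp              # open an interactive shell inside the container for debugging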

KubeVela for Platform Builders#

The above experience could not be achieved without KubeVela's innovative offerings to platform builders as an extensible platform engine. These features are the hidden gems that make KubeVela unique. In detail, KubeVela relieves the pains of building developer-facing platforms on Kubernetes by doing the following:

  • Application Centric. Behind the appfile, KubeVela enforces "application" as its main API, and all of KubeVela's capabilities serve the applications' requirements only. This is how KubeVela brings application-centric context to the platform by default and turns building such platforms into working around the application architecture.
  • Extending Natively. As mentioned in the developer section, an application described by an appfile is composed of various pluggable workload types and operational features (i.e. traits). Capabilities from the Kubernetes ecosystem can be added to KubeVela as new workload types or traits through the Kubernetes CRD registry mechanism at any time.
  • Simple yet Extensible User Interface. Behind the appfile, KubeVela uses CUE as the "last mile" abstraction engine between the user-facing schema and the control plane objects. KubeVela provides a set of built-in abstractions to start with, and platform builders are free to modify them at any time. Capability additions, updates, or abstraction changes all take effect at runtime; neither recompilation nor redeployment of KubeVela is required (a sketch of such a definition follows this list).
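To illustrate that "last mile" abstraction, the sketch below shows roughly how a trait can be defined with a CUE template. The schematic.cue layout shown here follows later TraitDefinition versions, so take it as an illustration of the idea rather than the exact schema of that era.

apiVersion: core.oam.dev/v1beta1
kind: TraitDefinition
metadata:
  name: scaler
spec:
  appliesToWorkloads:
    - deployments.apps
  schematic:
    cue:
      template: |
        // field exposed to developers in the appfile / Application
        parameter: replicas: *1 | int
        // CUE patch merged into the underlying workload object
        patch: spec: replicas: parameter.replicas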

Under the hood, the KubeVela core is built on top of the Crossplane OAM Kubernetes Runtime, with KEDA, Flagger, Prometheus, etc. as dependencies, yet its feature pool is "unlimited" and can be extended at any time.

With KubeVela, platform builders now have the tooling support to design and ship any new capabilities with abstractions to end-users with high confidence and low turnaround time. And for a developer, you only need to learn these abstractions, describe the app with them in a single file, and then ship it.

Not Another PaaS System#

Most typical Platform-as-a-Service (PaaS) systems also provide full application management capabilities and aim to improve developer experience and efficiency. In this context, KubeVela shares the same goal.

However, unlike most typical PaaS systems, which are either inextensible or create their own addon systems maintained by their own communities, KubeVela is designed to fully leverage the Kubernetes ecosystem as its capability pool. Hence, there is no additional addon system introduced in this project. For platform builders, a new capability can be installed in KubeVela at any time by simply registering its API resource to OAM and providing a CUE template. We hope and expect that, with the help of the open source community, the number of KubeVela's capabilities will grow dramatically over time. Learn more about using community capabilities via $ vela cap.
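For instance, the early CLI exposed capability management roughly as follows; the subcommand names and the my-center/kubewatch capability are taken from early KubeVela documentation as assumptions and may have been renamed in later releases.

$ vela cap ls                            # list workload types and traits currently available on the platform
$ vela cap install my-center/kubewatch   # install a community capability (registry and capability name are illustrative)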

So in a nutshell, KubeVela is a Kubernetes plugin for building application-centric abstractions. It leverages the native Kubernetes extensibility and capabilities to resolve a hard problem – making application management enjoyable on Kubernetes.

Learn More#

KubeVela is incubated by the OAM community as the successor of the Rudr project, but rather than being a reference implementation, KubeVela intends to be an end-to-end implementation that can be used in wider scenarios. The design of KubeVela's appfile is also part of an experimental attempt in the OAM specification to bring a simplified user experience to developers.

To learn more about KubeVela, please visit KubeVela's documentation site. The following are also good next steps:

  • Try out KubeVela following the step-by-step tutorial in its Quick Start page.
  • Give us feedback! KubeVela is still in its early stage, and we are happy to hear feedback from the community via the OAM Gitter or Slack channel.
  • Extend KubeVela to build your own platforms. If you have an idea for a new workload type or trait, or want to build something more complex like a database or AI PaaS on top of KubeVela, post your idea as a GitHub issue or propose it to the OAM community; we are eager to hear it.
  • Contribute to KubeVela. KubeVela is initialized by the open source community, with bootstrap contributors from 8+ different organizations. We intend to donate this project to a neutral foundation as soon as it becomes stable.