
2 posts tagged with "Kubernetes"


Jianbo Sun · 8 min read

If you're looking for a way to glue the Terraform ecosystem to the Kubernetes world, congratulations! This blog gives you exactly that.

We will introduce how to integrate Terraform modules into KubeVela by solving a real-world problem -- "Fixing the Developer Experience of Kubernetes Port Forwarding" -- inspired by an article from Alex Ellis.

In general, this article will be divided into two parts:

  • Part 1 will introduce how to glue Terraform with KubeVela; it requires some basic knowledge of both Terraform and KubeVela. You can skip this part if you don't want to extend KubeVela as a developer.
  • Part 2 will introduce how KubeVela can 1) provision a cloud ECS instance with a public IP; 2) use that ECS instance as a tunnel server to provide public access to any container service within an intranet environment.

OK, let's go!

Part 1. Glue Terraform Module as KubeVela Capability

KubeVela is a modern software delivery control plane, so you may ask: "What's the benefit of doing this?"

  1. The power of gluing Terraform with the Kubernetes ecosystem, including Helm charts, in one unified solution that helps you do GitOps, CI/CD integration, and application lifecycle management.
    • Think of deploying a product that includes a cloud database, a container service, and several Helm charts: now you can manage and deploy them together without switching between different tools.
  2. A declarative model for all resources; KubeVela runs the reconcile loop until it succeeds.
    • You won't be blocked by network issues from the Terraform CLI.
  3. A powerful CUE-based workflow that lets you define any preferred steps in the application delivery process.
    • You can compose the delivery the way you like, such as canary rollout, multi-cluster/multi-env promotion, and notification.

If you're already experienced with Terraform, this integration is pretty easy.

Build Your Terraform Module

This part can be skipped if you already have a well-tested Terraform module.

Before you start, make sure you have the terraform CLI installed.

Here's my Terraform module (https://github.com/wonderflow/terraform-alicloud-ecs-instance) for this demo.

  • Clone this module:
git clone https://github.com/wonderflow/terraform-alicloud-ecs-instance.git
cd terraform-alicloud-ecs-instance
  • Initialize and download the latest stable version of the Alibaba Cloud provider:
terraform init
  • Configure the Alibaba Cloud provider credentials:
export ALICLOUD_ACCESS_KEY="your-accesskey-id"
export ALICLOUD_SECRET_KEY="your-accesskey-secret"
export ALICLOUD_REGION="your-region-id"

You can also create a provider.tf containing the credentials instead:

provider "alicloud" {
access_key = "your-accesskey-id"
secret_key = "your-accesskey-secret"
region = "cn-hangzhou"
}
  • Test creating the resources:
terraform apply -var-file=test/test.tfvars
  • Destroy the test resources:
terraform destroy -var-file=test/test.tfvars

You can customize this module to your needs and push it to your own GitHub repository.
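
Note that KubeVela will later surface the module's Terraform outputs (Part 2 writes them into a Kubernetes secret), so the module must declare an output for anything you want to read back, such as the public IP. A rough illustration of such an output (the exact expression lives in the repo's outputs.tf):

output "this_public_ip" {
  description = "Public IP addresses of the created ECS instances."
  # Illustrative; see outputs.tf in the repository for the real expression.
  value       = alicloud_instance.this.*.public_ip
}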

Make the Terraform Module a KubeVela Capability

Before you start, make sure you have installed the KubeVela control plane. Don't worry if you don't have a Kubernetes cluster; velad is enough for a quick demo.

We'll use the Terraform module we just prepared.

  • Generate Component Definition
vela def init ecs --type component --provider alibaba --desc "Terraform configuration for Alibaba Cloud Elastic Compute Service" --git https://github.com/wonderflow/terraform-alicloud-ecs-instance.git > alibaba-ecs-def.yaml

Change the git URL to your own if you have customized the module.
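
The generated file contains a ComponentDefinition that references the git repository. A trimmed sketch of roughly what it looks like (the exact output of vela def init and the terraform-controller API version may differ slightly):

apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: alibaba-ecs
  annotations:
    definition.oam.dev/description: Terraform configuration for Alibaba Cloud Elastic Compute Service
  labels:
    type: terraform
spec:
  workload:
    definition:
      apiVersion: terraform.core.oam.dev/v1beta2
      kind: Configuration
  schematic:
    terraform:
      # Points at the remote git repo instead of inlining the HCL.
      configuration: https://github.com/wonderflow/terraform-alicloud-ecs-instance.git
      type: remote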

  • Apply it to the vela control plane
vela kube apply -f alibaba-ecs-def.yaml

vela kube apply works the same as kubectl apply.

The ECS module extension has now been added; you can learn more details here.

That completes the integration; end users can discover the capability immediately after it is applied.

End users can check the parameters with the following command:

vela show alibaba-ecs

They can also view it in a browser by launching:

vela show alibaba-ecs --web

That's all the integration work needed.

Part 2. Fixing the Developer Experience of Kubernetes Port Forwarding

In this part, we will introduce a solution that lets you expose any of your Kubernetes services publicly on a specific port. The solution is composed of:

  1. A KubeVela environment; you already have one if you followed Part 1.
  2. Alibaba Cloud ECS; KubeVela will automatically create a tiny ECS instance (1 vCPU, 1 GB) using your access key.
  3. frp; KubeVela will launch this proxy on both the server side and the client side.

Prepare the KubeVela Environment

  • Install KubeVela
curl -fsSL https://static.kubevela.net/script/install-velad.sh | bash
velad install

Check this doc to learn more about the installation.

  • Enable Terraform Addon and Alibaba Provider
vela addon enable terraform
vela addon enable terraform-alibaba
  • Add credentials as a provider
vela provider add terraform-alibaba --ALICLOUD_ACCESS_KEY <your-accesskey-id> --ALICLOUD_SECRET_KEY <your-accesskey-secret> --ALICLOUD_REGION <your-region> --name terraform-alibaba-default

Check this doc for more details about other clouds.

Launch an ECS with a Public IP and Deploy the frp Server

After the environment is prepared, you can create an application as below.

cat <<EOF | vela up -f -
# YAML begins
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: ecs-demo
spec:
  components:
    - name: ecs-demo
      type: alibaba-ecs
      properties:
        providerRef:
          name: terraform-alibaba-default
        writeConnectionSecretToRef:
          name: outputs-ecs
        name: "test-terraform-vela-123"
        instance_type: "ecs.n1.tiny"
        host_name: "test-terraform-vela"
        password: "Test-123456!"
        internet_max_bandwidth_out: "10"
        associate_public_ip_address: "true"
        instance_charge_type: "PostPaid"
        user_data_url: "https://raw.githubusercontent.com/wonderflow/terraform-alicloud-ecs-instance/master/frp.sh"
        ports:
          - 8080
          - 8081
          - 8082
          - 8083
          - 9090
          - 9091
          - 9092
        tags:
          created_by: "Terraform-of-KubeVela"
          created_from: "module-tf-alicloud-ecs-instance"
# YAML ends
EOF

This application will deploy an ECS instance with a public IP. Explanation of some useful fields:

  • providerRef: reference to the provider credentials we added
  • writeConnectionSecretToRef: the outputs of the Terraform module will be written into this secret
  • name: name of the ECS instance
  • instance_type: ECS instance type
  • host_name: hostname of the ECS instance
  • password: password of the ECS instance; you can connect via ssh
  • internet_max_bandwidth_out: internet bandwidth of the ECS instance
  • associate_public_ip_address: whether to create a public IP
  • instance_charge_type: the billing method of the resource
  • user_data_url: installation script run after the ECS instance is created; the frp server is installed in this script
  • ports: ports that will be allowed in the VPC and security group; 9090/9091 are required for the frp server while the others are reserved for client usage
  • tags: tags of the ECS instance

You can learn more fields by:

vela show alibaba-ecs

After it is applied, you can check the status and logs of the application by:

vela status ecs-demo
vela logs ecs-demo

The secret written by the Terraform resource contains the output values.
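
For reference, that secret looks roughly like this (the data keys mirror the module's outputs; values are base64-encoded):

apiVersion: v1
kind: Secret
metadata:
  name: outputs-ecs
  namespace: default
type: Opaque
data:
  # base64 of ["121.196.106.174"], the module's this_public_ip output
  this_public_ip: WyIxMjEuMTk2LjEwNi4xNzQiXQ==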

You may already see the result in vela logs; you can also check the output from Terraform by:

$ kubectl get secret outputs-ecs --template={{.data.this_public_ip}} | base64 --decode
["121.196.106.174"]

KubeVela will soon support querying resources like this directly; see https://github.com/kubevela/kubevela/issues/4268.

As a result, you can visit the frp server admin page on port 9091; the admin password set in the script is vela123.

That finishes the server part.

Use the frp Client in KubeVela

The usage of the frp client is very straightforward: we can provide a public IP for any service inside the cluster.

  1. Deploy it standalone to proxy any Kubernetes Service.
cat <<EOF | vela up -f -
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: frp-proxy
spec:
  components:
    - name: frp-proxy
      type: worker
      properties:
        image: oamdev/frpc:0.43.0
        env:
          - name: server_addr
            value: "121.196.106.174"
          - name: server_port
            value: "9090"
          - name: local_port
            value: "80"
          - name: connect_name
            value: "velaux-service"
          - name: local_ip
            value: "velaux.vela-system"
          - name: remote_port
            value: "8083"
EOF

In this case, we set local_ip to velaux.vela-system, which means we're visiting the Kubernetes Service named velaux in the namespace vela-system.

As a result, you can visit the velaux service from the public IP at 121.196.106.174:8083.
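
Under the hood, the oamdev/frpc image renders these environment variables into an frp client configuration. Conceptually, the settings above are equivalent to an frpc.ini like this (illustrative only; the tcp proxy type and the exact file layout are assumptions, not taken from the image):

[common]
server_addr = 121.196.106.174
server_port = 9090

[velaux-service]
; connect_name becomes the proxy section name
type = tcp
local_ip = velaux.vela-system
local_port = 80
remote_port = 8083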

  2. Compose two components together so they share the same lifecycle.
cat <<EOF | vela up -f -
# YAML begins
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: composed-app
spec:
  components:
    - name: web-new
      type: webservice
      properties:
        image: oamdev/hello-world:v2
        ports:
          - port: 8000
            expose: true
    - name: frp-web
      type: worker
      properties:
        image: oamdev/frpc:0.43.0
        env:
          - name: server_addr
            value: "121.196.106.174"
          - name: server_port
            value: "9090"
          - name: local_port
            value: "8000"
          - name: connect_name
            value: "composed-app"
          - name: local_ip
            value: "web-new.default"
          - name: remote_port
            value: "8082"
EOF

Wow! Then you can visit the hello-world service by:

curl 121.196.106.174:8082

The webservice type component automatically generates a Service with the name of the component. The frp-web component proxies traffic to the service web-new in the default namespace, which is exactly the generated Service.

When the application is deleted, all the resources defined in the same app are deleted together.

You can also compose a database together with them, delivering all the needed components at one time.
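
For instance, assuming the alibaba-rds component type from the terraform-alibaba addon, a database component in the same application might look roughly like this (property names follow the addon's parameters; all values are placeholders):

    - name: db
      type: alibaba-rds
      properties:
        instance_name: composed-app-db
        account_name: oam
        password: <your-db-password>
        # database connection info lands in this secret, like outputs-ecs above
        writeConnectionSecretToRef:
          name: db-conn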

Clean Up

You can clean up all the applications in the demo with vela delete:

vela delete composed-app -y
vela delete frp-proxy -y
vela delete ecs-demo -y

You've now learned how to use KubeVela in this scenario; try it in your own environment!

What's more

In this blog, we have introduced how to integrate Terraform modules with KubeVela and walked through an interesting use case that lets you expose any internal service to the public.

While KubeVela can do more things than that, go and discover it at kubevela.io!

· 13 min read

KubeVela is a modern software delivery control plane. The goal is to make application deployment and O&M simpler, more agile, and more reliable in today's hybrid multi-cloud environments. Since the release of version 1.1, KubeVela's architecture has naturally solved enterprises' delivery problems in hybrid multi-cloud environments, and the OAM model has provided sufficient extensibility, which has won KubeVela the favor of many enterprise developers and accelerated its iteration.

In version 1.2, we released an out-of-the-box visual console that allows end users to publish and manage diverse workloads through the UI. Version 1.3 improved the extension system built around the OAM model, provided rich addon capabilities, and delivered a large number of enterprise-grade features, including LDAP authentication, making enterprise integration more convenient. You can obtain more than 30 addons from the KubeVela community addon registry, covering well-known CNCF projects (such as argocd, istio, and traefik), middleware (such as Flink and MySQL), and hundreds of cloud vendor resources.

In version 1.4, we focused on making application delivery safe, foolproof, and transparent. We added core functions including multi-cluster authentication and authorization, a complex resource topology display, and a one-click installation control plane. We comprehensively strengthened delivery security in multi-tenancy scenarios, improved the consistency of the application development and delivery experience, and made the application delivery process more transparent.

Core Features

Out-of-the-Box Authentication and Authorization, Connecting to Kubernetes RBAC and Naturally Supporting Multiple Clusters

After solving the challenges of architecture upgrades and extensibility, we noticed that the security of application delivery is an urgent problem across the entire industry. We found many security risks in real use cases:

  • In traditional CI/CD, many users directly embed the admin credentials of the production cluster into CI environment variables, with at most a basic separation of which clusters are delivered to. CI systems are usually used intensively for development and testing, so it is easy to introduce uncontrolled risks. Once the CI system is attacked by hackers, or some human mis-operation occurs, this centralized management and coarse-grained permission control can lead to huge damage.

  • A large number of CRD controllers rely on admin permissions to operate on cluster resources and do not constrain API access. Kubernetes has rich RBAC control capabilities, but because the threshold of permission management is high (and independent of the implementation of specific features), most users do not care about the details: they keep the default configuration and put it into production. Highly flexible controllers (such as those that can distribute a Helm Chart) can easily become targets of hacker attacks, for example by embedding a YAML script in a Helm chart that steals keys from other namespaces.

KubeVela 1.4 adds authentication and authorization capabilities and naturally supports hybrid multi-cluster environments. A KubeVela platform administrator can customize any combination of API permissions at fine granularity, connect to the Kubernetes RBAC system, grant these permission modules to developer users, and strictly restrict their permissions. Administrators can also simply use the permission modules preset on the KubeVela platform, for example granting a user read-only permissions on a specific namespace of a cluster, which lowers the learning cost and mental burden and comprehensively strengthens the security of application delivery. For users on the UI, the system completes the underlying authorization automatically and strictly verifies the scope and type of resources available to the project, so the business-layer RBAC permissions and the underlying Kubernetes RBAC system are connected and work together, achieving security from the outside in without permission amplification at any step.

[Figure: multi-cluster authentication and authorization flow]

Specifically, after the platform administrator authorizes a user, the user's request goes through several stages (as shown in the figure):

  1. First, KubeVela's webhook intercepts the user's request and writes the user's identity information (ServiceAccount) into the Application object.

  2. When the KubeVela Controller executes the deployment plan of the Application, it acts with the permissions of the corresponding user, based on the Kubernetes impersonation mechanism.

  3. The KubeVela multi-cluster module (ClusterGateway) passes the corresponding identity to the sub-cluster, and the sub-cluster's Kubernetes APIServer authenticates the request against the sub-cluster's own permissions, which were created by the KubeVela authorization process.

In short, KubeVela's multi-cluster authentication and authorization ensure that each end user's permissions are strictly restricted and are not amplified by the delivery system. At the same time, KubeVela's own permissions are minimized, and the overall user experience stays simple.
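
To get a feel for the underlying mechanism, Kubernetes impersonation can be tried directly with kubectl; this is the same API the KubeVela controller builds on (a generic sketch, not KubeVela-specific; "alice" and "demo" are placeholders):

# Check what the restricted user "alice" may do, via the impersonation API:
kubectl auth can-i create deployments --namespace demo --as alice
# Perform a request with alice's permissions (requires impersonate permission):
kubectl get pods --namespace demo --as alice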

Please read the official Authentication and Authorization documentation to learn more about the mechanisms behind this.

Lightweight and Convenient Application Development Control Plane Offers a Consistent Experience for Local Development and Production Deployment

With the prosperity of the ecosystem, we have seen more developers begin to pay attention to cloud-native technology, but they often don't know how to get started. The following are the main reasons:

  • The application development environment is inconsistent with the production environment, and the experience differs. Cloud-native is a technology trend that has emerged in the last five or six years. It has developed rapidly, but most companies are still accustomed to building internal platforms that shield the underlying technologies. As a result, even if ordinary business developers learn cloud-native technology, it is difficult to practice it in their actual work; at best they have to re-adapt to the platform's APIs and configuration, let alone get a consistent experience.

  • Deploying and using Kubernetes-centered cloud-native technologies is complicated. Purchasing host services from cloud vendors just to get started is expensive. Even after spending a lot of effort learning to deploy a usable local environment, it is difficult to connect the many cloud-native technologies needed to complete an entire CI/CD process, which involves a lot of operations knowledge that ordinary developers usually do not need to care about and rarely get a chance to use.

We have also observed in the community that more companies are beginning to realize that self-built platforms cannot keep up with the development of the community ecosystem. They hope to provide a consistent experience through KubeVela and the OAM model without losing the scalability of the ecosystem. However, since KubeVela's control plane relies on Kubernetes, the threshold for getting started is still not low. In response, the community has been thinking and looking for solutions, and we concluded that we need a tool with these characteristics:

  • It relies only on a container environment (such as Docker) to deploy and run, so every developer can easily obtain and use it.

  • Local development and production environments offer a consistent experience with reusable configuration, and it can simulate a hybrid multi-cluster environment.

  • A single binary package supports offline deployment, and environment initialization takes no longer than drinking a glass of water (three minutes).

After several months of incubation, we can finally release this tool in 1.4: VelaD, where D stands for Daemon and Developer. It helps KubeVela run on a single machine without relying on any existing Kubernetes cluster, and together with KubeVela it works as a lightweight application development control plane, giving developers an integrated development, testing, and delivery experience and simplifying the complexity of cloud-native application deployment and management.
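
In practice, spinning up and tearing down the environment is just a pair of commands (a minimal sketch; subcommand names follow the VelaD docs):

velad install
# tear down when you are done experimenting:
velad uninstall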

You can use the Demo documentation to install and try this tool to learn more about the implementation details. It only takes three minutes to initialize the installation.


Show Resource Topology and Status to Make the Delivery Process Transparent

Another big demand in application delivery is transparent management of the resource delivery process. For example, many users in the community like to use a Helm Chart to package a lot of complex YAML together, but once the deployment has a problem, the overall black box makes even small issues hard to troubleshoot: the underlying storage is not provisioned properly, associated resources are not created, or the underlying configuration is incorrect. With so many resource types, especially in a modern hybrid multi-cluster environment, extracting effective information from them and solving problems is a challenge.

In version 1.4, we added a resource topology query function to improve KubeVela's application-centric delivery experience. When developers initiate application delivery, they only need to care about simple and consistent APIs. When they need to troubleshoot problems or follow the delivery process, they can use the resource topology feature to quickly obtain the orchestration relationships of resources in different clusters, from the application down to the running status of Pod instances, with resource relationships discovered automatically, including those of complex and black-box Helm Charts.

[Figure: resource topology graph of a Redis cluster delivered via Helm Chart]

Take the application shown in the preceding figure as an example: a Redis cluster is delivered through a Helm Chart package. The first layer of the graph is the application name, the second layer is the cluster, and the third layer is the resources directly rendered by the application. The subsequent layers are the lower-level resources tracked from each of those resources.

Users can use the graph to observe the resources derived during application delivery and their status. Abnormal points are displayed in yellow or red, along with the specific reasons. By comparison, the application shown in the following figure is a basic webservice delivered to two clusters (a sketch of such an application follows the figure). Developers can see that the application creates Deployment and Service resources in each of the two clusters, and that the Deployment in the ask-hongkong cluster is displayed in yellow because its Pod instances have not fully started.

[Figure: multi-cluster resource graph of a webservice delivered to two clusters]
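
A two-cluster delivery like this is typically expressed with KubeVela's topology policy. A minimal sketch (the cluster names follow the figure; the application and component names are placeholders):

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: basic-webservice
spec:
  components:
    - name: web
      type: webservice
      properties:
        image: oamdev/hello-world:v2
  policies:
    # dispatch the component to both clusters
    - name: deploy-to-two-clusters
      type: topology
      properties:
        clusters: ["local", "ask-hongkong"]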

This feature also lets you search, filter, and query across clusters and components, helping developers quickly identify problems and understand the delivery status of the underlying application with a very low threshold.

Please read the official blog Visualize the Topological Relationship of Multi-cluster Resources to learn more about the operation mechanism behind them.

Other Key Changes

In addition to the core functions and the addon ecosystem, version 1.4 also enhances core capabilities such as workflow:

  • You can configure field ignore rules to maintain the application status. This enables KubeVela to work together with other controllers, such as HPA and Istio.

  • Application resource recycling supports settings based on resource type. Currently, settings based on component name, component type, O&M feature (trait) type, and resource type are supported.

  • Workflows support sub-steps, and sub-steps can execute concurrently, which accelerates resource delivery in multi-cluster high-availability scenarios.

  • You can pause a workflow step for a certain period, after which the workflow automatically continues.

  • Resource deployment and recycling can follow component dependency rules, supporting sequential deployment and recycling of resources.

  • Workflow steps support conditional judgment. Currently the if: always rule is supported, which means the step executes under any circumstances, enabling deployment-failure notifications (see the sketch after this list).

  • You can set the deployment scope for O&M features to separate them from the deployment status of components; O&M features can be deployed independently in the control cluster.
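
A minimal sketch of the if: always rule in a workflow (the apply-component and notification step types come from KubeVela's built-in steps; the names and the webhook URL are placeholders):

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: notify-on-finish
spec:
  components:
    - name: web
      type: webservice
      properties:
        image: oamdev/hello-world:v2
  workflow:
    steps:
      - name: deploy
        type: apply-component
        properties:
          component: web
      - name: notify
        type: notification
        if: always   # runs whether the deploy step succeeded or failed
        properties:
          slack:
            url:
              value: <your-slack-webhook-url>
            message:
              text: deployment of notify-on-finish finished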

Thanks to the continued contributions and efforts of more than 30 organizations and individuals in China and internationally (such as Alibaba Cloud, China Merchants Bank, and Napptive), more than 200 features and fixes were completed in a short period of two months, making this an excellent iteration.

Please see the release details for more information.

Addon Ecosystem

Our addon ecosystem is also expanding rapidly thanks to the improved addon system in 1.3:

  • The fluxcd addon is updated to support OCI registries and lets you select different values files in the chart during deployment.

  • The cert-manager addon is added to automatically manage Kubernetes certificates.

  • The flink-kubernetes-operator addon is added to deliver Flink workloads.

  • The kruise-rollout addon is added to support various release policies (such as canary release).

  • The pyroscope addon is added to support continuous performance profiling.

  • The traefik addon is added to support configuring an API gateway.

  • The vegeta addon is added to support automated load testing of workloads.

  • The argocd addon is added to support ArgoCD-based Helm delivery and GitOps.

  • The Dapr addon is added to support the O&M capabilities of Dapr publish and subscribe.

  • The istio addon is added to support Istio-based gateway capabilities and traffic canaries.

  • The mysql-operator addon is added to support the deployment of highly available distributed MySQL databases.

Developers are welcome to participate in the community and create addons to extend KubeVela's capabilities.
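
Enabling any of these follows the same pattern as the terraform addons shown earlier on this page, for example:

vela addon list
vela addon enable fluxcd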


How Can You Participate in the Community?


KubeVela is a worldwide open-source project hosted in the CNCF, with more than 300 contributors in China and internationally and more than 40 community members and maintainers. The community operates bilingually and internationally, with more than 4,000 members collaborating on code, documentation, and community communication.

If you are interested in open source, we welcome you to join the KubeVela community. You can learn how to participate through the KubeVela community's developer documentation, and the community's engineers will guide you along the way.

Recent Planning

KubeVela will continue to evolve in two-month iteration cycles. We will focus on three dimensions in the next release:

  • Observability: provide end-to-end, rich application insights around logs, metrics, and tracing to lay a solid foundation for the stability and intelligence of application delivery.

  • Workflow delivery capabilities: provide richer frameworks and integration capabilities, including custom step timeouts, condition judgment based on context information, and branching workflows, and connect to CI/CD to provide users with richer use cases and scenarios.

  • Application (including addon) management capabilities: disable and restart applications, and import, export, and upload applications to the application market.

If you want to learn more about our plans, or become a contributor or partner, please reach out through the community channels. We look forward to hearing from you!