With flexibility in defining abstractions, it's important to be able to debug, test, and dry-run the CUE-based definitions. This tutorial will show how to do this step by step.
Please make sure the following CLIs are present in your environment:
We recommend defining the Definition Object in two separate parts: the CRD part and the CUE template. This enables us to debug, test, and dry-run the CUE template.
Let's name the CRD part `def.yaml` and the CUE template part `def.cue`. We can then use CUE commands such as `cue fmt` and `cue vet` to format and validate the CUE file.
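For illustration, a minimal `def.cue` template might look like the sketch below. The `image` parameter and the Deployment shape are placeholders, not the tutorial's actual example:

```cue
// Illustrative CUE template body: a user-facing parameter and an output resource.
parameter: {
	// image is a hypothetical user-facing parameter
	image: string
}

output: {
	apiVersion: "apps/v1"
	kind:       "Deployment"
	spec: {
		selector: matchLabels: app: context.name
		template: {
			metadata: labels: app: context.name
			spec: containers: [{
				name:  context.name
				image: parameter.image
			}]
		}
	}
}
```

Note that the template references `context`, which is runtime information injected by KubeVela; this matters for validation, as shown next.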
After everything is done, there's a script `hack/vela-templates/mergedef.sh` that merges the CRD part and `def.cue` into a complete Definition Object.
## `cue vet` to Validate
`reference "context" not found` is a common error in this step, as `context` is runtime information that only exists in KubeVela controllers. In order to validate the CUE template end-to-end, we can add a mock `context` in the template.
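For example, appending a mock `context` like the following to the template lets `cue vet` resolve the reference (the value is only a placeholder):

```cue
// mock context data -- for local validation only, remove afterwards
context: {
	name: "test-app"
}
```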
Note that you need to remove all mock data when you finish the validation.
Then execute `cue vet def.cue`:
Now the `reference "context" not found` error is gone, but `cue vet` only validates data types, which is not enough to ensure that the logic in the template is correct. Hence we need to use `cue vet -c` for complete validation:
It now complains that some runtime data is incomplete (because `parameter` has no value). Let's fill in more mock data in `def.cue`:
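For instance, if the template declares `image` and `cmd` parameters (hypothetical names; use the fields your template actually defines), the mock could be:

```cue
// mock parameter values -- for local validation only, remove afterwards
parameter: {
	image: "nginx:latest"
	cmd: ["sleep", "1000"]
}
```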
It won't complain now, which means the validation passes:
## `cue export` to Check the Rendered Resources
`cue export` can export the rendered result in YAML format:
## Test CUE Template with `kube` Package
KubeVela automatically generates internal CUE packages for all built-in Kubernetes API resources, including CRDs. You can import them in the CUE template to simplify your templates and help you do validation.
There are two ways to import internal `kube` packages:
- Import them in the fixed style `kube/<apiVersion>` and use a definition by its `Kind`. This style is easy to remember and use because it aligns with K8s Object usage; you only need to add a `kube/` prefix to the `apiVersion`:

  ```cue
  import (
  	apps "kube/apps/v1"
  	corev1 "kube/v1"
  )

  // output is validated by Deployment
  output: apps.#Deployment
  outputs: service: corev1.#Service
  ```

  However, this style is only supported in KubeVela, so you can only debug and test it with `vela system dry-run`.
- Import them in the third-party-packages style. You can run `vela system cue-packages` to list all built-in `kube` packages and see the `third-party packages` import paths currently supported:

  ```shell
  $ vela system cue-packages
  DEFINITION-NAME      IMPORT-PATH          USAGE
  #Deployment          k8s.io/apps/v1       Kube Object for apps/v1.Deployment
  #Service             k8s.io/core/v1       Kube Object for v1.Service
  #Secret              k8s.io/core/v1       Kube Object for v1.Secret
  #Node                k8s.io/core/v1       Kube Object for v1.Node
  #PersistentVolume    k8s.io/core/v1       Kube Object for v1.PersistentVolume
  #Endpoints           k8s.io/core/v1       Kube Object for v1.Endpoints
  #Pod                 k8s.io/core/v1       Kube Object for v1.Pod
  ```

  In fact, they are all built-in packages, but you can import them in the `third-party packages` style. In this way, you can debug with the `cue` CLI locally.
## A Workflow to Debug with the `cue` CLI
Here's a workflow you can follow to debug and test the CUE template with the `cue` CLI and then use exactly the same CUE template in KubeVela.
- Create a test directory and initialize the CUE module.
- Download the `third-party packages` by using `cue get go`.
In KubeVela, we don't need to download these packages, as they're automatically generated from the K8s API. But for local tests, we need to use `cue get go` to fetch Go packages and convert them to CUE-format files.
So, to use the K8s `Service` and `Deployment` objects locally, we need to download the `core` and `apps` Kubernetes modules and convert them to CUE definitions like below:
After that, the module directory will show the following contents:
The package import path in CUE template should be:
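Since `cue get go` keeps the Go import path, a template in this local module would import the generated package like this (assuming the `apps` module was fetched with `cue get go k8s.io/api/apps/v1`):

```cue
import (
	// local import path generated by `cue get go`
	apps "k8s.io/api/apps/v1"
)

// validate the output against the generated Deployment schema
output: apps.#Deployment
```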
- Refactor the directory hierarchy.
Our goal is to test the template locally and use the same template in KubeVela. So we need to refactor our local CUE module directories a bit to align with the import paths provided by KubeVela: copy the generated `apps` and `core` packages from `cue.mod/gen/k8s.io/api` up to `cue.mod/gen/k8s.io`. (Note that we should keep the source directory `gen/k8s.io/api` to avoid package dependency issues.)
The modified module directory should look like this:
Then you can import the packages using the following paths, which align with KubeVela:
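After the refactor, the import paths match the ones listed by `vela system cue-packages`, so the same template works both locally and in KubeVela:

```cue
import (
	apps   "k8s.io/apps/v1"
	corev1 "k8s.io/core/v1"
)

// validated against the same schemas KubeVela provides
output: apps.#Deployment
outputs: service: corev1.#Service
```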
- Test and Run.
Finally, we can test the CUE template that uses the `kube` packages and run `cue export` to see the rendered result.
When the CUE template is ready, we can use `vela system dry-run` to dry-run it and check the rendered resources against a real Kubernetes cluster. This command executes exactly the same rendering logic as KubeVela's Application Controller and outputs the result for you.
First, we need to use `mergedef.sh` to merge the definition and CUE files.
Then, let's create a test Application:
Dry-run the application by using `vela system dry-run`.
`--definitions` is a useful flag that permits users to provide capability definitions used in the application from local files. The `dry-run` command will prioritize the provided capabilities over the living ones in the cluster. If a capability is found neither in the local files nor in the cluster, it will raise an error.
`vela system live-diff` allows users to preview what would change if they upgraded an application. It basically generates a diff between a specific revision of the running application and the result of `vela system dry-run`.
The result shows the changes (added/modified/removed/no_change) of the application as well as its sub-resources, such as components and traits.
`live-diff` will not make any changes to the living cluster, so it's very helpful if you want to update an application but worry about unexpected results.
Let's prepare an application and deploy it.
ComponentDefinitions and TraitDefinitions used in this sample are stored in
Then, assume we want to update the application with the configuration below.
To preview the changes brought by the update without actually applying the updated configuration to the cluster, we can use `vela system live-diff`:
`--revision` is a flag that specifies the name of a living ApplicationRevision with which you want to compare the updated application.
`--context` is a flag that specifies the number of lines shown around a change. The unchanged lines outside the context of a change will be omitted.
It's useful when the diff result contains a lot of unchanged content and you just want to focus on the changed parts.