# Debug, Test and Dry-run

With the flexibility of defining abstractions, it is important to be able to debug, test and dry-run the CUE-based definitions. This tutorial will show how to do so step by step.
## Prerequisites

Please make sure the following CLIs are present in your environment: the `cue` CLI and the `vela` CLI.
## Define Definition and Template

We recommend defining the Definition Object in two separate parts: the CRD part and the CUE template part. This enables us to debug, test and dry-run the CUE template.

Let's name the CRD part `def.yaml` and the CUE template part `def.cue`; then we can use CUE commands such as `cue fmt` / `cue vet` to format and validate the CUE file.

After everything is done, there's a script `hack/vela-templates/mergedef.sh` to merge `def.yaml` and `def.cue` into a complete Definition Object.
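For orientation, the CRD part of a `ComponentDefinition` might look roughly like this (the name and the workload reference are placeholders, not the tutorial's actual example):

```yaml
# def.yaml -- the CRD part (placeholder content)
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: microservice
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment
```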
## Debug CUE Template

### Use `cue vet` to Validate

`The reference "context" not found` is a common error in this step, as `context` is runtime information that only exists in KubeVela controllers. In order to validate the CUE template end-to-end, we can add a mock `context` in `def.cue`.

Note that you need to remove all mock data when you have finished the validation.
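For example, a mock `context` can be appended to `def.cue` like this (the `name` value is arbitrary):

```cue
// Mock data for local validation only -- remove when finished.
context: {
	name: "test"
}
```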
Then execute the command again. The `reference "context" not found` error is gone, but `cue vet` only validates the data types, which is not enough to ensure that the logic in the template is correct. Hence, we need to use `cue vet -c` for complete validation.
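Assuming the template lives in `def.cue`, the two validation commands are sketched below; the exact output depends on your template:

```shell
cue vet def.cue       # type checking only
cue vet -c def.cue    # "concrete" mode: every field must also have a value
```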
It now complains that some runtime data is incomplete (because `context` and `parameter` have no values), so let's fill in more mock data in the `def.cue` file.
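For illustration, the mocked runtime data might look like this; the fields under `parameter` are hypothetical and must match whatever your template declares:

```cue
// Mock data for local validation only -- remove when finished.
context: {
	name: "test-app"
}
parameter: {
	image: "nginx:1.14.2"
}
```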
It won't complain now, which means the validation has passed.
### Use `cue export` to Check the Rendered Resources

The `cue export` command can export the rendered result in YAML format.
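Assuming the rendered resource sits in a top-level `output` field of `def.cue`, a typical invocation is:

```shell
cue export -e output def.cue --out yaml
```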
### Test the CUE Template with the `Kube` Package

KubeVela automatically generates internal CUE packages for all built-in Kubernetes API resources, including CRDs. You can import them in the CUE template to simplify your templates and help you do the validation.

There are two ways to import internal `kube` packages:
- Import them with the fixed style `kube/<apiVersion>` and use resources by their `Kind`. This way is very easy to remember and use because it aligns with the K8s Object usage; you only need to add the prefix `kube/` before the `apiVersion`. However, this style is only supported in KubeVela, so you can only debug and test it with `vela system dry-run`.
- Import them in the third-party-package style. You can run `vela system cue-packages` to list all built-in `kube` packages and learn which `third-party packages` are currently supported. In fact, they are all built-in packages, but you can import them with an `import-path` just like third-party packages. In this way, you can debug with the `cue` CLI.
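As a sketch, the two styles look like this in a template (the `k8s.io/apps/v1` path is an assumption to be verified against the `vela system cue-packages` listing of your cluster):

```cue
// Style 1: fixed "kube/<apiVersion>" path -- resolvable only inside KubeVela.
import (
	apps "kube/apps/v1"
)

// Style 2: third-party-package style -- also resolvable by the cue CLI:
//   import apps "k8s.io/apps/v1"

output: apps.#Deployment
```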
### A Workflow to Debug with `kube` Packages

Here's a workflow with which you can debug and test the CUE template with the `cue` CLI and then use exactly the same CUE template in KubeVela.
- Create a test directory and initialize the CUE module.
- Download the `third-party packages` by using the `cue` CLI.
In KubeVela, we don't need to download these packages as they're automatically generated from the K8s API. But for a local test, we need to use `cue get go` to fetch the Go packages and convert them to CUE-format files.

So, to use the K8s `Deployment` and `Service`, we need to download and convert the `core` and `apps` Kubernetes modules to CUE definitions as below.
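A sketch of the commands (the module name `example.com/test` is an arbitrary placeholder):

```shell
mkdir test && cd test
cue mod init example.com/test   # creates the cue.mod/ directory
go mod init example.com/test    # cue get go resolves packages via Go modules
cue get go k8s.io/api/core/v1
cue get go k8s.io/api/apps/v1
```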
After that, the generated files will appear under `cue.mod/gen/k8s.io/api`, and the package import path in the CUE template should follow that layout, e.g. `k8s.io/api/apps/v1`.
- Refactor the directory hierarchy.

Our goal is to test the template locally and use the same template in KubeVela, so we need to refactor our local CUE module directories a bit to align with the import paths provided by KubeVela: copy the `apps` and `core` directories from `cue.mod/gen/k8s.io/api` to `cue.mod/gen/k8s.io`. (Note that we should keep the source directories `apps` and `core` in `gen/k8s.io/api` to avoid package dependency issues.)
The modified module directory will then contain `apps` and `core` directly under `k8s.io`, so you can import the packages using a path that aligns with KubeVela, e.g. `k8s.io/apps/v1`.
- Test and run.

Finally, we can test the CUE template which uses the `kube` package, and use `cue export` to see the rendered result.
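A hypothetical `def.cue` for the local test could look like the following; running `cue export -e output def.cue --out yaml` would then print the rendered Deployment:

```cue
import (
	apps "k8s.io/apps/v1"
)

// Mock runtime data for the local test only.
context: name:    "test"
parameter: image: "nginx:1.14.2"

output: apps.#Deployment & {
	metadata: name: context.name
	spec: {
		selector: matchLabels: app: context.name
		template: {
			metadata: labels: app: context.name
			spec: containers: [{
				name:  context.name
				image: parameter.image
			}]
		}
	}
}
```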
## Dry-Run the Application

When the CUE template is ready, we can use `vela system dry-run` to dry-run the application and check the rendered resources against a real Kubernetes cluster. This command executes exactly the same render logic as KubeVela's `Application` Controller and outputs the result for you.
First, we need to use `mergedef.sh` to merge the definition and CUE files.
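If you just want to understand what the merge does, it is conceptually simple: the CUE text becomes a block scalar under the definition's `spec.schematic.cue.template` field. Below is a rough Python sketch of that idea (not the actual `mergedef.sh`; it naively assumes `spec:` is the last top-level key, and the function name is hypothetical):

```python
def merge_def(def_yaml: str, def_cue: str) -> str:
    """Naively append the CUE template to the definition YAML as a
    block scalar under spec.schematic.cue.template."""
    indented = "\n".join(
        ("        " + line) if line else "" for line in def_cue.splitlines()
    )
    return (
        def_yaml.rstrip("\n")
        + "\n  schematic:\n    cue:\n      template: |\n"
        + indented
        + "\n"
    )

crd = """apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: microservice
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment"""

merged = merge_def(crd, "parameter: image: string")
print(merged)
```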
Then, let's create an Application named `test-app.yaml`.
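A minimal sketch of `test-app.yaml`; the component `type` and `properties` are placeholders that must match the definition merged above:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: boutique
spec:
  components:
    - name: frontend
      type: microservice      # placeholder: name of your merged definition
      properties:
        image: nginx:1.14.2   # placeholder: parameters your template declares
```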
Dry-run the application by using `vela system dry-run`.
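The invocation looks roughly like this, with file names following the examples above:

```shell
vela system dry-run -f test-app.yaml -d def.yaml
```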
`-d` (or `--definitions`) is a useful flag permitting users to provide capability definitions used in the application from local files. The `dry-run` command will prioritize the provided capabilities over the live ones in the cluster. If a capability is found neither in local files nor in the cluster, it will raise an error.
## Live-Diff the Application

`vela system live-diff` allows users to preview what would change if they upgraded an application. It basically generates a diff between the specified revision of a running application and the result of `vela system dry-run`. The result shows the changes (added/modified/removed/no_change) of the application as well as its sub-resources, such as components and traits.
`live-diff` will not make any changes to the live cluster, so it's very helpful if you want to update an application but worry about unknown results the update may produce.
Let's prepare an application and deploy it. The ComponentDefinitions and TraitDefinitions used in this sample are stored in `./doc/examples/live-diff/definitions`.
Then, assume we want to update the application with a new configuration. To preview the changes brought by the update without actually applying the updated configuration to the cluster, we can use `live-diff`.
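An invocation sketch (the updated application file and the revision name are placeholders):

```shell
vela system live-diff -f app-updated.yaml -r livediff-demo-v1
```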
`-r` (or `--revision`) specifies the name of a live `ApplicationRevision` with which you want to compare the updated application.

`-c` (or `--context`) specifies the number of lines shown around a change; unchanged lines outside that context will be omitted. This is useful if the diff result contains a lot of unchanged content and you just want to focus on the changes.