How it Works
Understand the technical architecture behind Kubeasy - from CLI to cluster validation.
Kubeasy combines several open-source tools to create an isolated, validated learning environment entirely on your machine.
This page explains the technical architecture and how each component interacts.
Architecture Overview
┌─────────────────────────────────────────────────────────────┐
│ Your Machine │
│ │
│ ┌──────────────┐ │
│ │ Kubeasy CLI │ (Go + Cobra) │
│ │ Manages │ │
│ │ Validates │ │
│ └──────┬───────┘ │
│ │ │
│ │ Creates & Queries │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────┐│
│ │ Kind Cluster (Local K8s) ││
│ │ ││
│ │ ┌──────────────┐ ┌──────────────────────────┐ ││
│ │ │ Kyverno │ │ local-path-provisioner │ ││
│ │ │ │ │ │ ││
│ │ │ Prevents │ │ Provides PV storage │ ││
│ │ │ Bypasses │ │ │ ││
│ │ └──────────────┘ └──────────────────────────┘ ││
│ │ ││
│ │ ┌─────────────────────────────────────────────────────┐││
│ │ │ Challenge Namespaces (isolated) │││
│ │ │ • pod-evicted │││
│ │ │ • first-deployment │││
│ │ │ • ... │││
│ │ └─────────────────────────────────────────────────────┘││
│ └─────────────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────────────┘
Component Breakdown
1. Kubeasy CLI
The CLI is built with Go and uses the Cobra framework for command-line parsing.
Responsibilities:
- Authenticates users via API keys
- Sets up the local Kind cluster
- Deploys infrastructure components (Kyverno, local-path-provisioner)
- Starts/stops/resets challenges
- Executes validations directly against the cluster
- Submits results to the Kubeasy platform
Key Commands:
```
kubeasy setup              # Bootstrap the local cluster
kubeasy challenge start    # Deploy challenge manifests
kubeasy challenge submit   # Run validations and submit results
kubeasy challenge reset    # Reset challenge and progress
kubeasy challenge clean    # Remove challenge resources
```
2. Kind (Kubernetes in Docker)
Kind creates a local, lightweight Kubernetes cluster running entirely in Docker containers.
Why Kind?
- Fast cluster creation
- Completely isolated from your system
- No cloud dependencies
- Realistic Kubernetes environment
When you run kubeasy setup, the CLI:
- Checks if a Kind cluster named kubeasy exists
- Creates it if needed (single control-plane node)
- Configures kubectl to point to this cluster
- Installs Kyverno and local-path-provisioner
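The cluster-existence check can be sketched as a small helper that inspects the output of `kind get clusters` (an illustrative sketch, not the CLI's actual source; the function name and output handling here are assumptions):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// clusterExists reports whether a cluster named name appears in the
// newline-separated output of `kind get clusters`.
func clusterExists(kindOutput, name string) bool {
	sc := bufio.NewScanner(strings.NewReader(kindOutput))
	for sc.Scan() {
		if strings.TrimSpace(sc.Text()) == name {
			return true
		}
	}
	return false
}

func main() {
	out := "kind\nkubeasy\n" // sample output from `kind get clusters`
	fmt.Println(clusterExists(out, "kubeasy")) // true
}
```

If the cluster is missing, the CLI proceeds to create it; otherwise setup moves straight to installing the infrastructure components.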
3. Kyverno
Kyverno is a Kubernetes-native policy engine.
Role in Kubeasy:
- Prevents bypasses - Stops users from cheating (e.g., replacing broken app with working one)
- Enforces policies defined in each challenge's policies/ folder
Example bypass prevention:
```yaml
# Prevents changing the container image
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: protect-challenge-image
spec:
  validationFailureAction: Enforce
  rules:
    - name: preserve-image
      match:
        resources:
          kinds: ["Deployment"]
          namespaces: ["challenge-*"]
      validate:
        message: "Cannot change the application image"
        pattern:
          spec:
            template:
              spec:
                containers:
                  - name: app
                    image: "kubeasy/broken-app:v1"
```
Note: Kyverno is NOT used for challenge validation. It only prevents bypasses. Validation is handled by the CLI.
4. Challenge Deployment via OCI
Challenges are packaged as OCI artifacts and stored in the GitHub Container Registry (ghcr.io/kubeasy-dev/challenges). When you start a challenge, the CLI pulls the artifact and applies the manifests directly to the cluster.
This approach:
- Requires no extra infrastructure in the cluster (no ArgoCD or GitOps controller)
- Ensures fast and reliable deployments
- Uses standard OCI registries for distribution
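Building the artifact reference for a challenge amounts to joining the registry path from above with a challenge name. The per-challenge repository layout and tag scheme in this sketch are assumptions for illustration, not the CLI's confirmed convention:

```go
package main

import "fmt"

// Registry path for challenge artifacts, as documented above.
const registry = "ghcr.io/kubeasy-dev/challenges"

// artifactRef builds an OCI reference for a challenge artifact.
// The "<registry>/<challenge>:<tag>" layout is an assumption.
func artifactRef(challenge, tag string) string {
	return fmt.Sprintf("%s/%s:%s", registry, challenge, tag)
}

func main() {
	fmt.Println(artifactRef("pod-evicted", "latest"))
}
```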
5. CLI-Based Validation
Kubeasy uses CLI-based validation. The CLI reads validation definitions from challenge.yaml and executes checks directly against the cluster.
Validation types:
| Type | What it checks |
|---|---|
| condition | Resource conditions (Pod Ready, Deployment Available) |
| status | Arbitrary status fields with operators (restart count < 3) |
| log | Strings in container logs |
| event | Forbidden Kubernetes events (OOMKilled, Evicted) |
| connectivity | HTTP connectivity between pods |
Example validation definition:
```yaml
# In challenge.yaml
objectives:
  - key: pod-ready
    title: "Pod Ready"
    description: "The pod must be running"
    order: 1
    type: condition
    spec:
      target:
        kind: Pod
        labelSelector:
          app: data-processor
      checks:
        - type: Ready
          status: "True"
```
When you run kubeasy challenge submit, the CLI:
- Loads objectives from the challenge definition
- Executes each validation against the cluster
- Sends structured results to the backend
- Backend verifies all expected objectives are present
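A status-type check from the table above (e.g. restart count < 3) boils down to comparing an observed field against a threshold with an operator. The operator set and function shape in this sketch are assumptions for illustration; the real CLI reads its checks from challenge.yaml:

```go
package main

import "fmt"

// evalStatusCheck evaluates a status-style comparison such as
// "restartCount < 3". The supported operators are an assumption.
func evalStatusCheck(observed, threshold int, op string) (bool, error) {
	switch op {
	case "<":
		return observed < threshold, nil
	case "<=":
		return observed <= threshold, nil
	case "==":
		return observed == threshold, nil
	case ">=":
		return observed >= threshold, nil
	case ">":
		return observed > threshold, nil
	default:
		return false, fmt.Errorf("unknown operator %q", op)
	}
}

func main() {
	// A pod with 1 restart passes a "restart count < 3" objective.
	ok, _ := evalStatusCheck(1, 3, "<")
	fmt.Println(ok) // true
}
```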
Challenge Lifecycle
Here's what happens during a typical challenge:
Starting a Challenge
```
kubeasy challenge start pod-evicted
```
- CLI fetches challenge metadata from the API
- Creates a dedicated namespace (e.g., pod-evicted)
- Pulls the challenge OCI artifact from the registry
- Applies manifests and Kyverno policies to the namespace
- Waits for resources to be ready
- Switches your kubectl context to this namespace
Working on the Challenge
You now have full access to investigate and fix the problem:
```
kubectl get pods                     # Inspect resources
kubectl logs my-broken-pod           # Check logs
kubectl describe pod my-broken-pod   # See events
kubectl edit deployment my-app       # Make changes
```
The cluster behaves like a real Kubernetes environment. Use your debugging skills to find and fix the issue.
Submitting a Solution
```
kubeasy challenge submit pod-evicted
```
- CLI loads objectives from challenge.yaml
- Executes each validation against the cluster
- Builds structured payload with per-objective results
- Sends results to the Kubeasy backend
- Backend verifies ALL objectives are present and passed
- XP awarded if successful
Resetting
```
kubeasy challenge reset pod-evicted
```
- Deletes the namespace and all resources
- Resets your challenge progress on the platform
- You can start the challenge again from scratch
Data Flow Diagram
┌─────────────┐
│ User runs │
│ CLI cmd │
└──────┬──────┘
│
▼
┌─────────────────┐
│ Kubeasy CLI │
│ (Go binary) │
└──────┬──────────┘
│
├─────────────────────────────────┐
│ │
▼ ▼
┌──────────────┐ ┌──────────────────┐
│ OCI Pull │ │ CLI Validation │
│ Deploys │ │ Engine │
│ Manifests │ │ (executes │
└──────┬───────┘ │ checks) │
│ └────────┬─────────┘
▼ │
┌─────────────────────────────────┐ │
│ Challenge Namespace │◄───┘
│ (Isolated Kubernetes resources) │
└─────────────────────────────────┘
Security and Isolation
Each challenge runs in its own namespace with:
- Kyverno policies: Prevent bypasses and cheating
- Namespace isolation: Resources are contained within the challenge namespace
The Kind cluster is entirely local:
- No data leaves your machine
- No cloud credentials required
- Full control over the environment
Backend Security
The backend validates submissions by:
- Checking ALL registered objectives are present (can't skip objectives)
- Checking no unknown objectives are submitted (can't invent objectives)
- Verifying each objective has passed: true for challenge completion
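These three checks amount to comparing the set of registered objectives against the submitted results. A minimal sketch (illustrative only, not the actual backend code; the function name and error messages are assumptions):

```go
package main

import "fmt"

// verifySubmission mirrors the backend checks described above: every
// registered objective must be present, no unknown objectives may be
// submitted, and each must have passed.
func verifySubmission(registered []string, results map[string]bool) error {
	known := make(map[string]bool, len(registered))
	for _, key := range registered {
		known[key] = true
		passed, ok := results[key]
		if !ok {
			return fmt.Errorf("missing objective %q", key) // can't skip objectives
		}
		if !passed {
			return fmt.Errorf("objective %q did not pass", key)
		}
	}
	for key := range results {
		if !known[key] {
			return fmt.Errorf("unknown objective %q", key) // can't invent objectives
		}
	}
	return nil
}

func main() {
	err := verifySubmission([]string{"pod-ready"}, map[string]bool{"pod-ready": true})
	fmt.Println(err) // <nil>
}
```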
What Gets Installed During Setup?
When you run kubeasy setup, the CLI installs:
| Component | Purpose | Namespace |
|---|---|---|
| Kyverno | Bypass prevention | kyverno |
| local-path-provisioner | PersistentVolume storage | local-path-storage |
You can inspect these at any time:
```
kubectl get pods -n kyverno
kubectl get pods -n local-path-storage
```
Philosophy: Why CLI-Based Validation?
Previous versions of Kubeasy used:
- Rego-based policies (complex, hard to write)
- Kubernetes operator with CRDs (complex infrastructure)
The current CLI-based approach offers:
- Simplicity - One challenge.yaml file contains everything
- No extra infrastructure - CLI handles all validation logic
- Easy testing - Run validations locally without CRDs
- Better error messages - CLI provides detailed failure feedback
- Faster iteration - No operator reconciliation delays