
How it Works

Understand the technical architecture behind Kubeasy - from CLI to cluster validation.

Last updated: February 15, 2026

Kubeasy combines several open-source tools to create an isolated, validated learning environment entirely on your machine.

This page explains the technical architecture and how each component interacts.

Architecture Overview

┌─────────────────────────────────────────────────────────────┐
│                        Your Machine                          │
│                                                               │
│  ┌──────────────┐                                            │
│  │ Kubeasy CLI  │  (Go + Cobra)                              │
│  │   Manages    │                                            │
│  │   Validates  │                                            │
│  └──────┬───────┘                                            │
│         │                                                     │
│         │ Creates & Queries                                   │
│         ▼                                                     │
│  ┌─────────────────────────────────────────────────────────┐│
│  │            Kind Cluster (Local K8s)                      ││
│  │                                                           ││
│  │  ┌──────────────┐  ┌──────────────────────────┐         ││
│  │  │   Kyverno    │  │  local-path-provisioner  │         ││
│  │  │              │  │                          │         ││
│  │  │ Prevents     │  │ Provides PV storage      │         ││
│  │  │ Bypasses     │  │                          │         ││
│  │  └──────────────┘  └──────────────────────────┘         ││
│  │                                                           ││
│  │  ┌─────────────────────────────────────────────────────┐││
│  │  │  Challenge Namespaces (isolated)                    │││
│  │  │  • pod-evicted                                      │││
│  │  │  • first-deployment                                 │││
│  │  │  • ...                                               │││
│  │  └─────────────────────────────────────────────────────┘││
│  └─────────────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────────────┘

Component Breakdown

1. Kubeasy CLI

The CLI is built with Go and uses the Cobra framework for command-line parsing.

Responsibilities:

  • Authenticates users via API keys
  • Sets up the local Kind cluster
  • Deploys infrastructure components (Kyverno, local-path-provisioner)
  • Starts/stops/resets challenges
  • Executes validations directly against the cluster
  • Submits results to the Kubeasy platform

Key Commands:

kubeasy setup             # Bootstrap the local cluster
kubeasy challenge start   # Deploy challenge manifests
kubeasy challenge submit  # Run validations and submit results
kubeasy challenge reset   # Reset challenge and progress
kubeasy challenge clean   # Remove challenge resources

2. Kind (Kubernetes in Docker)

Kind creates a local, lightweight Kubernetes cluster running entirely in Docker containers.

Why Kind?

  • Fast cluster creation
  • Completely isolated from your system
  • No cloud dependencies
  • Realistic Kubernetes environment

When you run kubeasy setup, the CLI:

  1. Checks if a Kind cluster named kubeasy exists
  2. Creates it if needed (single control-plane node)
  3. Configures kubectl to point to this cluster
  4. Installs Kyverno and local-path-provisioner
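A single control-plane cluster like the one described above can be declared with a minimal Kind config. The exact configuration Kubeasy uses may differ (and `kubeasy setup` handles this for you); treat this as an illustrative sketch:

```yaml
# kind-config.yaml - a minimal single-node cluster definition
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kubeasy
nodes:
  - role: control-plane
```

Created manually, this would be `kind create cluster --config kind-config.yaml`, which also adds a kubectl context named kind-kubeasy.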

3. Kyverno

Kyverno is a Kubernetes-native policy engine.

Role in Kubeasy:

  • Prevents bypasses - Stops users from cheating (e.g., replacing the broken app with a working one)
  • Enforces policies defined in each challenge's policies/ folder

Example bypass prevention:

# Prevents changing the container image
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: protect-challenge-image
spec:
  validationFailureAction: Enforce
  rules:
    - name: preserve-image
      match:
        resources:
          kinds: ["Deployment"]
          namespaces: ["challenge-*"]
      validate:
        message: "Cannot change the application image"
        pattern:
          spec:
            template:
              spec:
                containers:
                  - name: app
                    image: "kubeasy/broken-app:v1"

Note: Kyverno is NOT used for challenge validation. It only prevents bypasses. Validation is handled by the CLI.

4. Challenge Deployment via OCI

Challenges are packaged as OCI artifacts and stored in the GitHub Container Registry (ghcr.io/kubeasy-dev/challenges). When you start a challenge, the CLI pulls the artifact and applies the manifests directly to the cluster.

This approach:

  • Requires no extra infrastructure in the cluster (no ArgoCD or GitOps controller)
  • Ensures fast and reliable deployments
  • Uses standard OCI registries for distribution

5. CLI-Based Validation

Kubeasy uses CLI-based validation. The CLI reads validation definitions from challenge.yaml and executes checks directly against the cluster.

Validation types:

Type            What it checks
condition       Resource conditions (Pod Ready, Deployment Available)
status          Arbitrary status fields with operators (restart count < 3)
log             Strings in container logs
event           Forbidden Kubernetes events (OOMKilled, Evicted)
connectivity    HTTP connectivity between pods
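Once the CLI has fetched the relevant field from the cluster, a status check reduces to a plain comparison. This Go sketch is purely illustrative (the `evalStatusCheck` helper and its signature are hypothetical, not the actual Kubeasy source):

```go
package main

import "fmt"

// evalStatusCheck is a hypothetical sketch of how a "status" validation
// might compare an observed field against a threshold using an operator
// taken from the challenge definition.
func evalStatusCheck(observed int, operator string, expected int) bool {
	switch operator {
	case "<":
		return observed < expected
	case "<=":
		return observed <= expected
	case ">":
		return observed > expected
	case ">=":
		return observed >= expected
	case "==":
		return observed == expected
	}
	return false // unknown operator: fail closed
}

func main() {
	// e.g. "restart count < 3" with an observed restartCount of 1
	fmt.Println(evalStatusCheck(1, "<", 3)) // true
}
```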

Example validation definition:

# In challenge.yaml
objectives:
  - key: pod-ready
    title: "Pod Ready"
    description: "The pod must be running"
    order: 1
    type: condition
    spec:
      target:
        kind: Pod
        labelSelector:
          app: data-processor
      checks:
        - type: Ready
          status: "True"
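A condition objective like the one above becomes pure logic once the Pod's status.conditions have been fetched. The Go sketch below illustrates the comparison; the `podCondition` type and `conditionPasses` helper are hypothetical names, not the actual CLI source:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// podCondition mirrors the shape of one entry in a Pod's status.conditions.
type podCondition struct {
	Type   string `json:"type"`
	Status string `json:"status"`
}

// conditionPasses returns true when a condition of the given type exists
// and carries the expected status.
func conditionPasses(conds []podCondition, condType, want string) bool {
	for _, c := range conds {
		if c.Type == condType {
			return c.Status == want
		}
	}
	return false // condition not reported yet: fail
}

func main() {
	// A trimmed-down status.conditions list as the API server would report it.
	raw := `[{"type":"PodScheduled","status":"True"},{"type":"Ready","status":"True"}]`
	var conds []podCondition
	if err := json.Unmarshal([]byte(raw), &conds); err != nil {
		panic(err)
	}
	fmt.Println(conditionPasses(conds, "Ready", "True")) // true
}
```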

When you run kubeasy challenge submit, the CLI:

  1. Loads objectives from the challenge definition
  2. Executes each validation against the cluster
  3. Sends structured results to the backend
  4. Backend verifies all expected objectives are present

Challenge Lifecycle

Here's what happens during a typical challenge:

Starting a Challenge

kubeasy challenge start pod-evicted
  1. CLI fetches challenge metadata from the API
  2. Creates a dedicated namespace (e.g., pod-evicted)
  3. Pulls the challenge OCI artifact from the registry
  4. Applies manifests and Kyverno policies to the namespace
  5. Waits for resources to be ready
  6. Switches your kubectl context to this namespace

Working on the Challenge

You now have full access to investigate and fix the problem:

kubectl get pods                    # Inspect resources
kubectl logs my-broken-pod          # Check logs
kubectl describe pod my-broken-pod  # See events
kubectl edit deployment my-app      # Make changes

The cluster behaves like a real Kubernetes environment. Use your debugging skills to find and fix the issue.

Submitting a Solution

kubeasy challenge submit pod-evicted
  1. CLI loads objectives from challenge.yaml
  2. Executes each validation against the cluster
  3. Builds structured payload with per-objective results
  4. Sends results to the Kubeasy backend
  5. Backend verifies ALL objectives are present and passed
  6. XP awarded if successful

Resetting

kubeasy challenge reset pod-evicted
  • Deletes the namespace and all resources
  • Resets your challenge progress on the platform
  • You can start the challenge again from scratch

Data Flow Diagram

┌─────────────┐
│  User runs  │
│   CLI cmd   │
└──────┬──────┘
       ▼
┌─────────────────┐
│  Kubeasy CLI    │
│  (Go binary)    │
└──────┬──────────┘

       ├─────────────────────────────────┐
       │                                 │
       ▼                                 ▼
┌──────────────┐              ┌──────────────────┐
│  OCI Pull    │              │  CLI Validation  │
│  Deploys     │              │  Engine          │
│  Manifests   │              │  (executes       │
└──────┬───────┘              │   checks)        │
       │                      └────────┬─────────┘
       ▼                               │
┌─────────────────────────────────┐    │
│     Challenge Namespace          │◄───┘
│  (Isolated Kubernetes resources) │
└─────────────────────────────────┘

Security and Isolation

Each challenge runs in its own namespace with:

  • Kyverno policies: Prevent bypasses and cheating
  • Namespace isolation: Resources are contained within the challenge namespace

The Kind cluster is entirely local:

  • No data leaves your machine
  • No cloud credentials required
  • Full control over the environment

Backend Security

The backend validates submissions by:

  1. Checking ALL registered objectives are present (can't skip objectives)
  2. Checking no unknown objectives are submitted (can't invent objectives)
  3. Verifying each objective has passed: true for challenge completion
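These three checks amount to an exact set-equality test plus a pass flag per objective. A minimal sketch, assuming hypothetical names (`verifySubmission` and a simple results map; the real backend surely differs):

```go
package main

import "fmt"

// verifySubmission sketches the backend checks described above: every
// registered objective must be present, no unknown objectives may be
// submitted, and every objective must have passed.
func verifySubmission(registered []string, results map[string]bool) bool {
	if len(results) != len(registered) {
		return false // missing or extra objectives
	}
	for _, key := range registered {
		passed, present := results[key]
		if !present || !passed {
			return false // skipped, renamed, or failed objective
		}
	}
	return true
}

func main() {
	registered := []string{"pod-ready", "no-restarts"}
	fmt.Println(verifySubmission(registered, map[string]bool{
		"pod-ready":   true,
		"no-restarts": true,
	})) // true
}
```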

What Gets Installed During Setup?

When you run kubeasy setup, the CLI installs:

Component                 Purpose                     Namespace
Kyverno                   Bypass prevention           kyverno
local-path-provisioner    PersistentVolume storage    local-path-storage

You can inspect these at any time:

kubectl get pods -n kyverno
kubectl get pods -n local-path-storage

Philosophy: Why CLI-Based Validation?

Previous versions of Kubeasy used:

  1. Rego-based policies (complex, hard to write)
  2. Kubernetes operator with CRDs (complex infrastructure)

The current CLI-based approach offers:

  • Simplicity - One challenge.yaml file contains everything
  • No extra infrastructure - CLI handles all validation logic
  • Easy testing - Run validations locally without CRDs
  • Better error messages - CLI provides detailed failure feedback
  • Faster iteration - No operator reconciliation delays
