# Troubleshooting

Common issues and solutions when using Kubeasy.

This guide covers common issues you might encounter while using Kubeasy and how to resolve them.
## Installation issues

### CLI binary not found after installation

**Problem:** After downloading the CLI, running `kubeasy` returns "command not found".

**Solution:**

```bash
# Check if the binary is in your PATH
echo $PATH

# Move the binary to a location in your PATH
sudo mv kubeasy /usr/local/bin/

# Or add the binary's location to your PATH
export PATH=$PATH:/path/to/kubeasy
```

Make the PATH change permanent by adding it to your shell profile:

```bash
# For bash
echo 'export PATH=$PATH:/path/to/kubeasy' >> ~/.bashrc
source ~/.bashrc

# For zsh
echo 'export PATH=$PATH:/path/to/kubeasy' >> ~/.zshrc
source ~/.zshrc
```

### Permission denied when running CLI
**Problem:** `bash: ./kubeasy: Permission denied`

**Solution:**

```bash
chmod +x kubeasy
```

### macOS blocks unsigned binary
**Problem:** macOS prevents opening the binary with "kubeasy cannot be opened because the developer cannot be verified".

**Solution:**

```bash
# Remove the quarantine attribute
xattr -d com.apple.quarantine kubeasy
```

Alternatively, allow it via **System Settings -> Privacy & Security -> "Allow Anyway"**.

## Cluster setup issues
### Docker not running

**Problem:** `kubeasy setup` fails with "Cannot connect to the Docker daemon".

**Solution:**

```bash
# Check if Docker is running
docker ps

# Start Docker Desktop (macOS/Windows),
# or start the Docker service (Linux)
sudo systemctl start docker

# Verify Docker is accessible
docker run hello-world
```

### Kind cluster creation fails
**Problem:** `ERROR: failed to create cluster: ...`

Possible causes and solutions:

**1. Port already in use**

```bash
# Check what's using port 6443 (Kubernetes API)
lsof -i :6443

# Kill the process or stop existing Kind clusters
kind delete cluster --name kubeasy
kubeasy setup
```

**2. Insufficient resources**
```bash
# Check Docker resource limits
docker info | grep -i memory

# Increase Docker Desktop resources:
# Docker Desktop -> Preferences -> Resources
# Set at least: 4 GB RAM, 2 CPUs
```

**3. Existing cluster with same name**
```bash
# List existing clusters
kind get clusters

# Delete the old cluster
kind delete cluster --name kubeasy

# Recreate
kubeasy setup
```

### kubectl not configured
**Problem:** `kubectl` commands fail with "The connection to the server localhost:8080 was refused".

**Solution:**

```bash
# Verify the Kind cluster is running
kind get clusters

# Verify access using the Kind cluster context
kubectl cluster-info --context kind-kubeasy

# Or let Kubeasy configure the context for you
kubeasy setup
```

## Authentication issues
### Login fails with "Invalid token"

**Problem:** `kubeasy login` returns an authentication failure.

**Solution:**

- Generate a new API key from your Kubeasy profile
- Ensure you copied the entire key (no spaces or newlines)
- Try logging in again:

```bash
kubeasy login
# Enter your API key when prompted
```

### API key not found
**Problem:** Commands fail with "no API key found, please run 'kubeasy login'".

**Solution:**

```bash
# Re-authenticate
kubeasy login
```

Check where credentials are stored:

```bash
# The system keyring is checked first, then file-based storage at:
# Linux/macOS: ~/.config/kubeasy-cli/credentials
# Windows: %APPDATA%/kubeasy-cli/credentials

# You can also set the environment variable:
export KUBEASY_API_KEY=your-api-key
```

## Challenge issues
### Challenge fails to start

**Problem:** `kubeasy challenge start <name>` hangs or fails.

Debugging steps:

**1. Check cluster status**

```bash
kubectl get nodes
kubectl get pods -A

# Ensure Kyverno is running
kubectl get pods -n kyverno
```

**2. Check namespace creation**
```bash
# Verify the namespace exists
kubectl get namespace <challenge-slug>

# Check events
kubectl get events -n <challenge-slug> --sort-by='.lastTimestamp'
```

**3. Verify authentication**

```bash
# Make sure you're logged in
kubeasy login
```

### Challenge resources not deploying
**Problem:** Manifests don't appear in the cluster after starting a challenge.

**Solution:**

```bash
# Check all resources in the challenge namespace
kubectl get all -n <challenge-slug>

# Check events for errors
kubectl get events -n <challenge-slug> --sort-by='.lastTimestamp'

# Try resetting and restarting the challenge
kubeasy challenge reset <challenge-slug>
kubeasy challenge start <challenge-slug>
```

### Validation fails unexpectedly
**Problem:** `kubeasy challenge submit` fails even though the fix seems correct.

Debugging steps:

**1. Re-run the submission**

```bash
kubeasy challenge submit <challenge-slug>
```

Review the output carefully -- each validation shows whether it passed or failed, with a message explaining why.
**2. Verify resources are as expected**

```bash
# List all resources in the challenge namespace
kubectl get all -n <challenge-slug>

# Check specific resource details
kubectl describe pod <pod-name> -n <challenge-slug>
kubectl logs <pod-name> -n <challenge-slug>
```

**3. Check conditions**

```bash
kubectl get pod <pod-name> -n <challenge-slug> -o jsonpath='{.status.conditions}'
```

**4. Test connectivity manually (for connectivity validations)**

```bash
kubectl run -it --rm debug --image=curlimages/curl --restart=Never -n <challenge-slug> \
  -- curl http://service-name:port/path
```

### Kyverno policy blocks legitimate changes
**Problem:** Kubernetes rejects your changes with a Kyverno policy violation.

Understanding the error:

```
Error from server: admission webhook "validate.kyverno.svc" denied the request:
resource Deployment/my-app violates policy require-resource-limits
```

**Solution:** The policy violation is intentional -- it's part of the challenge's bypass prevention.
To fix:
- Read the error message carefully -- it tells you what's blocked
- Kyverno policies protect specific fields (like container images) to prevent cheating
- Focus on fixing the actual problem rather than replacing the application
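To illustrate, the name of the offending policy can be pulled straight out of the denial message. This is only a sketch: the sample string below mirrors the error format shown above, and the exact wording may vary between Kyverno versions.

```bash
# Extract the policy name from a sample Kyverno admission error.
# (Sample text only - pipe in a real error message in practice.)
err='Error from server: admission webhook "validate.kyverno.svc" denied the request: resource Deployment/my-app violates policy require-resource-limits'
policy=$(printf '%s' "$err" | sed 's/.*violates policy //')
echo "$policy"   # require-resource-limits
```

The extracted name can then be fed to `kubectl describe clusterpolicy` to see the rule that fired.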
View all active policies:
```bash
kubectl get clusterpolicies
kubectl describe clusterpolicy <policy-name>
```

### Challenge stuck in "In Progress"
**Problem:** The challenge shows as "In Progress" on the platform, but you've already finished it.

**Solution:**

```bash
# Submit your solution
kubeasy challenge submit <challenge-slug>

# This will:
# 1. Run all validations
# 2. Upload results to the platform
# 3. Update your progress
```

If submission succeeds but progress doesn't update:
- Check the web platform for validation feedback
- Some objectives might be failing silently
- Review the validation output in the CLI
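When skimming a long submission transcript, filtering for failures is quickest. A minimal sketch -- the transcript below is an invented example, not the CLI's actual output format:

```bash
# Grep a (made-up) submission transcript for failing validations
transcript='validation: pod-running ... PASS
validation: service-reachable ... FAIL (connection refused)
validation: resource-limits ... PASS'
printf '%s\n' "$transcript" | grep FAIL
```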
## Network issues

### Cannot reach services

**Problem:** Services within the cluster are not accessible.

Debugging steps:

**1. Check the service exists**

```bash
kubectl get svc -n <challenge-slug>
```

**2. Check endpoints**
```bash
# Endpoints should list pod IPs
kubectl get endpoints <service-name> -n <challenge-slug>

# If empty, check that pod labels match the service selector
kubectl get pods -n <challenge-slug> --show-labels
kubectl get svc <service-name> -n <challenge-slug> -o yaml | grep selector -A 5
```

**3. Test connectivity from within the cluster**
```bash
kubectl run -it --rm debug --image=nicolaka/netshoot --restart=Never -n <challenge-slug>

# From the debug pod:
curl http://service-name:port
nslookup service-name
```

**4. Check NetworkPolicies**
```bash
# List network policies
kubectl get networkpolicies -n <challenge-slug>

# Describe to see the rules
kubectl describe networkpolicy <policy-name> -n <challenge-slug>
```

### Pods cannot pull images
**Problem:** Pods are stuck in `ImagePullBackOff` or `ErrImagePull`.

**Solution:**

```bash
# Check pod events
kubectl describe pod <pod-name> -n <challenge-slug>

# Common causes:
# 1. Image doesn't exist - check the image name
# 2. Private registry without credentials
# 3. Rate limit (Docker Hub) - wait or authenticate
```

## Performance issues
### Cluster running slowly

**Problem:** Commands are slow and pods take a long time to start.

Possible causes:

**1. Insufficient resources**

```bash
# Check Docker resource usage
docker stats

# Increase Docker Desktop limits:
# Docker Desktop -> Settings -> Resources
# Recommended: 4-8 GB RAM, 2-4 CPUs
```

**2. Too many running pods**
```bash
# Check pod count
kubectl get pods -A | wc -l

# Clean up old challenges
kubeasy challenge clean <old-challenge>
```

**3. Check node status**

```bash
kubectl describe node kubeasy-control-plane
# Look for memory/CPU pressure warnings
```

## Data and cleanup issues
### Cannot delete namespace

**Problem:** The namespace is stuck in the `Terminating` state.

**Solution:**

```bash
# Check what's blocking deletion - look for finalizers
kubectl get namespace <challenge-slug> -o yaml

# Force-clear the finalizers via the finalize subresource (requires jq).
# Note: patching metadata.finalizers alone is usually not enough, because
# the finalizers that hold a namespace in Terminating live in spec.finalizers.
kubectl get namespace <challenge-slug> -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/<challenge-slug>/finalize" -f -
```

### Reset cluster completely
**Problem:** The cluster is in a bad state and you want to start fresh.

**Solution:**

```bash
# Delete the Kind cluster
kind delete cluster --name kubeasy

# Recreate everything
kubeasy setup

# This will:
# 1. Create a fresh Kind cluster
# 2. Reinstall all components
# 3. Configure the kubectl context
```

### Clear CLI configuration
**Problem:** You want to reset CLI settings or change accounts.

**Solution:**

```bash
# CLI config is stored at:
# Linux/macOS: ~/.config/kubeasy-cli/
# Windows: %APPDATA%/kubeasy-cli/

# Remove the CLI configuration
rm -rf ~/.config/kubeasy-cli

# Log in again
kubeasy login
```

## Getting more help
If your issue isn't covered here:

1. **Check the logs**

   ```bash
   # CLI logs are written to a file - check the log path in the output

   # Kyverno logs
   kubectl logs -n kyverno -l app.kubernetes.io/name=kyverno
   ```

2. **Gather diagnostic info**

   ```bash
   # Cluster info
   kubectl cluster-info

   # Challenge state
   kubectl get all -n <challenge-slug> -o yaml > challenge-state.yaml
   ```

3. **Ask for help**

   - GitHub Issues
   - Community Discussions
   - Include: the error message, CLI version (`kubeasy version`), OS, and what you've tried
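When filing an issue, it can help to bundle the diagnostics above into a single file. A minimal sketch, assuming `kubeasy` and `kubectl` are on your PATH (`diagnostics.txt` is just a suggested filename):

```bash
# Collect basic diagnostics into one file to attach to a bug report.
# Errors (e.g. a missing command) are captured in the file rather than
# aborting the script.
report=diagnostics.txt
{
  echo "== CLI version =="
  kubeasy version 2>&1
  echo "== Cluster info =="
  kubectl cluster-info 2>&1
  echo "== Recent events =="
  kubectl get events -A --sort-by='.lastTimestamp' 2>&1 | tail -20
} > "$report"
echo "wrote $report"
```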
## Useful debugging commands

```bash
# Check all running resources
kubectl get all -A

# View recent events across all namespaces
kubectl get events -A --sort-by='.lastTimestamp' | tail -20

# Check cluster resource usage
kubectl top nodes
kubectl top pods -A

# Verify all Kubeasy components are healthy
kubectl get pods -n kyverno
kubectl get pods -n local-path-storage

# Test DNS resolution
kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup kubernetes.default
```
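If you reach for the throwaway debug pod often, you can wrap the pattern in a tiny shell function for your profile. `kdebug` is a hypothetical name, not part of Kubeasy:

```bash
# Hypothetical helper: launch an interactive netshoot pod in a namespace.
# Fails with a usage message if no namespace is given.
kdebug() {
  local ns="${1:?usage: kdebug <namespace>}"
  kubectl run -it --rm debug --image=nicolaka/netshoot --restart=Never -n "$ns"
}

# Usage: kdebug <challenge-slug>
```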