Welcome! In this hands-on workshop, you'll learn Kubernetes fundamentals by working with a real application.
Before starting, ensure you have:
- ✅ Docker Desktop installed and running
- ✅ Git installed (to clone this repository)
Open a terminal and run:
docker ps

If this works without errors, you're ready!
Run these commands:
Windows
cd Workshop_Implementation
docker-compose -f docker-compose.workshop.yaml up --build -d

Linux/Mac
cd Workshop_Implementation
sh ./setup-workshop.sh

Important

On some Linux distributions (typically those using SELinux), you have to add the `:z` option to the Docker volume mounts. The changes are:

In docker-compose.yaml, for the api volumes:

- ./:/app:z

In docker-compose.workshop.yaml:
./k8s:/k8s:ro ----> ./k8s:/k8s:ro,z
/var/run/docker.sock:/var/run/docker.sock ----> /var/run/docker.sock:/var/run/docker.sock:z
./kubeconfig:/output ----> ./kubeconfig:/output:z
./k8s:/k8s:ro ----> ./k8s:/k8s:ro,z

This will:
- ✅ Build your application images (backend + frontend)
- ✅ Start a Kubernetes cluster (k3s)
- ✅ Deploy all applications automatically
- ✅ Set up the Kubernetes Dashboard
Wait time: 2-3 minutes
Monitor the deployment:
docker logs -f workshop-deployer

When you see "✅ Setup Complete!", you're ready!
Press Ctrl+C to stop viewing logs.
Check that all pods are running:
docker exec workshop-k3s kubectl get pods -n workshop

You should see:
NAME READY STATUS RESTARTS AGE
backend-xxxxx-xxxxx 1/1 Running 0 1m
backend-xxxxx-xxxxx 1/1 Running 0 1m
frontend-xxxxx-xxxxx 1/1 Running 0 1m
frontend-xxxxx-xxxxx 1/1 Running 0 1m
frontend-xxxxx-xxxxx 1/1 Running 0 1m
All pods should show Running status.
Open these in your browser:
| Application | URL | Purpose |
|---|---|---|
| Frontend | http://localhost:30081 | Your web application |
| Backend | http://localhost:30080/health | API health check |
| Dashboard | https://localhost:30082 | Kubernetes visualization |
- Your browser will warn about the certificate - click "Advanced" → "Proceed"
- On the login page, click the "Skip" button
- You should now see the Kubernetes Dashboard!
You now have a complete Kubernetes environment with:
- Backend: 2 pods running a Node.js API with SQLite database
- Frontend: 3 pods running a React application
- Load Generator: Ready to create artificial load (starts at 0 replicas)
- Services: Provide stable network endpoints
- HPA (HorizontalPodAutoscaler): Ready to auto-scale based on load
- Dashboard: Web UI to visualize everything
Throughout the workshop, you'll use these commands:
# View all pods
docker exec workshop-k3s kubectl get pods -n workshop
# View all resources
docker exec workshop-k3s kubectl get all -n workshop
# View autoscaling status
docker exec workshop-k3s kubectl get hpa -n workshop
# View resource usage
docker exec workshop-k3s kubectl top pods -n workshop

Goal: Learn manual scaling, prove zero-downtime, and see autoscaling in action
Learn how to scale applications manually and watch it happen in real-time.
Open two browser tabs:
- Dashboard: https://localhost:30082
- On the top left, select all namespaces.
- Navigate to: Workloads → Deployments
- Your App: http://localhost:30081
Keep both visible!
In your terminal, check current pods:
docker exec workshop-k3s kubectl get pods -n workshop

You should see:
- 2 backend pods (backend-xxxxx)
- 3 frontend pods (frontend-xxxxx)
In the Dashboard, you can see the same information visually.
Scale the frontend to 6 replicas:
docker exec workshop-k3s kubectl scale deployment/frontend --replicas=6 -n workshop

👀 Watch it happen:
- In Dashboard: Click on the `frontend` deployment
- Watch the Pods section - 3 new pods appear!
- Status changes: `Pending` → `ContainerCreating` → `Running`
- In Terminal (optional):

docker exec workshop-k3s kubectl get pods -n workshop -w

Press Ctrl+C to stop watching.
⏱️ Wait until all 6 pods show Running status.
Now scale back down to just 1 replica:
docker exec workshop-k3s kubectl scale deployment/frontend --replicas=1 -n workshop

👀 Watch it happen:
- In Dashboard: See 5 pods gracefully terminate
- Notice: Kubernetes removes pods one by one, not all at once
Refresh http://localhost:30081 in your browser.
Result: The app works perfectly, even with just 1 pod!
Prove that scaling doesn't break your application.
Open a new terminal window and run:
Windows PowerShell:
while($true) {
try {
$response = Invoke-WebRequest -Uri http://localhost:30081 -UseBasicParsing
Write-Host "β $($response.StatusCode)" -ForegroundColor Green
} catch {
Write-Host "β ERROR" -ForegroundColor Red
}
Start-Sleep -Milliseconds 500
}

Mac/Linux:
while true; do
curl -s -o /dev/null -w "✅ %{http_code}\n" http://localhost:30081 || echo "❌ ERROR"
sleep 0.5
done

You should see a continuous stream of ✅ 200 responses.
Keep the monitoring running! In your original terminal, execute these commands quickly one after another:
# Scale UP
docker exec workshop-k3s kubectl scale deployment/backend --replicas=10 -n workshop
# Wait 5 seconds (just count in your head)
# Scale DOWN
docker exec workshop-k3s kubectl scale deployment/backend --replicas=2 -n workshop
# Wait 5 seconds
# Scale UP again
docker exec workshop-k3s kubectl scale deployment/backend --replicas=7 -n workshop
# Wait 5 seconds
# Scale DOWN again
docker exec workshop-k3s kubectl scale deployment/backend --replicas=3 -n workshop

Look at your monitoring terminal:

What you should see:
- ✅ The app never goes down
- ✅ All requests return `200 OK`
- ✅ No `❌ ERROR` messages
In Dashboard:
- Watch pods appear and disappear smoothly
- Traffic continues flowing to healthy pods
Stop monitoring: Press Ctrl+C in the monitoring terminal.
Why doesn't the app break?
- Kubernetes maintains the Service (the network endpoint)
- The Service only routes traffic to healthy pods
- New pods become healthy before old pods are removed
- This is called a rolling update
This is why Kubernetes is so powerful for production applications!
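To make the Service's role concrete, here is a rough sketch of what a Service like the backend's could look like. The real manifest lives in the repository's k8s/ folder; the label selector (`app: backend`), container port 8080, and NodePort 30080 are assumptions based on values used elsewhere in this workshop.

```yaml
# Hypothetical sketch of a Service similar to the workshop's backend Service.
# It selects pods by label, so traffic always goes to whichever healthy pods
# currently carry that label - regardless of how many replicas exist.
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: workshop
spec:
  type: NodePort
  selector:
    app: backend          # any Ready pod with this label receives traffic
  ports:
    - port: 8080          # Service port inside the cluster
      targetPort: 8080    # container port (assumed from the liveness probe)
      nodePort: 30080     # host port used in this workshop's URLs
```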
Watch Kubernetes automatically scale based on load.
Now that we've explored manual scaling, let's enable autoscaling:
All platforms:
docker exec workshop-k3s kubectl apply -f k8s/hpa.yaml

You should see:
horizontalpodautoscaler.autoscaling/backend-hpa created
horizontalpodautoscaler.autoscaling/frontend-hpa created
What this does:
- Creates autoscaling rules for backend and frontend
- Backend: Scale between 2-10 pods based on CPU usage
- Frontend: Scale between 2-8 pods based on CPU usage
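For reference, the backend rule could look roughly like the sketch below. This is not the literal contents of k8s/hpa.yaml; the min/max values follow the description above and the 50% CPU target matches the HPA output shown later.

```yaml
# Hypothetical sketch of an HPA similar to backend-hpa in k8s/hpa.yaml.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-hpa
  namespace: workshop
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 2           # never scale below this
  maxReplicas: 10          # never scale above this
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50   # add pods when average CPU exceeds 50%
```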
HPA (HorizontalPodAutoscaler) is already configured. Check it:
docker exec workshop-k3s kubectl get hpa -n workshop

You should see:
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS
frontend-hpa Deployment/frontend 2%/60% 2 8 2
backend-hpa Deployment/backend 1%/50% 3 10 3
Understanding the columns:
- TARGETS: Current CPU usage / Target CPU usage
- MINPODS: Minimum number of pods (won't go below this)
- MAXPODS: Maximum number of pods (won't go above this)
- REPLICAS: Current number of pods running
See how much CPU/memory your pods are using:
docker exec workshop-k3s kubectl top pods -n workshop

You should see low CPU usage (a few millicores like 1m or 2m).
The load generator will send many requests to the backend to create CPU load:
docker exec workshop-k3s kubectl scale deployment/load-generator --replicas=12 -n workshop

What this does: Creates 12 pods that continuously hit the backend API.

👀 In Dashboard:
- Navigate to Workloads → Pods
- You'll see 12 new `load-generator` pods appear
Open two terminal windows and run these commands:
Terminal 1 - Watch HPA:
docker exec workshop-k3s kubectl get hpa -n workshop -w

Terminal 2 - Watch Pods:

docker exec workshop-k3s kubectl get pods -n workshop -w

What you'll see over the next 30-60 seconds:
- CPU increases: The TARGETS column shows higher % (like `45%/50%`, then `80%/50%`)
- HPA triggers scaling: When CPU > 50%, HPA decides to add pods
- New pods created: You'll see new `backend-xxxxx` pods in `ContainerCreating` state
- Load distributes: As new pods become Ready, CPU per pod decreases
- System stabilizes: Eventually settles with enough pods to keep CPU around 50%

Expected result: Backend scales from 2 → 4-6 pods (depending on load)
💡 In Dashboard:
- Watch the backend deployment
- See the replica count increase automatically
- No manual commands needed!
Now stop generating load:
docker exec workshop-k3s kubectl scale deployment/load-generator --replicas=0 -n workshop

Watch what happens (30-60 seconds):
- CPU drops: TARGETS shows low usage like `5%/50%`
- HPA waits: There's a stabilization window (30 seconds)
- Scale down begins: HPA gradually removes pods
- Returns to minimum: Eventually scales back to 2 pods (minReplicas)
Why the wait? To avoid "flapping" (scaling up and down repeatedly).
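That stabilization window is a tunable part of the HPA spec. A sketch of how it could be expressed (whether k8s/hpa.yaml sets it explicitly is an assumption; 30 seconds matches the behavior described above):

```yaml
# Hypothetical addition to an HPA spec: tune scale-down stabilization.
spec:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 30   # require 30s of sustained low CPU before removing pods
      policies:
        - type: Pods
          value: 1                     # remove at most 1 pod per period
          periodSeconds: 15
```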
Press Ctrl+C in both monitoring terminals when done.
docker exec workshop-k3s kubectl get hpa -n workshop
docker exec workshop-k3s kubectl get pods -n workshop

Everything should be back to the starting state:
- Backend: 2 pods
- Frontend: 3 pods
- Load generator: 0 pods
- Used `kubectl scale` to change replica count
- Kubernetes creates/destroys pods as needed
- Changes happen gradually, maintaining availability
- Applications remain available during scaling
- Kubernetes only routes traffic to healthy pods
- Services provide stable endpoints despite pod changes
- Kubernetes can scale automatically based on metrics
- HPA monitors CPU/memory and adjusts replicas
- Scaling has built-in safeguards (min/max, stabilization)
In Production:
- Handle traffic spikes automatically (Black Friday, viral content)
- Optimize costs by scaling down during low usage
- Maintain availability without manual intervention
- Sleep better knowing your app self-heals and scales
Goal: Deploy new versions without downtime and rollback instantly if something breaks
- How to update applications in Kubernetes
- How rolling updates maintain zero-downtime
- How to rollback when deployments fail
- How Kubernetes protects your application automatically
Deploy a new version and watch Kubernetes update gradually.
First, let's see what version is currently running:
Windows PowerShell:
Invoke-RestMethod http://localhost:30080/api/status | ConvertTo-Json

Mac/Linux:

curl http://localhost:30080/api/status

You should see:
{
"ok": true,
"version": "1.0.0",
"uptime": 123,
"node": "v22.x.x"
}

The version is 1.0.0 (v1).
Open three windows to watch the update happen:
Window 1 - Dashboard:
- Open https://localhost:30082
- Navigate to: Workloads → Deployments → Click backend
- Keep this visible to watch pods change
Window 2 - Terminal (Watch Pods):
docker exec workshop-k3s kubectl get pods -n workshop -w

Window 3 - Terminal (Commands):
- Keep this ready for running commands
In Window 3, run the update command:
docker exec workshop-k3s kubectl set image deployment/backend backend=demo-api:v2 -n workshop

What this does: Changes the image from demo-api:v1 to demo-api:v2

👀 In Window 2 (Terminal): You'll see this sequence:
- A new pod appears with a new name (e.g., `backend-xxxxx-yyyyy`)
- Status: `ContainerCreating` → `Running`
- Once the new pod is `Ready`, an old pod starts `Terminating`
- Another new pod is created
- Another old pod terminates
- Process continues until all pods are v2
👀 In Window 1 (Dashboard):
- Click on the backend deployment
- Watch the Pods section
- See pods with different names (old vs new)
- Notice: Some v1 pods stay running while v2 pods start!
⏱️ This takes about 30-60 seconds
Once all pods show Running, check the version:
Windows PowerShell:
Invoke-RestMethod http://localhost:30080/api/status | ConvertTo-Json

Mac/Linux:

curl http://localhost:30080/api/status

You should now see:
{
"ok": true,
"version": "2.0.0", β Changed!
"uptime": 15,
"node": "v22.x.x"
}

Success! You've deployed v2 without downtime! 🎉

docker exec workshop-k3s kubectl rollout status deployment/backend -n workshop

Output:
deployment "backend" successfully rolled out
You can also see the rollout history:
docker exec workshop-k3s kubectl rollout history deployment/backend -n workshop

Now let's see what happens when an update fails.
Let's simulate deploying a broken version:
docker exec workshop-k3s kubectl set image deployment/backend backend=demo-api:v3 -n workshop

What this does: Tries to update to v3, which doesn't exist!
Keep Window 2 running (watching pods)
You'll see:
- New pods appear (trying to start v3)
- Status: `ContainerCreating` → `ImagePullBackOff` or `ErrImagePull`
- Old v2 pods keep running! ← This is the key!

👀 In Dashboard:
- New pods show status: `ImagePullBackOff` (red warning)
- Old v2 pods: Still `Running` (green)
- Traffic only goes to the healthy v2 pods!
While the v3 pods are failing, test the application:
Windows PowerShell:
Invoke-RestMethod http://localhost:30080/api/status | ConvertTo-Json

Mac/Linux:

curl http://localhost:30080/api/status

You should still see:
{
"ok": true,
"version": "2.0.0", β Still v2, still working!
...
}

The app never went down! Even though new pods are failing, the old v2 pods continue serving traffic.

docker exec workshop-k3s kubectl rollout status deployment/backend -n workshop --timeout=30s

This will time out because the rollout can't complete:
Waiting for deployment "backend" rollout to finish: 1 out of 2 new replicas have been updated...
error: timed out waiting for the condition
Pick one of the failing pods and check what's wrong:
All platforms:
# First, get the list of pods
docker exec workshop-k3s kubectl get pods -n workshop

Look for a pod with status ImagePullBackOff or ErrImagePull. Copy its name, then describe it:
# Replace with the actual pod name you copied
docker exec workshop-k3s kubectl describe pod <pod-name> -n workshop

Example:

docker exec workshop-k3s kubectl describe pod backend-7d4b9c8f6d-x9k2l -n workshop

Look for the Events section at the bottom. You'll see:
Failed to pull image "demo-api:v3": rpc error: code = NotFound desc = failed to pull and unpack image
Now let's fix it instantly with a rollback:
docker exec workshop-k3s kubectl rollout undo deployment/backend -n workshopWatch what happens:
- Failed v3 pods are terminated
- v2 pods are restored
- Application continues working
Check the rollout status:
docker exec workshop-k3s kubectl rollout status deployment/backend -n workshop

Output:
deployment "backend" successfully rolled out
Check the version:
curl http://localhost:30080/api/status

Back to:
{
"version": "2.0.0" β Rolled back to v2
}See all the changes we made:
docker exec workshop-k3s kubectl rollout history deployment/backend -n workshopYou should see:
REVISION CHANGE-CAUSE
1 <none>
2 <none>
3 <none>
Each revision represents a deployment change (v1 → v2 → v3 → v2).
- Kubernetes updates pods gradually, not all at once
- New pods start before old pods stop
- Traffic only goes to healthy pods
- Zero downtime during updates
- When new pods fail, old pods keep running
- Your application never goes down
- Failed deployments don't break production
- Kubernetes waits for pods to be ready before switching traffic
- One command: `kubectl rollout undo`
- Returns to previous working version
- Takes seconds, not minutes
- Can rollback to any previous revision
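If you ever need a specific revision rather than just the previous one, kubectl supports that too; for example (the revision number here is illustrative):

```bash
# List the recorded revisions, then roll back to a specific one.
docker exec workshop-k3s kubectl rollout history deployment/backend -n workshop
docker exec workshop-k3s kubectl rollout undo deployment/backend -n workshop --to-revision=2
```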
When you update a deployment, Kubernetes:
- Creates 1 new pod with the new version
- Waits for it to pass health checks (readiness probe)
- Marks it ready to receive traffic
- Terminates 1 old pod
- Repeats until all pods are updated
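This pacing is controlled by the Deployment's rolling update strategy. A sketch of the relevant fields (backend.yaml's actual values aren't shown in this workshop, so treat these as illustrative):

```yaml
# Hypothetical strategy block for a Deployment like backend.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod above the desired count during an update
      maxUnavailable: 0    # never take a pod away before its replacement is Ready
```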
When new pods fail health checks:
- Kubernetes never marks them ready
- Service never routes traffic to them
- Old pods continue serving requests
- Your SLA is maintained!
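The "ready" signal comes from a readiness probe on the new pods. A minimal sketch, assuming the backend exposes the same /health endpoint on port 8080 that its liveness probe uses:

```yaml
# Hypothetical readiness probe for the backend container.
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5         # only Ready pods receive Service traffic
```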
Goal: Watch Kubernetes automatically recover from failures without human intervention
- How Kubernetes maintains desired state automatically
- How liveness probes detect and restart failed containers
- How deployments ensure the correct number of replicas
- Why you don't need to manually fix crashed pods
Kubernetes constantly monitors your applications and automatically fixes problems:
- Deployment Controller: Ensures the desired number of pods are always running
- Liveness Probes: Checks if containers are healthy, restarts them if not
- Service: Only routes traffic to healthy pods
Let's see this in action!
Watch Kubernetes automatically recreate deleted pods.
See how many backend pods are running:
All platforms (same command):
docker exec workshop-k3s kubectl get pods -n workshop -l app=backend

You should see 2 backend pods (because replicas: 2 in backend.yaml):
NAME READY STATUS RESTARTS AGE
backend-xxxxx-aaaaa 1/1 Running 0 5m
backend-xxxxx-bbbbb 1/1 Running 0 5m
In a separate terminal, watch pods in real-time:
All platforms (same command):
docker exec workshop-k3s kubectl get pods -n workshop -w

Keep this running!
In your original terminal, pick one backend pod and delete it:
All platforms (same command):
# Replace <pod-name> with actual name from Step 1
docker exec workshop-k3s kubectl delete pod <pod-name> -n workshop

Example:

docker exec workshop-k3s kubectl delete pod backend-xxxxx-aaaaa -n workshop

👀 In your monitoring window:
You'll see:
- The deleted pod: `Terminating`
- Immediately, a new pod appears (different name)
- New pod: `Pending` → `ContainerCreating` → `Running`
- Old pod disappears
- Total count stays at 2 pods
This happens in seconds!
Stop watching (press Ctrl+C) and check the final state:
All platforms (same command):
docker exec workshop-k3s kubectl get pods -n workshop -l app=backend

You should see:
- 2 backend pods (same as before)
- One pod has a very recent AGE (the newly created one)
🔍 What Just Happened:
Kubernetes Deployment maintains replicas: 2. When you deleted a pod:
- Deployment noticed: "I have 1 pod, but I need 2"
- Deployment created a new pod immediately
- New pod started and became ready
- Desired state restored
No human intervention needed!
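If you want to compare desired vs. actual replicas directly, one way is kubectl's jsonpath output (a sketch; the field paths are standard Deployment status fields):

```bash
# Compare the desired replica count with how many pods are currently Ready.
docker exec workshop-k3s kubectl get deployment backend -n workshop \
  -o jsonpath='{.spec.replicas} desired, {.status.readyReplicas} ready{"\n"}'
```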
Watch Kubernetes automatically restart crashed containers.
Your backend has a special endpoint that crashes the container (for demo purposes):
Windows PowerShell:
Invoke-RestMethod -Method POST -Uri http://localhost:30080/api/crash

Mac/Linux:

curl -X POST http://localhost:30080/api/crash

You should get a response:
{
"message": "This pod will crash in 1 second for demo purposes",
"pod": "backend-xxxxx-yyyyy",
"note": "Kubernetes will restart this container automatically"
}

Note the pod name!
Immediately check pod status:
All platforms (same command):
docker exec workshop-k3s kubectl get pods -n workshop -l app=backend

What you'll see:
The pod you crashed will show:
NAME READY STATUS RESTARTS AGE
backend-xxxxx-yyyyy 0/1 Error 0 2m
backend-xxxxx-zzzzz 1/1 Running 0 2m
Wait a few seconds and check again:
All platforms (same command):
docker exec workshop-k3s kubectl get pods -n workshop -l app=backend

Now it shows:
NAME READY STATUS RESTARTS AGE
backend-xxxxx-yyyyy 1/1 Running 1 2m ← RESTARTS increased!
backend-xxxxx-zzzzz 1/1 Running 0 2m
Notice:
- Same pod name (not deleted)
- `RESTARTS` increased from `0` to `1`
- Status back to `Running`
See what Kubernetes did:
All platforms (same command):
docker exec workshop-k3s kubectl describe pod <crashed-pod-name> -n workshop

Scroll to the Events section at the bottom. You'll see:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulled 30s kubelet Successfully pulled image "demo-api:v1"
Warning BackOff 15s kubelet Back-off restarting failed container
Normal Pulled 10s kubelet Container image already present on machine
Normal Created 10s kubelet Created container backend
Normal Started 10s kubelet Started container backend
See the crash in the logs:
All platforms (same command):
docker exec workshop-k3s kubectl logs <crashed-pod-name> -n workshop --tail=20

You should see:
[backend-xxxxx-yyyyy] Intentional crash triggered for workshop demo
🔍 What Just Happened:
The Liveness Probe detected the crash:
- Container process exited (crashed)
- Liveness probe failed (no response on `/health`)
- Kubernetes restarted the container (not the whole pod)
- New container started fresh
- Health check passed → Container ready
The pod wasn't deleted, just the container inside was restarted!
Let's cause chaos and prove the application stays available.
Open a new terminal and monitor the application:
Windows PowerShell:
while($true) {
try {
$response = Invoke-WebRequest -Uri http://localhost:30080/api/status -UseBasicParsing
Write-Host "β OK - Version: $(($response.Content | ConvertFrom-Json).version)" -ForegroundColor Green
} catch {
Write-Host "β FAILED" -ForegroundColor Red
}
Start-Sleep -Milliseconds 500
}

Mac/Linux:
while true; do
response=$(curl -s http://localhost:30080/api/status)
if [ $? -eq 0 ]; then
version=$(echo $response | grep -o '"version":"[^"]*"' | cut -d'"' -f4)
echo "β OK - Version: $version"
else
echo "β FAILED"
fi
sleep 0.5
done

You should see continuous ✅ OK messages. Keep this running!
In your original terminal, let's cause multiple failures at once.
First, crash a pod:
Windows PowerShell:
Invoke-RestMethod -Method POST -Uri http://localhost:30080/api/crash

Mac/Linux:

curl -X POST http://localhost:30080/api/crash

Then immediately delete a different pod:
Windows PowerShell:
$podName = docker exec workshop-k3s kubectl get pod -l app=backend -n workshop -o jsonpath='{.items[0].metadata.name}'
docker exec workshop-k3s kubectl delete pod $podName -n workshop

Mac/Linux:

docker exec workshop-k3s kubectl delete pod $(docker exec workshop-k3s kubectl get pod -l app=backend -n workshop -o jsonpath='{.items[0].metadata.name}') -n workshop

Then crash again:
Windows PowerShell:
Invoke-RestMethod -Method POST -Uri http://localhost:30080/api/crash

Mac/Linux:

curl -X POST http://localhost:30080/api/crash

👀 In your monitoring terminal:

You should see:
- Mostly ✅ OK responses
- Maybe 1-2 ❌ FAILED (during brief transition)
- Quickly back to all ✅ OK
Why didn't the app go down?
- You have 2 backend pods
- When one pod crashes or is deleted → the other pod handles the traffic
- Service only routes to healthy pods
- By the time one recovers, another is ready
Stop monitoring: Press Ctrl+C
See what happened:
All platforms (same command):
docker exec workshop-k3s kubectl get pods -n workshop -l app=backend

You should see:
- 2 pods running (always!)
- Increased `RESTARTS` counts
- Recent `AGE` for recreated pods
Everything is back to normal, automatically!
- Deployment maintains desired replica count
- Delete a pod β New pod created instantly
- No manual intervention needed
- Liveness probes detect unhealthy containers
- Failed containers restart automatically
- Pod survives, just container restarts
- Multiple replicas provide redundancy
- Service routes around failed pods
- Application stays available during chaos
What it does:
- Constantly monitors: "Do I have the right number of pods?"
- If count is wrong → Creates or deletes pods to match `replicas:`
In your backend.yaml:
spec:
replicas: 2 # Always maintain 2 pods

What it does:
- Periodically checks if container is alive
- If check fails → Restarts the container
In your backend.yaml:
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 10
periodSeconds: 10 # Check every 10 seconds

How it works:
- Every 10 seconds, Kubernetes calls `GET /health`
- If response is OK → Container is healthy
- If no response / error → Container is dead
- After a few failures → Restart container
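How many consecutive failures trigger the restart is configurable via failureThreshold. A sketch (this field isn't shown in the workshop's backend.yaml excerpt, so the value here is illustrative; Kubernetes defaults to 3):

```yaml
# Hypothetical probe tuning: how many consecutive failures trigger a restart.
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 10
  failureThreshold: 3      # restart the container after 3 failed checks in a row
```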
Goal: Learn how to configure applications without rebuilding images and handle sensitive data securely
- How to separate configuration from code
- The difference between ConfigMaps and Secrets
- How to update application config without rebuilding images
- Why the same image can run in dev, staging, and production
In production, you want:
- One image that works everywhere (dev, staging, production)
- Different configuration per environment
- No secrets in code or Docker images
- Easy config updates without redeployment
Kubernetes provides:
- ConfigMaps: For non-sensitive configuration (URLs, feature flags, settings)
- Secrets: For sensitive data (API keys, passwords, tokens)
Let's see them in action!
Your backend has a special endpoint that shows its current configuration.
Windows PowerShell:
Invoke-RestMethod http://localhost:30080/api/config | ConvertTo-Json

Mac/Linux:

curl http://localhost:30080/api/config

You should see:
{
"environment": "development",
"feature_new_ui": false,
"external_api_url": "https://api.example.com",
"max_items": 100,
"database_path": "/data/demo.sqlite",
"has_api_key": true,
"api_key_length": 29
}π What you're seeing:
environment: Loaded from ConfigMapfeature_new_ui: Feature flag from ConfigMapexternal_api_url: External service URL from ConfigMapmax_items: App setting from ConfigMaphas_api_key: Shows we have a secret (without exposing it!)api_key_length: Shows the secret's length (safe to show)
See what's in the ConfigMap:
All platforms:
docker exec workshop-k3s kubectl get configmap backend-config -n workshop -o yaml

You'll see:
apiVersion: v1
kind: ConfigMap
metadata:
name: backend-config
namespace: workshop
data:
APP_ENV: "development"
FEATURE_NEW_UI: "false"
EXTERNAL_API_URL: "https://api.example.com"
MAX_ITEMS: "100"Notice: It's plain text - anyone can read it!
See what's in the Secret:
All platforms:
docker exec workshop-k3s kubectl get secret backend-secret -n workshop -o yaml

You'll see:
apiVersion: v1
kind: Secret
metadata:
name: backend-secret
namespace: workshop
type: Opaque
data:
API_KEY: c3VwZXItc2VjcmV0LWFwaS1rZXktMTIzNDU=

Notice: It's base64 encoded - not human-readable, but NOT encrypted!
Let's see what the secret actually is:
Windows PowerShell:
[Text.Encoding]::UTF8.GetString([Convert]::FromBase64String("c3VwZXItc2VjcmV0LWFwaS1rZXktMTIzNDU="))

Mac/Linux:

echo "c3VwZXItc2VjcmV0LWFwaS1rZXktMTIzNDU=" | base64 --decode

Output:
super-secret-api-key-12345
🔒 Important: Base64 is NOT encryption - it's just encoding to handle binary data. Secrets in Kubernetes are:
- Base64 encoded in YAML (so they can contain any characters)
- Stored in etcd (can be encrypted at rest with proper K8s config)
- Only accessible with proper RBAC permissions
- Not meant to be read directly by humans
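If you'd rather not base64-encode values by hand, kubectl can generate the Secret manifest for you. A sketch using this workshop's secret name and key (--dry-run=client prints the YAML instead of applying it):

```bash
# Generate a Secret manifest with the value base64-encoded automatically.
docker exec workshop-k3s kubectl create secret generic backend-secret \
  --from-literal=API_KEY=super-secret-api-key-12345 \
  -n workshop --dry-run=client -o yaml
```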
Let's change the configuration without rebuilding the image.
We'll update the ConfigMap by applying a new version.
Edit the existing file
Open k8s/configmap.yaml in your text editor and change:
data:
APP_ENV: "development"
FEATURE_NEW_UI: "true" # Change from "false" to "true"
EXTERNAL_API_URL: "https://api.example.com"
MAX_ITEMS: "200" # Change from "100" to "200"Save the file.
docker exec workshop-k3s kubectl rollout restart deployment/backend -n workshopAll platforms:
docker exec workshop-k3s kubectl apply -f k8s/configmap.yamlYou should see:
configmap/backend-config configured
ConfigMap changes don't automatically update running pods. Restart them so they pick up the new values - either with `docker exec workshop-k3s kubectl rollout restart deployment/backend -n workshop`, or (as below) by deleting the pods so the Deployment recreates them:
All platforms:
docker exec workshop-k3s kubectl delete pods -l app=backend -n workshopYou should see:
pod "backend-xxxxx-yyyyy" deleted
pod "backend-xxxxx-zzzzz" deleted
Watch the pods get recreated:
All platforms:
docker exec workshop-k3s kubectl get pods -n workshop -w

Wait until you see both backend pods with:
- STATUS: `Running`
- READY: `1/1`
Then press Ctrl+C to stop watching.
🔍 What's Happening:
- Kubernetes Deployment sees pods are missing
- Immediately creates new pods
- New pods load fresh ConfigMap values
- The restart takes ~10-20 seconds in total, and a pod that is still terminating can keep serving requests in the meantime
Check the config again:
Windows PowerShell:
Invoke-RestMethod http://localhost:30080/api/config | ConvertTo-Json

Mac/Linux:

curl http://localhost:30080/api/config

Now you should see:
{
"environment": "development",
"feature_new_ui": true, β Changed!
"external_api_url": "https://api.example.com",
"max_items": 200, β Changed!
"database_path": "/data/demo.sqlite",
"has_api_key": true,
"api_key_length": 26
}

🔍 What Just Happened:
- Changed ConfigMap (just YAML, no code)
- Restarted pods to pick up new config
- Application now uses new settings
- No image rebuild needed!
- Same image, different config
Let's change the API key to show how Secrets work.
First, we need to base64-encode our new secret value.
Windows PowerShell:
[Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes("new-production-key-67890"))Mac/Linux:
echo -n "new-production-key-67890" | base64You'll get:
bmV3LXByb2R1Y3Rpb24ta2V5LTY3ODkw
Copy this value!
Open k8s/secret.yaml in your text editor and update the API_KEY value:
Find this line:
data:
# API_KEY value: "super-secret-api-key-12345"
API_KEY: c3VwZXItc2VjcmV0LWFwaS1rZXktMTIzNDU=Replace with your new base64 value:
data:
# API_KEY value: "new-production-key-67890"
API_KEY: bmV3LXByb2R1Y3Rpb24ta2V5LTY3ODkw

Save the file.
All platforms:
docker exec workshop-k3s kubectl apply -f k8s/secret.yaml

You should see:
secret/backend-secret configured
All platforms:
docker exec workshop-k3s kubectl delete pods -l app=backend -n workshopWatch the pods restart:
All platforms:
docker exec workshop-k3s kubectl get pods -n workshop -w

Wait for both pods to show Running and 1/1 READY, then press Ctrl+C.
Check the config:
Windows PowerShell:
Invoke-RestMethod http://localhost:30080/api/config | ConvertTo-Json

Mac/Linux:

curl http://localhost:30080/api/config

Now you should see:
{
"environment": "development",
"feature_new_ui": true,
"external_api_url": "https://api.example.com",
"max_items": 200,
"database_path": "/data/demo.sqlite",
"has_api_key": true,
"api_key_length": 24 β Changed! (different length)
}Notice: The api_key_length changed from 26 to 24 characters!
🔍 What Just Happened:
- Created new base64-encoded secret
- Edited the Secret file directly
- Applied the updated Secret to Kubernetes
- Deleted pods so they reload with new secret
- Application now uses new API key
- API key never appeared in code or images!
Understanding how this works in production.
Same Image, Different Configs:
┌──────────────────────────────────────────┐
│        Docker Image: demo-api:v1         │
│         (Same code, no config)           │
└──────────────────────────────────────────┘
                     │
         ┌───────────┼───────────┐
         ▼           ▼           ▼
    ┌─────────┐ ┌─────────┐ ┌─────────┐
    │   Dev   │ │ Staging │ │  Prod   │
    │Namespace│ │Namespace│ │Namespace│
    └─────────┘ └─────────┘ └─────────┘
         │           │           │
     ConfigMap   ConfigMap   ConfigMap
     - ENV: dev  - ENV: staging - ENV: prod
     - DEBUG: true - DEBUG: true - DEBUG: false
     - API: test - API: staging - API: prod
         │           │           │
      Secret      Secret      Secret
     - KEY: dev123 - KEY: stg456 - KEY: prod789
Key Benefits:
- ✅ One image for all environments
- ✅ Different configs per namespace
- ✅ Secrets separated from code
- ✅ Easy updates - just edit ConfigMap
- ✅ Version controlled - ConfigMaps are YAML files
- Store non-sensitive configuration
- Plain text (anyone can read)
- Easy to edit and version control
- Update without rebuilding images
- Store sensitive data (API keys, passwords, tokens)
- Base64 encoded (not encrypted by default!)
- Only expose to pods that need them
- Can be encrypted at rest with proper K8s setup
- Code: In Docker image
- Config: In ConfigMap
- Secrets: In Secret
- Image: Same everywhere, config differs
In backend.yaml:
env:
- name: APP_ENV
valueFrom:
configMapKeyRef:
name: backend-config
key: APP_ENV

What this does:
- Kubernetes reads the `backend-config` ConfigMap
- Extracts the value of key `APP_ENV`
- Sets it as an environment variable in the container
- Application reads `process.env.APP_ENV`
In backend.yaml:
env:
- name: API_KEY
valueFrom:
secretKeyRef:
name: backend-secret
key: API_KEY

What this does:
- Kubernetes reads the `backend-secret` Secret
- Decodes base64 automatically
- Sets the plain-text value as an environment variable
- Application reads `process.env.API_KEY` (already decoded!)
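As a variation (not what this workshop's manifests use, just a sketch): envFrom can pull in every key from a ConfigMap or Secret at once instead of listing each key:

```yaml
# Hypothetical alternative: import all keys from the ConfigMap and Secret
# as environment variables in one go.
envFrom:
  - configMapRef:
      name: backend-config   # every key becomes an env var (APP_ENV, MAX_ITEMS, ...)
  - secretRef:
      name: backend-secret   # API_KEY, decoded from base64 automatically
```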
Resources:
- Official Kubernetes Docs: https://kubernetes.io/docs/
Thank you for participating!