
Running a Local Cluster

● Beginner · ⏱ 15 min read · tooling

Before deploying to a cloud cluster, every Kubernetes practitioner should have a local cluster for experimentation, testing, and learning. Three tools dominate: Minikube, kind (Kubernetes IN Docker), and k3d. Each spins up a fully functional Kubernetes cluster on your laptop using different approaches.

📚 Official Reference: Based on kubernetes.io/docs/tasks/tools/.

Why Run Locally?

A local cluster gives you a disposable playground: you can experiment with manifests, break things, and rebuild in seconds, without cloud costs, shared-cluster etiquette, or network latency. It is also the fastest way to validate Helm charts, controllers, and CI pipelines before they touch a real environment.

Tool Comparison

| Feature | Minikube | kind | k3d |
|---|---|---|---|
| Kubernetes distribution | Full upstream K8s | Full upstream K8s | k3s (lightweight) |
| Cluster nodes | VM or container | Docker containers | Docker containers |
| Multi-node support | Yes (limited) | Yes (native) | Yes (native) |
| Load balancer | `minikube tunnel` | Needs MetalLB | Built-in (k3s servicelb, plus Traefik ingress) |
| Built-in dashboard | Yes | No | No |
| Startup time | ~2 min | ~30 sec | ~20 sec |
| Resource usage | Higher (VM) | Low | Lowest |
| Best for | Beginners, add-ons | CI/CD, testing | Fast dev, IoT/edge |

Minikube

Minikube is the original local Kubernetes tool, backed by the Kubernetes project. It runs a single-node cluster inside a VM (or Docker container) and ships with a rich add-on ecosystem.

Installation

# macOS (Homebrew)
brew install minikube

# Linux
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Windows (winget)
winget install Kubernetes.minikube

Start a cluster

# start with Docker driver (recommended if Docker is installed)
minikube start --driver=docker

# specify Kubernetes version
minikube start --kubernetes-version=v1.30.0

# allocate more resources
minikube start --cpus=4 --memory=8192

# check status
minikube status
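
Minikube can also run several independent clusters side by side via profiles. A quick sketch (the profile name `second` below is just an illustrative example):

```shell
# create a second, independent cluster under a named profile
minikube start -p second --driver=docker

# list all profiles and their status
minikube profile list

# point an individual command at a specific profile
minikube -p second status

# remove only that profile's cluster
minikube delete -p second
```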

Useful extras

# open the built-in web dashboard
minikube dashboard

# enable the metrics-server add-on
minikube addons enable metrics-server

# expose a LoadBalancer service locally
minikube tunnel   # run in a separate terminal

# stop the cluster (preserves state)
minikube stop

# delete the cluster entirely
minikube delete

kind

kind (Kubernetes IN Docker) runs each cluster node as a Docker container. It is the tool of choice for CI/CD pipelines because it starts quickly and has no VM overhead.

Installation

# macOS / Linux (go install)
go install sigs.k8s.io/kind@latest

# macOS (Homebrew)
brew install kind

# Linux (binary)
curl -Lo ./kind https://kind.sigs.k8s.io/dl/latest/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

Start a cluster

# create a single-node cluster named "dev"
kind create cluster --name dev

# create a multi-node cluster from a config file
kind create cluster --name dev --config kind-config.yaml

# list all kind clusters
kind get clusters

# delete a cluster
kind delete cluster --name dev
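
Since kind is popular in CI, the commands above are often combined into a short smoke-test script. A minimal sketch, assuming your manifests live in a `manifests/` directory (both the cluster name `ci` and the path are placeholders):

```shell
#!/usr/bin/env bash
# Minimal CI smoke test with kind: create, verify, always clean up.
set -euo pipefail

kind create cluster --name ci --wait 60s   # --wait blocks until the control plane is ready
trap 'kind delete cluster --name ci' EXIT  # tear down even if a later step fails

kubectl apply -f manifests/
kubectl wait --for=condition=Available deployment --all --timeout=120s
```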

Multi-node cluster config

kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

Loading local images into kind

kind doesn't share your local Docker image cache. Load images manually:

docker build -t my-app:dev .
kind load docker-image my-app:dev --name dev
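
Once loaded, run the image with a pull policy that skips the registry; otherwise the kubelet may try to pull `my-app:dev` from Docker Hub and fail:

```shell
# Never tells the kubelet to use only the preloaded image, not a registry
kubectl run my-app --image=my-app:dev --image-pull-policy=Never

# confirm the pod started from the preloaded image
kubectl get pod my-app
```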

k3d

k3d wraps k3s (a CNCF-certified lightweight Kubernetes) in Docker containers. It is the fastest option and the lightest on memory, ideal for developer laptops; its k3s core was designed for edge devices such as the Raspberry Pi.

Installation

# macOS (Homebrew)
brew install k3d

# Linux / macOS (install script)
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

Start a cluster

# create a 1-server 2-agent cluster named "dev"
k3d cluster create dev --agents 2

# map host port 8080 to cluster LoadBalancer port 80
k3d cluster create dev --agents 2 -p "8080:80@loadbalancer"

# list clusters
k3d cluster list

# stop a cluster
k3d cluster stop dev

# delete a cluster
k3d cluster delete dev
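
With the `-p "8080:80@loadbalancer"` mapping above, host port 8080 reaches the bundled Traefik ingress controller. A hedged sketch of routing an app through it (the deployment name `web` and host `web.example.com` are illustrative placeholders):

```shell
# deploy a test app and route it through the bundled Traefik ingress
kubectl create deployment web --image=nginx:1.27
kubectl expose deployment web --port=80
kubectl create ingress web --rule="web.example.com/*=web:80"

# send the matching Host header so Traefik matches the ingress rule
curl -H "Host: web.example.com" http://localhost:8080
```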

💡 k3s vs full Kubernetes

k3s omits some rarely used features (cloud-provider integrations, in-tree storage drivers) and, in a single-server setup, replaces etcd with SQLite by default. For learning and development these differences rarely matter; for production, use full upstream Kubernetes.
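
If you do want etcd behavior locally, k3s switches from SQLite to its embedded etcd when started with multiple server nodes, which k3d can arrange with one flag (cluster name `ha` is an example):

```shell
# three server (control-plane) nodes make k3s use embedded etcd instead of SQLite
k3d cluster create ha --servers 3 --agents 2
kubectl get nodes
```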

Your First Workload

Whichever tool you chose, it has already updated your kubeconfig, so the cluster is now available through kubectl. Let's deploy nginx and access it:

# verify cluster is reachable
kubectl cluster-info
kubectl get nodes

# deploy nginx
kubectl create deployment nginx --image=nginx:1.27 --replicas=2

# expose it as a service
kubectl expose deployment nginx --port=80 --type=NodePort

# find the NodePort assigned
kubectl get service nginx
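
In scripts, the assigned NodePort can be extracted non-interactively with a jsonpath query:

```shell
# pull just the assigned NodePort out of the service spec
NODE_PORT=$(kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}')
echo "$NODE_PORT"

# on Minikube, combine it with the node IP to reach the service
curl "http://$(minikube ip):$NODE_PORT"
```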

With Minikube, open the service directly:

minikube service nginx   # opens in browser

With kind or k3d, use port-forward:

kubectl port-forward svc/nginx 8080:80
# now visit http://localhost:8080

Clean up:

kubectl delete deployment nginx
kubectl delete service nginx

Choosing the Right Tool

| Use case | Recommended tool |
|---|---|
| Learning Kubernetes for the first time | Minikube — dashboard and add-ons make it beginner-friendly |
| Testing Helm charts and manifests in CI | kind — fast, lightweight, multi-node, no VM |
| Day-to-day development on a laptop | k3d — fastest startup, least memory, built-in load balancer |
| Testing multi-node failure scenarios | kind or k3d — both support multi-node out of the box |
| Edge or IoT simulation | k3d (k3s) — designed for resource-constrained environments |