Kubernetes Architecture
Kubernetes is designed as a distributed system that orchestrates containerized applications at scale. At its core, this architecture follows a decoupled control plane model where components work together to manage infrastructure and application workloads. Think of it as a self-healing orchestration engine that ensures your containers run reliably across your cloud or on-premises environment.
This section dives into the foundational architecture that powers Kubernetes, breaking down each critical component with practical examples and clear relationships.
Master Node
The Master Node (also called the control plane) is the central decision-making layer of Kubernetes. It does not run application workloads; it's the brains that manage the entire cluster. The control plane consists of multiple interdependent components running on a single machine (or distributed across multiple machines for high availability).
The Master Node handles:
- Cluster state management (what pods should run where)
- Policy enforcement (resource quotas, network policies)
- Event coordination (scheduling, scaling, health checks)
Here's how you verify the Master Node's health in a real cluster:
<code class="language-bash">kubectl get componentstatuses</code>
Example output:
<code>NAME                 STATUS    MESSAGE   ERROR
etcd-0               Healthy   ok
scheduler            Healthy   ok
controller-manager   Healthy   ok</code>
This command shows the core control plane components and their operational status, which is useful for diagnosing cluster issues. Note that the ComponentStatus API has been deprecated since v1.19; on newer clusters, inspect the control plane pods in the kube-system namespace instead.
Worker Nodes
Worker Nodes are the physical or virtual machines that run your application workloads. They execute the containerized applications defined in Kubernetes and form the execution layer of the cluster.
Key characteristics:
- No control plane logic: worker nodes carry out instructions from the control plane but make no cluster-wide decisions themselves
- Resource isolation: Nodes manage their own compute, memory, and storage
- Dynamic scaling: Kubernetes can add/remove nodes based on demand
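Adding and removing nodes is typically automated (e.g., by a cluster autoscaler), but you can also take a node in and out of service by hand. A minimal sketch, assuming cluster access and a node named worker-node-02 (the name is illustrative):

```shell
# Node to take out of service (illustrative name).
NODE="worker-node-02"

# Mark the node unschedulable so no new pods land on it.
kubectl cordon "$NODE"

# Evict the node's pods so they reschedule elsewhere.
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data

# After maintenance, return the node to the scheduling pool.
kubectl uncordon "$NODE"
```

Cordoning before draining is the standard maintenance sequence: it guarantees the scheduler will not place replacement pods back onto the node being emptied.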
When you run kubectl get nodes, you see the list of worker nodes in your cluster:
<code class="language-bash">kubectl get nodes</code>
Example output:
<code>NAME             STATUS   ROLES    AGE   VERSION
worker-node-01   Ready    <none>   5h    v1.28.0
worker-node-02   Ready    <none>   4h    v1.28.0</code>
Worker nodes are where your containers live: the production environment for your applications.
API Server
The API Server is Kubernetes’ primary interface for all communication with the control plane. It acts as the RESTful gateway between clients (like kubectl), applications, and the control plane components.
Why it matters:
Every action in Kubernetes (e.g., creating a pod, scaling a deployment) must go through the API Server. It validates requests, enforces cluster policies, and routes them to the appropriate component.
Real-world example:
When you run kubectl create deployment nginx --image=nginx:alpine, this command sends a request to the API Server, which then:
- Validates the deployment spec
- Creates a new deployment object
- Triggers the scheduler to assign pods
You can interact with the API Server directly using curl (with caution for security):
<code class="language-bash">curl -sSL -H "Authorization: Bearer $(kubectl create token default)" https://api.cluster.example.com/apis/apps/v1/namespaces/default/deployments</code>
Pro tip: the API Server is the single entry point for all client interactions; this design ensures security and consistency across the cluster.
Scheduler
The Scheduler is the component that decides where to run your containers. It runs continuously on the Master Node and matches pods to worker nodes based on:
- Resource requests (CPU, memory)
- Node labels (e.g., zone=us-east-1)
- Pod affinity/anti-affinity rules
- Current node capacity
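The label criterion above can be exercised directly with a nodeSelector. A minimal sketch, assuming cluster access; the node name, pod name, and zone label are illustrative:

```shell
# Pod manifest pinned to nodes carrying the label zone=us-east-1.
cat > zone-pinned-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned
spec:
  nodeSelector:
    zone: us-east-1
  containers:
  - name: app
    image: nginx:alpine
EOF

# Label a node so the scheduler has a match, then submit the pod.
kubectl label nodes worker-node-01 zone=us-east-1
kubectl apply -f zone-pinned-pod.yaml
```

If no node carries the label, the pod stays Pending and `kubectl describe pod zone-pinned` reports a FailedScheduling event explaining the mismatch.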
How it works:
- A pod is created via the API Server
- The Scheduler evaluates node suitability
- It assigns the pod to the best node (lowest resource usage, matching labels)
- The pod is then scheduled on the node
Practical demonstration:
Create a simple pod with resource constraints and observe scheduling:
<code class="language-bash">kubectl run busybox -n test-namespace --image=busybox --command -- sleep 3600</code>
This pod declares no resource requests, so the scheduler only needs to find a node with basic available capacity. You can confirm the assignment with kubectl get pod busybox -n test-namespace -o wide, which shows the chosen node in the NODE column.
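To make the scheduler's resource check explicit, declare requests in a manifest; the scheduler will only place the pod on a node with at least that much unreserved capacity. A minimal sketch (the pod name and request sizes are illustrative, and applying it assumes cluster access):

```shell
# Pod manifest with explicit CPU/memory requests; the scheduler filters out
# any node whose unreserved capacity is below these values.
cat > busybox-requests.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-requests
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 500m
        memory: 64Mi
EOF

kubectl apply -f busybox-requests.yaml
```

Requests affect scheduling only; to cap actual usage at runtime you would add a `limits` section alongside `requests`.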
Controller Manager
The Controller Manager runs background controllers that maintain the desired state of your cluster. These controllers act like “watchdogs” ensuring your cluster stays aligned with your specifications.
Key controllers include:
- Node Controller: Manages worker nodes (e.g., detects unhealthy nodes)
- Replication (ReplicaSet) Controller: Ensures the desired number of pod replicas are running
- Endpoint Controller: Manages service endpoints
Real-world impact:
When you delete a pod owned by a Deployment, the ReplicaSet controller detects the missing replica and creates a replacement, which the Scheduler then assigns to a node. This is how Kubernetes achieves self-healing.
Check controller status:
<code class="language-bash">kubectl get pods -n kube-system -l component=kube-controller-manager</code>
Example output:
<code>NAME                                READY   STATUS    RESTARTS   AGE
kube-controller-manager-master-01   1/1     Running   0          15m</code>
The Controller Manager is the unsung hero behind Kubernetes’ resilience.
Kubelet
The Kubelet is the critical agent running on every worker node. It's Kubernetes' primary interface to the node itself, ensuring containers run as intended and reporting node health to the Master Node.
Core responsibilities:
- Pod lifecycle management: Starts/stops containers
- Health monitoring: Checks container health via probes
- Resource reporting: Sends node metrics to the API Server
- Security enforcement: Applies pod-level security settings (e.g., securityContext) when starting containers
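The health-monitoring duty above is driven by probes you declare in the pod spec; the kubelet on the node executes them. A minimal sketch (the pod name and probe settings are illustrative, and applying it assumes cluster access):

```shell
# Pod with an HTTP liveness probe: the kubelet issues GET / on port 80
# every 10 seconds and restarts the container after 3 consecutive failures.
cat > probed-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: probed
spec:
  containers:
  - name: web
    image: nginx:alpine
    livenessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
      failureThreshold: 3
EOF

kubectl apply -f probed-pod.yaml
```

Because the probe runs on the node itself, liveness checks keep working even if the node temporarily loses contact with the control plane.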
Practical verification:
Check the Kubelet status on a worker node:
<code class="language-bash">kubectl describe node worker-node-01 | grep "Kubelet Version"</code>
Example output:
<code>Kubelet Version:            v1.28.0</code>
This output shows the Kubelet is active and reporting node details to the cluster: the bridge between your infrastructure and Kubernetes.
Summary
Kubernetes architecture is a modular, resilient system where the Master Node (control plane) makes decisions, and Worker Nodes execute workloads. The API Server acts as the central interface, while the Scheduler, Controller Manager, and Kubelet form the operational backbone that ensures your containers run reliably, scale automatically, and self-heal when needed. This layered design enables cloud-native applications to be both resilient and efficient, without requiring manual intervention.