Kubernetes Cluster Setup on Ubuntu 24.04 with Kubeadm
Prerequisites and Network Configuration (All Nodes)
Set Hostnames: Ensure each node has the correct hostname.
bash
# On master node:
sudo hostnamectl set-hostname master-node
# On worker nodes:
sudo hostnamectl set-hostname worker-node1 # for worker 1
sudo hostnamectl set-hostname worker-node2 # for worker 2
Verify with hostnamectl status that each node’s hostname is set.
Configure Network Interfaces: Set static IPs on each node’s interfaces using Netplan (Ubuntu 24.04 uses netplan). The ens33 interface will be the external network (192.168.178.0/24) and ens35 the internal network (192.168.200.0/24). Only the external interface should have a default gateway and DNS.
Master node (master-node): External IP 192.168.178.35/24, Internal IP 192.168.200.10/24.
Worker node1: External IP 192.168.178.41/24, Internal IP 192.168.200.11/24.
Worker node2: External IP 192.168.178.52/24, Internal IP 192.168.200.12/24.
<details> <summary>Example Netplan configuration for <code>master-node</code> (adjust addresses for each node):</summary>
yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens33:
      addresses: [192.168.178.35/24]
      routes:
        - to: default
          via: 192.168.178.1        # external gateway (gateway4 is deprecated on 24.04)
      nameservers:
        addresses: [192.168.178.3]  # external DNS server
    ens35:
      addresses: [192.168.200.10/24]
      # no gateway on the internal interface
</details>
Apply the netplan config and verify:
bash
sudo netplan apply
ip addr show ens33 # should show the 192.168.178.x address on ens33
ip addr show ens35 # should show the 192.168.200.x address on ens35
ip route # default route via 192.168.178.1 on ens33, none on ens35
resolvectl dns ens33 # should list 192.168.178.3 as the DNS server for ens33 (on 24.04, /etc/resolv.conf normally points at the systemd-resolved stub)
Ensure you can ping the gateway (192.168.178.1) and DNS (192.168.178.3) from each node on ens33, and that nodes can ping each other via the internal IPs (e.g. ping 192.168.200.11 from master, etc.).
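For example, a quick connectivity check from the master node might look like this (adjust the target IPs when running it from the workers):
bash
ping -c3 192.168.178.1   # external gateway
ping -c3 192.168.178.3   # external DNS server
ping -c3 192.168.200.11  # worker-node1 via the internal network
ping -c3 192.168.200.12  # worker-node2 via the internal network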
Disable Swap: Kubernetes requires swap to be off (hbayraktar.medium.com). Run on all nodes:
bash
sudo swapoff -a
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab # comment out any swap entries
Verify swap is off with free -h (Swap should be 0).
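Concretely (swapon is part of util-linux and is available by default):
bash
free -h        # the Swap line should show 0B
swapon --show  # prints nothing when swap is fully disabled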
Load Kernel Modules and Sysctl: Enable required kernel modules and settings for networking:
bash
# Enable overlay and br_netfilter modules on all nodes
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# Set sysctl parameters for Kubernetes networking
sudo tee /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system # apply sysctl changes
These settings ensure that Linux iptables can see bridged traffic, which is required for pod networking (hbayraktar.medium.com), and enable IPv4 forwarding.
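To double-check that the modules are loaded and the sysctl values are active, you can run:
bash
lsmod | grep -E 'overlay|br_netfilter'     # both modules should be listed
sysctl net.ipv4.ip_forward                 # should print net.ipv4.ip_forward = 1
sysctl net.bridge.bridge-nf-call-iptables  # should print net.bridge.bridge-nf-call-iptables = 1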
Firewall: If Ubuntu’s UFW or any firewall is enabled, disable it or allow Kubernetes traffic. For simplicity, disable UFW on all nodes:
bash
sudo ufw disable
Note: In a production environment, you would configure specific rules instead.
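If you prefer to keep UFW enabled, a minimal rule set based on the well-known Kubernetes ports might look like the sketch below (adjust to your environment; Calico's inter-node traffic, e.g. BGP on TCP 179 and IPIP, must also be allowed between nodes):
bash
# Control-plane node
sudo ufw allow 6443/tcp         # Kubernetes API server
sudo ufw allow 2379:2380/tcp    # etcd server client API
sudo ufw allow 10250/tcp        # kubelet API
sudo ufw allow 10257/tcp        # kube-controller-manager
sudo ufw allow 10259/tcp        # kube-scheduler
# Worker nodes
sudo ufw allow 10250/tcp        # kubelet API
sudo ufw allow 30000:32767/tcp  # NodePort services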
Clean Up Any Previous Kubernetes Installation (All Nodes)
If these machines had a Kubernetes installation before, clean up leftovers to avoid conflicts:
bash
sudo kubeadm reset -f # resets kubeadm cluster state on this node
sudo systemctl stop kubelet
sudo systemctl stop containerd
sudo apt-get purge -y kubelet kubeadm kubectl
sudo apt-get autoremove -y
sudo rm -rf /etc/cni/net.d/* # remove old CNI configurations
sudo rm -rf /etc/kubernetes/ # remove old Kubernetes configs/certs
sudo rm -rf /var/lib/kubelet/* # reset kubelet state
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X # flush iptables
Explanation: kubeadm reset removes cluster config from the node, and the additional commands remove any residual files or network rules from a previous setup. After this, you can reinstall kubeadm/kubelet if needed. (If this is a fresh install, you can skip the package purge and reinstall steps.)
Install Container Runtime and Kubernetes Packages (All Nodes)
Install Containerd: We will use Containerd as the container runtime.
bash
# Install dependencies and add Docker’s official repo (for containerd package)
sudo apt update && sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y containerd.io
Configure containerd to use systemd cgroups, as recommended for Kubernetes (hbayraktar.medium.com):
bash
sudo mkdir -p /etc/containerd
sudo containerd config default > /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
Verify Containerd is running:
bash
systemctl status containerd --no-pager
You should see it as active (running).
Install kubeadm, kubelet, kubectl: Add the Kubernetes apt repository and install the latest stable version of these tools on all nodes (hbayraktar.medium.com):
bash
# Add Kubernetes apt key and repository
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
# Install Kubernetes tools
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl # prevent automatic upgrades
Note: Replace v1.30 with the latest stable version if newer (the above example uses Kubernetes 1.30.x). If the repository is already set up and packages installed, ensure they are up to date.
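If you want to see which patch releases the repository currently offers before installing, standard apt tooling works:
bash
apt-cache madison kubeadm | head -5  # lists the newest available 1.30.x package versions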
After installation, verify versions:
bash
kubeadm version # should show the kubeadm version (e.g., v1.30.x)
kubelet --version # should show kubelet version
Configure Kubelet to Use Internal Network (All Nodes)
To ensure Kubernetes uses the internal 192.168.200.x network for intra-cluster communication, configure kubelet on each node to advertise the internal IP:
bash
# On master-node:
echo 'KUBELET_EXTRA_ARGS="--node-ip=192.168.200.10"' | sudo tee /etc/default/kubelet
# On worker-node1:
echo 'KUBELET_EXTRA_ARGS="--node-ip=192.168.200.11"' | sudo tee /etc/default/kubelet
# On worker-node2:
echo 'KUBELET_EXTRA_ARGS="--node-ip=192.168.200.12"' | sudo tee /etc/default/kubelet
Reload and enable kubelet:
bash
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl restart kubelet
This sets kubelet's --node-ip to the internal NIC's IP. Each node will report its Internal-IP as the 192.168.200.x address. You can verify with systemctl status kubelet (it should be active; errors about not finding the API server are expected until we initialize the cluster).
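A quick per-node sanity check (the file is the one written above; full node details only appear once the cluster is up):
bash
cat /etc/default/kubelet                        # should contain --node-ip=192.168.200.x for this node
sudo systemctl status kubelet --no-pager | head -5  # enabled and running (errors are expected until kubeadm init/join has run)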
Initialize the Control Plane (Master Node)
Perform the cluster initialization on the master node (master-node) using kubeadm:
bash
# On master-node (run as root or with sudo):
sudo kubeadm init \
--apiserver-advertise-address=192.168.200.10 \
--apiserver-cert-extra-sans=192.168.178.35 \
--pod-network-cidr=10.244.0.0/16 \
--node-name master-node
Breakdown of the flags:
--apiserver-advertise-address=192.168.200.10: The API server advertises this internal IP as the control-plane endpoint (discuss.kubernetes.io).
--apiserver-cert-extra-sans=192.168.178.35: Adds the external IP as an additional SAN in the API server certificate, so the API is reachable via both interfaces (devopscube.com).
--pod-network-cidr=10.244.0.0/16: Uses a pod network range in 10.x.x.x to avoid overlap with the host networks. We choose 10.244.0.0/16 as an example; it must match the CNI's pod network later.
--node-name master-node: Sets the master's node name (optional; kubeadm uses the hostname by default).
The initialization will take a few minutes. Upon success you should see "Your Kubernetes control-plane has initialized successfully!" followed by a kubeadm join command (with token and CA hash) in the output.
Post-init steps (on master node):
Set up kubectl for the admin user:
bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This copies the admin kubeconfig to your user account so you can run kubectl (kubernetes.io).
(Optional) Verify control-plane components are running:
bash
kubectl get pods -n kube-system
You should see the API server, etcd, controller-manager, and scheduler pods running on the master. The CoreDNS pods will stay in Pending state until we install a network CNI (devopscube.com).
Check that the API server is accessible on both internal and external addresses:
bash
# From master node, test the API server endpoints (ignore cert warning with -k):
curl -k https://192.168.200.10:6443/version # internal API endpoint
curl -k https://192.168.178.35:6443/version # external API endpoint
Both should return a JSON with version info, confirming the API is listening on both interfaces. (The certificate is valid for the external IP because we set the SAN.)
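You can also confirm that both addresses are really baked into the certificate by inspecting its SANs with openssl (a read-only check against the cert kubeadm generated at the standard path):
bash
sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
# the IP Address entries should include 192.168.200.10 and 192.168.178.35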
Join Worker Nodes to the Cluster
Now join the two workers to the cluster using the token from the kubeadm init output. Use the join command on worker-node1 and worker-node2 (as root or with sudo):
For example, if the token and hash from init were --token <token> --discovery-token-ca-cert-hash sha256:<hash>, run on each worker:
bash
sudo kubeadm join 192.168.200.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --node-name worker-node1
(Use the actual token/hash from your init output. For worker-node2, use --node-name worker-node2.)
This performs the TLS bootstrap and registers each worker with the control plane at 192.168.200.10. If you missed the token, you can generate a new one on the master with kubeadm token create --print-join-command, as shown below.
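For example, to print a fresh, ready-to-paste join command:
bash
# On master-node:
sudo kubeadm token create --print-join-command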
After running the join on both workers, check nodes status on the master:
bash
kubectl get nodes -o wide
You should see master-node, worker-node1, and worker-node2 listed. Initially, the workers will likely show STATUS = NotReady – this is expected because we haven't installed the CNI yet; the network plugin is required for the nodes to become Ready (kubernetes.io, devopscube.com).
Example output (before CNI):
text
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP
master-node Ready control-plane 5m v1.30.x 192.168.200.10 <none>
worker-node1 NotReady <none> 1m v1.30.x 192.168.200.11 <none>
worker-node2 NotReady <none> 1m v1.30.x 192.168.200.12 <none>
Notice: The Internal-IP for each node is its 192.168.200.x address (as configured), and External-IP is <none> since no cloud provider is populating it. The master is Ready (but tainted so it won't schedule regular pods), and the workers are awaiting networking.
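If you want to check the reported addresses programmatically, a small jsonpath query over the node objects does it (plain kubectl, no extra tooling):
bash
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
# each output line should pair a node name with its 192.168.200.x address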
Install Calico CNI (Pod Network on 10.x.x.x)
We will deploy Calico as the Container Network Interface plugin to enable pod networking. We’ll use the latest Calico manifest compatible with our Kubernetes version.
Apply Calico manifest: On the master node, download the Calico manifest and update the pod network CIDR to our chosen 10.244.0.0/16 range (if it’s not already 10.x):
bash
curl -O -L https://docs.projectcalico.org/manifests/calico.yaml
sed -i 's/192.168.0.0\/16/10.244.0.0\/16/' calico.yaml # replace default pod CIDR with 10.244.0.0/16
kubectl apply -f calico.yaml # apply Calico CNI resources
Note: The manifest historically defaults to 192.168.0.0/16 for pods, so we replace it with 10.244.0.0/16 to meet our requirement of a 10.x.x.x pod network. In newer manifests the CALICO_IPV4POOL_CIDR variable may be commented out; if so, uncomment it and set it to 10.244.0.0/16 so it matches the --pod-network-cidr passed to kubeadm.
Configure Calico to use Internal Interfaces: Since each node has multiple NICs, we need Calico to utilize the internal network (ens35) for inter-node pod traffic. We will set Calico’s IP autodetection method to pick the 192.168.200.x IP on each node:
bash
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=cidr=192.168.200.0/24
This updates the Calico daemonset to only consider IPs in 192.168.200.0/24 when detecting each node's address for node-to-node networking (docs.tigera.io). The Calico pods will restart with this setting. (Alternatively, we could use interface=ens35 to detect by interface name.)
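To confirm the daemonset actually picked up the variable, a read-only check is enough:
bash
kubectl get daemonset calico-node -n kube-system -o yaml | grep -A1 IP_AUTODETECTION_METHOD
# should show value: cidr=192.168.200.0/24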
Verify Calico Pods: After a minute, check that Calico components are running:
bash
kubectl get pods -n kube-system
Look for pods named calico-node-* (one per node) and calico-kube-controllers. They should be in Running state. You can also check that CoreDNS pods are now Running (they will start once networking is active).
Confirm Nodes are Ready:
bash
kubectl get nodes -o wide
All nodes (master and both workers) should show STATUS = Ready. The Internal-IP should be the 192.168.200.x addresses we set, confirming inter-node communication is via the internal network. For example:
text
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP
master-node Ready control-plane 10m v1.30.x 192.168.200.10 <none>
worker-node1 Ready <none> 6m v1.30.x 192.168.200.11 <none>
worker-node2 Ready <none> 6m v1.30.x 192.168.200.12 <none>
Explanation: Calico has created a 10.x.x.x overlay network for pods, and it uses the internal NIC (192.168.200.x) for routing pod traffic between nodes. This keeps pod-to-pod traffic on the internal network. The external network (192.168.178.x) is used only for outbound internet access and is not involved in pod networking.
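If you want to see this at the routing level, each node's route table should contain routes for the other nodes' pod blocks pointing at their 192.168.200.x addresses (a sketch assuming Calico's default IPIP encapsulation, which uses the tunl0 device):
bash
# On any node:
ip route | grep 10.244
# expect routes for the other nodes' pod blocks via 192.168.200.x on dev tunl0 (proto bird)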
Validation and Testing
Now that the cluster is set up, perform a few tests to ensure everything is working as expected:
Core DNS: Check that the CoreDNS service is up. Run:
bash
kubectl get pods -n kube-system -o wide | grep coredns
You should see two coredns-... pods in Running state now that the CNI is up (they may be scheduled on any node, since CoreDNS tolerates the control-plane taint). You can test DNS by launching a busybox pod:
bash
kubectl run -ti --rm dns-test --image=busybox:stable --restart=Never -- nslookup kubernetes.default
This should return the cluster IP of the Kubernetes service, confirming DNS resolution within the cluster.
Pod Networking: Test that pods can communicate across nodes on the internal network. For example, deploy a simple test deployment:
bash
kubectl create deployment pingtest --image=alpine --replicas=2 -- sleep 1000
kubectl get pods -o wide -l app=pingtest
Note the IP addresses of the two pods and which node each is on. Exec into one pod and ping the other:
bash
POD_NAME=$(kubectl get pod -l app=pingtest -o jsonpath='{.items[0].metadata.name}')
kubectl exec -ti $POD_NAME -- ping -c4 <IP-of-other-pod>
The ping should succeed, indicating pod-to-pod connectivity via the Calico network. All pod traffic is using the 10.244.0.0/16 network and traversing the ens35 interfaces internally.
External Connectivity: Confirm that nodes and pods can reach the outside world via the external interface. For example, from any node, ping an external IP and a DNS name (a pod-level check follows below):
bash
ping -c4 8.8.8.8 # from a node, test external connectivity
ping -c4 google.com # DNS test (should resolve via 192.168.178.3 and ping)
This uses the default route (192.168.178.1 via ens33) and external DNS, as configured.
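To check egress from inside a pod as well (assuming the default policy and capabilities allow ICMP, which they normally do on a fresh cluster), a throwaway busybox pod works:
bash
kubectl run -ti --rm nettest --image=busybox:stable --restart=Never -- ping -c4 8.8.8.8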
API Server Access: To use the Kubernetes API externally, you can copy the kubeconfig from the master and edit the server address to https://192.168.178.35:6443. Because we added the external IP in the cert, you can safely connect to the API via the external address. For example, from a machine on the 192.168.178.0 network:
bash
export KUBECONFIG=admin.conf # copy this file from master-node
kubectl get nodes # should work from external network
Or simply verify with curl:
bash
curl -k https://192.168.178.35:6443/healthz # should return "ok"
Congratulations! You now have a 3-node Kubernetes cluster running on Ubuntu 24.04. The control plane is advertising the internal IP and is reachable on both internal and external networks, the Calico CNI is set up with a 10.x.x.x pod network, and inter-node pod communication is confined to the internal network (ens35). You can proceed to deploy workloads on this cluster.
References:
Kubernetes official docs – kubeadm init/join usage and setting up kubectl (kubernetes.io).
DevOpsCube – kubeadm cluster setup guide (using --apiserver-advertise-address, --apiserver-cert-extra-sans, etc.) (devopscube.com).
Calico documentation – configuring IP autodetection for multi-NIC environments (docs.tigera.io).
Ubuntu & Kubernetes setup guides – disabling swap, enabling kernel modules, installing containerd and the Kubernetes packages (hbayraktar.medium.com).
Kubernetes network design – using a 10.0.0.0/8 pod CIDR to avoid overlap with host networks (docs.tigera.io).
Example commands and outputs were derived from the above references and common kubeadm deployment steps (rakeshjain-devops.medium.com, devopscube.com).