
Kubernetes Cluster Setup on Ubuntu 24.04 with Kubeadm

Prerequisites and Network Configuration (All Nodes)

Set Hostnames: Ensure each node has the correct hostname.
```bash
# On master node:
sudo hostnamectl set-hostname master-node

# On worker nodes:
sudo hostnamectl set-hostname worker-node1   # for worker 1
sudo hostnamectl set-hostname worker-node2   # for worker 2
```

<details> <summary>Example Netplan configuration for <code>master-node</code> (adjust addresses for each node):</summary>

```yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens33:
      addresses: [192.168.178.35/24]
      routes:
        - to: default
          via: 192.168.178.1              # external gateway
      nameservers:
        addresses: [192.168.178.3]        # external DNS server
    ens35:
      addresses: [192.168.200.10/24]
      # No gateway on the internal interface
```

</details>
Apply the netplan config and verify:
```bash
sudo netplan apply
ip addr show ens33     # should show the 192.168.178.x address on ens33
ip addr show ens35     # should show the 192.168.200.x address on ens35
ip route               # default route via 192.168.178.1 on ens33, none on ens35
resolvectl status ens33   # "DNS Servers" should list 192.168.178.3
```

Disable Swap: Kubernetes requires swap to be off. Run on all nodes:
```bash
sudo swapoff -a
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab   # comment out any swap entries
```

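Verify that swap is off:
```bash
swapon --show   # should print nothing
free -h         # the Swap line should show 0B total
```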
Load Kernel Modules and Sysctl: Enable required kernel modules and settings for networking:
```bash
# Enable overlay and br_netfilter modules on all nodes
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set sysctl parameters for Kubernetes networking
sudo tee /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system   # apply sysctl changes
```

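Confirm the modules are loaded and the settings took effect:
```bash
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should be 1
```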
Firewall: If Ubuntu’s UFW or any firewall is enabled, disable it or allow Kubernetes traffic. For simplicity, disable UFW on all nodes:
```bash
sudo ufw disable
```

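If you prefer to keep UFW enabled instead of disabling it, open the ports Kubernetes and Calico need; a sketch, assuming the default kubeadm port assignments and Calico's BGP/VXLAN ports:
```bash
# Control-plane node
sudo ufw allow 6443/tcp          # Kubernetes API server
sudo ufw allow 2379:2380/tcp     # etcd server client API
sudo ufw allow 10250/tcp         # kubelet API
sudo ufw allow 10257/tcp         # kube-controller-manager
sudo ufw allow 10259/tcp         # kube-scheduler

# Worker nodes (and Calico traffic on all nodes)
sudo ufw allow 10250/tcp         # kubelet API
sudo ufw allow 30000:32767/tcp   # NodePort services
sudo ufw allow 179/tcp           # Calico BGP
sudo ufw allow 4789/udp          # Calico VXLAN (if VXLAN encapsulation is used)
```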
Clean Up Any Previous Kubernetes Installation (All Nodes)

If these machines had a Kubernetes installation before, clean up leftovers to avoid conflicts:

```bash
sudo kubeadm reset -f            # reset kubeadm cluster state
sudo systemctl stop kubelet
sudo systemctl stop containerd
sudo apt-get purge -y kubelet kubeadm kubectl
sudo apt-get autoremove -y
sudo rm -rf /etc/cni/net.d/*     # remove old CNI configurations
sudo rm -rf /etc/kubernetes/     # remove old Kubernetes configs/certs
sudo rm -rf /var/lib/kubelet/*   # reset kubelet state
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X   # flush iptables rules
```


Explanation: kubeadm reset removes cluster config from the node, and the additional commands remove any residual files or network rules from a previous setup. After this, you can reinstall kubeadm/kubelet if needed. (If this is a fresh install, you can skip the package purge and reinstall steps.)

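Before reinstalling, a quick sanity check that nothing from the old cluster is still active (a sketch; adjust to whatever you actually removed):
```bash
ls /etc/kubernetes /etc/cni/net.d /var/lib/kubelet 2>/dev/null   # should be empty or missing
sudo ss -lntp | grep -E ':6443|:10250' || echo "no Kubernetes ports listening"
```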
Install Container Runtime and Kubernetes Packages (All Nodes)

Install Containerd: We will use Containerd as the container runtime.
```bash
# Install dependencies and add Docker’s official repo (for the containerd package)
sudo apt update && sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/docker.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y containerd.io
```

Configure Containerd to use systemd cgroups (recommended for Kubernetes):
```bash
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd
```

Verify Containerd is running:
```bash
systemctl status containerd --no-pager
```

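You can also confirm the cgroup driver change took effect:
```bash
grep -n 'SystemdCgroup' /etc/containerd/config.toml   # should print SystemdCgroup = true
```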
Install kubeadm, kubelet, kubectl: Add the Kubernetes apt repository and install the latest stable version of these tools on all nodes:
```bash
# Add Kubernetes apt key and repository
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install Kubernetes tools
sudo apt update
sudo apt install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl   # prevent automatic upgrades
```

Note: Replace v1.30 with the latest stable version if newer (the above example uses Kubernetes 1.30.x). If the repository is already set up and packages installed, ensure they are up to date.
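To see exactly which patch versions the configured repository offers (and optionally pin one), standard apt queries work, for example:
```bash
apt-cache madison kubeadm    # list versions available from the v1.30 repository
# sudo apt install -y kubelet=<version> kubeadm=<version> kubectl=<version>   # pin a specific version if desired
```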
After installation, verify versions:
```bash
kubeadm version    # should show the kubeadm version (e.g., v1.30.x)
kubelet --version  # should show the kubelet version
```


Configure Kubelet to Use Internal Network (All Nodes)

To ensure Kubernetes uses the internal 192.168.200.x network for intra-cluster communication, configure kubelet on each node to advertise the internal IP:

```bash
# On master-node:
echo 'KUBELET_EXTRA_ARGS="--node-ip=192.168.200.10"' | sudo tee /etc/default/kubelet

# On worker-node1:
echo 'KUBELET_EXTRA_ARGS="--node-ip=192.168.200.11"' | sudo tee /etc/default/kubelet

# On worker-node2:
echo 'KUBELET_EXTRA_ARGS="--node-ip=192.168.200.12"' | sudo tee /etc/default/kubelet
```


Reload and enable kubelet:

```bash
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl restart kubelet
```


This sets the --node-ip for kubelet to the internal NIC’s IP. Each node will report its Internal-IP as the 192.168.200.x address. You can verify with systemctl status kubelet (it may restart repeatedly and log errors about not finding the API server; that is expected until the cluster is initialized).

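A quick way to double-check that kubelet will pick up the flag (a sketch; on Ubuntu the kubeadm systemd drop-in sources /etc/default/kubelet):

```bash
cat /etc/default/kubelet                       # should contain the node-ip line for this node
systemctl cat kubelet | grep EnvironmentFile   # the kubeadm drop-in should reference /etc/default/kubelet
```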
Initialize the Control Plane (Master Node)

Perform the cluster initialization on the master node (master-node) using kubeadm:

```bash
# On master-node (run as root or with sudo):
sudo kubeadm init \
  --apiserver-advertise-address=192.168.200.10 \
  --apiserver-cert-extra-sans=192.168.178.35 \
  --pod-network-cidr=10.244.0.0/16 \
  --node-name master-node
```


Breakdown of the flags:

--apiserver-advertise-address=192.168.200.10: the API server advertises and listens on the internal NIC, so control-plane traffic stays on the internal network.

--apiserver-cert-extra-sans=192.168.178.35: adds the external IP as an extra Subject Alternative Name in the API server certificate, so the API can also be reached via the external address without TLS errors.

--pod-network-cidr=10.244.0.0/16: the pod network range; it must match the CIDR used by the CNI (Calico) installed later.

--node-name master-node: the name under which this control-plane node registers in the cluster.

The initialization will take a few minutes. Upon success you should see: “Your Kubernetes control-plane has initialized successfully!” followed by a kubeadm join command (with a token and CA cert hash) in the output.

Post-init steps (on master node):

Set up kubectl for the admin user:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

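Alternatively, if you are operating as root you can point kubectl directly at the admin kubeconfig:
```bash
export KUBECONFIG=/etc/kubernetes/admin.conf
```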
(Optional) Verify control-plane components are running:
```bash
kubectl get pods -n kube-system
```

Check that the API server is accessible on both internal and external addresses:
```bash
# From the master node, test the API server endpoints (ignore the cert warning with -k):
curl -k https://192.168.200.10:6443/version   # internal API endpoint
curl -k https://192.168.178.35:6443/version   # external API endpoint
```

Join Worker Nodes to the Cluster

Now join the two workers to the cluster using the token from the kubeadm init output. Use the join command on worker-node1 and worker-node2 (as root or with sudo):

For example, if the token and hash from init were --token <token> --discovery-token-ca-cert-hash sha256:<hash>, run on each worker (set --node-name to worker-node1 or worker-node2 as appropriate):
```bash
sudo kubeadm join 192.168.200.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --node-name worker-node1
```

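If you no longer have the join command from the init output, you can generate a fresh one on the master node at any time:
```bash
sudo kubeadm token create --print-join-command
```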
After running the join on both workers, check nodes status on the master:
```bash
kubectl get nodes -o wide
```

You should see master-node, worker-node1, and worker-node2 listed. Initially, the workers will likely show STATUS = NotReady; this is expected because we haven’t installed the CNI yet (the network plugin is required for the kubelet to fully initialize).
Example output (before CNI):
```text
NAME           STATUS     ROLES           AGE   VERSION   INTERNAL-IP       EXTERNAL-IP
master-node    Ready      control-plane   5m    v1.30.x   192.168.200.10    <none>
worker-node1   NotReady   <none>          1m    v1.30.x   192.168.200.11    <none>
worker-node2   NotReady   <none>          1m    v1.30.x   192.168.200.12    <none>
```

Install Calico CNI (Pod Network on 10.x.x.x)

We will deploy Calico as the Container Network Interface plugin to enable pod networking. We’ll use the latest Calico manifest compatible with our Kubernetes version.

Apply Calico manifest: On the master node, download the Calico manifest and update the pod network CIDR to our chosen 10.244.0.0/16 range (if it’s not already 10.x):
```bash
curl -O -L https://docs.projectcalico.org/manifests/calico.yaml
sed -i 's/192.168.0.0\/16/10.244.0.0\/16/' calico.yaml   # replace the default pod CIDR with 10.244.0.0/16
kubectl apply -f calico.yaml    # apply Calico CNI resources
```

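Note that in recent versions of calico.yaml the CALICO_IPV4POOL_CIDR variable is commented out, so the sed above may only modify a commented line. It is worth checking the manifest before applying and the resulting IP pool afterwards (a sketch; resource and field names per current Calico CRDs):
```bash
# Show the pod CIDR setting in the manifest; if the two CALICO_IPV4POOL_CIDR
# lines are commented out, uncomment them and set the value to 10.244.0.0/16
grep -n -A1 'CALICO_IPV4POOL_CIDR' calico.yaml

# After the apply, confirm which CIDR the default Calico IP pool actually uses
kubectl get ippools.crd.projectcalico.org -o custom-columns=NAME:.metadata.name,CIDR:.spec.cidr
```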
Configure Calico to use Internal Interfaces: Since each node has multiple NICs, we need Calico to utilize the internal network (ens35) for inter-node pod traffic. We will set Calico’s IP autodetection method to pick the 192.168.200.x IP on each node:
```bash
kubectl set env daemonset/calico-node -n kube-system IP_AUTODETECTION_METHOD=cidr=192.168.200.0/24
```

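Setting this environment variable triggers a rolling restart of the calico-node pods. You can confirm it was applied and wait for the rollout to finish:
```bash
kubectl set env daemonset/calico-node -n kube-system --list | grep IP_AUTODETECTION_METHOD
kubectl rollout status daemonset/calico-node -n kube-system
```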
Verify Calico Pods: After a minute, check that Calico components are running:
```bash
kubectl get pods -n kube-system
```

Confirm Nodes are Ready:
```bash
kubectl get nodes -o wide
```

All nodes (master and both workers) should show STATUS = Ready. The Internal-IP should be the 192.168.200.x addresses we set, confirming inter-node communication is via the internal network. For example:
```text
NAME           STATUS   ROLES           AGE   VERSION   INTERNAL-IP       EXTERNAL-IP
master-node    Ready    control-plane   10m   v1.30.x   192.168.200.10    <none>
worker-node1   Ready    <none>          6m    v1.30.x   192.168.200.11    <none>
worker-node2   Ready    <none>          6m    v1.30.x   192.168.200.12    <none>
```

Validation and Testing

Now that the cluster is set up, perform a few tests to ensure everything is working as expected:

Core DNS: Check that the CoreDNS service is up. Run:
```bash
kubectl get pods -n kube-system -o wide | grep coredns
```

You should see two coredns-... pods in Running state (they may be scheduled on any node, including the control plane). You can test DNS by launching a busybox pod:
```bash
kubectl run --rm -ti dns-test --image=busybox:stable --restart=Never -- nslookup kubernetes.default
```

Pod Networking: Test that pods can communicate across nodes on the internal network. For example, deploy a simple test deployment:
```bash
kubectl create deployment pingtest --image=alpine --replicas=2 -- sleep 1000
kubectl get pods -o wide -l app=pingtest
```

Note the IP addresses of the two pods and which node each is on. Exec into one pod and ping the other:
```bash
POD_NAME=$(kubectl get pod -l app=pingtest -o jsonpath='{.items[0].metadata.name}')
kubectl exec -ti $POD_NAME -- ping -c4 <IP-of-other-pod>
```

External Connectivity: Confirm that nodes and pods can reach the external network via the external interface. For example, from any node (or from a pod, if internet egress is allowed by the default policy), ping an external IP or hostname:
```bash
ping -c4 8.8.8.8        # from a node, test external connectivity
ping -c4 google.com     # DNS test (should resolve via 192.168.178.3 and ping)
```

API Server Access: To use the Kubernetes API externally, you can copy the kubeconfig from the master and edit the server address to https://192.168.178.35:6443. Because we added the external IP in the cert, you can safely connect to the API via the external address. For example, from a machine on the 192.168.178.0 network:
```bash
export KUBECONFIG=admin.conf  # copy this file from master-node
kubectl get nodes             # should work from the external network
```

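Instead of editing the file by hand, you can rewrite the server address in the copied kubeconfig with kubectl (a sketch, assuming the default kubeadm cluster name "kubernetes" in admin.conf):
```bash
kubectl config set-cluster kubernetes \
  --server=https://192.168.178.35:6443 \
  --kubeconfig=admin.conf
```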
Or simply verify with curl:
```bash
curl -k https://192.168.178.35:6443/healthz   # should return "ok"
```


Congratulations! You now have a 3-node Kubernetes cluster running on Ubuntu 24.04. The control plane is advertising the internal IP and is reachable on both internal and external networks, the Calico CNI is set up with a 10.x.x.x pod network, and inter-node pod communication is confined to the internal network (ens35). You can proceed to deploy workloads on this cluster.
