
Installing K3s and Creating a Cluster

Introduction

K3s is a lightweight, production-ready Kubernetes distribution developed by Rancher Labs. It provides a streamlined approach to container orchestration, making it particularly valuable for edge computing, IoT applications, and development environments. As a CNCF-certified distribution, K3s maintains full Kubernetes compatibility while reducing complexity and resource requirements.

This guide targets system administrators, DevOps engineers, and technical professionals looking to deploy and manage K3s clusters. By following this guide, you’ll learn how to install, configure, and maintain a K3s cluster, whether for development or production use.

Prerequisites

Operating system requirements:

  • Linux with a modern kernel supporting cgroups and overlayfs
  • systemd or OpenRC for service management

Network requirements:

  • Open ports: 6443/TCP (API server), 8472/UDP (flannel VXLAN), 10250/TCP (kubelet metrics); for HA with embedded etcd, also 2379-2380/TCP
  • Stable network connectivity between nodes
  • DNS resolution capability

Required Tools and Access

  • SSH access to all nodes
  • Root or sudo privileges
  • Basic command-line tools (curl, wget)
  • Terminal access

Basic Concepts

K3s Architecture Overview

Server (master) node components:

  • API server
  • Controller manager
  • Scheduler
  • etcd or SQLite database
  • Containerd runtime

Agent (worker) node components:

  • Kubelet
  • Containerd
  • K3s agent
  • Network proxy

Control plane elements maintain cluster state and manage workload scheduling, while agent nodes run the actual workloads.

Key Terminology

  • Server Node: Control plane node managing the cluster
  • Agent Node: Worker node running workloads
  • Kubeconfig: Configuration file for cluster access
  • CNI: Container Network Interface
  • CRI: Container Runtime Interface

Single Node Installation

Server Installation Process

Install the server with:

curl -sfL https://get.k3s.io | sh -

Verify installation:

kubectl get node

Configuration options can be set via environment variables or flags:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -
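The same options can also be set persistently in K3s's configuration file at /etc/rancher/k3s/config.yaml, which the service reads on every start. A minimal sketch (the values shown are illustrative):

write-kubeconfig-mode: "0644"
disable:
  - traefik
node-label:
  - "environment=dev"

Keys in this file mirror the CLI flags, with the leading dashes dropped.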

Post-installation Setup

Configure kubeconfig access:

mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
export KUBECONFIG=~/.kube/config

Multi-Node Cluster Setup

Server Node Configuration

Install the server and retrieve the node token:

# Install server
curl -sfL https://get.k3s.io | sh -

# Get token
sudo cat /var/lib/rancher/k3s/server/node-token

Adding Worker Nodes

Install agents using the server URL and the token retrieved above (replace the placeholders with your server's address and token):

curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -

Verify node addition:

kubectl get nodes

Cluster Verification

Check cluster health:

kubectl get pods -A
kubectl get nodes -o wide
kubectl cluster-info

Advanced Configuration

High Availability Setup

Initialize the first server:

curl -sfL https://get.k3s.io | sh -s - server --cluster-init

Join additional server nodes to the first one (HA with embedded etcd requires an odd number of servers, three at minimum, to maintain quorum):

curl -sfL https://get.k3s.io | K3S_TOKEN=<token> sh -s - server --server https://<first-server>:6443

Custom Configurations

Configure an external datastore instead of embedded etcd (run on each server node; the connection string below is a placeholder):

curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="postgres://user:password@host:5432/dbname"
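Equivalently, the datastore settings can live in /etc/rancher/k3s/config.yaml on each server node, which keeps credentials out of shell history. A sketch, with placeholder values:

datastore-endpoint: "postgres://user:password@host:5432/dbname"
token: "shared-cluster-secret"

The token value must match across all servers joining the same cluster.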

Post-Installation Tasks

Access Management

Set up RBAC:

kubectl create role developer --verb=create,get,list,update,delete --resource=pods,deployments

Create role bindings:

kubectl create rolebinding dev-binding --role=developer --user=username
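The same objects can be declared as manifests for version control. Note that Deployments live in the apps API group, so they need their own rule; the namespace and username below are placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "get", "list", "update", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "get", "list", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev-binding
  namespace: default
subjects:
- kind: User
  name: username
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io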

System Verification

Verify system components (kubectl get componentstatuses is deprecated in recent Kubernetes releases; the /readyz endpoint is the supported alternative):

kubectl get --raw='/readyz?verbose'
kubectl get pods -n kube-system

Storage Configuration

Local Storage Setup

K3s ships with the local-path-provisioner enabled by default, so no extra installation is normally required. If it was disabled at install time, re-apply it with:

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Create persistent volume claim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi
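A pod can then mount the claim. The local-path storage class binds lazily, so the PVC stays Pending until a consuming pod is scheduled; the pod name and image here are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: local-path-pvc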

External Storage Integration

For NFS-backed storage, install the NFS CSI driver using its install script (pin a released version rather than master for production):

curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/install-driver.sh | bash -s master --

Network Configuration

Default Networking

K3s uses flannel as the default CNI plugin. Configure network policies:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
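With a default-deny policy in place, traffic must be re-allowed explicitly. As a hypothetical example, this policy admits ingress on port 80 to pods labeled app=web from other pods in the same namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-ingress
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 80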

Advanced Networking

Install MetalLB for load balancing:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
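MetalLB then needs an address pool and an L2 advertisement before it can assign LoadBalancer IPs. The address range below is a placeholder from a private subnet; use addresses your network actually reserves:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool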

Maintenance and Operations

Upgrade Procedures

Re-running the installer upgrades K3s in place. Pin a release channel (or a specific version via INSTALL_K3S_VERSION) to control what you get, and upgrade servers before agents, one node at a time:

curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable sh -

Upgrade agent nodes the same way, preserving their join configuration:

curl -sfL https://get.k3s.io | INSTALL_K3S_CHANNEL=stable K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -

Backup and Restore

Create an on-demand etcd snapshot (snapshots require the embedded etcd datastore, not the default single-node SQLite backend):

sudo k3s etcd-snapshot save --name pre-upgrade-snapshot

Restore from a snapshot (stop the K3s service first):

sudo systemctl stop k3s
sudo k3s server --cluster-reset --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/pre-upgrade-snapshot
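Snapshots can also be automated through the server's configuration file; K3s supports cron-style scheduling and retention options. A sketch in /etc/rancher/k3s/config.yaml (the schedule and retention values are examples):

etcd-snapshot-schedule-cron: "0 */6 * * *"
etcd-snapshot-retention: 5
etcd-snapshot-dir: /var/lib/rancher/k3s/server/db/snapshots

This takes a snapshot every six hours and keeps the five most recent.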

Troubleshooting

Check the service logs on the affected node (the unit is k3s on servers and k3s-agent on agents), then inspect node and event state:

journalctl -u k3s
journalctl -u k3s-agent
kubectl describe node <node-name>
kubectl get events -A --sort-by=.metadata.creationTimestamp

Conclusion

K3s provides a lightweight yet powerful Kubernetes distribution suitable for various use cases. This guide covered the essential aspects of installing, configuring, and maintaining a K3s cluster. To further enhance your K3s knowledge:

  1. Explore advanced features like auto-deployment
  2. Implement monitoring solutions
  3. Practice disaster recovery scenarios
  4. Join the K3s community

For production deployments, ensure proper planning for high availability, security, and resource requirements. Regular maintenance and monitoring will help ensure optimal cluster performance.
