K3s Cluster Setup Guide For API, DB, And Website Deployment

by Alex Johnson

Are you looking to deploy your APIs, databases, and websites efficiently? Setting up a K3s cluster might just be the perfect solution for you. K3s, a lightweight Kubernetes distribution, is designed for resource-constrained environments, making it ideal for edge computing, IoT, and even your development and production setups. In this comprehensive guide, we’ll walk you through the process of setting up a K3s cluster, ensuring you have a robust foundation for your deployments.

Why Choose K3s?

Before diving into the setup, let's explore why K3s is a compelling choice for your deployment needs. K3s stands out for its simplicity, low resource consumption, and ease of use. Unlike full-fledged Kubernetes distributions, K3s is packaged as a single binary, reducing the overhead and complexity typically associated with Kubernetes. This makes it incredibly easy to install and manage, even for small teams or individual developers.

Key Benefits of Using K3s:

  • Lightweight: K3s has a small footprint, making it perfect for resource-constrained environments.
  • Simple Installation: A single binary and minimal dependencies mean you can get up and running quickly.
  • Certified Kubernetes: K3s is a certified Kubernetes distribution, ensuring compatibility with standard Kubernetes tools and configurations.
  • Scalability: While lightweight, K3s is still scalable and can handle production workloads.
  • Edge Computing: Its low resource requirements make it ideal for edge deployments.

Given these advantages, setting up a K3s cluster is a smart move for anyone looking to streamline their deployment process. Let's get started!

Prerequisites

Before you begin the K3s cluster setup, there are a few prerequisites you'll need to take care of. These steps will ensure that your environment is ready for K3s and that the installation process goes smoothly. Here’s what you need:

  1. Servers or Virtual Machines: You’ll need at least one server or virtual machine (VM) to act as the K3s server node. For a highly available setup, consider using three or more server nodes. Additionally, you may need worker nodes to run your applications.
  2. Operating System: K3s supports various Linux distributions, including Ubuntu, Debian, CentOS, and RHEL. Choose an OS that you’re comfortable with and that meets your application requirements.
  3. SSH Access: Ensure you have SSH access to your servers or VMs. This will allow you to remotely execute commands and configure your cluster.
  4. Network Connectivity: Make sure your servers can communicate with each other over the network. This is crucial for the cluster to function correctly.
  5. Firewall Configuration: Configure your firewall to allow the necessary traffic between nodes. At a minimum, open port 6443/TCP (Kubernetes API server), 10250/TCP (kubelet metrics), and 8472/UDP (Flannel VXLAN, K3s's default networking backend). For a high-availability setup with embedded etcd, also open ports 2379-2380/TCP between server nodes.
  6. Container Runtime: K3s ships with containerd as its default container runtime, so you typically don’t need to install one separately. If you prefer Docker, install it first and pass the --docker flag when installing K3s.

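The port list from prerequisite 5 can be opened with your distribution's firewall tool. A minimal sketch using ufw, assuming your nodes share the (hypothetical) subnet 10.0.0.0/24; the commands are only printed as a dry run so you can review them before removing the echo:

```shell
# Inter-node ports K3s needs: 6443 (API server), 10250 (kubelet),
# 8472/udp (Flannel VXLAN); 2379:2380 only for HA with embedded etcd.
NODE_SUBNET="10.0.0.0/24"   # assumption: adjust to your actual node network
for rule in "6443/tcp" "10250/tcp" "2379:2380/tcp" "8472/udp"; do
  port="${rule%/*}"
  proto="${rule#*/}"
  # Dry run: prints each ufw command; remove 'echo' to actually apply it.
  echo sudo ufw allow from "$NODE_SUBNET" to any port "$port" proto "$proto"
done
```

On firewalld-based distributions (CentOS, RHEL), the equivalent is firewall-cmd with its --add-port option.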
By ensuring these prerequisites are met, you’ll be well-prepared to set up your K3s cluster without encountering common issues. Now, let's move on to the installation process.

Step-by-Step Guide to Setting Up a K3s Cluster

Now that you have your prerequisites in place, let's dive into the step-by-step guide to setting up a K3s cluster. We'll cover installing the K3s server, joining worker nodes, and verifying your cluster setup.

Step 1: Install the K3s Server

The first step is to install the K3s server on your designated server node. This node will act as the control plane for your cluster. Follow these steps:

  1. SSH into your server node:
    ssh user@your_server_ip
    
  2. Download and install K3s: The easiest way to install K3s is by using the installation script provided by Rancher, the company behind K3s.
    curl -sfL https://get.k3s.io | sh -
    
    This command downloads the K3s installation script and executes it. The script will install K3s and configure it as a system service.
  3. Check K3s service status: After the installation is complete, check the status of the K3s service to ensure it’s running correctly.
    sudo systemctl status k3s
    
    If the service is active and running, you’re ready to proceed.
  4. Retrieve the K3s server token: To allow worker nodes to join the cluster, you need the K3s server token. This token is stored on the server node in the /var/lib/rancher/k3s/server/node-token file.
    sudo cat /var/lib/rancher/k3s/server/node-token
    
    Copy this token; you'll need it later when joining worker nodes.

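The install script also honors environment variables. For example, the documented INSTALL_K3S_VERSION variable pins a specific K3s release instead of taking the latest. A sketch (the version tag below is only an example; check the K3s releases page for a current one), with the command echoed for review rather than executed:

```shell
# Build a pinned-version install command. INSTALL_K3S_VERSION selects the
# release; v1.29.4+k3s1 is an example tag, not a recommendation.
K3S_VERSION="v1.29.4+k3s1"
CMD="curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=${K3S_VERSION} sh -"
echo "$CMD"   # review the command, then run it with: eval "$CMD"
```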
Step 2: Join Worker Nodes to the Cluster

Once the K3s server is up and running, you can join worker nodes to the cluster. Worker nodes are responsible for running your application workloads. Follow these steps for each worker node you want to add:

  1. SSH into your worker node:
    ssh user@your_worker_node_ip
    
  2. Install K3s agent: Similar to the server installation, use the installation script to install the K3s agent.
    curl -sfL https://get.k3s.io | K3S_URL=https://<your_server_ip>:6443 K3S_TOKEN=<your_node_token> sh -
    
    Replace <your_server_ip> with the IP address of your K3s server and <your_node_token> with the token you retrieved earlier.
  3. Check K3s agent status: Verify that the K3s agent service is running correctly.
    sudo systemctl status k3s-agent
    
    If the service is active and running, the worker node has successfully joined the cluster.

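With several workers, the join step is easy to script. A minimal sketch, assuming SSH access to each node; the server IP and worker hostnames are placeholders, the token placeholder is the value from Step 1, and the commands are printed rather than executed so you can review them first:

```shell
# Placeholders: substitute your real server IP, node token, and worker hosts.
SERVER_IP="192.0.2.10"
TOKEN="<your_node_token>"   # from /var/lib/rancher/k3s/server/node-token
for worker in worker1 worker2; do
  # Dry run: prints each join command; remove 'echo' to run it over SSH.
  echo ssh "user@${worker}" \
    "curl -sfL https://get.k3s.io | K3S_URL=https://${SERVER_IP}:6443 K3S_TOKEN=${TOKEN} sh -"
done
```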
Step 3: Verify the Cluster Setup

After setting up the K3s server and joining worker nodes, it’s crucial to verify that your cluster is functioning correctly. You can do this with the kubectl command-line tool. K3s bundles its own copy, so on the server node you can run commands directly, e.g. sudo k3s kubectl get nodes; to manage the cluster from another machine, use the steps below.

  1. Get the kubeconfig file: The kubeconfig file contains the credentials and endpoint needed to connect to your K3s cluster. On the server node, it is located at /etc/rancher/k3s/k3s.yaml. Copy this file to your local machine or wherever you plan to run kubectl.
    sudo cat /etc/rancher/k3s/k3s.yaml
    
    You can copy the contents of this file and save it locally as kubeconfig.yaml, or use a tool like scp to transfer it. If you use it from another machine, edit the server: field, replacing 127.0.0.1 with your K3s server's IP address.
  2. Set the KUBECONFIG environment variable: Tell kubectl where to find your kubeconfig file by setting the KUBECONFIG environment variable.
    export KUBECONFIG=/path/to/your/kubeconfig.yaml
    
    Replace /path/to/your/kubeconfig.yaml with the actual path to your kubeconfig file.
  3. Verify cluster nodes: Use kubectl to list the nodes in your cluster.
    kubectl get nodes
    
    You should see your K3s server and any joined worker nodes listed. The status should be Ready for all nodes.
  4. Check cluster info: You can also get detailed information about your cluster using the kubectl cluster-info command.
    kubectl cluster-info
    
    This command provides information about the Kubernetes API server and other cluster components.

By following these steps, you can set up a K3s cluster and verify its functionality. Now that your cluster is ready, let's explore how to deploy your applications.

Deploying Your Applications on K3s

With your K3s cluster up and running, you’re ready to deploy your APIs, databases, and websites. K3s supports standard Kubernetes deployments, so you can use familiar tools and techniques.

1. Deploying APIs

To deploy your APIs, you’ll typically use Kubernetes Deployments and Services. A Deployment ensures that a specified number of pod replicas are running, while a Service provides a stable endpoint for accessing your API.

  1. Create a Deployment: Define a Kubernetes Deployment YAML file for your API. This file specifies the container image, number of replicas, and other configuration details.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-api-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-api
      template:
        metadata:
          labels:
            app: my-api
        spec:
          containers:
          - name: my-api
            image: your-docker-registry/my-api:latest
            ports:
            - containerPort: 8080
    
    Apply the Deployment using kubectl:
    kubectl apply -f my-api-deployment.yaml
    
  2. Create a Service: Define a Kubernetes Service YAML file to expose your API. This file specifies the type of Service (e.g., ClusterIP, NodePort, LoadBalancer) and the ports to expose.
    apiVersion: v1
    kind: Service
    metadata:
      name: my-api-service
    spec:
      selector:
        app: my-api
      ports:
      - protocol: TCP
        port: 80
        targetPort: 8080
      type: ClusterIP
    
    Apply the Service using kubectl:
    kubectl apply -f my-api-service.yaml
    

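A ClusterIP Service is reachable only from inside the cluster. To reach the API from outside without setting up an Ingress, a NodePort variant of the same Service is a quick option. A sketch; the name my-api-nodeport is hypothetical, and 30080 is an arbitrary pick from Kubernetes' default 30000-32767 NodePort range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api-nodeport     # hypothetical alternative to my-api-service
spec:
  type: NodePort
  selector:
    app: my-api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30080         # then reachable at http://<any_node_ip>:30080
```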
2. Deploying Databases

Deploying databases on K3s requires careful consideration of data persistence. Kubernetes StatefulSets are designed for stateful applications like databases: they give each pod a stable identity and its own persistent storage.

  1. Create a StatefulSet: Define a Kubernetes StatefulSet YAML file for your database. This file specifies the container image, storage requirements, and other configuration details.
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: my-db-statefulset
    spec:
      serviceName: my-db-service
      replicas: 1
      selector:
        matchLabels:
          app: my-db
      template:
        metadata:
          labels:
            app: my-db
        spec:
          containers:
          - name: my-db
            image: your-docker-registry/my-db:latest
            ports:
            - containerPort: 5432
            volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data  # adjust for your database engine
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 10Gi
    
    Apply the StatefulSet using kubectl:
    kubectl apply -f my-db-statefulset.yaml
    
  2. Create a headless Service: The serviceName in the StatefulSet should point to a headless Service (one with clusterIP: None), which gives each database pod a stable DNS name. Define it like this:
    apiVersion: v1
    kind: Service
    metadata:
      name: my-db-service
    spec:
      clusterIP: None
      selector:
        app: my-db
      ports:
      - protocol: TCP
        port: 5432
        targetPort: 5432
    
    Apply the Service using kubectl:
    kubectl apply -f my-db-service.yaml
    

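K3s ships with the local-path provisioner as its default StorageClass, so the volumeClaimTemplates in the StatefulSet are satisfied with node-local storage automatically. To make that explicit, or to swap in a different provisioner later, name the class in the claim template; a sketch of just that fragment:

```yaml
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    storageClassName: local-path   # K3s' bundled default; data stays on one node
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 10Gi
```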
3. Deploying Websites

To deploy a website on K3s, you can use Deployments and Services, similar to deploying APIs. Additionally, you might want to use Ingress to manage external access to your website.

  1. Create a Deployment: Define a Kubernetes Deployment YAML file for your website.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-website-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-website
      template:
        metadata:
          labels:
            app: my-website
        spec:
          containers:
          - name: my-website
            image: your-docker-registry/my-website:latest
            ports:
            - containerPort: 80
    
    Apply the Deployment using kubectl:
    kubectl apply -f my-website-deployment.yaml
    
  2. Create a Service: Define a Kubernetes Service YAML file to expose your website.
    apiVersion: v1
    kind: Service
    metadata:
      name: my-website-service
    spec:
      selector:
        app: my-website
      ports:
      - protocol: TCP
        port: 80
        targetPort: 80
      type: ClusterIP
    
    Apply the Service using kubectl:
    kubectl apply -f my-website-service.yaml
    
  3. Create an Ingress (Optional): If you want to expose your website externally, create an Ingress resource. An Ingress controller must be running in the cluster; K3s ships with Traefik by default, so this works out of the box unless you installed K3s with the --disable traefik option (Nginx Ingress Controller is a popular alternative).
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-website-ingress
    spec:
      rules:
      - host: your-website.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-website-service
                port:
                  number: 80
    
    Apply the Ingress using kubectl:
    kubectl apply -f my-website-ingress.yaml
    

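To serve the site over HTTPS, the same Ingress can reference a TLS certificate stored in a Kubernetes Secret. A sketch, assuming a hypothetical Secret named my-website-tls that you would create beforehand from your certificate and key (for example with kubectl create secret tls my-website-tls --cert=tls.crt --key=tls.key):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-website-ingress
spec:
  tls:
  - hosts:
    - your-website.com
    secretName: my-website-tls   # hypothetical Secret holding cert and key
  rules:
  - host: your-website.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-website-service
            port:
              number: 80
```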
By following these deployment strategies, you can efficiently deploy your APIs, databases, and websites on your K3s cluster. Remember to adjust the configurations to match your specific application requirements.

Conclusion

Setting up a K3s cluster is a straightforward process that provides a powerful platform for deploying your applications. With its lightweight design and ease of use, K3s is an excellent choice for various use cases, from edge computing to local development environments. By following this guide, you can quickly get your K3s cluster up and running and start deploying your APIs, databases, and websites.

We’ve covered the prerequisites, installation steps, and basic deployment strategies. However, the world of Kubernetes is vast, and there’s always more to learn. To deepen your understanding and explore advanced topics, consider checking out the official Kubernetes documentation and community resources.

For further information and best practices on Kubernetes and related technologies, visit the official Kubernetes website at kubernetes.io. This resource offers extensive documentation, tutorials, and community support to help you master Kubernetes and K3s. Happy deploying!