K8s & KubeVirt: Automating Labs On Ubuntu 24.04
Hey guys, let's dive into a cutting-edge approach: using Kubernetes (K8s) with KubeVirt on KVM to completely automate your lab environment. This isn't just about deploying an application anymore; we're talking about building a full-fledged "Lab-as-a-Service" (LaaS) platform. Built on Ubuntu 24.04 LTS, the setup is like a spaceship with multiple levels, enabling dynamic resource provisioning (VMs, VDIs), template management, and more. It's a forward-thinking architecture that demands real technical expertise, so let's break down each layer and see how to set it up properly.
The K8s + KubeVirt Architecture Overview
Think of this system as a multi-layered "spaceship":
- Physical/Virtualization Layer: This is your foundation. Here, you have physical servers running KVM. Within KVM, you'll be creating four Ubuntu 24.04 LTS VMs.
- Container Orchestration Layer: On top of these four VMs, you'll build a Kubernetes cluster (1 Master, 3 Workers). Kubernetes essentially becomes the "operating system" for your data center.
- Core Application Layer: Applications like PNETLab and iShare2 are deployed as containers (Pods) within the K8s cluster.
- Virtualization Workload Layer (The Key): To manage student virtual machines (VMs running Cisco, Windows, etc.), you'll leverage KubeVirt, a powerful K8s add-on. KubeVirt lets you manage traditional virtual machines as if they were native K8s objects.
Detailed Analysis: Components and Workflow
1. Kubernetes Foundation on KVM
- Why K8s on KVM? It's a popular model for private cloud or on-premise environments. It allows you to:
- Leverage Existing Infrastructure: If you already have a KVM environment, setting up K8s is straightforward.
- Robust Isolation: Each K8s node (Master and Worker) is a VM, offering complete resource and network isolation, boosting cluster stability.
- Flexibility: Easily snapshot, migrate, and resize K8s node VMs.
- Ubuntu 24.04 LTS: A great choice, providing a modern kernel and first-class support for container runtimes (containerd) and related technologies. A minimal bootstrap sketch follows this list.
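To make this concrete, here's a minimal sketch of a kubeadm configuration for bootstrapping the 1-Master/3-Worker cluster on those VMs. The Kubernetes version and pod subnet are assumptions; match them to your environment and CNI plugin.

```yaml
# kubeadm-config.yaml -- minimal bootstrap sketch for the Master node.
# Assumes containerd is already installed on each Ubuntu 24.04 VM;
# the version and podSubnet values below are placeholder assumptions.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.30.0"
networking:
  podSubnet: "10.244.0.0/16"   # must match the CNI plugin you deploy
```

You'd run `kubeadm init --config kubeadm-config.yaml` on the Master, then join the three Workers with the `kubeadm join` command it prints.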
2. Deploying PNETLab and iShare2 on K8s
This is similar to previous setups, but more deeply integrated with K8s.
- PNETLab:
  - Deployment: Deployed via a K8s `Deployment` object. You can configure `replicas: 3` for PNETLab to run across 3 Pods, ensuring high availability (a manifest sketch follows this section).
  - Service: A `Service` (type `LoadBalancer` or `NodePort`) provides a stable access point for the PNETLab Pods. The Robusta Cloud Load Balancer directs traffic from the internet to this Service.
  - PersistentVolumeClaim (PVC): User data (created labs) is saved on a PVC, provided by the Robusta Cloud storage system (e.g., Ceph, NFS). This data persists even if the Pod is deleted.
- iShare2:
  - Similar to PNETLab: deployed via a `Deployment`, `Service`, and `PVC`.
  - It offers a stable API endpoint for PNETLab to access and retrieve templates.
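As referenced above, here's a sketch of what the PNETLab manifests might look like. The image name, data path, port, and storage size are assumptions; adapt them to your registry and Robusta Cloud storage.

```yaml
# Deployment: 3 PNETLab replicas for high availability.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pnetlab
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pnetlab
  template:
    metadata:
      labels:
        app: pnetlab
    spec:
      containers:
        - name: pnetlab
          image: registry.example.local/pnetlab:latest   # hypothetical image
          ports:
            - containerPort: 80
          volumeMounts:
            - name: labs-data
              mountPath: /opt/unetlab/labs               # assumed lab data path
      volumes:
        - name: labs-data
          persistentVolumeClaim:
            claimName: pnetlab-labs
---
# Service: stable access point, fronted by the Robusta Cloud Load Balancer.
apiVersion: v1
kind: Service
metadata:
  name: pnetlab
spec:
  type: LoadBalancer
  selector:
    app: pnetlab
  ports:
    - port: 80
      targetPort: 80
---
# PVC: lab data that survives Pod restarts. Shared across the 3 replicas,
# so it needs ReadWriteMany-capable storage such as CephFS or NFS.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pnetlab-labs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi   # assumed size
```

iShare2 would follow the same pattern with its own Deployment, Service, and PVC.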
3. Automating VMs, VDIs, and Lab Sizing with KubeVirt (Crucial Part)
Here's where the magic happens. PNETLab no longer directly calls KVM to create VMs; instead, it "commands" Kubernetes.
- What is KubeVirt? An add-on installed in the K8s cluster. It lets you define a complete VM (CPU, RAM, disk, BIOS...) in a YAML file, similar to defining a Pod. KubeVirt ensures the VM runs on a K8s Worker node, using that node's KVM.
- Workflow when an Instructor Creates a Lab:
  1. Design in PNETLab: The instructor logs into the PNETLab web interface and drags-and-drops icons (Router, Switch, PC) to design the lab.
  2. Get Template from iShare2: When the instructor selects a device, e.g., "Cisco IOSv," PNETLab calls iShare2's API to get that template's information.
  3. Sizing (Resource Determination): PNETLab provides boxes for the instructor to "size" resources for each device: vCPU count, RAM (GB), and disk (GB).
  4. PNETLab Creates a KubeVirt "Manifest": This is the key step. PNETLab's backend doesn't call KVM directly. It converts the lab structure and sizing parameters into one or more YAML files (manifests) that define KubeVirt `VirtualMachine` objects. Example of a simple `VirtualMachine` file:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: lab1-router1
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk       # each volume must be mapped to a disk device
              disk:
                bus: virtio
        resources:
          requests:
            memory: "2Gi"          # <- Sizing value from instructor
            cpu: "1"               # <- Sizing value from instructor
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: lab1-router1-disk   # PVC containing the OS
```
  5. Send the Command to the K8s API: PNETLab uses its service account to send these YAML files to the Kubernetes API Server.
  6. KubeVirt Execution: The KubeVirt controller (installed in the K8s cluster) recognizes the new `VirtualMachine` objects. It will:
     - Create a `PersistentVolumeClaim` for the VM's hard drive.
     - Use the Containerized Data Importer (CDI), a KubeVirt companion tool, to download the image template from iShare2's URL and write it to the PVC (see the `DataVolume` sketch after this workflow).
     - Send commands to `libvirt` on a K8s Worker node (itself a KVM VM) to start a real virtual machine.
  7. The VM Runs as a Pod: This VM is now managed by K8s. It has an IP address from the K8s network (CNI), can be monitored, has its logs collected, and is limited by K8s `ResourceQuota`.
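To ground steps 6 and 7, here's a sketch of two of the objects involved: the CDI `DataVolume` that imports a template image into a PVC, and a `ResourceQuota` capping what a lab's namespace can consume. The iShare2 URL, namespace, and sizes are illustrative assumptions.

```yaml
# DataVolume: CDI downloads the template over HTTP and writes it into
# a freshly provisioned PVC (the URL below is hypothetical).
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: lab1-router1-disk
  namespace: lab1
spec:
  source:
    http:
      url: "http://ishare2.example.local/templates/cisco-iosv.qcow2"
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 4Gi          # disk size from the instructor's sizing step
---
# ResourceQuota: hard limits on everything running in this lab's namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: lab1-quota
  namespace: lab1
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
```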
4. Managing VDI (Virtual Desktop Infrastructure)
- "VDI" here isn't Citrix or VMware Horizon. It provides a graphical interface (console) for students to interact with VMs created by KubeVirt.
- KubeVirt has built-in noVNC, a web-based VNC client.
- Workflow:
  1. The student accesses their lab in PNETLab.
  2. They click a Router icon.
  3. PNETLab calls the KubeVirt API to get the path to the VM's VNC console (KubeVirt exposes this as a `vnc` subresource on the `VirtualMachineInstance` object).
  4. PNETLab embeds an `iframe` pointing to this noVNC path. The student sees the router's command-line window directly in their browser.
5. Sizing Templates for Instructors
- This is an advanced feature of iShare2 and PNETLab.
- iShare2 stores not just the raw `.qcow2` image files but also metadata about each template in its database.
- This metadata can include "recommended minimum resources" (min CPU, min RAM, min disk), as in the sketch after this list.
- When an instructor selects a template in PNETLab, the interface automatically fills in these minimum sizing values, allowing instructors to adjust them if needed. This prevents VMs from being created with insufficient resources and failing to start.
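As a sketch only, a template metadata record in iShare2 might look like this; the field names are hypothetical, since iShare2's actual schema isn't shown here:

```yaml
# Hypothetical iShare2 template metadata record (field names assumed).
id: cisco-iosv-15.9
name: "Cisco IOSv 15.9"
image: templates/cisco-iosv-15.9.qcow2
min_resources:
  vcpu: 1       # pre-filled into PNETLab's sizing boxes
  ram_gb: 1
  disk_gb: 2
```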
Conclusion: Power and Challenges
This architecture transforms your system from a service provider into a fully automated platform.
- Power:
  - Full Automation: The entire lab lifecycle, from creation, sizing, and launching to destruction, is automated via code (YAML) and APIs.
  - Scalability: Add Worker nodes to the K8s cluster to serve more students; K8s and KubeVirt automatically distribute new lab VMs across them.
  - Resource Efficiency: The K8s scheduler intelligently bin-packs lab VMs onto Worker nodes, maximizing CPU/RAM utilization.
  - Consistency: Every lab is created from the same templates and processes, minimizing manual configuration errors.
- Challenges:
  - High Complexity: Requires a team of engineers with deep expertise in K8s, KubeVirt, networking (CNI), and storage.
  - Operation: Troubleshooting a lab VM network issue involves multiple layers (K8s `NetworkPolicy`, CNI, KVM bridge, the VM's own OS); the sketch below shows the first layer you'd inspect.
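As mentioned above, here's a sketch of the kind of per-lab `NetworkPolicy` you'd look at first when troubleshooting: it restricts one lab's VM Pods to talking only to each other. The label names are assumptions.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: lab1-isolation
  namespace: lab1
spec:
  podSelector:
    matchLabels:
      lab: lab1          # assumed label applied to the lab's VM Pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              lab: lab1  # allow traffic only from Pods in the same lab
```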
In essence, you're building a system at the forefront of modern orchestration technology. It's the right path for creating a large-scale, flexible, and future-ready digital education platform.