Architecture: Immutable Infrastructure

Stack Overview

┌─────────────────────────────────────────────────┐
│              Custom Web Console                 │
│         (React + Go + client-go)                │
└─────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────┐
│              Kubernetes API                     │
│              (API Server)                       │
└─────────────────────────────────────────────────┘
        │                          │
┌───────────────────┐      ┌──────────────────┐
│    Containers     │      │   VMs (KubeVirt) │
│    (Pods)         │      │  Virtual Machines│
└───────────────────┘      └──────────────────┘
        │                          │
┌─────────────────────────────────────────────────┐
│           Talos Linux (Immutable OS)            │
│              Dell R430 Server                   │
└─────────────────────────────────────────────────┘

Layer 1: Hardware

Dell R430

Single physical server, but enough power for a full homelab.

Layer 2: Talos Linux

Why Talos?

Traditional OS (Ubuntu, Debian, etc.):

SSH in, install packages, edit files in /etc - and configuration drift grows with every manual change.

Talos approach:

No SSH, no shell, no package manager. The whole OS is configured from a single declarative YAML file, managed over an API with talosctl, and upgraded by swapping the image rather than patching it in place.

Configuration Example

machine:
  type: controlplane
  network:
    hostname: r430-k8s-master
    interfaces:
      - interface: eno1
        addresses:
          - 192.168.1.100/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.168.1.1
    nameservers:
      - 1.1.1.1
      - 8.8.8.8

cluster:
  clusterName: tom-lab-cluster
  controlPlane:
    endpoint: https://192.168.1.100:6443

Everything in Git. Apply with:

talosctl apply-config --nodes 192.168.1.100 --file config.yaml

Layer 3: Kubernetes

Single-node cluster

Why single node?

There is only one physical machine, so the control plane and the workloads share it. More nodes can join later by applying a worker config to new hardware.

Components:

The layers that follow: Local Path Provisioner for storage, MetalLB for load balancing, Traefik for ingress, KubeVirt for VMs, and the custom console on top.
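
Since the only node is also the control-plane node, regular pods have to be allowed to schedule on it. In Talos this is a single cluster setting; a minimal sketch, assuming a recent Talos release (older releases used a different field name for the same option):

cluster:
  allowSchedulingOnControlPlanes: true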

Layer 4: Storage

Local Path Provisioner

Simpler than Longhorn for a single node: with no second machine to replicate to, distributed storage only adds complexity. The default StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer

Dynamically provisions local storage on the node's disks for PVCs. Good enough for what this lab runs: application data, the blog's content, and VM root disks.
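
A claim against this class is just an ordinary PVC; a minimal sketch (the claim name and size are made up for illustration):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blog-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi

With WaitForFirstConsumer, the volume is only provisioned once a pod actually mounts the claim.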

Layer 5: Networking

MetalLB - LoadBalancer

Bare metal doesn't come with cloud load balancers, so LoadBalancer services would stay stuck in <pending>. MetalLB fixes that:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
spec:
  addresses:
  - 192.168.1.200-192.168.1.220

Services with type: LoadBalancer get real IPs from this pool.
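
Assuming MetalLB runs in Layer 2 mode (the usual choice on a flat home network), the pool also needs an L2Advertisement so those addresses get announced; a minimal sketch (the resource name is illustrative):

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
spec:
  ipAddressPools:
  - default-pool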

Traefik - Ingress

Routes HTTP/HTTPS to services:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog
spec:
  rules:
  - host: lab.tomarc.dev
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blog
            port:
              number: 80

Traefik itself gets its address from the MetalLB pool, so all HTTP traffic enters through a single point: 192.168.1.200.

Layer 6: KubeVirt

VMs as Kubernetes resources:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ubuntu-server
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 4
        memory:
          guest: 8Gi
        devices:
          disks:
          - name: root
            disk:
              bus: virtio
          interfaces:
          - name: default
            masquerade: {}
      networks:
      - name: default
        pod: {}
      volumes:
      - name: root
        dataVolume:
          name: ubuntu-root

Start and stop VMs with kubectl or our console. KubeVirt runs the guests through KVM, so they get hardware virtualization (VT-x) performance.
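
The ubuntu-root disk referenced above has to exist before the VM can boot. One way to create it is a CDI DataVolume that imports a cloud image onto local-path storage; a minimal sketch, assuming CDI is installed alongside KubeVirt (the image URL and disk size are illustrative):

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ubuntu-root
spec:
  source:
    http:
      url: https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
  pvc:
    accessModes:
    - ReadWriteOnce
    storageClassName: local-path
    resources:
      requests:
        storage: 20Gi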

Layer 7: Custom Console

Backend (Go)

// getVMs lists every KubeVirt VirtualMachine in the configured namespace
// and returns the result as JSON.
func (s *Server) getVMs(w http.ResponseWriter, r *http.Request) {
    vms, err := s.kubevirtClient.VirtualMachine(namespace).List(&metav1.ListOptions{})
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(vms)
}

Direct client-go integration. Fast, type-safe.

Frontend (React)

const { data: vms } = useQuery({
  queryKey: ['vms'],
  queryFn: async () => {
    const { data } = await axios.get('/api/v1/vms')
    return data
  },
})

React Query refetches automatically, so the VM list stays effectively real time without manual refresh logic.

Security Model

Network Segmentation

Access Control

Updates

Observability

Metrics

Logs

Monitoring

Benefits

  1. Reproducible - Everything in Git
  2. Secure - Minimal attack surface
  3. Maintainable - Standard K8s tools
  4. Scalable - Add nodes easily
  5. Modern - Cloud-native patterns

Trade-offs

  1. Complexity - K8s learning curve
  2. Overhead - More resources than bare metal
  3. Debugging - Different tools than traditional VMs
  4. Maturity - Some rough edges

Next: Network Setup Details