Architecture: Immutable Infrastructure
Stack Overview
┌─────────────────────────────────────────────────┐
│               Custom Web Console                │
│            (React + Go + client-go)             │
└─────────────────────────────────────────────────┘
                         │
┌─────────────────────────────────────────────────┐
│                 Kubernetes API                  │
│                  (API Server)                   │
└─────────────────────────────────────────────────┘
          │                          │
┌───────────────────┐      ┌──────────────────┐
│    Containers     │      │  VMs (KubeVirt)  │
│      (Pods)       │      │ Virtual Machines │
└───────────────────┘      └──────────────────┘
          │                          │
┌─────────────────────────────────────────────────┐
│           Talos Linux (Immutable OS)            │
│                Dell R430 Server                 │
└─────────────────────────────────────────────────┘
Layer 1: Hardware
Dell R430
- 2x Intel Xeon (VT-x enabled)
- 128GB RAM
- Hardware RAID
- Dual NICs
Single physical server, but enough power for a full homelab.
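KubeVirt (layer 6) depends on VT-x reaching the OS. Since Talos has no SSH, a quick sanity check goes through its file-read API; a sketch, using the node address configured later in this post:

talosctl -n 192.168.1.100 read /proc/cpuinfo | grep -c vmx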
Layer 2: Talos Linux
Why Talos?
Traditional OS (Ubuntu, Debian, etc.):
- SSH access → configuration drift
- Manual updates → inconsistency
- Package managers → dependency hell
- Persistent state → hard to reproduce
Talos approach:
- No SSH - All config via API
- Immutable - Read-only filesystem
- API-driven - Declarative configuration
- Minimal - Only what Kubernetes needs
Configuration Example
machine:
  type: controlplane
  network:
    hostname: r430-k8s-master
    interfaces:
      - interface: eno1
        addresses:
          - 192.168.1.100/24
        routes:
          - network: 0.0.0.0/0
            gateway: 192.168.1.1
    nameservers:
      - 1.1.1.1
      - 8.8.8.8
cluster:
  clusterName: tom-lab-cluster
  controlPlane:
    endpoint: https://192.168.1.100:6443
Everything in Git. Apply with:
talosctl apply-config --nodes 192.168.1.100 --file config.yaml
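For a brand-new node the sequence is a little longer. Roughly (flags per the Talos docs; --insecure applies only to first contact, before the node has its PKI):

talosctl gen config tom-lab-cluster https://192.168.1.100:6443
talosctl apply-config --insecure --nodes 192.168.1.100 --file controlplane.yaml
talosctl --talosconfig talosconfig bootstrap --nodes 192.168.1.100 --endpoints 192.168.1.100
talosctl --talosconfig talosconfig kubeconfig --nodes 192.168.1.100 --endpoints 192.168.1.100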
Layer 3: Kubernetes
Single-node cluster
Why single node?
- Sufficient for lab workloads
- Simpler networking
- All resources available for workloads
- Can add workers later
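gen config (above) also emits worker.yaml, so adding a worker later is one command against the new machine (the address here is hypothetical):

talosctl apply-config --insecure --nodes 192.168.1.101 --file worker.yaml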
Components:
- etcd - Cluster state
- kube-apiserver - API endpoint
- kube-controller-manager - Control loops
- kube-scheduler - Pod placement
- kubelet - Node agent
- CoreDNS - Service discovery
- Flannel - Pod networking
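One single-node detail: Kubernetes taints control-plane nodes NoSchedule by default, so nothing would run here without opting in. Talos exposes this as a one-line cluster setting (assuming a recent Talos release; older versions called it allowSchedulingOnMasters):

cluster:
  allowSchedulingOnControlPlanes: true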
Layer 4: Storage
Local Path Provisioner
Simpler than Longhorn for single-node:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
Dynamically provisions local storage for PVCs. Good enough for:
- Development databases
- Logs and metrics
- VM disks (non-replicated)
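A consumer looks like any other PVC; a minimal sketch with a hypothetical claim name and size (storageClassName could be omitted, since local-path is the default class):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-postgres-data   # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi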
Layer 5: Networking
MetalLB - LoadBalancer
Bare metal doesn’t have cloud LoadBalancers. MetalLB fixes that:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
spec:
  addresses:
    - 192.168.1.200-192.168.1.220
Services with type: LoadBalancer get real IPs from this pool.
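One gotcha: the pool alone announces nothing. In L2 mode (the usual choice on a flat home network) MetalLB also needs an L2Advertisement, and a Service opts in simply by declaring the type. The Service below is a hypothetical consumer:

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
spec:
  ipAddressPools:
    - default-pool
---
apiVersion: v1
kind: Service
metadata:
  name: blog               # hypothetical service
spec:
  type: LoadBalancer
  selector:
    app: blog
  ports:
    - port: 80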
Traefik - Ingress
Routes HTTP/HTTPS to services:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: blog
spec:
  rules:
    - host: lab.tomarc.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blog
                port:
                  number: 80
Single entry point: 192.168.1.200.
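Before DNS points anywhere, routing can be tested by faking the Host header against the LoadBalancer IP:

curl -H "Host: lab.tomarc.dev" http://192.168.1.200/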
Layer 6: KubeVirt
VMs as Kubernetes resources:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: ubuntu-server
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 4
        memory:
          guest: 8Gi
        devices:
          disks:
            - name: root
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
      networks:
        - name: default
          pod: {}
      volumes:
        - name: root
          dataVolume:
            name: ubuntu-root
Start/stop with kubectl or our console. Uses hardware virtualization (VT-x) for performance.
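The same lifecycle from the KubeVirt CLI, virtctl:

virtctl start ubuntu-server
virtctl console ubuntu-server   # serial console; exit with Ctrl+]
virtctl stop ubuntu-server
kubectl get vmis                # running instances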
Layer 7: Custom Console
Backend (Go)
// getVMs lists KubeVirt VirtualMachines via client-go and returns them as JSON.
func (s *Server) getVMs(w http.ResponseWriter, r *http.Request) {
	vms, err := s.kubevirtClient.VirtualMachine(namespace).List(&metav1.ListOptions{})
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(vms)
}
Direct client-go integration. Fast, type-safe.
Frontend (React)
const { data: vms } = useQuery({
  queryKey: ['vms'],
  queryFn: async () => {
    const { data } = await axios.get('/api/v1/vms')
    return data
  },
  refetchInterval: 5000, // poll every 5s; useQuery alone only refetches on mount/focus
})
Polling via refetchInterval keeps the dashboard close to real time.
Security Model
Network Segmentation
- Console: Local network only (192.168.1.x)
- Blog: Public via reverse proxy
- API: Behind Traefik Ingress
- VMs: Isolated networks
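VM isolation is expressed the Kubernetes way, since KubeVirt VMs are just pods on the pod network. One caveat: plain Flannel does not enforce NetworkPolicy, so a default-deny sketch like this assumes a policy-capable CNI (Canal, Cilium, etc.) and a hypothetical vms namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: vms-default-deny     # hypothetical
  namespace: vms
spec:
  podSelector: {}
  policyTypes:
    - Ingress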
Access Control
- Kubernetes RBAC for API access
- No SSH to Talos nodes
- Secrets encrypted at rest
- TLS everywhere
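For the console, RBAC means a service account bound to a narrow role; a minimal sketch (the name and verbs are assumptions, and start/stop would need update or the KubeVirt start/stop subresources on top of this):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: console-vm-reader    # hypothetical role name
rules:
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachines", "virtualmachineinstances"]
    verbs: ["get", "list", "watch"]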
Updates
- Talos: In-place upgrade, immutable
- Kubernetes: Upgraded in place via talosctl upgrade-k8s
- Apps: GitOps with versioned manifests
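Both upgrades are single commands against the API (versions here are illustrative):

talosctl upgrade --nodes 192.168.1.100 --image ghcr.io/siderolabs/installer:v1.7.0
talosctl upgrade-k8s --to 1.30.0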
Observability
Metrics
- Kubernetes metrics-server
- Node exporter for hardware
- VM metrics via KubeVirt
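With metrics-server in place, the quick checks are built into kubectl:

kubectl top nodes
kubectl top pods -A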
Logs
- Aggregated via kubectl logs
- Persistent with Loki (future)
Monitoring
- Custom dashboard in console
- Prometheus for metrics (future)
- Grafana for visualization (future)
Benefits
- Reproducible - Everything in Git
- Secure - Minimal attack surface
- Maintainable - Standard K8s tools
- Scalable - Add nodes easily
- Modern - Cloud-native patterns
Trade-offs
- Complexity - K8s learning curve
- Overhead - More resources than running services directly on the host
- Debugging - Different tools than traditional VMs
- Maturity - Some rough edges
Next: Network Setup Details