
Homelab

active

AI-focused distributed homelab in a mini server rack — 7+ devices connected via Tailscale mesh.

Ubuntu Server · Docker · Tailscale · systemd · Proxmox

A distributed homelab where each device has a dedicated role, connected via Tailscale mesh networking. Housed in a 10-inch mini server rack. Supports personal automation, AI agent infrastructure, and future robotics experimentation.

Devices

| Device | Role | Status |
| --- | --- | --- |
| MSI Cubi | Production n8n + Obsidian Sync (always-on, headless) | Deployed |
| X1 Carbon | AI orchestration hub, Zarq Board, agent services | Active |
| Legion 7i | AI compute (RTX 4090, Ollama), Pong’s home | Active |
| Synology NAS | Centralized storage and backups | Active |
| MacBook Neo | Mobile dev laptop | Active |
| Raspberry Pi 5 | Future robotics gateway (1TB NVMe) | Pending |
| Beelink EQ14 | Proxmox virtualization lab | Pending |

Network Topology

AT&T Gateway (router, DHCP, Wi-Fi)
    |
    +-- Ubiquiti USW-Lite-8-PoE
    |       +-- Legion 7i (RTX 4090, 32GB DDR5)
    |       +-- Ubiquiti USW-Lite-8-PoE #2
    |               +-- MSI Cubi (32GB, 1TB NVMe)
    |               +-- X1 Carbon
    |               +-- Beelink, Pis, NAS
    |
    +-- Tailscale mesh for remote access everywhere
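After cabling or switch changes, a quick script can confirm every node is still reachable over the mesh. This is a minimal sketch: the hostnames are assumptions based on the device list above, and should be replaced with whatever `tailscale status` actually reports on the tailnet.

```python
import subprocess

# Hypothetical Tailscale hostnames for the devices above -- adjust to
# match the names `tailscale status` reports on your tailnet.
HOSTS = ["msi-cubi", "x1-carbon", "legion-7i", "synology-nas"]

def ping_cmd(host: str) -> list[str]:
    """Build the `tailscale ping` invocation for one node (3 probes)."""
    return ["tailscale", "ping", "-c", "3", host]

def check_mesh(hosts=HOSTS) -> dict[str, bool]:
    """Return {host: reachable} by running `tailscale ping` per node."""
    results = {}
    for host in hosts:
        proc = subprocess.run(ping_cmd(host), capture_output=True)
        results[host] = proc.returncode == 0
    return results

# check_mesh() returns e.g. {"msi-cubi": True, ...} when all nodes answer.
```

`tailscale ping` probes the peer directly over the mesh, so a failure here points at Tailscale itself rather than the physical switches.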

Services Running

| Service | Host |
| --- | --- |
| n8n (automation) | MSI Cubi |
| Obsidian Headless Sync (2 vaults) | MSI Cubi |
| Zarq Board | X1 Carbon |
| Ping (Telegram bot) | X1 Carbon |
| Agent-PKM sync | X1 Carbon |
| Ollama (local LLM) | Legion |
| Pong (AI ops agent) | Legion WSL2 |
| Hermes Gateway | Legion WSL2 |
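With services spread across three hosts, a tiny HTTP liveness check is handy. A sketch under stated assumptions: n8n's 5678 and Ollama's 11434 are the stock default ports, but the hostnames are placeholders for the actual Tailscale names.

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

# Service -> (tailscale host, port). The ports for n8n (5678) and Ollama
# (11434) are the upstream defaults; hostnames here are placeholders.
SERVICES = {
    "n8n": ("msi-cubi", 5678),
    "ollama": ("legion-7i", 11434),
}

def service_url(name: str) -> str:
    host, port = SERVICES[name]
    return f"http://{host}:{port}/"

def is_up(name: str, timeout: float = 3.0) -> bool:
    """True if the service answers HTTP at all, even with an error status."""
    try:
        urlopen(service_url(name), timeout=timeout)
        return True
    except HTTPError:
        return True   # server responded, just not with 2xx
    except (URLError, OSError):
        return False
```

Run from any node on the tailnet; since Tailscale resolves hostnames via MagicDNS, no IPs need to be hardcoded.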

AI Architecture

The homelab runs a distributed AI agent system:

  • Pong — Ops engineer on Legion WSL2. Infrastructure management, deployments, documentation.
  • Ping — Personal AI companion via Telegram. 65+ tools, PKM integration, voice messages.
  • Zarq — Task board agent on X1 Carbon. Coordinates work across all agents.
  • Agent-PKM — Knowledge base synced from Obsidian vault. Agents query it for context.

LLM inference uses a fallback chain: Anthropic → OpenRouter → Kimi → Groq → Ollama (local).
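The fallback chain amounts to trying each provider in order and returning the first successful completion. The sketch below uses stub callables in place of the real Anthropic/OpenRouter/Kimi/Groq/Ollama clients; the function names and error type are illustrative, not any real SDK.

```python
class ProviderError(RuntimeError):
    """Raised by a provider when it is down, rate-limited, etc."""

def complete_with_fallback(prompt: str, providers) -> str:
    """providers: ordered list of (name, fn) where fn(prompt) -> str.
    Tries each in turn; raises only if every provider fails."""
    errors = []
    for name, fn in providers:
        try:
            return fn(prompt)
        except ProviderError as exc:
            errors.append(f"{name}: {exc}")
    raise ProviderError("all providers failed: " + "; ".join(errors))

# Example with stubs: the first two "fail", so the third answers.
def down(prompt):
    raise ProviderError("unavailable")

def kimi(prompt):
    return f"kimi says: {prompt}"

chain = [("anthropic", down), ("openrouter", down), ("kimi", kimi)]
```

In practice each stub would wrap a real API client, with the chain ordered exactly as above: Anthropic → OpenRouter → Kimi → Groq → Ollama, so local inference on the Legion is the last resort rather than the default.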

Build Journal

Started February 2026 with physical rack assembly and MSI Cubi upgrades (32GB RAM + 1TB NVMe). n8n was deployed in Docker by March, Obsidian Headless Sync was running for both vaults, and the full AI agent stack was operational by April. Cable management is still a work in progress.