Homelab
Status: active. AI-focused distributed homelab in a mini server rack, with 7+ devices connected via a Tailscale mesh.
Stack: Ubuntu Server, Docker, Tailscale, systemd, Proxmox
A distributed homelab where each device has a dedicated role, connected via Tailscale mesh networking. Housed in a 10-inch mini server rack. Supports personal automation, AI agent infrastructure, and future robotics experimentation.
Devices
| Device | Role | Status |
|---|---|---|
| MSI Cubi | Production n8n + Obsidian Sync (always-on, headless) | Deployed |
| X1 Carbon | AI orchestration hub, Zarq Board, agent services | Active |
| Legion 7i | AI compute (RTX 4090, Ollama), Pong’s home | Active |
| Synology NAS | Centralized storage and backups | Active |
| MacBook Neo | Mobile dev laptop | Active |
| Raspberry Pi 5 | Future robotics gateway (1TB NVMe) | Pending |
| Beelink EQ14 | Proxmox virtualization lab | Pending |
Network Topology
```
AT&T Gateway (router, DHCP, Wi-Fi)
|
+-- Ubiquiti USW-Lite-8-PoE
|   +-- Legion 7i (RTX 4090, 32GB DDR5)
|   +-- Ubiquiti USW-Lite-8-PoE #2
|   +-- MSI Cubi (32GB, 1TB NVMe)
|   +-- X1 Carbon
|   +-- Beelink, Pis, NAS
|
+-- Tailscale mesh for remote access everywhere
```
Services Running
| Service | Host |
|---|---|
| n8n (automation) | MSI Cubi |
| Obsidian Headless Sync (2 vaults) | MSI Cubi |
| Zarq Board | X1 Carbon |
| Ping (Telegram bot) | X1 Carbon |
| Agent-PKM sync | X1 Carbon |
| Ollama (local LLM) | Legion |
| Pong (AI ops agent) | Legion WSL2 |
| Hermes Gateway | Legion WSL2 |
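Since every host above is reachable over the tailnet, a per-service reachability check is easy to script. A minimal sketch, with caveats: the MagicDNS hostnames and the Zarq Board port are assumptions, while 5678 and 11434 are simply the stock n8n and Ollama ports.

```python
import socket
from typing import Callable

# Hypothetical service map. Hostnames are assumed MagicDNS names and the
# Zarq Board port is a placeholder; 5678 (n8n) and 11434 (Ollama) are the
# defaults those projects ship with.
SERVICES = {
    "n8n": ("msi-cubi", 5678),
    "zarq-board": ("x1-carbon", 8080),
    "ollama": ("legion", 11434),
}

def check_services(probe: Callable[[str, int], bool]) -> dict[str, bool]:
    """Map each service name to whether its host:port answered the probe."""
    return {name: probe(host, port) for name, (host, port) in SERVICES.items()}

def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On the tailnet this would be: check_services(tcp_probe)
```

Keeping the probe injectable makes the check testable off the tailnet (pass a stub instead of `tcp_probe`).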
AI Architecture
The homelab runs a distributed AI agent system:
- Pong — Ops engineer on Legion WSL2. Infrastructure management, deployments, documentation.
- Ping — Personal AI companion via Telegram. 65+ tools, PKM integration, voice messages.
- Zarq — Task board agent on X1 Carbon. Coordinates work across all agents.
- Agent-PKM — Knowledge base synced from Obsidian vault. Agents query it for context.
LLM inference uses a fallback chain: Anthropic → OpenRouter → Kimi → Groq → Ollama (local).
Build Journal
Started February 2026 with physical rack assembly and MSI Cubi upgrades (32GB RAM + 1TB NVMe). Got n8n deployed in Docker by March, Obsidian Headless Sync running for two vaults, and the full AI agent stack operational by April. Cable management is still a work in progress.