For the better part of a decade, I never thought about building a homelab. I spent nearly ten years working in and around data centers — at Amazon, then AWS — where the infrastructure was already there. Racks on racks, redundant power, cooling that could freeze you out if you stood in the wrong aisle. When your day job is operating at that scale, the idea of setting up a server at home feels a bit like a chef coming home and firing up a hot plate.
So I never did. I just didn’t think about it.
The Cirrascale Shift
That changed when I started working at Cirrascale. If you haven’t heard of them, they’re a cloud infrastructure company focused on AI and deep learning workloads — serious GPU compute. But what struck me wasn’t just the hardware. It was the culture of building. The people there were inventive. They’d look at a problem and figure out how to solve it from scratch with whatever they had. It wasn’t “open a ticket and wait for someone else to provision it.” It was “let’s just build the thing.”
That mindset stuck with me. I started thinking differently about what I could do with my own gear. Shoutout to the Cirrascale team — they planted a seed they probably don’t even know about.
The Constraint: A Studio Apartment
Here’s the thing though — I live in a studio apartment with my girlfriend. There’s no spare room. No garage. No closet I can commandeer for a rack. Everything I build has to fit on a desk, stay quiet enough to sleep next to, and ideally not look like I’m running a cybercrime operation.
So instead of buying a bunch of new gear, I started with what I already had and built around one key purchase. The result is a homelab that’s small but surprisingly capable.
The Hardware
Let me walk through each piece.
NVIDIA DGX Spark
This is the centerpiece and the one thing I actually went out and bought specifically for this setup. The DGX Spark runs NVIDIA’s GB10 Grace Blackwell superchip with 128GB of unified LPDDR5x memory and a 4TB NVMe drive. It’s the priciest piece by far, but if you’re trying to stay current with AI — running local models, experimenting with fine-tuning, generating images — you need real compute. Cloud GPU hours add up fast, and there’s something freeing about having the hardware right there.
It handles everything from running Ollama for local LLM inference to powering ComfyUI for image generation workflows. More on that in a bit.
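As a rough sketch of what local inference looks like, here's a minimal Python client for Ollama's HTTP API, which listens on `localhost:11434` by default. The model name and prompt are placeholders — swap in whatever you've pulled locally:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )


def generate(model: str, prompt: str) -> str:
    """Send the prompt and return the model's full response text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]


# Example (requires Ollama running with the model already pulled):
# print(generate("llama3", "Explain WireGuard in one sentence."))
```

Because it's all local HTTP, the same call works from any other node over Tailscale by pointing the URL at the Spark's tailnet hostname instead of `localhost`.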
Raspberry Pi 5
The Pi 5 with 8GB of RAM is my monitoring hub. It runs Prometheus for metrics collection and Grafana for dashboards. Every other device in the lab exports metrics back to the Pi, so I’ve got a single pane of glass for GPU utilization, network performance, system health — all of it.
It’s the cheapest device in the setup and arguably the most important for keeping everything observable. A 128GB SD card gives it plenty of room to retain metric history.
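The Prometheus side of that is just a scrape config listing every node. Here's a sketch of what mine looks like — the hostnames are the Tailscale names for each device, and the ports assume node_exporter's default (9100) plus an illustrative GPU exporter on the Spark:

```yaml
# prometheus.yml (sketch) — scrape every lab node over the tailnet.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: node
    static_configs:
      - targets:
          - spark:9100
          - pi:9100
          - framework:9100
          - mac-mini:9100
  - job_name: gpu
    static_configs:
      - targets:
          - spark:9400   # e.g. an NVIDIA GPU metrics exporter, if one is running
```

Since every target is reachable by its Tailscale hostname, the Pi can scrape the Mac Mini across town exactly the same way it scrapes the Spark on the desk.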
Framework Laptop 13
This one’s interesting. It’s a Framework Laptop 13 with an AMD Ryzen AI 7 350 and 32GB of RAM, running Ubuntu off a 1TB Samsung SSD. I use it primarily for network simulation — it runs Containerlab, which lets me spin up virtual network topologies with Junos switches for testing configurations before pushing them to real gear.
I’ll be honest: Framework laptops are cool in concept — modular, repairable, all that — but they can be finicky. The AMD variant has had its quirks with Linux. Still, for a headless Ubuntu box running containers, it gets the job done.
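To give a sense of what Containerlab topologies look like, here's a minimal two-switch sketch. The node kind and image tag are illustrative — they depend on which vJunos build you've imported locally:

```yaml
# topology.clab.yml (sketch) — two virtual Junos switches linked back to back.
name: junos-lab
topology:
  nodes:
    sw1:
      kind: juniper_vjunosswitch
      image: vjunos-switch:23.2R1.14
    sw2:
      kind: juniper_vjunosswitch
      image: vjunos-switch:23.2R1.14
  links:
    - endpoints: ["sw1:ge-0/0/0", "sw2:ge-0/0/0"]
```

A `containerlab deploy -t topology.clab.yml` brings the whole topology up as containers, and tearing it down is just as fast — which is the entire appeal for testing configs before they touch real gear.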
Mac Mini
The Mac Mini lives at a friend’s house. Sounds weird, but it works. It’s running OpenClaw and serves as a remote node I can reach through Tailscale. Having a device on a completely separate network and ISP is useful for testing connectivity, running distributed workloads, and just having another compute endpoint that isn’t on my apartment’s circuit.
The Networking Gear
For routing, I’m using a GL.iNet Beryl AX — a compact travel router that punches well above its weight. It has a 2.5 GbE WAN port, runs AdGuard Home for DNS-level ad blocking, and supports WireGuard for VPN. Behind it sits a TP-Link TL-SG108 8-port gigabit switch that ties everything together physically.
My upstream connection is AT&T Fiber, which gives me roughly 938 Mbps down and 928 Mbps up with single-digit-millisecond latency. For a homelab, you really can’t ask for more.
Tailscale: The Glue
The thing that makes this whole setup actually work as a lab and not just a collection of devices is Tailscale. It’s a mesh VPN built on WireGuard that creates a zero-trust network between all your devices. Every node gets a stable IP on the Tailscale network, and I can SSH into any of them from anywhere — my phone, a coffee shop, wherever.
```shell
ssh spark      # DGX Spark
ssh pi         # Raspberry Pi 5
ssh framework  # Framework Laptop
ssh mac-mini   # Mac Mini at my friend's place
```
No port forwarding, no dynamic DNS, no punching holes in firewalls. It just works. The devices discover each other through Tailscale’s coordination servers, establish direct WireGuard tunnels, and you’re in. It’s the single best piece of infrastructure software I’ve adopted for this project.
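Who can reach what is governed by the tailnet policy file. Mine is close to the defaults; here's a sketch (Tailscale uses HuJSON, so the comments are legal) that allows all my devices to talk to each other and enables Tailscale SSH between them:

```json
{
  "acls": [
    // Any of my devices can reach any other device on any port.
    {"action": "accept", "src": ["autogroup:member"], "dst": ["*:*"]}
  ],
  "ssh": [
    // Allow Tailscale SSH between my own devices, as non-root users.
    {
      "action": "accept",
      "src": ["autogroup:member"],
      "dst": ["autogroup:self"],
      "users": ["autogroup:nonroot"]
    }
  ]
}
```

Each node just runs `tailscale up` once (with `--ssh` to advertise Tailscale SSH), and the policy above takes care of the rest.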
What’s Running
Here’s a quick snapshot of the services across the lab:
- Prometheus + Grafana on the Pi — centralized monitoring with dashboards for GPU stats, network throughput, and system health across all nodes
- Containerlab on the Framework — virtual network topologies with Junos switches for testing BGP configs and network automation
- ComfyUI + Ollama on the Spark — local image generation and LLM inference, no cloud APIs needed
- OpenClaw on the Mac Mini — running remotely on a separate network
The whole thing draws modest power, stays quiet, and fits on a desk. Not bad for a studio apartment.
But the most interesting part of this setup isn’t the hardware or even the services — it’s what happens when you give an AI agent SSH access to all of it. That’s a story for the next post.