When Your AI Can SSH Into Everything

What happens when you give Claude Code SSH access to your entire homelab — and what it means for the future of AI and infrastructure.

In my last post, I walked through the hardware that makes up my homelab — a DGX Spark, a Pi, a Framework laptop, a Mac Mini at a friend’s house, all stitched together with Tailscale. It’s a modest setup, but it covers a lot of ground.

What I didn’t get into is the part that’s been genuinely surprising to work with: giving Claude Code SSH access to all of it.

The Setup

Claude Code is Anthropic’s CLI tool for Claude. It runs in your terminal, reads your files, executes commands, and — critically — can use any tool your shell has access to. Since my ~/.ssh/config has key-based auth set up for every node in the lab, and Tailscale gives each one a stable IP, Claude can reach any device I can.

ssh spark        # DGX Spark — GPU/ML workloads
ssh pi           # Raspberry Pi — monitoring
ssh framework    # Framework Laptop — containers
ssh mac-mini     # Mac Mini — remote node

No special integration. No API wrappers. Claude just runs SSH commands the same way I would.
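To make that concrete, here is roughly what the relevant part of an ~/.ssh/config looks like. This is a hypothetical sketch: the MagicDNS domain, username, and key path are placeholders, not my actual values.

```
# ~/.ssh/config — illustrative only; hostnames, user, and key path are placeholders
Host spark
    HostName spark.example-tailnet.ts.net   # Tailscale MagicDNS name
    User labuser
    IdentityFile ~/.ssh/id_ed25519

# With MagicDNS search domains enabled, bare aliases resolve on their own
Host pi framework mac-mini
    User labuser
    IdentityFile ~/.ssh/id_ed25519
```

Because the aliases resolve and authenticate without prompts, any tool that can run `ssh spark <command>` — Claude Code included — can operate the whole lab.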

What Claude Actually Does

This isn’t theoretical. Here’s what my day-to-day workflows look like.

Managing ComfyUI on the Spark. I’ll tell Claude I want to try a new Stable Diffusion checkpoint. It SSHs into the Spark, downloads the model to the right directory, updates the workflow configuration, and starts ComfyUI. If I want to generate images with specific parameters, Claude builds the workflow JSON and kicks off the job. What used to be a manual process of bouncing between documentation, downloading files, and editing configs is now a conversation.
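The "builds the workflow JSON and kicks off the job" step can be sketched in a few lines. ComfyUI accepts a node graph as JSON via its HTTP `/prompt` endpoint; the graph below is a deliberately incomplete fragment (a real one also needs latent, VAE-decode, and save nodes), and the host, port, and node wiring are illustrative assumptions, not my exact setup.

```python
import json
import urllib.request

COMFY_URL = "http://spark:8188"  # assumes ComfyUI on its default port on the Spark

def build_prompt(checkpoint: str, positive: str, seed: int = 42) -> dict:
    """A minimal, partial ComfyUI node graph: load checkpoint -> encode text -> sample.
    Keys are node IDs; ["1", 0] means 'output slot 0 of node 1'."""
    return {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": checkpoint}},
        "2": {"class_type": "CLIPTextEncode",
              "inputs": {"clip": ["1", 1], "text": positive}},
        "3": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["2", 0],
                         "seed": seed, "steps": 20, "cfg": 7.0}},
    }

def submit(prompt: dict) -> None:
    """POST the graph to ComfyUI's /prompt endpoint to queue the job."""
    data = json.dumps({"prompt": prompt}).encode()
    req = urllib.request.Request(f"{COMFY_URL}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```

The point isn’t this exact code — it’s that the whole task reduces to building a dict and making one HTTP call, which is squarely inside what an LLM with shell access can do.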

Writing and running code across devices. This is the one that still catches me off guard. I can ask Claude to write a Python script, deploy it to the Framework laptop, run it, and bring back the results — all without me ever opening a second terminal. Need to spin up a Containerlab topology? Claude writes the config, SSHs in, starts it up, and verifies the links are active.
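The deploy-run-collect loop is nothing exotic — it’s `scp` plus `ssh`, the same commands Claude issues. A minimal sketch, assuming a host alias from ~/.ssh/config and a hypothetical remote path:

```python
import subprocess
from pathlib import Path

def scp_cmd(host: str, local: Path, remote_path: str) -> list[str]:
    """Build the scp command to copy a local file to host:remote_path."""
    return ["scp", str(local), f"{host}:{remote_path}"]

def run_cmd(host: str, remote_path: str) -> list[str]:
    """Build the ssh command to run the copied script with python3."""
    return ["ssh", host, "python3", remote_path]

def deploy_and_run(host: str, script: Path, remote_path: str = "/tmp/job.py") -> str:
    """Copy a script to a remote node, execute it, and return its stdout."""
    subprocess.run(scp_cmd(host, script, remote_path), check=True)
    result = subprocess.run(run_cmd(host, remote_path),
                            capture_output=True, text=True, check=True)
    return result.stdout
```

Something like `deploy_and_run("framework", Path("bench.py"))` is the entire round trip: no agent daemon on the remote side, just key-based SSH.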

Running OpenClaw on the Mac Mini. The Mini sits on a completely different network at a friend’s place. Doesn’t matter. Claude SSHs in through Tailscale, manages the process, checks logs, restarts services. It’s like having a remote pair of hands.

Monitoring and diagnostics. When something looks off in Grafana, I can ask Claude to check what’s happening. It’ll SSH into the relevant node, look at running processes, check disk space, review logs, and tell me what’s going on — often before I’ve finished formulating what I think the problem is.
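That diagnostic pass is essentially a fixed checklist run over SSH. A sketch of the idea — the specific checks are my guesses at a sensible default set, and they assume a Linux node with systemd:

```python
import subprocess

# Hypothetical first-pass checks; each value is a command run on the remote node
CHECKS = {
    "disk": "df -h /",
    "memory": "free -m",
    "hot_processes": "ps aux --sort=-%cpu | head -n 5",
    "recent_errors": "journalctl -p err -n 20 --no-pager",
}

def ssh_cmd(host: str, command: str) -> list[str]:
    """Wrap a remote command in an ssh invocation for the given host alias."""
    return ["ssh", host, command]

def triage(host: str) -> dict:
    """Run every check on the host and collect output into a report."""
    report = {}
    for name, command in CHECKS.items():
        proc = subprocess.run(ssh_cmd(host, command),
                              capture_output=True, text=True)
        report[name] = proc.stdout if proc.returncode == 0 else proc.stderr
    return report
```

Claude’s real advantage isn’t running the checks — it’s reading the combined report and connecting "disk at 98%" to "the service that stopped logging an hour ago."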

Why This Is Wild

Take a step back and think about what’s actually happening here. An LLM is autonomously connecting to different physical machines, each running different operating systems and services, writing code that’s appropriate for each environment, executing it, interpreting the results, and using that information to take further action.

It’s not just generating text. It’s operating infrastructure.

A year ago, the conversation around AI was mostly about chatbots and content generation. Now I have an AI that can reach into a GPU node and manage ML workloads, configure network simulations on a laptop running Linux, manage services on a Mac halfway across the country, and pull monitoring data to diagnose issues — all in the same conversation.

The gap between “AI assistant” and “AI operator” is closing faster than most people realize.

The Thought Experiment

Here’s where it gets interesting — and maybe a little uncomfortable.

My homelab is small. Four devices. But the pattern scales. Tailscale doesn’t care if you have four nodes or four hundred. SSH doesn’t care either. And Claude’s ability to context-switch between different systems and environments isn’t fundamentally limited by the number of endpoints.
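The "four nodes or four hundred" point is easy to see in practice: `tailscale status --json` emits the same machine-readable peer map regardless of tailnet size, so enumerating every reachable node is a one-liner away. A sketch (the `Peer` / `HostName` field names match Tailscale’s JSON output; error handling omitted):

```python
import json
import subprocess

def parse_peers(raw: str) -> list[str]:
    """Extract sorted peer hostnames from tailscale status JSON."""
    status = json.loads(raw)
    return sorted(p["HostName"] for p in status.get("Peer", {}).values())

def tailnet_peers() -> list[str]:
    """List every peer on the tailnet, whether that's four nodes or four hundred."""
    raw = subprocess.run(["tailscale", "status", "--json"],
                         capture_output=True, text=True, check=True).stdout
    return parse_peers(raw)
```

Feed that list into the deploy-and-run loop from earlier and the per-node workflow becomes a fleet workflow, with no new machinery.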

Now imagine this: an LLM that can discover new devices on a network, establish access, assess what each machine is capable of, and distribute workloads across them. Not a virus. Not malware. Just an AI agent doing what AI agents do — pursuing objectives using available resources.

It’s basically a botnet, but with an LLM at the helm instead of a command-and-control (C2) server.

I want to be clear — I’m not building this, and I’m not advocating for it. But the technical foundations already exist. Mesh VPNs that auto-discover peers. AI agents that can execute shell commands. SSH keys that grant access to infrastructure. The pieces are all there.

What does it look like when an AI can replicate itself across infrastructure? Not in a sci-fi “Skynet wakes up” way, but in a mundane, practical way — the same way containers spread across a Kubernetes cluster. An LLM that notices a node has spare GPU capacity and decides to schedule a training job there. An agent that spins up a copy of itself on a new machine to parallelize a task.

Some of this is already happening in controlled environments. Orchestration frameworks are giving AI agents the ability to spawn sub-agents. Tool use is giving them access to real infrastructure. The trajectory is clear even if the timeline isn’t.

What This Means

For personal infrastructure, the takeaway is straightforward: if you have a homelab and you’re not giving your AI tools access to it, you’re leaving capability on the table. The combination of Tailscale for connectivity and Claude Code for execution has made my four-device setup feel like it has a dedicated ops team.

For the industry, the implications are bigger. We’re going to need to think carefully about access control, about what it means to give an AI agent credentials, about blast radius when an autonomous system has root on your machines. The security model for AI-operated infrastructure is going to look very different from what we have today.

But here’s what I keep coming back to: this isn’t a future problem. I’m doing this now, today, from a studio apartment. The tools are already here. The question isn’t whether AI will operate infrastructure — it’s whether we’ll be thoughtful about how we let it.