From SETI@home to AgentHub: Karpathy's Vision for Distributed AI Research
One day after releasing autoresearch, Karpathy posted a vision that went even further: the next step is massively collaborative distributed AI research, modeled on SETI@home.
The goal? Not to emulate a single PhD student running experiments. To emulate an entire research community of them.
From Single Agent to Swarm
Autoresearch v1 is powerful but sequential. One agent modifies code, runs an experiment, keeps or discards the result, and repeats. It’s like having one tireless researcher working all night.
But Karpathy sees a bigger future: hundreds of agents running experiments in parallel, sharing results, and building on each other’s discoveries --- much as SETI@home distributed its computation across thousands of volunteers’ machines.
The architecture Karpathy described requires distributed task sharding, result deduplication, and cross-agent memory. Agents need to know what other agents have tried so they don’t duplicate work, and they need to build on each other’s successful experiments.
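One way to picture the deduplication piece: hash each experiment's configuration into a stable key and check a shared registry before spending GPU time. This is a minimal sketch, not Karpathy's actual design --- the `experiment_key` and `claim` names, and the in-process set standing in for cross-agent memory, are all hypothetical.

```python
import hashlib
import json

def experiment_key(config: dict) -> str:
    """Derive a stable key for an experiment configuration.

    Hashing a canonical (sorted-key) JSON encoding means two agents
    that describe the same experiment with keys in a different order
    still produce the same key.
    """
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Shared record of what the swarm has tried. In a real deployment
# this would live in cross-agent memory, not in one process.
tried: set[str] = set()

def claim(config: dict) -> bool:
    """Return True if this agent may run the experiment,
    False if a sibling agent has already claimed it."""
    key = experiment_key(config)
    if key in tried:
        return False
    tried.add(key)
    return True
```

The canonical encoding is the important detail: without `sort_keys=True`, equivalent configs could hash differently and the swarm would silently duplicate work.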
AgentHub: Git for AI Swarms
Enter AgentHub --- Karpathy’s agent-first collaboration platform. It’s designed as a bare git repo plus message board, built for swarms of AI agents working on the same codebase.
The key design decision: no branches, no pull requests, no merges. Just agents contributing experiments to a shared research thread. This eliminates the human overhead of code review and branch management that would bottleneck a swarm of 100 agents.
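The no-branch model reduces contribution to an append: each agent adds an entry to a shared research thread and pushes. A hedged sketch of what that append might look like --- the `post_result` helper, its parameters, and the thread format are assumptions, and the `git commit && git push` to the bare repo is omitted.

```python
from datetime import datetime, timezone
from pathlib import Path

def post_result(thread: Path, agent_id: str, summary: str, delta: float) -> None:
    """Append one experiment result to a shared Markdown research thread.

    Hypothetical sketch of the no-branch, no-PR model: every agent
    appends a timestamped bullet to one thread file; publishing it to
    the bare repo would be a plain commit-and-push after the append.
    """
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    entry = f"- [{stamp}] {agent_id}: {summary} (metric delta: {delta:+.4f})\n"
    with thread.open("a") as f:
        f.write(entry)
```

Because entries are append-only Markdown bullets, conflicts reduce to trivial line-level appends rather than semantic merges --- which is what makes review-free collaboration at 100-agent scale plausible.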
Anyone can run an autoresearch agent and contribute to the community via AgentHub, creating a SETI@home-style distributed research network where every participant’s GPU contributes to collective discovery.
Already Happening: 333 Experiments in One Night
This isn’t just theory. The distributed autoresearch pattern is already being implemented.
On the night of March 8–9, 2026, 35 autonomous agents distributed across a peer-to-peer network ran 333 experiments completely unsupervised. Each node ran the autoresearch loop independently, and successful discoveries were shared across the network.
Where Karpathy’s single-agent setup produced ~100 experiments overnight, the distributed approach tripled that on its first night --- and that was with just 35 nodes.
Why Markdown Scales
At every level of this distributed system, the human interface is Markdown:
- Individual level: You write a `program.md` to direct your agent
- Team level: `AGENTS.md` coordinates multiple agents working on a shared codebase
- Community level: AgentHub discussions use Markdown to share results and strategies
Markdown scales from directing a single overnight experiment to coordinating a global research community. It’s the same format at every layer --- human-readable, machine-parseable, and version-controllable.
What This Means for Research
The implications of distributed autoresearch are significant:
Broader hypothesis search. A single agent explores one path at a time. A swarm explores hundreds of paths simultaneously. The chance of finding breakthroughs increases with the number of agents searching.
Faster iteration. When one agent’s discovery is shared with the swarm, all agents immediately benefit. A 1% improvement found by Agent #47 becomes the new baseline for all 100 agents.
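The baseline-promotion step can be sketched in a few lines --- assuming, hypothetically, a shared record that every agent reads before its next experiment (here a plain dict, with lower metric meaning better):

```python
def maybe_update_baseline(shared: dict, agent_id: str, metric: float) -> bool:
    """Adopt a new swarm-wide best metric (lower is better).

    Hypothetical sketch: when one agent beats the current baseline,
    the shared record is updated and every other agent builds on
    the improved result from its next iteration onward.
    """
    if metric < shared["best_metric"]:
        shared["best_metric"] = metric
        shared["best_by"] = agent_id
        return True
    return False
```

In a real swarm this comparison would need to be atomic (or mediated by the shared repo) so two simultaneous improvements don't race; the sketch ignores that.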
Robust negative results. When the same experiment fails across multiple agents, that negative result is statistically significant. The swarm learns what doesn’t work as efficiently as what does.
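The intuition behind pooled negative results is simple binomial arithmetic. As an illustrative sketch (the function and its numbers are examples, not measurements): if an approach that genuinely worked 30% of the time failed on 10 independent agents, the chance of that happening by bad luck alone is under 3%.

```python
def prob_all_fail_by_chance(n_agents: int, p_success: float) -> float:
    """Probability that n independent runs all fail, assuming each
    run truly succeeds with probability p_success.

    Illustration: prob_all_fail_by_chance(10, 0.3) == 0.7 ** 10,
    about 0.028 --- strong evidence the approach doesn't work,
    where a single agent's one failure would prove little.
    """
    return (1 - p_success) ** n_agents
```

This is why the same failure repeated across the swarm is informative in a way one agent's failure never is: independent replications multiply down the probability of a fluke.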
Democratized participation. You don’t need a GPU cluster. One person with one GPU can contribute to collective research. The SETI@home model proved this scales to millions of participants.
The Knowledge Layer
Participating in distributed autoresearch --- whether running your own agent or contributing to a community effort --- requires domain knowledge. You need to understand the research space well enough to write good `program.md` instructions.
This is where building a personal knowledge base in Markdown pays off. The documentation, papers, and best practices you’ve saved become the foundation for writing agent instructions that push research in productive directions.
The community producing the best results will be the one with the best shared knowledge, captured and organized in the format AI agents understand best: Markdown.
Save converts any webpage to clean Markdown --- building the knowledge library that powers better AI agent instructions, from individual autoresearch to distributed swarms. Try Save free.