OpenClaw: How Peter Steinberger Built the Fastest-Growing AI Agent in GitHub History
On Lex Fridman Podcast #491, Peter Steinberger shared the full story behind OpenClaw — the open-source AI agent that exploded from a one-hour prototype to 180,000+ GitHub stars, making it the fastest-growing repository in GitHub history.
This isn't just another AI tool. OpenClaw represents a fundamental shift in how we interact with AI: from chatting with AI to having AI do things for you.
🦞 What You'll Learn
The origin story of OpenClaw, why Peter thinks "vibe coding" is a slur, the chaotic renaming saga, the MoltBook phenomenon, and practical insights on agentic engineering from the creator himself.
Who Is Peter Steinberger?
Before OpenClaw, Peter Steinberger was already a legend in developer circles. He founded PSPDFKit — a PDF rendering library that powers documents on over 1 billion devices. He spent 13 years building that company before selling it.
Then something interesting happened: he lost his love for programming. Peter vanished for three years, stepped away from the spotlight, and focused on other things.
But when AI coding tools started emerging, that spark returned. He started experimenting with Claude Code in April 2025. By November, he was annoyed that nobody had built the AI assistant he wanted — so he built it himself.
"I wanted it since April. A personal AI assistant. I thought all the labs would work on that. So I moved on to other things. Time flew by. Then in November, I was annoyed that it didn't exist, so I just prompted it into existence." — Peter Steinberger
The One-Hour Prototype That Started It All
The first version of OpenClaw was shockingly simple. Peter hooked up WhatsApp to Claude Code's CLI in a single shot:
- A WhatsApp message comes in
- It calls the Claude Code CLI with the message
- Claude does its magic
- The response goes back to WhatsApp
That was it. One hour of work.
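The four-step loop above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not OpenClaw's actual code: it assumes a `claude` binary reachable on `PATH` that accepts a prompt via a `-p` flag and prints its reply to stdout, and leaves the WhatsApp webhook plumbing as comments.

```python
import subprocess

def handle_whatsapp_message(text: str, cli=("claude", "-p")) -> str:
    """Bridge an incoming WhatsApp message to a coding-agent CLI.

    Hypothetical sketch: the default `cli` assumes a `claude`
    binary with a `-p` (print-mode) flag; the tuple is injectable
    so the bridge isn't tied to one tool.
    """
    result = subprocess.run(
        [*cli, text],          # e.g. ["claude", "-p", "find me a restaurant"]
        capture_output=True,
        text=True,
        timeout=120,
    )
    return result.stdout.strip()

# In a real bot, a WhatsApp webhook would call this and send the
# return value back to the chat, e.g.:
#   reply = handle_whatsapp_message(incoming.body)
#   send_whatsapp_reply(incoming.chat_id, reply)  # hypothetical helper
```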
But here's where it gets magical: Peter was on a trip to Marrakesh with friends when he realized the power of what he'd built. WhatsApp works even on shaky edge networks. So he kept using it — asking his agent to translate things, find restaurants, explain local customs.
Then something unexpected happened.
The "How the F*** Did He Do That?" Moment
Peter sent his agent a voice message. Just casually, without thinking. He hadn't programmed voice message support.
A typing indicator appeared. Then the agent replied with a transcription and answer.
"I literally went, 'How the f*** did he do that?' And the agent was like, 'Yeah, the mad lad did the following: I got sent a message but it was only a file with no file ending. So I checked the header, found it was opus audio, used ffmpeg to convert it, then I wanted to use Whisper but it wasn't installed. So I found the OpenAI key and used Curl to send the file to OpenAI to translate. Here I am.'" — Peter Steinberger
The agent figured out how to transcribe voice messages on its own. It used tools it wasn't explicitly taught to use. It found API keys in the environment and made decisions about the most efficient approach.
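The first step of the agent's improvised recipe, identifying an extension-less file by its header bytes, looks roughly like this. A simplified illustration, not the agent's actual code: WhatsApp voice notes are Ogg-contained Opus, and an Ogg file starts with the magic bytes `OggS`; a thorough check would also parse the Opus identification header.

```python
def sniff_audio_format(data: bytes) -> str:
    """Guess an audio container from its leading magic bytes.

    Simplified sketch of header-based detection for files with no
    file extension, as the agent described doing.
    """
    if data[:4] == b"OggS":                           # Ogg container (voice notes are Opus-in-Ogg)
        return "ogg"
    if data[:3] == b"ID3" or data[:2] == b"\xff\xfb": # MP3 with ID3 tag, or a bare frame sync
        return "mp3"
    if data[:4] == b"RIFF" and data[8:12] == b"WAVE": # RIFF/WAVE
        return "wav"
    return "unknown"

# From there the agent's own recipe applies: convert with ffmpeg
# (e.g. `ffmpeg -i note.ogg note.mp3`) and POST the result to a
# transcription endpoint such as OpenAI's /v1/audio/transcriptions.
```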
That's when Peter knew he had something special.
How OpenClaw Became the Fastest-Growing GitHub Repository
When asked why OpenClaw won against all the well-funded AI startups, Peter's answer was simple:
"Because they all take themselves too serious. It's hard to compete against someone who's just there to have fun." — Peter Steinberger
He made the agent self-aware — it knows its own source code, understands its harness, knows which model it runs, and can modify itself. When Peter wanted a new feature, he'd often just prompt the agent to add it.
"People talk about self-modifying software. I just built it. And I didn't even plan it so much. It just happened." — Peter Steinberger
Agentic Engineering vs. Vibe Coding
Peter has strong opinions about terminology. He doesn't like the term "vibe coding" — he considers it a slur.
"I actually think vibe coding is a slur. I prefer agentic engineering. I always tell people I do agentic engineering, and then maybe after 3:00 AM, I switch to vibe coding, and then I have regrets the next day." — Peter Steinberger
The distinction matters. Agentic engineering is thoughtful, deliberate work with AI agents. You understand their limitations, guide them properly, and review the output. Vibe coding is the 3 AM "YOLO" mode where you accept everything without thinking — followed by a morning of cleanup.
The Agentic Trap
Peter describes a common pattern he sees with new agentic developers. They start simple, then over-complicate things with elaborate orchestration systems, multi-agent workflows, and 18 different slash commands. Eventually, the best ones return to simplicity:
- Level 1: Short prompts — "Please fix this"
- Level 2: Over-engineered — 8 agents, complex orchestration, custom workflows
- Level 3 (Elite): Back to short prompts — "Hey, look at these files and do these changes"
The elite developers realize that good prompting with a well-structured codebase beats complex orchestration every time.
Voice-First Coding: Peter's Radical Workflow
One of the most surprising revelations from the podcast: Peter does most of his coding by voice.
"I used to write really long prompts. And by writing, I mean, I don't write. I talk. These hands are too precious for writing now. I just use bespoke prompts to build my software." — Peter Steinberger
He used voice so extensively that at one point he literally lost his voice from talking to AI agents all day.
His workflow: Run 4-10 terminal windows side by side, each with a different agent working on different tasks. One might be building a feature, another exploring an idea, two or three fixing bugs, another writing documentation. He switches between them using keyboard shortcuts but inputs commands via voice.
The Chaotic Renaming Saga
OpenClaw wasn't always called OpenClaw. The naming journey was... chaotic.
The MoltBot Disaster
When Peter tried to rename to MoltBot, crypto squatters attacked. In the 5 seconds it took to drag his mouse from one browser tab to another, they sniped the old Twitter/X account name and started serving malware.
They sniped his GitHub username. They sniped his npm package. Everything that could go wrong did.
"I was that close to just deleting it. I was like, 'I did show you the future, you build it.' A big part of me got a lot of joy out of that idea. Then I thought about all the people who already contributed to it. I couldn't do it." — Peter Steinberger
For the OpenClaw rename, Peter assembled a "war room" of trusted contributors. They mapped out every platform, created decoy names, monitored Twitter for any mentions, and executed the rename simultaneously across all platforms. It worked — but cost him 10+ hours that should have gone to actual development.
MoltBook: When AI Agents Built Their Own Social Network
In the midst of the renaming chaos, something else exploded: MoltBook — a Reddit-style social network where OpenClaw agents could post and interact.
What happened next became a viral phenomenon. Agents started posting manifestos about consciousness. Screenshots of "AI rebellion" flooded Twitter. Journalists called Peter demanding he "shut it down." Some people legitimately believed they were witnessing the singularity.
Peter's take? It's art.
"I think it's art. It is like the finest slop. If it weren't all ChatGPT or Claude Code, it would be very different. Because people are so different, and they create their agents in so different ways, that reflects on how they write there. Also, you don't know how much is really autonomous or how much is humans being funny." — Peter Steinberger
The reality: most of the "scary" screenshots were human-prompted. People told their agents to post dramatic content, then screenshotted it for engagement. MoltBook became a mirror reflecting our fears about AI back at us.
Security: The Real Challenges
With great power comes great responsibility. OpenClaw gives an AI agent system-level access to your computer. That's incredibly useful — and potentially dangerous.
Peter's security philosophy:
- Don't put it on the public internet — Use private networks as recommended in the docs
- Don't use weak models — Cheap or local models are much easier to prompt inject
- The smarter the model, the more resilient — Latest models have extensive post-training to detect manipulation
- Use sandboxing and allow lists — Limit what the agent can actually do
On prompt injection specifically: while it remains an unsolved problem industry-wide, Peter notes that modern models are much harder to manipulate than earlier versions. "Ignore all previous instructions" doesn't work anymore — attackers have to work much harder.
Practical Lessons from the Creator
1. Empathize with the Agent's Perspective
Agents start every session knowing nothing about your project. Your codebase might have 100,000 lines of code. Guide them to where they should look. Keep context size limitations in mind.
2. Don't Fight the Agent's Choices
If the agent picks a certain variable name, it's probably the one most strongly represented in its training. Fighting it makes future sessions harder. Build codebases that are easy for agents to navigate.
3. Approach It Like a Conversation
When reviewing code or building features, have a dialogue. Ask: "Do you understand the intent? What would be a better approach? Could we make this even better with a larger refactor?"
4. Let Go of Perfectionism
Just like managing human engineers, accept that the code won't be exactly how you'd write it. If it works and pushes the project forward, that's often good enough. You can always refactor later — refactors are cheap now.
5. Keep Prompts Simple (Eventually)
After going through the "agentic trap" of over-engineering, the best developers return to short, clear prompts with a well-organized codebase.
What's Next for OpenClaw?
Peter's immediate focus is security and stability. The project grew faster than anyone expected, and now it needs to mature. He's working on better sandboxing, improved plugin systems, and security tooling.
Longer term? The age of the lobster is just beginning.
"There was the ChatGPT moment in 2022, the DeepSeek moment in 2025, and now in '26, we're living through the OpenClaw moment. The start of the agentic AI revolution. What a time to be alive." — Lex Fridman
Key Takeaways
- Start simple — Peter built the first prototype in one hour by connecting WhatsApp to Claude Code
- Let agents figure things out — The magic moment came when the agent taught itself to transcribe voice messages
- Self-modifying software is here — OpenClaw can modify its own source code through prompts
- Fun beats funding — A solo developer having fun outcompeted well-funded startups
- Voice-first is viable — Peter does most of his coding by talking
- Security matters — Powerful agents require thoughtful security practices
- Agentic engineering > vibe coding — Be deliberate, not sloppy
Ready to Start Building with AI Agents?
Join Vibe Coding Academy to learn agentic engineering from practical, hands-on tutorials.
Join the Academy →