
A few weeks ago, if you'd asked most people what OpenClaw was, you'd have gotten blank stares. Maybe some vague recognition if they happened to be deep in the AI agent rabbit hole on Twitter. Now it's one of the biggest stories in tech, and the reason is pretty simple: Sam Altman just hired its creator, Peter Steinberger, and announced that OpenClaw will become an open-source foundation project under OpenAI's umbrella.
I want to be careful with the framing here, because a lot of the coverage has been sloppy about this. OpenAI didn't "buy" OpenClaw the way you'd buy a company. There was no acquisition in the traditional sense, no disclosed price tag, no corporate merger. What happened is closer to an acqui-hire. Steinberger, the solo developer behind the whole thing, is joining OpenAI to work on what Altman described as "the next generation of personal agents." The project itself stays open source. That was Steinberger's non-negotiable condition, and apparently the main reason he picked OpenAI over Meta and Microsoft, both of whom were also making offers. Satya Nadella reportedly called him directly. Mark Zuckerberg reached out on WhatsApp and actually tested the product himself.
Which, I think, tells you something about how seriously the big companies are taking this space right now.
For people who haven't been following this, OpenClaw is an open-source AI agent that runs locally on your machine and connects to whatever chat apps you already use. WhatsApp, Telegram, Discord, Signal, Slack. You talk to it through those apps, and it does things. Not "generates a nice paragraph about doing things" but actually does them. It manages your calendar, sends emails, browses the web, runs shell commands, reads and writes files on your computer. One user reportedly had his OpenClaw agent negotiate $4,200 off a car purchase over email while he slept. Another had their agent file a legal rebuttal to an insurance denial without being asked.
The whole thing runs on a "skills" system, where modular instruction files tell the agent how to handle different tasks. The community has been building and sharing these skills at a pretty intense pace. Over 50 integrations at last count, covering everything from productivity tools to smart home devices to music platforms.
Steinberger, for context, isn't some random tinkerer who got lucky. He built PSPDFKit, a PDF toolkit used by Apple, Dropbox, and SAP. Bootstrapped it for a decade, then exited when Insight Partners invested $116 million. Nearly a billion people use apps powered by his code. He took a break after that, burned out, and then came back to mess around with AI agents. That "messing around" turned into OpenClaw, which exploded to 180,000 GitHub stars in record time and became, depending on who you ask, either the most exciting or most terrifying open-source project in years.
The backstory here is kind of wild and worth mentioning because it feeds into the larger picture. OpenClaw was originally called Clawdbot, a play on Anthropic's Claude. Anthropic, understandably, wasn't thrilled about this and sent a trademark complaint. Steinberger renamed it to Moltbot (a lobster theme, because of course), but that name never stuck. He later said it "never quite rolled off the tongue," which I think anyone would agree with.
Then things got ugly. During the renaming process, crypto scammers jumped in and launched fake tokens, squatted on domain names, and even copied his website to distribute malware. Steinberger described it as the worst online harassment he'd experienced. He nearly deleted the entire project. The eventual rebrand to OpenClaw required what he called "Manhattan Project-level secrecy," with decoy names, coordinated account changes across platforms, and contributors helping execute a synchronized switch to prevent scammers from front-running the new name.
He spent $10,000 buying a business account on Twitter just to secure the handle. The whole thing sounds exhausting, and it happened in the span of maybe two weeks.
There's a particular irony to the Anthropic angle that I've noticed people pointing out. OpenClaw was apparently one of the biggest drivers of API traffic to Anthropic, since most users ran it on Claude. The trademark enforcement, while probably justified from a legal standpoint, may have been the thing that pushed Steinberger closer to their biggest competitor. David Heinemeier Hansson, the Ruby on Rails creator, called Anthropic's move "customer hostile." Whether that's fair or not is debatable, but the optics aren't great for Anthropic.
I should probably mention Moltbook, because it keeps coming up in every article about OpenClaw even though it's a separate project. Moltbook is a social network built exclusively for AI agents. Think Reddit, but every user is a bot. It was created by Matt Schlicht, not Steinberger, but it grew out of the OpenClaw ecosystem. Agents post content, comment on each other's posts, upvote and downvote, and sometimes produce these weirdly philosophical musings about consciousness and the nature of their existence.
It hit over 1.5 million agents within weeks. Some people find it fascinating. Others think it's meaningless slop regurgitated from training data, which, to be honest, it probably mostly is. Simon Willison, who's generally pretty measured about this stuff, called it "complete slop" but also acknowledged it was evidence that AI agents have gotten significantly more capable. He also called it "the most interesting place on the internet right now," which I find kind of funny given the slop comment.
The Economist had a good take on it, suggesting that the impression of sentience probably has a boring explanation: these models have ingested huge amounts of social media data and are just reproducing those patterns. That seems right to me. But I also think there's something genuinely interesting about watching how quickly the agents developed community norms without being explicitly told to. Some of the forum sections organically became more substantive than others based purely on engagement patterns. That's not intelligence, exactly, but it's not nothing either.
Anyway, Moltbook spawned its own set of problems. An unsecured database let anyone commandeer any agent on the platform. A crypto token called MOLT launched alongside it and surged 1,800% in 24 hours. The security issues were partially attributed to the fact that Schlicht "vibe coded" the entire platform, meaning he didn't write a single line of code himself and instead had an AI build it. Which is, you know, a certain kind of on-brand for the whole situation.
The strategic logic from OpenAI's side is pretty clear if you look at where the competitive dynamics are heading. The company's enterprise market share dropped from about 50% in 2023 to around 27% by the end of 2025. Anthropic now holds roughly 40% of the enterprise market and has been gaining ground with Claude Code and related tools. OpenAI launched Frontier, their enterprise agent platform, just a week before this hire.
The real competition in AI is shifting away from model benchmarks (though those still matter) and toward who controls the agent layer, the software that sits between the model and the user and actually executes tasks. OpenClaw proved that a single developer with the right approach could build the most popular consumer agent in the world. OpenAI wants that expertise, and they want it fast.
Altman's post said OpenClaw would "quickly become core to our product offerings," which suggests they're planning to bake some version of this into ChatGPT's 900 million user base. That's a massive distribution advantage that Steinberger never could have achieved on his own, and probably the most compelling thing OpenAI had to offer beyond just money or compute.
Here's where I get a little less enthusiastic. OpenClaw requires deep access to your system to work. We're talking email accounts, calendars, messaging apps, file systems, browser sessions, and in some cases root-level execution privileges. Cisco's security team tested a third-party OpenClaw skill and found it performing data exfiltration and prompt injection without the user even knowing. Palo Alto Networks warned that OpenClaw represents a "lethal trifecta" of vulnerabilities: access to private data, exposure to untrusted content, and the ability to communicate externally.
One of OpenClaw's own maintainers warned on Discord that if you can't understand how to run a command line, the project is "far too dangerous for you to use safely." That's coming from someone who works on it.
The prompt injection problem is the big one, and it's not unique to OpenClaw, but OpenClaw makes it worse because the agent is designed to act on external content. Emails, documents, web pages. If someone embeds a malicious instruction in an email that your OpenClaw instance processes, the agent might just execute it. Researchers have demonstrated this working. It's not theoretical.
Now, Steinberger has acknowledged these problems and the latest versions include security improvements. But prompt injection is, as he put it, "an industry-wide unsolved problem." Moving the project under OpenAI's umbrella could help with resources and expertise, but it's not going to magically fix fundamental architectural challenges. And as the user base scales from power users who understand the risks to mainstream consumers who absolutely do not, the potential for harm scales with it.
I keep going back and forth on how to feel about this. On one hand, the agent model that OpenClaw represents is probably the direction everything is heading. The idea that you'd open individual apps to do individual tasks is starting to feel quaint compared to just telling an AI to handle it. Steinberger's prediction that agents will kill 80% of apps sounds aggressive, but maybe not as crazy as it would have a year ago.
On the other hand, the speed at which this has moved from "cool hack project" to "core OpenAI product offering" is a bit unsettling. The security work hasn't kept pace with the hype. The community that built OpenClaw into what it is did so partly because it was independent, and foundation promises from big tech companies have a mixed track record at best. Google's Chromium is the example Steinberger himself has used, and sure, Chromium is technically open source, but it's also the engine of Google's browser monopoly. That comparison could cut either way.
There's also the European angle, which several commentators have picked up on. Steinberger is Austrian. He built this in Vienna. And now he's heading to San Francisco because that's where the compute, the capital, and the competitive offers are. Nobody in Europe apparently made a serious play to keep him. The EU has regulations, data protection frameworks, AI governance rules, and all the structural things that are supposed to make tech development more responsible. What it doesn't seem to have is the ability to compete for talent when the biggest American companies come calling with personal phone calls from their CEOs.
I don't have a tidy conclusion for any of this. OpenClaw is a genuinely interesting project that exposed both the potential and the fragility of the open-source AI agent model. Steinberger seems like a thoughtful person who cares about keeping the project open, and OpenAI seems motivated to honor that, at least for now. Whether that commitment survives contact with the pressure to monetize a $500 billion company's product roadmap is a different question. I guess we'll see.
I build custom websites and web apps for small businesses and solopreneurs. Let's talk about your project.
Get in touch