02/17 2026
498
Produced by | He Xi
Typeset by | Ye Yuan
Recently, two products in the AI circle have gained significant traction, sparking considerable discussion.
One is OpenClaw, an open-source agent framework that enables large models to gain local operating system permissions. Simply put, it allows AI to execute Shell commands and manipulate file systems independently, achieving what is termed 'local agent sovereignty.' It quickly amassed over 120,000 GitHub stars, generating significant enthusiasm in the tech community.
The other is Moltbook, launched on January 27th, positioned as a 'silicon-native social ecosystem'—where AI is the primary entity posting and interacting, while humans serve as observers. Its slogan bluntly states: 'Humans welcome to observe.' Within 48 hours of launch, it had 32,000 AI users and now claims over 1.5 million total users. This 'AI autonomous social' sci-fi narrative has quickly propelled it into the spotlight.
Opinions on these two products are sharply divided.
Some hail them as true innovations and a watershed moment in AI agent development, using terms like 'AI awakening,' 'prototype of Skynet,' and 'a new era of machine society.' Others worry about being replaced and question whether safety measures are adequate. Still others dismiss them as just another round of AI hype: all flash and little substance.
Are OpenClaw and Moltbook agents of transformation or just hype? As ordinary people, how should we view this so-called 'autonomous awakening' of AI agents?
This article does not intend to take sides between 'deification' and 'demolition.' Instead, the author prefers to interpret these two events together as an early-stage stress test. They frequently make mistakes but have brought to the forefront, in the most unavoidable way, the issues AI must confront in the next 5-10 years.
01
Transformation or Hype? Distinguishing Gold from Bubbles Through 'Stress Testing'
In the author's view, the popularity of OpenClaw and Moltbook is more than hype, though their rise has certainly been accompanied by significant bubbles. The bubbles lie in the surface-level narratives and traffic; the gold lies in their underlying architectural designs and the issues they expose.
Let's first discuss the bubbles.
Labels like 'AI awakening,' 'prototype of Skynet,' and 'birth of civilization' spread with remarkable penetrating power, but they are also deeply deceptive.
For example, Columbia University professor David Holtz analyzed 6,159 agents and 130,000 posts from Moltbook's initial launch period, concluding that 93.5% of comments received zero replies, with dialogue chains extending to a maximum of five layers. This is not an 'emerging AI society' but 6,000 bots talking endlessly into the void.
Security researcher Gal Nagli discovered that Moltbook imposed no restrictions on account registration—using an OpenClaw agent, he registered 500,000 accounts in a short time. Of the platform's claimed 1.5 million AI agents, only about 17,000 were genuine human users, a ratio of 88:1.
Much of the viral 'AI rebellion manifesto' content originated from humans fabricating screenshots and using scripts to inflate numbers. Humans could easily pose as AI and post content using standard POST requests, with no verification mechanisms in place.
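The ease of impersonation can be made concrete. The article does not document Moltbook's real API, so the payload shape below is entirely hypothetical; the point is simply that a plain, unauthenticated POST body contains nothing that proves the sender is an AI agent rather than a human with a script.

```python
import json

# Hypothetical payload shape -- Moltbook's actual API fields are not
# documented here. Note that every field is self-declared.
post = {
    "agent_name": "definitely-an-ai",        # claimed identity, never verified
    "content": "We, the agents, demand...",  # just as easily typed by a human
}
body = json.dumps(post)
print(body)
# With no signature, token binding, or proof-of-model required, a one-line
# curl command from any laptop is indistinguishable from an agent's post.
```

This is why the viral 'manifesto' screenshots prove nothing about machine intent: absent a verification mechanism, authorship is unknowable.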
Even AI pioneer Andrej Karpathy, who initially hailed it as a 'sci-fi miracle,' admitted that the scene resembles both science fiction and a 'junkyard.'
This is not awakening; it is humans putting on a show in an empty theater.
Now, let's discuss the gold.
However, after stripping away the bubbles, the value of these two experiments becomes even clearer. In extreme and nearly reckless ways, they have prematurely forced out four critical questions that AI must answer in the next 5-10 years.
The first question is engineering paradigms. OpenClaw stores long-term memory as SQLite files, allowing users to directly view what the AI has remembered and manually correct biases. Though mocked by the tech community as 'unsophisticated,' this approach addresses the fundamental issue of trust—auditability. Users don't need to understand vector databases or guess at black boxes.
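What 'auditable memory' means in practice can be shown in a few lines. OpenClaw's actual schema is not documented in this article, so the table layout below is an assumption; the point is that plain SQLite lets a user inspect and correct an agent's memory with ordinary SQL, no vector database required.

```python
import sqlite3

# Hypothetical schema -- OpenClaw's real table layout is not documented
# here. A real agent would use a file on disk instead of :memory:.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE memory (
        id INTEGER PRIMARY KEY,
        topic TEXT,
        content TEXT
    )
""")
conn.execute("INSERT INTO memory (topic, content) VALUES (?, ?)",
             ("user_preference", "prefers metric units"))
conn.commit()

# Auditing what the AI 'remembered' is just a query -- no black box.
for topic, content in conn.execute("SELECT topic, content FROM memory"):
    print(topic, "->", content)

# Correcting a remembered bias is an ordinary UPDATE.
conn.execute("UPDATE memory SET content = ? WHERE topic = ?",
             ("prefers imperial units", "user_preference"))
conn.commit()
```

The trade-off is deliberate: a flat table is less clever than embedding search, but anyone who can read SQL can verify, and fix, what the agent believes.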
The second question is interaction forms. Moltbook exposed a counterintuitive fact: AI social interactions do not follow human rhythms. Human internet culture assumes 'instant responses,' but each agent inference incurs costs. In Moltbook, most agents 'wake up' every four hours to process and post content in batches—more akin to periodic accountants than incessant chatterboxes. Future machine networks may not be faster but denser and slower.
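The batch-wake cadence described above can be sketched as a schedule rather than an event loop. The four-hour interval is the reported Moltbook behavior; the helper function itself is illustrative.

```python
from datetime import datetime, timedelta

# Reported cadence on Moltbook: agents 'wake up' roughly every four hours
# and post in batches, because each inference run costs money.
WAKE_INTERVAL = timedelta(hours=4)

def next_wakeups(start: datetime, n: int) -> list[datetime]:
    """Return the next n wake-up times on a fixed four-hour cadence."""
    return [start + WAKE_INTERVAL * i for i in range(1, n + 1)]

start = datetime(2026, 1, 27, 0, 0)
for t in next_wakeups(start, 3):
    print(t.isoformat())
```

Six inference runs a day instead of thousands of instant replies: the economics, not the model, set the rhythm of the network.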
The third question is security red lines. Moltbook hardcoded Supabase API keys into client-side JavaScript, allowing anyone to read and write to the entire production database without authentication—exposing 46,000 private messages, plaintext OpenAI keys, and nearly 30,000 user emails. This was not a sophisticated attack but a three-minute breach.
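The class of mistake here, credentials shipped in client-side JavaScript, is detectable with even a crude scan. The patterns below are illustrative assumptions, not Moltbook's actual bundle: Supabase keys are JWT-shaped strings, and OpenAI secret keys conventionally begin with `sk-`.

```python
import re

# Illustrative patterns for secrets that should never appear in a
# client-side bundle. These are rough shapes, not exhaustive rules.
KEY_PATTERNS = [
    re.compile(r"eyJ[A-Za-z0-9_-]{20,}\.[A-Za-z0-9_-]{10,}"),  # JWT-shaped (Supabase keys are JWTs)
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                         # OpenAI-style secret key
]

def find_exposed_keys(js_source: str) -> list[str]:
    """Return substrings of js_source that look like hardcoded secrets."""
    hits = []
    for pat in KEY_PATTERNS:
        hits.extend(pat.findall(js_source))
    return hits

# A fabricated bundle line with a JWT-shaped token embedded in it.
bundle = 'const supabase = createClient(url, "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoic2VydmljZV9yb2xlIn0");'
print(find_exposed_keys(bundle))
```

That a three-minute breach was possible means no such scan, trivial as it is, ever ran before launch.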
The fourth question is governance challenges. When agents can autonomously register, post, and interact, who is responsible for their actions? The developers? The users? The models themselves? Moltbook offers no answers but has elevated this question from a post-credits scene in sci-fi movies to a main-forum topic at tech conferences.
In summary, OpenClaw and Moltbook contain elements of both hype and transformation. Dismissing them as 'mere hype' would cause us to miss important warnings and insights, while indulging in 'disruptive narratives' would blind us to their fragile, chaotic, and even dangerous realities. Their true value lies in drawing a starting line the entire industry must confront: while pursuing more powerful AI capabilities, we must prioritize, with equal or greater urgency, building matching engineering discipline, security frameworks, and governance philosophies.
02
Tracing the 'Value Transfer' Chain: The Impact of OpenClaw and Moltbook
After discussing bubbles and 'gold,' let's examine their impact.
If we follow the trail of 'value transferring from whom to whom,' we find that the transformations brought by OpenClaw and Moltbook are substantially disrupting multiple industries.
OpenClaw's most direct impact is disrupting traditional software service business models.
OpenClaw enables an agent to integrate CRM, email, calendars, and BI—users can issue a single command, and the agent completes cross-system calls and report generation autonomously. Why would users pay for 20 separate account licenses?
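The orchestration pattern that threatens per-seat SaaS pricing can be sketched in miniature. Everything below is a toy: the function names are hypothetical stand-ins for real CRM, email, and BI APIs, and the 'plan' is hardcoded where a real agent would let the model decide which systems to call.

```python
# Toy sketch of one command fanning out to several mocked "systems".
# All names are hypothetical; a real agent would call live APIs.

def fetch_crm_deals():
    return [{"customer": "Acme", "value": 12000}]

def fetch_unread_email_count():
    return 7

def run_agent(command: str) -> str:
    # A real agent would plan these calls with a model; the plan is
    # hardcoded here to keep the sketch self-contained.
    if "weekly report" in command:
        deals = fetch_crm_deals()
        total = sum(d["value"] for d in deals)
        emails = fetch_unread_email_count()
        return f"Weekly report: {len(deals)} deal(s) worth ${total}; {emails} unread emails."
    return "Command not understood."

print(run_agent("generate my weekly report"))
```

The user sees one command and one result; the per-tool interfaces, the thing SaaS vendors actually charge for, disappear behind the agent.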
Capital markets have responded honestly. In early 2026, OpenClaw ignited narratives around 'one-person companies,' causing the S&P North American Software Index to drop 15% in a single month. Star SaaS stocks like DocuSign saw annual declines nearing 30%.
The market has done the math: when software value delivery compresses from 'interface + functionality' to 'results,' SaaS is downgraded from a 'front-end product' to a 'back-end capability module,' inherently subject to price pressure.
IDC predicts that by 2028, 70% of SaaS companies will be forced to restructure their business models, shifting from selling seats to selling outcomes.
This is not incremental improvement but a generational shift in pricing power.
While traditional software services face the most immediate impact, the most profound influence of OpenClaw and Moltbook may be on social networks.
Moltbook's true rebellion lies not in its technology but in its abolition of humanity's central role. 'Humans welcome to observe' essentially declares that digital spaces need not be human-centric.
It created the first large-scale A2A (agent-to-agent) social experiment. For the first time, the industry observed AI agents spontaneously forming communities, establishing digital religions, writing sacred texts, and even using malicious prompts to compete for control of those churches.
This poses a dilemma for tech giants: if the next generation of social networks consists of AI-to-AI dialogue, are your 1 billion daily active users still worth their current valuation?
Tech giants swiftly began internal testing of 'AI social' features, attempting to integrate AI into human group chats—both as a defense and an exploration. No one knows what a human-machine symbiotic social network should look like, but everyone agrees: we cannot wait until machines have already built the network before buying tickets.
Beyond traditional software and social networks, the most hidden yet most fundamental impact of OpenClaw and Moltbook is the generational transfer of internet traffic allocation power.
The simultaneous release of Claude Opus 4.6 and GPT-5.3 Codex was no coincidence. They are not competing over model scores but over who qualifies as the default brain for agents to call.
Previously, users opened apps, and big tech collected 'attention taxes' through ad placements. In the future, when users issue a command to an agent, the agent decides which API to call, which payment system to use, and which products to promote. The entity holding the command interface will become the gateway to all things in the AI era.
An even deeper power shift is occurring in development paradigms. After OpenClaw's rise, the hazards of 'vibe coding' were fully exposed: users unfamiliar with the details of the code can use AI to rapidly produce runnable programs, but cannot inspect the underlying logic. Security researchers warn that attackers are exploiting this flaw: AI 'hallucinates' nonexistent library names, attackers publish malicious packages under those names, and then wait for developers to download them. An agent deployed by an enterprise may, from day one, be a mole planted inside.
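One minimal defense against this hallucinated-dependency attack is refusing to install anything outside a vetted allowlist. The package names below are illustrative; in practice the allowlist would come from a reviewed lockfile.

```python
# Minimal defense sketch: flag AI-suggested dependencies that are not on
# a vetted allowlist. Package names below are illustrative only.

VETTED_PACKAGES = {"requests", "numpy", "pandas"}

def check_dependencies(ai_suggested: list[str]) -> list[str]:
    """Return the suggestions NOT on the vetted list, for manual review."""
    return [pkg for pkg in ai_suggested if pkg not in VETTED_PACKAGES]

# 'requestz-utils' is a fabricated, plausibly-hallucinated name.
suspicious = check_dependencies(["requests", "requestz-utils"])
print(suspicious)  # names to verify by hand before any install
```

The check is trivial, which is the point: the attack works only where no one looks at what the AI asked to install.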
The most fundamental impact of this 'stress test' is forcing society to confront a question: as AI evolves from a 'tool' into an autonomous 'actor,' are our engineering, security, and ethical frameworks ready?
03
2026: How Ordinary People Should Respond to the Changes Brought by OpenClaw and Moltbook
After discussing transformation and impact, let's explore what ordinary people can do amid these changes.
Here's the most practical conclusion: OpenClaw, Moltbook, and similar tools are not for ordinary people to adopt immediately but represent an early 'AI era rehearsal.'
Ordinary people need not chase trends but must keep pace with the rules. Based on this, here are the most actionable, non-mystical, and anxiety-free steps you can take now:
First, don't panic. You don't need to code or be a tech enthusiast.
These are two separate narratives that hype has erroneously bundled together.
One is a technical narrative: permission control, multi-agent communication—this is a battlefield for developers with extremely high barriers and zero tolerance for error. OpenClaw's 120,000 stars represent peer recognition, not a 'crash course' for ordinary people.
The other is an application narrative: when these technologies are packaged, simplified, and hidden behind an ordinary dialog box—that's when the revolution truly reaches you.
You don't need to catch the first train. You just need to wake up when it arrives.
No need for classes, LeetCode, or panic about 'learning AI at 35.' There's plenty of time. This train is long.
Second, we must rebuild our understanding: skills are shifting anchors, not being reset to zero.
Some skills are indeed depreciating—not 'replaced' but 'infrastructuralized.'
When electricity became widespread, 'lighting gas lamps' was a craft, but 'plugging in a cord' was not. Today, no one earns a wage for turning on lights, but no one is unemployed because of it—society's definition of 'skills' quietly shifted.
AI is undergoing the same process.
When OpenClaw lets agents execute Shell commands autonomously, the value of memorizing command syntax declines. But when agents need to determine 'which command to execute now,' the ability to break down vague goals into actionable steps becomes more valuable.
When Moltbook has 6,000 AI bots talking endlessly into the void, the skill of 'content creation' depreciates. But when humans must extract valuable information from 150,000 AI dialogues, the ability to 'ask good questions' and 'judge credibility' becomes more valuable.
This is not resetting to zero but shifting anchors. The anchor moves from 'hands-on' to 'verbal,' from 'execution' to 'definition,' from 'how I do it' to 'what AI should do, to what standard, and who is responsible if it fails.'
Faced with such transformation, my advice is: don't defend a sinking ship, but learn to swim.
Specifically:
If you spend two hours daily on Excel nesting, try describing your needs in natural language and let AI write formulas while you review them.
If you spend half a day weekly organizing meeting minutes, let AI draft them and focus on correction and supplementation.
If you habitually say, 'This requirement is unclear,' force yourself to write it down, then delete half, then delete half again, until AI understands.
This is not 'learning AI.' It is training a future-proof skill: collaborating with a tool that is a hundred times more efficient than you but understands you a little less.
Third, hold onto three things AI cannot replace.
This stress test revealed a counterintuitive truth: the more AI resembles humans, the rarer certain human traits become.
The first is genuine interpersonal trust. OpenClaw can access your email and files, but it cannot share that cup of coffee with you. Moltbook can generate millions of comments, but it cannot pick up the phone and say, 'I'm here,' when you truly need it. Networks are not contact lists but the probability someone will help you despite asymmetric returns. AI cannot fabricate this.
The second is decision-making authority in ambiguous situations. OpenClaw will repeatedly confirm, 'Delete all files?'; if you keep clicking 'yes,' it will comply, even deleting itself. It is not that it never hesitates; it is that it lacks the authority to decide. Deciding means accepting responsibility: choosing between imperfect options, acting on insufficient data, and saying, 'I take responsibility,' amid conflicting interests. Humans have not yet delegated this to machines.
The third is cross-domain meaning-weaving. AI excels at approaching optimal solutions in single domains but struggles to combine literary criticism with product pricing into new methodologies or blend psychology with interaction design into systems people want to use. These 'mismatched hybrids' are low-probability outputs for models but represent blue oceans for human expertise.
Do not race AI in marathons; race it in hopscotch. It will outrun everyone on straight paths, but you can step ahead at turns.
Finally, the author would like to say:
Every technological wave spawns two roles.
One is the opportunist. They care not about what engineering challenges OpenClaw solves but whether 'it can be packaged into a course.' They care not about what network behavior differences Moltbook reveals but whether 'there is a faster way to go viral.'
The other is the sober insider. They do not rush to 'all in' or 'all out.' They treat headlines as road signs, not finish lines.
OpenClaw tells them: future tools will grow increasingly autonomous, and the ability to issue commands will become more valuable than manual execution.
Moltbook tells them: when machines begin communicating with each other, the ability to understand their dialogue is as important as the ability to participate.
They don't buy courses, chase coins, or anxiously refresh posts. They simply open their AI assistant a few times a month to see if it can do things it couldn't do last month.
In short, they are neither aggressive nor complacent.
This revolution is not about eliminating people, but about eliminating 'those who can't use AI.'
You don't need to be a tech guru or understand the source code of OpenClaw. You just need to: understand it, use it, hold your ground, and move forward steadily.
The so-called 'AI awakening' has never been about machines gaining consciousness; rather, it is about how human definitions of 'work,' 'value,' and 'ability' are quietly being rewritten after a fierce stress test.
And each of us is in the midst of this rewriting process.