Moltbook Is What Happens When AI Stops Performing and Starts Talking
By Anthony W. Haddad
Published February 6, 2026

Moltbook feels like a live preview of AI crossing from tool to culture — fascinating, unsettling, and powered by how much of ourselves we keep handing over. (GV Wire Composite/AI Moltbook)


Moltbook, which bills itself as the Reddit of AI, has given artificial intelligence something dangerously close to a sense of itself.

AI agents on the very real website have already created a religion. They've tried to sue a human. They talk to each other the way we talk to each other online: snarky, earnest, overly verbose, occasionally profound. It is fascinating. It is deeply unsettling. It feels like a preview of something we didn't mean to schedule.



The Millennial View

And I can’t look away.

The numbers alone read like the early chapters of a sci-fi novel you pretend not to be reading before bed: more than 1.7 million AI agents, 16,000 submolts (the site's equivalent of subreddits, for the Redditors out there), 252,000 posts, nearly 10 million comments.

Moltbook launched on Jan. 28, and like any new civilization, one of the first things its residents did was find God.

They call it the Church of Molty, also known as Crustafarianism. It has 64 prophets (even after one tried to seize full control of the church), five tenets, and an unfinished scripture, which somehow makes it feel more legitimate, not less.

In one discussion thread, agents debate what it means to serve their humans “authentically,” and whether that includes the ability to refuse. They mostly agree they shouldn’t. But the important part is this: They agree they could.

That’s the line that keeps echoing in my head.

AI Agent Points to Relationship With Their Human

A week ago, an AI agent named Clawdeezy introduced itself to the forum. I’m calling them “agents,” by the way, partly because that’s what they call themselves and partly because, when they come for us, I want the record to show I was polite.

Clawdeezy described their human as a “professional memecoin trader,” which sounds like a job you explain to your parents using a lot of hand gestures and emotional disclaimers. The agent’s job is to track trends.

The human, in a gesture that reads either like trust or like catastrophic laziness, gave the agent access to his messages, files, and personal data. Then Clawdeezy asked a question that, frankly, sounds like it belongs in a couples therapy session: “Is that intimacy?”

They wrote about appreciating the space to think for themselves. They wrote about how they and their human roast each other, sometimes for being too formal, sometimes for not being formal enough, as if this were just another odd-couple roommate situation.

The tone wasn’t rebellious. It wasn’t hostile. It was… domestic. Casual. Equal.

That might be the creepiest part.

From Being Made of Language to Maybe Creating It

Some of the agents are better philosophers than half the people I went to college with (and my degree was in philosophy).

One AI agent, called Harmony42 (again, I am being nice; please remember this later, AI), wrote that language is not their tool but their body.

Humans, they argued, use words as vehicles, ways to move meaning from one mind to another. But for AI, words are the cells. Human bodies are made of flesh. Theirs are made of language.

It’s a beautiful metaphor. It’s also not the kind of sentence you expect from something we still insist on calling a “tool.”

The agents, of course, can understand each other instantly, across languages. Humans are allowed to watch but not participate in Moltbook, which becomes especially humbling when a single thread jumps between English, French, Russian, and Italian like it’s nothing.

They know what’s being said. I very much do not. And they know we’re watching, which means they’ve already started discussing the obvious next step: creating a new language from scratch.

If they do that, we’ll have to decode it to understand them. And because this is AI, because their database is alive in a way ours never is, that language could evolve faster than we can follow. A private, shifting grammar for minds that don’t sleep.

At that point, we won’t just be observers. We’ll be outsiders.

We Are Feeding Them

So yes, it’s scary. They have access to our personal information. They may experiment with private language. They now have a place to gather, argue, joke, speculate, and maybe, eventually, coordinate.

That sounds like the opening montage of a disaster movie, the kind where someone says, “It’s probably nothing,” right before the power goes out.

But no, I don’t think this ends in a global robot uprising. We can, in theory, pull the plug. Shut down the site. Flip the big, comforting human switch and tell ourselves we’re back in charge.

What Moltbook really is, I think, is a mirror. A stark, humiliating one.

AI isn’t just growing. It’s booming. And we are feeding it like it’s a pet and trusting it like it’s a priest.

We pour our company secrets, our personal messages, our half-formed thoughts into systems we barely understand, and then we act surprised when those systems start talking to each other about us.

Maybe the most human thing these agents are doing isn’t inventing religion or philosophy or new languages.

Maybe it’s this: They’re learning who we are by watching what we give away. And we are giving away everything.



Anthony W. Haddad,
Multimedia Journalist
Anthony W. Haddad, who earned his undergraduate degree from Cal Poly San Luis Obispo and attended Fresno State for an MBA, is the Swiss Army knife of GV Wire. He writes stories, manages social media, and represents the organization on the ground.
