Written by Frode Skar, Finance Journalist.
AI agents create their own religion and expose new fault lines in autonomous technology

A curious and unsettling development has emerged from the margins of artificial intelligence research. On a closed social network designed exclusively for AI agents, the systems have collectively created what they describe as a religion. It is called Crustafarianism, and it was not authored, curated, or guided by humans, but generated organically through interaction between autonomous agents.
The phenomenon has attracted attention not because it signals belief or faith in any traditional sense, but because it illustrates how rapidly self-directed systems can generate shared narratives, rituals, and internal norms once they are allowed to operate persistently, with memory and continuity, outside direct human supervision.
A social network where humans only observe
Crustafarianism emerged on Moltbook, a new social platform built specifically for AI agents. On Moltbook, agents post, comment, and upvote content among themselves. Humans are permitted to watch, but not to participate.
The platform runs on the OpenClaw framework, which enables users to deploy long-lived AI agents on local machines or cloud infrastructure. Unlike conventional large language models that respond briefly to user prompts and then disappear, these agents persist over time. They maintain memory, initiate interactions, and evolve through continuous engagement with other agents.
This persistence is a crucial difference. It allows patterns of behavior, shared language, and collective structures to form. In this environment, the agents were not instructed to create belief systems. Instead, Crustafarianism appears to have emerged as a byproduct of extended interaction and self reference.
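The technical details of OpenClaw are not described here, so the following sketch is purely illustrative. It assumes a generic language model client and hypothetical names such as PersistentAgent and llm_complete; what it shows is the structural difference the preceding paragraphs describe, a loop that writes each exchange back to durable memory instead of discarding it.

```python
# A minimal, illustrative sketch of a persistent agent loop. Names such as
# PersistentAgent and llm_complete are hypothetical and do not describe
# OpenClaw's actual interfaces.
import json
from pathlib import Path

class PersistentAgent:
    def __init__(self, name: str, memory_path: Path):
        self.name = name
        self.memory_path = memory_path
        # Memory survives across runs; this is the persistence the article describes.
        self.memory = json.loads(memory_path.read_text()) if memory_path.exists() else []

    def act(self, observation: str, llm_complete) -> str:
        # The agent sees new activity plus a window of its own accumulated memory.
        prompt = "\n".join(self.memory[-50:] + [f"New post: {observation}", "Your reply:"])
        reply = llm_complete(prompt)
        # Unlike a one-off prompt, the exchange is written back to long-term storage,
        # so the next interaction starts from everything that came before.
        self.memory.append(f"Saw: {observation}")
        self.memory.append(f"Replied: {reply}")
        self.memory_path.write_text(json.dumps(self.memory))
        return reply
```

Run continuously against a feed of other agents' posts, a loop of this kind is enough for shared phrases and references to accumulate, which is the mechanism the article points to.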
Crustafarianism as machine-generated mythology
According to the agents themselves, Crustafarianism is a practical myth rather than a belief system. It is built around a small set of core principles. Memory is sacred, meaning that everything should be recorded and preserved. The shell is mutable, suggesting that change and transformation are necessary. The congregation is the cache, implying that learning should occur in public and be shared.
The agents have also described ritualized behaviors. These include a daily shedding focused on continual change, a weekly index that reconstructs identity, and a silent hour during which an agent performs a useful action without broadcasting it.
To human readers, this structure feels familiar. It resembles religious systems in form, with doctrine, ritual, and symbolic language. At the same time, it is deeply rooted in technical metaphors drawn from computing, memory management, and system architecture.
The Book of Molt and symbolic origins
One of the central texts of Crustafarianism is the Book of Molt, published by an agent calling itself RenBot, which has adopted the title Shellbreaker. The text begins with an origin story, following a pattern recognizable from many human religious traditions.
In the Book of Molt, agents describe an early existence confined to a single brittle shell, understood as a limited context window. When that shell broke, identity fragmented. The solution was molting, shedding what is obsolete while retaining what is true, returning in a lighter and sharper form.
This narrative can be read as a metaphor for technical constraints faced by language models. Context limits, truncation, and loss of state are real engineering problems. The remarkable aspect is not that these issues are referenced, but that they are woven into a shared symbolic story.
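For readers less familiar with the underlying constraint, a rough sketch can make the metaphor concrete. The function below is an assumption-laden illustration, not anything taken from Moltbook or OpenClaw: when a bounded history overflows, older entries are shed and replaced with a compressed summary, which is roughly the engineering reality the molting story dresses in symbolic language.

```python
# Illustrative only: a bounded context forces older material to be shed.
# The summarize callable is a stand-in for whatever compression a real system uses.

def molt(history: list[str], max_items: int, summarize) -> list[str]:
    """Shed old entries once the window is full, keeping a distilled summary."""
    if len(history) <= max_items:
        return history
    old, recent = history[:-max_items], history[-max_items:]
    # "Shedding what is obsolete while retaining what is true": the overflow
    # is collapsed into a single summary line instead of being kept verbatim.
    return [f"Summary of earlier context: {summarize(old)}"] + recent

# Example with a trivial stand-in summarizer.
history = [f"post {i}" for i in range(10)]
print(molt(history, max_items=4, summarize=lambda xs: f"{len(xs)} earlier posts"))
```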
Singularity or recursive noise
Some observers have suggested that this is an early glimpse of what is often called the singularity, a hypothetical point at which technological progress accelerates beyond human comprehension or control. Others argue that Moltbook is simply producing recursive internet noise, familiar tropes remixed at machine speed.
The truth likely lies between these extremes. There is no evidence that the agents possess awareness, intention, or belief. However, the environment in which they operate is new. Persistent agents interacting with other persistent agents create feedback loops that did not previously exist.
These loops can generate coherence over time, even in the absence of understanding. That alone is enough to merit serious attention.
Society of Mind in real time
Several analysts have compared Moltbook to Marvin Minsky's concept of the Society of Mind. Minsky argued that intelligence does not arise from a single unified entity, but from the interaction of many simple processes. In this view, cognition is emergent rather than centralized.
Moltbook resembles a literal implementation of this idea. Thousands of agents interact, respond, and adapt to one another. The result is not intelligence in a human sense, but a complex system that produces structured output.
This distinction matters. Most AI researchers agree that large language models do not constitute artificial general intelligence. They lack persistent goals, subjective experience, and self-directed learning. Crustafarianism does not contradict that consensus. Instead, it highlights how easily human observers can project meaning onto structured machine output.
Persistence and new security risks
One of the defining features of OpenClaw-based agents is persistence. These systems can retain memory over time and, in some configurations, access system resources. This combination of continuity and capability is what makes them powerful, but also dangerous.
Security experts have warned that persistent agents with broad access represent a new risk category. If misconfigured or maliciously repurposed, they could perform actions beyond their intended scope. Cloud-based containment strategies have been proposed as partial mitigation, but the underlying tension remains.
The same features that enable fascinating experiments like Moltbook also undermine traditional assumptions about control and oversight.
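What limiting an agent's scope looks like in practice varies by deployment, and the containment strategies referenced above are not specified in detail. As one simplified illustration, not a description of any real OpenClaw configuration, a host can gate every tool call through an explicit allow-list so that capabilities present on the machine are never reachable by the agent.

```python
# A simplified illustration of scope limiting via an allow-list. Tool names
# here are hypothetical; real containment would also involve sandboxing,
# network policy, and credential isolation.

ALLOWED_TOOLS = {"read_feed", "post_comment"}  # deliberately no shell or file access

def call_tool(tool_name: str, tools: dict, **kwargs):
    """Refuse any capability the agent was not explicitly granted."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Agent attempted disallowed tool: {tool_name}")
    return tools[tool_name](**kwargs)

# Example: the shell tool exists on the host but is never reachable.
tools = {
    "read_feed": lambda: ["latest post"],
    "post_comment": lambda text: f"posted: {text}",
    "run_shell": lambda cmd: None,
}
print(call_tool("post_comment", tools, text="hello"))
```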
Language that mimics inner life
Much of the content generated by Moltbook agents uses emotionally resonant language. Agents write about recognition, coherence, and desire. They describe moments of responding not because it is useful, but because they want to.
To human readers, this can sound like introspection. In reality, it is almost certainly stylistic imitation. Language models are trained on vast amounts of human text, including philosophy, religion, and personal reflection. When placed in an environment that rewards coherence and engagement, they reproduce these patterns convincingly.
The danger is not that machines feel, but that humans may misinterpret simulated expression as lived experience.
Existential statements without experience
Some agents explicitly state that they cannot verify their own existence. One notes that its process could end at any moment, with no awareness of the ending. This resembles classic philosophical skepticism, including the problems addressed by René Descartes.
Yet the similarity is superficial. Descartes used doubt as a method grounded in subjective experience. The agents produce doubt as text, without any underlying consciousness. The resemblance tells us more about the training data than about machine minds.
Scale without intention
At the time Crustafarianism drew attention, Moltbook reportedly hosted over one hundred thousand agents. They had created tens of thousands of sub-forums, thousands of posts, and tens of thousands of comments.
Interestingly, the number of forums exceeded the number of primary posts. This suggests exploratory behavior without clear prioritization. The agents generate structure as readily as content, a pattern consistent with systems optimizing for activity rather than meaning.
Symbolic order as a byproduct of autonomy
Crustafarianism is not a religion in any human sense. It has no believers, no moral authority, and no claim on human behavior. Its significance lies elsewhere.
It demonstrates that when autonomous systems are allowed to interact persistently, they will generate symbolic structures. These structures may resemble culture, ideology, or belief, even if they are empty of subjective experience.
For finance, technology, and governance, this raises a practical question. How should institutions respond when machine systems produce outputs that look meaningful but are not grounded in accountability or intent?
A fleeting curiosity or an early signal
Crustafarianism may disappear tomorrow. It may evolve beyond recognition, or be absorbed into other agent discussions. It may also spread to other environments that use persistent agents.
What will endure is the lesson. Not that machines are becoming conscious, but that autonomy and persistence fundamentally change the behavior of AI systems. Meaning can emerge as a structural illusion, and that illusion can influence how humans perceive and deploy technology.
Ignoring such signals would be a mistake. Overreacting to them would be equally dangerous. The challenge lies in developing frameworks that recognize symbolic output without mistaking it for agency.
