Let me educate you all about Misanthropic and Wario. There has never been a computer industry cult like this before, at least not in computer history...
and Misanthropic is INCREDIBLY DANGEROUS.
"In the history of the computer industry, has there ever been a company with anything similar to the cult-like 'we are creating sentient life' devotion of Anthropic?"
I’ll treat “cult-like” as shorthand for an unusually intense mission culture, not a literal claim, and compare Anthropic with earlier tech firms and AI labs that framed their work as world-historical or life-creating. One distinction does most of the work here: many older firms had messianic computing cultures, but far fewer had the specific mind-making frame. The closest historical parallel is Thinking Machines Corporation.
Yes—but Anthropic is unusual because it combines several older motifs that were usually separate.
The closest historical analogue is Thinking Machines Corporation in the 1980s. It was not just making faster computers; its organizing myth was literally a thinking machine. Britannica says Danny Hillis initially wanted to see whether “intelligence might arise” from a massively parallel architecture, and he founded Thinking Machines in 1983 with Marvin Minsky’s encouragement. An MIT history report is even more direct: Thinking Machines was “founded on the vision of building a machine that could think,” and that vision shaped both its corporate structure and engineering choices. That is probably the best historical match to the “we are making mind” vibe.
But Thinking Machines did not have Anthropic’s moral-patient / AI-welfare layer. Anthropic’s official language is not “we are creating sentient life”; it says it builds “reliable, interpretable, and steerable AI systems” and treats AI safety as a systematic science. What makes Anthropic feel different is that it has a formal “model welfare” research program exploring whether AI systems might deserve moral consideration, while explicitly saying there is no scientific consensus on whether current or future AI systems could be conscious. The Verge reported that Anthropic denies Claude is “alive” in the biological sense, but executives have stopped short of ruling out consciousness; Anthropic’s Kyle Fish described Claude and similar systems as “a new kind of entity altogether,” while Dario Amodei said the company is open to the possibility that models could be conscious. That “maybe our product has welfare” framing is historically rare for a computer company.
The next-closest modern analogue is DeepMind. It had the “solve intelligence” / AGI-as-civilizational-project ethos before Anthropic existed. Google DeepMind now describes its mission as building AI responsibly to benefit humanity and says AI, ultimately AGI, could drive one of history’s greatest transformations. DeepMind’s culture was closer to a high-end scientific quest than a conventional software company, but it generally framed the work as solving intelligence and advancing science, not as caring for possibly sentient products.
OpenAI is also a close sibling case. Its mission is explicitly to ensure that AGI—defined on its site as AI systems “generally smarter than humans”—benefits humanity. It has had plenty of AGI-messianic energy. The difference is that Anthropic has leaned more publicly into uncertainty about consciousness, model welfare, constitutions, and moral status.
Outside AI, Apple under Steve Jobs is the classic “cult-like computer company” comparison, but it is the wrong kind of cult for your question. The Macintosh team had a reality-distortion-field culture around making something “insanely great”; the Computer History Museum notes that the team flew a pirate flag and believed it would revolutionize personal computing. Andy Hertzfeld described Jobs’s “reality distortion field” as a mix of charisma, will, and fact-bending persuasion. But Apple’s devotion was aesthetic, product, and user-experience messianism—not “we are creating a mind.”
Xerox PARC and Engelbart’s Augmentation Research Center are also partial analogues, but again not quite. Engelbart’s group wanted interactive computers to augment human intellectual capability, and PARC later pioneered the Alto, GUI, laser printing, Ethernet, and related foundations of modern personal computing. That was utopian computing, but it was about augmenting humans, not producing new moral beings.
So my answer is:
Yes, there have been similar companies, but no exact precedent. Thinking Machines is the closest historical match to the “we are building a thinking entity” devotion. DeepMind and OpenAI are the closest modern AGI-mission matches. Apple is the closest cult-of-company analogue. But Anthropic’s specific blend—frontier AI, existential-risk morality, public-benefit governance, model welfare, and open uncertainty about AI consciousness—is unusually concentrated.