The Bitter Religion: AI’s Holy War Over Scaling Laws
The AI community is locked in a doctrinal battle about its future and whether sufficient scale will create God.
I guest-wrote an essay for The Generalist yesterday about religion and AI scaling laws. I’ve been thinking a lot about this, and a recent paper, along with comments from many major AI lab leaders over the past few weeks, made me feel it was important to unpack how and why scaling laws holding (or not) even matters at this moment in time.
I imagine there will be more to write/think on this over the coming six months or so as new models make their way to the public.
Admittedly, diving into religion is a terrifying concept to me as I am by no means an expert but it was fun to draw some parallels.
You can read the full essay here and a partial preview below. DMs/email always open.
I would rather live my life as if there is a God and die to find out there isn't, than live as if there isn't and die to find out that there is.
– Blaise Pascal
Religion is a funny thing. It is entirely unprovable in either direction and perhaps the canonical example of a favorite phrase of mine: “You can’t bring facts to a feelings fight.”
The thing about religious beliefs is that on the way up, they accelerate at such an incredible rate that it becomes nearly impossible to doubt God. How can you doubt a divine entity when the rest of your people increasingly believe in it? What place is there for heresy when the world reorders itself around a doctrine? When temples and cathedrals, laws and norms, arrange themselves to fit a new, implacable gospel?
When the Abrahamic religions first emerged and spread across continents, or when Buddhism expanded from India throughout Asia, the sheer momentum of belief created a self-reinforcing cycle. As more people converted and built elaborate systems of theology and ritual around these beliefs, questioning the fundamental premises became progressively more difficult. It is not easy to be a heretic in an ocean of credulousness. The manifestations of grand basilicas, intricate religious texts, and thriving monasteries all served as physical proof of the divine.
But the history of religion also shows us how quickly such structures can crumble. The collapse of the Old Norse creed as Christianity spread through Scandinavia happened over just a few generations. The Ancient Egyptian religious system lasted millennia, then vanished as newer, lasting beliefs took hold and grander power structures emerged. Even within religions, we’ve seen dramatic fractures – the Protestant Reformation splintered Western Christianity, while the Great Schism divided the Eastern and Western churches. These splits often began with seemingly minor disagreements about doctrine, cascading into completely separate belief systems.
The holy text
God is a metaphor for that which transcends all levels of intellectual thought. It’s as simple as that.
– Joseph Campbell
Simplistically, to believe in God is religion. Perhaps to create God is no different.
Since its inception, optimistic AI researchers have imagined their work as an act of theogenesis – the creation of a God. The last few years, defined by the explosive progression of large language models (LLMs), have only bolstered the belief among adherents that we are on a holy path.
It has also vindicated a blog post written in 2019. Though unknown to those outside of AI until recent years, Canadian computer scientist Richard Sutton’s “The Bitter Lesson” has become an increasingly important text in the community, evolving from hidden gnosis to the basis of a new, encompassing religion.
In 1,113 words (every religion needs sacred numbers), Sutton outlines a technical observation: “The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin.” AI models improve because computation becomes exponentially more available, surfing the great wave of Moore’s Law. Meanwhile, Sutton remarks that much of AI research focuses on optimizing performance through specialized techniques – adding human knowledge or narrow tooling. Though these optimizations may help in the short term, they are ultimately a waste of time and resources in Sutton’s view, akin to fiddling with the fins on your surfboard or trying out a new wax as a terrific surge gathers.
This is the basis of what we might call “The Bitter Religion.” It has one and only one commandment, usually referred to in the community as the “scaling laws”: exponentially growing computation drives performance; the rest is folly.
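The "scaling laws" invoked here describe an empirical regularity: model loss falls as a smooth power law of training compute. A minimal sketch of that shape, with made-up constants purely for illustration (not fit to any published data):

```python
# Toy power-law scaling curve: loss = irreducible + a * C^(-alpha).
# The constants a, alpha, and irreducible are illustrative placeholders,
# not values from any real scaling-law paper.
def loss(compute: float, a: float = 10.0, alpha: float = 0.05,
         irreducible: float = 1.7) -> float:
    """Predicted loss as a function of training compute (FLOPs)."""
    return irreducible + a * compute ** -alpha

# Each 10,000x jump in compute buys a steady, predictable drop in loss --
# the regularity that underwrites the "one commandment."
for c in (1e18, 1e20, 1e22, 1e24):
    print(f"{c:.0e} FLOPs -> loss {loss(c):.3f}")
```

The point of the functional form is the smoothness: no cleverness term appears in it, only compute, which is exactly Sutton's claim restated as a curve.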
The Bitter Religion has spread from LLMs to world models and is now proliferating through the unconverted bethels of biology, chemistry, and embodied intelligence (robotics and AVs). (I covered this progression in-depth in this post.)