The rise of AI denialism

The Present — December 1, 2025

Conversations about an imminent “AI bubble” tend to miss the big picture.

[Image: An ostrich with its head buried in a grid-patterned yellow floor against a matching grid-patterned wall. Credit: Yves Forestier / Getty Images / Resource Database / Unsplash / Public Domain / Public Domain Review / Wikimedia Commons / Sarah Soryal]

Key Takeaways
  • Computer scientist Louis Rosenberg argues that dismissing AI as a “bubble” or mere “slop” overlooks the tectonic technological shift that’s reshaping society.
  • In this op-ed, Rosenberg looks beyond short-term market swings and imperfect AI demos, exploring how both the rapid acceleration of AI and its novel nature make this tech wave fundamentally different from past ones.
  • He argues that “AI denialism” is rising because society is “collectively entering the first stage of grief.”
By Louis Rosenberg

Over the past few months, we’ve seen a surge of skepticism around the phenomenon currently referred to as the “AI boom.” The shift began when OpenAI released GPT-5 this summer to mixed reviews, mostly from casual users. We’ve since had months of breathless claims from pundits and influencers that the era of rapid AI advancement is ending, that AI scaling has hit the wall, and that the AI boom is just another tech bubble. These same voices overuse the phrase “AI slop” to disparage the remarkable images, documents, videos, and code that AI models produce at the touch of a button.

I find this perspective both absurd and dangerous. 

By any objective measure, AI continues to improve at a stunning pace. The impressive leap in capabilities made by Gemini 3 in November is just the latest example. No, AI scaling has not hit the wall. In fact, I can’t think of another technology that has advanced this quickly at any point during my lifetime, and I started programming in 1982. The computer on my desk today runs thousands of times faster and has a million times more memory than my first PC (a TRS-80 Model III), and yet today’s rate of AI advancement leaves me dizzy.

So why has the public latched onto the narrative that AI is stalling, that the output is slop, and that the AI boom is just another tech bubble that lacks justifiable use-cases? I believe it’s because society is collectively entering the first stage of grief — denial — over the very scary possibility that we humans may soon lose cognitive supremacy to artificial systems. Believe me, I know this future is hard to accept. I’ve been writing about the destabilizing and demoralizing risks of superintelligence for well over a decade, and I also feel overwhelmed by the changes racing toward us. 

Unprecedented advances

Why does AI advancement feel so different from other technologies? Eighty-two years ago, philosopher Ayn Rand wrote these three simple sentences: “Man cannot survive except through his mind. He comes on earth unarmed. His brain is his only weapon.” For me, these words summarize our self-image as humans — we are the superintelligent species. This is the basis of our success and survival. And yet, we could soon find ourselves intellectually outmatched by widely available AI models that can outthink us on all fronts, solving problems far faster, more accurately, and yes, more creatively than any human could.

I stress creatively because a common theme among AI denialists is an insistence that certain human capabilities will never be rivaled by machines. The two capabilities cited most often in this regard are “creativity” and “emotional intelligence.” Unfortunately, there is no proof that AI will not surpass us in these areas. In fact, there is increasing evidence that the opposite is likely true.

On the creativity front, there is no doubt that today’s AI models can produce content faster and in greater variety than any human. The primary argument against AI being “creative” is the belief that true creativity requires inner motivation, not just the production of novel artifacts. I appreciate this argument, but find it circular: it defines creativity by how we experience the process, not by the qualitative value of the output. In addition, we have little reason to assume AI systems will lack motivation; we simply don’t know whether AI will ever experience intentions through an inner sense of self the way humans do.

As a result, many researchers say that AI will only be good at imitating human creativity rather than truly having it. This could turn out to be correct. But if AI can produce original work that rivals or exceeds most humans, it will still take away jobs and opportunities on a large scale; just ask any commercial artist. There is also the argument that AI systems only create derivative works based on human artifacts. This is a fair point, but it is also true of humans: We all stand on the shoulders of others, our work influenced by everything we consume. I believe AI is headed for a similar form of creativity: societal influence mixed with random sparks of inspiration, occurring at superhuman speeds and scales.

On the emotional intelligence front, I am equally unsure humans will have an edge. It is very likely that AI systems will soon be able to “read you” more accurately than any person could, by detecting micro-expressions on your face and subtle cues in your voice, your posture, your gaze, and even your breathing. This will enable AI assistants to infer your inner feelings faster and more precisely than is humanly possible. At the same time, these AI assistants (which you will carry around on your phone or wear in your glasses) will be able to observe your emotional reactions throughout your day and build predictive models of your behavior. Unless strictly regulated, which seems increasingly unlikely, this will enable AI assistants and other intelligent agents to influence you with superhuman skill.

An asymmetric dynamic

This is often referred to as the “AI manipulation problem,” and it suggests that the skill we refer to as “emotional intelligence” may not give us a cognitive advantage over AI.  In fact, it could be our greatest weakness, creating an asymmetric dynamic where AI systems will be able to read us with superhuman accuracy, while we can’t read the AI at all. When you converse with a photorealistic AI agent (as most people likely will, and often), you will be staring at an animated facade that can smile kindly and appear empathetic or understanding, inspiring you to feel trust and goodwill. This is pure illusion, but it could easily sway your thoughts and behaviors. And worse, we have no defense against it — our emotional reactions to faces are reflexive and visceral, built into us by millions of years of evolution on a planet where interactive human faces were always genuinely human.

Like it or not, we will soon live in a world where many of the faces we encounter will be generative masks worn by AI agents. And yet, we tell ourselves that AI is just another tech boom. This is wishful thinking. The reason investment levels are staggering (and they are staggering) is that AI will soon inhabit all aspects of our lives, often embodied as intelligent actors we have to engage with throughout our day. These agents will be designed to assist us, educate us, and yes, influence us. This will fundamentally alter how we work, how we learn, and how we socialize, and it will happen faster than we expect. Denial will not stop this. In fact, denial only makes it harder for us to prepare for the risks. 

To put the rate of change in perspective, let’s jump back five years and look at a large-scale survey given to computer scientists in late 2019 and early 2020. Participants were asked to predict when AI would be able to generate original code to solve a problem. Specifically, they were asked to predict when AI would be able to “write concise, efficient, human-readable Python code to implement simple algorithms like Quicksort.” In the world of programming, students are taught to do this as undergrads, so it’s not a particularly high bar. Still, the respondents predicted a 75% chance this would happen by 2033.
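
For context, here is the kind of program the survey question had in mind: a minimal Python sketch of Quicksort, written to the “concise, efficient, human-readable” standard the survey described. The function name and test values are my own illustration, not part of the survey.

    def quicksort(items):
        """Return a sorted copy of a list using the Quicksort algorithm."""
        if len(items) <= 1:
            return items  # zero or one elements are already sorted
        pivot = items[len(items) // 2]  # pick the middle element as the pivot
        smaller = [x for x in items if x < pivot]
        equal = [x for x in items if x == pivot]
        larger = [x for x in items if x > pivot]
        return quicksort(smaller) + equal + quicksort(larger)

    print(quicksort([33, 4, 15, 8, 42, 16, 23]))  # prints [4, 8, 15, 16, 23, 33, 42]

Any strong undergraduate can write something like this, which is exactly why the experts’ 2033 estimate now looks so conservative.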

It turns out, AI advanced much faster than expected. Today, large language models can already write computer code at levels that go far beyond the question asked in the 2020 survey. This summer, for example, GPT-5 and Gemini 2.5 Pro took part in the World Finals of the 2025 International Collegiate Programming Contest (ICPC). The competition brings together coding teams from top universities to compete in solving complex algorithmic problems. GPT-5 came in first, beating all human teams with a perfect score. Gemini 2.5 Pro came in second. And yet we have countless influencers referring to the output of these very same AI systems as “slop.”

[Chart: Google search interest in “AI slop,” October 2022 to June 2025, showing a sharp rise beginning in early 2024.]

Of course, current AI coding systems are far from flawless, but today’s capabilities were unimaginable by most AI professionals only five years ago. Also, we can’t forget that human coders are far from flawless. Perfection is not the metric we use to judge software development. This is why we have whole departments devoted to testing and quality control. When done by humans, coding is always an iterative process where you expect to produce errors, find errors, and fix errors. The same is true for many human endeavors. If you could read the first draft of any Pulitzer Prize-winning article, it’d likely be riddled with flaws that would make the author cringe. This is how we humans produce quality work — iterative refinement — and yet we judge AI systems by very different standards.  

The fact is, today’s frontier models are remarkably capable and on a rapid path toward rivaling human professionals across most fields. This will transform how organizations operate, how governments function, how science advances, how engineering gets done, how militaries strategize, and how education is deployed. It will also create terrifying new risks that we are not yet dealing with, like the potential for AI to manipulate individuals with superhuman effectiveness. Whether we like it or not, AI will change everything.

In other words, we are not watching a bubble expand with blustery vapors. We are watching a planet form from churning magma, and it will solidify into a new framework for society. Denial will only make us unprepared. This is not an AI bubble. This is real.

Tags: AI, Current Events, Economics & Work