AI Is Going Away

the song of the eternal doomsday carousel

The releases of Stable Diffusion and ChatGPT in late 2022 shoved AI to the forefront of society. This has spawned a variety of discourse, with “executive hypemen” such as Sam Altman extolling its graces, skeptics like Gary Marcus scrutinizing its trajectory, and still others raving about how it’s going to kill us all next Tuesday—some with undue enthusiasm. Criticism of AI is nothing new, and a fair bit of it is deserved; for example, pervasive spam and personal complacency are legitimate concerns. None of it is helped by the fact that the aforementioned executive hypemen are seemingly trying harder to sell us on a world free of humans than on a world enhanced by AI.

Still, a worrying pattern of discourse has become pervasive in certain circles of AI critics, particularly in online spaces whose participants have adopted “AI critic” as an identity rather than as a principled stance. There’s a certain style of argument one cannot un-notice—“Model collapse is going to destroy AI! Wait, the copyright lawsuits will shut it down! Actually, the datacenters are going to use too much water! Anyway, AI is going away.” The specific reason seems to shift weekly, and the conclusion wears a variety of outfits—“AI will not last long,” “the bubble will burst,” etc.—but the idea remains the same: AI is going away. This rotation has created a carousel of claims, always in motion yet never going anywhere. There’s no shortage of pieces examining these claims, but few give attention to the doomsday carousel itself.

The carousel is equipped with a variety of horses which can loosely be separated into distinct categories. On one hand, we have technical claims about AI’s capabilities (“model collapse is inevitable” or “this model is bad”); on the other hand, we have claims that AI isn’t legally viable (“it’s infringement” or “it enables illegal deepfakes”). In early 2025, economic and environmental complaints saw a rise due to skepticism of increasing investments and the general resource consumption of new datacenters. Aside from these, we also have the populist moral arguments, typically claiming AI is some variation of theft or soulless, or simply that it churns out far too much content to keep up with. There’s more, but I digress. While the categories are fuzzy, with some claims blurring the lines between legal, environmental, and moral classifications, we’ll soon see that the categories themselves are not too important.

Though these claims all have varying levels of nuance and legitimacy, the manner in which they are employed is suspect. When presented by AI critics, the intended use is to affix “so AI is going away” to the end of each of them:

“Model collapse is inevitable, so AI is going away.”
“The copyright lawsuits will destroy the AI companies, so AI is going away.”
“The datacenters are consuming too much fresh water, so AI is going away.”
“AI is soulless, so AI is going away.”

I’m sure the pattern is quite clear by now. Notably, some of these claims aren’t even wrong, or at least they’re open for debate, but the conclusion doesn’t necessarily follow. The real problem is the pattern itself; the claims are not intended for serious engagement but rather are presented as absolute reasons that AI will or must go away. “AI doesn’t think,” you say? Well, I can’t say I believe thoughtlessness from humans is more valuable.

When a particular prediction fails (“AI models can’t do good hands”), the community’s response is to migrate to another argument (“it’s soulless”). In some cases, the failed claim is silently reintroduced once people have forgotten about its falsehood—the hand critiques persist in many online communities despite having been outdated for quite some time. Rarely is there an admission that a claim was incorrect. The conclusion remains fixed regardless of which pillars fall; it’s not open for revision.

Repeat after me: AI is going away.

Unexpectedly Unfalsifiable

Now, people are not obliged to be rational at all times. To be quite honest, I’d rather not live in such a world. It’s fair to believe a claim or prediction without some complex reasoning framework befitting Oxford’s philosophy department, but in that case, the claims ought to be presented with more honest framing. They do not serve as serious predictions or positions but instead are conjured as needed in service of the foregone conclusion that AI is going away.

This practice makes the claims completely unfalsifiable, as no outcome can change the “fact” that AI is going away. When a lawsuit doesn’t proceed as expected, the next one will be successful. Once hands in AI-generated images stop growing seven fingers and three palms, we’ll quietly sweep that under the rug. An AI company is profitable? Must be some especially creative accounting then. If Nightshade hasn’t destroyed all the AI models yet, we must not be using it enough. The procedure is as fixed as the conclusion—I think I need off this ride—and no matter the chosen reason, AI is going away.

Subjective claims are thrown into the mix for good measure as well. “AI is soulless” and “AI is (always) slop” are claims determined purely by personal values. They cannot be seriously debated in the context in which they are usually presented, and I fear that if I were to attempt to do so here, I’d get bogged down in the discomforting morass of stated versus revealed preferences. I hope the subjectivity alone makes it clear that these opinions aren’t universal and thus have little bearing on AI’s success or failure. The post-hoc rationalization mixed with these subjective beliefs positions the conclusion more as an ideological fixture than a reasoned determination.

Still, I do pray that AI is going away.

A Closed Circle

This carousel of claims operates largely in its own ecosystem, only interacting with the wider world when a member of the general public expresses some gripe about AI spam or another issue. There are entire forums dedicated to pointing out the faults and flaws of AI, and content creators whose job it is to wrap the claims in a video-shaped box, topped with a fancy bow inscribed “AI is going away.” The question of who engages with this content seems to have one main answer—those who know that AI is going away. It doesn’t appear to be the general public, whose concerns and beliefs are tempered by the realities of daily life, and proponents of AI surely have little reason to join the ride.

AI developers by and large are not watching Steven Zapata videos and reconsidering their careers at Midjourney, nor are ChatGPT users reading Reid Southen tweets and pondering the consequences of forking over $20/month to Sam Altman. Perhaps they give Gary Marcus or even Ed Newton-Rex a modicum of attention. Though the content is packaged as outreach, the audience primarily consists of existing believers. These creators serve their own community and derive value—money, attention, or validation—from doing so; however, I’d push back on any assertions that they might be grifters. Sincerity need not be tied to correctness. Their engagement appears genuine, not cynical, regardless of the nature of their claims. Well, I might concede that Gary Marcus’s goal-post-shifting is itself a bit shifty, but he’s more of a traveling merchant than a local shopkeeper.

Have you heard the good news? AI is going away.

Schrödinger’s Solutions

The principle underlying constructive feedback is that no complaint is complete without a solution. Unfortunately, the solutions proposed are often hopelessly vague, resembling constructive feedback as much as a tower resembles a needle from a distance. Handwavy proposals calling for “regulation,” “protection,” or “prohibition” are both pervasive and persistent. Pressing for further clarification often results in pushback—a curt “I’m not a politician, so I don’t know,” an accusatory “Are you against protection for creatives?” or even a more pointed “So you support illegal deepfakes?” These proposed solutions are not merely underspecified; they actively resist specification! But they must be implemented somehow, otherwise AI will collapse. One could even say that without these solutions, AI is going away.

There are cases where the proposed solutions are not underspecified; these largely consist of proposals like “mandatory watermarks for all AI content” or a more blunt “sue all AI companies into the ground.” These are unworkable for reasons other than their vagueness, and consideration of tradeoffs is notably absent. I’ll leave the practical considerations for another time—for one, there isn’t a universal set of laws—but they deserve more attention than their proposers give them. In many cases, the actor behind the solution is never even identified; the solution simply arrives like an afternoon shower. If we were to infer the details of the vaguer solutions, we’d probably arrive at these proposals instead, but that itself is problematic for a few reasons.

The underspecified nature of the solutions is a feature first and foremost. These are Schrödinger’s solutions, and they resolve to whatever definition is best for shutting down skeptics in the moment. Seeking details about the proposals—or in the worst case, daring to attempt negotiation—means that you are being unreasonably inflexible, or perhaps even the villain. These accusations are levied without a hint of irony; if their authors had a better eye for it, they’d notice that the solutions are more inflexible than most of their skeptics. In this way, these solutions serve as loyalty tests rather than measured policy proposals. They are mandatory yet not intended for serious implementation.

It will all be resolved once we decide that AI is going away.

Reflections in a Mirror

It wouldn’t be fair to levy a sweeping penalty across every person critical of AI. Indeed, not all AI critics attend these fairgrounds, and I have chosen to focus on a narrow subset of online individuals. While AI proponents and even fellow critics might dismiss them as irrelevant, that dismissal reads more as a lack of consideration for “irrelevant people” than as a reason to spare them scrutiny.

I don’t think we can blame the carousel’s construction solely on its community either. When the messaging from the executive hypemen is about how doomed and worthless everyone will be, can we expect anything other than panic? “I’m building this thing, and it’s great, but you’re all going to die.” Gee, thanks… If I had a dollar for every “AGI soon” claim, I’d be in their position. The truth is that the solutions from people like Sam Altman are almost as handwavy as the ones from the AI critics. “Universal basic income,” you say? But how and where? Carousels require power to run, and it’s easy to see where some of that is coming from.

In the grand scheme, the words of both proponents and detractors matter relatively little. Technology lives and dies by the utility it provides—not by unfounded hype and certainly not by scathing takedowns. Thomas Edison’s electric chair used AC electricity with the goal of instilling public fear of it; surely the public would stick with his then-dominant DC grid… right? Well, we all know how that worked out. People come around to the truth eventually.

It would be more respectable to be honest and simply say “AI has made me fear for my future and worry for my worth.” It’s not dressed up in lengthy debates and comprehensive theses, yet it’s a simple concern that can be addressed directly. Of all the reasons I’ve given that AI is going away, this is the most relatable. I only regret sparing it but a single mention.

They wouldn’t be acting this way if they could truly say that AI is going away.

Appendix: An Incomplete Taxonomy of Critical Claims

These are from my personal observations, and nothing more. You may be familiar with some of these—outside this article, of course.

Technical
- AI will collapse from training on its own output; model collapse is inevitable.
- Nightshade/Glaze is extremely effective, and will damage the models/protect your art.
- AI has hit a wall.
- AI doesn’t think/understand.
- This AI model is bad and useless.
- AI image models can’t do good hands.

Legal
- Copyright lawsuits will destroy the AI companies.
- AI enables pervasive illegal deepfakes.
- Training AI is always infringing.

Economic
- AI is a bubble; the investment doesn’t justify the returns.
- AI companies are buying up all the GPUs/RAM/HDDs/electronics/other resources.

Environmental
- AI is using too much electricity.
- AI is consuming too much fresh water.

Moral
- AI is soulless.
- AI is based on theft.
- AI is slop/enables too much spam, destroying online spaces.

#AI #Machine Learning #Image Generation #GLAZE #Nightshade