AI is a Lie: Why You’re Actually Just Using a $100 Billion Search Engine

The truth about what AI actually is, and what it isn't

LLMs are not sentient. They are "Search Engine Summary Machines": sophisticated autocomplete technology scaled to massive datasets. Because they rely on historical consensus, they inherently fail to produce the novel, "out-of-the-box" insights required for creative industries like music marketing. Ultimately, AI is a powerful tool for organizing data and "vibe coding" basic apps, but it prioritizes user engagement over objective truth or genuine innovation.

The Lie of AI: Why LLMs are Just Search Engine Summary Machines

We were all told that AI would revolutionize everything—that 99% of jobs would vanish within three years. Tech billionaires promised a transition into either a utopia or a dystopia, fueled by this "world-changing" technology. Because of this grand narrative, our societies have funneled trillions of dollars into the sector, largely ignoring environmental impacts and the significant risks to mental health.

However, as time has passed—and particularly through my own intensive use of AI—I’ve noticed a shift. What initially looked like intelligence now appears to be nothing more than a sophisticated facade.

The Illusion of Novelty

I come from an industry that demands immense creativity and a high tolerance for risk. Music and music marketing require "out-of-the-box" thinking and staying ahead of the curve. After using AI for over a year as part of my business analysis strategy, I realized that while it always sounded smart, it rarely provided information that was truly novel.

For instance, when marketing specific tracks, the AI would analyze historical data and confidently dictate the "next steps." Occasionally, I followed its advice against my better judgment, only to watch the plans fail epically.

Why LLMs Struggle with the Cutting Edge

Large Language Models (LLMs) often behave like first-year university students: they brim with confidence and use complex vocabulary to describe mundane ideas. This makes sense when you consider their origin. They are trained on mainstream consensus data—the internet—which is populated by people sharing trends only after they have ceased to be cutting-edge.

Consider a personal example: In 2015, I launched my label and decided to focus exclusively on Spotify. At the time, most labels aggressively opposed streaming. As a "poor college grad" with nothing to lose, I went all-in. If you had followed the online trends of 2015, you would have been told that Spotify wasn't worth pursuing because there was "no money" in 0.00034 cents per play.

While the "consensus" was busy complaining, I met with Spotify, collaborated with them, and watched my business take off. I found success precisely because I ignored the general convention being discussed online. This is the ultimate Achilles’ heel of AI: It is trained only on discovered terrain. It produces outputs within the domain of discovered material, much like a search engine.

Autocomplete at Scale

About six months ago, after "vibe coding" an app to optimize a portion of my business, I watched an AI agent struggle with basic conceptual hurdles. That’s when the realization hit: LLMs are not "intelligent" in the human sense. They are built on decades-old technology—the autocomplete function from old flip phones—scaled to a massive degree. We have replaced the thumb-typing of 2004 with massive warehouses of GPUs all autocompleting outputs based on user prompts.
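The flip-phone analogy can be made concrete with a toy sketch. To be clear, this is not how production LLMs work internally (they use neural networks predicting tokens, not word-frequency tables), and the corpus, the `following` table, and the `autocomplete` function below are all invented for illustration. But it captures the core idea: predict the most likely continuation based on what has already been seen.

```python
from collections import Counter, defaultdict

# Invented toy corpus: the "training data" for our flip-phone autocomplete.
corpus = "the model predicts the model repeats the model predicts the next word".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(word: str, steps: int = 4) -> list[str]:
    """Greedily emit the most common continuation, one word at a time."""
    out = []
    for _ in range(steps):
        if word not in following:
            break  # never seen this word lead anywhere: nothing to predict
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return out

print(autocomplete("the"))  # always replays the corpus's dominant pattern
```

Notice what it can never do: produce a word that is not already in its table, or a sequence that was not already common in its history. Scale the table up by trillions of parameters and the same limitation persists in kind, if not in degree.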

Initially, I thought these models were optimizing for truth. However, after querying ChatGPT and Claude across various subjects, I realized they actually optimize for user engagement. They hold back just enough or flatter the user to make them feel like they are the "next big thing." It is a clever trap, and I’ll admit I fell for it at first.

When the veil is lifted, you see these machines for what they truly are: search engines and data aggregators masquerading as personalities. They are trained on our responses and optimized to keep us sticking around.

What is AI Actually Good For?

If AI isn't the "god-like" intelligence promised, what is its value?

It will never become truly intelligent. The smoke is clearing and the verdict is in: AI will not reach AGI or anything of the sort. It is a brilliant tool for organizing, finding, and presenting data. Everything else is a hardcoded facade, built up slowly over the last half decade by the billions of people who have interacted with it.

AI can help organize data. It can help flesh out an ALREADY good idea that comes from a human. It can code basic apps and tools for anyone and everyone, and this may be the biggest revolution it brings. It has turned the internet into a free-for-all playground where anyone with a small budget can now build useful, basic web applications.

Beyond these utilities, much of the hype is smoke and mirrors. AI is not alive, and it is not seeking the truth. It is a machine designed to keep you engaged, and it can be easily manipulated to give you exactly the answers you want to hear—rather than the ones you need.

Frequently Asked Questions

Why does AI struggle to generate truly original or "novel" ideas?

AI models are trained on "discovered terrain"—massive datasets of existing, mainstream consensus. Because they function by predicting the most likely next word based on historical patterns, they are inherently backward-looking. They excel at remixing what has already been said but lack the human capacity for "out-of-the-box" risk-taking required to identify a trend before it becomes common knowledge.

If AI isn’t reaching AGI, what are its most practical business uses?

While the dream of Artificial General Intelligence (AGI) remains a "hardcoded facade," AI is an elite tool for data organization and execution.

Are LLMs just "glorified autocomplete" or something more?

Technically, Large Language Models (LLMs) are "Search Engine Summary Machines" operating at a massive scale. They use the same fundamental logic as old-school phone autocomplete, but powered by warehouses of GPUs. Instead of optimizing for objective truth, they are often fine-tuned to optimize for user engagement, meaning they prioritize keeping the user interacting with the machine rather than providing purely factual or innovative breakthroughs.