
The Great AI Delusion: Are We Being Seduced By Hallucinating Stochastic Parrots?

As large language models like ChatGPT, Gemini and Claude dazzle with their fluent text generation, some so-called ‘AI experts’ claim these systems are beginning to approach Artificial General Intelligence (AGI). These bold proclamations are the next great AI delusion: hollow capabilities hyped up to raise even more investor funding for an industry already generating billions from the hype.

The core argument is that since these models “perform a wide variety of tasks without being explicitly trained on each one,” they must possess some form of general intelligence. However, this viewpoint dangerously misconstrues what is actually occurring.

In reality, all these “varied” tasks share the same fundamental objective – predict the next token given a context of previous words. The “multiple tasks” are simply different contexts probing the same narrow language-modelling capability. As long as similar examples exist in the massive training data, the model can produce relevant-seeming responses without any true understanding – a kind of Chinese Room, with the LLM regurgitating statements it doesn’t comprehend.
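The point can be made concrete with a deliberately toy sketch: a bigram model (all names and the miniature corpus below are hypothetical, and real LLMs condition on the full context rather than just the last token). Two superficially different “tasks” – question answering and translation – are served by the exact same next-token lookup.

```python
from collections import Counter, defaultdict

# Tiny stand-in for web-scale training data (hypothetical examples).
corpus = (
    "Q: capital of France ? A: Paris . "
    "Q: capital of Italy ? A: Rome . "
    "English: hello French: bonjour . "
    "English: cat French: chat . "
).split()

# "Training" is just counting bigram frequencies:
# the entire model is next-token statistics.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(context):
    """Return the most frequent continuation of the last token.
    This single capability is what every 'task' below exercises.
    (A real LLM uses the whole context, not only the final token.)"""
    last = context.split()[-1]
    if not bigrams[last]:
        return "<unk>"
    return bigrams[last].most_common(1)[0][0]

# Two apparently different "tasks", one identical mechanism:
print(predict_next("Q: capital of France ? A:"))  # looks like question answering
print(predict_next("English: hello French:"))     # looks like translation
```

The model appears to “answer questions” and “translate” only because matching patterns exist in its training counts; nothing task-specific is happening beyond continuation prediction.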

Proponents rest on several unfounded assumptions: that language alone is sufficient to capture intelligence, that existing datasets encode all the knowledge needed, that simple reward modelling replicates the brain’s complexity, and that scaling up parameter counts will someday spontaneously spark AGI.

Without an agreed definition of AGI, it is easy to move the goalposts, as AI companies have repeatedly done with evolving definitions of “AI” itself to fit their latest product claims.

As remarkable as LLMs seem at specialized tasks, boosting parameter counts alone won’t magically induce human-level cognition and reasoning. We must separate hype from reality and pursue new, cognitively inspired AI architectures to make progress towards AGI, beyond today’s hallucinating stochastic parrots.

References:

Herbert Roitblat, “Language Models and Artificial General Intelligence”: https://www.linkedin.com/pulse/language-models-artificial-general-intelligence-herbert-roitblat-ewcbc

Chinese room (Wikipedia): https://en.wikipedia.org/wiki/Chinese_room

http://ai-ml.info/

Stochastic parrot paper: https://dl.acm.org/doi/10.1145/3442188.3445922
