The Timmy Trap

metadat | 137 points

> LLMs mimic intelligence, but they aren’t intelligent.

I see statements like this a lot, and I find them unpersuasive because no meaningful definition of "intelligence" is offered. What, exactly, is the property that humans (allegedly) have and LLMs (allegedly) lack that allows one to be deemed "intelligent" and the other not?

I see two possibilities:

1. We define "intelligence" as definitionally unique to humans. For example, maybe intelligence depends on the existence of a human soul, or is specific to the physical structure of the human brain. In this case, a machine (perhaps an LLM) could achieve "quacks like a duck" behavioral equivalence to a human mind, and yet would still be excluded from the definition of "intelligent." This definition is therefore not useful if we're interested in the ability of the machine, which it seems to me we are. LLMs are often dismissed as not "intelligent" because they work by inferring output from learned input, but that alone cannot be a distinguishing characteristic, because that's how humans work as well.

2. We define "intelligence" in a results-oriented way. This means there must be some specific test or behavioral standard that a machine must meet in order to count as intelligent. This has been the default definition for a long time, but the goalposts have shifted. Nevertheless, if you're going to disparage LLMs by calling them unintelligent, you should be able to cite a specific results-oriented failure that distinguishes them from "intelligent" humans. Note that this argument cannot refer to the LLMs' implementation or learning model.

hackyhacky | 7 hours ago

The article says that LLMs don't summarize, only shorten, because...

"A true summary, the kind a human makes, requires outside context and reference points. Shortening just reworks the information already in the text."

Then later says...

"LLMs operate in a similar way, trading what we would call intelligence for a vast memory of nearly everything humans have ever written. It’s nearly impossible to grasp how much context this gives them to play with"

So, they can't summarize, because they lack context... but they also have an almost ungraspably large amount of context?

sobiolite | 8 hours ago

> LLMs mimic intelligence, but they aren’t intelligent.

They aren’t just intelligence mimics, they are people mimics, and they’re getting better at it with every generation.

Whether they are intelligent or not, whether they are people or not, it ultimately does not matter when it comes to what they can actually do, what they can actually automate. If they mimic a particular scenario or human task well enough that the job gets done, they can replace intelligence even if they are “not intelligent”.

If by now someone still isn’t convinced that LLMs can indeed automate some of those intelligence tasks, then I would argue they are not open to being convinced.

Joeri | 5 hours ago

Even stronger than our need to anthropomorphize seems to be our innate desire to believe our species is special, and that “real intelligence” couldn’t ever be replicated.

If you keep redefining real intelligence as the set of things machines can’t do, then it’s always going to be true.

nojs | 6 hours ago

- LLMs don't need to be intelligent to take jobs; bash scripts have replaced people.

- Even if CEOs are completely out of touch and the tool can't do the job, you can still get laid off in an ill-informed attempt to replace you. Then, when the company doesn't fall over because the leftover people, desperate to keep covering rent, fill the gaps, it just looks like efficiency to the top.

- I don't think our tendency to anthropomorphize LLMs is really the problem here.

ticulatedspline | 6 hours ago

Good point about the Turing Test:

>The original Turing Test was designed to compare two participants chatting through a text-only interface: one AI and one human. The goal was to spot the imposter. Today, the test is simplified from three participants to just two: a human and an LLM.

By the original meaning of the test it's easy to tell an LLM from a human.

intalentive | 5 hours ago

> They had known him for only 15 seconds, yet they still perceived the act of snapping him in half as violent.

This is right out of Community

ArnavAgrawal03 | 6 hours ago

What if the problem is not that we overestimate LLMs, but that we overestimate intelligence? Or to express the same idea for a more philosophically inclined audience, what if the real mistake isn’t in overestimating LLMs, but in overestimating intelligence itself by imagining it as something more than a web of patterns learned from past experiences and echoed back into the world?

stefanv | 7 hours ago

The article claims (without any evidence, argument or reason) that LLMs are not intelligent, then simply refuses to define intelligence.

How do you know LLMs aren't intelligent, if you can't define what that means?

umanwizard | 7 hours ago

I feel this article should be paired with this other one [1] that was on the frontpage a few days ago.

My impression is, there is currently one tendency to "over-anthropomorphize" LLMs and treat them like conscious or even superhuman entities (encouraged by AI tech leaders and AGI/Singularity folks) and another to oversimplify them and view them as literal Markov chains that just got lots of training data.

Maybe those articles could help guard against both extremes.

[1] https://www.verysane.ai/p/do-we-understand-how-neural-networ...

xg15 | 6 hours ago

LLMs can shorten, and maybe tend to if you just say "summarize this," but you can trivially ask them to do more. I asked for a summary of Jenson's post and then for a reflection, and GPT-5 said, "It's similar to the Plato’s Cave analogy: humans see shadows (the input text) and infer deeper reality (context, intent), while LLMs either just recite shadows (shorten) or imagine creatures behind them that aren’t there (hallucinate). The “hallucination” behavior is like adding “ghosts”—false constructs that feel real but aren’t grounded."

That ain't shortening because none of that was in his post.
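
Roughly what that experiment looks like, as an illustrative sketch against the OpenAI Python SDK (the model name follows the comment, and the prompts and file name are hypothetical):

    from openai import OpenAI

    client = OpenAI()
    post = open("jenson_post.txt").read()  # hypothetical local copy of the post

    def ask(prompt: str) -> str:
        # Send the post with a task prefix and return the model's reply.
        resp = client.chat.completions.create(
            model="gpt-5",  # model named in the comment; substitute whatever you use
            messages=[{"role": "user", "content": f"{prompt}\n\n{post}"}],
        )
        return resp.choices[0].message.content

    shortened = ask("Summarize this post.")
    reflection = ask("Summarize this post, then offer a reflection drawing on outside ideas.")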

pbw | 7 hours ago

You can compare the current state of LLMs to the days when chess machines first approached grandmaster-level play. The machine approach was very brute force, and a lot of work went into improving the sheer amount of look-ahead required to compete at the grandmaster level.

As opposed to what grandmasters actually did, which was less look ahead and more pattern matching to strengthen the position.

Now LLMs successfully leverage pattern matching, but interestingly it is still a kind of brute force pattern matching, requiring the statistical absorption of all available texts, far more than a human absorbs in a lifetime.

This enables the LLM to interpolate an answer from the structure of the absorbed texts with reasonable statistical relevance. This is still not quite "what humans do," as it still requires brute-force statistical analysis of vast amounts of text to achieve pretty good results. For example, training on all available Python sources on GitHub and elsewhere (curated to avoid bad examples) yields pretty good results: not how a human would do it, but statistically likely to be pertinent and correct.

Isamu | 6 hours ago

Seems like this is close to the Uncanny Valley effect.

LLM intelligence is in the spot where it is simultaneously genius-level but also just misses the mark a tiny bit, which really sticks out for those who have been around humans their whole lives.

I feel that, just like more modern CGI, this will slowly fade with certain techniques and you just won't notice it when talking to or interacting with AI.

Just like in his post during the whole Matrix discussion.

> "When I asked for examples, it suggested the Matrix and even gave me the “Summary” and “Shortening” text, which I then used here word for word. "

He switches in AI-written text and I bet you were reading along just the same until he pointed it out.

This is our future now I guess.

kbaker | 7 hours ago

I might be mixing the concepts of intelligence and consciousness, etc., but the human mind is more than language and data; it's also experience. LLMs have all the data and can express anything around that context, but they will never experience anything, which is singular to each of us and is part of what makes up what we call intelligence (?). So they will never replicate the human mind; they can just mimic it.

I heard from Miguel Nicolelis that language is a filter for the human mind, so you can never build a mind from language. I interpreted this as being like trying to build an orange from its juice.

vcarrico | 6 hours ago

That's a great article.

Scott Jenson is one of my favorite authors.

He's really big on integrating an understanding of basic human nature into design.

ChrisMarshallNY | 8 hours ago

This, along with a ton of commentary on LLMs, seems like it's written by someone who has no technical understanding of LLMs.

andoando | 5 hours ago

> A philosophical exploration of free will and reality disguised as a sci-fi action film about breaking free from systems of control.

How is that a summary? It reads like a one-liner review I would leave on Letterboxd, or something I would say while trying to be pretentious and treating the movie as a work of art. It is a work of art, because all movies are art, but that's an awful summary.

0x457 | 5 hours ago

I disagree with the author in a big way. '25's LLMs are designed to echo material already out there on the Internet, if it exists, because we value that more. If I want a summary of The Matrix, I prefer a summary that agrees with the zeitgeist, rather than a novel, unorthodox summary that requires a justification for its deviation.

In fact, the example provided by the author is a great illustration of this:

> A philosophical exploration of free will and reality disguised as a sci-fi action film about breaking free from systems of control.

The words here refer back to the notion of "free will" that has been prominent in Western discourse from St. Augustine through Descartes and onward, and similarly to the notion of "sci-fi." These are notions that an uneducated East Asian person with limited Internet use and little pop-culture fluency would simply not understand; they would in fact prefer the latter description. The author and this hypothetical East Asian viewer live in very different zeitgeists, and correspondingly experience the movie differently and value different summaries of the film. They each prefer a summary that agrees with their own zeitgeist, rather than a novel, unorthodox summary (relative to that zeitgeist) that requires a justification for its deviation.

On the other hand, if you ask LLMs to explain material and concepts, one modality in which they do so is to use formulaic yet novel and unorthodox analogies. By formulaic and novel, I mean that the objects and scenarios in the analogies are frequently of a certain signature kind the model has been trained on, yet the analogies themselves are not found in the wild on the internet.

If you have frequently used LLMs for the purpose of explaining concepts, you will have encountered these analogies and know what I mean by this. The analogies are frequently not too accurate, but they round out the response by giving an ELI5-style answer.

Ironically, the author may have succumbed to LLM sycophancy.

jhanschoo | 4 hours ago

The LLMs are like a Huffman codec except the context is infinite and lossy
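
For anyone who hasn't met the reference: a Huffman codec builds a prefix code purely from the symbol frequencies of its input and is lossless, which is exactly the property the analogy bends. A minimal sketch (purely illustrative, not from the thread; the sample string is made up):

    import heapq
    from collections import Counter

    def huffman_code(text):
        # Build a symbol -> bitstring prefix code from symbol frequencies.
        freqs = Counter(text)
        # Heap entries are (frequency, tie_breaker, tree); a tree is a symbol or a (left, right) pair.
        heap = [(f, i, sym) for i, (sym, f) in enumerate(freqs.items())]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            f1, _, a = heapq.heappop(heap)
            f2, _, b = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, tie, (a, b)))
            tie += 1
        codes = {}

        def walk(node, prefix):
            if isinstance(node, tuple):
                walk(node[0], prefix + "0")
                walk(node[1], prefix + "1")
            else:
                codes[node] = prefix or "0"  # degenerate one-symbol input

        walk(heap[0][2], "")
        return codes

    text = "the llms are like a huffman codec"
    codes = huffman_code(text)
    bits = "".join(codes[c] for c in text)
    print(len(text) * 8, "bits raw ->", len(bits), "bits encoded (lossless)")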

foobarian | 7 hours ago

Who are you going to lodge your complaint with that the set of systems and machines that just took your job isn't "intelligent"?

Humans seem to get wrapped around concepts like intelligence, consciousness, etc., because they seem to be the only things differentiating us from every other animal, when in fact it's all a mirage.

AndrewKemendo | 7 hours ago

Well I, for one, can't believe what that guy did to poor Timmy.

codeulike | 7 hours ago

Regarding Timmy, the Companion Cube from the game Portal is the greatest example of induced anthropomorphism that I've ever experienced. If you know, you know, and if you don't, you should really play the game, since it's brilliant.

snozolli | 8 hours ago

I've mentioned this to colleagues at work before.

LLMs give a very strong appearance of intelligence, because humans are super receptive to information provided via our native language. We often have to deal with imperfect speakers and writers, and we must infer context and missing information on our own. We do this so well that we don't know we're doing it. LLMs have perfect grammar, and we subtly feel that they are extremely smart because subconsciously we recognize that we don't have to think about anything that's said; it is all syntactically perfect.

So LLMs sort of trick us into overlooking their true limitations and believing that they are truly thinking. There are even models that call themselves thinking models, but they don't think; they just predict what the user is going to complain about and feed that back to themselves as an additional, dynamic prompt on top of the one you actually enter.

LLMs are very good at fooling us into thinking that they know anything at all; they don't. And humans are very bad at being discriminating about the source of the information presented to them if it is presented in a friendly way. The combination of those things is what has produced the insanely huge AI hype cycle we are currently living in the middle of. Nearly everyone is overreacting to what LLMs actually are, and the few of us who believe we sort of see what's actually happening are ignored as naysayers, buzz-kills, and Luddites. Shunned for not drinking the Kool-Aid.

naikrovek | 7 hours ago

Good article. It's been said before, but it bears repeating.

Also, I got caught on one kind of irrelevant point regarding the characterization of The Matrix: I would say The Matrix is not just disguised as a story about escaping systems of control; it's quite clearly about oppressive systems in society, with specific reference to gender expression. Lilly Wachowski has explicitly stated that it was supposed to be an allegory for gender transition.

tovej | 8 hours ago

The author's argument is built on fallacies that always pop up in these kinds of critiques.

The "summary vs. shortening" distinction is moving the goalposts. They make the empirical claim that LLMs fail at summarizing novel PDFs without any actual evidence. For a model trained on a huge chunk of the internet, the line between "reworking existing text" and "drawing on external context" is so blurry it's practically meaningless.

Similarly, can we please retire the ELIZA and Deep Blue analogies? Comparing a modern transformer to a 1960s if-then script or a brute-force chess engine is a category error. It's a rhetorical trick to make LLMs seem less novel than they actually are.

And blaming everything on anthropomorphism is an easy out. It lets you dismiss the model's genuinely surprising capabilities by framing them as a simple flaw in human psychology. The interesting question isn't whether we anthropomorphize, but why this specific technology is so effective at triggering that response in humans.

The whole piece basically boils down to: "If we define intelligence in a way that is exclusively social and human, then this non-social, non-human thing isn't intelligent." It's a circular argument.

nataliste | 6 hours ago