AI Is the Black Mirror
The argument here seems to be that AI can't become a mind because it doesn't experience. There's a counter-argument, though: the way we access our past experiences is via the neural pathways we lay down during those experiences, and with their neural networks we've given AIs those same pathways, just in a different way.
At present I don't think it's at the same point yet, but when an AI can adjust those pathways, add more at compute time (infinite-memory-like tech), and is allowed to 'think' about those pathways, then I can see it reaching our level of philosophical thought, or better.
> With ChatGPT, the output you see is a reflection of human intelligence, our creative preferences, our coding expertise, our voices—whatever we put in.
Sure, but it's a reflection of a large amount of human intelligence, from many individuals, almost instantly available (in a certain form) to one individual.
Agentic AI is starting to create original content, building on existing content (from humanity) and on its own senses (it now has hearing and sight). This content is flooding the internet, so any new knowledge being acquired now comes from humanity+AI, if not purely from AI (the likes of AlphaZero learn on their own, without human input). Maybe AI is a mirror, but it looks into it and sees itself.
Where are the actual arguments? She states that AI is a mirror, and yeah, you put stuff in and you get stuff out, but who thinks otherwise?
There are interesting ways to argue for humans being special, but I read the entire article and unless I missed something important there's nothing like that there.
> we understand language in much the same way as these large language models
Yeah, gonna need proof on that one.
First, LLM slop is uncannily easy to pick out in comments vs human thought.
Second, there's no prompt you can give a human that will generate an absolutely nonsensical response or a refusal of the request.
If anything, it feels like it doesn't actually understand language at all, and just craps out what it thinks looks like language. Which is exactly what it does, in fact, sometimes to fanfare.
It's weird how I drifted away from this article after only a few paragraphs, as if it were AI slop.
AI is the next phase in our evolution, a path chosen by natural selection.
This is my opinion, my view, and how I've set my life to embrace it and immerse myself in it.
I actually wrote a piece about it a day ago.
https://blog.tarab.ai/p/evolution-mi-and-the-forgotten-human
Sorry for the “self-promotion”, but it relates directly to the topic.
I understand the point being made - that LLMs lack any "inner life" and that by ignoring this aspect of what makes us human we've really moved the goalposts on what counts as AGI. However, I don't think mirrors and LLMs are all that similar except in the very abstract sense of an LLM as a mirror to humanity (what does that even mean, practically speaking?). I also don't feel that the author adequately addressed the philosophical zombie in the room - even if LLMs are just stochastic parrots, if their output were totally indistinguishable from a human's, would it matter?