AI is different

grep_it | 496 points

AI has been improving at a very rapid pace, which means that a lot of people have really outdated priors. I see this all the time online, where people are dismissive about AI in a way that suggests it's been a while since they last checked in on the capabilities of the models. They wrote off the coding ability of ChatGPT at version 3.5, for instance, and have missed all the advancements that have happened since. Or they talk about hallucination and haven't tried Deep Research as an alternative to traditional web search.

Then there's a tendency to be so 'anti' that there's an assumption that anyone reporting that the tools are accomplishing truly impressive and useful things must be an 'AI booster' or shill. Or they assume that person must not have been a very good engineer in the first place, etc.

It really is one of those examples of the quote, "In the beginner's mind there are many possibilities, but in the expert's mind there are few."

It's a rapidly evolving field, and unless you actually spend some time kicking the tires on the models every so often, you're just basing your opinions on outdated experiences or what everyone else is saying about it.

gdubs | 4 days ago

In every technology wave so far, we've disrupted many existing jobs. However, we've also opened up new kinds of jobs. And because it is easier to retrain humans than to build machines for those jobs, we wound up with more and better jobs.

This is the first technology wave that doesn't just displace humans, but which can be trained for the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?

I don't think that we have a good answer to that. And we may need it sooner rather than later. I'd be more optimistic if I trusted our leadership more. But wise political leadership is not exactly a strong point for our country right now.

btilly | 4 days ago

There’s a simple flaw in this reasoning:

Just because X can be replaced by Y today doesn't imply that Y can still do so in a future where we are aware of Y and factor it into the background assumptions about the task.

In more concrete terms: if “not being powered by AI” becomes a competitive advantage, then AI won’t be meaningfully replacing anything in that market.

You can already see this with YouTube: AI-generated videos are a mild amusement, not a replacement for video creators, because "made by AI" is becoming a negative label in a world where the presence of AI video is widely known.

Of course this doesn’t apply to every job, and indeed many jobs have already been “replaced” by AI. But any analysis which isn’t reflectively factoring in the reception of AI into the background is too simplistic.

keiferski | 4 days ago

I'm skeptical of arguments like this. If we look at the most impactful technologies since the year 2000, AI is not even in my top 3. Social networking, mobile computing, and cloud computing have all done more to alter society and daily life than AI has.

And yes, I recognize that AI has already created profound change: every software engineer now depends heavily on copilots, education faces a major integrity challenge, and search has been completely changed. I just don't think those changes are on the same level as the normalization of cutting-edge computers in everyone's pockets, as our personal relationships moving increasingly online, or as startups being able to scale without having to maintain physical compute infrastructure.

To me, the treating of AI as "different" is still unsubstantiated. Could we get there? Absolutely. We just haven't yet. But some people start to talk about it almost in a way that's reminiscent of Pascal's Wager, as if the slight chance of a godly reward from producing AI means it is rational to devote our all to it. I'm not holding my breath.

_jab | 4 days ago

> The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system). Markets don’t want to accept that. [Emphasis added]

What a silly premise. Markets don't care. All markets do is express the collective opinion; in the short term as a voting machine, in the long term as a weighing machine.

Seeing a real uptick of socio-political prognostication from extremely smart, soaked-in-AI tech people (like you, Salvatore!), casting heavy doom-laden gestures towards the future. You're not even wrong! But this "I see something you all clearly don't" narrative, wafer thin on real analysis, packed with "the feels", coated with what-ifs... it's sloppy thinking, and I hold you to a higher standard, antirez.

itsalotoffun | 4 days ago

This is an accurate assessment. I do feel that there is a routine bias on HN to underplay AI. I think it's people not wanting to lose control or relative status in the world.

AI is an existential threat to the unique utility of humans, which has been the last line of defense against absolute despotism (i.e. a tyrannical government will not kill all its citizens because it still needs them to perform jobs. If humans aren't needed to sustain productivity, humans have no leverage against things becoming significantly worse for them, gradually or all at once).

atleastoptimal | 4 days ago

When I hear folks glazing some kinda impending jobless utopia, I think of the intervening years. I shudder. As they say, "An empty stomach knows no morality."

ahurmazda | 4 days ago

I'm on team plateau: I'm really not noticing increasing competence in my daily usage of the major models. And sometimes there seem to be regressions, where performance drops from what it could do before.

There is incredible pressure to release new models which means there is incredible pressure to game benchmarks.

Tbh a plateau is probably the best scenario - I don't think society will tolerate even more inequality + massive job displacement.

siliconc0w | 4 days ago

  We are not there, yet, but if AI could replace a sizable amount of workers, the economic system will be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.

There will be fewer very large companies in terms of human size. There will be many more companies that are much smaller, because you don't need as many workers to do the same job.

Instead of needing 1000 engineers to build a new product, you'll need 100. Those 900 engineers will be working for 9 new companies that weren't viable before because the cost was too big but are now viable. I.e., those 9 new companies could never be profitable if each required 1000 engineers, but they can totally sustain themselves with 100 engineers each.

aurareturn | 4 days ago

I actually find it hard to understand how the market is supposed to react if AI capabilities do surpass all humans in all domains. First of all, it's not clear such a scenario leads to runaway wealth for a few, even though, absent outside events, that may be the outcome. However, such scenarios are so unsustainable and catastrophic that it's hard to imagine there would be no catastrophic reactions to them. How is the market supposed to react if there's a large chance of market collapse and also a large chance of runaway wealth creation? And in an economy where AI surpasses humans, the demands of the market will shift drastically too. I think that is underrepresented in predictions: the induced demand from AI-replaced labor, and the potential for entire industries to be decimated by secondary effects rather than by direct AI competition/replacement at labor scale.

Davidzheng | 4 days ago

>We are not there, yet, but if AI could replace a sizable amount of workers, the economic system will be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch. Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence: either AI will be eventually a commodity, or the governments would do something, in such an odd economic setup (a setup where a single industry completely dominates all the others).

I think the scenario where companies that own AI systems don't get benefits from employing people, so people are poor and can't afford anything, is paradoxical, and as such, it can't happen.

Let's assume the worst case: Some small percentage of people own AIs, and the others have no ownership at all of AI systems.

Now, given that human work has no value to those owning AIs, those humans not owning AIs won't have anything to trade in exchange for AI services. Trade between these two groups would eventually stop.

You'll have some sort of two-tier economy where the people owning AIs will self-produce (or trade between them) goods and services. However, nothing prevents the group of people without AIs from producing and trading goods and services between them without the use of AIs. The second group wouldn't be poorer than it is today; just the ones with AI systems will be much richer.

This worst-case scenario is also unlikely to happen or last long (the second group will eventually develop its own AIs or already have access to some AIs, like open models).

If models got exponentially better with time, then that could be a problem, because at some point, someone would control the smartest model (by a large factor) and could use it with malicious intent or maybe lose control of it.

But it seems to me that what I thought some time ago would happen has actually started happening: in the long term, models won't improve exponentially with time, but sublinearly (due to physical constraints), in which case the relative difference between them will shrink over time.

m4nu3l | 4 days ago

For me it maps elegantly on previous happenings.

When the radio came, people almost instantly stopped singing and playing instruments. Many might not be aware of it, but for thousands of years singing was a normal expression of a good mood, and learning to play an instrument was a gateway to lifting the mood. Dancing is still in working order, but it lacks the emotional depth that provided a window into the soul of those you live and work with.

A simpler example is the calculator. People stopped doing arithmetic by hand and forgot how.

Most desk work is going to get obliterated. We are going to forget how.

The underlings on the work floor currently know little to nothing about management. If they can query an AI in private, it will point out why their idea is stupid, or it will refine it into something sensible enough to try. Eventually you say the magic words and the code to make it so happens. If it works, you put it live. No real thinking required.

Early on you probably get large AI cleanup crews to fix the hallucinations (with better prompts).

econ | 4 days ago

> Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence

Why not? This seems to be exactly where we're headed right now, and the current administration seems to be perfectly fine with that trend.

If you follow the current logic of AI proponents, you get essentially:

(1) Almost all white-collar jobs will be done better or at least faster by AI.

(2) The "repugnant conclusion": AI gets better if and only if you throw more compute and training data at it. The improvements of all other approaches will be tiny in comparison.

(3) The amount of capital needed to play the "more compute/more training data" game is already insanely high and will only grow further. So only the largest megacorps will be even able to take part in the competition.

If you combine (1) with (3), this means that, over time, the economic choice for almost any white-collar job would be to outsource it to the data centers of the few remaining megacorps.

xg15 | 4 days ago

Could, if, and maybe.

When we discuss how LLMs failed or succeeded, as a norm, we should start including:

- the language/framework
- the task
- our experience level (highly familiar, moderately familiar, I think I suck, unfamiliar)

Right now, we know both that Claude is magic and that LLMs are useless, but never how we move between these two states.

This level of uncertainty, when economy-making quantities of wealth are being moved, is "unhelpful".

intended | 4 days ago

Reading smart software people talk about AI in 2025 is basically just reading variations on the lump of labor fallacy.

If you want to understand what AI can do, listen to computer scientists. If you want to understand its likely impact on society, listen to economists.

andrewmutz | 4 days ago

I am a relentlessly optimistic person and this is the first technology that I've seen that worries me in the decades I've been in the biz.

It's a wonderful breakthrough, nearly indistinguishable from magic, but we're going to have to figure something out, whether that's Universal Basic Income (UBI) or something along those lines; otherwise, the loss of jobs that is coming will lead to societal unrest or worse.

deepfriedbits | 4 days ago

I agree with the general observation, and I've been of this mind since 2023 (if AI really gets as good as the boosters claim, we will need a new economic system). I usually like Antirez's writing, but this post was a whole lot of... idk, nothing? I don't feel like this post said anything interesting, and it was kinda incoherent at moments. I think in some respects that's a function of the technology and situation we're in: the current wave of "AI" is still a lot of empty promises and underdelivery. Yes, it is getting better, and yes, people are getting clever by letting LLMs use tools, but these things still aren't intelligent insofar as they do not reason. Until we achieve that, I'm not sure there's really as much to fear as everyone thinks.

We still need humans in the loop as of now. These tools are still very far from being good enough to fully autonomously manage each other and manage systems, and, arguably, because the systems we build are for humans we will always need humans to understand them to some extent. LLMs can replace labor, but they cannot replace human intent and teleology. One day maybe they will achieve intentions of their own, but that is an entirely different ballgame. The economy ultimately is a battle of intentions, resources, and ends. And the human beings will still be a part of this picture until all labor can be fully automated across the entire suite of human needs.

We should also bear in mind our own bias as "knowledge workers". Manual laborers arguably already had their analogous moment, and the economy kept on humming. There isn't anything particularly special about "white collar" work in that regard. The same thing may happen: a new industry requiring new skills might emerge in the fallout of white-collar automation. Not to mention, LLMs only work in the digital realm; handicraft artisanry is still a thing and is still appreciated, albeit in much smaller markets.

voidhorse | 4 days ago

Well this is a pseudo-smart article if I’ve ever seen one.

“It was not even clear that we were so near to create machines that could understand the human language, write programs, and find bugs in a complex code base”

The author is critical of the professionals in AI, saying "even the most prominent experts in the field failed miserably again and again to modulate the expectations", yet without a care sets the expectation of LLMs understanding human language in the first paragraph.

Also it’s a lot of if this then that, the summary of it would be: if AI can continue to grow it might become all encompassing.

To me it reads like a baseless article written by someone too blinded by their love for AI to see what a good blog post is, but not yet blinded enough to claim "AGI is right around the corner". Pretty baseless, but safe enough to have it rest on conditionals.

yapyap | 4 days ago

AI with ability but without responsibility is not enough for dramatic socioeconomic change, I think. For now, the critical unique power of human workers is that you can hold them responsible for things.

edit: ability without accountability is the catchier motto :)

mxwsn | 4 days ago

I am just not having this experience of AI being terribly useful. I don’t program as much in my role but I’ve found it’s a giant time sink. I recognize that many people are finding it incredibly helpful but when I get deeper into a particular issue or topic, it falls very flat.

mycentstoo | 4 days ago

I wouldn't trust a taxi driver's predictions about the future of economics and society, why would I trust some database developer's? Actually, I take that back. I might trust the taxi driver.

mgradowski | 4 days ago

This whole "what are we going to do" worry is, I think, way out of proportion, even if we do end up with AGI.

Let’s say whatever the machines do better than humans, gets done by machines. Suddenly the bottleneck is going to shift to those things where humans are better. We’ll do that and the machines will try to replace that labor too. And then again, and again.

Throughout this process society becomes wealthier, TVs get cheaper, we colonize Mars, etc. The force that keeps this going is human dissatisfaction: once we get these things, we'll want whatever it is we don't have.

Maybe that’s the problem we should focus on solving…

jrvarela56 | 4 days ago

I find it funny that almost every talking point made about AI is done in future tense. Most of the time without any presentation of evidence supporting those predictions.

s_ting765 | 4 days ago

One thing that doesn't seem to be discussed with the whole "tech revolution just creates more jobs" angle is that, in the near future, there are no real incentives for that. If we're going down the route of declining birth rates, it's implied we'll also need fewer jobs.

From one perspective, it’s good that we’re trying to over-automate now, so we can sustain ourselves in old age. But decreasing population also implies that we don’t need to create more jobs. I’m most likely wrong, but it just feels off this time around.

tokioyoyo | 4 days ago

> After all, a plateau of the current systems is possible and very credible, but it would likely stimulate, at this point, massive research efforts in the next step of architectures.

A lot of AI’s potential hasn’t even been realized yet. There’s a long tail of integrations and solution building still ahead. A lot of creative applications haven’t been realized yet - arguably for the better, but it will be tried and some will be economical.

That’s a case for a moderate economic upturn though.

solarkraft | 4 days ago

The biggest difference to me is that it seems to change people in bad ways, just from interacting with it.

Language is a very powerful tool for transformation, we already knew this.

Letting it loose on this scale without someone behind the wheel is begging for trouble imo.

codr7 | 4 days ago

A lot of anxious words to say “AI is disruptive,” which is hardly a novel thought.

A more interesting piece would be built around: “AI is disruptive. Here’s what I’m personally doing about it.”

SimianLogic | 4 days ago

Is it true that current LLMs can find bugs in complex codebases? I mean, they can also find bugs in otherwise perfectly working code.

HellDunkel | 4 days ago

AI is only different if it reaches a hard takeoff state and becomes self-aware, self-motivated, and self-improving. Until then it's an amazing productivity tool, but only that. And even then we're still decades away from the impact being fully realized in society. Same as the internet.

pton_xd | 4 days ago

I don't get how post GPT-5's launch we're still getting articles where the punchline is "what if these things replace a BUNCH of humans".

BoorishBears | 4 days ago

> "However, if AI avoids plateauing long enough to become significantly more useful..."

As William Gibson said, "The future is already here, it's just not evenly distributed." Even if LLMs, reasoning algorithms, object recognition, and diffusion models stopped improving today, we're still at a point where massive societal changes are inevitable as the tech spreads out across industries. AI is going to steadily replace chair-to-keyboard interfaces in just about every business you can imagine.

Interestingly, AI seems to be affecting the highest level "white collar" professionals first, rather than replacing the lowest level workers immediately, like what happened when blue collar work was automated. We're still pretty far away from AI truck drivers, but people with fine arts or computer science degrees, for example, are already feeling the impact.

"Decimation" is definitely an accurate way to describe what's in the process of happening. What used to take 10 floors of white collar employees will steadily decline to just 1. No idea what everyone else will be doing.

russellbeattie | 4 days ago

Every technology tends to replace many more jobs in a given role than ever existed, inducing more demand on its precursors. If the only potential application of this was just language, the historic trend that humans would just fill new roles would hold true. But if we do the same with motor movements in a generalized form factor, that is really where the problem emerges. As companies drop more employees, moving towards fully automated closed-loop production, their consumer market fails faster than they can reach zero cost.

Nonetheless, I do still believe humans will continue to be the more cost-efficient way to come up with and guide new ideas. Many human-performed services will remain desirable because of their virtue and our sense of emotion and taste for a moment that other humans are feeling too. But how much of the populace does that engage? I couldn't guess right now. Though if I were to imagine what might make things turn out better, it would be that AI is personally ownable, and that everyone owns, at least in title, some energy production which they can do things with.

rifty | 4 days ago

People thought it was the end of history and innovation would be all about funding elaborate financial schemes; but now, with AI, people find themselves running all these elaborate money-printing machines, and they're unsure whether they should keep focusing on those schemes as before or actually try to automate stuff. The risk barrier to actually innovating has been lowered a lot, almost to the level of running a scheme, but still people are having doubts. Maybe because people don't trust the system to reward real innovation.

LLMs feel like a fluke, like OpenAI was not intended to succeed... And even now that it has succeeded and they are trying to turn the non-profit into a for-profit, it kind of feels like they don't fully believe their own product in terms of its economic capacity, and they're still trying to sell the hype as if to pump and dump it.

jongjong | 4 days ago

> Regardless of their flaws, AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were a few years ago science fiction. It was not even clear that we were so near to create machines that could understand the human language, write programs, and find bugs in a complex code base: bugs that escaped the code review of a competent programmer.

If we factor in that LLMs only exist because of Google search, which indexed and collected all the data on the WWW, then LLMs are not surprising. They only replicate what has been published on the web; even the coding agents are only possible because of free software and open source, code like Redis that has been published on the WWW.

kaindume | 4 days ago

These sort of commentaries on AI are the modern equivalent of medieval theologians debating how many angels could congregate in one place.

jeffreyrogers | 4 days ago

> Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.

Companies have to be a bit more farsighted than this thinking. Assuming LLMs reach this peak: if, say, MS decides it can save money because it doesn't need XYZ anymore because AI can do it, XYZ can decide it doesn't need Office anymore because AI can do it.

There's absolutely no moat anymore. Human capital and the sheer volume of code are the current moat, and an all-capable AI completely eliminates both.

It's a bit scary to say "what then?" How do you make money in a world where everyone can more or less do everything themselves? Perhaps like 15 Million Merits, we all just live in pods and pedal bikes all day to power the AI(s).

silisili | 4 days ago

I don't think I agree. I think it's the same and there is great potential for totally new things to appear and for us to work on.

For example, one path may be: AI, Robotics, space travel all move forward in leaps and bounds.

Then there could be tons of work in creating material things, from people who didn't have the skills before, and physical goods get a huge boost. We travel through space and colonize new planets, dealing with new challenges and environments that we haven't dealt with before.

Another path: most people get rest and relaxation as the default life path, and the rest get to pursue their hobbies as much as they want since the AI and robots handle all the day to day.

gamerDude | 4 days ago

On this, read Daniel Susskind's A World Without Work (2020). He says exactly this: the new tasks created by AI can in good part themselves be done by AI, if not as soon as they appear then a few years of improvement later. This will inevitably affect the job market and the relative importance of capital and labor in the economy. Unchecked, this will worsen inequalities and create social unrest. His solution will not please everyone: Big State. Higher taxes and higher redistribution, in particular in the form of a conditional basic income (he says universal isn't practically feasible: what do you do with new migrants, for example?).

fraboniface | 4 days ago

It's not a matter of IF LLMs/AI will replace a huge number of people, but WHEN. Consider the current number of somewhat low-skilled administrative jobs: these can be replaced with the LLMs/AIs of today. Not completely, but 4 low-skill workers can be replaced with 1 supervisor controlling the AI agent(s).

I'd guess that, within a few years, 5 to 10% of the total working population will be unemployable through no fault of their own, because they have no relevant skills left and are incapable of learning anything that cannot be done by AI.

tobyhinloopen | 4 days ago

> However, if AI avoids plateauing long enough

I'm not sure how someone can seriously write this after the release of GPT-5.

Models have been plateauing since ChatGPT came out (3 years ago), and GPT-5 has been the final nail in this coffin.

iLoveOncall | 4 days ago

At some point far in the future, we don't need an economy: everyone does everything they need by themselves, helped by AI and replicators.

But realistically, you're not going to have a personal foundry anytime soon.

eternauta3k | 4 days ago

For every industrial revolution (and we don't even know if AI is one yet), this kind of doom prediction has been around. AI will obviously create a lot of jobs too: the infra to run AI will not build itself, the people who train models will still be needed, and the AI supervisors or managers or whatever we call them will be a necessary part of the new workflows. And if your job needs hands, you will be largely unaffected, as there is no near future where robots will replace the flexibility of what most humans can do.

ekianjo | 4 days ago

> AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were a few years ago science fiction.

In which science fiction were the dreamt-up robots this bad?

eviks | 4 days ago

> Since LLMs and in general deep models are poorly understood ...

This is demonstrably wrong. An easy refutation to cite is:

https://medium.com/@akshatsanghi22/how-to-build-your-own-lar...

As to the rest of this pontification, well... It has almost triple as many qualifiers (5 ifs, 4 coulds, and 5 wills) as paragraphs (5).

AdieuToLogic | 4 days ago

I think something everyone in our area is underpricing is that LLMs are uniquely useful for programmers writing code.

it's a very constrained task, you can do lots of reliable checking on the output at low cost (linters, formatters, the compiler), the code is mostly reviewed by a human before being committed, and there's insulation between the code and the real world, because ultimately some company or open source project releases the code that's then run, and they mostly have an incentive to not murder people (Tesla excepted, obviously).
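
that cheap checking is easy to make concrete. here's a minimal sketch of the kind of deterministic gate people wrap around LLM output (the specific tools - ruff, pytest - are my assumption; any linter and test runner would do):

  import subprocess
  import tempfile

  def checks_pass(source: str) -> bool:
      """Gate LLM-generated Python on cheap deterministic checks."""
      with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
          f.write(source)
          path = f.name
      # 1. does it even parse? (compile to bytecode)
      if subprocess.run(["python", "-m", "py_compile", path]).returncode != 0:
          return False
      # 2. does the linter object?
      if subprocess.run(["ruff", "check", path]).returncode != 0:
          return False
      # 3. do the project's tests still pass?
      return subprocess.run(["pytest", "-q"]).returncode == 0

nothing remotely like this gate exists for an HR complaint or a benefits claim.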

it seems like lots of programmers are taking that information and deeply overestimating how useful it is at anything else, and these programmers - and the marketing people who employ them - are doing enormous harm by convincing e.g. HR departments that it is of any value to them for dealing with complaints, or, much more dangerously, convincing governments that it's useful for how they deal with humans asking for help.

this misconception (and deliberate lying by people like OpenAI) is doing enormous damage to society and is going to do much much more.

bananapub | 4 days ago

It's really very simple.

We used to have deterministic systems that required humans either through code, terminals or interfaces (ex GUI's) to change what they were capable of.

If we wanted to change something about the system we would have to create that new skill ourselves.

Now we have non-deterministic systems that can be used to create deterministic systems that can use non-deterministic systems to create more deterministic systems.

In other words deterministic systems can use LLMs and LLMs can use deterministic systems all via natural language.

This slight change in how we can use compute has incredible consequences for what we will be able to accomplish, both in cleaning up old systems and in creating completely new ones.
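
A toy sketch of that loop (the JSON protocol and the stubbed llm() function here are illustrative assumptions, not any vendor's API):

  import json

  def celsius_to_fahrenheit(c: float) -> float:
      # Deterministic system: same input, same output, testable.
      return c * 9 / 5 + 32

  def llm(prompt: str) -> str:
      # Stand-in for a real model call (non-deterministic in practice).
      # A real LLM would read a tool description and emit a structured
      # request like the one below, derived from the natural language.
      return json.dumps({"tool": "celsius_to_fahrenheit", "args": {"c": 21.0}})

  TOOLS = {"celsius_to_fahrenheit": celsius_to_fahrenheit}

  # Deterministic code calls the LLM...
  request = json.loads(llm("It's 21C in the office, what is that in Fahrenheit?"))
  # ...and the LLM's reply dispatches back into deterministic code.
  print(TOOLS[request["tool"]](**request["args"]))  # 69.8

The protocol is just text in both directions, which is the point: natural language in, deterministic dispatch out.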

LLMs, however, will always be limited to exploring existing knowledge. They will not be able to create new knowledge. And so the AI winter we are entering is different, because progress is limited to what we can train the AI to do, and that is limited to what new knowledge we can create.

Anyone who works with AI every day knows that any idea of autonomous agents is so far beyond the capabilities of LLMs, even in principle, that any worry about doom or unemployment by AI is absurd.

ThomPete | 4 days ago

"It was not even clear that we were so near to create machines that could understand the human language"

LLMs don't _understand_ "the human language". They don't _understand_ anything. It would be really great if everyone would keep their heads and not lose sight of this fundamental truth.

xyz_opinion | 2 days ago

The thing that blows me away is that I woke up one day and was confronted with a chat bot that could communicate in near perfect English.

I dunno why exactly but that’s what felt the most stunning about this whole era. It can screw up the number of fingers in an image or the details of a recipe or misidentify elements of an image, etc. but I’ve never seen it make a typo or use improper grammar or whatnot.

Waterluvian | 4 days ago

I like to point out that ASI will allow us to do superhuman stuff that was previously beyond all human capability.

For example, one of the tasks we could put ASI to work on is designing implants for the legs, powered by light or electric induction, that would use ASI-designed protein metabolic chains to electrically transform carbon dioxide into oxygen and ADP into ATP, so as to power humans with pure electricity. We are very energy efficient; we use about 3 kilowatt-hours of energy a day, so we could use this sort of technology to live in space pretty effortlessly. Your Space RV would not need a bathroom or a kitchen. You'd just live in a static nitrogen atmosphere, and the whole thing could be powered by solar panels or a small modular nuke reactor. I call this "The Electrobiological Age", and it will unlock whole new worlds for humanity.

narrator | 4 days ago

The OP is spot-on about this:

If AI technology continues to improve and becomes capable of learning and executing more tasks on its own, this revolution is going to be very unlike the past ones.

We don't know if or how our current institutions and systems will be able to handle that.

cs702 | 4 days ago

I think so too - the latest AI changes mark the new "automate everything" era. When everything is automated, everything costs basically zero, as this eliminates the most expensive part of every business: human labor. No one will make money from all the automated stuff, but no one would need the money anyway. This will create a society in which money is not the only value pursued. Instead of trying to chase paper, people would do what they are meant to: create art and celebrate life. And maybe fight each other for no reason.

I'm flying, ofc, this is just a weird theory I had in the back of my head for the past 20 years, and it seems like we're getting there.

Antirez you are the best

yard2010 | 4 days ago

We currently work more than we ever have. Just a couple of generations ago it was common for a couple to consist of one person who worked for someone else or the public, and one who worked at home for themselves. Now we pretty much all have to work for someone else full time then work for ourselves in the evening. And that won't make you rich, it will just make you normal.

Maybe a "loss of jobs" is what we need so we can go back to working for ourselves, cooking our own food, maintaining our own houses, etc.

This is why I doubt it will happen. I think "AI" will just end up making us work even more for even less.

globular-toast | 4 days ago

If we accept the possibility that AI is going to be more intelligent than humans, the outcome is obvious. Humans will no longer be needed and will either go extinct or maybe be kept by the AI as we now keep pets or zoo animals.

cjfd | 4 days ago

Butlerian Jihad it is then.

yubblegum | 4 days ago

Humans Need Not Apply - Posted exactly 11 years ago this week.

https://www.youtube.com/watch?v=7Pq-S557XQU

scrollaway | 4 days ago

Humans have a proven history of reinventing economic systems, so if AI ends up thinking better than we do (it's not yet proven this is possible), then we should end up with superior future systems.

But the question is: a system optimized for what? One that emphasizes huge rewards for the few and requires the poverty of some (or many)? Or a fairer system? Not so different from the challenges of today.

I'm skeptical that even a very intelligent machine will change the landscape of our difficult decisions, but it will accelerate whichever direction we decide (or have decided for us) to go.

throwaway20174 | 4 days ago

I'll happily believe it the day something doesn't adhere to the Gartner cycle; until then it is just another bubble like dotcom, chatbots, crypto, and the 456345646 things that came before it.

vivzkestrel | 4 days ago

> The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system).

Humans never truly produce anything; they only generate various forms of waste (resulting from consumption). Human technology merely enables the extraction of natural resources at ever greater magnitudes, without actually creating any resources. Given its enormous energy consumption, I strongly doubt that AI will contribute to a better economic system.

howtofly | 4 days ago

The right way to think about "jobs" is that we could have given ourselves more leisure on the basis of previous technological progress than we actually did.

Ericson2314 | 4 days ago

> It was not even clear that we were so near to create machines that could understand the human language

It's not really clear to me to what extent LLMs even do *understand* human language. They are very good at saying things that sound like a responsive answer, but the head-scratching, hard-to-mentally-visualise aspect of all of this is that this isn't the same thing at all.

exasperaited | 4 days ago

Assuming AI improves productivity, I don't see how it couldn't result in an economic boom. Labor has always been one of the scarcest resources in the economy. Whether or not the wealth from the improved productivity actually trickles down to most people depends on the political climate.

IX-103 | 4 days ago

We are too far from exploring alternate economies. LLMs will not push us there, at least not in their current state.

Sateeshm | 4 days ago

antirez should retire; his recent nonsense AI takes are overshadowing his merits as a competent programmer.

0points | 4 days ago

If computers are ‘bicycles for the mind’, AI is the ‘self-driving car for the mind’. Which technology results in worse accidents? Did automobiles even improve our lives or just change the tempo beyond human bounds?

bawana | 4 days ago

After reading a good chunk of the comments, I got the distinct impression that people don't realize we could just not do the whole "let's make a dystopian hellscape" project and turn all of it off. By that I mean: outlaw AI, destroy the data centers, impose severe consequences for its use by corporations as a way to reduce headcount (I'm talking executives get to spend the rest of their lives in solitary confinement), and instead invest all of this capital in making a better world (solving homelessness, the last-mile problem of food distribution, the ever-present and ongoing climate catastrophe). We, as humans, can make a choice and make it stick through force of action.

Or am I just too idealistic ?

Sidenote, I never quite understand why the rich think their bunkers are going to save them from the crisis they caused. Do they fail to realize that there's more of us than them, or do they really believe they can fashion themselves as warlords?

mekael | 4 days ago

I was skeptical of AI for a very long time.

But seeing it in action now makes me seriously question “human intelligence”.

Maybe most of us just aren’t as smart as we think…

BobbyTables2 | 4 days ago

The clear long-term winners are energy producers. AI can replace everything, including hardware design and production, but it cannot produce energy out of thin air.

yubblegum | 4 days ago

I don't think this article really says anything that hasn't already been said for the past two years: "if AI actually takes jobs, it will be a near-apocalyptic system shock if there aren't new jobs to replace them". I still think it's at best too soon to say if jobs have been permanently lost.

They are tremendous tools, but it seems like they create a near-equal amount of work from the stuff they save time on.

ausbah | 4 days ago

Like any other technology, at the end of the day LLMs are used by humans for humans' selfish, short-sighted goals: goals driven by mental issues, trauma, and overcompensation, maybe even paved with good intentions but leading you know where. If we were to believe that LLMs are going to somehow become extremely powerful, then we should be concerned, as it is difficult to imagine how that can lead to an optimal outcome organically.

From the beginning, corporations and their collaborators at the forefront of this technology tainted it by ignoring the concept of intellectual property ownership (which had been with us in many forms for hundreds if not thousands of years) in the name of personal short-term gain and shareholder interest or some “the ends justify the means” utilitarian calculus.

strogonoff | 4 days ago

Here's what I want.

A compilation of claims, takes, narratives, shills, expectations and predictions from the late 90s "information superhighway" era.

I wonder if LLMs can produce this.

A lot of the dotcom exuberance was famously "correct, but off by 7 years." But... most of it was flat wrong. "Right but early" applies mostly to the meta investment case: "the internet business will be big."

One that stands out in my memory is "turning billion dollar industries into million dollar industries."

With ubiquitous networked computers, banking and financial services could become "mostly software." Banks and whatnot would all become hyper-efficient Vanguard-like companies.

We often start with an observation that economies are efficiency-seeking. Then we imagine the most efficient outcome given the legible constraints of technology, geography, and whatnot. Then we imagine the dynamics and tensions in a world with that kind of efficiency.

This, incidentally, is also "historical materialism." Marx had a lot of awe for modern industry, the efficiency of capitalism and whatnot. Almost Adam Smith-like... at times.

Anyway... this never actually works out. The meta is a terrible predictor of where things will go.

Imagine law gets more efficient. Will we have more or fewer lawyers? It could go either way.

netcan | 4 days ago
[deleted]
| 4 days ago

> However, if AI avoids plateauing long enough to become significantly more useful and independent of humans, this revolution is going to be very unlike the past ones. Yet the economic markets are reacting as if they were governed by stochastic parrots.

Aren't the markets massively puffed up by AI companies at the moment?

edit: for example, the S&P500's performance with and without the top 10 (which is almost totally tech companies) looks very different: https://i.imgur.com/IurjaaR.jpeg

crummy | 4 days ago

Unpopular opinion: let us say AI achieves general-intelligence levels. We tend to think of the current economy, jobs, and research as a closed system, but it is in fact a very open system.

Humans want to go to space, start living on other planets, travel beyond solar system, figure out how to live longer and so on. The list is endless. Without AI, these things would take a very long time. I believe AI will accelerate all these things.

Humans are always ambitious. That ambition will push us to use AI beyond its capabilities. The AI will get better at these new things, and the cycle repeats. There's so much humans know, and so much more that we don't.

I'm less worried about general intelligence. Rather, I'm more worried about how humans are going to govern themselves. That's going to decide whether we do great things or end humanity. Over the last 100 years, we have started thinking more about "how" to do something rather than "why". Because "how" is becoming easier and easier: today it's much easier, and tomorrow even more so. So nobody's got the time to ask "why" we are doing something, just "how" to do it. With AI I can do more. That means everyone can do more. That means governments can do so much more: large-scale things in a short period. If those things are wrong or have irreversible consequences, we are screwed.

sawyna | 4 days ago

About 3 years late to this "hot take".

throwaway314155 | 4 days ago
[deleted]
| 4 days ago

Honestly, the long-term consequences of Baumol's cost disease scare me more than some AI-driven job-disruption dystopia.

If we want to continue on the path of increased human development, we desperately need to lift the productivity of a whole bunch of labor-intensive sectors.

We're going to need to seriously think about how to redistribute the gains, but that's an issue regardless of AI (things like effective tax policy).

grumpy-de-sre | 4 days ago

We will continue to have a poor understanding of LLMs until a simple model can be constructed and taught to a classroom of children. It is only in this respect that AI is different. It is not magic. It is not intelligent. Until we teach the public exactly what it is doing in a way simple adults will understand, enjoy hot take after hot take.
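
The kind of classroom-sized model I mean could be as simple as a bigram table: predict the next word from counted examples (nothing like a real transformer in scale or mechanism, just the same statistical spirit):

  import random
  from collections import Counter, defaultdict

  corpus = "the cat sat on the mat the cat ate the rat".split()

  # Count which word follows which: the whole "model" is a table of counts.
  model = defaultdict(Counter)
  for prev, nxt in zip(corpus, corpus[1:]):
      model[prev][nxt] += 1

  def generate(word: str, length: int = 5) -> str:
      out = [word]
      for _ in range(length):
          options = model[out[-1]]
          if not options:
              break
          # Sample the next word in proportion to how often it was seen.
          words, counts = zip(*options.items())
          out.append(random.choices(words, weights=counts)[0])
      return " ".join(out)

  print(generate("the"))  # e.g. "the cat sat on the mat"

Train the counts on the class's own sentences, and most of the mystery evaporates.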

1970-01-01 | 4 days ago
[deleted]
| 4 days ago

GenAI is a bubble, but that's not the same thing as the broader field of AI. We will probably not even be using chatbots in a few years; better interfaces will be developed with real intelligence, not just predictive statistics.

Rob_Polding | 4 days ago

I think there is an unspoken implication built into the assumption that AI will be able to replace a wide variety of existing jobs: that those current jobs are not being done efficiently. This is sometimes articulated as "bullshit jobs", etc., and if AI takes over those, the immediate next thing that will happen is that AI will look around and ask why _anyone_ was doing that job in the first place. The answer was articulated 70 years ago in [0].

The only question is how much fat there is to trim as middle management is wiped out, because the algorithms have determined that it is completely useless and mostly only increases cost over time.

Now, all the AI companies think that they are going to derive revenue from that fat, but those revenue streams are going to disappear entirely, because a huge number of purely political positions inside corporations will vanish; if they do not, the corporation will go bankrupt competing with other companies that have already cut the fat. There won't be additional revenue streams that get spent on the bullshit. The good news is that labor can go somewhere else, and we will need it due to a shrinking global population, but the cushy bullshit management job is likely to disappear.

At some point AI agents will cease to be sycophantic and when fed the priors for the current situation that a company is in will simply tell it like it is, and might even be smart enough to get the executives to achieve the goal they actually stated instead of simply puffing up their internal political position, which might include a rather surprising set of actions that could even lead to the executive being fired if the AI determines that they are getting in the way of the goal [1].

Fun times ahead.

0. https://web.archive.org/web/20180705215319/https://www.econo...
1. https://en.wikipedia.org/wiki/The_Evitable_Conflict

tgbugs | 4 days ago

Reads like it was written by an AI.

mcswell | 4 days ago

Open letter to tech magnates:

By all means, continue to make or improve your Llamas/Geminis (to the latter: stop censoring Literally Everything. Google has a culture problem. To the former... I don't much trust your parent company in general)

It will undoubtedly lead to great advances

But for the love of god, do not tightly bind them to your products (Kagi does it alright; they don't force it on you). Do not make your search results worse. Do NOT put AI in charge of automatic content moderation with 0 human oversight (we know you want to; the economics of it work out nicely for you, with no accountability). People already get banned far too easily by your automated systems as it is.

alex1138 | 4 days ago

> But stocks are insignificant in the vast perspective of human history

This really misunderstands what the stock market tracks

MattDamonSpace | 4 days ago

> Yet the economic markets are reacting as if they were governed by stochastic parrots.

That's because they are. The stock market is all about narrative.

> Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence.

Yes it is: the mega companies that will be providing the intelligence are Nvidia, AMD, TSMC, ASML, and your favourite foundry.

throwawayffffas | 4 days ago
[deleted]
| 4 days ago

> Yet the economic markets are reacting as if they were governed by stochastic parrots

uh last time I checked, "markets" around the world are a few percent from all time highs

fullstackchris | 4 days ago

At the moment I just don't see AI, in its current state or on its future trajectory, as a threat to jobs. (Not that there can't be other reasons why jobs are getting harder to get.) Predictions are hard, and breakthroughs can happen, so this is just my opinion. I'm posting this comment as a record to myself of how I feel about AI, since my opinion of how useful/capable AI is has gone up and down and up and down again over the last couple of years.

Most recently it went down, because I worked on two separate projects over the last few weeks with the latest models available on GitHub Copilot Pro (GPT-5, Claude Sonnet 4, Gemini 2.5 Pro, and some less capable ones at times as well), trying the exact same queries for code changes across all three models for a majority of the queries. I found myself using Claude the most, but it still wasn't drastically better than the others, and it still made too many mistakes.

One project was a simple health-tracking app in Dart/Flutter. Completely vibe-coded, just for fun. I got basic stuff working. Over the days I kept finding bugs as I started using it. Since I truly wanted to use this app in my daily life, at one point I just gave up, because fixing the bugs was getting way too annoying. Most "fixes", as I later found when I got into the weeds of it, were wrong, built on wrong assumptions: changes that seemed to fix the problem at the surface but introduced more bugs and random garbage, despite my giving a ton of context and instructions on why things are supposed to be a certain way, etc. I was constantly fighting with the model. It would've been much easier to do much more on my own and use it a little bit.

Another project was in TypeScript, where I did actually use my brain, not just vibe-coded. Here, AI models were helpful because I mostly used them to explain stuff. And did not let them make more than a few lines of code changes at most at a time. There was a portion of the project which I kinda "isolated" which I completely vibe-coded and I don't mind if it breaks or anything as it is not critical. It did save me some time but I certainly could've done it on my own with a little more time, while having code that I can understand fully well and edit.

So the way I see it, these models right now are for research/prototyping/throwaway kind of stuff. But even in that, I literally had Claude 4 teach me something wrong about TypeScript just yesterday. It told me a certain thing was deprecated. I asked a follow-up question on why that thing was deprecated and what's used instead, and it replied with something like "Oops! I misspoke, that is not actually true, that thing is still being used and not deprecated." Like, what? Lmao. For how many things have I not asked a follow-up and learnt stuff incorrectly? Or asked and still learnt incorrectly, lmao.

I like how straightforward GPT-5 is. But apart from that style of speech, I don't see much other benefit. I do love LLMs for personal random searches like facts/plans/etc. I just ask the LLM to suggest what to do, just to rubber-duck or whatever. Do all these gains add up to massive job displacement? I don't know. Maybe. If it is saving 10% of the time for me and everyone else, I guess we do need 10% fewer people to do the same work? But is the amount of work we can get paid for fixed and finite? Idk. We (individuals) might have to adapt and be more competitive than before, depending on our jobs and how they're affected, but is it a fundamental shift? Are these models or their future capabilities human replacements? Idk. At the moment, I think they're useful but overhyped. Time will tell though.

abhaynayar | 4 days ago

Reposting the article so I can read it in a normal font:

Regardless of their flaws, AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were a few years ago science fiction. It was not even clear that we were so near to create machines that could understand the human language, write programs, and find bugs in a complex code base: bugs that escaped the code review of a competent programmer.

Since LLMs and in general deep models are poorly understood, and even the most prominent experts in the field failed miserably again and again to modulate the expectations (with incredible errors on both sides: of reducing or magnifying what was near to come), it is hard to tell what will come next. But even before the Transformer architecture, we were seeing incredible progress for many years, and so far there is no clear sign that the future will not hold more. After all, a plateau of the current systems is possible and very credible, but it would likely stimulate, at this point, massive research efforts in the next step of architectures.

However, if AI avoids plateauing long enough to become significantly more useful and independent of humans, this revolution is going to be very unlike the past ones. Yet the economic markets are reacting as if they were governed by stochastic parrots. Their pattern matching wants that previous technologies booms created more business opportunities, so investors are polarized to think the same will happen with AI. But this is not the only possible outcome.

We are not there, yet, but if AI could replace a sizable amount of workers, the economic system will be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch. Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence: either AI will be eventually a commodity, or the governments would do something, in such an odd economic setup (a setup where a single industry completely dominates all the others).

The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system). Markets don’t want to accept that, so far, and even if the economic forecasts are cloudy, wars are destabilizing the world, the AI timings are hard to guess, regardless of all that stocks continue to go up. But stocks are insignificant in the vast perspective of human history, and even systems that lasted a lot more than our current institutions eventually were eradicated by fundamental changes in the society and in the human knowledge. AI could be such a change.

andai | 4 days ago

[dead]

ETH_start | 4 days ago

[dead]

jeffWrld | 4 days ago

[dead]

inquirerGeneral | 4 days ago

This same link was submitted 2 days ago. My comment there still applies.

LLMs do not "understand the human language, write programs, and find bugs in a complex code base"

"LLMs are language models, and their superpower is fluency. It’s this fluency that hacks our brains, trapping us into seeing them as something they aren’t."

https://jenson.org/timmy/

cratermoon | 4 days ago