Lawyers are like chartered engineers. It's not that you can't do it yourself; it's that using them confers a kind of insurance against risk in the outcome.
Where does an AI get chartered status, admission to the bar, and insurance cover?
OK, let's play out this scenario. Why wasn't this the case when the internet was in its infancy? People kept pumping money into young and failing tech companies; they weren't hoarding capital in the expectation that the internet would mature and the marginal cost of production for internet companies would drop to zero.
Not worth reading.
> this paper focuses specifically on the zero-sum nature of AI labor automation... When AI automates a job - whether a truck driver, lawyer, or researcher - the wages previously earned by the human worker... flow to whoever controls the AI system performing that job.
The paper examines a world where people will pay an AI lawyer $500 to write a document instead of paying a human lawyer $500 to write it. That will never happen.
What jobs do we think will survive if AGI is achieved?
I was thinking religious leaders might get a good run. Outside of, say, Futurama, I'm not sure many people will want faith leadership from a robot?
If the singularity happens, I feel like interest rates will be the least of our concerns.
This paper asserts that when "TAI" arrives, human labor is simply replaced by AI labor while aggregate labor stays constant. It treats human labor as a mere input that can be swapped out without consequence, which ignores the fact that human labor is the source of wages and, therefore, of consumer demand. Remove human labor from the equation and the whole thing collapses.
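A back-of-the-envelope way to see the objection (my sketch with made-up numbers, not the paper's model): if wages fund consumption, zeroing out the wage bill zeroes out that demand channel unless owners' spending or transfers replace it.

    # Toy circular-flow sketch (assumed numbers, not the paper's model):
    # workers spend a fraction of their wages on the economy's output.
    propensity_to_consume = 0.9   # assumed fraction of wages spent

    wage_bill = 100.0             # total wages paid to human workers
    worker_demand = propensity_to_consume * wage_bill
    print(worker_demand)          # 90.0: demand funded by labor income

    # Automate the jobs: the same payments now flow to AI owners.
    wage_bill = 0.0
    worker_demand = propensity_to_consume * wage_bill
    print(worker_demand)          # 0.0: this demand channel collapses unless
                                  # owner spending or transfers replace it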
Do you have a degree in theoretical economics?
“I have a theoretical degree in economics”
You’re hired!
Real talk though: I wish I had just encountered an obscure paper that could lead me to refine a model for myself, but it seems like there would be so many competing papers that it's the same as having none.
Is a small group really going to control AI systems, or will competition bring the price down so much that everyone benefits and the unit cost of labor keeps falling?
Given that the paper disappoints, I'd love to hear what fellow HN readers are doing to prepare.
My prep is:
1) Building a company (https://getdot.ai) that I think will add significant marginal benefit over using products from AI labs / TAI / ASI directly.
2) Investing in the chip manufacturing supply chain: ASML, NVDA, TSMC, ... and the S&P 500.
3) Staying fit and healthy, so physical labour stays possible.
There is one thing AI can't do: take responsibility. Because you can't punish an AI instance, it cannot be held responsible for outcomes.
This paper's got it backwards. AI's benefits don't pile up with the owners; they flow to whoever's got a problem to solve and knows how to point the AI at it. Think of AI like a library: owning the books doesn't do much for you; applying their knowledge to problems does. The big winners are the ones writing the prompts, not the ones owning the servers. AI developers? They're making cents per million tokens while users, solo or corporate, capture the real value: application.
Sure, the rich might hire a few more people to aim the AI for them, but who's got a monopoly on problems? Nobody. Every freelancer, farmer, or startup has their own problems to fix, and cheap AI access means they can. The paper's obsessed with wealth grabbing all the future benefits, but problems are everywhere; good luck cornering that market. Every one of us has our own problems and stands to get personalized benefits from AI.
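To put numbers on "cents per million tokens" (the price and document length below are my assumptions, not figures from this thread):

    # Back-of-the-envelope on who captures the value (assumed numbers):
    price_per_million_tokens = 10.00  # assumed $/1M output tokens
    tokens_per_document = 5_000       # assumed length of a drafted document
    human_fee = 500.00                # the $500 lawyer fee from upthread

    ai_cost = price_per_million_tokens * tokens_per_document / 1_000_000
    print(f"AI drafting cost: ${ai_cost:.2f}")                        # $0.05
    print(f"Value retained by the user: ${human_fee - ai_cost:.2f}")  # $499.95

On those assumptions, the developer's take is a rounding error next to what the person applying the AI keeps.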
In the age of AI, having problems is linked to receiving its benefits. Imagine, for example, that I feel one side of my face drooping and have difficulty speaking, so I type my symptoms into an LLM and it tells me to get to a doctor immediately. It might save my life from a stroke. Who gets the largest benefit here?
Problems are distributed even if AI is not.
Whoever endorsed this author to post on arxiv should have their endorsement privileges revoked.
I suspect this is being manipulated to be #1 on HN. Looking at the paper, and looking at the comments, there's no way it's #1 by organic votes.
This paper is silly.
It asks the equivalent of "what if magic were true" (human-level AI) and answers with "the magic economy would be different." No kidding.
FWIW, the author is listed as a fellow of "The Forethought Foundation" [0], which is part of the Effective Altruism crowd [1], who have some cultish doomerist views around AI [2][3].
There's a reason this stuff goes up on a non-peer-reviewed paper mill.
--
[0] https://www.forethought.org/the-2022-cohort
[1] https://www.forethought.org/about-us
[2] https://reason.com/2024/07/05/the-authoritarian-side-of-effe...
[3] https://www.techdirt.com/2024/04/29/effective-altruisms-bait...
If I understand correctly, this paper is arguing that investors will desperately allocate all their capital to maximize ownership of future AI systems. The market value of everything else crashes because holding it carries the opportunity cost of owning less future AI. Interest rates explode, pre-existing bonds become worthless, and AI stocks go to the moon.
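The bond part is just discounting arithmetic; here's a quick sketch (the 4% and 30% rates are illustrative assumptions, not numbers from the paper):

    # Toy bond pricing (illustrative rates, not from the paper):
    # an existing bond's price is the present value of its fixed cash
    # flows, so a jump in market rates crushes it.
    def bond_price(face, coupon_rate, years, market_rate):
        coupon = face * coupon_rate
        pv_coupons = sum(coupon / (1 + market_rate) ** t
                         for t in range(1, years + 1))
        pv_face = face / (1 + market_rate) ** years
        return pv_coupons + pv_face

    print(bond_price(1000, 0.04, 10, 0.04))  # ~1000: priced at par at issue
    print(bond_price(1000, 0.04, 10, 0.30))  # ~196: same bond after rates jump to 30%

Not literally worthless, but an 80% haircut from a rate spike is the mechanism being described.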
It's an interesting idea. But if the economy grinds to a halt because of that kind of investor behavior, it seems unlikely governments will just do nothing. E.g., what if they heavily tax ownership of AI-related assets?