If I understand correctly, this paper is arguing that investors will desperately allocate all their capital such that they maximize ownership of future AI systems. The market value of anything else crashes because it comes with the opportunity cost of owning less future AI. Interest rates explode, pre-existing bonds become worthless, and AI stocks go to the moon.
It's an interesting idea. But if the economy grinds to a halt because of that kind of investor behavior, it seems unlikely governments will just do nothing. E.g. what if they heavily tax ownership of AI-related assets?
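For what it's worth, the "pre-existing bonds become worthless" part follows mechanically from discounting. Here's a minimal sketch (my own illustration, not from the paper) of a fixed-coupon bond repriced under a rate spike; the 30% discount rate is an arbitrary assumption standing in for "explosive" AI-era returns:

```python
# Toy illustration: present value of a pre-existing fixed-coupon bond
# when the market discount rate jumps.

def bond_price(face, coupon_rate, market_rate, years):
    """Discount each coupon and the face value back to today."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

# A 10-year, 3% coupon bond priced when market rates are also 3% trades at par:
print(round(bond_price(1000, 0.03, 0.03, 10), 2))  # 1000.0

# If expected returns on AI capital push discount rates to (say) 30%,
# the same bond's price collapses to a fraction of face value:
print(round(bond_price(1000, 0.03, 0.30, 10), 2))
```

The bond doesn't literally go to zero, but a large enough jump in the prevailing rate wipes out most of its market value, which is the mechanism the paper leans on.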
Lawyers are like chartered engineers. It's not that you can't do it yourself; it's that using them confers a kind of insurance against risk in the outcome.
Where does an AI get chartered status, admitted to the bar, and insurance cover?
What jobs do we think will survive if AGI is achieved?
I was thinking religious leaders might get a good run. Outside of say, Futurama, I'm not sure many people will want faith-leadership from a robot?
Is a small group really going to control AI systems, or will competition bring the price down so much that everyone benefits and the unit cost of labor is driven lower and lower?
Given that the paper disappoints, I'd love to hear what fellow HN readers do to prepare?
My prep is:
1) building a company (https://getdot.ai) that I think will add significant marginal benefits over using products from AI labs / TAI, ASI.
2) investing in the chip manufacturing supply chain (ASML, NVDA, TSMC, ...) and the S&P 500.
3) Staying fit and healthy, so physical labour stays possible.
If the singularity happens, I feel like interest rates will be the least of our concerns.
There is one thing AI can't do: take responsibility. Because you can't punish an AI instance, it can't be held accountable.
Whoever endorsed this author to post on arxiv should have their endorsement privileges revoked.
This paper asserts that when "TAI" arrives, human labor is simply replaced by AI labor while aggregate labor stays constant. It treats human labor as a mere input that can be swapped out without consequence, ignoring that human labor is the source of wages and, therefore, of consumer demand. Remove human labor from the equation and the whole thing collapses.
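To make the demand-side objection concrete, here's a toy two-line sketch (my own illustration, not the paper's model) using the standard assumption that workers spend a higher fraction of their income than concentrated capital owners do; the specific propensities (0.9 and 0.3) are made-up numbers:

```python
# Toy sketch: aggregate demand when income shifts from wages to AI rents.
# mpc_* = marginal propensity to consume; values here are assumptions.

def aggregate_demand(total_income, wage_share, mpc_workers=0.9, mpc_owners=0.3):
    wages = total_income * wage_share        # income paid to human labor
    rents = total_income * (1 - wage_share)  # income accruing to AI owners
    return wages * mpc_workers + rents * mpc_owners

print(aggregate_demand(100, wage_share=0.6))  # mostly wage income
print(aggregate_demand(100, wage_share=0.0))  # all income to AI owners
```

Even with total income held constant, shifting the wage share to zero cuts consumption demand sharply, which is the collapse the comment is pointing at.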
Do you have a degree in theoretical economics?
“I have a theoretical degree in economics”
You’re hired!
Real talk, though: I wish I had encountered an obscure paper that could lead me to refine a model for myself, but it seems like there would be so many competing papers that it's the same as having none.
This paper is silly.
It asks the equivalent of "what if magic were true" (human-level AI) and answers with "the magic economy would be different." No kidding.
FWIW, the author is listed as a fellow of "The Forethought Foundation" [0], which is part of the Effective Altruism crowd [1], who hold some cultish doomerist views about AI [2][3].
There's a reason this stuff goes up on a non-peer reviewed paper mill.
--
[0] https://www.forethought.org/the-2022-cohort
[1] https://www.forethought.org/about-us
[2] https://reason.com/2024/07/05/the-authoritarian-side-of-effe...
[3] https://www.techdirt.com/2024/04/29/effective-altruisms-bait...
I suspect this is being manipulated to be #1 on HN. Looking at the paper, and looking at the comments, there's no way it's #1 by organic votes.
Not worth reading.
> this paper focuses specifically on the zero-sum nature of AI labor automation... When AI automates a job - whether a truck driver, lawyer, or researcher - the wages previously earned by the human worker... flow to whoever controls the AI system performing that job.
The paper examines a world where people will pay an AI lawyer $500 to write a document instead of paying a human lawyer $500 for the same document. That will never happen.