Ask HN: Why is Ilya saying data is limited when the whole world is data?

georgestrakhov | 12 points

There's a ton of recent work on data curation / synthetic data generation that shows that smaller high quality datasets go a lot further than scaling up on noisy web data.

The scaling law plots are on a log scale, so getting more juice out of naive scaling would require exponentially more resources. We're at a point where the juice is not worth the squeeze, so people will shift to moving the curve down with new architectures, better-curated datasets, and test-time compute / RL.

See:

- FineWeb: https://arxiv.org/abs/2406.17557

- Phi-4: https://arxiv.org/abs/2412.08905

- DataComp: https://arxiv.org/abs/2406.11794
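A minimal sketch of the log-scale point, assuming a Chinchilla-style power law for the data term. The constants `E`, `B`, and `beta` here are illustrative values loosely in the spirit of the Hoffmann et al. fits, not measurements:

```python
# Hedged sketch: predicted loss under an assumed power-law scaling law,
# L(D) = E + B / D**beta, where E is the irreducible loss and the second
# term is the data-limited part. Constants are illustrative assumptions.
E, B, beta = 1.69, 410.7, 0.28

def loss(tokens: float) -> float:
    """Predicted loss as a function of training-data size in tokens."""
    return E + B / tokens ** beta

# Each 10x increase in data shrinks the data term by a constant factor
# (10**-beta, about 0.53 here), so the absolute gain per decade keeps
# falling: exponentially more data for ever-smaller loss reductions.
for tokens in [1e9, 1e10, 1e11, 1e12, 1e13]:
    print(f"{tokens:.0e} tokens -> loss {loss(tokens):.3f}")
```

The loop makes the diminishing returns visible directly: the loss drop from 1e12 to 1e13 tokens is much smaller than the drop from 1e9 to 1e10, even though the second jump costs 1,000x more data.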

m_ke | 15 days ago

If you show an LLM one webcam feed to train on, that's useful. Two is even more useful. But there are diminishing returns. "Useful training data" is limited.

Humans get to PhD level with barely a drop of training data compared to what LLMs are trained on.

If there were infinite useful data, then scaling AI on data would make sense. Since there isn't, the way forward is getting more efficient at using the data we have.

unsupp0rted | 15 days ago

Think about it from his perspective.

Data from the internet can be chunked, sorted, easily processed, and has a relatively high signal-to-noise ratio. Data from a webcam or a microphone -- if even legal to access in the first place -- would be a mess. Imagine chunking and processing 5TB of that sort of data. Seems to me that the effort would far outweigh the reward.

Robots are a different problem entirely. It's darkly amusing that simple problems of motion through space are more complex to replicate than painting the simulacrum of a masterpiece, or acing the medical licensing exam. We'll probably have AGI before we can mimic the movement of a simple housefly.

A_D_E_P_T | 15 days ago

You’re thinking “any data”, he’s thinking “useful data for training an LLM”.

sk11001 | 15 days ago

There are lots of problems where someone has to run experiments to generate data. If even the most optimized possible process is expensive and slow to produce a single data point, then all you can do is wait for more data before a solution can be found. Think drug discovery.

wef22 | 15 days ago

Much of the user-generated data stored by tech companies is proprietary, which limits access by external parties.

EncryptedMan | 9 days ago

What about all the books written since antiquity?

farseer | 11 days ago

The study of complex systems is wisdom. We know how communication on the internet behaves. Conway's Law hits hard, and the processes of life are not dumb.

Access to physical reality is important when negotiating with the beings that can form under this constraint. People have apparently known this instinctively for a very long time and they are not going to give in to the demands of the AI industry.

It's a great mistake to humanize everything in your consciousness.

ganzuul | 15 days ago