Ask HN: Pull the curtain back on Nvidia's CES keynote please

btbt | 54 points

Agree with all your points on the real-world consumer experience.

* I would never treat an AI answer to a consequential question as authoritative unless it shows me the source and I can click through to verify the source and the data presented (the search-engine use case).

* Rewrites with AI are bug-prone, and the bugs are often hard to trace because they look superficially correct. Generating scaffolding works super well.

* Images are often too smooth, videos too robotic and rhythmic, water too shiny, etc. Trained eyes can easily distinguish AI output from the real thing.

* Hallucinations are commonplace.

zer0x4d | 24 days ago

Related ongoing thread:

Jensen Huang keynote at CES 2025 [video] - https://news.ycombinator.com/item?id=42618595 - Jan 2025 (65 comments)

dang | 24 days ago

From the consumer perspective, DLSS upscaling and frame prediction both introduced a lot of artefacts that made them less than ideal, even though they do improve performance quite a bit. This generation improves their accuracy and leans more heavily on these technologies to continue the performance improvements. AMD is making the same investments in its silicon, devoting considerable die area to AI cores and ray tracing and not much at all to traditional compute or rasterisation.

Either they are right, and it's just a matter of more data and more compute until the output becomes indistinguishable from traditionally rendered pixels, or they are wasting considerable silicon on a problem that never becomes convincing. Even with its problems, DLSS 3 has been quite popular with gamers; less-than-perfect results are fine in cases like these because the errors aren't very consequential.

I don't know where this goes. I do know each generation of AI has improved quite a lot, and we no longer talk about the Turing test; we definitely took a jump, but there remain a lot of hard engineering problems in every domain of AI to make it function as we want it to. It feels to me like a lot of these generators are in the uncanny valley, making the sort of errors that are weird and creepy, but the thing about that valley is that it hides a lot of the progress being made.

PaulKeeble | 24 days ago

I worked as an applied ML researcher for a while, so I'll give this a shot.

"- AI can solve any problem across modalities—just feed it data." - a large chunk of my time in ML is spent on data. I can't emphasize this enough - obtaining large amounts of quality data is a primary challenge with any sort of ML task. This might get easier with time, but will remain a challenge.

The corollary is that niche applications (and thus good fundamentals) are still important.

"- Are the challenges you encounter just a matter of “more compute/money,” or are they fundamental barriers?" - Well, there's a spectrum. Hallucinations are inherent to ML models - I don't think anybody has cracked ML model confidence estimation, and plenty have tried.

A slew of current limitations around LLMs stem from limited context windows. That is "only" inherent to the Transformer architecture (and there is some ongoing work on alternatives such as Mamba).

I think that "agents" and deep integration with computer interfaces will produce some interesting automations.

Scene_Cast2 | 24 days ago

I'd add to your list of problems: Publicly offered AIs are tuned to present a puritanical, sexless, inoffensive view of the world, aligned with "the man" and kowtowing to corporate America's rules.

fulafel | 24 days ago

Nvidia's robotics tools were somewhat convoluted the last time I checked them, a couple of years ago. I've seen people train robotic control systems with their sim, but it seems to me that the right approach is to treat them as reference designs and reimplement them in an open-source setting with sustained community interest. Nvidia's hardware and software remind me of military-grade equipment in their engineering approach, which may be sound for robotics. World models and simulators are still at a stage where making progress in an open setting doesn't require multiple billions of dollars.

cwiz | 24 days ago

I am making software for myself to learn, and to help my kids. I am using AI to essentially make A LOT of language exercises. It's really, really good at that. And learning is a lot more fun if you're creative with prompts.

I made JavaScript for a range of question types (things like fill-in, multiple-choice, ...) and have the AI use that to e.g. generate short stories where you have to complete the verbs, or replace some English words with German ones ... that sort of stuff.

Oh, and any time I need tests, or need to do something to a large range of variables: do what needs to be done to the first variable, copy over (as a reference) the list of fields, comment that list out, and ask the AI to suggest the rest. It usually needs only one or two changes.

And yes, I've noticed the hallucinations. If you ask AI to correctly implement scatter-gather parallel processing in Go ... it's incredible how many errors it makes, and it's infuriating how you have to explain every one of its errors again and again. You have it output a basic structure, because that's still fast, and then rewrite the whole thing. I think it still gains me a bit of time ... but I see the point.
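
For reference, a minimal correct sketch of the pattern in question: a fixed pool of worker goroutines fans out over a jobs channel, and the results fan back in over a second channel. Names and the toy workload are illustrative, not the code I actually wrote.

    package main

    import (
        "fmt"
        "sync"
    )

    // scatterGather fans inputs out to a fixed pool of workers and
    // gathers the results. Result order is not guaranteed.
    func scatterGather(inputs []int, workers int, f func(int) int) []int {
        jobs := make(chan int)
        results := make(chan int)

        var wg sync.WaitGroup
        wg.Add(workers)
        for w := 0; w < workers; w++ {
            go func() {
                defer wg.Done()
                for in := range jobs {
                    results <- f(in)
                }
            }()
        }

        // Scatter: feed the jobs channel, then close it so workers exit.
        go func() {
            for _, in := range inputs {
                jobs <- in
            }
            close(jobs)
        }()

        // Close results only after every worker is done, so the
        // gather loop below terminates.
        go func() {
            wg.Wait()
            close(results)
        }()

        // Gather.
        out := make([]int, 0, len(inputs))
        for r := range results {
            out = append(out, r)
        }
        return out
    }

    func main() {
        squares := scatterGather([]int{1, 2, 3, 4, 5}, 3, func(n int) int { return n * n })
        fmt.Println(squares) // order varies, e.g. [1 9 4 25 16]
    }

The two easy-to-botch parts are closing each channel from the sending side and closing results only after all workers have finished; that's exactly the kind of detail these models tend to fumble.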

spwa4 | 24 days ago