Gemini Live with camera and screen sharing capabilities

agnosticmantis | 190 points

I've been using this to help me read papers with mathematical notation in them. I screen share with https://aistudio.google.com/live and then ask (out loud) questions like "what do these symbols mean?" - it's a huge help for me, since I never memorized all of that stuff.
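
For reference, roughly the same workflow can be scripted against the Gemini API rather than the Live UI. This is only a minimal sketch, assuming the google-genai Python SDK, an API key in a GEMINI_API_KEY environment variable, and an illustrative model name and screenshot path:

    import os
    from google import genai
    from PIL import Image

    # Assumed setup: API key in GEMINI_API_KEY; model name is illustrative.
    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

    # A screenshot of the paper page you would otherwise share in the Live UI.
    page = Image.open("paper_page.png")

    # Ask the same kind of question you'd ask out loud while screen sharing.
    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents=[page, "What do the symbols in the highlighted equation mean?"],
    )
    print(response.text)

The Live UI adds streaming audio and continuous screen capture on top of this; the one-shot call above only illustrates the ask-a-question-about-a-page idea.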

simonw | 2 days ago

Cool tech, but for some reason, the very first sentence in every reply the AI gives in the demo videos is really off-putting to me personally. It seems like this kind of joyfully helpful opening sentence is expected in US culture, but it immediately turned me off. I come from a culture that is generally less verbose and more to the point, so this feels like a mismatch right away.

kleiba | a day ago

I am getting extremely skeptical of this AI “age”. I was hoping it would unlock a whole new generation of startups the way the iPhone did. But genAI is too generic and too blunt a tool, in the sense that it does everything, and it's too expensive for a small company to build on. It looks like the AI companies (Google and OpenAI) realize that and are doing the vertical integration themselves. In that case, does genAI just end up being the automation tool that you access through OpenAI or Google, and that's it?

I am sure people here see it better than I do, so what new class of problems is this genAI going to solve?

yalogin | 2 days ago

Marketing should use their imagination!

Imagine putting dice and random objects (cups, forks, ...) on a table, pointing your phone at them, and asking it to invent a new game for your friends. Tell it to use these objects and also to use the live camera as a gameplay element.

Or recognizing bird or plant species.

Or helping a blind person go hiking, helping them avoid tree roots and describing the beautiful scenery around them.

So much possibility!

cadamsdotcom | 2 days ago

Watching that demo video, I wonder why they chose that example.

Gemini only talked about some useless surface knowledge that would be forgotten quickly, whereas if she actually read the Wikipedia page she would learn more and retain it better.

sureglymop | 2 days ago

I think it would be nice if the Pixel Fold could do this: have a browser on the left showing some content, and Gemini on the right, where you can prompt it with questions or ask it to take actions on the left.

kovek | 2 days ago

It's not the best at helping me play video games yet, lol. Ah well. Blind people are used to waiting. :)

devinprater | 2 days ago

This is Apple Intelligence the way it was supposed to be ("AI for the rest of us"), but Apple just doesn't "get" AI, so here we are—the only platform provider that is taking the correct approach to AI is Google.

behnamoh | 2 days ago

Why only the Pixel 9? Surely none of the computation is on-device anyway.

polishdude20 | 2 days ago

I don't understand why this is Android-only. Wouldn't people want to use this on a PC too?

Timwi | a day ago

Is there a way to record the screen WITH AUDIO and save it?

1024core | a day ago

Pretty useless demos, haha. I wonder why they chose those cases; maybe it really doesn't do much else correctly right now.

randomsofr | 2 days ago

and then humanity became illiterate

discordance | 2 days ago

Is anyone seriously using Gemini daily? How does it compare to other agents you've tried? Do you feel it's a good value proposition? What does it excel or fail at?

vorpalhex | 2 days ago

No thanks, Google.

pcdoodle | a day ago

Cool! I'm looking forward to having a live AI drawing tutor. None of the models are there yet but we're getting close!

sandspar | 2 days ago

The first example video tells you how to improve your home decor by saying that you could add a side table and a blanket to your chair. Thank you, Gemini, for telling me that tables can go next to chairs.

There's really no use for AI outside of making Studio Ghibli drawings and giving me a bunch of broken code quickly.

terminatornet | a day ago

"Screen images simulated."

tintor | 2 days ago
