Show HN: openai-realtime-embedded-SDK Build AI assistants on microcontrollers

Sean-Der | 51 points

Took a bit of poking to figure out what the use case is. It doesn't seem to be mentioned in the README (the usage section is empty) or in the intro above. The main use case looks to be speech-to-speech, which makes sense since we're talking about embedded products: text-to-speech (for example) usually wouldn't be relevant, because most embedded products don't have a keyboard interface. Congrats on the launch! Cool to see WebRTC applied to the embedded space. Streaming speech-to-speech over WebRTC could make a lot of sense.

kaycebasques | 19 hours ago

Here is a nice use case: put this in a pharmacy. Have people hit a button and ask questions about over-the-counter medications.

Really, any physical place where people are easily overwhelmed would benefit from something like that.

With some work, you could probably even run RAG on the questions and answer esoteric things like where the food court is in an airport or where the ATM is in a hotel.
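The RAG idea above can be sketched without any particular framework. This is a minimal, illustrative retrieval step: match a transcribed question against a small FAQ by keyword overlap, then feed the retrieved answer to the model as context. The `FAQ` contents and the `retrieve` helper are hypothetical, not part of this SDK; a real deployment would use embeddings, but the flow is the same.

```python
# Hypothetical retrieval sketch for the kiosk idea: pick the
# best-matching FAQ entry by keyword overlap with the question.
FAQ = {
    "where is the food court": "The food court is on Level 2, past security.",
    "where is the atm": "An ATM is in the lobby, next to the front desk.",
    "does aspirin have side effects": "Common side effects include stomach upset.",
}

def retrieve(question: str) -> str:
    """Return the FAQ answer whose key shares the most words with the question."""
    q_words = set(question.lower().split())
    best = max(FAQ, key=lambda k: len(q_words & set(k.split())))
    return FAQ[best]

print(retrieve("where can I find the food court"))
# prints "The food court is on Level 2, past security."
```

The retrieved snippet would then be prepended to the user's question before it is sent to the model, which is the whole trick behind RAG.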

jonathan-adly | 19 hours ago

Favorited and starred! I wonder if the real power of this could be in integrating large, low-cost sensor networks. For things like video and audio it might make more sense to bump up to a single-board Linux computer, but maybe the AI could help parse sensor readings, create notifications based on them, and push events back to the real world (lights, solenoids, etc.).

I think it would help to have a FreeRTOS example, or, if you want to go really crazy, a Zephyr integration! It would be a lot of fun to work on the AI-and-microcontroller combination. What a cool niche!

roland35 | 14 hours ago

Love this! Excited to give it a try.

johanam | 20 hours ago