
The number one thing these glasses/software need to solve is making the words match the speech in a one-to-one conversation in a quiet environment, e.g., a doctor's visit. I think they are very close.

We just got the Nreal/Xrai setup a few days ago for my wife, who has been deaf from birth (I'm her hearing husband). She grew up lipreading but integrated more with signing and the deaf community as an adult. She has a cochlear implant but cannot understand language from sound alone, and she doesn't really enjoy hearing that much unless we are watching a movie or similar, where the sound is 100% linked to the visual.

Initial reactions to the setup:

1. Impressed, hopeful, excited.

2. A bit complicated technically. More stuff to deal with; not an everyday thing.

3. Phone battery usage is high. Maybe 3-4 hours.

4. In the right situation they will be really powerful.

5. Need more control over the interface, e.g., show/hide the 'listening' icon, which can be distracting, and move the subtitle position (maybe you already can).

6. The processing delay can make you more of an observer of the conversation. Response time is delayed enough to interrupt the flow of a conversation (like interviews over a satellite TV connection).

The number one barrier to using them is having everything ready for the moment they are needed. You need to plan ahead; it takes a few minutes to set up.

All the other high-end ideas can be set aside while the core function is dialed in.

We really appreciate the effort and hope to contribute.



Thank you for this feedback! By the way, we'll support better adjustment of the subtitle position very soon, and we just added many additional font size options as well. If you haven't already, please consider joining our Discord server to provide feedback at any time: https://discord.gg/7HjyDJ3JAz



