Ask HN: Anyone using augmented reality, VR, glasses, helmets etc. in industry?
I've gone down this rabbit hole with customers as a PM still working in the space. Here is what I've learned. The past decade of devices (HoloLens, RealWear, Google Glass, Vuzix, etc.) were some combination of way too heavy, expensive, or fragile, short on battery life, lacking WiFi connectivity, demanding too much UI and too long to get to the point of value, and/or simply not useful. On top of that, most customers had a content problem.

AR/VR use in the field typically came down to looking something up in a manual or calling someone, both easily and perhaps more effectively done on a smartphone. There was an instance where I asked techs why they weren't using the headset to show what they were seeing in realtime, and they said it's easier to FaceTime (hard to argue with that). The cool AR 3D demos or overlays rarely worked in the field on real equipment, or didn't actually convey anything useful (everyone knows the basics of how the machine works). There are VR training use cases (like learning to operate a crane), but once again it's a nice-to-have supplement and not a replacement.

Recent advances with LLMs (specifically voice) plus the Meta-style glasses form factor have created intrigue again with innovation centers at large companies. The use case we're currently working on is inspections and filling out forms with audio/video.
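To make that last use case concrete, here's a minimal sketch of the voice-to-form flow, assuming a transcript already produced by the glasses' speech-to-text; `llmComplete` and the form schema are hypothetical stand-ins, not our actual product code:

```typescript
// Hypothetical sketch: turn a technician's dictation into a structured
// inspection record. `llmComplete` stands in for whatever model API is
// used; the schema below is invented for illustration.

interface InspectionForm {
  assetId: string;
  condition: "ok" | "needs_repair" | "failed";
  notes: string;
}

async function fillFormFromVoice(
  transcript: string,
  llmComplete: (prompt: string) => Promise<string>, // stand-in model call
): Promise<InspectionForm | null> {
  const prompt =
    "Extract an inspection record as JSON with keys assetId, " +
    'condition ("ok" | "needs_repair" | "failed"), and notes ' +
    "from this technician dictation:\n" + transcript;

  const raw = await llmComplete(prompt);
  try {
    const parsed = JSON.parse(raw) as InspectionForm;
    // Never trust model output blindly: validate before it hits the
    // system of record; otherwise fall back to manual entry.
    const ok = ["ok", "needs_repair", "failed"].includes(parsed.condition);
    return parsed.assetId && ok ? parsed : null;
  } catch {
    return null; // unparseable output: ask the tech to re-dictate
  }
}
```

The validation step is the part customers care about: the model drafts, but nothing lands in the record without passing a schema check.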
My MSc project was testing VR headsets with office-based employees. Even if you ignore the expense, the health risks of sharing headsets, the lack of decent tools, and the power requirements, you still have to contend with a large subset of people feeling sick when using VR.
I use VR for gaming. The headsets are uncomfortable after about 45 minutes, they're hot and sweaty, and they're incredibly isolating. All that's fine if you want to slay baddies while alone at home, but utterly off-putting to most people.
Yes. In the past I worked at a Fortune 500 company, one of the world leaders in energy services. They introduced VR headsets to implement remote assistance and support for very remote places (offshore platforms, oilfields in the middle of nowhere, or risky countries). A local operator wears the headset, and a service engineer from HQ guides them through diagnostics and troubleshooting. The service is immediate, since the engineer doesn't spend time on flights, and it saves a lot of money. They also use the platform for training, saving time for the learners and avoiding having to deploy huge, complex machinery just for training.
I saw it in action a couple of times; it's really impressive.
I own two AR devices:
- Viture Pro XR glasses
- Vuzix Z100 glasses (through Mentra)
The Vitures I use as a lightweight alternative to VR headsets like the Meta Quest: I lie down on the couch or in bed and watch videos while wearing them.
The Vuzix are meant to be daily-wear glasses with a HUD, have yet to break them in.
Later this year, Google and Samsung are due big AR releases, and I think Meta is as well.
It'll be the debut of Android XR.
I’m one of the founders @ resolvebim.com (YC W15). We built a VR and (some) AR platform that helps engineering and construction companies use HMDs to enhance their everyday 3D BIM review workflows.
We post case studies regularly on our blog, so you can read about real world deployments there: blog.resolvebim.com
From my experience the hardware is still a hurdle, simply because it doesn’t completely replace all PC-based workflows right now and therefore has to be used selectively, at the right moments, alongside 2D monitors.
I just got a pair of TCL RayNeo Air 2 display glasses since I'm farsighted and my eyes become fatigued after a day of working on a conventional monitor. The increased focal distance seems to help, but the nose piece is weirdly designed and the pressure under the pads becomes a little painful after an hour or two. Also, the field of view is too wide, so the edges are blurry (it's hard to see the clock, corner buttons in fullscreen windows, the health bar in video games, etc.).
Worked great to avoid eye fatigue/posture issues on airplanes though. I'm happy I have them, but in hindsight I'd have gotten a Viture or something with a better nose bridge and a narrower field of view.
They are used for training; I'm personally developing something intended for firefighters.
Hurdles? Battery life, proper hardening against dust/water.
I am actively looking into a Meta Quest 3 for a virtual desktop environment that I can use portably, such as in an Airbnb, instead of lugging around a 43" 4K monitor. I'm also looking at projectors for this purpose. The context is remote work.
I work at a company that does CAD data translation (multiple formats into 3D PDF), and a while back we had an internal hackathon where one of the projects added basic support for VR glasses to our desktop app. It was really neat, and there was some excitement about it, but there hasn't been much follow-up. I think the key issue is whether or not it adds enough value to justify buying everybody a headset, and for our use case I don't know if it does; then again, I'm just a lowly programmer and don't know what the customers think about it.
We have another product that's geared towards collaborating and sharing data between teams and vendors, and it seems better suited there, but that one is a web application, and I don't know how well VR glasses are supported there.
I think it'd be awesome in the CAD applications themselves but I don't know if any of them support it out of the box.
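For what it's worth, the standard entry point for VR in a web application is the WebXR Device API. Here's a minimal feature-detection sketch (this is the browser-standard API, not anything specific to the products above; the inline type declaration just keeps the snippet self-contained):

```typescript
// Minimal WebXR feature detection. The WebXR DOM types aren't in the
// default TS lib, so a small declaration keeps this self-contained.

declare global {
  interface Navigator {
    xr?: { isSessionSupported(mode: string): Promise<boolean> };
  }
}

export async function vrSupported(): Promise<boolean> {
  if (!navigator.xr) return false; // browser has no WebXR at all
  try {
    return await navigator.xr.isSessionSupported("immersive-vr");
  } catch {
    return false; // e.g. blocked by permissions policy
  }
}

// Usage: gate a "Review in VR" button on whether a headset is reachable.
vrSupported().then((ok) =>
  console.log(ok ? "VR available" : "falling back to 3D on a monitor"),
);
```

Browser support is the catch: Chromium-based browsers and headset browsers like the Meta Quest Browser support WebXR, while support elsewhere is spottier.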
Equinor (Norwegian oil industry) uses HoloLens.
https://loop.equinor.com/en/stories/shaping-the-future-with-...
https://loop.equinor.com/en/stories/developers-trip-johan-sv...
There is at least one use case where I would want AR. I was doing some cable runs last night 30' in the air, and AR would have been very beneficial to highlight the exact path instead of spending a lot of time confirming and reconfirming with the team.
The adoption of it in various industries for training is larger than most people might suspect. First responders and retailers have some of the largest internal deployments out there, but they aren’t massively publicized (most people would never guess who’s fielding the largest fleet of headsets right now). That said, it’s still not mass adoption.
At the end of the day, you are asking someone to put something on their face that is still very different ergonomically than glasses (and I’m not sure even glasses would overcome enough friction). The ROI has to overcome the business (or personal) friction of buying the hardware, the friction of the form factor plus any friction from changed workflows.
Now put that in an operational workflow instead of training and the risks go up. Most are still skeptical of device reliability (not to say there aren’t suitable devices for operational roles but the perception is still a hurdle, and the applicability is often device-specific). Now add on to that limited experience with devices (many decision makers have never put one on), added security complications, specialized software development skills, limited content libraries and very real accessibility concerns and a lot of enterprises can never get past an “innovation center demo.”
For many industries the value proposition just isn’t there yet. That said, I’d recommend digging a little deeper, as there are a lot of existing use cases and deployments, both failed and successful, outside of IVAS.
There was a recent Nvidia video (https://www.youtube.com/watch?v=_2NijXqBESI) that showcases some of the robotics problems Nvidia is working on.
They use the Apple Vision Pro headset fairly significantly in human interaction and data gathering that they then utilize for simulations.
I worked in field installation of gas and electric meters, and probably 5+ years ago I went to a few conferences where augmented reality was pitched as a way to help installers complete their work. The idea was that installers would have basic training but could get help from a few "expert" technicians in a central location, for example by highlighting faulty areas of the meter in the goggles. There was a lot of interest, but as far as I'm aware it hasn't gone anywhere yet.
Hello!
I spent a lot of time in graduate school researching AR/VR technology (specifically regarding its utility as an accessibility tool) and learning about barriers to adoption.
In my opinion, there are three major hurdles preventing widespread adoption of this modality:
1. *Weight*: To achieve computation like that of the HoloLens, you need powerful processing, and the simplest solution is to put the processing in the device, which adds weight. The HoloLens 2 weighs approximately 566g (1.24lb), which is a LOT compared to a pair of traditional glasses at approximately 20-50g. Speaking as someone who developed with the HL2 for a few years, all-day wear with the device is uncomfortable and untenable. The weight of the device HAS to be comfortable for all-day use, otherwise it hinders adoption.
2. *Battery*: Ironically, making the device smaller to accommodate all-day wear means simultaneously reducing its battery life, which reduces its utility as an all-day wearable: any onboard battery must be smaller, and thus stores less energy. This is a problematic trade-off: you don't want the device to weigh so much that people can't wear it, but you also don't want to pare it down so far that it ceases to have function.
3. *Social Acceptability*: This is where I have some expertise, as it was the subject of my research. Simply put, if a wearer feels as though they stand out by wearing an XR device, they're hesitant to wear it at all when interacting with others. This means that an XR device must not be ostentatious, as the Apple Vision Pro, HoloLens, MagicLeap, and Google Glass all were.
In recent years, there have been a lot of strides in this space, but there's a long way to go.
Firstly, there is increasingly an understanding that the futuristic devices we see in sci-fi cannot be achieved with onboard computation (yet). That said, local, bidirectional, wireless streaming between a lightweight XR device (glasses) and a device with stronger processing power (a la smartphone) provides a potential way of offloading computation from the device itself and simply displaying results onboard (see the sketch at the end of this comment).
Secondly, Li-ion battery tech continues to improve, and there are now [simple head-worn displays capable of rendering text and bitmaps](https://www.vuzix.com/products/z100-smart-glasses) with a battery life of an entire day. There is also active development work by the folks at [Mentra (YC W25)](https://www.ycombinator.com/companies/mentra) on highlighting these devices' utility, even with their limited processing power.
Lastly, with the first two developments combined, social acceptability is improving dramatically! There are lots of new head-worn displays emerging with varying levels of ability. There was the recent [Android XR keynote](https://www.youtube.com/watch?v=7nv1snJRCEI), which shows some impressive spatial awareness, as well as the [Mentra Live](https://mentra.glass/pages/live) (an open-source Meta Raybans clone). In terms of limited displays with social acceptability, there are the [Vuzix Z100](https://www.vuzix.com/products/z100-smart-glasses), and [Even Realities G1](https://www.evenrealities.com/g1), which can display basic information (that still has a lot of utility!).
As an owner of the Vuzix Z100 and a former developer in the XR space, the progress is slow, but steady. The rapid improvements in machine learning (specifically in STT, TTS, and image understanding) indirectly improve the AR space as well.
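For the curious, here's what the offloading pattern from the first point might look like in practice. This is a hypothetical sketch, not any vendor's actual protocol; the phone address, message shape, and frame-capture helper are all invented for illustration:

```typescript
// Hypothetical thin-client sketch: the glasses stream sensor data to a
// nearby phone and only render the small results sent back. Endpoint and
// message shape are invented for illustration.

interface DisplayCommand {
  kind: "text";
  content: string; // e.g. a caption or translation to draw on the HUD
}

function connectToPhone(renderText: (s: string) => void): WebSocket {
  const ws = new WebSocket("ws://192.168.1.50:8080/xr"); // hypothetical phone address
  ws.binaryType = "arraybuffer";
  ws.onmessage = (ev: MessageEvent) => {
    // The phone did the heavy lifting (STT, vision, LLM); we just render.
    const cmd = JSON.parse(ev.data as string) as DisplayCommand;
    if (cmd.kind === "text") renderText(cmd.content);
  };
  return ws;
}

// Ship a compressed camera frame upstream a few times per second; the
// glasses never run a model themselves.
function streamFrames(ws: WebSocket, captureJpegFrame: () => ArrayBuffer): void {
  setInterval(() => {
    if (ws.readyState === WebSocket.OPEN) ws.send(captureJpegFrame());
  }, 200); // ~5 fps is plenty for "what am I looking at" queries
}
```

The design point is that the glasses stay a dumb display: all the compute lives on the phone, which is what keeps the head-worn part light, cool, and within the battery budget described above.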
For my bachelor thesis I created a HoloLens 1 app that communicated with Rhino3D: the user placed waypoints for a 5-axis robot arm, the app calculated and previewed the arm's movements, and after confirmation the arm cut the part with a plasma cutter.
In my day job I occasionally hear about some AR startup doing demos for training and parts setup in CNC machines but the value add seems to be too insignificant for the work required.
Not really "hardware/engineering", but I use Apple Vision Pro for work every day, ~8h/day. And by "for work" I mean: I use it as an extended monitor, I don't write any software related to AVP.
I just received a pair of Even Realities smart glasses yesterday, and the first impression is very cool. The live translation feature feels like it’s from the future. I also own an Apple Vision Pro and haven’t worn it in a year, but these feel so much lighter and more wearable for long periods. Next up is trying to extend them with custom software.
Explicitly banned by policy at my employer. Possibly due to concerns around data leaks, possibly due to not wanting to create a haves / have not situation. Policy is always enforced but rarely explained.
Anyone want to buy a barely used Meta Quest Pro? I don't have a use for it.
VR is the zombie category that comes around every 10 years. All that's missing is another Lawnmower Man sequel.
I use mine heavily for programming. It's a nice, comfortable third monitor. I don't do anything special; it's just another monitor in Xorg, with no spatial yadda yadda.
Wow no one at all with a Vision Pro?
We’ve developed VR-based industrial safety training as a self-contained, portable product for blue-collar workers in India, launched about 24 months ago with regular updates.
Multiple companies have bought it, and we have large companies as clients who’ve used it to train 1000s of their blue-collar workers, even in sectors such as construction in a relatively challenging (in terms of pricing and value) market.
We have a significant (I think!) number of devices deployed, and most of my clients end up purchasing more after the initial purchase and pilot.
I think that’s for a few reasons:
1) VR, when well designed, can offer a first-person experience of being an accident victim due to one's own or someone else's oversight. That makes it a far more effective way to draw the learner's attention to the importance of safety protocols, etc.
2) Our solution is multi-lingual: it’s currently available in 10 regional Indian languages - that matters, since a significant fraction of the workforce may not understand English. Our localisation extends beyond that, but language is a big thing in enabling access and usage.
3) If the instructor has to invest 10-15 min (often one-on-one) onboarding each learner before they can use your solution, it becomes very difficult to scale and raises the bar for cost-effectiveness. So that's something we focus on heavily.
4) Setup time: don't create a solution that requires IT support or someone who understands how to set up and load SteamVR / Oculus Link / Meta Horizon. If the solution adds 20-30 min of workload to the staff on a site, adopting it becomes that much more painful. So we've worked very hard to develop an integrated system where the instructor can quickly onboard 10-15 learners and get going with the session in 5 min.
5) Workflow changes: introducing VR often means changing some part of the organisation's workflow, and many VR solutions don't consider or acknowledge this in their design. Clients get excited initially, but when it comes to actually using it on a daily basis, the deployment and workflow frictions can completely tank VR adoption.
I’ve seen multiple solutions fail because of this, and we focus extensively on this when we design our solutions.
India is a hard market for VR, honestly because it’s very price sensitive. But I think we’ve made some progress here, because we’ve focused extensively on system robustness, ease of deployment, localization, and a lot of user-centered design.
We’ve also developed sophisticated VR-based training solutions for SOP training. VR can be, and is, very effective for initial onboarding and SOP training. Again, the challenge here is usability: most learners don’t know how to use the controllers, and learning to use them is not easy and takes time. So that onboarding is critical and needs to be done well.
In SOP training, our experience is that it can, if designed well, significantly reduce on-boarding time; however, you still need the last 20% of training on the actual thing for it to stick, and for the learner to actually _learn_.
Edit: formatting and word choice
Talk to an architect. (The kind that designs buildings, not software.) Last I knew a decade ago there were already early adopters from that field making daily professional use of VR. Mostly for showing clients the concept, I believe.
I work on the other side of this industry. I make the software and hardware for doing AR/XR stuff for industry.
We haven't been able to get a contract in nearly two years. Almost all of our competition in this sector have gone bust, and my company is about to follow suit.
The answer appears to be "no". The industry at large does not have enough interest in AR/XR to sustain any sort of competitive business to provide those products.
Look up the PTC demos.
The problem for VR isn’t lack of applications, it’s that Paul Graham doesn’t invest in games.
He wouldn’t invest in Palantir either.
Convince the best seed fund in the world that it has a blind spot, and maybe some risks will yield something great.
Yes, the company I work for has started using Hololens 2. We have a program that can overlay the 3D models from our CAD program onto the physical steel assemblies for QC. When it works, it works well and enables our quality checkers to perform checks faster and more accurately than using tape measures while going back and forth looking at a 2D drawing printed on 11 x 17 paper.
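The check itself is conceptually simple once the headset has registered the CAD model to the physical assembly. Here's a minimal sketch of that deviation check, assuming the registration transform comes from the headset's tracking; the feature names, units, and tolerance are illustrative, not our actual software:

```typescript
// Illustrative QC sketch: given a CAD model already registered to the
// physical assembly (a 4x4 model-to-world transform from tracking), flag
// any checked feature whose measured position deviates beyond tolerance.

type Vec3 = [number, number, number];
type Mat4 = number[]; // 16 values, column-major

// Apply a 4x4 rigid transform to a point (column-major layout).
function transformPoint(m: Mat4, p: Vec3): Vec3 {
  return [
    m[0] * p[0] + m[4] * p[1] + m[8] * p[2] + m[12],
    m[1] * p[0] + m[5] * p[1] + m[9] * p[2] + m[13],
    m[2] * p[0] + m[6] * p[1] + m[10] * p[2] + m[14],
  ];
}

function distance(a: Vec3, b: Vec3): number {
  return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
}

interface CheckResult {
  feature: string;
  deviationMm: number;
  pass: boolean;
}

// Compare each CAD feature (hole, plate edge, stiffener...) against the
// point the checker measured on the real assembly. Positions in meters.
function runChecks(
  modelToWorld: Mat4,
  cadFeatures: { name: string; position: Vec3 }[],
  measured: Map<string, Vec3>,
  toleranceMm = 3.0, // illustrative shop tolerance
): CheckResult[] {
  return cadFeatures
    .filter((f) => measured.has(f.name))
    .map((f) => {
      const expected = transformPoint(modelToWorld, f.position);
      const deviationMm = distance(expected, measured.get(f.name)!) * 1000;
      return { feature: f.name, deviationMm, pass: deviationMm <= toleranceMm };
    });
}
```

The hard part in practice is the registration and tracking drift, not the arithmetic, which is why the hardware matters so much.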
The biggest hurdle is that none of the large companies think there is enough profit to be made from AR. The HoloLens 2 is the only headset on the market both capable of running the required program and safe to use in an active shop environment (VR with passthrough is not suitable). Unfortunately, the HoloLens 2 is almost 6 years old and is being stretched to the absolute limits of its hardware capabilities. The technology is good but feels like it is only 90% of the way to where it needs to be. Even a simple revision with double the RAM and a faster, more power-efficient processor would alleviate many of the issues we've experienced.
Ultimately from what I've seen, AR is about making the human user better at their job and there are tons of industries where it could have many applications, but tech companies don't actually want to make things that could be directly useful to people that work with their hands, so instead we will just continue to toss more money at AI hoping to make ourselves obsolete.