Accelerating the Readiness of Augmented and Virtual Reality for Mass Deployment

Emerging technologies that are unlike anything that came before are often dismissed as toys – temporary novelties with high entertainment value for consumers but little practical business use. Few examples fit that description better than Augmented and Virtual Reality (AR/VR). Through our work with our government sponsors, we see ample opportunities to apply AR/VR technology to solving complex problems, and MITRE’s Bridging Innovation plays a key role in expediting them.

Both AR and VR involve head-mounted displays that present computer-generated 3D objects to the wearer. The wearer can interact with the virtual objects by walking and looking around, or through hand-, gesture-, or voice-based interactions. The difference between the two technologies lies mainly in how much of the real-world environment remains in the visual field: AR augments the wearer’s reality with spatially anchored virtual objects, while VR replaces it entirely with computer-generated imagery.
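For a concrete sense of that distinction, the minimal sketch below uses the WebXR Device API, one of several programming interfaces for AR/VR-capable browsers and headsets. The ‘immersive-ar’ and ‘immersive-vr’ session modes and the feature names are part of that standard; the surrounding function and its structure are illustrative assumptions rather than an implementation described in this article.

```typescript
// Sketch only: requesting an AR vs. a VR session with the WebXR Device API.
// The mode string is the key difference – 'immersive-ar' keeps the real-world
// view and lets content be spatially anchored to it, while 'immersive-vr'
// replaces the wearer's surroundings entirely with rendered imagery.

async function startImmersiveSession(useAR: boolean): Promise<void> {
  // WebXR lives on navigator.xr in supporting browsers; the cast keeps this
  // sketch self-contained without extra type packages.
  const xr = (navigator as any).xr;
  if (!xr) {
    console.log('WebXR is not available on this device/browser.');
    return;
  }

  const mode = useAR ? 'immersive-ar' : 'immersive-vr';
  if (!(await xr.isSessionSupported(mode))) {
    console.log(`${mode} is not supported here.`);
    return;
  }

  // AR sessions typically ask for hit testing and anchors so virtual objects
  // can be pinned to detected real-world surfaces; VR needs no such features.
  const session = await xr.requestSession(mode, {
    optionalFeatures: useAR ? ['hit-test', 'anchors'] : [],
  });

  // A reference space defines the coordinate system in which head and
  // controller poses – and thus all spatial interactions – are expressed.
  const refSpace = await session.requestReferenceSpace('local');

  session.requestAnimationFrame((time: number, frame: any) => {
    // Per frame: read the viewer pose and render either an overlay on the
    // real world (AR) or a fully computer-generated scene (VR).
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      /* render one view per entry in pose.views */
    }
  });
}
```

In practice, the AR path layers rendered content over the live view and can anchor it to surfaces, while the VR path renders the entire visual field.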

While the underlying technologies have been in development since the early 1980s, their use has mostly been limited to academic, industrial, and military settings – mainly because of (a) very limited technical capabilities in terms of computational power, portability, display resolution, optics, and field of view, and (b) the lack of available content creation tools and the amount of knowledge and effort required to create and interactively display 3D content.

Following the advent of home computers in the late 1980s, gaming technology that allowed the creation of interactive 3D content became freely available. The smartphone evolution in the early 2000s led to significant improvements in display and sensing/tracking technologies, which paved the way for the recent emergence and evolution of AR/VR technologies at an unprecedented pace. And with that comes a reinvigorated interest in their potential use for training, operations support, medical treatment/rehabilitation, and education – across a wide range of professional contexts.

While the appeal of AR/VR experiences for entertainment (i.e., games) is intuitively apparent, the potential benefits in a professional context are best understood in terms of how the human brain processes information. Behavioral scientists tend to agree that humans learn best through active engagement with the task or activity to be learned. Embodied interactions, based on naturalistic movements involving the whole body (locomotion, head/eye movement, hand movement/gestures), can lead to significant learning gains and higher levels of engagement. Furthermore, the physical world around us is inherently 3D in nature. Every object has a spatial relationship to every other object. The location of every flower, house, car, table, chair, or airplane can be described relative to every other object, and to you or me as an observer.

The human brain essentially functions as a highly optimized spatial reasoning device, allowing us to recognize and navigate these spatial relationships seemingly without effort. This seems to suggest that the human brain is hardwired for optimal processing of complex information presented in immersive 3D environments – compared to the same information presented in more traditional 2D media such as print or the web.

This state of affairs does pose some core challenges to the readiness of AR/VR for mass deployment. A well-known one is that in VR, some people are susceptible to motion sickness – resulting from the brain being unable to discern the ‘state of the world’ when given mismatched or lagging motion cues from the visual system (which represents movement as optic flow on the retina) and the vestibular system (the inner-ear balance organ, which detects changes in head and body movement). Other health- and safety-related challenges involve transient eye strain (resulting in temporary perceptual or cognitive impairments) after prolonged use of AR/VR devices. Additionally, some people can’t use these devices while wearing their prescription glasses due to the ergonomics of the head-mounted display. While at least some of these core challenges could be mitigated by improvements to the hardware, there is another class of challenges that needs to be addressed.

Since the introduction of the personal computer in the late 1970s, human factors engineers have uncovered many of the design standards, best practices, and guidelines that lead to effective, efficient, and intuitive graphical user interfaces (GUIs). Most, if not all, 2D GUIs (such as those used by applications running on your desktop, laptop, tablet, mobile phone, car, coffee maker, and so on) are based on the ‘Window, Icon, Mouse, and Pointer (WIMP)’ paradigm. But as it turns out, the WIMP paradigm does not generalize well from 2D to 3D GUIs. As a result, standards, best practices, and guidelines for the design of effective, efficient, and intuitively usable immersive 3D GUIs are much needed.
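As one illustration of how 3D interaction departs from the 2D pointer, the sketch below shows ray-based selection – a commonly used, but by no means standardized, pattern in which a ray cast from a tracked hand controller takes over the role of the mouse cursor. It uses the three.js library purely for illustration; the controller object and the list of selectable objects are assumptions, not something defined in this article.

```typescript
import * as THREE from 'three';

// Illustrative only: ray-based selection, a common candidate replacement for
// the 2D "point and click" of the WIMP paradigm. Instead of a cursor on a
// flat screen, a ray is cast from the tracked hand controller into the 3D
// scene, and the first object it hits becomes the selection target.

const raycaster = new THREE.Raycaster();
const tempMatrix = new THREE.Matrix4();

// `controller` would come from the XR rendering setup (e.g.
// renderer.xr.getController(0)); `selectableObjects` is whatever set of
// meshes the application treats as interactive. Both are assumptions here.
function selectWithController(
  controller: THREE.Object3D,
  selectableObjects: THREE.Object3D[]
): THREE.Object3D | null {
  // Point the ray from the controller's position along its forward (-Z) axis.
  tempMatrix.identity().extractRotation(controller.matrixWorld);
  raycaster.ray.origin.setFromMatrixPosition(controller.matrixWorld);
  raycaster.ray.direction.set(0, 0, -1).applyMatrix4(tempMatrix);

  // The nearest intersected object wins, mirroring the role of a click target.
  const hits = raycaster.intersectObjects(selectableObjects, false);
  return hits.length > 0 ? hits[0].object : null;
}
```

Gaze-based pointing and direct grabbing are equally common alternatives, which is part of why shared design guidelines for 3D interaction have yet to settle.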

Different stakeholders are currently developing and advocating their own candidate approaches to 3D GUIs through trial and error – sometimes even going as far as breaking compatibility between different versions of their software development kits. You could say that it is truly a virtual Wild West out there – poised for an interaction and development model to emerge and propel the use of the technology across broader applications.

With AR/VR hardware rapidly evolving and enabling novel approaches to immersive visualization of, and interaction with, data, it becomes clear that readiness for mass deployment can be expedited in a relatively straightforward way. Essentially, this involves co-evolving novel applications of AR/VR along with the emerging hardware, so that once the hardware reaches maturity and readiness for mass deployment, the impactful applications are also ready to go.

MITRE’s Bridging Innovation facilitators identify current operational challenges across all our government sponsors and their respective domains (e.g., healthcare, defense, intelligence, homeland security, transportation, aviation) for which emerging AR/VR technologies are uniquely positioned to provide disruptively innovative solutions. We can help connect sponsors with the required hardware, as well as with professionals who have the skills and experience needed for AR/VR application development – including but not limited to software/systems engineering, graphics programming, human factors engineering, user experience design, cognitive science, and experimental psychology. We can also proactively find ways to gather feedback from subject matter experts and other potential stakeholders on rapidly prototyped applications of those emerging AR/VR technologies, or even integrate those emerging technologies into novel solutions for acute operational problems that our sponsors need to address.

This bridging approach provides government sponsors with insights into novel applications driven by emerging technologies, while our industry and academic partners receive valuable feedback for iterative enhancements and gain exposure to new markets. By facilitating the co-evolution of emerging hardware and the software applications built on it, we can capitalize right now on the potential of AR/VR to address complex challenges facing our government sponsors – and accelerate its readiness for mass deployment.

Dr. Sacha Panic is a Lead Cognitive Scientist with MITRE’s AR/VR Futures Lab. His focus is on assessing and augmenting human sensory-motor performance through physiological and behavioral sensing, and on the use of (immersive) displays for basic and applied research, training, and operations.