What’s missing from a user perspective for immersive technology to make us drop our smartphones in favor of AR/VR glasses and headsets? How we interact digitally is a key factor! If input development so far has taken us from keyboard and mouse … to touch screens … what will make VR/AR the most natural interaction with technology to date? Join us as we explore user-friendly technology at the cutting edge, in our deep dive into interaction tools and inputs for virtual and augmented reality!
Background: Motion (hand) controllers and other inputs for VR/AR
VR and AR are technologies that connect human and machine more directly than anything else. Of course, one can define VR as narrowly as “the clunky headsets we use today”… but if you look at where development is headed, VR/AR is really about “a bio-tech input/output system”. That probably doesn’t mean much on its own, so let’s unpack the concept a bit.
Let’s take vision as an example – our primary sense of perception and the easiest to exemplify with: VR glasses have cameras (image sensors) that take input from how we move in our environment (mimicking what our eyes normally do) – which is transformed into an output in the form of positional tracking that gives us a sense of physical presence and freedom of movement in a synthetic 3D world.
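To make that input→output idea concrete: positional tracking ultimately boils down to re-expressing the world relative to the tracked head pose. Here is a minimal, hypothetical sketch (reduced to 2D position and a single yaw angle for clarity; real trackers estimate a full 6-DoF pose from cameras and IMUs, and none of these names come from any real SDK):

```python
import math

def world_to_view(point, head_pos, head_yaw):
    """Transform a world-space point into the headset's view space.

    A real tracker estimates a full 6-DoF pose (3D position plus 3D
    rotation) from camera and IMU data; this toy version keeps it to
    a 2D position and one yaw angle.
    """
    # Translate so the head is at the origin...
    dx = point[0] - head_pos[0]
    dz = point[1] - head_pos[1]
    # ...then rotate by the inverse of the head's yaw.
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    return (c * dx - s * dz, s * dx + c * dz)

# A point 2 m straight ahead stays straight ahead if the head hasn't moved:
print(world_to_view((0.0, 2.0), (0.0, 0.0), 0.0))  # (0.0, 2.0)
```

Every frame, the headset runs this kind of transform (in full 3D) for the entire scene, which is what creates the sense of physical presence described above.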
Similarly, VR/AR strives to take ALL input from our senses and motor actions and feed it back to us in the shape of experiences and activities – or digital dream worlds and superpowers. (The Matrix is a good fictional example of “more or less technically perfect VR”.)
Our hands are our main input mechanisms for influencing the world around us. We are accustomed to using keyboards, touch screens, mice or various gaming controllers to interact with digital content.
In VR, we have advanced motion controllers that work both as 3D mice / “hands” and carry sets of buttons, triggers, etc. for input. As you might be aware, the VR industry has mostly been focused on gaming, to which these controllers are adapted and (pretty much) ideally suited.
At Oculus Connect 6, however, we heard two announcements directly from Mark Zuckerberg that revealed a lot about the direction ahead for Oculus/Facebook. On the one hand, we had confirmation that they are actually working on AR glasses (as if we didn’t already know that…) and on the other, we got news about the upcoming software update to Oculus Quest which will give us hand tracking through the cameras on the headset!
When Oculus demonstrated and talked about the new hand-tracking interactions in VR, the focus was on enterprise/business use. (And the goal of AR glasses will, of course, be to replace our smartphones as the device we use for EVERYTHING, all day long.)
In the long term, the trend may be that hardcore gaming becomes a small niche of VR (and AR), with motion controllers like Oculus Touch used primarily in that niche. In all other cases, you will be able to use other kinds of input methods.
But how? In the rest of this article, we break down the problem and look at how existing and upcoming technology can solve it – giving us unprecedented levels of control in our future digital everyday lives.
Restriction with hand controls
One reason for wanting to move away from hand controllers is, as I said, that they are focused on gaming, not communication, social interaction or productivity. Our hands are the most expressive tool we have, and we want to unlock their full range of expression.
Hand controllers for games also wouldn’t fit into the flow of everyday life. They are not exactly something we bring with us outside of our dedicated playing time. Today, we pick up our phones out of our pockets hundreds of times a day without thinking about it (or we’re fiddling or talking with our smartwatch). The interface for everyday AR glasses must have even less friction than the behavior we are already used to. Picking up gaming controllers from a bag (or, what, holsters?) is not very frictionless.
Why don’t we have better options than gaming controls for VR / AR input?
How come there are no good alternatives to VR hand controllers for input in VR/AR? There’s no lack of talented and dedicated engineers and designers working hard to give us the optimal solution, but here are some of the problems they have to wrestle with:
- “Just a pointer?” For Gear VR and Google Daydream, we had a simple controller with a couple of buttons that sensed how we tilted/rotated it. For the AR headset Magic Leap, we have a similar hand-held “pointer” that also tracks the controller’s position in the room. What they both have in common is ease of use and intuitiveness, but they give us even less expression than full motion controllers.
- Gloves then? A promising avenue, and maybe we’ll land in a version of them at least for VR – just like in Ready Player One. Today they are held back by the bulky technology needed to handle both nuanced finger tracking and resistance/haptics, the challenge of fitting all hand sizes, and the need for hygienic materials.
- When it comes to the more “elegant” solution of optical hand tracking, besides the technical challenge of precise and accurate tracking, a big limiting factor is the lack of haptic feedback. When we are surrounded by digital objects and environments that create a visual illusion, that illusion is easily broken if we can’t touch the digital world.
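As a toy illustration of the “just a pointer?” category above: an orientation-only (3-DoF) controller gives us nothing but angles, so UIs typically cast a “laser pointer” ray from a fixed hand position onto a virtual panel. A minimal, hypothetical sketch (function name and panel distance are illustrative, not from any real SDK):

```python
import math

def pointer_hit(yaw, pitch, panel_dist=2.0):
    """Where a 3-DoF 'laser pointer' controller hits a UI panel.

    Orientation-only controllers (Daydream / Gear VR class) report
    just yaw and pitch; intersecting a ray with a panel at a known
    distance turns that into a 2D cursor position.
    """
    # Ray direction from yaw (left/right) and pitch (up/down), in radians.
    x = math.sin(yaw) * math.cos(pitch)
    y = math.sin(pitch)
    z = math.cos(yaw) * math.cos(pitch)
    if z <= 0:  # pointing away from the panel
        return None
    t = panel_dist / z          # scale the ray to reach the panel plane
    return (x * t, y * t)       # 2D cursor position on the panel
```

Pointing straight ahead lands the cursor at the panel’s center; tilting the controller sweeps the cursor across the panel, which is exactly the “easy but inexpressive” trade-off described above.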
So, how do you work around the difficulties of function vs. expression, awkward form factors and the desire to “feel” the digital world? Below, we’ll review the main categories of VR/AR interaction currently in development:
Overview: New input types for VR / AR
Now let’s take a look at what other types of input are in development. Note that this is not a list of gadgets you can buy right now – most of it is in an experimental stage!
Eye tracking

Tobii from Sweden is one of the leaders in this field and provides the eye-tracking technology in HTC’s Vive Pro Eye. Apple, Microsoft, and Facebook have all made strategic acquisitions of eye-tracking players in recent years. For both VR and AR, eye tracking will undoubtedly be a part of future products.
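A classic building block for eye tracking as an input method is dwell-based selection: you “click” a target simply by looking at it long enough. A minimal, hypothetical sketch (the radius, dwell time and frame rate are illustrative values, not from any vendor’s SDK):

```python
def gaze_dwell_select(samples, target, radius=0.05, dwell_s=0.8, hz=120):
    """Dwell-based selection: 'click' a target by looking at it.

    samples: gaze points (x, y) in normalized screen coordinates,
    one per frame at `hz` frames per second. Returns True once the
    gaze has stayed within `radius` of the target for `dwell_s`
    seconds without interruption.
    """
    needed = int(dwell_s * hz)   # consecutive in-target frames required
    streak = 0
    for x, y in samples:
        if (x - target[0]) ** 2 + (y - target[1]) ** 2 <= radius ** 2:
            streak += 1
            if streak >= needed:
                return True
        else:
            streak = 0           # glanced away: restart the dwell timer
    return False
```

The tuning here is the whole UX problem in miniature: too short a dwell time and every glance becomes an accidental click (the “Midas touch” problem), too long and the interface feels sluggish.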
Voice control

All VR headsets and most AR glasses have microphones, and the AI behind speech recognition is advancing at a rapid pace. At the same time, the “conversational UI” / dialogue-based interface paradigm is being accelerated by the growing use of chatbots in customer service, smart speakers and other applications. Don’t we all want an effective AI assistant to control the tech around us, like Tony Stark’s Friday and Jarvis, or Samantha in Her?
(Optical) body tracking
In VR, you want your whole body represented with all its movements and expressions, rather than just the head and hands. If we had that we could dance, goof around or practice sports in ways we can’t today.
One approach is to put sensors on the body and track them from the outside – exactly the same principle as the motion-capture suits used for movie special effects. Hardcore VR enthusiasts actually use this today in a niche fashion, but it is just too much of a hassle and no way forward for the “everyday use case” of the future.
Microsoft has been developing and using Kinect, a type of 3D camera, for effective body tracking in Xbox games and more. The new, even more capable Azure Kinect has just been released. Facebook has also revealed some of their secret research on body tracking that can be done with regular smartphone cameras.
But what will this look like in our everyday lives? Would we really be ok with always being filmed/scanned by surveillance cameras wherever we go? Or will we instead have private camera drones with us, in order to be able to transport our hologram avatars to other locations in real-time?
No, the more likely scenario is that we will have some form of stationary camera in our home (maybe the one already in a video-calling device like Portal). We can use that camera when we need to capture an avatar scan of ourselves, or when we want to step into VR with an extra level of physical presence.
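One practical detail behind all optical body tracking: raw per-frame skeleton estimates are jittery, so a smoothing filter usually sits between the camera and the avatar. A minimal, hypothetical sketch using a per-joint exponential moving average (the function name and `alpha` value are illustrative):

```python
def smooth_joints(frames, alpha=0.3):
    """Exponentially smooth noisy tracked joint positions.

    Optical body tracking (Kinect-style) produces jittery per-frame
    skeleton estimates; a simple exponential moving average per joint
    is a common first pass to stabilize an avatar. `alpha` trades
    responsiveness against smoothness.
    """
    smoothed = []
    state = None
    for joints in frames:  # joints: list of (x, y, z) tuples per frame
        if state is None:
            state = list(joints)  # first frame: take the raw estimate
        else:
            state = [
                tuple(alpha * n + (1 - alpha) * p for n, p in zip(new, prev))
                for new, prev in zip(joints, state)
            ]
        smoothed.append(state)
    return smoothed
```

Real trackers use more sophisticated filters (and skeletal constraints), but the trade-off is the same: more smoothing means less jitter but more lag, which matters a lot if you want to dance or practice sports in VR.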
VR gloves

We already touched on the problems with VR gloves in the section above. Although gloves can give us a more powerful experience and the physical sense of interacting with things, the form factor unfortunately isn’t where it needs to be with today’s technology.
Haptx, for example, has developed this heavy-duty glove:
… And here is Dexmo from China. (Check out this excellent in-depth review of Dexmo from Tony on the Skarredghost blog!) Both of these are geared towards enterprise use and can provide increased immersion and improved efficiency when executing various work tasks in VR.
Facebook / Oculus is researching a lighter version of gloves. Maybe something like this will become a new standard for VR?
Hand gestures / optical
When it comes to a low threshold for interaction in VR and AR, optical hand tracking is a promising direction. Leap Motion has long been used with VR, and in their AR concept “North Star” they have demoed interface prototypes that look like a winner.
Microsoft is also leaning heavily into hand tracking with Hololens 2, as demoed below. (Their competitor Magic Leap also has hand-tracking capabilities.)
As mentioned, Oculus Quest will also receive a version of hand tracking which will be released next year.
According to early tests, the precision of Oculus/Facebook’s solution is not quite at Leap Motion’s level yet, but you can expect them to put a lot of effort into improving it, both before and after launch. For simple games and enterprise use with users who aren’t used to gaming controllers, hand tracking means a massive reduction in friction.
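The core interaction primitive in all of these hand-tracking systems is the pinch. Once the runtime exposes per-joint 3D positions, detecting it can be as simple as a distance check between two fingertips. A minimal, hypothetical sketch (the function name and the ~2 cm threshold are illustrative, not taken from any vendor’s API):

```python
import math

def detect_pinch(thumb_tip, index_tip, threshold_m=0.02):
    """Detect a pinch from optically tracked fingertip positions.

    Hand-tracking runtimes (Quest, HoloLens 2, Leap Motion class)
    expose per-joint 3D positions; a pinch is then roughly 'thumb
    tip and index tip closer than ~2 cm'.
    """
    dist = math.dist(thumb_tip, index_tip)  # Euclidean distance, metres
    return dist < threshold_m
```

Production systems add hysteresis and filtering on top (so the pinch doesn’t flicker on and off at the threshold), but this is the basic idea that replaces the trigger button on a motion controller.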
Related: The Swedish startup Capwings is developing a Leap Motion-like gadget that attaches to the back of the hand and can track the movements of the fingers.
Also worth mentioning is “Project Soli” from Google, introduced as a concept back in 2014 and released to consumers this year in their newest phone, the Pixel 4. “Motion Sense”, as the feature is now called, is based on radar and lets us control the phone (and soon more devices?) with small gestures and waves.
Muscular / Spatial (Myo)
Around 2014, just as wearable tech was coming down from its mega-hype, the Canadian startup Thalmic Labs released a bracelet – Myo – that could sense muscle movements and interpret gestures as computer input. The company has since changed its name to North and shifted its focus to smart glasses, releasing the stylish AR product Focals in 2018. They sold the original technology behind Myo to another company…
Neural interfaces / BCI (Brain-Computer Interfaces)
Controlling the computer with the brain – or, conversely, feeding input directly to the brain – is the ultimate VR dream that became part of popular culture with The Matrix (or, in an anime context, Sword Art Online and plenty of other shows and movies). With his company Neuralink, Elon Musk has presented a vision of “human-integrated AI” through an implant behind the ear that can capture and transmit signals from/to the brain.
It’s easy to imagine that such a device could provide the ultimate “telekinetic” control in VR or AR (and yes, in the long run even render physical headsets/glasses obsolete, as digital worlds could be transmitted directly to the brain’s visual center – but that’s another topic!). The big question mark over technology like Neuralink is: how many people will be willing to undergo brain surgery to become “augmented”?
This is where CTRL Labs comes into the picture. With their prototype “CTRL Kit”, they claim to be able to decode “motor intent” – intentions from the brain – by reading nerve signals at a level of detail down to individual neurons. And the CTRL Kit is not a device you need to surgically implant in the brain – it is “simply” a bracelet with inward-facing sensors.
Do you remember Myo, the gadget we mentioned above? Its muscle-sensing technology was acquired by CTRL Labs earlier in 2019 to strengthen the technology they were already researching. By reading individual nerve signals in the muscles, the user doesn’t even have to move their arm or fingers to get the desired movement in VR/AR.
This means that you can pull or push away objects – like a “force push” in Star Wars – write text by tapping on the table or in the air on a keyboard that doesn’t exist… or even enter text by thinking. They have also demonstrated controlling a virtual hand with six fingers using the bracelet – or why not a six-legged spider robot…
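To give a feel for what “interpreting muscle signals” means computationally, here is a deliberately toy sketch. Real systems like CTRL Labs’ train machine-learning models on many channels of nerve/muscle data; this hypothetical example just maps the mean rectified amplitude of two made-up channels to three coarse intents (every name and threshold here is invented for illustration):

```python
def classify_emg(window):
    """Toy EMG gesture classifier (illustrative only).

    `window` holds short lists of signal samples for two hypothetical
    sensor channels. Rectify (take absolute values), average, and map
    the dominant channel to a coarse 'intent'. Real decoders are
    learned models over dozens of channels, not hand-written rules.
    """
    flexor = sum(abs(s) for s in window["flexor"]) / len(window["flexor"])
    extensor = sum(abs(s) for s in window["extensor"]) / len(window["extensor"])
    if flexor > 0.5 and flexor > extensor:
        return "grab"
    if extensor > 0.5 and extensor > flexor:
        return "release"
    return "rest"
```

The striking part of the CTRL Labs claim is that the decoded intent can precede (or entirely replace) visible movement, because the nerve signal fires whether or not the fingers actually move.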
CTRL Labs was acquired by Facebook/Oculus this summer for an unconfirmed price tag of around a billion dollars. Facebook has previously described how they view the possibilities of (non-invasive, unlike Neuralink’s) neural interfaces, which lines up completely with what CTRL Labs enables – just look at this visionary presentation from two years earlier, at F8 2017:
An intriguing quote from Regina Dugan above is “semantic understanding means that one day you will have the ability to share your thoughts regardless of language … because words are just compressed thoughts.”
The future of neural interfaces is breathtaking, and it is intimately linked to the future of VR/AR.
Haptic feedback options
Even if we get these “magic” bracelets, aren’t we still missing the haptic feedback part? It turns out that Facebook is working on that too. The Tasbi project presented this year is also a wristband, but with motors that can vibrate or squeeze the forearm to simulate tactile stimuli all the way out to the fingertips. So we can hold and feel digital objects, touch surfaces, push buttons, etc…
If we get this haptic effect in the same armband as the “CTRL Kit”, then we’ll have both aspects that we want – and possibly in a product that, with the right form factor, at least I can imagine wearing and using daily.
A related sidetrack is haptic feedback for the whole body, where the sleek, wearable Teslasuit is available to companies already today. Again, associations with Ready Player One… And maybe this synthetic-skin research project will lead to a similar product in the future.
Yet another related technology is Ultrahaptics, which has developed an ultrasonic array that projects focused, inaudible sound waves you can feel at short range. We often talk about the connection between VR/AR and the internet of things, and this technology means that the electronic gadgets and objects around us could be interacted with without physical touch. By the way, Ultrahaptics and Leap Motion merged into the same company this year, now called Ultraleap.
Superhero in AR, “God” in VR
What is the ultimate goal of all this? What does “perfect” interaction in VR or AR mean? Nothing less than being able to live in completely magical digital worlds where we use our will to control the digital in any way we want. In AR, we will communicate with computers and other people in an infinitely simpler and more elegant way – when our AR glasses let us see through walls because just about everything is already scanned and represented in a world-wide AR cloud, we will have superpowers in our everyday lives. And in VR, we will shape, bend and transform worlds according to our wishes.
And yes, on the road ahead until then we will be able to do all the digital tasks we are dealing with today… just much, much better.
What do you think about the future of VR and AR interaction? What innovations will be needed for you to consider replacing your smartphone and computer with the immersive technology of the future?