2016 is turning out to be an amazing year for augmented reality
One way to tell that a new market is coming of age here in Silicon Valley is when special-purpose venture funds are formed to focus on it. Augmented reality has just achieved that milestone, with the launch of Super Ventures. The fund itself is small, but the launch event served as a great touch point for members of the still-close-knit AR community to come together and provide some insight on the future of AR, both for consumers and developers. It also serves as a sneak peek at the state of the industry ahead of the much larger annual flagship event, the Augmented World Expo, this coming June.
What makes AR special?
Augmented reality is characterized by combining views of the “real world” with computer-generated content. I put real world in quotes because some AR solutions let the user see their actual surroundings directly and project generated content onto that view, while others use a live camera feed of the surroundings with an overlay of generated objects. The former have the advantage of being much more natural to use, and they give the user a better sense of context. The latter are often simpler to implement, demand less processing power, and can run on standard-format mobile devices, without requiring special glasses.
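At its core, the camera-feed approach is just compositing: each generated annotation is alpha-blended over the corresponding camera pixels, and everything else passes through untouched. The sketch below illustrates the idea in plain Python with frames modeled as 2D grayscale arrays; all names and values are illustrative, not taken from any particular AR SDK.

```python
# Minimal sketch of "video see-through" AR compositing: a generated
# overlay is alpha-blended onto a camera frame. Frames are modeled as
# 2D lists of grayscale values (0-255); all names here are illustrative.

def blend_pixel(camera, overlay, alpha):
    """Standard alpha blend: overlay value drawn on top of the camera pixel."""
    return round(alpha * overlay + (1 - alpha) * camera)

def composite(frame, overlay, alpha=0.6):
    """Blend an overlay frame onto a same-sized camera frame.

    An overlay value of None means "transparent": the camera pixel
    shows through untouched, as it would outside any annotation.
    """
    out = []
    for cam_row, ovl_row in zip(frame, overlay):
        out.append([
            cam if ovl is None else blend_pixel(cam, ovl, alpha)
            for cam, ovl in zip(cam_row, ovl_row)
        ])
    return out

# A tiny 2x3 "camera frame" and an annotation covering one pixel.
frame = [[100, 100, 100],
         [100, 100, 100]]
overlay = [[None, 255, None],
           [None, None, None]]

print(composite(frame, overlay))  # only the annotated pixel changes
```

A real pipeline does exactly this per color channel on every frame (usually on the GPU), after first solving the much harder problem of *where* in the frame the annotation belongs.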
Running untethered is even more important for AR than for VR. While some AR applications, like CastAR’s tabletop gaming, are confined to a small area, most involve allowing the user to move around in their environment and get an annotated (or augmented, if you will) view of reality. Microsoft’s HoloLens (now available to developers) and Magic Leap’s device (now being demoed privately) have attracted the most press among standalone solutions, but startups like ODG have been shipping untethered AR “smart glasses” for a while now.
AR doesn’t always require a geeky, new wearable
Unlike with VR, many AR applications run on either standard, or slightly-enhanced, mobile devices. For example, ScopeAR was demoing “over the shoulder” industrial coaching applications, where experts can be called in to help a worker in the field by illustrating the steps to take in performing a maintenance or repair operation — all as an overlay on the actual scene being captured by the mobile device’s own video camera. Pre-recorded tutorials can also be used “offline” to walk a user through a process — as long as the system has an accurate baseline image of the mechanism being worked on, initialized by the placement of a small marker device.
Another AR app that can run on a standard mobile device, WayGo, comes from one of the first companies to be backed by the new AR-focused Super Ventures fund. It allows the automatic identification (and in-place translation) of text in the surrounding environment, as it is captured in real time by a device’s camera. Similar in concept to Google Translate, WayGo says it has better support for automatic text recognition in non-Western languages, and it is focusing much of its marketing effort on commercial applications.
Processing power remains an issue: The cloud provides one answer
One of the biggest problems with mobile-device-based AR (and VR) is processing power. Demoing the creation of a 3D model from simply sweeping across a room with a device is one thing, but having it work in real life is another. We found this when we tried to replicate those amazing Google Project Tango demos in a typical apartment. The results were nowhere near as polished. Startup GeoCV is addressing this issue by adding a cloud dimension to the mix. Your depth-measurement-equipped mobile device (like a Tango or RealSense) is used to gather the initial data, but the processing is done in the cloud — allowing for more complete and more accurate models. Startup Gridraster takes a different approach, and hopes to allow mobile devices to harness the GPU of nearby computers to augment their native capabilities.
Field of view needs solving for AR to go mainstream
Narrow field of view (FOV) has been one of the factors that most limits what can be accomplished with existing AR devices. Part of what made Microsoft’s first HoloLens demo so compelling was its relatively wide field of view (rumored to have been provided by waveguide vendor Lumus). Subsequent versions, though, have fallen back to a much narrower image (Microsoft has compared it to looking at a 15-inch monitor from 2 feet away).
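Microsoft’s monitor comparison can be turned into an approximate angle with basic trigonometry: the FOV subtended by a flat extent of size s at distance d is 2·atan(s/2d). A back-of-the-envelope sketch, assuming a 16:9 panel for the 15-inch diagonal:

```python
import math

def fov_degrees(size_inches, distance_inches):
    """Angle subtended by a flat extent seen face-on from a given distance."""
    return math.degrees(2 * math.atan(size_inches / (2 * distance_inches)))

# Microsoft's comparison: a 15-inch (diagonal) monitor viewed from 2 feet.
diagonal_fov = fov_degrees(15, 24)

# Assuming a 16:9 panel, the horizontal extent of a 15" diagonal:
width = 15 * 16 / math.hypot(16, 9)
horizontal_fov = fov_degrees(width, 24)

print(f"diagonal: {diagonal_fov:.0f} deg, horizontal: {horizontal_fov:.0f} deg")
```

That works out to roughly 35 degrees diagonal (about 30 degrees horizontal), which makes clear why a claimed 90-degree FOV is such a point of differentiation.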
Short-term, vendors like Meta are addressing the FOV problem by using a wider reflective surface to form the image. Meta is demoing its Meta 2, claiming a 90-degree FOV for the glasses, which are now available for pre-order. Unfortunately, the Meta 2 glasses are tethered. That makes them a prime candidate for a next-generation computer UI (a role Meta promotes heavily; the Meta 1 essentially worked as a second monitor for your PC) or for lab and studio applications, but not a great option for outdoor or mobile use. For developers on a limited budget, the Meta 2 developer kit is $950, compared with $2,750 for ODG’s R-7 and $3,000 for Microsoft’s HoloLens. Game and VR developers will also like that the Meta 2 supports Unity for development.
The long-term future for AR displays looks even more promising. In addition to better waveguide modules, a new class of displays that project laser light directly onto the retina (it isn’t as scary as it sounds) is on the way. The Glyph by Avegant is one of the better-known laser-projection devices, but not the only one. Massively funded (and hyped) startup Magic Leap is using a type of laser projection technology (also referred to as a Virtual Retinal Display, or VRD) for its upcoming AR system. Judging by demo videos it has released, the VRD in Magic Leap’s system also does eye tracking to allow selective refocusing of the image based on where the user is looking. I was able to demo a research project at Stanford that allowed refocusing based on eye tracking, and it is a very powerful concept. VRDs can also completely saturate the rods and cones in your eye, allowing for completely opaque objects (and a nearly-VR experience if their field of view is wide enough) in addition to supplying an augmented overlay of the real world. That enables what have sometimes been called “mixed reality” applications, ones that combine the best of VR and AR.
VRDs also have two other potential advantages over more traditional display-based interfaces. Because the image is formed on your retina, and not on a fixed display in front of you, they solve the accommodation-vergence mismatch (simply put, the problem where your eyes converge as if looking at something far away while they focus on a tiny display an inch in front of your face), which is a big factor in VR-induced motion sickness. Unlike waveguides, VRDs also don’t need to grow in size as the field of view increases. It is likely that they will become the display technology of choice for at least high-end AR, and even some VR, applications. VR goggle makers are developing their own answers to this problem, including the Light Field Stereoscope developed by Stanford’s Gordon Wetzstein in cooperation with Nvidia.
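The mismatch is easy to quantify. Vergence follows the rendered depth of an object, while accommodation stays locked to the headset's fixed focal plane; the conflict is conventionally measured in diopters (1/meters). A small sketch, using illustrative numbers (a 63 mm interpupillary distance and a 1.5 m focal plane, neither taken from any specific headset):

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Angle between the two eyes' lines of sight toward a point."""
    return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))

def mismatch_diopters(rendered_m, focal_plane_m):
    """Accommodation-vergence conflict in diopters (1/m).

    Vergence tracks the rendered depth, while accommodation is locked
    to the headset's fixed focal plane; the difference is the conflict.
    """
    return abs(1 / rendered_m - 1 / focal_plane_m)

# An object rendered 0.3 m away on a display focused at 1.5 m
# (both values illustrative) produces a sizable conflict:
print(f"{vergence_angle_deg(0.3):.1f} deg vergence, "
      f"{mismatch_diopters(0.3, 1.5):.2f} D mismatch")
```

Conflicts of a diopter or more, as in the close-object case above, are commonly associated with discomfort; an object rendered at the focal plane itself produces zero mismatch, which is why fixed-focus headsets feel best when content sits near that distance.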
Given time, AR is likely to trump VR
Not surprisingly, most AR advocates (including me) make the case that eventually AR will greatly outpace VR as a technology and a market. After all, there is a limit to how much of the day we are likely to want to spend “head down” in a pair of goggles using VR, while subtle AR devices will eventually be no more troublesome than a pair of designer sunglasses — and will be able to provide useful information throughout the day.
Analyst firm Digi-Capital predicts an AR market of $90 billion annually by 2020, compared with $30 billion for VR. That is the sort of promise that led Super Ventures founding partner Ori Inbar to tell me that, after years of organizing AR-specific conferences, he was “ready to put his money where his mouth is.” More surprisingly, given the relative amounts of buzz, AR is already a much larger market than VR, with hundreds of commercial, industrial, and military applications provided by literally dozens of hardware and software vendors. Before it goes mainstream, though, at least another generation or two of advances in hardware will be needed.
Reprinted from ExtremeTech Website, Dec 2016. Copyright 2016. All rights reserved.