11 Incredible VR challenges for developers
Post by Ed Schmit, Executive Director at AT&T Mobility
Virtual Reality (VR) is quickly moving from a futuristic technology to a mainstream product. The release of the Samsung Gear VR, Oculus Rift, and HTC Vive is accelerating this move, but challenges remain. This is not unusual for video and content. History has shown that hardware and connectivity have often lagged behind what software developers have needed to provide the desired user experience. Often, simply creating better graphics cards has solved these issues. The VR and Augmented Reality (AR) experience will likely take decades to be fully realized, but that will not stop heavy usage along the way.
11 Amazing VR/AR Opportunities
With VR/AR, it is not just about better graphics power. There are many challenges that present opportunities for the VR/AR ecosystem, and developers may want to consider them. Often those who are most successful with an emerging technology are those who get in before it is mainstream.
- Rendering: I first began working with video as a technical evangelist at Intel in 1998. In that role, we talked to many animation shops and asked about their biggest challenges. One consistent response was rendering. Rendering requires tons of computing cycles, and to cut costs creators are often asked to simplify frames by reducing ray traces, soft shadows, or blurry reflections.
Flash forward to 2016, and rendering is still a giant challenge. Some frames from the movie Interstellar took over 100 hours to render, and the movie required nearly a petabyte of storage. VR is only going to make this more challenging, particularly as video game players and film viewers expect similar levels of resolution (based on the AT&T Data Calculator and a comparison of 1080p resolution versus Gear 360 resolution).
One trick a developer can employ is to make only the center of the image sharp. Since the eye perceives the periphery less sharply anyway, the edges can be rendered with less detail without much harm. One method uses ray tracing: cast more rays where the eyes will focus and fewer rays toward the edges, which then render faster. Alternatively, ray tracing could be used only in the focus area, with faster rasterization used on the edges.
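This ray-budget idea can be sketched in a few lines. The sketch below is an illustration, not any engine's actual API: the gaze point, sample counts, and the linear falloff are all assumptions made for the example.

```python
import math

def samples_per_pixel(px, py, gaze_x, gaze_y,
                      max_samples=16, min_samples=1, falloff_radius=200.0):
    """Allocate more ray-tracing samples near the gaze point, fewer at the periphery."""
    dist = math.hypot(px - gaze_x, py - gaze_y)
    # Linear falloff: full budget at the gaze point, minimum beyond the radius.
    t = min(dist / falloff_radius, 1.0)
    return max(min_samples, round(max_samples * (1.0 - t)))

# Pixels at the gaze point get the full budget; distant pixels get the minimum.
print(samples_per_pixel(100, 100, 100, 100))  # 16
print(samples_per_pixel(500, 500, 100, 100))  # 1
```

A real renderer would use a perceptually tuned falloff curve rather than a linear one, but the principle is the same: spend rays where the eye is looking.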
In a 360-degree image, how do you find the center of attention? Eye-tracking systems can help; SensoMotoric Instruments, FOVE, and Bournemouth University have emerging solutions. Even with these solutions, tracking is a challenge because the eye can move more rapidly than the system can follow.
Another trick is culling, where you selectively render parts of a scene (for example, only objects of a certain size). Even better, you can ignore objects hidden behind other objects, which can save a lot of rendering time when a scene contains many objects. Umbra has software that can help with this (with plug-ins for Unity and Unreal Engine).
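A minimal sketch of both ideas, size-based culling and skipping occluded objects, might look like the following. The scene representation and the precomputed `occluded` flag are assumptions for illustration; real systems such as Umbra compute visibility themselves.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    screen_area: float   # projected size on screen, in pixels
    occluded: bool       # result of an occlusion test (assumed precomputed here)

def cull(objects, min_area=4.0):
    """Keep only objects worth rendering: large enough on screen and not hidden."""
    return [o for o in objects if o.screen_area >= min_area and not o.occluded]

scene = [
    SceneObject("building", 5000.0, False),
    SceneObject("pebble", 1.5, False),   # too small on screen to matter
    SceneObject("chair", 300.0, True),   # hidden behind the building
]
print([o.name for o in cull(scene)])  # ['building']
```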
- Nausea: With VR, there are impacts beyond just a poor experience—users can experience nausea. There is even a name for this: virtual reality sickness. It is never great news for an emerging technology when sickness is so common that it has a term.
Symptoms are similar to motion sickness and stem from disconnects between the visual and vestibular systems. That is, the signals reaching the brain from the eyes and from the inner ear disagree. A combination of better physical tracking along with the expected improvements in optics and graphics should solve this issue: the sensory inputs need to be in agreement.
- Video quality and graphics: Graphics are remarkably good considering the technology is in its nascent stages. Since the VR experience is so immersive, improvements still need to be made. The best way to do that is almost always via hardware, but that brings its own challenges.
Hardware graphics seem to be a never-ending challenge. When PC graphics were first starting out, the AGP Pro standard defined fan and minimum spacing guidelines for cards to help manage the heat from graphics chipsets. The latest graphics chipsets and cards are expensive; it is not unusual for the newest cards (at any given time) to go for around $1,000. Fortunately, graphics improvements follow Moore's law, so the technology improves rapidly.
Normally, poor graphics do not stall usage. In the case of VR, though, poor graphics can be uncomfortable and can even cause nausea. What are the features that separate top graphics cards and chipsets from the others (and from non-graphics-focused chips, for that matter)?
- Pixels: The foundation of an image—single points that combine with millions of others in columns and rows. As the number of pixels increases, the image improves. The number of pixels is known as the resolution.
- Texture mapping units: Realistic images depend on textures. These units apply textures to pixels, so more units mean faster texturing.
- Shader units (vertex and pixel): These are dedicated components; vertex shaders support more complex 3D geometry, while pixel shaders compute richer per-pixel color values for better graphics. Adding pixel shaders tends to be more effective at improving performance—and easier—than increasing clock speeds.
- Raster Operation Processors (ROPs): These write pixel data to memory, which again affects the image. The speed at which they do so is called the fill rate.
- Memory bandwidth: As the memory bus widens, so does the amount of graphics data that can be carried per cycle, which is key to performance.
- Clock speed: This is kind of obvious, but the faster the speed the more work that can be processed.
- Memory capacity: The RAM on the card can help process high-resolution texture sets faster. This is often less important than bandwidth and clock speed.
How do these specifications tie to the user experience? According to Michael Abrash, the Chief Scientist at Oculus, users will eventually want a display with at least 8K resolution per eye. No system can do this right now; there is much that needs to be done across the board.
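A quick back-of-envelope calculation shows why that target is so demanding. The refresh rate and color depth below are illustrative assumptions for the example, not Oculus specifications:

```python
def raw_video_gbps(width, height, eyes=2, fps=90, bits_per_pixel=24):
    """Uncompressed bandwidth needed to feed the display, in gigabits per second."""
    return width * height * eyes * fps * bits_per_pixel / 1e9

# 8K (7680x4320) per eye at an assumed 90 Hz with 24-bit color:
# roughly 143 Gbps of raw pixel data, far beyond today's links without compression.
print(f"{raw_video_gbps(7680, 4320):.1f} Gbps")
```

Numbers like this explain why both the graphics pipeline and the compression work discussed below matter so much for VR.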
- Optics: Related to graphics are the optics—the lenses and displays that project the graphics. One concern with the initial VR devices is that the user appears to be looking through glasses or binoculars rather than having a natural viewing experience. Like graphics, top-end optics can be very expensive, and the field does not seem to lend itself to rapid improvements. Still, it should improve at a regular clip.
- Buffering/load times: Buffering happens when video is not streamed to the device quickly enough. Compression algorithms have improved for years, but they are not enough for VR's bandwidth demands.
Facebook has made available an encoder that remaps scenes from an equirectangular projection into a cube map that fits Facebook's specs. Additionally, Facebook is looking at pyramid encoding, which might reduce file sizes by 80 percent. This may seem extreme, but Google recently announced a compiler that will improve app install times by 75 percent.
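The core of such a remap is spherical trigonometry: each point on a cube face corresponds to a viewing direction, which maps to a longitude/latitude pair in the equirectangular panorama. Below is a sketch of that coordinate mapping; the face names and orientation conventions are assumptions for illustration and will not match Facebook's encoder exactly.

```python
import math

def cube_face_to_equirect(face, u, v, eq_width, eq_height):
    """Map a point (u, v in [-1, 1]) on a cube face to equirectangular pixel
    coordinates. Face orientations here are an illustrative convention."""
    dirs = {
        "+x": (1.0, -v, -u), "-x": (-1.0, -v, u),
        "+y": (u, 1.0, v),   "-y": (u, -1.0, -v),
        "+z": (u, -v, 1.0),  "-z": (-u, -v, -1.0),
    }
    x, y, z = dirs[face]
    lon = math.atan2(x, z)                 # longitude in [-pi, pi]
    lat = math.atan2(y, math.hypot(x, z))  # latitude in [-pi/2, pi/2]
    px = (lon / math.pi + 1.0) / 2.0 * (eq_width - 1)
    py = (0.5 - lat / math.pi) * (eq_height - 1)
    return px, py

# The center of the +z face looks straight ahead: the middle of the panorama.
print(cube_face_to_equirect("+z", 0.0, 0.0, 4096, 2048))
```

An encoder runs this mapping in reverse for every output pixel and resamples the source image, which is where the bandwidth savings come from: cube faces waste far fewer pixels near the poles than an equirectangular layout does.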
- Haptics: Users are becoming increasingly used to tactile feedback and like it. It should play an even greater role with VR—a truly immersive experience seems to require this feature. The user expects to be able to reach out, feel the objects they see, and interact with them. This is clearly true for many games, but it becomes even more important given the role VR will play in training and other practical applications (for example, technicians learning to repair equipment or dental students feeling what it is like to drill a cavity).
At one level, this could be vibration feedback or force feedback. In the future, this could be incorporated into clothing (perhaps a shirt would provide pressure to simulate wind blowing). With the rise of wearables, the possibilities are almost endless. Typical of the way technology is moving, this presents a general challenge for developers: UI is no longer just the visual component; increasingly it includes other senses that make the experience feel real.
- Movement in physical space: For many Star Trek fans, the ideal state for VR will be when the user can walk around completely freely, like Captain Picard in the holodeck. For now, wires are clearly an issue.
A bigger challenge, however, is tracking. In the near term, there will be locomotion simulators like the Virtuix Omni that will let users move around (and help parents make sure their kids are getting a little more exercise). As the hardware becomes unwired, improved tracking will be key.
As you think about people walking around, consideration will need to be given to proximity sensors that warn users, inside the application, when they are about to bump into something or fall down stairs (and perhaps of the physical presence of other users as VR moves to multi-player use).
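At its core, such a warning reduces to distance tests against a map of known hazards. The sketch below assumes 2D floor coordinates in meters and a hypothetical obstacle map; real systems (such as the Vive's room-bounds feature) work from configured play-area geometry instead.

```python
import math

def proximity_warning(user_pos, obstacles, warn_distance=0.5):
    """Return names of real-world obstacles the user is about to reach.
    user_pos is (x, y) in meters; obstacles maps name -> (x, y)."""
    warnings = []
    for name, (ox, oy) in obstacles.items():
        if math.hypot(user_pos[0] - ox, user_pos[1] - oy) < warn_distance:
            warnings.append(name)
    return warnings

obstacles = {"wall": (2.0, 0.0), "stairs": (0.0, 3.0)}
print(proximity_warning((1.7, 0.1), obstacles))  # ['wall']
```

An application would call a check like this every frame and fade in a visual or haptic warning as the distance shrinks, rather than triggering it abruptly.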
- Making optimal use of new functionality: You would expect some challenges with any emerging technology. The bigger challenge, though, is how to use VR/AR most effectively and to the fullest extent.
Look at Oculus Founder Palmer Luckey, who started his VR work when he was just 17. One theory for why he was so successful is that he was not limited by development constraints—he was able to approach the technology with totally open eyes. To some extent, I hope that as developers move from mobile to VR/AR, they can approach this with the fresh perspective of a 17-year-old.
One challenge is providing information to the user. With mobile games, developers have figured out best practices for overlays. With VR, there is a chance to provide better or more relevant information, or to provide it in innovative ways. However, care needs to be taken so it is not too obtrusive or distracting. These UX best practices will take time to develop.
- Comfortable headwear: There have been major improvements in comfort in the latest generation of devices from Oculus and HTC. However, given the variety of shapes and sizes of heads, it is not surprising that the gear does not fit everyone perfectly. Headsets will inevitably get lighter and be designed to fit more securely, but until then this will limit how long users will want to wear them, no matter how compelling the content.
- Realistic user interactions: For a truly immersive experience, users will want to engage with other users in virtual space. For these interactions to feel real, a user's avatar should show the same looks and expressions as the user's actual face and move its lips realistically—perhaps recreating the user's face outright. As VR evolves, devices will probably gain cameras and sensors that can support this, plus software that represents the face more realistically.
- Wide user acceptance: If you follow the excitement for VR at gaming conferences or in hardware pre-sales, you would think that users are widely anticipating VR. That is true for cutting-edge users, but it can take a while for technologies to take off. Polygon has a good article explaining that VR is impossible to convey secondhand—users need to try it.
People are familiar with the move to mobile phones and then to smartphones. Later-stage users jumped on board when they saw how useful the phones were over time—the advantages were easy to observe and understand. With VR, you have to put on a headset, which many people may not want to do, and they may not believe how much more immersive the experience can be until they try it.
The rapid growth in eSports will probably help this area reach a larger set of users. Large hardware companies will likely put a lot of effort into demonstrations. The trick for software developers will be to make sure their applications are widely demonstrated—particularly those that use VR in distinctive ways. Although the expectation is that software pricing will be more along the lines of console titles than mobile games, gaining initial acceptance may require greater use of the marketing methods used with mobile games.
Explore VR/AR at Shape
Join us at Shape, being held at AT&T Park on July 15th and 16th, and discover innovative ways tech is shaping our future. We have a line-up of speakers who are visionaries in their fields, including Ray Kurzweil, Jason Silva, Nate Silver, John Stankey, and Ralph de la Vega, along with panels where participants discuss everything from smart cities to the impact of tech on society. Stroll through interactive exhibits to get a glimpse of how technology is transforming the way we live. Take part in the Shape Hackathon and compete for prizes. Marvel as your kids learn to solder and create things they have only imagined at Maker Camp Live. Round out your experience by watching the Shape Challenge semi-finalists present their projects live on stage. Enjoy a performance from iHeartRadio LIVE with Silversun Pickups at AT&T Park. Ready to immerse yourself at a tech expo unlike any other? Sign up for Shape today.
For more articles on AR, VR, and all things video, see our new AT&T Video and VR site.