Stereoscopic color passthrough, spatial anchoring, and scene understanding will deliver more realistic mixed reality experiences.
According to Sarthak Ray, Product Manager at Meta, the ability to see and interact with the world around you through mixed reality opens up new opportunities for virtual reality. “You can play with friends in the same physical room or have productivity experiences that combine huge virtual monitors with physical tools, and that’s a step towards our longer-term vision of augmented reality.”
A mixed reality experience is only effective if the VR/AR device can convincingly blend the physical and virtual worlds. That means the headset has to do more than just provide a 2D video feed, according to a recent blog post on Oculus.com.
Continuing its commitment to developing cutting-edge VR technologies, Meta announced the launch of Meta Reality, a new mixed reality system that offers insight into what goes into creating an exceptional experience.
Stereoscopic color passthrough for better spatial understanding
Meta recognizes the importance of color passthrough and stereoscopic 3D technology when it comes to delivering comfortable and immersive mixed reality experiences.
“Meta Quest Pro combines two camera views to reconstruct realistic depth, which ensures that mixed reality experiences built using Color Passthrough are comfortable for people,” explained Ricardo Silveira Cabral, Head of Meta Computer Vision engineering. “But also, stereo texture signals allow the user’s brain to do the rest and infer depth even when the depth reconstruction isn’t perfect or beyond the system’s reach.”
For example, when you use Passthrough, your brain learns how far away the cup of coffee is from your hand, or from the pen sitting right next to it.
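The depth reconstruction Silveira Cabral describes rests on classic stereo triangulation: the same point seen from two horizontally offset cameras shifts by a disparity that is inversely proportional to its depth. A minimal textbook sketch (illustrative only, not Meta's actual reconstruction pipeline):

```python
# Classic stereo triangulation: depth from the horizontal disparity of a
# point between two rectified camera views. Textbook sketch, not Meta's
# actual pipeline.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters of a point seen by two horizontally offset cameras.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two camera centers in meters
    disparity_px -- horizontal pixel shift of the point between the views
    """
    if disparity_px <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity_px

# A feature shifted 20 px between views, with a 500 px focal length and
# a 6 cm camera baseline, sits 1.5 m away:
z = depth_from_disparity(500.0, 0.06, 20.0)
print(z)  # 1.5
```

Note how depth resolution degrades with distance: beyond a few meters the disparity shrinks toward zero, which is why, as the quote says, the system also leans on the brain's own stereo perception when reconstruction reaches its limits.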
Oculus Insight provided the first self-contained tracking system for a consumer VR device, but it could only capture around 100 points of interest to determine the headset's position in a room. It has since been upgraded to support depth detection. By comparison, Meta Quest Pro can produce up to 10,000 points of interest per second across different lighting conditions, enabling an enhanced 3D representation of your physical space in real time.
Through this process, Meta creates a continuously updated 3D model of the physical world. This data then feeds a predictive rendering framework that can produce images of your real environment. To compensate for rendering latency, the reconstructions are adjusted to the user's left- and right-eye views via the Asynchronous TimeWarp algorithm.
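The core idea behind Asynchronous TimeWarp is to re-aim an already-rendered frame using the latest head pose rather than re-render it, hiding the latency between render time and display time. A rotation-only toy sketch of that reprojection (real implementations warp full images on the GPU; this just rotates a single view ray):

```python
import math

# Rotation-only reprojection, the core idea behind Asynchronous TimeWarp:
# adjust a rendered view ray by the head-pose delta accumulated between
# render time and display time. Illustrative sketch only.

def yaw_matrix(rad):
    """3x3 rotation about the vertical (y) axis."""
    c, s = math.cos(rad), math.sin(rad)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def apply(m, v):
    return [sum(m[r][k] * v[k] for k in range(3)) for r in range(3)]

def reproject(pixel_dir, yaw_at_render, yaw_at_display):
    """Counter-rotate a view ray by the yaw change since the frame was rendered."""
    delta = yaw_at_display - yaw_at_render
    return apply(yaw_matrix(-delta), pixel_dir)

# A ray that pointed straight ahead at render time, after the head has
# yawed 5 degrees by display time, shifts sideways instead of being stale:
ray = reproject([0.0, 0.0, -1.0], 0.0, math.radians(5))
```

The same counter-rotation applied per pixel is what keeps the passthrough reconstruction stable while the user's head keeps moving between frames.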
According to Silveira Cabral, the use of a color stereoscopic camera allows the team to provide a more realistic interpretation of the real world.
Understanding the scene to integrate virtual content into the physical world
Presence Platform’s Scene Understanding component was showcased at the Connect 2021 event. This technology enables developers to create complex, scene-aware mixed reality experiences. According to Meta Quest Pro product manager Wei Lyu, the company’s Scene Understanding component was designed to let developers focus on building their businesses and experiences.
“We introduced Scene Understanding as a system solution,” Lyu said.
Scene Understanding is divided into three areas:
- Scene model – A single, complete, up-to-date, system-managed representation of the environment composed of geometric and semantic information. The fundamental elements of the scene model are anchors, each of which can have various components attached. For example, a user’s living room is organized around individual anchors with semantic labels, such as floor, ceiling, walls, desk, and sofa. Anchors are attached to a simple geometric representation: a 2D boundary or a 3D bounding box.
- Scene Capture – A system-guided flow that allows users to walk around and capture their room’s architecture and furnishings to generate a scene model. Going forward, the goal will be to provide an automated version of Scene Capture that doesn’t require people to manually capture their surroundings.
- Scene API – An interface that applications can use to access spatial information in the scene model for various use cases, including content placement, physics, navigation, and more. With the Scene API, developers can use the scene model to bounce a virtual ball off physical surfaces in the real room, or to create a virtual robot that climbs physical walls.
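The scene model described above is essentially a queryable collection of labeled, geometry-carrying anchors. A toy data model of that idea (all names here are hypothetical illustrations, not the actual Scene API):

```python
from dataclasses import dataclass, field

# Toy model of a scene made of anchors, each carrying a semantic label
# plus a simple geometric component, as described in the article.
# Class and field names are hypothetical, not Meta's Scene API.

@dataclass
class BoundingBox3D:
    center: tuple  # (x, y, z) in meters
    size: tuple    # (width, height, depth) in meters

@dataclass
class SceneAnchor:
    label: str              # e.g. "floor", "wall", "desk", "sofa"
    geometry: BoundingBox3D

@dataclass
class SceneModel:
    anchors: list = field(default_factory=list)

    def find(self, label):
        """Return all anchors carrying a given semantic label."""
        return [a for a in self.anchors if a.label == label]

room = SceneModel()
room.anchors.append(SceneAnchor("desk", BoundingBox3D((0.0, 0.7, -1.0), (1.2, 0.05, 0.6))))
room.anchors.append(SceneAnchor("wall", BoundingBox3D((0.0, 1.5, -2.0), (4.0, 3.0, 0.1))))

print(len(room.find("desk")))  # 1
```

An app wanting to bounce a virtual ball, for instance, would query for "floor" and "wall" anchors and feed their geometry into its physics engine as static colliders.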
“Scene Understanding reduces friction for developers, allowing them to create MR experiences that are as believable and immersive as possible with real-time occlusion and collision effects,” added Ray.
Spatial anchors for placing virtual objects
Developers can create first-class mixed reality experiences using Spatial Anchors, a core capability of the Meta Quest Pro platform. For example, a product designer can anchor multiple 3D models in a physical space using a platform such as Gravity Sketch to create a consistent environment for their product.
“While Stereoscopic Color Passthrough and Scene Understanding do the heavy lifting to allow MR experiences to blend the physical and virtual world, our anchoring capabilities provide the connective tissue that holds it all together,” said Laura Onu, Product Manager at Meta.
With the scene model, spatial anchors can be used to create rich, automatic environments for a variety of experiences and situations. For example, you can create a virtual door connected to a physical wall. This would be a big step forward for companies considering XR technology for their enterprise solutions in the area of digital twinning, warehouse automation, and even robotics.
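What makes an anchor useful is that virtual content is stored relative to the anchor rather than in raw world coordinates: if tracking later refines the anchor's position, the content moves with it and stays locked to the physical surface. A translation-only toy sketch of that idea (real anchors carry full 6-DoF poses with rotation; names are hypothetical):

```python
# Content placed relative to a spatial anchor: the virtual object's pose
# is stored in the anchor's local frame, so when tracking refines the
# anchor, the object stays glued to the same physical spot.
# Translation-only simplification; real anchors use full 6-DoF poses.

def local_to_world(anchor_world_pos, offset_local):
    """Convert an anchor-relative offset to a world-space position."""
    return tuple(round(a + o, 6) for a, o in zip(anchor_world_pos, offset_local))

anchor = (2.0, 0.0, -3.0)       # anchor pinned to a physical wall
door_offset = (0.5, 1.0, 0.0)   # virtual door, stored relative to the anchor

print(local_to_world(anchor, door_offset))  # (2.5, 1.0, -3.0)

# Tracking later refines the anchor's position by 2 cm; the door follows
# automatically because only the anchor moved, not the stored offset:
anchor = (2.02, 0.0, -3.0)
print(local_to_world(anchor, door_offset))  # (2.52, 1.0, -3.0)
```

This is the "connective tissue" Onu describes: the virtual door's relationship to the physical wall survives tracking corrections, app restarts, and headset reboots because it is expressed against the anchor, not the world origin.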
For now, it looks like Meta is focusing more on social experiences. “By combining Scene Understanding with Spatial Anchors, you can mix and match your MR experiences to the user’s environment to create a new world full of possibilities,” according to Onu, who adds, “You can become a secret agent in your own living room, place virtual furniture in your room or draw an extension of your house, create physics games, and more.”
Shared Spatial Anchors for Collocated Experiences
Additionally, Meta has added shared spatial anchors to Presence Platform. These allow developers to create local multiplayer experiences by sharing anchors with other users in the same space. You and your friends could, for example, play a VR board game on a physical table, similar to how Tilt Five delivers its gaming experiences.
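The reason a shared anchor enables this is that each headset has its own arbitrary world origin; expressing content relative to one anchor both devices can see gives them a common reference frame. A translation-only toy sketch of that coordinate alignment (real systems share full 6-DoF poses; names are hypothetical):

```python
# Shared spatial anchors give two headsets a common reference frame:
# each device has its own world origin, but content expressed relative
# to the same anchor lands on the same physical spot for both players.
# Translation-only toy sketch; real systems share full 6-DoF poses.

def to_shared_frame(point_local, anchor_local):
    """Express a device-local point relative to the shared anchor."""
    return tuple(p - a for p, a in zip(point_local, anchor_local))

def from_shared_frame(point_shared, anchor_local):
    """Re-express an anchor-relative point in a device's own frame."""
    return tuple(p + a for p, a in zip(point_shared, anchor_local))

# Player A places a game piece; the two headsets booted in different
# spots, so the same table anchor has different local coordinates:
anchor_in_a = (1.0, 0.8, -2.0)    # table anchor as seen by headset A
anchor_in_b = (-0.5, 0.8, -1.0)   # same anchor as seen by headset B

piece_in_a = (1.2, 0.8, -2.1)
shared = to_shared_frame(piece_in_a, anchor_in_a)    # anchor-relative
piece_in_b = from_shared_frame(shared, anchor_in_b)

print(tuple(round(v, 6) for v in piece_in_b))  # (-0.3, 0.8, -1.1)
```

Only the small anchor-relative offset needs to travel over the network; each headset resolves it against its own tracked copy of the anchor.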
A winning combination for mixed reality
The combination of spatial anchors, passthrough, and scene understanding can help create a rich, interactive environment designed to resemble the real world. Avinav Pashine, product manager of Meta Quest Pro, noted that there are many trade-offs involved in creating an enjoyable and comfortable mixed reality environment for users.
The future of Meta Reality continues to evolve as the company improves the platform through software updates and hardware innovations that will arrive in the next generation of its products. Silveira Cabral noted that the company’s first product is a milestone in the evolution of Meta Reality.
“We want to learn with developers as they create compelling experiences that redefine what’s possible with a VR headset. This story isn’t over yet, it’s just the first page.”
Image Credit: Meta