Meta shares: How to design and develop the MR experience "First Encounter" for Quest 3


(XR Navigation Network December 06, 2023) All Quest 3 headsets come with the MR experience "First Encounters" pre-installed. Recently, Alexander Dawson of the Meta team wrote an article describing how the team used and extended the Scene Anchors and Scene Mesh features to build an MR experience in which otherworldly monsters break through the dimensional wall and walk into your home. Below is a summary:


For "First Encounters", we used Scene Mesh (Unity | Unreal) features to design a more interactive experience. We expanded on our previous ideas of utilizing scene anchors and scene meshes in Discover and Phantom to build a core mechanic that allows monsters to "smash" the walls of your home and move from their world into yours.

Hopefully our experience using and extending scene anchors and scene meshes in First Encounters can help accelerate your development journey, and the insights we gained from building this feature can help you create similar MR experiences.

Setting the scene


First Encounters was created to showcase the types of experiences that MR can unlock on Quest 3. Although we introduce fantasy elements into the player's space, we wanted every aspect of the experience to feel grounded and to interact realistically. For the game to feel seamless and engaging, any new elements must become part of the player's world.

To achieve this we make extensive use of the Scene API (Unity | Unreal | OpenXR | WebXR). Scene anchors provide the ability to understand the user's space in terms of labeled primitives, the most basic being walls, floors, and ceilings. These three primitives can be represented as simple planes via the API, and querying the planes enables a fast basic decomposition of the user's space. The scene mesh further extends this understanding by providing a scanned triangle mesh of the user's room, which better approximates the physical space.

By applying a Passthrough shader to the scene mesh's material, the user sees the room as normal until we start punching holes in the mesh.

Prototyping in Unity


To allow for experimental mechanics while maintaining consistency, we decided to punch real holes in the scene mesh, rather than using shaders to visually fake them:

  • Cast rays to determine the points in the scene mesh where holes are to be punched

  • Collect triangles with all vertices within a radius

  • Collapse the triangles' vertices to a fixed position outside the world
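The three steps above can be sketched roughly as follows (a minimal, engine-agnostic Python sketch using NumPy arrays for the vertex and triangle buffers; in Unity this would operate on the Mesh API instead, and the names here are illustrative, not Meta's):

```python
import numpy as np

# A point well outside the play space; collapsed triangles go here.
FAR_AWAY = np.array([0.0, -10000.0, 0.0])

def punch_hole(vertices, triangles, hit_point, radius):
    """Naive hole punching: collapse every triangle whose vertices all
    lie within `radius` of the raycast hit point."""
    dist = np.linalg.norm(vertices - hit_point, axis=1)
    inside = dist <= radius                      # per-vertex mask
    tri_inside = inside[triangles].all(axis=1)   # triangles fully inside
    # Move the affected vertices to a fixed position outside the world;
    # the degenerate triangles render as nothing, leaving a hole.
    doomed_verts = np.unique(triangles[tri_inside])
    vertices = vertices.copy()
    vertices[doomed_verts] = FAR_AWAY
    return vertices
```

Because collapsed vertices can be shared with neighboring triangles outside the radius, those neighbors stretch toward the collapse point, which is one source of the jagged edges this naive approach produces.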

This fast, naive hole-punching algorithm is great for prototyping, but of course has significant limitations. First, the visual quality of the resulting holes did not match what we were looking for: the approach sometimes produced large, jagged holes due to inconsistent triangle sizes in the scene mesh and the way we selected triangles. Running the mesh through a tessellator and a decimator gave us a way to balance performance and visual consistency when smashing walls.

The hole-punching method avoids the extra work of resizing the mesh, but it still triggers Unity/PhysX to re-cook the physics mesh. Cooking a concave mesh with a large number of triangles can take a long time. In Unity, the default Mesh API makes this a world-stopping process, but it can be parallelized and made asynchronous via the Jobs system.

Since the cooking process is an engine-side feature in Unity, reducing the modification time, and the delay before the final mesh can be queried and displayed, requires preprocessing or reducing the mesh size.

Evaluating existing solutions

Since this feature was core to our game design but carried huge technical unknowns, we surveyed existing Unity packages and mesh destruction libraries. After some evaluation, we ultimately abandoned them for the following reasons:

  • Mesh destruction libraries usually require convex meshes, or prefer them for performance reasons, while the scene mesh is concave.

  • Libraries generally assume that meshes are solid volumes rather than hollow shells, whereas the scene mesh is hollow.

  • Libraries typically expect decomposed meshes with low detail and low triangle counts, and their performance drops off quickly for high-triangle-count meshes.

  • Any library that requires the mesh to be pre-processed in the Unity Editor cannot be used because the scene mesh is unique to each headset at runtime.

  • Most libraries are designed for desktop systems, not for the stringent performance requirements of MR gaming on Quest 3. Even if the issues above could be fixed or worked around, core performance problems in the libraries would prevent delivering a smooth experience.

A custom system


To meet these requirements, we clearly needed a custom system. The new DestructibleMesh system solves the problem by using Voronoi partitioning to classify scene mesh triangles into submesh chunks. Each submesh chunk can be given its own GameObject in Unity and individually disabled when it is destroyed. This lets us move the processing cost to the beginning of the game.

The DestructibleMesh system uses Unity's Jobs system for parallelization and the Burst compiler for fast native performance and SIMD vector instructions. To sort triangles, the system performs the following tasks:

  • The game provides the system with a set of points that define Voronoi cell locations on the Scene API planes. The points are evenly distributed, using simplex noise to add some randomness while keeping relatively consistent distances between points (a completely random distribution can produce chunks that are either very large or very small).

  • The system first determines which Voronoi cell each triangle belongs to by comparing the squared distances between the triangle's centroid and the points that define the Voronoi cell locations. The index of each Voronoi cell serves as the submesh ID, and the system labels each triangle with the submesh it belongs to under this partitioning scheme.

  • If a triangle is located within an area that should be reserved for health and safety, the system overrides the calculated submesh owner ID. The reserved-area algorithm determines whether deleting a triangle could cause players to accidentally collide with their real surroundings, and assigns a special owner ID to sort those triangles into their own submesh. This algorithm is described in more detail below.

  • Knowing the submesh for each triangle, iterate over the triangles, increment the triangle count of the respective submesh, and add each vertex to a bitmap to track which vertices must be copied into the final vertex buffer. Because vertices may be shared between triangles that are not all in the target submesh, we copy only what is necessary.

  • Knowing the number of triangles and vertices per submesh, the target vertex and index buffers can be reserved for each submesh. At this point, we iterate through the vertex bitmap and fill each submesh's vertex buffer.

  • Copy each triangle's indices into its submesh's index buffer, remapping them to the new vertex positions.

  • Wait asynchronously in the Jobs system for all the above jobs to be completed.

  • Submit and finalize changes to the Unity meshes using the relevant Mesh API. At this point the meshes can be used for rendering.

  • Cook in the Jobs system to create a PhysX mesh.

  • Asynchronously wait for cooking to complete. From this point on, the mesh can be used in physics without the risk of a last-minute world-stopping processing step.

Note that in the steps above, the system makes heavy use of parallelization, SIMD vector instructions, cache-efficient data access patterns, and reduced or processor-predictable branching. For brevity, specific implementation details are omitted here.
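The seeding and classification steps can be illustrated with a small Python sketch (NumPy, 2D plane coordinates; plain uniform jitter stands in for the simplex noise mentioned above, and the function names are ours, not Meta's):

```python
import numpy as np

def jittered_seeds(width, height, spacing, rng):
    """Evenly spaced grid points with bounded random jitter, so the
    Voronoi cells stay roughly uniform in size (the article uses simplex
    noise; uniform jitter is a stand-in here)."""
    xs = np.arange(spacing / 2, width, spacing)
    ys = np.arange(spacing / 2, height, spacing)
    grid = np.array([(x, y) for x in xs for y in ys])
    return grid + rng.uniform(-spacing / 3, spacing / 3, size=grid.shape)

def classify_triangles(centroids, seeds):
    """Assign each triangle centroid to the nearest seed by squared
    distance; the seed's index doubles as the submesh ID."""
    diff = centroids[:, None, :] - seeds[None, :, :]   # (n_tris, n_seeds, 2)
    sq_dist = (diff * diff).sum(axis=2)
    return sq_dist.argmin(axis=1)
```

A health-and-safety pass would then overwrite the returned IDs for reserved triangles before the per-submesh buffers are built.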

Reserve a safe space for mixed reality gaming

To reserve health and safety areas in players' rooms, we needed a well-thought-out approach. Our goal was a quick solution that works across a variety of room layouts, helping prevent scene mesh destruction from creating accidental tripping and collision hazards. The algorithm is very simple:


  • Suppose the boundary of our room is a top-down closed-loop two-dimensional polygon. This can be obtained from the Scene API via the floor's OVRScenePlane component.

  • The polygon is shrunk by pushing each line segment along its inward orthogonal vector by a tuning distance of 0.3 m.

  • Starting 0.3 m below the ceiling, this polygon is extruded downward; any triangle with a centroid inside the resulting volume remains unbreakable.

  • If the centroid of a triangle is below the fixed reserve-height plane of 0.75 m, the triangle is always retained, so that walls, and obstacles along walls, remain visibly present.
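Putting the reservation rules together, here is a simplified Python sketch (y-up coordinates; the inward offset assumes a convex, counter-clockwise room outline for brevity, whereas a real implementation must also handle concave layouts; all names are illustrative, not Meta's):

```python
import numpy as np

def shrink_polygon(poly, margin=0.3):
    """Push each edge inward along its orthogonal vector by `margin`,
    then intersect adjacent offset edges to get the new corners."""
    poly = np.asarray(poly, dtype=float)
    n = len(poly)
    lines = []
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        d = b - a
        normal = np.array([-d[1], d[0]])  # inward for CCW winding
        normal /= np.linalg.norm(normal)
        lines.append((a + margin * normal, d))
    corners = []
    for i in range(n):
        p1, d1 = lines[i - 1]
        p2, d2 = lines[i]
        # Solve p1 + t*d1 == p2 + s*d2 for the new corner.
        t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)[0]
        corners.append(p1 + t * d1)
    return np.array(corners)

def point_in_polygon(pt, poly):
    """Standard even-odd ray-casting point-in-polygon test in 2D."""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):
            if x1 + (y - y1) * (x2 - x1) / (y2 - y1) > x:
                inside = not inside
    return inside

def is_reserved(centroid, shrunk_poly, ceiling_height,
                reserve_height=0.75, ceiling_margin=0.3):
    """A triangle stays unbreakable if its centroid sits below the
    reserve-height plane, or inside the shrunk polygon extruded
    downward from just below the ceiling."""
    x, y, z = centroid
    if y < reserve_height:
        return True
    return (y < ceiling_height - ceiling_margin
            and point_in_polygon((x, z), shrunk_poly))
```

The reserved triangles are then routed into their own submesh, which the game never disables.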

Future research

First Encounters has been released, and the DestructibleMesh system is being improved to support scalable operations, with triangle classification and modification split between preprocessing and runtime steps. Operations can be queued by the user, and multiple meshes can be queued for modification in the Jobs system. In addition to making scene mesh modification and health-and-safety features easy to implement in future projects, this will also allow experimentation with different visual effects and more precise mesh destruction.

Think outside the box

Scene meshes and scene anchors offer many possibilities to completely change how users see and interact with a space. However, there are currently certain limitations to keep in mind.

When developing First Encounters, we designed the wall-destruction mechanic very carefully, knowing that the scene mesh does not provide texture data. This means we cannot deform the mesh by pushing it inward or outward, because we cannot deform objects in Passthrough. When large pieces of the user's room shatter, the broken walls and ceilings use generic textures, but this is hidden by a combination of VFX particles and by keeping the fragments short-lived.

It's also worth noting that in Unity, certain queries such as Collider.ClosestPoint() should not be used with the scene mesh. Behind the scenes, this method uses Physics.ClosestPoint(), and neither works with concave meshes: when a concave mesh collider is involved, the call fails and returns the point that was passed in. For some use cases, Physics.ComputePenetration() provides a potential alternative when working with the scene mesh.

Final thoughts

There is huge potential to subvert and extend people's expectations using MR scene-understanding APIs such as scene anchors and scene meshes, which provide new ways to tightly integrate virtual elements into the real world in a natural way. We hope the insights shared from building First Encounters help developers expand the definition of MR.
