Meta Officially Launches IOBT and Generative Legs to Support Full Body Avatar with AI

(XR Navigation Network, December 16, 2023) Meta's previously announced Inside-Out Body Tracking (IOBT) and Generative Legs are now officially supported for Unity and Native via v60 (Unreal support pending). Developers can now build full-body avatars that realistically map the user's movements.

Most current headsets track only the user's head and hands. Because of limits on camera count and field-of-view coverage, the system often has to estimate arm pose with inverse kinematics (IK), which leads to inaccuracies.
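To see why head-and-hand-only tracking forces guesswork, consider the standard two-bone IK setup: from the shoulder and wrist positions alone, the elbow's bend angle follows from the law of cosines, but the elbow's position around the shoulder-wrist axis (the "swivel") is unconstrained and must be guessed. The sketch below is a minimal illustration of that step, not Meta's implementation; all names are ours.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative two-bone IK: recover an elbow bend angle from the
// shoulder->wrist distance alone. This is NOT Meta's algorithm, just the
// law-of-cosines step that hand-only tracking systems rely on.
struct Vec3 { float x, y, z; };

static float Distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Returns the interior elbow angle (radians) for an upper arm and forearm of
// the given lengths whose endpoints sit at `shoulder` and `wrist`.
float SolveElbowAngle(const Vec3& shoulder, const Vec3& wrist,
                      float upperArmLen, float forearmLen) {
    float d = Distance(shoulder, wrist);
    // Clamp to the reachable range so acos stays defined.
    float lo = std::fabs(upperArmLen - forearmLen);
    float hi = upperArmLen + forearmLen;
    if (d < lo) d = lo;
    if (d > hi) d = hi;
    // Law of cosines: d^2 = a^2 + b^2 - 2ab*cos(theta).
    float cosTheta = (upperArmLen * upperArmLen + forearmLen * forearmLen - d * d) /
                     (2.0f * upperArmLen * forearmLen);
    return std::acos(cosTheta);
}

int main() {
    Vec3 shoulder{0.0f, 1.5f, 0.0f};
    Vec3 wrist{0.4f, 1.2f, 0.2f};
    float angle = SolveElbowAngle(shoulder, wrist, 0.30f, 0.28f);
    std::printf("elbow angle: %.1f deg\n", angle * 180.0f / 3.14159265f);
    // Note: the elbow's *position* around the shoulder-wrist axis is still
    // ambiguous; IK systems pick a plausible swivel, which is where errors
    // creep in. Observing the elbow directly removes that guess.
    return 0;
}
```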

Meta's new consumer headset Quest 3, however, can track the user's wrists, elbows, shoulders, and torso. Its multiple front-facing cameras, particularly the side-facing ones, combined with advanced computer vision algorithms, enable precise tracking of the upper body. Meta calls this Inside-Out Body Tracking (IOBT).

In addition to letting Quest 3 accurately track the upper body, Meta also offers Generative Legs, which uses AI to estimate lower-body posture.

The team says it uses an artificial intelligence model to estimate the position of the legs. Generative Legs should produce more accurate leg estimates because it can use the precise upper-body tracking as input.

However, note that because it is only an estimate, Generative Legs currently supports only rough movements such as walking, jumping, and crouching, and cannot capture more subtle motions.

With Inside-Out Body Tracking (IOBT) and Generative Legs officially supported for Unity and Native (Unreal pending) via v60, developers can combine the two to implement full-body avatars that realistically map the user's movements.
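For Native developers, the v60 flow goes through OpenXR body-tracking extensions. The sketch below is a minimal, hedged example assuming the XR_FB_body_tracking, XR_META_body_tracking_full_body, and XR_META_body_tracking_fidelity extensions as described in their specs; verify the exact names and enums against the Meta OpenXR Mobile SDK headers, since this is written from the spec rather than a confirmed build.

```cpp
#include <openxr/openxr.h>

// Minimal sketch (not a complete app): create a body tracker that uses the
// full-body joint set (Generative Legs) and request high-fidelity tracking
// (IOBT). Error handling is elided; names follow the XR_FB_body_tracking,
// XR_META_body_tracking_full_body, and XR_META_body_tracking_fidelity specs
// and should be checked against your SDK version.
void SetUpFullBodyTracking(XrInstance instance, XrSession session,
                           XrSpace referenceSpace, XrTime displayTime) {
    // 1. Create the tracker with the META full-body joint set (adds legs).
    XrBodyTrackerCreateInfoFB createInfo{XR_TYPE_BODY_TRACKER_CREATE_INFO_FB};
    createInfo.bodyJointSet = XR_BODY_JOINT_SET_FULL_BODY_META;

    XrBodyTrackerFB bodyTracker = XR_NULL_HANDLE;
    PFN_xrCreateBodyTrackerFB xrCreateBodyTrackerFB = nullptr;
    xrGetInstanceProcAddr(instance, "xrCreateBodyTrackerFB",
                          reinterpret_cast<PFN_xrVoidFunction*>(&xrCreateBodyTrackerFB));
    xrCreateBodyTrackerFB(session, &createInfo, &bodyTracker);

    // 2. Request high-fidelity tracking, i.e. camera-based IOBT for the
    //    upper body instead of pure IK estimation.
    PFN_xrRequestBodyTrackingFidelityMETA xrRequestBodyTrackingFidelityMETA = nullptr;
    xrGetInstanceProcAddr(instance, "xrRequestBodyTrackingFidelityMETA",
                          reinterpret_cast<PFN_xrVoidFunction*>(&xrRequestBodyTrackingFidelityMETA));
    xrRequestBodyTrackingFidelityMETA(bodyTracker, XR_BODY_TRACKING_FIDELITY_HIGH_META);

    // 3. Each frame, locate the joints (legs included) relative to a base space.
    XrBodyJointLocationFB jointLocations[XR_FULL_BODY_JOINT_COUNT_META];
    XrBodyJointLocationsFB locations{XR_TYPE_BODY_JOINT_LOCATIONS_FB};
    locations.jointCount = XR_FULL_BODY_JOINT_COUNT_META;
    locations.jointLocations = jointLocations;

    XrBodyJointsLocateInfoFB locateInfo{XR_TYPE_BODY_JOINTS_LOCATE_INFO_FB};
    locateInfo.baseSpace = referenceSpace;
    locateInfo.time = displayTime;

    PFN_xrLocateBodyJointsFB xrLocateBodyJointsFB = nullptr;
    xrGetInstanceProcAddr(instance, "xrLocateBodyJointsFB",
                          reinterpret_cast<PFN_xrVoidFunction*>(&xrLocateBodyJointsFB));
    xrLocateBodyJointsFB(bodyTracker, &locateInfo, &locations);
    // jointLocations[...] now holds poses for the full skeleton; feed these
    // into your avatar retargeting.
}
```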

It is worth mentioning that, beyond avatars, Meta is also continuously optimizing the Quest system, including for mixed reality. In October, for example, Meta launched the Mesh API and an experimental version of the Depth API via SDK v57. These two new features enrich the interactions in MR experiences and add detail that improves realism.

The Mesh API gives you access to a scene mesh, a geometric representation of the environment that reconstructs the physical world as a single triangle-based mesh. Quest 3's Space Setup feature automatically generates the scene mesh by capturing room elements, and your application can query the relevant data through the Scene API.
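On the Native side, pulling that triangle mesh follows the usual OpenXR two-call idiom (first call sizes the buffers, second call fills them). The sketch below assumes the XR_META_spatial_entity_mesh extension and its xrGetSpaceTriangleMeshMETA entry point, which we understand backs the Mesh API; treat the struct fields as assumptions and verify them against the SDK headers.

```cpp
#include <openxr/openxr.h>
#include <vector>

// Sketch: fetch the triangle mesh attached to a scene-mesh space. Names
// follow the XR_META_spatial_entity_mesh extension spec as we understand it;
// verify against your SDK version. Error handling elided.
void FetchSceneMesh(XrInstance instance, XrSpace sceneMeshSpace,
                    std::vector<XrVector3f>& outVertices,
                    std::vector<uint32_t>& outIndices) {
    PFN_xrGetSpaceTriangleMeshMETA xrGetSpaceTriangleMeshMETA = nullptr;
    xrGetInstanceProcAddr(instance, "xrGetSpaceTriangleMeshMETA",
                          reinterpret_cast<PFN_xrVoidFunction*>(&xrGetSpaceTriangleMeshMETA));

    XrSpaceTriangleMeshGetInfoMETA getInfo{XR_TYPE_SPACE_TRIANGLE_MESH_GET_INFO_META};

    // First call: query the required buffer sizes.
    XrSpaceTriangleMeshMETA mesh{XR_TYPE_SPACE_TRIANGLE_MESH_META};
    xrGetSpaceTriangleMeshMETA(sceneMeshSpace, &getInfo, &mesh);

    outVertices.resize(mesh.vertexCountOutput);
    outIndices.resize(mesh.indexCountOutput);

    // Second call: fill the caller-allocated buffers.
    mesh.vertexCapacityInput = static_cast<uint32_t>(outVertices.size());
    mesh.vertices = outVertices.data();
    mesh.indexCapacityInput = static_cast<uint32_t>(outIndices.size());
    mesh.indices = outIndices.data();
    xrGetSpaceTriangleMeshMETA(sceneMeshSpace, &getInfo, &mesh);
    // outVertices/outIndices now describe the room as one triangle mesh,
    // ready for physics colliders or navigation queries.
}
```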

As for the Depth API, it enables dynamic occlusion of fast-moving objects such as virtual characters, pets, and limbs, helping you unlock more convincing MR experiences.
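Conceptually, depth-based occlusion is a per-pixel comparison: if the real world is closer to the camera than the virtual fragment, the fragment is hidden. In practice this runs in a fragment shader against the Depth API's environment depth texture; the plain C++ sketch below only illustrates the test itself, and every name in it is ours, not from Meta's SDK.

```cpp
// Illustrative per-pixel occlusion test (normally done in a fragment shader
// against the environment depth texture). All names here are illustrative.
struct OcclusionResult { bool occluded; float visibility; };

OcclusionResult TestOcclusion(float virtualDepthMeters, float environmentDepthMeters) {
    // Soft edge: fade visibility over a small band instead of a hard cut,
    // which hides depth-map noise at object boundaries.
    const float kSoftBand = 0.05f;  // 5 cm transition band (tunable)
    float delta = environmentDepthMeters - virtualDepthMeters;
    if (delta <= -kSoftBand) return {true, 0.0f};   // real object clearly in front
    if (delta >= kSoftBand) return {false, 1.0f};   // virtual object clearly in front
    float visibility = (delta + kSoftBand) / (2.0f * kSoftBand);
    return {visibility < 0.5f, visibility};
}
```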
