Could Meta and Qualcomm finally bring the LLaMA AI model to the Quest headset?

Qualcomm says it is working with Meta to optimize its LLaMA AI model to run on-device.

In a tweet announcing the effort, Qualcomm listed "XR" as one of its device categories.

LLaMA is Meta's series of open-source large language models (LLMs), using a Transformer architecture similar to OpenAI's closed-source GPT series.

This week, Meta released LLaMA 2. Benchmarks show it outperforms all other open-source large language models and comes close to OpenAI's GPT-3.5, the model that powers the free version of ChatGPT.

However, running large language models at reasonable speeds on mobile chipsets is a huge challenge, and likely won't happen any time soon—especially in VR, where the system also needs enough headroom to run tracking and rendering at 72 frames per second.

For example, even the smallest variant of LLaMA 2, the 7 billion parameter model, requires 28GB of memory to run at full precision. Recently, hobbyists have been running LLMs at reduced precision, in as little as 3.5GB of memory, but this noticeably degrades output quality and still requires considerable CPU and/or GPU resources.
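
For a rough sense of the numbers, weight memory scales with parameter count times bytes per parameter; the sketch below (illustrative arithmetic only, ignoring activations and runtime overhead) shows how 7 billion parameters works out to about 28GB at full 32-bit precision and roughly 3.5GB at 4-bit quantization.

```python
# Back-of-the-envelope estimate of LLaMA 2 7B weight memory at different precisions.
# Illustrative only: ignores activations, KV cache, and runtime overhead.
PARAMS = 7_000_000_000  # 7 billion parameters

for name, bits in [("FP32 (full precision)", 32), ("FP16", 16), ("INT4 (quantized)", 4)]:
    gigabytes = PARAMS * bits / 8 / 1e9
    print(f"{name}: ~{gigabytes:.1f} GB")
# FP32 (full precision): ~28.0 GB
# FP16: ~14.0 GB
# INT4 (quantized): ~3.5 GB
```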

If Qualcomm and Meta can finally get the LLaMA model to run on the Quest headset, it will open up a whole host of groundbreaking use cases.

This could enable true next-gen NPCs (non-player characters): virtual characters you can talk to and interact with to get information within a game or experience. That could lead to entirely new kinds of experiences in headsets, closer to Star Trek's holodeck than to current video games.
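
As a purely illustrative sketch of what such an NPC could look like, the snippet below drives a short dialogue turn with a locally quantized LLaMA model via the community llama-cpp-python bindings. The model file name, persona, and prompt format are hypothetical, and nothing here reflects an actual Meta or Qualcomm API, let alone one running on a Quest headset.

```python
# Hypothetical sketch: an NPC answering a player with a locally run, quantized LLaMA model.
# Assumes the community llama-cpp-python bindings and a quantized model file on disk.
from llama_cpp import Llama

llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf", n_ctx=512)  # hypothetical file name

npc_persona = "You are a market trader in a fantasy town. Answer in one short sentence."
player_line = "Where can I find the blacksmith?"

response = llm(
    f"{npc_persona}\nPlayer: {player_line}\nTrader:",
    max_tokens=48,
    stop=["Player:"],
)
print(response["choices"][0]["text"].strip())
```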

However, there is currently no indication that this will be implemented on the device anytime soon.
