Meta FAIR 10th Anniversary: Using AI to Advance AR/VR Development and Realize a Vision of the Future


Source: XR Navigation Network

(XR Navigation Network, December 07, 2023) This year marks the 10th anniversary of Meta's (formerly Facebook) Fundamental AI Research (FAIR) team. In a post titled "Celebrating 10 Years of AI Innovation + the Future of AR/VR," the team discussed the potential of artificial intelligence and its role in the future of AR/VR. The following is a summary:


FAIR is celebrating its 10th anniversary milestone with the introduction of new models, datasets, and updates across audio generation and multimodal perception. It's also a reminder that while AI may be the hot topic of the day, it's been part of our company's DNA for years.

"It's been clear from the beginning of Facebook that artificial intelligence is going to be one of the most important technologies for our company, maybe even the most important technology," said Andrew Bosworth, Meta Chief Technology Officer and head of Reality Labs.

In fact, Bosworth happens to be the company's first artificial intelligence employee. Bosworth recalls: "I was able to design and build our first heuristic-based News Feed system, and then a machine learning system built on the Coefficient algorithm. Of course, my knowledge of AI quickly became obsolete. I remember when I was teaching Mark (CEO Zuckerberg), people thought neural networks were a dead end. We treated them as a once-great technology whose limitations had been exposed. Of course, by the time I started working in advertising a few years later, the neural network revolution was already well established. I was extremely excited to work with our team on our first sparse neural network implementation with PyTorch."

In those early days, there was great excitement about AI throughout the tech industry, and a race began to build cutting-edge AI teams. But Mark Zuckerberg decided early on to make a fundamental AI research lab the centerpiece of the company's AI efforts.

Bosworth noted: "Since 2013, FAIR has set a new standard for industry AI research labs. We prioritize open research, collaborate with the entire research community, and publish and open source much of our work, which accelerates everyone's progress."

Within a year, FAIR began publishing the results of its work. In 2017, PyTorch was open sourced and quickly became a common framework for building cutting-edge artificial intelligence in research and production. From feed ranking and content recommendations to the delivery of relevant ads, image and sticker generation, and the AI you can interact with, AI is already starting to impact Meta's business and top strategic priorities.

"As exciting as this work is, it's still in its infancy," Bosworth said. "It's going to play an important role not only in the products we have today, but in products that were not possible before, of course." Including products in the areas of wearables and augmented reality. Our vision in said areas really hinges on artificial intelligence being able to truly understand the world around us and anticipate our needs. We believe that this contextualized artificial intelligence will Becoming the cornerstone of the first truly new computing platform since the PC."

Chief Scientist Michael Abrash added: "I have spent much of the past decade leading research efforts aimed at creating a new computing platform based on AR/VR, and the rest of Reality Labs is dedicated to making sure that platform becomes a reality. It's one of Meta's two big long-term bets on future technologies, the other being, of course, artificial intelligence. As we celebrate FAIR's 10th anniversary, I'm really excited to see how these two long-term investments are coming together in a way that feels like science fiction."

In 1960, Joseph Licklider first proposed the vision of human-computer symbiosis, in which computers cooperate with humans on the tasks humans are not good at, freeing us to be more creative. This vision eventually led a group of talented people to gather at the Xerox Palo Alto Research Center and launch the Alto computer in 1973, which was followed by the Macintosh in 1984.

"The human-centered computing revolution has become so all-encompassing, I don't even need to ask you if you are," Abrash said. "I'm sure every one of you is using a direct descendant of the Alto, and now you have a small ized version (mobile phone). We live in the world Licklider created. Although this model of human-computer interaction is powerful, it is still extremely limited relative to the human ability to absorb information and take action. .”

While humans receive information from the 3D environment around us through all of our senses, the digital world is typically accessible only through 2D screens of limited size.

Abrash explains: "Today's 2D model only scratches the surface of our perceptions and capabilities. In contrast, AR glasses and VR headsets can drive your senses in a way that is close to reality. This has the potential to make distance irrelevant and let people truly be with each other. At the extreme, it might one day allow humans to have any experience, and that in itself would change the world."

With contextual AI, a never-tiring, always-available, proactive assistant, AR glasses and VR headsets can help you achieve your goals, enhance your perception, memory, and cognitive abilities, and make your life easier and more efficient.

"This hasn't been possible before because there hasn't been a device that can perceive your life from your perspective," Abrash noted. "I believe this may end up being the most important aspect of the AR/VR revolution. Just like graphics User Interfaces Just like GUIs are how we interact with the digital world today, contextual AI will be the human-computer interface of the future and will be even more transformative than GUIs because it goes right to the core of helping us live the way we want to live. "

This shift is already starting to happen. After a decade of research, the pieces are coming together. You'll get a glimpse of the future next year when Meta brings multimodal artificial intelligence to Ray-Ban Meta smart glasses and uses the Ego-Exo4D foundational dataset for video and multimodal perception research. But this is just the beginning. The complete contextual AI systems of the future will require a variety of technologies that simply don't exist today.

"I always used to imagine that I was working hard and there was a box that said, 'The miracle happened,'" Abrash said. "And then in the past few years, the magic really happened. Large language model LLMs emerged, It has the potential to handle multimodal reasoning needed to understand users’ goals and help them achieve them based on context and history. The key is that LLMs have the potential to work across visual, audio, speech,eye tracking, reason between manual tracking, EMG and other situational inputs, your history and broader world knowledge, and then take action to help you achieve your goals, guiding you or disambiguating when needed. To realize this potential, LLMs need to be taken to a different level, and FAIR is the ideal team to achieve this goal. Taken as a whole, the fusion of FAIR’s AI research and Reality Labs’ AR/VR research brings together all the elements needed to create contextual AI interfaces that will fully realize Meta’s vision for the future. "
