Georgia Tech, Rice University and Meta collaborate on new eye-tracking system - XR Navigator Latest News
Camera-based eye tracking plays an important role in AR/VR systems, but current implementations face several limitations, such as physical size, throughput, and the high communication cost between the camera and the back-end system.
Recently, a team from the Georgia Institute of Technology, Rice University, and Meta conducted a study in which they developed a new eye-tracking system designed to overcome these limitations by combining a lensless camera, dedicated algorithms, and an accelerated processor design.
The researchers noted, "Current VR headsets are too heavy, games can have lag, and using a controller can be cumbersome. Combined, this can prevent users from having a truly immersive experience."
Therefore, the team proposed a new system called EyeCoD, which uses a lensless FlatCam in place of a conventional camera. By replacing the focusing lens with a coded binary mask, the FlatCam can be 5x to 10x thinner than a conventional lens-based camera. The mask encodes the incident light rather than focusing it directly, and the captured image is reconstructed by computationally decoding the encoded measurements recorded by the sensor.
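The mask-then-decode idea can be illustrated with a toy linear model. The sketch below is an assumption-laden simplification, not the paper's actual pipeline: it models a separable coded mask as a pair of random binary matrices, forms the sensor measurement as a matrix product, and recovers the scene with regularized least squares.

```python
import numpy as np

# Toy model of lensless coded-mask imaging (hypothetical sizes and solver):
# the sensor records Y = Phi_L @ X @ Phi_R.T instead of a focused image,
# and the scene X is recovered computationally.
rng = np.random.default_rng(0)
n = 32                                   # scene resolution (n x n)
m = 48                                   # sensor resolution (m x m), m > n

Phi_L = rng.integers(0, 2, (m, n)).astype(float)   # binary mask, rows
Phi_R = rng.integers(0, 2, (m, n)).astype(float)   # binary mask, columns
X = rng.random((n, n))                             # unknown scene

Y = Phi_L @ X @ Phi_R.T                            # encoded measurement

# Decode with Tikhonov-regularized pseudo-inverses; lam is tiny because
# this noise-free, overdetermined toy system is well conditioned.
lam = 1e-6
P_L = np.linalg.solve(Phi_L.T @ Phi_L + lam * np.eye(n), Phi_L.T)
P_R = np.linalg.solve(Phi_R.T @ Phi_R + lam * np.eye(n), Phi_R.T)
X_hat = P_L @ Y @ P_R.T                            # decoded scene
```

Note that `Y` itself looks nothing like the scene, which is also why a lensless design can have privacy benefits: the raw sensor data is uninterpretable without the decoding step.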
In addition, the system's reduced form factor allows the back-end eye-tracking processor to sit closer to the front-end camera, which shortens the camera-to-processor distance and lowers both the communication cost and the overall system latency.
Another feature of the EyeCoD system is that it renders at high resolution only where the user is looking. By predicting the user's gaze point and rendering those regions on the fly, the system achieves high perceived resolution. This approach not only saves computation, but also improves processing speed and efficiency through specialized hardware accelerators.
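This gaze-contingent (foveated) rendering scheme can be sketched in a few lines. The function below is a minimal illustration under assumed parameters (the gaze point, fovea radius, and 4x peripheral downsampling are all made up for the example): it keeps full resolution in a window around the predicted gaze point and fills the periphery with an upsampled low-resolution version of the frame.

```python
import numpy as np

def foveate(frame, gaze_xy, radius=32, factor=4):
    """Keep full resolution near gaze_xy; degrade the periphery.

    Toy foveated-rendering sketch: radius and factor are illustrative.
    """
    h, w = frame.shape
    # Low-resolution periphery: subsample, then nearest-neighbor upsample.
    low = frame[::factor, ::factor]
    periphery = np.repeat(np.repeat(low, factor, axis=0),
                          factor, axis=1)[:h, :w]
    out = periphery.copy()
    # High-resolution fovea: copy original pixels around the gaze point.
    x, y = gaze_xy
    y0, y1 = max(0, y - radius), min(h, y + radius)
    x0, x1 = max(0, x - radius), min(w, x + radius)
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
    return out

frame = np.random.default_rng(1).random((256, 256))
out = foveate(frame, gaze_xy=(128, 128))
```

The savings come from rendering only the foveal window at full quality; in a real renderer the periphery would be drawn at lower shading rate rather than post-hoc downsampled as in this toy.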
By incorporating the FlatCam, the team's proposed system performs eye tracking in a smaller package and with higher efficiency while maintaining the accuracy of the eye-tracking algorithm. In addition, because the system does not use a lens-based camera, it enhances user privacy.
It is worth mentioning that in 2023 the team received a US$25,000 grant from Stanford's Office of Technology Licensing to help commercialize the project. They plan to use the funding to integrate the current demonstration into a compact eye-tracking system for use in commercial VR/AR headsets.