Excerpts from New AR/VR Patent Applications filed at the USPTO on 02/24/2024

(XR Navigation Network, February 24, 2024) The U.S. Patent and Trademark Office recently published a new batch of AR/VR patent applications. The following 76 filings were compiled by XR Navigation Network (please click on a patent title for details). For more patent disclosures, you can search the patent section of XR Navigation Network at https://patent.nweon.com/, and you can also join the XR Navigation Network AR/VR patent exchange WeChat group.

1. "Meta Patent | Comfortable multiplexed lighting for modeling relightable avatars

Excerpts from New AR/VR Patent Applications filed at the USPTO on 02/24/2024

在一个实施例中,专利描述的方法包括选择用于收集被摄体的图片序列的照明配置,所述照明配置包括具有围绕被摄体多个光的空间模式和具有多个摄像头曝光窗口的时间流逝模式,基于提供给被摄体平均照明强度来修改空间模式和时间流逝模式,基于所述空间模式和所述时间流逝模式按顺序激活所述光,并且在每个所述摄像头曝光窗口处从围绕所述被摄体的多个摄像头收集多个图片。

2. "Meta Patent | Systems and methods of configuring uwb physical layer headers

In one embodiment, a system and method for configuring an ultra-wideband (UWB) physical layer header may include a first UWB device that generates a packet including a header. The header has information indicative of a data rate of a payload included in the packet. The first UWB device may send the packet to a second UWB device.
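
As a rough illustration of the kind of header field involved, the sketch below packs a payload data-rate code into a one-byte header and reads it back on the receiving side. The field layout, widths, and rate table are invented for this example and are not taken from the patent or from any UWB standard.

```python
# Illustrative sketch only: a made-up header layout with a payload data-rate
# field, loosely mirroring the idea of signaling the payload rate in a UWB
# PHY header. Field widths and rate codes are assumptions, not the patent's.
import struct

RATE_CODES = {0: "850 kb/s", 1: "6.8 Mb/s", 2: "27 Mb/s"}  # assumed table

def build_packet(rate_code: int, payload: bytes) -> bytes:
    # 1-byte header: low 2 bits carry the data-rate code, rest reserved.
    header = rate_code & 0b11
    return struct.pack("B", header) + payload

def parse_packet(packet: bytes):
    (header,) = struct.unpack("B", packet[:1])
    rate_code = header & 0b11
    return RATE_CODES.get(rate_code, "unknown"), packet[1:]

pkt = build_packet(1, b"hello")          # first device generates the packet
print(parse_packet(pkt))                 # second device recovers the rate
```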

3. Meta Patent | Configurable diplexer for dual band support

In one embodiment, the patent describes a switch-configurable diplexer architecture for dual-band communications. A first diplexer may have a first frequency band for an interconnect connection between an access point and a device and a second frequency band for an in-link connection between the device and a second device. A second diplexer may have a third frequency band for the interconnect connection and a fourth frequency band for the in-link connection. The first and third frequency bands may partially overlap, as may the second and fourth frequency bands. A processor may recognize that the access point has selected, for the interconnect connection, a first channel corresponding to the second frequency band and, responsive to the recognition, determine to use the second diplexer for the in-link connection.

4. Meta Patent | SRAM power switching with reduced leakage, noise rejection, and supply fault tolerance

In one embodiment, the patent describes techniques for generating a supply voltage for an SRAM array using power switching logic. The power switching logic may generate the supply voltage using a first power rail during an active state (providing a higher voltage) and using a second power rail during a deep hold state (supplying a lower voltage). In one example, a sense and recover (SR) unit is provided to sense a reduction in the second voltage. The SR unit may generate an additional voltage that modifies the supply voltage to be higher than the reduced second voltage, thereby reducing sag and/or noise in the second supply rail. The power switching logic, the SR cells, and the SRAM array may coexist or be distributed in a computer system. For example, the power switching logic, SR cells, and SRAM arrays may be embedded in a system-on-chip integrated circuit.

5. Meta Patent | Automatic ontology generation for world building in an extended reality environment

In one embodiment, the patent describes a method comprising receiving code comprising a plurality of segments programmed to manipulate an object in an extended reality environment; using a machine learning model to learn to identify a plurality of coding patterns from the plurality of segments programmed to manipulate the object; extracting, based on said plurality of coding patterns, object terms used in said plurality of segments and relationships between said object terms; and generating, based on said object terms and said relationships, an ontology for use in a natural language model, wherein said ontology is configured for use by said natural language model to parse speech into intents and slots.

6. Meta Patent | Auto-completion for gesture-input in assistant systems

In one embodiment, the method described in the patent includes receiving, at a VR headset, an initial input from a user in a first modality; determining an intent corresponding to the initial input via an intent understanding module; generating one or more candidate continuation inputs based on the intent, each in a respective candidate modality different from the first modality; and presenting, at the VR headset, a suggested input corresponding to the one or more candidate continuation inputs.

7. Meta Patent | Monochrome and color images fusion for artificial reality systems

In one embodiment, the computing system may receive a color image captured by a color camera and a monochrome image captured by a monochrome camera. The color camera and the monochrome camera are associated with an artificial reality system. The computing system may compute histogram statistics for each of the color image and the monochrome image, and perform tone map matching to normalize the monochrome image with respect to the color image based on the histogram statistics. The computing system may perform local motion estimation to compute a motion vector indicative of pixel correspondence between the normalized monochrome image and the color image. The computing system may generate a monochrome merged image for display on an artificial reality system by adding, for each pixel in the normalized monochrome image, color information extracted from corresponding pixels in the color image using the motion vectors.
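
The tone-map-matching step can be pictured with a standard histogram-matching routine: the monochrome image's gray levels are remapped so that their cumulative histogram follows the luminance histogram of the color image. The sketch below is a minimal numpy version of that generic technique and is not the patent's actual procedure.

```python
# Minimal sketch of histogram-based tone matching: remap the monochrome
# image so its cumulative histogram matches the luminance histogram of the
# color image. A standard technique shown for illustration only.
import numpy as np

def match_histogram(mono: np.ndarray, color: np.ndarray) -> np.ndarray:
    luma = color.mean(axis=2)                       # crude luminance proxy
    src_vals, src_counts = np.unique(mono.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / mono.size
    ref_sorted = np.sort(luma.ravel())
    ref_cdf = np.arange(1, ref_sorted.size + 1) / ref_sorted.size
    # Map each gray level to the reference value with the same CDF rank.
    mapped = np.interp(src_cdf, ref_cdf, ref_sorted)
    return np.interp(mono.ravel(), src_vals, mapped).reshape(mono.shape)

mono = np.random.randint(0, 256, (120, 160)).astype(np.uint8)
color = np.random.randint(0, 256, (120, 160, 3)).astype(np.uint8)
normalized = match_histogram(mono, color)
print(normalized.shape, normalized.dtype)
```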

8. "Meta Patent | Self presence in artificial reality

In one embodiment, the artificial reality system described in the patent may provide a user self-representation in an artificial reality environment based on a self portion from an image of the user. The artificial reality system may generate the self-representation by applying a machine learning model to classify the self portions of the images. The machine learning model may be trained to recognize the self portions of images based on a set of training images in which portions are labeled as depicting or not depicting the user from the self perspective. The artificial reality system may display the self portion as a self-representation by positioning it in the artificial reality scene relative to the user's perspective in the artificial reality environment. The artificial reality system may also recognize movement of the user and may adjust the self-representation to match the user's movement, providing a more accurate self-representation.

9. "Meta Patent | Method of generating a virtual environment by scanning a real-world environment with a first device and displaying the virtual environment on a second device (Meta Patent: Method of generating a virtual environment by scanning a real-world environment with a first device and displaying the virtual environment on a second device)

In one embodiment, the patent describes a method comprising scanning a real-world environment with a first device associated with a first user; generating a three-dimensional model of said real-world environment; transmitting said three-dimensional model to a head-worn device associated with said first user; determining a pose of said head-worn device by positioning said head-worn device within said three-dimensional model based on images captured by a second camera of said head-worn device; displaying on said head-worn device a virtual space, generated from said three-dimensional model, that corresponds to said scanned real-world environment as viewed from said pose; and transmitting data corresponding to said three-dimensional model and said head-worn device's pose to a remote head-worn device of a second user, said data configured to be used by said remote head-worn device to present a first avatar whose pose corresponds to the first user within said virtual space.

10. Meta Patent | Automatic colorization of grayscale stereo images

In one embodiment, the computing system may use a first camera in a first camera pose to capture a first grayscale image and a second camera in a second camera pose to capture a second grayscale image. The computing system may use an RGB camera in a third camera pose to capture a reference color image. The computing system may use a colorization machine learning model to generate a first color image having the same camera pose as the first camera pose based on the reference color image and the first grayscale image. The computing system may use the colorization machine learning model to generate a second color image having the same camera pose as the second camera pose based on the reference color image, the second grayscale image, and the first color image.

11. Meta Patent | Techniques to provide user authentication for a near-eye display device

In one embodiment, authentication techniques based on user biometric information are provided for a wearer of a near-eye display device engaged in multimedia content and/or accessing restricted data. Biological and/or behavioral biometric information associated with the user may be captured by sensors and similar devices integrated or communicatively coupled to the near-eye display device. The biological biometric information may include data associated with a user's face, fingerprint, palm print, iris, retina, cardiac electrical signals, and the like. Behavioral biometric information may include data associated with the user's movement, gait, gestures, voice, and the like. An authentication technique may be automatically selected based on environmental conditions. The near-eye display device may detect continuous wear by the user and refresh or continue authentication after a period of non-use or between two different authentication sessions.

12. Meta Patent | Managing updates for an artificial reality system in cross-version and cross-platform environments

In one embodiment, the methods described in the patent provide an XR runtime that can dynamically execute a runtime version. Implementations may provide multiple versions of the XR runtime to allow older versions of the XR runtime library to be retained as updates occur. Implementations may maintain backward compatibility and allow older experiences to run in the application. Implementations may access metadata associated with an experience to determine which runtime version the experience can be executed under, and may dynamically run the latest runtime version that can support the experience.

13. Meta Patent | Perspective sharing in an artificial reality environment between two-dimensional and artificial reality interfaces

In one embodiment, the method described in the patent may allow a second user on a 2D interface to follow a first user in an XR environment using an XR interface and view the virtual world from a represented location of the first user in the XR environment. The implementation may give the second user the same point of view as the first user, or a different point of view that may pivot around the represented position of the first user in the XR environment. At any time, the second user can "pop the bubble" to re-enable the ability of the second user's representation to move independently and interact with the virtual world without being locked to the first user's representation.

14. Meta Patent | Url access to assets within an artificial reality universe on both 2d and artificial reality interfaces

In one embodiment, the implementation is directed to URL access to an asset within an artificial reality world on both the 2D and XR interfaces. The implementation may use a uniform URL schema to represent the asset. Accessing the URL from the 2D interface may direct the user to a login page from which the user may choose to continue and browse the world from the 2D interface or instead browse the world from the XR interface. When accessing the URL from the XR interface, the user can see prefetched information about the destination.

15. Meta Patent | Navigating a user interface using in-air gestures detected via neuromuscular-signal sensors of a wearable device, and systems and methods of use thereof

In one embodiment, the patent describes methods for navigating a user interface using gestures detected at a wearable device. Example methods include receiving, via one or more neuromuscular-signal sensors of a wrist-wearable device worn by a user, data generated during an in-air movement of the user's wrist; moving a focus on the user interface based on the in-air wrist movement; and receiving, via said one or more neuromuscular-signal sensors, additional data generated while the user performs an in-air gesture. The method determines that the in-air gesture is an execution gesture and executes a command corresponding to the execution gesture.
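
The control flow amounts to a small event loop: wrist-movement data moves a focus over user-interface items, and a gesture classified as an execution gesture triggers the focused item's command. The sketch below assumes a stream of already-decoded events; the event names, classifier output, and command table are illustrative, not the patent's.

```python
# Illustrative sketch of the described control flow: in-air wrist movements
# move a focus index over UI items, and an "execute" gesture triggers the
# focused item's command. Event names and the command table are assumptions.
from dataclasses import dataclass

@dataclass
class WristEvent:
    kind: str        # "move" or "gesture"
    delta: int = 0   # focus steps for "move" events
    label: str = ""  # classifier output for "gesture" events

MENU = ["open", "reply", "dismiss"]
COMMANDS = {"open": lambda: print("opening"),
            "reply": lambda: print("replying"),
            "dismiss": lambda: print("dismissing")}

def run(events, focus=0):
    for ev in events:
        if ev.kind == "move":                       # neuromuscular wrist data
            focus = max(0, min(len(MENU) - 1, focus + ev.delta))
        elif ev.kind == "gesture" and ev.label == "execute":
            COMMANDS[MENU[focus]]()                 # execution gesture
    return focus

run([WristEvent("move", +1), WristEvent("move", +1),
     WristEvent("gesture", label="execute")])       # prints "dismissing"
```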

16. Meta Patent | Multi-stage gestures detected based on neuromuscular-signal sensors of a wearable device to activate user-interface interactions with low false positive rates, and systems and methods of use thereof

In one embodiment, the method described in the patent includes, while a gesture is maintained, receiving a first indication that an adjustment gesture of a first magnitude has been performed for a user interface object associated with a plurality of values; in response to receiving the first indication, adjusting the user interface object to have a first state after moving through some of the plurality of values based on the first magnitude; and, after receiving an indication that the gesture has been released and in response to receiving a second indication that the adjustment gesture has been performed, forgoing adjustment of the user interface object such that the user interface object continues to have the first state.

17. Meta Patent | Look to pin on an artificial reality device

In one embodiment, when a notification is to be displayed, the artificial reality notification system may place the notification at a predefined location in the user's field of view until the user's gaze is pointed directly at the notification. When the user's gaze is pointed at the notification, the artificial reality notification system may make the notification world-locked, allowing the user to move their head to bring the notification to the center of the field of view, move closer to the notification to make it larger, move around the notification to see it from different angles, and so on. The notification may be canceled if the user does not direct their gaze to the notification within a first threshold amount of time, or when the user looks away from the world-locked notification for a second threshold amount of time.
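
A minimal sketch of the two-timeout behavior, assuming a simple per-frame update with invented threshold values; the state names and timing model are illustrative only.

```python
# Sketch of the two-timeout behavior described above: a notification is
# dismissed if it is never gazed at within a first threshold, or if, once
# world-locked, the user looks away from it for a second threshold.
def update_notification(state, gazed_at, dt,
                        t_unseen=5.0, t_look_away=3.0):
    # state: dict with 'mode' in {"head-locked", "world-locked", "dismissed"}
    # and a 'timer' accumulating seconds.
    if state["mode"] == "head-locked":
        if gazed_at:
            state.update(mode="world-locked", timer=0.0)
        else:
            state["timer"] += dt
            if state["timer"] >= t_unseen:
                state["mode"] = "dismissed"
    elif state["mode"] == "world-locked":
        state["timer"] = 0.0 if gazed_at else state["timer"] + dt
        if state["timer"] >= t_look_away:
            state["mode"] = "dismissed"
    return state

s = {"mode": "head-locked", "timer": 0.0}
for gaze in [False, False, True, False, False, False, False]:
    s = update_notification(s, gaze, dt=1.0)
print(s["mode"])   # "dismissed" after looking away for >= 3 s
```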

18. Meta Patent | Absolute phase unwrapping for fringe analysis in an eye tracking application

In one embodiment, the patent describes absolute phase unwrapping for an eye tracking system. Said system includes a fringe projector, a camera, and an illumination module. The projector provides a periodic fringe pattern at the eye and the camera captures reflections of the fringe pattern. A Fourier transform, a wavelet transform, a phase shift, and/or variants thereof may be used to determine the phase of the fringe pattern and generate a wrapped phase map. Spatial phase unwrapping is used to unwrap the phase, generating a relative unwrapped phase map. An illumination module with a known position relative to the fringe projector and the camera is used to generate a glint at the eye. The glint detected in the original image is used to convert the relative unwrapped phase map into an absolute unwrapped phase map.
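
For context, the wrapped phase map mentioned above can be obtained with the textbook N-step phase-shifting formula sketched below; the spatial unwrapping and the glint-based conversion to absolute phase are not shown, and nothing here is specific to the patent's pipeline.

```python
# Standard N-step phase-shifting formula used here only to illustrate how a
# wrapped phase map is obtained from fringe images; the patent's pipeline
# (spatial unwrapping plus a glint-based absolute offset) is not reproduced.
import numpy as np

def wrapped_phase(images):
    # images: list of N frames captured with fringe shifts of 2*pi*k/N.
    n = len(images)
    shifts = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(s) for img, s in zip(images, shifts))
    den = sum(img * np.cos(s) for img, s in zip(images, shifts))
    return np.arctan2(-num, den)        # values wrapped to (-pi, pi]

# Synthetic check: recover a known phase ramp from 4 shifted fringe images.
x = np.linspace(0, 3 * np.pi, 200)
true_phase = np.tile(x, (100, 1))
frames = [0.5 + 0.5 * np.cos(true_phase + 2 * np.pi * k / 4) for k in range(4)]
phi = wrapped_phase(frames)
print(np.allclose(phi, np.angle(np.exp(1j * true_phase)), atol=1e-6))
```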

19. Meta Patent | Gaze adjusted avatars for immersive reality applications

In one embodiment, the method described in the patent includes verifying in the receiver device that visual tracking for the avatar is active, and adjusting a gaze direction of the avatar to a fixed point in the receiver device. Adjusting the gaze direction includes estimating the coordinates of the gaze point in the receiver frame at a later time, and rotating both of the avatar's eyeballs in the receiver device to point toward the gaze point.

20. Meta Patent | Light field directional backlighting based three-dimensional (3d) pupil steering

In one embodiment, the head-mounted display device employs light-field directional backlighting to provide 3D pupil steering. The example light-field directional backlight unit includes two substantially flat components: a light source array and a modulator array. By digitally modifying the illumination pattern on the light source array and pairing the light-field directional backlight unit with an additional transmissive display element serving as the main display panel and one or more viewing optics, the light field created by the backlight unit can be moved in three dimensions in a manner that corresponds to the pupil offset without disturbing the displayed content.

21. Meta Patent | Optical combiner apparatus

In one embodiment, the optical combiner may have a see-through optically transparent substrate and a patterned region included in the optically transparent substrate and disposed along a wave propagation axis of the substrate. The patterned region may be partially optically reflective and partially optically transparent. The patterned regions may include a plurality of optically transparent regions of the optically transparent substrate and a plurality of optically reflective regions tilted with respect to the wave propagation axis of the optically transparent substrate.

22. Meta Patent | Photonic integrated circuits and low-coherence interferometry for in-field sensing

In one embodiment, a photonic integrated circuit may include an optical coupler and a waveguide, said optical coupler having a substrate that is substantially transparent to light of a specified wavelength, and said waveguide configured to route a beam of light having a center wavelength at the specified wavelength. The optical coupler includes a reflector provided at an angle, the reflector being disposed on an angled surface of the substrate. The optical coupler also includes a beam-forming element configured to collect light reflected from the reflector, and the waveguide and reflector are integrated within the substrate.

23. Microsoft Patent | Low motion to photon latency rapid target acquisition

In one embodiment, the system camera and the external camera generate images. The images overlap each other and are aligned to form an overlay image. Content from the external camera image is surrounded by a boundary element in the overlay image. IMU data associated with both the system camera and the external camera is obtained. Based on the IMU data, the amount by which the system camera and/or the external camera have moved since the images were originally generated is determined. Based on that movement, the boundary element is moved to a new position in the overlay image.
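
As a toy illustration of converting IMU-reported rotation into a repositioned overlay element, the sketch below uses a simple pinhole-style approximation; the projection model and focal length are assumptions, not the patent's method.

```python
# Rough sketch of repositioning an overlay element from IMU-reported rotation:
# convert the yaw/pitch change since capture into a pixel offset with a simple
# pinhole model. Parameters are assumptions for illustration only.
import math

def shift_boundary(yaw_delta_deg, pitch_delta_deg, focal_px=800.0):
    # Small-angle pinhole approximation: dx ~ f * tan(yaw), dy ~ f * tan(pitch)
    dx = focal_px * math.tan(math.radians(yaw_delta_deg))
    dy = focal_px * math.tan(math.radians(pitch_delta_deg))
    return dx, dy

def move_element(position, yaw_delta_deg, pitch_delta_deg):
    dx, dy = shift_boundary(yaw_delta_deg, pitch_delta_deg)
    return position[0] + dx, position[1] + dy

print(move_element((640, 360), yaw_delta_deg=1.5, pitch_delta_deg=-0.5))
```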

24. Microsoft Patent | Spatial localization design service

In one embodiment, a synthetic world interface may be used to model digital environments, sensors, and motion in order to evaluate, develop, and improve localization algorithms. A synthetic data cloud service with a library of sensor primitives, a motion generator, and environments with procedural and gaming capabilities facilitates the engineering of manufacturing solutions that have localization capabilities. In one embodiment, a sensor platform simulator operates together with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment.

25. Apple Patent | Haptic output system

In one embodiment, the described haptic output method includes detecting a condition; determining whether a user is wearing a head-mounted haptic accessory that includes an array of two or more haptic actuators; determining an actuation pattern for the haptic actuator array; and, in response to detecting the condition and determining that the head-mounted haptic accessory is being worn by the user, initiating the actuation pattern to produce a directional haptic output configured to direct the user's attention along a direction.

26. Apple Patent | Cameras for multiple views

In one embodiment, the described head-mounted device may have multiple cameras that can be used to generate graphical content, provide video passthrough of the environment, and/or sense objects, people, or events in the environment. The cameras of the head-mounted device may capture views that are output to the display. Other sensors and/or detectors may detect the presence or movement of objects and/or events in the user's environment and provide output that draws the user's attention to those objects and/or events. The output may include a notification, a selectable feature of the visual display output, and/or a view that includes the object and/or event.

27. Apple Patent | Optical assemblies for shared experience

In one embodiment, the described head-mounted device may operate in conjunction with one or more of its optical assemblies to provide a shared experience and shared enjoyment of content to multiple users. Such operation may be facilitated by connections between the head-mounted device and one or both of its optical assemblies, allowing different users to receive the content.

28. Apple Patent | Generating tactile output sequences associated with an object

In one embodiment, the electronic device generates a tactile output sequence in response to detecting that the electronic device is oriented within an orientation range that changes as the distance between the electronic device and a corresponding object changes. In one embodiment, the electronic device changes one or more characteristics of the tactile output in response to detecting a change in the electronic device's orientation relative to the corresponding object. In one embodiment, the electronic device generates a tactile output whose characteristics indicate the orientation of the electronic device's camera relative to one or more AR planes. In one embodiment, the electronic device generates a tactile output that indicates a data-sharing process with a second electronic device.

29. Apple Patent | Indicating a position of an occluded physical object

In one embodiment, the described method includes determining a first position value associated with a physical agent based on environmental data from one or more environment sensors; determining, based on the first position value, that a portion of computer-generated content satisfies an occlusion criterion with respect to a corresponding portion of the physical agent; and, in response to determining that the occlusion criterion is satisfied and that the physical agent satisfies a movement criterion or a pose criterion, generating a mesh associated with the physical agent based on the first position value and displaying the mesh on the display.

30. Apple Patent | Object centric scanning

In one embodiment, an example process may include acquiring sensor data while a device moves within a physical environment that includes an object, identifying the object in at least some of the images, tracking the position of the device during acquisition of the images based on identifying the object in at least some of those images, where the position identifies the device's location relative to a coordinate system defined based on the position and orientation of the object, and generating a 3D model of the object based on the images and the positions of the device during image acquisition.

31. Apple Patent | Presenting an environment based on user movement

In one embodiment, a computer-generated reality environment that includes a virtual object is presented, and user movement occurring in the physical environment is detected. In response to determining that the detected user movement is toward the virtual object, and that the virtual object obscures a real object in the physical environment, it is determined whether the detected movement is directed at the virtual object or at the real object. In accordance with a determination that the detected user movement is directed at the real object, the visual appearance of the virtual object is modified, where modifying the visual appearance includes revealing at least a portion of the real object. In accordance with a determination that the detected user movement is directed at the virtual object, the presentation of the virtual object is maintained so that it continues to obscure the real object.

32. Apple Patent | Method and device for masked late-stage shift

In one embodiment, the described method includes generating a first image based on a first predicted pose of the device for a display time period; generating a mask indicating a first region of the first image and a second region of the first image; generating a second image by shifting the first region of the first image, but not the second region, based on a second predicted pose of the device for the display time period; and displaying the second image on the display during the display time period.

33. Apple Patent | Methods and devices for detecting and identifying features in an ar/vr scene

In one embodiment, the described method includes obtaining first pass-through image data characterized by a first pose; obtaining respective pixel characterization vectors for pixels in the first pass-through image data; identifying a feature of an object within the first pass-through image data in accordance with a determination that the feature's pixel characterization vectors satisfy a feature confidence threshold; displaying the first pass-through image data together with an AR display marker corresponding to the feature; obtaining second pass-through image data characterized by a second pose; transforming the AR display marker to a position associated with the second pose in order to track the feature; and displaying the second pass-through image data while, based on the transformation, maintaining display of the AR display marker corresponding to the feature of the object.

34. Apple Patent | Image-based detection of surfaces that provide specular reflections and reflection modification

In one embodiment, the described method provides a CGR environment in which virtual content replaces the appearance of the user, or of the user's device, in a mirror or other surface that provides a reflection. For example, the CGR environment may be modified to include a reflection of the user that does not include the device the user is holding or wearing. In another example, the CGR environment is modified so that virtual content, such as a newer version of the electronic device or a virtual wand, replaces the electronic device in the reflection. In another example, the CGR environment is modified so that virtual content, such as an avatar of the user, replaces the user in the reflection.

35. Apple Patent | Method of displaying products in a virtual environment

In one embodiment, a computer-generated environment may include a virtual product display that contains one or more representations of products. The computer-generated environment may also provide additional information associated with the respective product. The user is able to customize the respective product, for example by selecting an accessory and dragging it onto the representation of the product. In one embodiment, the user is able to preview a product by placing a representation of the product on a representation of a part of the user's body.

36. Apple Patent | Colored visual markers for variable use

In one embodiment, the described method involves a device that selects colors for a visual marker that includes colored markings encoding data. Input selecting a source image is received, and the devices, systems, and methods determine colors based on the source image and based on the distances between colors exceeding a spatial distance threshold in a 3D color space. The devices, systems, and methods generate an appearance of the visual marker based on the determined colors, the visual marker including graphical elements that encode data using the determined colors.
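
A minimal sketch of the color-selection idea: pick colors from the source image so that every chosen pair is farther apart than a threshold in a 3D color space. RGB distance and the greedy strategy are stand-ins chosen for illustration; the patent does not specify them.

```python
# Sketch of picking marker colors from a source image so that every pair is
# farther apart than a threshold in a 3D color space. RGB and the greedy
# strategy are illustrative assumptions.
import numpy as np

def pick_colors(image, k=4, min_dist=80.0):
    pixels = image.reshape(-1, 3).astype(float)
    chosen = [pixels[0]]
    for p in pixels:
        if all(np.linalg.norm(p - c) > min_dist for c in chosen):
            chosen.append(p)
            if len(chosen) == k:
                break
    return np.array(chosen)

src = np.random.randint(0, 256, (64, 64, 3))
palette = pick_colors(src)
print(palette)
```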

37. Apple Patent | Devices, methods, and graphical user interfaces for improving accessibility of interactions with three-dimensional environments

In one embodiment, while a view of a three-dimensional environment is visible via a display generation component of a computer system, the computer system receives from the user one or more first user inputs corresponding to selection of a respective direction in the three-dimensional environment relative to a reference point associated with the user, and displays a ray extending from the reference point in the respective direction. While displaying the ray, the system displays a selection cursor moving along the ray independently of user input. When the selection cursor is at a respective position along the ray, the system receives one or more second user inputs corresponding to a request to stop the movement of the selection cursor along the ray and, in response, sets the target position for the next user interaction to the position in the three-dimensional environment that corresponds to the selection cursor's respective position along the ray.
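
The ray-scanning selection can be illustrated with a few lines of vector math: the cursor's distance along the ray grows with time, and the stop input fixes the target point. Speeds, units, and the input model below are assumptions.

```python
# Sketch of the ray-scanning selection described above: a ray extends from a
# reference point in the chosen direction, a cursor advances along it on its
# own, and a "stop" input fixes the target position. Values are illustrative.
import numpy as np

def ray_cursor_target(origin, direction, stop_time, speed=0.5):
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)                   # unit direction from user input
    t = speed * stop_time                    # cursor distance when stopped
    return np.asarray(origin, dtype=float) + t * d

# User selects a direction, lets the cursor run 2.4 s, then issues "stop".
target = ray_cursor_target(origin=[0.0, 1.6, 0.0],
                           direction=[0.0, -0.2, -1.0],
                           stop_time=2.4)
print(target)   # becomes the target position for the next interaction
```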

38. Apple Patent | Method and device for detecting a touch between a first object and a second object

In one embodiment, the described method is used to detect a touch between at least part of a first object and at least part of a second object, where the at least part of the first object has a different temperature than the at least part of the second object. The method includes providing at least one thermal image of a portion of the second object, determining a pattern in at least part of the at least one thermal image that is indicative of a particular temperature value or range, or of a particular temperature-change value or range, and using the determined pattern to detect a touch between the at least part of the first object and the at least part of the second object.

39. Apple Patent | Handheld input devices with sleeves

In one embodiment, the described system may include an electronic device such as a head-mounted device and a handheld input device for controlling the electronic device. The handheld input device may include a stylus and a removable sleeve on the stylus. The input-output capabilities of the handheld input device may be shared between the stylus and the removable sleeve. The stylus may include touch sensor circuitry, a force-sensitive tip, and motion sensors. The sleeve may include conductors for conveying touch input on the sleeve to the stylus's touch sensor circuitry, a deformable member for conveying force on the sleeve to the stylus's force-sensitive tip, and visual markers that can be detected by an external camera and used together with motion sensor data from the stylus to track the handheld input device. The removable sleeve may include a haptic output device and a battery, and may be attached to items in the absence of the electronic device.

40. Apple Patent | Method and device for surfacing physical environment interactions during simulated reality sessions

In one embodiment, the described method includes, while presenting a virtual environment via one or more displays, obtaining a request for an interaction from an external source. The virtual environment includes a first plurality of available presentation regions and a second plurality of unavailable presentation regions. The method further includes determining whether the interaction request from the external source satisfies one or more interaction criteria, and responding in accordance with a determination that the external source satisfies the one or more interaction criteria.

41. Apple Patent | Face engaging structure

In one embodiment, the patent describes a face-engaging structure. The described device includes a display, a facial interface, and a connector between the display and the facial interface. The connector may include a stop.

42. Apple Patent | Conformable facial interface

In one embodiment, the described wearable electronic device includes a display, a frame attachable to the display, a facial interface movably attached to the frame, and a linkage assembly that movably connects the facial interface to the frame. The linkage assembly may include a first arm pivotably attached to the frame, the first arm including a first end and a second end, a second arm pivotably attached to the first end and attached to the facial interface, and a third arm pivotably attached to the second end and attached to the facial interface.

43. Apple Patent | Head-mounted device with optical module illumination systems

In one embodiment, to accommodate variations in interpupillary distance across users, a head-mounted device may have left-eye and right-eye optical modules that move relative to each other. Each optical module may have a display that creates an image and a corresponding lens that presents the image to an associated eye box for viewing by the user. Each optical module includes a lens barrel, to which the display and the lens of the optical module are mounted, and an optical module illumination system. The illumination system may have light-emitting devices, such as light-emitting diodes, that extend along some or all of the peripheral edge of the display. The light-emitting diodes may be mounted on a flexible printed circuit.

44. Apple Patent | Displaying content based on state information

In one embodiment, a head-mounted device may determine contextual information by analyzing sensor data. The head-mounted device may use computer vision analysis to determine contextual information from images of the physical environment around the head-mounted device. Based on the received state information, the head-mounted device may display content, play audio, change device settings on the head-mounted device, an external device, and/or an additional external device, and/or may open an application.

45. Google Patent | Training robot control policies

In one embodiment, the described techniques use augmented reality (AR) sensor data to train a robot control policy, where the AR sensor data comprises physical sensor data into which virtual objects have been injected. In various embodiments, the physical pose of a physical sensor of a physical robot operating in a physical environment may be determined. A virtual pose of a virtual object in the physical environment may also be determined. Based on the physical pose and the virtual pose, the virtual object may be injected into the sensor data generated by one or more physical sensors to generate AR sensor data. The physical robot may operate in the physical environment based on the AR sensor data and the robot control policy. The robot control policy may be trained based on virtual interactions between the physical robot and one or more virtual objects.
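
As a toy illustration of injecting a virtual object into physical sensor data, the sketch below composites a virtual sphere into a real depth image at a chosen pose by taking the per-pixel minimum depth. A real system would use full sensor and robot models; everything here is a simplifying assumption.

```python
# Toy illustration of injecting a virtual object into physical sensor data:
# a virtual sphere at a known pose is rendered into a real depth image by
# taking the per-pixel minimum depth (nearer surface wins).
import numpy as np

def inject_sphere(depth, center_px, center_depth, radius_px):
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = (xs - center_px[0]) ** 2 + (ys - center_px[1]) ** 2 <= radius_px ** 2
    ar_depth = depth.copy()
    ar_depth[mask] = np.minimum(ar_depth[mask], center_depth)  # occlusion
    return ar_depth

real_depth = np.full((120, 160), 3.0)            # flat wall 3 m away
ar = inject_sphere(real_depth, center_px=(80, 60), center_depth=1.2,
                   radius_px=15)
print(real_depth.min(), ar.min())                # 3.0 vs 1.2 after injection
```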

46. Sony Patent | Display Devices

In one embodiment, the described display device includes a laser light source unit that emits, by scanning, laser light used to form video, and a laser light source driving unit, where the laser light source unit can emit two or more laser beams having different beam sizes, or can emit two or more laser beams and switch the number of emitted beams.

47. Sony Patent | Information processing equipment, information processing method, and information processing program

In one embodiment, the described information processing device includes a setting unit that, based on basic trigger area information defining a basic trigger area, determines an extended trigger area having a predetermined positional relationship with the basic trigger area and sets information defining the determined extended trigger area in a storage unit.

48. Sony Patent | Information processing method, information processing device, and non-volatile storage medium

In one embodiment, the described information processing method includes position information detection processing, effect processing, and display processing. The position information detection processing detects distance information for a real object based on depth data acquired by a ToF sensor. The effect processing performs, based on the detected distance information for the real object, occlusion processing between the real object and a CG-generated AR object. The display processing displays the result of the occlusion processing on a display.

49. Samsung Patent | Method and apparatus for processing packet in wireless communication system

In one embodiment, the patent describes a method of operating a transmitting device in a wireless communication system, including: obtaining multiple radio link control (RLC) data protocol data units (PDUs) in an RLC entity; generating, based on resource allocation in the RLC entity, an automatic repeat request (ARQ) PDU corresponding to a set of the RLC data PDUs; transmitting, from the medium access control (MAC) layer through lower layers to a receiving device, a first MAC PDU generated based on the set of RLC data PDUs and the ARQ PDU; and receiving a status PDU associated with the set of RLC data PDUs when a specific condition is satisfied, where the ARQ PDU includes information for handling operations related to the set of RLC data PDUs.

50. Samsung Patent | Apparatus and method for transmitting and receiving signal according to channel state in wireless communication system

In one embodiment, the patent describes a method performed by a UE. The method includes receiving, from a BS, mapping information between an RS and combinations of a reflection pattern and a transmit beam of the BS; receiving scheduling information that includes information about the channel state corresponding to the combination of the reflection pattern and the transmit beam of the BS; identifying a receive beam based on at least one RS received from the BS, the mapping information, and the scheduling information; and receiving a downlink signal from the BS through the receive beam. The reflection patterns relate to an RIS.

51. Samsung Patent | Display device and method for manufacturing the same

In one embodiment, the described display device includes a substrate and first, second, and third light-emitting elements disposed on the substrate, each of which includes a first semiconductor layer, an active layer, and a second semiconductor layer. A third semiconductor layer is disposed on the first light-emitting element, a first color conversion layer is disposed on the second light-emitting element, and a second color conversion layer is provided on the third light-emitting element.

52. Samsung Patent | Display device and method of manufacturing the display device

In one embodiment, the described display device includes a substrate that includes a pixel circuit unit, a partition wall that includes a distributed Bragg reflector (DBR) structure separating a light-emitting area from a non-light-emitting area, and a light-emitting element above the substrate that corresponds to the light-emitting area and includes a first semiconductor layer, an active layer, and a porous semiconductor layer.

53. Samsung Patent | Display device and method for manufacturing the same

In one embodiment, the described display device includes a substrate, multiple pixel electrodes on the substrate, and multiple light-emitting elements on the pixel electrodes, each of which includes a first semiconductor layer, an active layer, and a second semiconductor layer, as well as a barrier layer that surrounds the light-emitting elements and separates them from one another, where the barrier layer includes a semiconductor material and a dopant containing iron or carbon.

54. Samsung Patent | Electronic device identifying direction of gaze and method for operating the same

In one embodiment, the described electronic device may include a camera and at least one processor. The at least one processor may be configured to obtain an image of the user's eye through the camera, obtain pupil data and a retinal image from the obtained eye image, obtain an eye model that includes the retinal image based on the obtained pupil data and the obtained retinal image, and identify at least one of the position of the eye or the gaze direction based on the obtained eye model.

55. Samsung Patent | Method and device for direct passthrough in video see-through (vst) augmented reality (ar)

In one embodiment, the described method includes receiving first and second images from first and second see-through cameras having first and second camera viewpoints; generating a first virtual image corresponding to a first virtual viewpoint by applying a first mapping to the first image, the first mapping being based on the relative positions of the first camera viewpoint, which corresponds to the user's first eye, and the first virtual viewpoint; generating a second virtual image corresponding to a second virtual viewpoint by applying a second mapping to the second image, the second mapping being based on the relative positions of the second camera viewpoint, which corresponds to the user's second eye, and the second virtual viewpoint; and presenting the first virtual image and the second virtual image for the first and second virtual viewpoints on at least one display panel of the augmented reality device.

56. Samsung Patent | Display device

In one embodiment, the described display device includes a display element configured to output light for displaying an image, a waveguide that includes a first surface on which the light is incident and a second surface opposite the first surface, an input coupler disposed in the waveguide to couple the light into the waveguide, and a telecentric assembly. The telecentric assembly is configured so that the angles of incidence of the light beams incident on the input coupler are identical to one another.

57. Lumus Patent | Methods of fabrication of compound light-guide optical elements having embedded coupling-in reflectors

In one embodiment, a stack has first and second faces and multiple LOEs, each LOE having two parallel major surfaces and a first plurality of parallel internal facets oblique to the major surfaces. A first block has third and fourth faces and a second plurality of parallel internal facets. The first block and the stack are bonded so that the second face is joined to the third face and the first and second facets are non-parallel, forming a second block. The second block is cut at a plane passing through the first face, forming a first structure that has an interface surface. A third block has fifth and sixth faces and a plurality of parallel internal reflectors. The third block and the first structure are bonded so that the fifth face is joined to the interface surface and the internal reflectors are non-parallel to all of the facets, forming a second structure. Compound LOEs are cut from the second structure.

58. Qualcomm Patent | Distributed generation of virtual content

In one embodiment, the patent describes systems and techniques for establishing one or more virtual sessions between users. For example, a first device may send a second device a call establishment request for a virtual representation call for a virtual session, and may receive from the second device a call acceptance indicating that the call establishment request has been accepted. The first device may send the second device first mesh information for a first virtual representation of a first user of the first device and first mesh animation parameters for the first virtual representation. The first device may receive from the second device second mesh information for a second virtual representation of a second user of the second device and second mesh animation parameters for the second virtual representation. The first device may generate the second virtual representation of the second user based on the second mesh information and the second mesh animation parameters.

59. Qualcomm Patent | Calibration of interconnected tracking devices

In one embodiment, the patent describes a system for calibrating tracking devices. For example, a related process may include generating, at a primary tracking device of a network of tracking devices, pose information for the primary tracking device; obtaining relative pose information for a secondary tracking device of the network of tracking devices, where the relative pose information for the secondary tracking device includes the relative position and orientation between the primary tracking device and the secondary tracking device; obtaining absolute pose information for the secondary tracking device using the pose information of the primary tracking device and the relative pose information of the secondary tracking device; and sending, at the primary tracking device, the absolute pose information to the secondary tracking device to perform a calibration action.
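
The pose arithmetic implied by the summary can be sketched with 4x4 homogeneous transforms: the secondary device's absolute pose is the primary device's absolute pose composed with the measured relative pose. This is one plausible formulation, not necessarily the patent's.

```python
# Sketch of composing a primary device's absolute pose with a measured
# relative pose to obtain the secondary device's absolute pose, written as
# 4x4 homogeneous transforms.
import numpy as np

def pose(rotation, translation):
    t = np.eye(4)
    t[:3, :3] = rotation
    t[:3, 3] = translation
    return t

def absolute_from_relative(primary_abs, rel_primary_to_secondary):
    return primary_abs @ rel_primary_to_secondary

yaw90 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
primary = pose(np.eye(3), [1.0, 0.0, 0.0])            # primary device pose
relative = pose(yaw90, [0.0, 2.0, 0.0])               # measured offset
secondary = absolute_from_relative(primary, relative)
print(secondary[:3, 3])                                # secondary position
```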

60. Magic Leap Patent | Systems And Methods For Sign Language Recognition

In one embodiment, a sensory eyewear system for a mixed reality device can facilitate the user's interactions with other people or with the environment. As one example, the sensory eyewear system can recognize and interpret sign language and present the translated information to the user of the mixed reality device. The wearable system can also recognize text in the user's environment, modify the text, and render the modified text so that it occludes the original text.

61. Magic Leap Patent | Tunable attenuation of light transmission artifacts in wearable displays

In one embodiment, the patent describes a method of displaying an image using a wearable display system. The method includes directing display light from a display toward the user through an eyepiece to project an image into the user's field of view; determining the relative position between an ambient light source and the eyepiece; and adjusting the attenuation of ambient light from the ambient light source passing through the eyepiece according to the relative position between the ambient light source and the eyepiece.

62. Magic Leap Patent | Methods for refining rgbd camera poses

In one embodiment, the patent describes a method for refining poses that includes receiving multiple poses and computing a set of relative poses by determining a first set of relative poses between image frame pairs for a first subset of image frame pairs, where the time interval between the image frames of each pair is less than a threshold, and determining a second set of relative poses between image frame pairs for a second subset of image frame pairs, where the time interval between the image frames of each pair is greater than the threshold.
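
The pair partitioning described above reduces to splitting frame pairs into short-interval and long-interval subsets around a time threshold, as sketched below; the relative-pose estimation itself is out of scope here, and the timestamps and threshold are illustrative.

```python
# Sketch of the pair partitioning described above: frame pairs are split into
# a short-interval subset and a long-interval subset around a time threshold;
# relative poses would then be estimated per subset.
def split_pairs(frame_times, pairs, threshold=0.5):
    short, long = [], []
    for i, j in pairs:
        dt = abs(frame_times[j] - frame_times[i])
        (short if dt < threshold else long).append((i, j))
    return short, long

times = [0.00, 0.10, 0.20, 1.50, 1.60]
pairs = [(0, 1), (1, 2), (0, 3), (2, 4), (3, 4)]
short_pairs, long_pairs = split_pairs(times, pairs)
print(short_pairs)   # [(0, 1), (1, 2), (3, 4)]
print(long_pairs)    # [(0, 3), (2, 4)]
```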

63. Magic Leap Patent | Cross reality system with accurate shared maps

In one embodiment, a cross reality system enables any of multiple devices to efficiently and accurately access previously stored maps of very large scale environments and to render virtual content specified relative to those maps. The cross reality system may build a persistent map, which may be in canonical form, by merging tracking maps from multiple devices. The map merge process determines the mergeability of a tracking map with a canonical map and merges the tracking map with the canonical map in accordance with mergeability criteria, for example when the gravity direction of the tracking map is aligned with the gravity direction of the canonical map.

64. Magic Leap Patent | Calibration for virtual or augmented reality systems

In one embodiment, cameras may acquire image data at different times as a headset moves through a series of poses. One or more miscalibration conditions of the headset that occur while the headset moves through the series of poses may be detected. The series of poses may be divided into groups of poses based on the one or more miscalibration conditions, and bundle adjustment may be performed for each group of poses using a separate set of camera calibration data. The bundle adjustment for the poses in each group is performed using the same set of calibration data for that group. The camera calibration data for each group is estimated together with the bundle adjustment of the poses in that group.

65. Magic Leap Patent | Lidar simultaneous localization and mapping

In one embodiment, the described system is configured to map information in a mixed reality environment. The described method includes scanning the environment, including capturing multiple points of the environment with a sensor; tracking a plane of the environment; updating observations associated with the environment by inserting a keyframe into the observations; determining whether the plane is coplanar with a second plane of the environment; in accordance with a determination that the plane is coplanar with the second plane, performing planar bundle adjustment on the observations associated with the environment; and, in accordance with a determination that the plane is not coplanar with the second plane, performing planar bundle adjustment on a portion of the observations associated with the environment.

66. Magic Leap Patent | Single pupil rgb light source

In one embodiment, the described display system includes a light source configured to emit first light, a lens configured to receive the first light, and an image generator configured to receive the first light and emit second light. The display system also includes multiple waveguides, where at least two of the waveguides include incoupling gratings configured to selectively couple the second light.

67. Snap Patent | Graphical marker generation system for synchronization

In one embodiment, the patent describes systems and methods for generating an interactive graphical marker that includes a first region having a first indicator and a second region having a second indicator, the second region surrounding the circumference of the first region. The method also monitors an animation of the interactive graphical marker to detect when the first indicator and the second indicator are aligned at a predetermined rotation angle, and launches an interactive game application on a second computing device and a third computing device.

68. Snap Patent | Detecting wear status of wearable device

In one embodiment, the described system transmits a radio signal from a first communication device of a wearable device to a second communication device in the wearable device and measures the signal strength associated with the radio signal received by the second communication device. The system compares the signal strength to a threshold and, based on the comparison of the signal strength to the threshold, generates an indication of a wear status associated with the wearable device.
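
A minimal sketch of the comparison step, assuming the body attenuates the on-device radio link so that RSSI drops while the device is worn; the threshold and sign convention are assumptions for illustration.

```python
# Minimal sketch of the comparison described above: a radio signal sent
# between two components of the wearable is attenuated differently when the
# device is worn, so comparing received signal strength to a threshold yields
# a worn / not-worn indication. Threshold and sign convention are assumed.
def wear_status(rssi_dbm: float, threshold_dbm: float = -55.0) -> str:
    # Assumes the body attenuates the link, lowering RSSI while worn.
    return "worn" if rssi_dbm < threshold_dbm else "not worn"

print(wear_status(-62.0))   # worn
print(wear_status(-48.0))   # not worn
```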

69. Snap Patent | Hand gestures for animating and controlling virtual and graphical elements

In one embodiment, the patent describes an example system for controlling one or more virtual elements on a display in response to hand gestures detected by an eyewear device. The image processing system detects a hand and presents a menu icon on the display according to the detected current position of the hand. The image processing system detects a series of hand shapes in the captured frames of video data and determines whether a detected hand shape matches any of multiple predefined gestures stored in a gesture library. In response to a match, an action is performed according to the matched gesture. In response to an opening gesture, the element animation system presents one or more graphical elements that move incrementally along a path extending away from the menu icon. A closing gesture causes the elements to retreat along a path toward the menu icon.

70. Snap Patent | Partial-bootup execution of instructions in a head-worn augmented reality device

In one embodiment, the described method includes activating a head-worn device and performing a partial bootup. When a user input instruction to perform a function is received, it is determined whether the user input instruction is permitted for partial-bootup execution. The user input instruction is executed based on the user input instruction being permitted for partial-bootup execution, and the bootup of the head-worn device is completed based on the user input instruction not being permitted for partial-bootup execution. The method may also include determining whether the user input instruction requires user authentication in order to be executed, and executing the user input instruction based on the user input instruction being partial-bootup compatible and not requiring user authentication.

71. Snap Patent | Virtual clothing try-on

In one embodiment, a messaging system performs virtual clothing try-on. The virtual try-on method may include accessing a target garment image and a person image of a person wearing a source garment, and processing the person image to generate a source garment mask and a person mask; processing the source garment mask, the person mask, the target garment image, and a target garment mask to produce a warping that indicates the warp to be applied to the target garment image; processing the target garment to warp it according to the warping, generating a warped target garment image; processing the warped target garment image to blend it with the person image, generating a person image with a blended target garment; and processing the person image with the blended target garment to fill holes, generating an output image.

72. Snap Patent | Augmented reality content generators for spatially browsing travel destinations

In one embodiment, a set of augmented reality content generators is selected from multiple augmented reality content generators; a graphical interface that includes multiple selectable graphical items is caused to be displayed at a client device; a selection of a first selectable graphical item among the multiple selectable graphical items is received at the client device, the first selectable graphical item including a first augmented reality content generator corresponding to a particular geographic location; and at least one augmented reality content item generated by the first augmented reality content generator is caused to be displayed at the client device.

73. Snap Patent | Augmented reality content generator for suggesting activities at a destination geolocation

In one embodiment, the described technology receives, at a client device, a selection of a first selectable graphical item, the first selectable graphical item including a first augmented reality content generator corresponding to a particular geographic location. The described technology causes a graphical interface that includes multiple selectable augmented reality content items to be displayed at the client device, each selectable augmented reality content item corresponding to a particular activity based in part on the particular geographic location.

74. Snap Patent | Presenting available augmented reality content items in association with multi-video clip capture

In one embodiment, a system includes a computer-readable storage medium storing a program and a method for presenting available augmented reality (AR) content items; providing a camera mode configured to capture multiple video clips and displaying a capture interface; displaying a carousel interface for presenting a first set of AR content items, each of which is selectable to apply corresponding AR content to the captured video; receiving first user input selecting an included explore tab, the explore tab being selectable to switch to an explore interface for presenting a second set of AR content items; switching from the capture interface to the explore interface; receiving, via the explore interface, second user input selecting an AR content item from the second set; and, in response to receiving the second user input, updating the first set of AR content items to include the selected AR content item.

75. HTC Patent | Antenna structure and head mounted display device

In one embodiment, the patent describes an antenna structure and a head-mounted display device. The antenna structure includes a first structural body, a second structural body, and a feeding element. The first structural body receives a reference ground voltage. The second structural body is coupled to the first structural body. The second structural body includes a conductive portion, a bushing portion, and a shaft, where the shaft passes through the bushing portion, the shaft is electrically coupled to the conductive portion, and the conductive portion is coupled to a feed point. The feeding element is coupled to the feed point and to the first structural body, and is used to transmit and receive radio-frequency signals.

76. HTC Patent | Object tracking method and host

In one embodiment, the described method includes determining a reference motion state based on a first predicted motion state and a calibration factor; obtaining first motion data of the host and second motion data of a reference object; determining a first relative pose of the reference object with respect to the host based on the first motion data, the second motion data, and the reference motion state; and determining a specific pose of the reference object based on the first relative pose.
