Excerpts from New AR/VR Patent Applications filed at the USPTO on March 02, 2024
(XR Navigation Network, March 02, 2024) The U.S. Patent and Trademark Office recently published a new batch of AR/VR patent applications. The 51 filings below were compiled by XR Navigation Network; for more patent disclosures, visit the patent section of XR Navigation Network at https://patent.nweon.com/.
In one embodiment, the method described in the patent comprises identifying, by a wireless communication device, individual data units of a plurality of data units, each corresponding to one of various types of communications; determining, for each of the plurality of data units, parameters indicative of the corresponding data unit according to the various types of communications; selecting data units whose parameters satisfy a first heuristic indicative of a communication type among said types of communications and a second heuristic corresponding to a quality-of-service (QoS) level of that communication type; and transmitting the selected data units, by said wireless communication device, based on said second heuristic.
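The two-stage selection described above can be sketched as follows. This is an illustrative reading of the abstract, not the patent's implementation; the data-unit fields and both heuristic predicates are assumptions.

```python
def select_for_transmission(units, type_heuristic, qos_heuristic):
    """Stage 1: keep units whose parameters indicate the target
    communication type; stage 2: keep those meeting the QoS level."""
    by_type = [u for u in units if type_heuristic(u)]
    return [u for u in by_type if qos_heuristic(u)]

# Hypothetical data units with a type and a latency budget in ms.
units = [
    {"type": "voice", "latency_ms": 10},
    {"type": "voice", "latency_ms": 80},
    {"type": "bulk", "latency_ms": 5},
]
selected = select_for_transmission(
    units,
    type_heuristic=lambda u: u["type"] == "voice",
    qos_heuristic=lambda u: u["latency_ms"] <= 20,
)
```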
In one embodiment, the audio device described in the patent includes a first speaker and a second speaker. The front surface of the second speaker's diaphragm faces the rear surface of the first speaker's diaphragm. When the audio device outputs audio content, the first speaker's diaphragm moves in the opposite direction to the second speaker's diaphragm. This configuration maintains equal pressure in the back volume shared between the first and second speakers, thereby reducing vibration of the audio device while audio content is output.
3.《Meta Patent | Region of interest sampling and retrieval for artificial reality systems》
In one embodiment, the patent-described imaging device is configured to perform eye tracking. The imaging device includes an image sensor and an image controller. The image sensor is configured to capture an image of a portion of a user's face. The image controller is configured to: in a first access step, read a set of sample pixels from the image sensor to main memory; identify a pixel of interest and its position from the set of sample pixels; in a second access step, read the remaining pixels of the photodetector set corresponding to the position of the pixel of interest from the image sensor to main memory; combine the pixel of interest and the remaining pixels of the photodetector set to generate a high-resolution region of interest; and perform gaze-point estimation using the high-resolution region of interest.
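The two-access readout above can be sketched as follows. The sampling stride, the brightness criterion for the pixel of interest, and the ROI size are all assumptions for illustration, not details from the patent.

```python
def first_access(frame, stride=8):
    """First access: read a sparse grid of sample pixels into memory."""
    return {(r, c): frame[r][c]
            for r in range(0, len(frame), stride)
            for c in range(0, len(frame[0]), stride)}

def pixel_of_interest(samples):
    """Pick the brightest sample, e.g. a corneal glint candidate."""
    return max(samples, key=samples.get)

def second_access(frame, center, half=4):
    """Second access: read the remaining full-resolution pixels
    around the pixel of interest to form the high-resolution ROI."""
    r0, c0 = center
    return [row[max(0, c0 - half):c0 + half + 1]
            for row in frame[max(0, r0 - half):r0 + half + 1]]

frame = [[0] * 32 for _ in range(32)]
frame[8][16] = 255                      # synthetic bright spot
poi = pixel_of_interest(first_access(frame))
roi = second_access(frame, poi)
```

Reading only a sparse grid first, then a small dense window, keeps the sensor-to-memory bandwidth far below a full-frame readout.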
4.《Meta Patent | Mesh network for propagating multi-dimensional world state data》
In one embodiment, multidimensional world state data is propagated among nodes of a mesh network and from the mesh network to a client system by applying a multilevel filter. The world state data may represent state data of entities in a shared software application environment. An implementation of a mesh network may propagate the world state data by applying a first level of server node filtering to server nodes of the mesh network and a second level of client specific filtering to client systems connected to the mesh network.
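The two filter levels can be sketched as below. The entity fields, the region-based server filter, and the distance-based client filter are illustrative assumptions; the patent does not specify the filter criteria.

```python
def server_node_filter(world_state, region):
    """Level 1: a server node keeps only entities in its region."""
    return [e for e in world_state if e["region"] == region]

def client_filter(entities, client_xy, radius):
    """Level 2: a client receives only entities near its avatar."""
    cx, cy = client_xy
    return [e for e in entities
            if abs(e["x"] - cx) + abs(e["y"] - cy) <= radius]

world_state = [
    {"id": 1, "region": "north", "x": 0, "y": 0},
    {"id": 2, "region": "north", "x": 50, "y": 50},
    {"id": 3, "region": "south", "x": 1, "y": 1},
]
node_view = server_node_filter(world_state, "north")
client_view = client_filter(node_view, (0, 0), radius=10)
```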
In holographic calls, it is difficult to capture the eyes of the caller due to the lighting effects of the artificial reality headset. However, it may be important to capture the caller's eyes when presenting them, because the eyes convey emotion, gaze, physical features, and other cues that contribute to natural communication. The described implementations therefore briefly turn off the lighting effects of the XR headset so that the caller's eyes can be captured by an external image capture device.
6.《Meta Patent | Systems and methods of configuring reduced repetitions for uwb physical layer headers》
In one embodiment, a duplication reduction system for configuring an ultra-wideband (UWB) physical layer header may include a first UWB device that generates a packet including a first header and a second header. The first header may include a plurality of bits indicative of a data rate and coding scheme for a payload of the packet. The first UWB device may send the packet to a second UWB device. The second UWB device may receive and decode the packet based on the coding scheme and the data rate.
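A minimal sketch of the header fields the abstract mentions is below. The field widths and bit positions are invented for illustration; the actual UWB PHY header layout is defined by IEEE 802.15.4z and is not reproduced here.

```python
def build_header(data_rate_idx, coding_idx):
    """Pack 3-bit data-rate and 3-bit coding-scheme fields (widths assumed)."""
    assert 0 <= data_rate_idx < 8 and 0 <= coding_idx < 8
    return (data_rate_idx << 3) | coding_idx

def parse_header(header):
    """Recover the fields the receiver needs to decode the payload."""
    return {"data_rate": header >> 3, "coding": header & 0b111}

header = build_header(data_rate_idx=5, coding_idx=2)
fields = parse_header(header)
```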
7.《Meta Patent | Transparent combination antenna system》
In one embodiment, the patent describes a transparent combination antenna providing bandwidth below 6 GHz and above 24 GHz spectrum. For example, the transparent combination antenna may include a first layer of transparent conductive material and a second layer of transparent conductive substance. In one embodiment, both the first layer and the second layer may have different pitches. Alternatively, the substrate may be positioned between the first layer and the second layer.
8.《Meta Patent | Authoring context aware policies through natural language and demonstrations》
In one embodiment, the patent describes a method involving defining and modifying behavior in an extended reality environment. Said method includes capturing a natural language interpretation of a rule or policy from a user using one or more audio sensors, extracting features from said natural language interpretation of said rule or policy, predicting a control structure comprising one or more conditional statements based on the extracted features and model parameters learned from historical rules or policies, and generating said rule or policy based on said control structure, wherein said rule or policy includes said one or more conditional statements for performing one or more actions based on an evaluation of one or more conditions.
9.《Meta Patent | Predicting context aware policies based on shared or similar interactions》
In one embodiment, the patent describes methods for predicting a strategy based on shared or similar interactions utilizing an artificial intelligence platform. In particular, data including information about a user and a corpus of strategies is collected, embeddings are generated for the collected data, and strategies are predicted based on the embeddings. Strategies may be predicted based on content-based filtering, collaborative filtering, and/or game theory methods.
10.
In one embodiment, the method described in the patent comprises receiving a request to establish or modify a rule or policy for a given user or activity, obtaining a template based on that user or activity, displaying the template and a tool for editing the template, receiving modifications to the template via a user interacting with the tool of the user interface, and modifying said template based on said modifications to generate said rule or policy.
11.《Meta Patent | Vr venue separate spaces》
In one embodiment, the methods described in the patent may provide a virtual separation space about a virtual venue hosting a virtual VR experience (e.g., concert, sports game, conference, etc.). A more intimate experience can be created for users viewing the VR experience by providing a quiet, curated space.
12.《Meta Patent | Artificial reality browser configured to trigger an immersive experience》
In one embodiment, when loading an immersive experience, the implementation displays an element at the browser chrome of the XR browser. For example, the XR browser may include an application programming interface (API). The API calls may cause the display of the browser chrome element, change the display properties of the browser chrome element, or configure the browser chrome element in any other suitable manner.
13.《Meta Patent | Refining context aware policies in extended reality systems》
In one embodiment, the patent describes a method for refining a context-aware policy in an extended reality system. Related operations include: accessing data collected from user interactions while using the context-aware policy in an extended reality environment; determining a support set and a confidence score for the context-aware policy based on the data; generating replacement policies for the context-aware policy; determining a support set and a confidence score for each of said replacement policies based on said data; identifying, based on said support sets and confidence scores, a replacement policy from said replacement policies as a replacement for said context-aware policy; and updating one or more conditions or actions defined by said context-aware policy with a modified version of said one or more conditions or actions defined by said replacement policy to generate an updated context-aware policy.
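The support/confidence terminology above matches association-rule mining, so one plausible reading (our assumption, not the patent's definition) is the following sketch, where the interaction fields and the policy condition are invented for illustration:

```python
def support_and_hits(interactions, condition):
    """Support: fraction of interactions where the policy condition held."""
    hits = [i for i in interactions if condition(i)]
    return len(hits) / len(interactions), hits

def confidence(hits):
    """Confidence: among hits, fraction where the user kept the action."""
    return sum(1 for i in hits if i["accepted"]) / len(hits) if hits else 0.0

interactions = [
    {"at_home": True, "accepted": True},
    {"at_home": True, "accepted": False},
    {"at_home": False, "accepted": True},
    {"at_home": True, "accepted": True},
]
sup, hits = support_and_hits(interactions, lambda i: i["at_home"])
conf = confidence(hits)
```

A replacement policy whose support and confidence both exceed the original's would then be chosen as the update.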
14.《Meta Patent | Authoring context aware policies with intelligent suggestions》
In one embodiment, the method described in the patent involves generating and modifying strategies based on user activities using an artificial intelligence platform. In particular, data including characteristics of activities performed by the user in real-world and virtual environments are collected, a control structure is predicted based on data and model parameters learned from historical policies, and new policies are generated or pre-existing policies are modified based on the control structure.
15.《Meta Patent | Enhanced grin lc lens response time using temperature control》
In one embodiment, the system described in the patent includes an optic having a GRIN LC lens; a sensor configured to evaluate properties of the liquid crystal layer; a heat source; and a controller. The controller is configured to mediate heat flow between the heat source and the liquid crystal layer based on a signal provided by the sensor.
In one embodiment, a spherical mechanical interface is configured for coupling an eyeglass arm to a frame. The spherical mechanical interface includes a surface having substantially spherical curvature and at least two apertures extending through the surface. Said surface is configured to be secured to a portion of the eyeglass arm by a fastener, and further secured to said portion of the eyeglass arm by another fastener.
17.《Meta Patent | System and method using eye tracking illumination》
In one embodiment, the eye tracking system described in the patent includes an eye tracking camera and one or more illuminator assemblies. The illuminator assembly includes a light source and provides infrared or near infrared (NIR) light. The radiated light from the light source is received by the beam shaping element and provided as beam shaping light to the viewport. The beam shaping element controls the direction and/or extension of the beam shaping light. A collimator may be used between the light source and the beam shaping element to collimate the radiated light to the beam shaping element for efficiency. The beam shaping element may be a diffractive optical element (DOE) having a diffractor angle and shape selected based on the specified direction and extension of the beam shaping light.
In one embodiment, a display system includes a wearable eyewear device having a projector for generating display light associated with an image and a waveguide for propagating the display light to a viewing window. The waveguide includes volume Bragg gratings (VBGs), which may be grouped in sets of three or more gratings having the same horizontal period, allowing each color to be coupled out of the waveguide through the same type of grating. Fully reflective (100%) mirrors or mirror arrays are used for broad-spectrum reflection of input light into the waveguide.
19.《Meta Patent | Fiber illuminated backlight and monitor》
In one embodiment, the patent describes backlight technology for a headset display panel. The backlight may include a light guide panel, the light guide panel being illuminated via a light transmission element coupled to a light source. The light guide panel in turn illuminates the display panel. The light source is disposed within the headset but set away from the display panel. A photodiode is coupled to a surface of the light transmission element and is configured to detect a characteristic of light transmitted by the light transmission element.
20.《Meta Patent | Photodiode monitor for fiber illuminated backlight》
In one embodiment, the method described in the patent may include emitting light from a light source. The light may be transmitted from the light source via a light transmission element to illuminate a light guide panel disposed away from the light source. The light guide panel may illuminate a display panel disposed away from the light source. The photodiode may detect a characteristic of the light transmitted by the light transmission element at a location proximate the display panel and away from the light source. The one or more processors may determine a power level of the light source based, at least in part, on the characteristics detected by the photodiode.
21.《Meta Patent | Reflective polarizer with integrated anti-reflective coating》
In one embodiment, the multilayer polymer film described in the patent includes an anti-reflective coating directly overlying the reflective polarizer stack. The reflective polarizer stack includes alternating first and second polymer layers, wherein the first layer includes an isotropic polymer film and the second layer includes an anisotropic polymer film. The anti-reflective coating (ARC) includes alternating third and fourth layers, wherein the third layer includes an isotropic polymer film or an anisotropic polymer film and the fourth layer includes an anisotropic polymer film. The multilayer polymer film may be formed by co-extrusion wherein the reflective polarizer stack and the anti-reflective coating are formed simultaneously.
22.《Microsoft Patent | Systems and methods for generating depth information from low-resolution images》
In one embodiment, a system for generating depth information from low-resolution images is configured to access a plurality of image frames capturing an environment, identify a first set of image frames from the plurality of image frames, and use the first set of image frames as input to generate a first image comprising a first composite image of the environment. The first composite image has an image resolution that is higher than a resolution of the image frames of the first set. The system is also configured to obtain a second image of the environment, wherein there is parallax between a capture viewpoint associated with the first image and a capture viewpoint associated with the second image. The system is further configured to generate depth information of the environment based on the first image and the second image.
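Once pixel correspondences between the two parallax-separated images are found, depth follows from the standard pinhole-stereo relation; the numbers below are illustrative, and the correspondence search itself (the hard part) is not shown.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: depth = focal_length * baseline / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 1000 px focal length, 10 cm baseline, 50 px disparity.
depth_m = depth_from_disparity(1000.0, 0.10, 50.0)
```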
23.《Apple Patent | Method and apparatus for presenting a cgr environment based on audio data and lyric data》
In one embodiment, a device comprising a processor, a non-transitory memory, a speaker, and a display performs semantic analysis on audio data and lyrics data to generate CGR content to accompany an audio file. The method includes obtaining an audio file comprising audio data and lyrics data associated with the audio data; performing natural language analysis on at least a portion of the lyrics data to determine a plurality of candidate meanings for that portion; performing semantic analysis on that portion by selecting, based on a corresponding portion of the audio data, one of the plurality of candidate meanings as the meaning of that portion; and generating CGR content associated with that portion based on its meaning.
24.《Apple Patent | Scene classification》
In one embodiment, the patent describes an exemplary process for recognizing a type of physical environment in a plurality of types of physical environments. Said process includes obtaining image data corresponding to a physical environment using one or more cameras; identifying at least one portion of an entity in the physical environment based on the image data; determining, based on said at least one portion of said entity identified, whether said entity is an entity of a first type; determining, if said entity is an entity of said first type, a type of said physical environment; and rendering one or more representations of virtual objects and entities.
25.《Apple Patent | Presenting content based on a point of interest》
In one embodiment, the electronic device may use one or more sensors to detect a user's intent for content associated with a point of interest in the user's physical environment. In response to the user's intent for the content, the electronic device may send information associated with the point of interest to an external server. The sent information may include depth information, color information, feature point information, location information, and the like. The external server may compare the received information with a database of points of interest and identify matching points of interest in the database. The external server then sends additional scenario information and/or application information associated with the matched points of interest to the electronic device. The electronic device may run the application and/or present content based on the received information associated with the matching points of interest.
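The server-side matching step above can be sketched as a nearest-neighbor search over stored feature vectors. The feature representation and distance metric are assumptions; the patent only says the server "compares the received information with a database of points of interest."

```python
def match_poi(query, database):
    """Return the stored point of interest whose feature vector is
    closest (squared Euclidean distance) to the query features."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda poi: sq_dist(poi["features"], query))

database = [
    {"name": "statue", "features": [0.9, 0.1]},
    {"name": "mural", "features": [0.1, 0.8]},
]
match = match_poi([0.85, 0.15], database)
```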
26.《Apple Patent | Tracking systems with electronic devices and tags》
In one embodiment, the electronic device described in the patent may include wireless communication circuitry that receives location information from tags coupled to different items. The electronic device may use the tags to maintain tracking of the different items. The electronic device may include control circuitry that analyzes historical tracking data from the tags to identify patterns and relationships between the tags. Based on the historical tracking data, the control circuit may categorize tagged items into bags of other tagged items and may automatically alert a user when a tagged item is missing from its bag. The control circuit may generate rules based on the historical tracking data, such as rules regarding acceptable ranges between marked items and acceptable ranges between marked items and electronic devices. The control circuit may determine when to provide real-time location information and other notifications for the tagged items based on the historical tracking data.
27.《Apple Patent | Presenting content based on a state change》
In one embodiment, the electronic device described in the patent may maintain a list of possible locations of the electronic device and a list of possible activities of said electronic device. The electronic device may collect sensor data and determine the location and activity of the electronic device based on the sensor data. In response to detecting a change in location and/or activity, the electronic device may use at least one previously turned off sensor to obtain additional sensor data. Using the additional sensor information, the electronic device may determine to present content to the user. In response to detecting the change in location and/or activity, the electronic device may increase the sampling rate (and power consumption) of the at least one sensor.
28.《Apple Patent | Head-mounted electronic device with magnification tool》
In one embodiment, the head-mounted device described in the patent includes a display configured to display an image and simultaneously display a magnification window presenting an enlarged portion of the image. The magnification window is disposed in a magnification plane that is fixed relative to a user's head. One or more processors in the head-mounted device may be used to perform a first ray casting operation to identify an input point where detected user input intersects the magnification window, obtain a remapped point from the input point, compute an orientation vector based on the remapped point and a reference point associated with the user's head, obtain a shifted point by displacing the remapped point from the magnification plane to another plane parallel to the magnification plane, and use the shifted point and the orientation vector to perform a second ray casting operation.
29.《Apple Patent | Interactive motion-based eye tracking calibration》
In one embodiment, the patent describes a method for performing a calibration process for calibrating an eye tracking device, wherein a stimulus object is displayed within a particular display area such that the stimulus object is moved, at least temporarily, along a defined trajectory and an image of at least one eye of at least one user is captured during the display of the stimulus object. Based on the captured image, gaze data is provided and a point of gaze of the at least one eye of the user relative to the display area is determined based on the gaze data.
30.《Apple Patent | Electronic devices with sweat mitigation structures》
In one embodiment, the head-mounted device described in the patent may include a main housing portion and a display in the main housing portion. The optical seal may be coupled to the main housing portion and may at least partially surround the viewport. The head-mounted device may include a sweat-relieving structure, such as a moisture-conducting channel in the optical seal, to direct sweat and other moisture away. The moisture-conducting channels may be formed by grooves or moisture-absorbing fabric. A control circuit may regulate the fan based on humidity, temperature, or fog detected by sensors. The optical seal may include a directional pattern of recesses through which moisture is guided away. A moisture barrier may be interposed between the interior foam layer and the exterior light blocking fabric to prevent perspiration from penetrating the foam.
31.《Apple Patent | Handheld controllers with charging and storage systems》
In one embodiment, the system described in the patent may include an electronic device such as a head-mounted device and a handheld controller for controlling the electronic device. The handheld controller may have a housing, said housing having an elongate shaft extending between first and second tip portions. The handheld controller may have power receiving circuitry configured to receive power from a power source. The power source may be incorporated into an electronic device, such as a wireless charging cradle or charging stick, a battery case, or a head-mounted device. The power source may supply power through terminals that form ohmic contacts with mating terminals in the handheld controller, or may wirelessly transmit power using a capacitive coupling or inductive charging arrangement. Magnets may be used to hold and align the elongate shaft of the handheld controller at the power source.
32.《Apple Patent | Fan with debris mitigation》
In one embodiment, the headgear may include a fan, said fan being effective in managing heat while reducing the intrusion of particles and other debris. The fan may include a protrusion, the protrusion may form a zigzag path that directs incoming particles away from the sensitive area. The fan may include openings that allow particles to exit the fan housing and avoid interaction with the impeller. The fan may include variable spacing between the impeller and the substrate to avoid aggregation of particles. The fan may include adhesive pads, the adhesive pads collecting and holding the particles in a position that does not interfere with the rotation of the impeller.
33.《Google Patent | Generating microphone arrays from user devices》
In one embodiment, the patent describes a method of generating a virtual microphone array by: identifying a plurality of microphones, identifying a relative position in space of each of said plurality of microphones, generating a virtual microphone array based on the relative position in space of each of said plurality of microphones, sensing audio at each microphone of said plurality of microphones, and generating an audio signal of the virtual microphone array based on the sensed audio.
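One common way to combine spatially distributed microphones into a virtual array is delay-and-sum beamforming; the patent does not name its combining method, so the sketch below is an assumption. Integer-sample alignment and a 2-D geometry keep it minimal.

```python
SPEED_OF_SOUND = 343.0  # m/s

def plane_wave_delays(positions, direction):
    """Per-microphone arrival delay (s) for a plane wave from `direction`
    (unit 2-D vector), relative to the origin."""
    return [-(x * direction[0] + y * direction[1]) / SPEED_OF_SOUND
            for x, y in positions]

def delay_and_sum(signals, delays, sample_rate):
    """Align each microphone signal by its integer-sample delay, then average."""
    n = len(signals[0])
    out = [0.0] * n
    for sig, d in zip(signals, delays):
        shift = round(d * sample_rate)
        for i in range(n):
            if 0 <= i - shift < n:
                out[i] += sig[i - shift]
    return [v / len(signals) for v in out]

# Two co-located microphones hear the same signal; the beamformed
# output should reproduce it unchanged.
positions = [(0.0, 0.0), (0.0, 0.0)]
delays = plane_wave_delays(positions, (1.0, 0.0))
output = delay_and_sum([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]], delays, 48000)
```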
34.《Google Patent | Methods, systems, and media for generating compressed images》
In one embodiment, the patent describes a method for generating a compressed image. Said method comprises: identifying a multi-planar image (MPI) representing a three-dimensional image, wherein the MPI comprises a plurality of fronto-parallel planes; segmenting said MPI into a plurality of subdimensions, wherein each subdimension comprises a subset of said plurality of fronto-parallel planes; calculating a depth mapping for each subdimension of the MPI; converting each of the depth mappings to a mesh, wherein each mesh corresponds to one of a plurality of layers associated with a multi-depth image (MDI) to be rendered; computing an image for each of said plurality of layers; and storing the meshes and the images corresponding to the plurality of layers of the MDI as the MDI.
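The segmentation and depth-mapping steps can be sketched as below: an MPI is modeled as a list of (depth, alpha-plane) pairs, split into contiguous groups, with one alpha-weighted depth map per group. The grouping rule and weighting are assumptions for illustration.

```python
def split_planes(planes, groups):
    """Partition fronto-parallel planes into contiguous subsets."""
    size = -(-len(planes) // groups)  # ceiling division
    return [planes[i:i + size] for i in range(0, len(planes), size)]

def depth_map(group):
    """Alpha-weighted mean depth per pixel for one subset of planes."""
    h, w = len(group[0][1]), len(group[0][1][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            wsum = sum(alpha[y][x] for _, alpha in group)
            if wsum > 0:
                out[y][x] = sum(d * alpha[y][x] for d, alpha in group) / wsum
    return out

# Four 1x1 planes as (depth, alpha) pairs, split into two layers.
planes = [(1.0, [[1.0]]), (3.0, [[1.0]]), (5.0, [[0.0]]), (7.0, [[1.0]])]
layers = split_planes(planes, 2)
maps = [depth_map(g) for g in layers]
```

Each depth map would then be triangulated into the mesh stored for the corresponding MDI layer.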
35.《Sony Patent | Information processing apparatus and image generation method》
In one embodiment, the patent describes an information processing apparatus comprising at least one processor having hardware. Said at least one processor generates a first content image to be displayed on a head-mounted display in a three-dimensional virtual reality space, generates a second content image to be displayed on a flat panel display, and generates a third content image from said first content image and/or said second content image.
36.《Sony Patent | Non-fungible tokens (nfts) for management of virtual assets》
In one embodiment, the electronic device tracks a virtual asset associated with a first user on a virtual reality platform and receives metadata associated with the virtual asset. Based on storage of the metadata to a distributed ledger associated with the virtual reality platform, a first NFT associated with the virtual asset is created. The first NFT is split into a set of NFTs. Each NFT is associated with user permissions corresponding to the virtual asset of the first user. The set of NFTs is stored in the distributed ledger. Based on user input, a second NFT associated with the virtual asset is retrieved from the set of NFTs. Ownership or use of the virtual asset is controlled on the virtual reality platform based on the second NFT.
37.《Sony Patent | Information processing equipment, information processing method, and program》
In one embodiment, the information processing apparatus described in the patent includes a captured image acquisition portion that acquires a captured image taken by a head-mounted display camera using a capture parameter adjusted by the camera based on brightness, and a play area control portion. The play area control portion detects a play area defining the movable range of a user by analyzing the captured image while changing an analysis condition based on the brightness estimated from the capture parameter, and then acquires 3D information about real objects.
38.《Sony Patent | Dual camera tracking system》
In one embodiment, a dual camera tracking system includes a primary imager and a secondary imager, with the output of the secondary imager used to change the target and/or focus of the primary imager. Both imagers may be mounted on top of a common housing. The common housing may be a head-mounted display.
39.《Sony Patent | Information processing apparatus and information processing method》
In one embodiment, the information processing apparatus described in the patent comprises an image correction portion that corrects a camera image captured by a camera of a head-mounted display, a state estimation portion that estimates a state of a real body using the corrected camera image, and a calibration portion, said calibration portion causing said head-mounted display to display a guide image. Further, the calibration portion collects data from feature points, performs the calibration, and updates the calibration parameters used by said image correction portion.
40.《Sony Patent | Information processing apparatus, information processing method, and program》
In one embodiment, the information processing apparatus described in the patent includes a display controller. The display controller detects an interfering object based on a position of a user's point of view and a position of at least one object displayed on a display that performs a stereoscopic display based on the user's point of view; and controls the display of the interfering object performed at the display such that stereoscopic visual contradictions associated with the interfering object are suppressed.
In one embodiment, the patent describes a hologram recording medium that inhibits degradation of storage stability in an unexposed state. The hologram recording medium includes a substrate and a photosensitive layer. The substrate has an oxygen permeability of greater than 0.1 cm³·(m²·day·atm)⁻¹ and 10,000 cm³·(m²·day·atm)⁻¹ or less, as measured at 23°C and 0% RH. The photosensitive layer comprises a polymerizable compound, the polymerizable compound comprising a compound represented by a particular general formula.
42.《Sony Patent | Object recognition method and time-of-flight object recognition circuitry》
In one embodiment, the patent describes a method of object recognition from time-of-flight camera data, said method comprising: recognizing a real object based on a pre-trained algorithm, wherein the pre-trained algorithm is trained on time-of-flight training data, wherein the time-of-flight training data is generated based on a combination of real time-of-flight data indicative of a background and simulated time-of-flight data representing a simulated object, and wherein a masked simulated object is generated by applying a mask to synthetic image data overlaying the simulated object.
43.《Sony Patent | Information processing apparatus and information processing method》
In one embodiment, the patent describes an information processing apparatus and an information processing method comprising obtaining data of content and cutting a first field of view image corresponding to a field of view of a first user from the content image based on the data of content. Further, field of view information indicating a field of view of a second user viewing the content image is acquired. In a display device, the first field of view image is displayed, and the second user is displayed based on the field of view information of the second user.
44.《Sony Patent | Augmented reality system with tangible recognizable user-configured substrates》
In one embodiment, a tangible substrate such as paper is provided for end-user configuration into, for example, a stairway, a runway, and the like. Machine vision processes images of the substrate to generate an electronic map. Using said map, images of virtual objects are generated and rendered on an AR display as moving across the substrate. The substrate is visible through the display, wherein the virtual object appears to be superimposed on top of the tangible substrate.
In one embodiment, the patent describes a method of performing communication by a first distributed unit (DU) in a wireless communication system, said method may comprise: identifying whether to request a connection to at least one second DU based on resource usage information of the first DU; obtaining inter-DU interface setup information for connecting to said at least one second DU based on the identification; and determining, based on the obtained inter-DU interface setup information, a connection to said at least one second DU for transmission and reception of packets.
In one embodiment, the wearable electronic device described in the patent may include a housing, at least one processor included in the housing, at least one first camera module configured to capture an image of a first field of view, and at least one second camera module configured to capture an image of a second field of view different from the first field of view.
47.《Samsung Patent | Video encoding method and device and video decoding method and device》
In one embodiment, the patent describes video encoding/decoding methods and apparatus that determine an intra prediction mode for a current block based on the width and height of the current block. When the current block has a square shape with equal width and height, the intra prediction mode for the current block is determined from a first intra prediction mode candidate list that includes a plurality of predetermined intra prediction directions; when the current block has a non-square shape with unequal width and height, the intra prediction mode is determined from a second intra prediction mode candidate list based on the non-square shape configuration.
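As a rough illustration of shape-dependent candidate selection, the sketch below picks a different candidate list for square, wide, and tall blocks. The mode indices and the shifted ranges are invented for illustration and are not the patent's actual tables:

```python
# Hedged sketch: choosing an intra-prediction mode candidate list by
# block shape, in the spirit of the patent. The mode indices and the
# wide/tall re-mappings are illustrative assumptions.
SQUARE_CANDIDATES = list(range(2, 35))   # e.g. 33 angular directions
WIDE_CANDIDATES = list(range(8, 41))     # shifted set for wide blocks (assumed)
TALL_CANDIDATES = list(range(-4, 29))    # shifted set for tall blocks (assumed)

def candidate_modes(width, height):
    """Return the intra-mode candidate list for a block of the given shape."""
    if width == height:          # square block: first candidate list
        return SQUARE_CANDIDATES
    elif width > height:         # wide non-square block
        return WIDE_CANDIDATES
    else:                        # tall non-square block
        return TALL_CANDIDATES
```

The point of the second list is that a wide or tall block benefits from prediction directions biased toward its longer side, which is what the shifted ranges stand in for here.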
In one embodiment, the patent describes an ultra-high pixel-per-inch Micro LED display comprising a Micro LED layer including a plurality of Micro LEDs, a backplane layer including a switching device coupled to the Micro LED layer, and a field shielding member disposed between the plurality of Micro LEDs and the switching device. The field shielding member is configured to shield the switching device from electric fields applied from the plurality of Micro LEDs during operation of the Micro LED display, wherein the Micro LED layer and the backplane layer are formed integrally in a sequentially stacked structure.
49.《Samsung Patent | Display device and driving method thereof》
In one embodiment, the display device described in the patent includes a pixel circuit layer, wherein the pixel circuit layer includes a base layer and circuit elements on the base layer; and a display layer on the pixel circuit layer, wherein the display layer includes a first electrode on the pixel circuit layer; an insulating layer on the first electrode; a second electrode on the insulating layer; a light-emitting stack structure on the second electrode that includes a first insulating layer, a second insulating layer, and a light-emitting layer between the first and second insulating layers; and a third electrode on the light-emitting stack structure. The light-emitting layer includes a two-dimensional material, is separated from the second electrode by the first insulating layer, and is separated from the first electrode by the second insulating layer.
In one embodiment, the electronic device described in the patent includes a processor configured to convert an object into a plurality of virtual objects for display in a visual image output by a display of the head-mounted electronic device. The processor is further configured to verify information about a theme corresponding to the color of an application's execution screen, as well as information about the visual image output by the display of the head-mounted electronic device. The processor is further configured to control at least one of the brightness and the saturation of the virtual objects based on at least one of the information about the theme and the information about the visual image.
In one embodiment, the patent describes an augmented reality device for obtaining depth information of objects in the real world, comprising a gaze tracking sensor, a depth sensor, a memory, and at least one processor. The processor is configured to execute one or more instructions to determine a region-of-interest (ROI) confidence level based on at least one of the movement speed, acceleration, gaze time, gaze count, or position of the gaze point, the ROI confidence level indicating the degree to which at least a partial region within the field of view (FOV) of the AR device is predicted to be an ROI; determine the ROI based on the ROI confidence level; and set parameters for controlling the operation of the depth sensor to obtain depth information of objects within the ROI.
52.《Samsung Patent | Display apparatus providing image with immersive sense》
In one embodiment, the display apparatus described in the patent includes an image forming device configured to provide image light; a concave mirror configured to focus the image light provided by the image forming device; and a relay optical system. The relay optical system is configured to deliver the image light provided by the image forming device to the viewer's field of view via the concave mirror.
In one embodiment, a processor of an electronic device controls a wearable device to display a portion of a first screen in the field of view of the wearable device, and displays a second screen within the display area of the electronic device's display. In a first state, in which a reference position to be displayed within the field of view is identified based on first information received from the wearable device, the processor obtains the position of a pointer in the field of view based on the reference position and the position of the contact point of a touch input on the second screen. In a second state, the processor obtains the position of the pointer in the field of view based on the position of the contact point, and, based on the obtained position, transmits to the wearable device second information for displaying the pointer in the first screen.
An augmented reality ("AR") device applies a smoothing correction method to correct the position of a virtual object presented to the user. The AR device may apply an angular threshold to determine whether a virtual object may move from its original position to a target position. The angular threshold is the maximum angle by which the line from the AR device to the virtual object may change within a time step. Similarly, the AR device may apply a motion threshold, which is the maximum distance by which the virtual object's position may be corrected based on the virtual object's motion. In addition, the AR device may apply a pixel threshold to the correction of the virtual object's position. The pixel threshold is the maximum distance by which the pixel projection of the virtual object may change for a given change in the virtual object's position.
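The angular and motion thresholds could be combined in a per-time-step clamp along these lines; the function names, the 2D geometry, and the lerp-style back-off are illustrative assumptions (the sketch also ignores angle wrap-around near ±π):

```python
import math

def clamp_correction(device_pos, current_pos, target_pos,
                     max_angle_rad, max_move):
    """Hedged sketch: limit how far a virtual object may be corrected in
    one time step, using an angular threshold (change of the device-to-
    object bearing) and a motion threshold (distance moved). All names
    and the interpolation-based back-off are illustrative assumptions."""
    def bearing(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    angle_change = abs(bearing(device_pos, target_pos) -
                       bearing(device_pos, current_pos))
    move = math.dist(current_pos, target_pos)

    # Fraction of the correction that satisfies both thresholds at once.
    t = 1.0
    if angle_change > max_angle_rad:
        t = min(t, max_angle_rad / angle_change)
    if move > max_move:
        t = min(t, max_move / move)

    # Interpolate from the current position toward the target by the
    # allowed fraction (for small angles this approximately scales the
    # bearing change as well).
    return (current_pos[0] + t * (target_pos[0] - current_pos[0]),
            current_pos[1] + t * (target_pos[1] - current_pos[1]))
```

A pixel threshold would work the same way, with `move` replaced by the distance the object's screen-space projection shifts.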
55.《HTC Patent | Virtual reality tracker and tracker correction position method》
In one embodiment, the patent describes a virtual reality tracker that includes a first part and a second part. The first part includes a plurality of first light-emitting diodes and an inertial measurement unit. The inertial measurement unit is used to measure the acceleration and three-axis angular velocity of the first part. The second part includes a plurality of second light-emitting diodes. In addition, the first part and the second part are connected by a flexible component.
In one embodiment, the patent describes a control method applied to a first electronic device and a second electronic device in a physical environment. The first and second electronic devices are configured to communicate with each other through a first wireless connection established between them. The control method includes: determining, by at least one of the first and second electronic devices, whether to update a map of the physical environment; and, in response to a determination to update the map of the physical environment, establishing between the first and second electronic devices a second wireless connection different from the first wireless connection, wherein the second wireless connection is configured to transmit map update data, and the map update data is configured to update the map of the physical environment.
In one embodiment, the device described in the patent receives a plurality of character motion data for a plurality of virtual characters. The device classifies the virtual characters based on the character motion data to generate a plurality of frame update groups, each frame update group corresponding to a motion computation period. At each frame, the device selects at least a subset of the frame update groups to compute the motion synthesis of each virtual character.
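A minimal sketch of frame-update-group scheduling, assuming characters are bucketed by an update period and only the groups due on a given frame are recomputed (the bucketing rule and the period values are assumptions, not the patent's actual classification):

```python
# Hedged sketch of frame-update-group scheduling: characters are bucketed
# by an update period, and only the groups whose period divides the frame
# index are recomputed that frame, spreading motion-synthesis cost.
def build_update_groups(characters, period_for):
    """Group characters by the update period assigned to each of them."""
    groups = {}
    for c in characters:
        groups.setdefault(period_for(c), []).append(c)
    return groups

def due_this_frame(groups, frame_index):
    """Return the characters whose motion should be recomputed this frame."""
    due = []
    for period, members in groups.items():
        if frame_index % period == 0:
            due.extend(members)
    return due
```

In practice the period would come from the motion-data classification the patent describes (e.g., fast-moving characters updated every frame, distant or idle ones less often).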
In one embodiment, the patent describes a tracking device that includes a camera and a processor. The camera is configured to sequentially capture a number of first pictures with a first settings group and a number of second pictures with a second settings group. The processor is connected to the camera. The processor is configured to select a current-time picture from the first pictures or the second pictures according to the speed of the tracking device at the current time point, and to determine the current pose of the tracking device according to the selected picture.
In one embodiment, the spherical representation of a 360-degree video frame may be segmented into a top region, a bottom region, and a middle region. The middle region may be mapped into one or more rectangular regions of an output video frame. The top region may be mapped to a first rectangular region of the output video frame using a square-to-circle mapping, such that the pixels in the circular top region are expanded to fill the first rectangular region. The bottom region may be mapped into a second rectangular region of the output video frame, such that the pixels in the circular bottom region are expanded to fill the second rectangular region.
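One simple way to expand a circular region so it fills a square is radial stretching; the sketch below illustrates the idea for a unit disc, though the patent does not specify this particular square-to-circle mapping:

```python
import math

def disc_to_square(u, v):
    """Hedged sketch of a simple circle-to-square mapping (radial
    stretching): each point of the unit disc is pushed outward so the
    disc boundary lands on the unit square's boundary, filling the
    corners. The mapping choice is illustrative, not the patent's."""
    if u == 0 and v == 0:
        return (0.0, 0.0)
    r = math.hypot(u, v)       # Euclidean radius inside the disc
    m = max(abs(u), abs(v))    # Chebyshev radius toward the square edge
    s = r / m                  # stretch factor: disc boundary -> square boundary
    return (u * s, v * s)
```

For example, a point on the circle at 45 degrees gets stretched out to the square's corner, so no output pixels are wasted in a rectangular video frame.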
A head-mounted display may include a frame, eyepieces, an image injection device, a sensor array, a reflector, and an off-axis optical element. The frame may be configured to be supported on the user's head. The eyepieces may be connected to the frame and configured to be disposed in front of the user's eyes. An eyepiece may include a plurality of layers. The image injection device may be configured to provide image content to the eyepieces for viewing by the user. The sensor array may be integrated in the eyepieces or in one of the eyepieces. The reflector may be disposed in or on an eyepiece and configured to reflect light received from an object for imaging by the sensor array. The off-axis optical element may be disposed in the eyepieces or in one of the eyepieces, and may be configured to receive the light reflected from the reflector and direct at least a portion of that light toward the sensor array.
In one embodiment, the patent describes methods for creating, saving, and rendering designs that include multiple virtual content items in a three-dimensional environment. The designs may be saved as scenes, which a user builds from pre-built sub-components, built components, and/or previously saved scenes. Position information expressed relative to a saved scene anchor node may be saved, thereby representing the position of each virtual content item. When a scene is opened, the saved scene anchor node may be associated with the user's position in the mixed reality environment.
62.《Magic Leap Patent | Display system with low-latency pupil tracker》
In one embodiment, the display system aligns the exit pupil position with the position of the user's pupil by changing the position of the portion of the light source that outputs light. The light source may include a pixel array that outputs light. The display system includes a camera that captures images of the eye, and a negative of the captured image is displayed by the light source. In the negative image, the eye's dark pupil is a bright spot that, when displayed by the light source, defines the exit pupil of the display system. The position of the eye's pupil can thus be tracked by capturing images of the eye, and the exit pupil position of the display system can be adjusted by using the light source to display the negative of the captured image.
In one embodiment, the patent describes wearable spectroscopy systems and methods for identifying one or more features of a target object. The spectroscopy system may include a light source configured to emit light into an irradiated field of view, and an electromagnetic radiation detector configured to receive light reflected from the target object. One or more processors of the system may identify features of the target object based on its light absorption levels. The systems and methods may include one or more corrections for scattered light and/or ambient light, such as applying an ambient light correction.
64.《Snap Patent | Voice controlled UIs for AR wearable devices》
In one embodiment, a user is able to interact with an AR wearable device without using a physical user-interface device. An application has a non-voice-controlled UI mode and a voice-controlled UI mode, and the user selects the mode. UI elements are displayed on the display of the AR wearable device. Each UI element has a type, and a predetermined action is associated with each UI element type. The predetermined actions are displayed along with other information and are used by the user to invoke the corresponding UI elements.
65.《Snap Patent | Touch-based augmented reality experience》
In one embodiment, a first user may grip an object. The intensity of the force applied to the object during the grip and/or the duration of the grip may be recorded. A volumetric three-dimensional representation of the first user holding the object may be captured. During an experience phase, a second user may touch the object, and the object may provide haptic feedback (e.g., vibration) to the second user with an intensity and duration corresponding to the intensity of the force applied to the object and the duration of the grip. If a volumetric representation of the first user holding the object was captured, touching the object may trigger presentation of that volumetric representation of the first user holding the object.
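The recorded grip could drive haptic playback with a mapping along these lines; the linear force-to-amplitude rule, the 50 N reference force, and the returned dictionary shape are illustrative assumptions, not the patent's implementation:

```python
# Hedged sketch of mapping a recorded grip to haptic playback: the
# vibration amplitude tracks the recorded grip force (normalized and
# clamped to [0, 1]) and the playback lasts as long as the grip did.
def haptic_playback(grip_force_newtons, grip_duration_s, max_force=50.0):
    amplitude = min(grip_force_newtons / max_force, 1.0)
    return {"amplitude": amplitude, "duration_s": grip_duration_s}
```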
66.《Snap Patent | Contextual memory experience triggers system》
In one embodiment, the system monitors the environment through one or more sensors included in a computing device, and applies triggers based on the monitoring to detect whether a memory experience is stored in a data store. If a memory experience is detected, the system creates an augmented reality memory experience, a virtual reality memory experience, or a combination thereof according to the trigger. The system also projects the augmented reality memory experience, the virtual reality memory experience, or the combination thereof via the computing device.
67.《Snap Patent | Timelapse re-experiencing system》
In one embodiment, the system captures, via one or more sensors of a computing device, data of an environment observed by the one or more sensors in a first time slot, and stores the data in a data store as a first portion of a timelapse memory experience. The system also captures, via the one or more sensors of the computing device, data of the environment observed by the one or more sensors in a second time slot, and stores the data in the data store as a second portion of the timelapse memory experience. The system further associates the timelapse memory experience with a memory experience trigger, where the memory experience trigger can initiate presentation of the timelapse memory experience.
68. 《》
In one embodiment, the display device described in the patent presents volumetric content that includes a volumetric video. The volumetric video includes volumetric representations of one or more elements in a three-dimensional space. Input indicating a control operation associated with presentation of the volumetric video is received, and the display device's presentation of the volumetric video is controlled by executing the control operation. While the control operation is executed, the volumetric representations of the one or more elements of the three-dimensional space are displayed from multiple perspectives based on the user's movement.
69.《Snap Patent | Multi-perspective augmented reality experience》
In one embodiment, the patent describes a method for providing a multi-perspective augmented reality experience. A volumetric video of a three-dimensional space is captured. The volumetric video of the three-dimensional space includes a volumetric representation of a first user within the space. The volumetric video is displayed by a display device worn by a second user, and the second user sees the volumetric representation of the first user within the three-dimensional space. Input indicating an interaction of the second user with the volumetric representation of the first user is detected. Based on detecting the input indicating the interaction, the display device switches to displaying the first user's recorded perspective. Thus, by interacting with the volumetric representation of the first user in the volumetric video, the second user views the first user's perspective of the three-dimensional space.
70.《Snap Patent | Optical waveguide combiner systems and methods》
In one embodiment, the optical display system described in the patent has an optical waveguide combiner and one or more cameras. The one or more cameras are optically coupled to the optical waveguide combiner and have a field of view covering at least one real object and at least one virtual object displayable by the optical display system. The one or more cameras may be disposed outside the usable field of view of the out-coupler. The one or more cameras can be electronically self-calibrated using images they capture of one or more virtual objects displayable by the optical display system. AR/VR/MR registration of the device and/or the displayed virtual objects with real objects can be achieved using images of the displayed virtual objects and real-world objects captured by the one or more cameras. The distance and/or spatial position of a real object relative to the optical waveguide combiner may be determined or estimated from the captured images.
71.《Snap Patent | Sharing social augmented reality experiences in video calls》
In one embodiment, the system described in the patent may establish a video call on a communication platform between a first device associated with a first user and a second device associated with a second user. The system may present a video interface for the video call, the video interface including a first video stream generated by the first user's first device and a second video stream generated by the second device associated with the second user. The system may present in the video interface a first set of image augmentations selected by the communication platform, the first set of image augmentations selectable by the first user for augmenting the first video stream generated by the first user's device. The system may also identify a second set of image augmentations used by another set of users of the communication platform.
In one embodiment, the method described in the patent displays first augmented reality content on a computing device, the first augmented reality content including a first output media attribute. The computing device provides for display a plurality of selectable graphical items, each corresponding to different augmented reality content, including a set of media content modified using face synthesis. The computing device receives a selection of one of the plurality of selectable graphical items. The computing device identifies second augmented reality content based at least in part on the selection, and displays the second augmented reality content on the computing device.
In one embodiment, two-dimensional elements are identified from one or more two-dimensional images. A volumetric content item is generated based on the two-dimensional elements identified from the one or more two-dimensional images. A display device presents the volumetric content item overlaid on the real-world environment within the field of view of the display device's user.
In one embodiment, augmented reality (AR) is used to provide an experience to a user. During a capture phase, people on a journey can shoot videos or pictures using smartphones, GoPros, and/or smart glasses, while a drone can also shoot videos or pictures along the journey. During an experience phase, an AR terrain rendering of the journey's real-world environment can be rendered on a tabletop, highlighting/animating the paths the people took during the journey.
In one embodiment, the volumetric content presentation system described in the patent includes a head-mounted display device and a memory; the head-mounted display device includes one or more processors, and the memory stores instructions that, when executed by the one or more processors, configure the display device to access AR content items corresponding to real-world or virtual objects, mix and match those AR content items, and present volumetric content that includes the mixed-and-matched AR content items overlaid on the real-world environment, creating new AR scenes that the user can experience.
76.《Snap Patent | Social memory re-experiencing system》
In one embodiment, the system described in the patent monitors the user's environment through one or more sensors included in a computing device, and detects, via triggers, event data stored in a data store based on the monitoring. The system also detects one or more participants in the event data and invites the one or more participants to share augmented reality event data and/or virtual reality event data. The system further creates augmented reality event data and/or virtual reality event data based on the event data, and presents the augmented reality event data and/or virtual reality event data to the one or more participants via the computing device in a synchronous mode and/or an asynchronous mode.
77.《Snap Patent | External computer vision for an eyewear device》
In one embodiment, the patent describes systems and methods for performing operations on an AR device using an external vision system. The system establishes, via the AR device, communication with an external client device. The system overlays, via the AR device, a first AR object on the real-world environment viewed through the AR device. The system receives, from the external client device, interaction data representing a movement of the user determined by the external client device. In response to receiving the interaction data from the external client device, the system modifies the first AR object via the AR device.
78.《Snap Patent | Object counting on AR wearable devices》
In one embodiment, the patent describes a system for counting objects on an AR wearable device. Upon receiving a request to count objects, the AR wearable device captures an image of the user's view. The AR wearable device transmits the image to a backend for processing to determine the objects in the image. The AR wearable device selects a set of the determined objects to count and overlays bounding boxes on the counted objects within the user's view. The positions of the bounding boxes are adjusted to account for movement of the AR wearable device.
79.《Snap Patent | Wrist rotation manipulation of virtual objects》
In one embodiment, an AR system uses a combination of gestures and DMVO methods to provide user selection and modification of virtual objects in an AR experience. The user indicates that they want to interact with a virtual object of the AR experience by moving their hand to overlap the virtual object. While holding the hand in the overlapping position, the user rotates their wrist, and the virtual object rotates as well. To end the interaction, the user moves their hand so that it no longer overlaps the virtual object.
80.《Snap Patent | One-handed zoom operation for AR/VR devices》
In one embodiment, an AR system uses a combination of gestures and DMVO methods to provide user selection and modification of virtual objects in an AR experience. The user indicates that they want to interact with a virtual object of the AR experience by moving their hand to overlap the virtual object. While holding the hand in the overlapping position, the user makes a gesture that causes the user's view of the virtual object to zoom in or out. To end the interaction, the user moves their hand so that it no longer overlaps the virtual object.
81.《Snap Patent | Hand-tracking stabilization》
In one embodiment, an AR system provides stabilization of hand-tracking input data. The AR system provides a user interface of an AR application for display. Using one or more cameras of the AR system, the AR system captures video-frame tracking data of gestures the user makes while interacting with the AR user interface. The AR system generates skeletal 3D model data of the user's hand based on the video-frame tracking data, the skeletal 3D model data including one or more skeletal 3D model features corresponding to recognized visual landmarks of parts of the user's hand. The AR system generates targeting data based on the skeletal 3D model data, where the targeting data identifies a virtual 3D object of the AR user interface. The AR system filters the targeting data using a target filter component and provides the filtered targeting data to the AR application.
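A target filter of this kind could combine positional smoothing with hysteresis on the targeted object, so jitter in the tracking data does not make the target flicker; the exponential smoothing constant and the hold-frame count below are assumptions, not the patent's actual filter:

```python
# Hedged sketch of one plausible target filter: exponentially smooth a
# tracked hand position, and only switch the targeted virtual object
# after the same raw target persists for a few consecutive frames.
class TargetFilter:
    def __init__(self, alpha=0.5, hold_frames=3):
        self.alpha = alpha              # smoothing weight for new samples
        self.hold_frames = hold_frames  # frames a new target must persist
        self.smoothed = None
        self.current_target = None
        self.candidate = None
        self.candidate_count = 0

    def update(self, position, raw_target):
        # Exponentially smooth the tracked position.
        if self.smoothed is None:
            self.smoothed = position
        else:
            self.smoothed = tuple(
                self.alpha * p + (1 - self.alpha) * s
                for p, s in zip(position, self.smoothed))
        # Hysteresis: only switch targets after the raw target persists.
        if raw_target == self.current_target:
            self.candidate, self.candidate_count = None, 0
        elif raw_target == self.candidate:
            self.candidate_count += 1
            if self.candidate_count >= self.hold_frames:
                self.current_target = raw_target
                self.candidate, self.candidate_count = None, 0
        else:
            self.candidate, self.candidate_count = raw_target, 1
        return self.smoothed, self.current_target
```

With these defaults, a single-frame flicker onto a neighboring UI object is ignored, while a deliberate move that persists for three frames switches the target.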
82.《Snap Patent | External mesh with vertex attributes》
In one embodiment, the operations described in the patent include receiving a video that includes a depiction of a real-world object; generating a 3D body mesh associated with the real-world object, the 3D body mesh tracking movement of the real-world object across frames of the video; obtaining an external mesh associated with an AR element; accessing a plurality of deformation attributes associated with the external mesh, each attribute corresponding to a different deformation model; separately deforming a first portion and a second portion of the external mesh based on their respective deformation models; and modifying the video to include a display of the AR element based on the separately deformed first and second portions of the external mesh.
83.《Snap Patent | Object relighting using neural networks》
In one embodiment, the system performs image processing to relight objects in user-provided images of a messaging system using a neural network. A method of relighting an object with a neural network includes receiving an input image having a first lighting characteristic. A convolutional neural network is trained to modify a second lighting characteristic of an object to be consistent with the lighting conditions indicated by the first lighting characteristic, thereby generating a third lighting characteristic. The method also includes modifying the object's second lighting characteristic to generate the object with the modified second lighting characteristic.
84.《Snap Patent | Multisensorial presentation of volumetric content》
In one embodiment, input indicating a selection of volumetric content for presentation is received. The volumetric content includes volumetric representations of one or more elements of a real-world three-dimensional space. In response to the input, device state data associated with the volumetric content is accessed. The device state data describes states of one or more network-connected devices associated with the real-world three-dimensional space. The volumetric content is then presented. Presentation of the volumetric content includes display, by a display device, of the volumetric representations of the one or more elements overlaid on the real-world three-dimensional space, as well as configuration of the one or more network-connected devices using the device state data.
85.《Snap Patent | Avatar call on an eyewear device》
In one embodiment, the system described in the patent establishes a voice communication session among a plurality of users via a first AR device. The system displays, via the first AR device of a first user of the plurality of users, an avatar representing a second user of the plurality of users. The system receives input from the first user via the first user's first AR device. The system animates the avatar representing the second user based on motion information received from the second user's second AR device.
86.《Snap Patent | Personalized try-on ads》
In one embodiment, the operations described in the patent include: accessing content received from a first client device associated with a first user; processing the content to identify a first image depicting the first user wearing a first fashion item; determining a first pose of the first user depicted in the first image; searching a plurality of products to identify a first product; modifying the first image to generate an advertisement depicting the first user wearing the first product; and causing the advertisement depicting the first user wearing the first product to be automatically displayed on the first client device during a content browsing session in which the first client device is engaged.