Excerpts from New AR/VR Patent Applications filed at the USPTO on January 06, 2024
Source and related citations: XR Navigation Network
(XR Navigation Network, January 06, 2024) The U.S. Patent and Trademark Office recently published a batch of new AR/VR patent applications. The following 51 items were compiled by the XR Navigation Network (please click on a patent title for details). For more patent disclosures, visit the patent section of the XR Navigation Network at https://patent.nweon.com/, or join the XR Navigation Network AR/VR patent exchange WeChat group (see the end of the article for details).
1.《Meta Patent | Readout methods for foveated sensing》
The readout method may include a pixel array, readout circuitry, and processing logic. The pixel array may have a plurality of pixels. The readout circuitry may be configured to read image data from the pixel array for each of the plurality of pixels. The processing logic may be configured to identify a plurality of regions of interest (ROIs) within the pixel array, associate each pixel's image data with one or more ROIs, and arrange the image data into data frames. A data frame may include image data sorted by ROI. Image data from inactive pixels may be removed from the image data and the data frame prior to transmission. The processing logic may be configured to send the data frames in an order based on the ROIs.
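The ROI-sorted frame assembly described above can be sketched as follows; the function name, the rectangular ROI representation, and the boolean activity mask are illustrative assumptions, not details from the patent:

```python
import numpy as np

def build_roi_frame(image, rois, active_mask):
    """Assemble a transmission payload sorted by ROI priority.

    image: 2D array of pixel values.
    rois: list of (row_start, row_end, col_start, col_end), ordered by priority.
    active_mask: boolean array; inactive pixels are dropped before transmission.
    """
    payload = []
    for r0, r1, c0, c1 in rois:          # higher-priority ROIs are emitted first
        block = image[r0:r1, c0:c1]
        mask = active_mask[r0:r1, c0:c1]
        payload.append(block[mask])       # remove inactive pixels
    return np.concatenate(payload)
```

For example, with a 4x4 sensor and two ROIs, the foveal ROI's pixels lead the payload and any masked-off pixels never leave the sensor.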
2.《Meta Patent | Travel in artificial reality》
In one embodiment, the user may be at a source location and the received input may trigger the user to travel to a destination XR environment. The user may generate a virtual portal in the source virtual world and use it to travel to the destination virtual world.
3.《Meta Patent | Apparatus, systems, and methods for heat transfer in optical devices》
In one embodiment, the optical device described in the patent may include (i) a heat source that generates heat during operation, (ii) a thermally conductive optical element that is optically transparent and dissipates the heat generated by the heat source, and (iii) a thermally conductive connector that transfers heat between the heat source and the thermally conductive optical element.
4.《Meta Patent | Systems and apparatuses for optical modulation via manipulation of charge carriers》
In one embodiment, the apparatus described in the patent may comprise an optical modulator, said optical modulator comprising a first layer having at least an organic layer and/or an organometallic layer, and a second layer having a non-intrinsic semiconductor layer and/or an electrode layer. The optical modulator may modulate light by manipulating charge carriers.
In one embodiment, the patent describes polymers, and methods of preparing them, produced in a one-pot system from multifunctional monomers according to a defined sequential reaction scheme. The patent also describes tunable resin blends prepared from the 3D-printed material.
6.《Microsoft Patent | Scanning mirror device》
In one embodiment, the patent describes a scanning mirror device comprising a mirror provided on a mirror support member. The scanning mirror device includes a first pair of actuators, the first pair of actuators including a first actuator and a second actuator. The first actuator is positioned on a first side of the mirror support member. The second actuator is positioned on a second side of the mirror support member opposite the first side of the mirror support member. The first actuator and the second actuator are coupled to the mirror support member along a first axis of rotation. The scanning mirror device also includes a second pair of actuators, the second pair of actuators including a third actuator and a fourth actuator. The third actuator is positioned on the first side of the mirror support member. The fourth actuator is positioned on the second side of the mirror support member. The third actuator and the fourth actuator are coupled to the mirror support member along a second axis of rotation.
7.《Microsoft Patent | Intelligent keyboard attachment for mixed reality input》
In one embodiment, the patent describes a system for attaching a virtual input device to a virtual object in an MR environment. Said system includes a memory, a processor communicatively coupled to the memory, and a display device. The display device is configured to display an MR environment provided by at least one application program implemented by the processor. The mixed reality environment includes a virtual object corresponding to the application program and a virtual input device. The at least one application program attaches the virtual input device to the virtual object at an offset relative to the virtual object.
8.《Microsoft Patent | Representing two dimensional representations as three-dimensional avatars》
In one embodiment, one or more input video streams are received. A first subject within the one or more input video streams is identified. Based on the one or more input video streams, a first view of the first subject is identified. A second view of the first subject is identified based on the one or more input video streams. The first subject is segmented into a plurality of planar objects. The plurality of planar objects are transformed with respect to each other based on the first view and the second view of the first subject. The plurality of planar objects are output in an output video stream and provide a view of the first subject to one or more viewers.
9.《Apple Patent | Non-contact respiration sensing》
In one embodiment, the patent describes a method of non-contact breath sensing. A head-mounted device may include one or more interferometric sensors, the interferometric sensors positioned and oriented in a housing to sense particle motion caused by a user's breathing. Interference signals from the one or more interferometric sensors may be used to determine information about the user's breathing.
10.《Apple Patent | Wearable device including optical sensor circuitry》
In one embodiment, the head-mounted device described in the patent includes a housing and a set of one or more SMI sensors. The set of one or more SMI sensors is provided in the housing and is configured to emit electromagnetic radiation toward an anatomical structure adjacent to a nasal passageway of a user and to generate one or more SMI signals comprising information about movement of the anatomical structure.
11.《Apple Patent | System and method for monitoring and responding to surrounding context》
In one embodiment, performing a correction operation for an environmental condition associated with a predetermined eye condition includes obtaining environmental sensor data from one or more sensors of the device; determining a current contextual scenario of the device based on the environmental sensor data; and determining that eye state criteria are satisfied based on the current contextual scenario. In response to determining that the eye state criteria are satisfied, a correction operation is determined based on the eye state criteria, and the correction operation is performed. When performed, the correction operation is configured to address an environmental condition associated with the eye state criteria.
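The criteria-then-correction flow above might look like the following sketch; the sensor fields, thresholds, and correction names are hypothetical examples, not values from the patent:

```python
def select_correction(sensor_data):
    """Map environmental sensor readings to a correction operation.

    Illustrates the check-criteria-then-correct flow only; the eye state
    criteria and thresholds here are invented for the example.
    sensor_data: dict with e.g. 'lux' (ambient light level) and
    'distance_m' (estimated distance to viewed content).
    """
    # Example criterion: very dim environment may cause eye strain
    if sensor_data.get("lux", 1000) < 50:
        return "increase_display_brightness"
    # Example criterion: content viewed too close to the eyes
    if sensor_data.get("distance_m", 1.0) < 0.25:
        return "suggest_increased_viewing_distance"
    return None  # no eye state criteria satisfied; no correction performed
```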
12.《Apple Patent | Command disambiguation based on environmental context》
In one embodiment, the method described in the patent includes receiving a voice command; obtaining an image of the physical environment using an image sensor; detecting an object in the image of the physical environment based on a visual model of the object stored in non-transitory memory in association with an object identifier of the object; generating an instruction including the object identifier of the object based on the voice command and the detection of the object; and executing the instruction to realize a change in the state of the object.
In one embodiment, the patent describes a method of providing a fading audio experience during a transition from a first audio experience to a second audio experience. The first audio experience may include playback of a spatialized audio signal using a first spatial impulse response generated by the audio system. The second audio experience includes playback of the spatialized audio signal using a second spatial impulse response received by the audio system. The audio system generates a hybrid spatial impulse response based on the first spatial impulse response and the second spatial impulse response. During transitions between audio experiences, the hybrid spatial impulse response is used to spatialize the audio signal to create a faded audio experience. Other aspects are also described and claimed.
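One simple way to form a hybrid spatial impulse response is linear interpolation between the two responses; the patent does not specify the blending function, so the sketch below is an illustrative assumption only:

```python
import numpy as np

def hybrid_impulse_response(ir_a, ir_b, t):
    """Blend two spatial impulse responses with crossfade weight t in [0, 1].

    t = 0 gives the first experience's response, t = 1 the second; sweeping
    t over a transition produces the faded audio experience.
    """
    return (1.0 - t) * ir_a + t * ir_b

def faded_block(signal, ir_a, ir_b, t):
    """Spatialize one audio block by convolving with the hybrid response."""
    return np.convolve(signal, hybrid_impulse_response(ir_a, ir_b, t))
```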
14.《Apple Patent | Audio capture with multiple devices》
In one embodiment, a method of visualizing a combined audio pickup pattern is performed at a first device in a physical environment. Said method comprises determining a first audio pickup pattern of the first device; determining one or more second audio pickup patterns of corresponding one or more second devices; determining a combined audio pickup pattern of said first device and said one or more second devices based on said first audio pickup pattern and said one or more second audio pickup patterns; and displaying a representation of the combined audio pickup pattern on a display.
15.《Apple Patent | Method and apparatus for sound processing for a synthesized reality setting》
In one embodiment, a headset transforms a real sound into a virtual sound for use in a synthesized reality (SR) setting. Said method includes displaying an image representation of the SR setting at the display; recording real sounds generated in a physical environment via a microphone; generating virtual sounds by transforming the real sounds, using one or more processors, based on acoustic reverberation characteristics of the SR setting; and playing the virtual sounds via a loudspeaker.
16.《Apple Patent | Displaying content in electronic devices with gaze detection》
In one embodiment, the electronic device described in the patent may include one or more sensors and one or more displays. The electronic device may receive, from at least one external server, content to be displayed at the one or more displays, information identifying regions of interest in the content, and actions associated with the regions of interest. The electronic device may display said content. The electronic device may obtain a gaze point via the one or more sensors and determine that the gaze point overlaps with a region of interest in the content. Based on the determination that the gaze point overlaps with the region of interest in the content, the electronic device may perform an action associated with the region of interest, including providing visual, audio, and/or haptic feedback.
17.《Apple Patent | Techniques for viewing 3d photos and 3d videos》
In one embodiment, conversion between different types of views of 3D content is determined and provided. For example, example processes may include obtaining a 3D content item, providing a first view of the 3D content item within the 3D environment, determining a conversion from the first view of the 3D content item to a second view based on criteria, and providing the second view of the 3D content item within the 3D environment, wherein the left eye view and the right eye view of the 3D content item are based on at least one of the left eye content and the right eye content.
18.《Apple Patent | Object detection with instance detection and general scene understanding》
In one embodiment, an object type of an object depicted in an image of a physical environment is identified. A particular instance is then identified based on the object type and the image. The particular instance of the object has a set of features that are different from a set of features associated with other instances of the object type. Then, a set of features of the particular instance of the object depicted in the physical environment is obtained.
19.《Apple Patent | Real time visual mitigation of a live camera feed》
In one embodiment, mitigating a trigger display condition comprises obtaining an image frame comprising image data captured by the camera, determining image statistics for at least a portion of the image frame from the image data. Said technique also includes determining that the image statistics satisfy a trigger criterion, wherein the trigger criterion is associated with at least one predetermined display condition, and in response, modifying an image parameter of the at least a portion of the image frame; rendering the image frame based on the modified image parameter, and displaying the rendered image frame.
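The statistics-trigger-modify loop could be sketched as below; the mean-luminance statistic, the threshold, and the attenuation gain are hypothetical stand-ins for whatever criteria and parameters the patent actually claims:

```python
import numpy as np

def mitigate_frame(frame, luma_threshold=0.9, gain=0.5):
    """Check image statistics against a trigger criterion and mitigate.

    frame: float array with values in [0, 1]. If the (illustrative)
    brightness trigger fires, an image parameter (here, overall gain)
    is modified before the frame is rendered and displayed.
    """
    mean_luma = float(frame.mean())           # image statistic for the frame
    triggered = mean_luma > luma_threshold    # trigger criterion check
    if triggered:
        frame = frame * gain                  # modified image parameter
    return frame, triggered
```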
20.《Apple Patent | Positioning content within 3d environments》
In one embodiment, an example process may include obtaining virtual content by determining on-screen content and off-screen content and positioning the virtual content within a view of the 3D environment; positioning the on-screen content on a virtual screen within the 3D environment and positioning the off-screen content outside of the virtual screen within the 3D environment; and presenting a view of the 3D environment, wherein the view comprises presenting on-screen content and presenting off-screen content outside of the virtual screen.
21.《Apple Patent | Context-based avatar quality》
In one embodiment, during a communication session, a first device receives streaming avatar data and uses it to present a view that includes a time-varying avatar, for example an avatar representing some or all of another user, with the avatar data sent from the other user's device during the communication session. To use resources efficiently, the avatar provisioning process (e.g., video frame rate, image resolution, etc.) is adjusted based on user scenarios, such as whether the viewer is viewing the avatar, whether the avatar is within the viewer's point-of-attention region, or whether the avatar is within the viewer's field of view.
22.《Apple Patent | Virtual object kit》
In one embodiment, the method described in the patent includes obtaining a virtual object toolkit, the virtual object toolkit including a set of virtual object templates for a particular virtual object type. The virtual object toolkit includes a plurality of sets of components. Each of the plurality of sets of components is associated with a particular portion of the virtual object. Said method includes receiving a request to assemble a virtual object. The request includes a selection of components from the plurality of sets of components. The virtual object is then synthesized based on the request.
23.《Apple Patent | Content transformations based on reflective object recognition》
In one embodiment, an example process may include obtaining sensor data, such as images, sound, motion, and the like, from a sensor of an electronic device in a physical environment that includes one or more objects; detecting a reflective object among the one or more objects based on the sensor data; determining a 3D location of the reflective object in the physical environment; and rendering virtual content in a view of the physical environment. The virtual content may be positioned based on the 3D location of the reflective object.
24.《Apple Patent | Method and apparatus for synchronizing augmented reality coordinate systems》
In one embodiment, the method described in the patent includes determining a reference position in three-dimensional space based on a feature; obtaining, with respect to the reference position, a first reference coordinate in an augmented reality coordinate system of a first electronic device and a second reference coordinate in an augmented reality coordinate system of a second electronic device; determining a coordinate transformation based on a function of the first reference coordinate and the second reference coordinate; and using the coordinate transformation to synchronize the augmented reality coordinate systems of the first and second electronic devices.
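As a minimal illustration of synchronizing two AR coordinate systems through a shared reference position, the sketch below assumes the frames differ only by translation; a real system would typically estimate a full rigid transform (rotation plus translation) from several reference points:

```python
import numpy as np

def coordinate_offset(ref_in_a, ref_in_b):
    """Translation mapping device B's coordinates into device A's frame.

    ref_in_a / ref_in_b: the same physical reference position, expressed
    in each device's AR coordinate system. Assumes translation-only
    misalignment (an illustrative simplification).
    """
    return np.asarray(ref_in_a) - np.asarray(ref_in_b)

def b_to_a(point_in_b, offset):
    """Apply the coordinate transformation to a point from device B."""
    return np.asarray(point_in_b) + offset
```

After the offset is shared, both devices can express virtual content at consistent world positions.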
25.《Apple Patent | User representation using depths relative to multiple surface points》
In one embodiment, an example process may include obtaining sensor data for a user, wherein the sensor data is associated with a point in time; generating a set of values representing the user based on the sensor data; and providing the set of values, the set of values comprising depth values defining 3D positions of portions of the user relative to a plurality of 3D positions of points on a projected surface, and appearance values defining the appearance of those portions, such as color, texture, opacity/transparency, and the like.
26.《Apple Patent | Perspective correction of user input objects》
In one embodiment, a method of determining a display location is performed by a device comprising one or more processors and a non-transitory memory. Said method includes obtaining a camera set of two-dimensional coordinates of a user input object in a physical environment; obtaining depth information of the physical environment excluding the user input object; and converting the camera set of two-dimensional coordinates to a display set of two-dimensional coordinates based on the depth information of the physical environment excluding the user input object.
27.《Apple Patent | Immediate proximity detection and breakthrough with visual treatment》
In one embodiment, providing a visual treatment based on proximity to an obstacle comprises collecting sensor data of an environment by a device; determining a state of each of a plurality of regions of the environment, wherein at least one of the regions is assigned an occupied state; and, based on a determination that said device satisfies a predetermined proximity threshold for said at least one region assigned an occupied state, presenting, by said device, a visual treatment, wherein said visual treatment indicates a location of said at least one region of the environment having an occupied state.
28.《Apple Patent | Out-of-process hit-testing for electronic devices》
In one embodiment, the application specifies a control style for a UI window to be managed separately from the application, e.g., by a system process running outside of the application process. When user input is received at a location corresponding to a portion of the application UI that is separate from the UI window for which the control style has been specified, the user input may be redirected to the UI window for which the control style has been specified. Out-of-process hit testing can improve the privacy and efficiency of computerized user input systems.
29.《Apple Patent | Systems and methods of reducing obstruction by three-dimensional content》
In one embodiment, the patent describes a method for preventing three-dimensional content from obscuring portions of a web browser or other user interface in a three-dimensional environment. The described method comprises applying one or more visual treatments to the three-dimensional content and/or to the portion of the web browser or other user interface, at least from the perspective of a user, with the one or more visual treatments being based on the three-dimensional visualization of the three-dimensional content.
In one embodiment, in response to detecting a corresponding user input: if the corresponding user input satisfies a first criterion, the computer system moves said virtual object in said three-dimensional environment based on the movement of one or both of the user's hands; and if the corresponding user input satisfies a second criterion, the computer system replaces the view of the three-dimensional environment corresponding to a first viewpoint with a second view corresponding to a viewpoint different from the first viewpoint.
31.《Apple Patent | Fit guidance for head-mountable devices》
In one embodiment, guidance for optimal placement of the head-mounted device may be provided to direct the user to position the device in a manner that achieves proper alignment of the components relative to the user and maximizes user comfort. For example, the head-mounted device and/or another device may include sensors for detecting the user's facial features, forces distributed across the face when worn, and/or alignment with the face (e.g., the eyes). The head-mounted device and/or another device may detect changes in eye alignment and infer user discomfort based on the frequency and/or magnitude of such changes. Alternatively, the head-mounted device and/or another device may detect changes in the user's facial features before, during, and/or after use of the device.
32.《Apple Patent | Head-mountable electronic device spacer》
In one embodiment, the patent describes a head-mounted device comprising a display portion having a housing and a display; an optical seal extending from the housing and offset from the housing by a certain distance; and an adjustment mechanism coupled to the housing and the optical seal and configured to vary that distance.
33.《Apple Patent | Eye imaging system》
In one embodiment, the patent describes a waveguide having an input coupler and an output coupler for redirecting reflected light to a camera. The waveguide may be integrated into a lens of a wearable device such as eyeglasses. A light source emits a light beam toward the eye, and a portion of the light beam reflected by the eye reaches the input coupler. The input coupler may be implemented using diffractive or reflective techniques, and may be a straight or curved line of narrow width, to allow focusing at close range yet long enough to adequately image the eye. The input coupler changes the angle of the beam so that total internal reflection relays the beam to the waveguide's output coupler. The beam is redirected by the output coupler to the camera.
34.《Google Patent | Partially curved lightguide with pupil replicators》
In one embodiment, the partially curved lightguide includes an embedded collimator having a surface that collimates the display light before the light encounters a surface in the exit-pupil-expansion region of the lightguide. By collimating the light within the lightguide rather than before it enters the lightguide, separate collimating optics can be eliminated, minimizing the form factor of the AR display and resulting in a smaller volume in the region of the light source.
In one embodiment, techniques for operating an AR system include determining a gesture formed by a user based on a sequence of two-dimensional images of the skin of the user's wrist acquired from a near-infrared camera. Specifically, an image capture device provided in a wristband worn around the user's wrist includes a source of electromagnetic radiation, e.g., a light emitting diode that emits radiation in the infrared (IR) band into the user's wrist, and an IR detector that generates a sequence of two-dimensional images of an intradermal region in the user's wrist. Based on the sequence, pose detection circuitry determines a value for a bio-flow metric using a trained model that generates the metric from the sequence. Finally, the pose detection circuitry maps the value of the bio-flow metric to a specific hand/finger movement that determines the gesture.
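The final mapping stage, from a bio-flow metric value to a hand/finger movement, might be sketched as a simple binning step; the thresholds, gesture names, and averaging over the sequence are illustrative assumptions rather than the patent's trained model:

```python
def detect_gesture(metric_values, thresholds):
    """Bin a bio-flow metric into a hand/finger movement.

    metric_values: per-frame metric values from the image sequence.
    thresholds: list of (upper_bound, gesture), sorted ascending.
    Averaging over the sequence stands in for the model's temporal
    smoothing; all numbers here are invented for the example.
    """
    metric = sum(metric_values) / len(metric_values)
    for upper, gesture in thresholds:
        if metric < upper:
            return gesture
    return "unknown"
```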
36.《Sony Patent | Image display device, image display system, and image display method》
In one embodiment, the patent describes an image display device that automatically adjusts image light based on a user's interpupillary distance. Said image display apparatus comprises a plurality of mirrors that reflect image light emitted from at least one light source and project the image light onto a user's eye and/or an optical element; a mirror angle adjustment unit that adjusts an angle of each of the plurality of mirrors based on a position of the eye or the optical element; and an interpupillary distance adjustment unit that adjusts a distance between the plurality of mirrors based on said angle.
37.《Sony Patent | Gaze tracking for user interface》
In one embodiment, performing gaze tracking to track a user's gaze within a user viewing screen comprises combining gaze tracking with head movement tracking, wherein the head movement tracking provides a rough estimate of the user's direction of gaze, and the eye movement of the gaze tracking provides a fine-tuned estimate of the user's direction of gaze within the user viewing screen. When gaze region estimation is enabled, the user viewing screen is divided into a plurality of gaze regions, and gaze tracking is combined with gaze region estimation to select a gaze region from the plurality of gaze regions as the user's gaze direction.
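The coarse-plus-fine combination of head and eye tracking, followed by gaze-region selection, can be sketched as follows; the single-axis angles, clamp range, and region layout are simplifying assumptions:

```python
def combined_gaze(head_dir_deg, eye_offset_deg, eye_range_deg=30.0):
    """Coarse-plus-fine gaze estimate on one axis, in degrees.

    Head orientation gives the rough direction; the eye offset, clamped
    to a plausible oculomotor range, fine-tunes it.
    """
    eye = max(-eye_range_deg, min(eye_range_deg, eye_offset_deg))
    return head_dir_deg + eye

def gaze_region(gaze_deg, regions):
    """Select the viewing-screen region containing the gaze direction.

    regions: list of (name, lo_deg, hi_deg) half-open intervals.
    """
    for name, lo, hi in regions:
        if lo <= gaze_deg < hi:
            return name
    return None  # gaze falls outside the viewing screen
```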
In one embodiment, a method for updating graphics pipeline information includes executing an application program at a CPU in a first frame cycle to generate primitives for a scene of a first video frame; receiving gaze tracking information of a user's eye in a second frame cycle; predicting, in the second frame cycle, the gaze landing point on the head-up display based at least on the gaze tracking information; performing, in the second frame cycle, a late update of the predicted landing point to a GPU-accessible buffer; performing a shader operation in the GPU in the second frame cycle to generate pixel data based on the primitives and the predicted landing point; and scanning the pixel data from the frame buffer to the head-up display in a third frame cycle.
39.《Sony Patent | Representing virtual objects outside of a display screen》
In one embodiment, the patent describes a method comprising: receiving data defining a virtual environment comprising a plurality of virtual objects, each of the virtual objects having a virtual location within the virtual environment; determining whether at least one of the virtual objects of a subset of said virtual objects is outside of said field of view but at least partially within a virtual display area adjacent to said display screen. In response to determining that the virtual objects of the subset of virtual objects are at least partially within the virtual display area, converting the virtual locations of the virtual objects to real world locations, and displaying virtual elements representing the virtual objects at the real world locations using a mixed reality display device.
40.《Sony Patent | Method for detecting display screen boundaries》
In one embodiment, the patent describes a computer-implemented method for detecting a display screen using an extended reality display device. Said method comprises receiving, at an extended reality display device, an image or data defining an image corresponding to a predetermined frame of a video stream; monitoring a display screen that is displaying the video stream; detecting at least some of a plurality of specified feature points in a predetermined frame of the video stream displayed by the display screen; and determining a physical boundary of the display screen based on the detected specified feature points.
41.《Sony Patent | Visual perception assistance apparatus and method》
In one embodiment, the patent describes a visual perception aid comprising a rendering unit configured to render a virtual environment for display; a selection unit configured to select one or more virtual elements; and an adaptation unit configured to adapt, for a given selected virtual element and based on a distance between said virtual element and a predetermined position within said virtual environment, one or more aspects of the appearance of said virtual element.
In one embodiment, the patent describes a method comprising: determining whether at least one controller is connected to the gaming system, and performing and repeating the following operations until converting to one-handed operation or two-handed operation by the user: (a) if a plurality of controller connections are detected, converting to two-handed operation; (b) if no controller connections are detected, requesting the user to connect said at least one controller; (c) if only a first controller connection is detected, requesting the user to connect a second controller; and (d) if no connection of said second controller is detected, converting to one-handed operation, wherein the conversion to one-handed operation is performed after the user decides to continue using only said first controller.
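The connection-count decision loop in (a) through (d) can be sketched as a simple mode selector; the mode and prompt names below are illustrative, not from the patent:

```python
def select_operation_mode(connected_controllers):
    """Decide the operation mode from the number of detected controllers.

    connected_controllers: number of controllers currently connected.
    Returns the resulting mode or a prompt to show the user; the caller
    would repeat this check until a mode is settled.
    """
    if connected_controllers >= 2:
        return "two_handed"                 # (a) multiple controllers detected
    if connected_controllers == 0:
        return "prompt_connect_controller"  # (b) none detected: ask to connect
    # (c)/(d): exactly one controller; ask for a second, falling back to
    # one-handed operation if the user continues with only the first
    return "prompt_connect_second_or_one_handed"
```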
43.《Sony Patent | Head-mounted display, display control method, and program》
In one embodiment, the patent describes a wearable display that helps the user understand the proximity between the user and surrounding objects. The display module is arranged in front of the eyes of the user wearing the device. Based on the proximity between the user and the surrounding objects, the head-mounted display controls the display module such that the user is able to visually recognize the forward direction.
44.《Samsung Patent | Accelerator, storage device, and vr system》
In one embodiment, the accelerator described in the patent includes a memory access module and a computing module, the memory access module configured to obtain a plurality of raw images from an input device for generating a VR image. The computing module includes a stitching region detector, a stitching processor, an image processor, and a combination processor. The memory access module is configured to send the plurality of raw images to the stitching region detector. The stitching region detector is configured to detect at least one stitching region and an image region from each of the plurality of raw images by performing a detection process on each of the plurality of raw images received from the memory access module, provide the at least one stitching region to the stitching processor, and provide the image region to the image processor. The stitching processor is configured to generate the at least one post-processed stitching region.
In one embodiment, the patent describes a method for facilitating interaction with a simulated object via a device. Said method includes rendering a user interface that includes a map of a physical location; receiving a user selection of an area on the map via the user interface; obtaining location data for the device while displaying at least one image obtained by a camera of the device; in response to determining that said location data meets criteria specifying where said simulated object can be generated, rendering said simulated object with the at least one image obtained by said camera; and automatically adjusting said simulated object based on changes in motion of said device.
46.《Samsung Patent | Wearable device and controlling method thereof》
In one embodiment, the AR device described in the patent may be configured to generate a virtual representation of a user's physical environment. The AR device may capture an image of the user's physical environment to generate a grid map. The AR device may project graphics at a specified location on the virtual bounding box to guide the user in capturing an image of the user's physical environment. The AR device may provide visual, auditory, or haptic guidance to direct the user to look toward waypoints and generate a grid map of the user's environment.
In one embodiment, the patent describes an electronic device comprising an input device and a mobile device configured to communicate with said input device. Said mobile device determines a position of said mobile device or a position of a first external device as a reference coordinate, receives relative positional information of said input device with respect to said reference coordinate and motion information of said input device, determines a motion track based on said relative positional information and said motion information, and either displays said motion track or sends said motion track to a second external device.
48.《Samsung Patent | Holographic display apparatus for providing expanded viewing window》
In one embodiment, the patent describes a holographic display apparatus. Said holographic display apparatus comprises a spatial light modulator and an aperture amplification film configured to amplify a beam diameter of a light beam from each pixel of the plurality of pixels of the spatial light modulator. The beam diameter of each light beam amplified by the aperture amplification film may be larger than a width of an aperture of each pixel of the spatial light modulator.
49.《Samsung Patent | Wearable electronic device including lens assembly》
In one embodiment, the patent describes a wearable electronic device comprising at least four lenses arranged along an optical axis from the user's eye side to the display, said at least four lenses comprising a first lens and a second lens. Said first lens is the closest of said at least four lenses to said user's eye side and comprises at least one flat surface, on which a first quarter-wave plate (QWP) and a first refractive member are provided. Said second lens from the user's eye side includes at least one convex surface, and a second refractive member is provided on said at least one convex surface.
In one embodiment, the patent describes an electronic device comprising a first sensor comprising an anchor and a tag, a second sensor comprising an inertial sensor and a geomagnetic sensor, and a processor configured to estimate a first relative position of an external electronic device based on the arrival time of a signal from the anchor reaching the external electronic device. The processor estimates a first relative attitude of said external electronic device based on measurements from said inertial sensor and said geomagnetic sensor, calculates a relative acceleration of said external electronic device based on said relative attitude by converting an acceleration of said electronic device in a sensor frame to an acceleration in a navigation frame, and estimates the relative position and relative attitude of the external electronic device by applying the calculated relative acceleration and the first relative position to an extended Kalman filter.
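The fusion step can be illustrated with a minimal sketch. The patent's actual filter model is not disclosed, so this uses a 1-D linear constant-velocity filter as a stand-in for the extended Kalman filter it names: the relative acceleration drives the prediction, and the anchor/tag ranging result corrects it:

```python
import numpy as np

# Minimal 1-D sketch (hypothetical model, not the patent's): propagate
# relative position/velocity with the navigation-frame relative
# acceleration, then correct with the UWB-derived relative position.
def kf_step(x, P, rel_accel, uwb_pos, dt, q=0.1, r=0.05):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])         # acceleration input model
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise

    # Predict using the relative acceleration.
    x = F @ x + B * rel_accel
    P = F @ P @ F.T + Q

    # Correct using the ranging measurement.
    y = uwb_pos - H @ x                     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

With a constant ranging measurement and zero acceleration, the estimated relative position converges to the measured value, which is the behavior the abstract attributes to the combined filter.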
51.《Snap Patent | Rotational navigation system in augmented reality environment》
In one embodiment, in order to direct a user to a target object located outside the field of view of the wearer of an AR computing device, a rotational navigation system displays arrows or pointers, called direction indicators, on a display device. The direction indicator is generated based on the angle between the direction of the user's head and the direction of the target object, together with a correction factor. The correction factor is defined such that the greater the angle between the direction of the user's head and the direction of the target object, the greater the horizontal component of the direction indicator.
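One way to realize that relationship is sketched below. The exact formula and the `gain` parameter are assumptions for illustration; the patent only states that the horizontal component grows with the head-to-target angle:

```python
import math

# Hypothetical sketch: the indicator's horizontal component is the sine
# of the head-to-target angle, scaled by a correction factor that grows
# with the magnitude of that angle.
def direction_indicator(head_yaw_deg, target_yaw_deg, gain=1.5):
    # Signed angle from the head direction to the target, in (-180, 180].
    diff = (target_yaw_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    # Correction factor: larger off-axis angle -> larger factor.
    correction = 1.0 + gain * abs(diff) / 180.0
    horizontal = math.sin(math.radians(diff)) * correction
    vertical = math.cos(math.radians(diff))
    return diff, horizontal, vertical
```

For a target 90 degrees to the right, the plain horizontal component would be sin(90°) = 1, but the correction factor (here 1.75) enlarges it, exaggerating the cue for targets far outside the field of view.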
52.《Snap Patent | Neural rendering using trained albedo textures》
In one embodiment, the patent describes operations comprising: accessing a set of albedo textures and a machine learning model associated with a real-world object; obtaining a 3D mesh of said real-world object; receiving an input that selects a new viewpoint differing from the viewpoints from which said real-world object was captured; and generating a photorealistic rendering of the real-world object from the new viewpoint based on the 3D mesh, the set of albedo textures, and the machine learning model.
53.《Snap Patent | Background replacement using neural radiance field》
In one embodiment, the patent describes a system for providing a virtual experience. Said system accesses an image depicting a person and one or more camera parameters indicating a point of view associated with a camera used to capture the image. The system extracts the portion of the image that depicts the person. The system processes the one or more camera parameters through a Neural Radiance Field (NeRF) machine learning model to render an estimated depiction of the scene from the point of view associated with the camera. The system combines the portion of the image that includes the depiction of the person with the estimated depiction of the scene to generate an output image, and causes the output image to be rendered to the client device.
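The final combination step is ordinary alpha compositing. The NeRF rendering itself is out of scope here; this sketch only illustrates blending the extracted person region over the scene estimate rendered from the same camera pose (array layout is an assumption):

```python
import numpy as np

# Sketch of the compositing step: blend the person over the NeRF-rendered
# background using the person segmentation matte as alpha.
def composite(person_rgb, person_mask, scene_rgb):
    """person_rgb, scene_rgb: HxWx3 float arrays in [0, 1];
    person_mask: HxW float matte in [0, 1], where 1 means person."""
    alpha = person_mask[..., None]          # broadcast matte over channels
    return alpha * person_rgb + (1.0 - alpha) * scene_rgb
```

Because both images share the same camera parameters, the person's perspective matches the estimated background, which is the point of conditioning the NeRF on the capture camera's point of view.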
54.《Qualcomm Patent | Delay-dependent priority for extended reality data communications》
In one embodiment, the technology described in the patent relates to wireless communications. Based at least in part on a physical data channel configuration, a network node may communicate a first XR data communication at a first physical data channel timing associated with that configuration, wherein a first priority is associated with said first XR data communication. The network node may communicate a second XR data communication at a second physical data channel timing with a second priority higher than said first priority, based at least in part on a packet delay budget associated with said physical data channel configuration being within a specified packet delay threshold.
55.《Qualcomm Patent | Object scanning using planar segmentation》
In one embodiment, the patent describes techniques for generating a three-dimensional model of an object from one or more images or frames. For example, at least one frame of an object in a scene may be obtained, with a portion of the object positioned on a plane in the at least one frame. The plane may be detected in the at least one frame, and based on the detected plane, the object may be segmented from the plane. A 3D model of the object may be generated based on segmenting the object from the plane. A refined mesh may be generated for a portion of the 3D model that corresponds to the portion of the object disposed on the plane.
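The patent does not name a specific algorithm, but a common way to detect a supporting plane in a depth point cloud is RANSAC: repeatedly fit a plane through three sampled points and keep the plane with the most inliers, then treat the off-plane points as the object. A minimal sketch under that assumption:

```python
import numpy as np

# Illustrative RANSAC plane segmentation (one standard technique; the
# patent's actual detection method is unspecified): find the dominant
# plane, then segment the object as the points far from it.
def ransac_plane(points, iters=200, thresh=0.01, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)      # plane normal from 3 points
        norm = np.linalg.norm(n)
        if norm < 1e-9:                     # skip collinear samples
            continue
        n = n / norm
        dist = np.abs((points - p0) @ n)    # point-to-plane distances
        inliers = dist < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Plane inliers vs. candidate object points.
    return best_inliers, ~best_inliers
```

Libraries such as Open3D ship an equivalent routine (`segment_plane`); the sketch above just makes the sampling loop explicit.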
56.《Qualcomm Patent | Synchronization of target wakeup times》
In one embodiment, the patent describes techniques for wireless communication at a station. The described techniques typically include obtaining signaling indicative of a resolution of a target wake-up time (TWT) field, determining a start time of a TWT service period (SP) based on the indicated resolution, and taking action based on the determined start time of the TWT SP.
57.《Qualcomm Patent | High bandwidth low latency cellular traffic awareness》
In one embodiment, the patent describes techniques for wireless communication. A network node is configured with a quality of service (QoS) profile including one or more file-level QoS parameters, wherein the QoS profile is applied to one or more files associated with a service flow associated with a user equipment (UE), and wherein the header of each PDU of said files includes a file identifier and a file type for the file; the network node implements the QoS profile for said one or more files.
58.《Qualcomm Patent | Low-power fusion for negative shutter lag capture》
In one embodiment, the patent describes a system for processing one or more frames. For example, the process may include obtaining a first plurality of frames associated with a first setup domain from an image capture system; obtaining a reference frame associated with a second setup domain from the image capture system, wherein the reference frame is captured in temporal proximity to a capture input; obtaining a second plurality of frames associated with the second setup domain from the image capture system, wherein the second plurality of frames is captured after the reference frame; and, based on the reference frame, transforming at least a portion of the first plurality of frames to generate a transformed plurality of frames associated with the second setup domain.
59.《Qualcomm Patent | Scale image resolution for motion estimation》
In one embodiment, the example process described in the patent may comprise: determining a motion vector identifying motion between an input image and a reference image based on the input image and the reference image; determining whether motion indicated by the motion vector is below a first threshold; inhibiting the determination of localized motion between said input image and said reference image based on the determination that motion indicated by said motion vector is below said first threshold; determining a transformation matrix based on said motion vector and without using said localized motion between said input image and said reference image; and adjusting the input image based on the transformation matrix.
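The gating logic in that process can be sketched as follows. The threshold value and the helper callable are illustrative assumptions; the patent only specifies that localized motion estimation is skipped when the global motion vector falls below the threshold:

```python
import numpy as np

# Hypothetical sketch of the gating step: small global motion -> build a
# pure-translation transformation matrix from the motion vector alone and
# skip the costlier localized-motion estimation; otherwise refine.
def build_transform(motion_vec, local_motion_fn=None, thresh=2.0):
    dx, dy = motion_vec
    if np.hypot(dx, dy) < thresh or local_motion_fn is None:
        # 3x3 homogeneous translation from the global motion vector.
        return np.array([[1.0, 0.0, dx],
                         [0.0, 1.0, dy],
                         [0.0, 0.0, 1.0]])
    # Above threshold: fall back to localized motion (e.g. per-block
    # vectors) to build a richer transformation.
    return local_motion_fn(motion_vec)
```

The benefit described in the patent follows directly: for nearly static scenes, the input image can be adjusted with one cheap matrix instead of a full localized-motion pass.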
60.《Vuzix Patent | Multi-antenna augmented reality display》
In one embodiment, the patent describes methods for switching between data connections established between an augmented reality display unit and a peripheral. Said method may automatically switch between communication modules, antennas, or frequencies based on sensor data to reduce or eliminate the effects of proximity to water or submersion in water. The switching between communication modules, antennas, or frequencies is based on movement of the user's head, or on the electrical resistance, salinity, or chemical composition of the body of water in which the augmented reality display unit is placed.
In one embodiment, the light-transmitting substrate is arranged to have a first of two main surfaces opposite the eye of a viewer and to direct light by internal reflection between the two main surfaces. An optical output-coupling configuration couples image light, corresponding to the collimated image and guided by internal reflection between the two main surfaces, out of the light-transmitting substrate. A first optical coupling configuration collimates light from the eye to produce collimated light and couples the collimated light into the light-transmitting substrate to be guided by internal reflection. A second optical coupling configuration couples the collimated light out of the light-transmitting substrate to an optical sensor that senses the coupled-out light. The processing system derives a current gaze direction of the eye by processing signals from the optical sensor.
62.《Magic Leap Patent | Massive Simultaneous Remote Digital Presence World》
In one embodiment, the patent describes methods and devices for allowing one or more users to be able to interact with a virtual reality or augmented reality environment. Example systems include a computing network in which a computer server is interconnected to a gateway via a high bandwidth interface to process data and/or enable data communication between the server and one or more local user interface devices. The server includes memory, processing circuitry, and software for designing and/or controlling virtual worlds, as well as storing and processing user data and data provided by other components of the system. One or more virtual worlds may be presented to a user via a user device. A large number of users may each use a device to interact with one or more of the digital worlds at the same time, use the devices to observe and interact with each other, and interact with objects generated in the digital worlds.
In one embodiment, the patent describes an imaging and visualization system for determining and executing the intent of a group of users in a shared space. A related method includes identifying, for a group of users in a shared virtual space, a corresponding goal for each of two or more users in the group. For each of the two or more users, the corresponding intent of the user is determined based on input from a plurality of sensors having different input modalities. At least a portion of the plurality of sensors are sensors of a device of a user that enables the user to participate in the shared virtual space. Based on the respective intent, the system determines whether each user is performing the respective goal for that user.
64.《Magic Leap Patent | Tool bridge》
In one embodiment, the patent describes systems and methods for sharing and synchronizing virtual content. A method may include receiving a first packet of data including first data from a host application via a wearable device including a transmissive display; identifying virtual content based on said first data; presenting a view of said virtual content via said transmissive display; receiving a first user input for said virtual content via said wearable device; generating second data based on said first data and said first user input; and sending, via said wearable device, a second packet of data including said second data to said host application, wherein said host application is configured to be executed via one or more processors of a computer system remote from and in communication with said wearable device.
65.《Magic Leap Patent | Individual viewing in a shared space》
In one embodiment, a mixed reality virtual environment may be shared among multiple users by using multiple view modes that may be selected by the presenter. The plurality of users may wish to view shared virtual objects, such as virtual objects used for educational purposes: artwork in a museum, automobiles, biological specimens, chemical compounds, and the like. The virtual objects may be presented to any number of users in a virtual room. By utilizing the information associated with the virtual object, the presenter (e.g., a teacher) can control the presentation to guide multiple participants (e.g., students). The use of different viewing modes allows a single user to view different virtual content in a shared viewing space, or to view the same virtual content at different locations in the shared space.
66.《Magic Leap Patent | Waypoint Creation In Map Detection》
In one embodiment, the augmented reality device may be configured to generate a virtual representation of the user's physical environment. The augmented reality device may capture an image of the user's physical environment to generate a grid map. The augmented reality device may project a graphic at a specified location of the virtual bounding box to guide the user in capturing the image of the user's physical environment. The augmented reality device may provide visual, auditory, or tactile guidance to direct the user to look at waypoints and generate a grid map of the user's environment.
67.《HTC Patent | Image sensing device and head-mounted display》
In one embodiment, the patent describes an image sensing device comprising a plurality of lens groups, a connector, at least one antenna element, and a plurality of image sensing elements. Each lens group includes a plurality of lens elements arranged along an optical axis from a first side. The connector includes a plurality of first portions and a plurality of second portions, with the second portions connected between the first portions. The antenna elements are respectively provided on the first portions and are configured to provide a plurality of sensing beams toward the target area on the first side. The image sensing elements are respectively provided on a side of the lens groups facing the second side and are configured to sense the reflected beams from the target area. The direction of the second side is opposite to the direction of the first side.
68.《HTC Patent | Method for pose correction and host》
In one embodiment, the method described in the patent comprises: obtaining a first image; determining a first relative position between the host and a first reference object in response to determining that the first reference object of the at least one reference object is present in the first image; obtaining a first reference pose based on said first relative position; and calibrating a pose of the host based on the first reference pose.