Excerpts from New AR/VR Patent Applications filed at the USPTO on December 16, 2023
Article related citations and references: XR Navigation Network
(XR Navigation Network, December 16, 2023) The U.S. Patent and Trademark Office recently published a batch of new AR/VR patent applications. The following is the XR Navigation Network's summary of 66 of them (click a patent title for details). For more patent disclosures, visit the patent section of XR Navigation Network at https://patent.nweon.com/, or join the XR Navigation Network AR/VR patent exchange WeChat group (see the end of the article for details).
In one embodiment, the patent describes a method for monitoring an image capture trigger condition using sensor data from a wrist-wearable device to determine when to capture an image using an imaging device of a head-mounted display. Said method includes receiving sensor data from a wrist-wearable device communicably coupled to the head-mounted display; and determining whether an image capture trigger condition for the head-mounted display is met based on the sensor data received from the wrist-wearable device. Said method also includes instructing the imaging device to capture the image data based on the determination that the image capture trigger condition is satisfied.
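The trigger check described in this abstract can be sketched in a few lines. The gesture threshold, the accelerometer sample format, and all names below are illustrative assumptions, not details from the patent:

```python
import math

def is_capture_triggered(accel_sample, threshold_g=2.5):
    """Return True when the wrist IMU's acceleration magnitude exceeds a
    threshold, suggesting a deliberate capture gesture (assumed heuristic)."""
    ax, ay, az = accel_sample
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return magnitude >= threshold_g

def process_wrist_samples(samples):
    """Instruct the HMD imaging device to capture as soon as any
    incoming sample satisfies the trigger condition."""
    for sample in samples:
        if is_capture_triggered(sample):
            return "capture_image"  # stand-in for a command to the camera
    return "idle"
```

A stream of low-magnitude samples leaves the device idle; one energetic wrist motion fires the capture.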
2. "Meta Patent | System and method for incorporating audio into audiovisual content》
In one embodiment, the patent describes a method comprising detecting a merge attribute of an audiovisual content; analyzing an audio library to determine audio files mapped to said merge attribute; selecting from said audio library said audio files mapped to said merge attribute; merging the selected audio files into said audiovisual content to generate self-centered audiovisual content; and providing the self-centered audiovisual content to the user for audiovisual consumption.
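The attribute-to-audio mapping step can be illustrated with a toy library lookup. The attribute names, file names, and library layout are assumptions of mine; the patent only describes the detect-analyze-select-merge flow:

```python
# Hypothetical audio library keyed by merge attribute.
AUDIO_LIBRARY = {
    "beach": ["waves.ogg", "gulls.ogg"],
    "city": ["traffic.ogg"],
}

def detect_merge_attribute(content_tags):
    """Return the first tag of the audiovisual content that the
    audio library has a mapping for."""
    for tag in content_tags:
        if tag in AUDIO_LIBRARY:
            return tag
    return None

def merge_audio(content_tags):
    """Select the audio files mapped to the detected merge attribute.
    A real system would mix them into the A/V track; here we just
    return the selected file names."""
    attribute = detect_merge_attribute(content_tags)
    if attribute is None:
        return []
    return AUDIO_LIBRARY[attribute]
```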
In one embodiment, the system described in the patent may include a support structure, at least one lens mounted to the support structure, and a transparent antenna film layer disposed over at least a portion of the lens. The transparent antenna film layer may include at least one antenna, and the placement of the transparent antenna film layer may form a gap between the support structure and the transparent antenna film layer.
4. The
In one embodiment, the computer-implemented method described in the patent includes receiving, via the UI of the AR design tool, a selection of a configurable interface element to place the AR design tool and the UI in a configuration phase for configuring the AR effect. The method further includes receiving, via the UI of the AR design tool, an instruction to add a voice command module to the AR effect after the AR design tool and the UI have been placed in the configuration phase in response to the selection of the configurable interface element; configuring one or more parameters of the voice command module while the AR design tool and the UI are in the configuration phase; and, based on the configured one or more parameters of the voice command module, generating the AR effect at runtime utilizing specific voice commands.
In one embodiment, a method described in the patent comprises receiving, while a user interface is displayed on a display of an electronic device associated with a user, sensor data from one or more sensors of the electronic device or of a head-mounted display worn by the user in communication with the electronic device; determining, based at least in part on said sensor data from said one or more sensors, whether enhanced display criteria are met for said electronic device; and, based on the determination that said enhanced display criteria are satisfied, causing an enhanced representation of said user interface to be presented via the display of the head-mounted display.
6.《Meta Patent | Force feedback trigger》
In one embodiment, the system described in the patent may include a handheld device in electronic communication with a controller. The handheld device may include a housing defining an internal cavity and an orifice providing access to the internal cavity. The trigger may include an outer surface and an inner surface. The trigger may be oriented to cover the aperture. A transfer member may be oriented in the internal cavity and coupled to the internal surface of the trigger. The force sensor may be oriented in the internal cavity and may engage a distal end of the transfer member and the internal surface of the trigger.
7.《Meta Patent | Systems and methods for predicting lower body poses》
In one embodiment, the computing system may receive sensor data from one or more sensors coupled to the user. Based on the sensor data, the computing system may generate an upper body pose corresponding to a first portion of the user's body, the first portion including the user's head and arms. The computing system may determine situational information associated with the user. The computing system may generate, based on the upper body pose and the situational information, a lower body pose corresponding to a second portion of the user's body including the user's legs. The computing system may generate a full body pose of the user based on the first upper body pose and the lower body pose.
In one embodiment, the tunable visible light source may include a periodically polarized lithium niobate (PPLN) waveguide and a control mechanism to optimize phase matching of the PPLN waveguide in response to an input signal having a varying wavelength. The control mechanism may include an electro-optical (EO) tuning mechanism, a micro-heater based thermo-optic (TO) control mechanism, and/or an acousto-optic (AO) control mechanism. The control mechanism may generate an electric field, heat, or acoustic waves, respectively, to effect a change in the refractive index of the PPLN waveguide, optimizing the conversion efficiency so as to maximize the output power at the output wavelength of the PPLN waveguide when tuning the input wavelength.
9. "Meta Patent | Systems and methods for cell temperature measurement》
In one embodiment, the patent describes a method of estimating a temperature in an apparatus. Said apparatus includes a main printed circuit board (PCB), a protection and control module (PCM), and one or more batteries electrically connected to the PCM. Said method includes sensing a temperature of the PCB via a PCB temperature sensor and sensing a temperature of the PCM via a PCM temperature sensor; and estimating a temperature of the at least one battery based on the temperature determined via the PCB temperature sensor and the temperature determined via the PCM temperature sensor.
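The estimation step combines the two board readings into one cell-temperature estimate. A minimal sketch, assuming a fixed linear blend weighted toward the PCM sensor (the actual estimation model and the 0.7 weight are not specified in the patent):

```python
def estimate_battery_temperature(pcb_temp_c, pcm_temp_c, pcm_weight=0.7):
    """Estimate cell temperature as a weighted blend of the PCB and PCM
    sensor readings; weighting the PCM (assumed closer to the cells)
    more heavily is an illustrative assumption."""
    if not 0.0 <= pcm_weight <= 1.0:
        raise ValueError("pcm_weight must be in [0, 1]")
    return pcm_weight * pcm_temp_c + (1.0 - pcm_weight) * pcb_temp_c
```

With a PCB reading of 30 °C and a PCM reading of 40 °C, the blend lands between the two, nearer the PCM value.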
10.《Apple Patent | Displaying representations of environments》
In one embodiment, the method described in the patent includes displaying a home ER environment characterized by home ER world coordinates, together with a plurality of stereo model view representations. Each stereo model view representation includes ER objects arranged in a spatial relationship according to corresponding ER world coordinates. In response to detecting an input directed to a first stereo model view representation, the home ER environment is transformed. Transforming the home ER environment includes transforming the spatial relationships between subsets of the ER objects as a function of the home ER world coordinates and the corresponding ER world coordinates. In response to detecting an input associated with a first of the plurality of stereo model view representations, the display of said first stereo model view representation is changed from a first viewing vector to a second viewing vector while maintaining the arrangement of the ER objects.
11.《Apple Patent | Three-dimensional mesh compression using a video encoder》
In one embodiment, the system described in the patent includes an encoder configured to compress and encode data of a three-dimensional mesh. To compress the 3D mesh, the encoder determines sub-meshes and, for each sub-mesh, determines texture facets and geometry patches. Different encoding parameters may be used for these different encoding units. The encoding parameters are adjusted for vertices shared between encoding units to avoid introducing artifacts. A decoder receives the bit stream generated by the encoder and reconstructs the 3D mesh.
12.《Apple Patent | Mesh compression using coding units with different encoding parameters》
In one embodiment, the system described in the patent includes an encoder configured to compress and encode data of a three-dimensional mesh. To compress the 3D mesh, the encoder determines sub-meshes and, for each sub-mesh, determines texture facets and geometry patches. Different encoding parameters may be used for these different encoding units. The encoding parameters are adjusted for vertices shared between encoding units to avoid introducing artifacts. A decoder receives the bit stream generated by the encoder and reconstructs the 3D mesh.
13.《Apple Patent | Generating content for physical elements》
In one embodiment, the first content may be obtained in response to identifying a first physical element of the first object type. The first content may be associated with the first object type. A second content may be obtained in response to identifying a second physical element of a second object type. The second content may be associated with a second object type. The second physical element may be detected as being within a threshold distance of the first physical element. A third content may be generated based on a combination of the first content and the second content. The third content may be associated with a third object type that is different from the first object type and the second object type.
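The proximity-gated combination step lends itself to a small sketch. The threshold distance, the content records, and the combination rule are assumptions for illustration; the patent only says a third content is generated when the two elements fall within a threshold distance:

```python
import math

def within_threshold(pos_a, pos_b, threshold_m=0.5):
    """True when two detected physical elements are close enough
    to combine their associated content (0.5 m is an assumed value)."""
    return math.dist(pos_a, pos_b) <= threshold_m

def combine_content(first, second, pos_a, pos_b):
    """Return a third content derived from the first two contents,
    or None when the elements are too far apart to combine."""
    if within_threshold(pos_a, pos_b):
        return {"type": "combined", "sources": [first["type"], second["type"]]}
    return None
```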
14.《Apple Patent | Method and apparatus for visualizing sensory perception》
In one embodiment, the method described in the patent may be used to generate a third-person view of a computer-generated reality (CGR) environment. Said method comprises: obtaining a first viewing vector associated with a first user within the CGR environment; determining a first viewing cone of the first user within the CGR environment based on the first viewing vector associated with the first user and one or more depth attributes; generating a representation of said first viewing cone; and displaying, via a display device, a third-person view of said CGR environment, said third-person view comprising an Avatar of said first user and the representation of said first viewing cone adjacent to said Avatar.
15.《Apple Patent | Extended reality based digital assistant interactions》
In one embodiment, the example process described in the patent comprises: while displaying a portion of an XR environment representing a user's current field of view: detecting a user's gaze on a first object displayed in the XR environment; in response to detecting the user's gaze on the first object, expanding the first object to include a list of objects representing a second object of the digital assistant; detecting the user's gaze on said second object; based on detecting the user's gaze on the second object, displaying a first animation of the second object indicating that a digital assistant session is initiated; receiving a first audio input from said user; and displaying a second animation of the second object, the second animation indicating that the digital assistant is actively listening to the user.
16.《Apple Patent | Accessible mixed reality applications》
In one embodiment, the patent describes an example process for placing a virtual object in an environment, comprising: displaying a first view of the environment; detecting movement of an electronic device from a current location to an updated location; based on the detection of the movement from the current location to the updated location, displaying a second view of the environment; receiving a user input for placing the virtual object; and, in response to receiving the user input, placing the virtual object at a second location.
17.《Apple Patent | Merged 3d spaces during communication sessions》
In one embodiment, while in a communication session, a participant views an extended reality environment representing a portion of a first user's physical space merged with a portion of a second user's physical space. The individual spaces are aligned based on selected vertical surfaces within each physical environment, such as walls. For example, each user may manually select a corresponding wall of their own physical room, and each user may then be presented with a view of the two rooms appearing to be stitched together along the selected walls. In one embodiment, the rooms are aligned and merged so that the walls appear to be knocked down or erased and turned into entrances into the other user's room.
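Aligning two rooms along selected walls amounts to finding a rigid transform that maps one wall segment onto the other. A minimal 2D sketch, my own simplification (it ignores which side of the wall each room occupies and works from one endpoint pair):

```python
import math

def align_rooms(wall_a, wall_b):
    """Return a function mapping room-B coordinates into room-A
    coordinates so that wall_b coincides with wall_a. Each wall is a
    pair of 2D endpoints ((x0, y0), (x1, y1))."""
    (ax0, ay0), (ax1, ay1) = wall_a
    (bx0, by0), (bx1, by1) = wall_b
    # rotation that aligns wall_b's direction with wall_a's direction
    theta = (math.atan2(ay1 - ay0, ax1 - ax0)
             - math.atan2(by1 - by0, bx1 - bx0))
    cos_t, sin_t = math.cos(theta), math.sin(theta)

    def transform(p):
        # rotate about wall_b's first endpoint, then translate it onto wall_a's
        x, y = p[0] - bx0, p[1] - by0
        return (ax0 + cos_t * x - sin_t * y,
                ay0 + sin_t * x + cos_t * y)

    return transform
```

Mapping wall_b's far endpoint through the returned transform lands it on wall_a's far endpoint, so the two rooms share the selected wall.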
18.《Apple Patent | Digital-to-analog converter clock tracking systems and methods》
In one embodiment, a plurality of unit cells of a digital-to-analog converter (DAC) may be activated simultaneously to generate an analog signal based on the decoded digital signal. Latches may be used on one or more decoding levels, and the latches may be activated in response to a clock signal to recapture at least a portion of the decoded data signals, thereby maintaining or improving synchronization of the activation of the unit cells. However, the latches may consume additional power during operation. Accordingly, clock tracking techniques such as static clock tracking, dynamic clock tracking, or differential clock tracking may be utilized to generate clock path activation signals.
19.《Apple Patent | Image display within a three-dimensional environment》
In one embodiment, the example process described in the patent may include obtaining a 3D image including a stereoscopic image pair, wherein the stereoscopic image pair includes left eye content corresponding to a left eye viewpoint and right eye content corresponding to a right eye viewpoint; generating a projection of the 3D image in a 3D environment by projecting a portion of the 3D image to form a shape within the 3D environment; and providing a view of the 3D environment that includes the projection of the 3D image.
20.《Apple Patent | Electronic devices with display operation based on eye activity》
In one embodiment, the electronic device described in the patent may have a display for displaying image content. A head-mounted support structure in the device may be used to support the display. The electronic device may have an eye monitoring system for detecting eye sweeps and blinks. Control circuitry in the electronic device may coordinate operation of the display with periods of suppressed visual sensitivity associated with the sweeps and blinks.
twenty two. "Apple Patent | Head-mounted device with adjustment mechanism》
In one embodiment, the head-mounted device described in the patent includes a first device portion and a second device portion. A first coupler portion of the first device portion is connectable to a second coupler portion of the second device portion to define a connection position, where the first device portion is connected to the second device portion, and a disconnection position, where the first device portion is disconnected from the second device portion. A second regulator portion of the second device portion causes a first regulator portion of the first device portion to move the first optical module and the second optical module in response to movement of the first device portion and the second device portion from the disconnected position to the connected position.
twenty two. "Apple Patent | Sensor for head-mounted display》
In one embodiment, the head-mounted display described in the patent includes a housing. A support member is attached to the housing. A sensor is disposed on a front surface of the support member and is configured to measure a characteristic of a user.
twenty three. "Microsoft Patent | Full body tracking using fusion depth sensing》
In one embodiment, the technology described in the patent may be used to detect, measure, and/or track the location of an object via a radar sensor device attached to a wearable device. Each radar sensor generates, captures, and evaluates a radar signal associated with the wearable device and the surrounding environment. Objects located within a field of view with sufficient reflectivity will generate radar return signals, each with a characteristic time of arrival (TOA), angle of arrival (AOA), and frequency shift (Doppler shift). The sensed return signals can be processed to determine distance and direction, as well as to identify objects based on their radar characteristics. Object information including position and identification may be further parsed based on correlation with measurements from one or more digital cameras or inertial measurement units.
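The time-of-arrival and Doppler quantities mentioned here map to standard radar relations: range is half the round-trip distance at the speed of light, and radial velocity follows from the Doppler shift of the reflected carrier. A small sketch of just those formulas (no signal processing):

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_toa(round_trip_seconds):
    """Radar range from time of arrival: the signal travels out
    and back, so halve the round-trip path."""
    return C * round_trip_seconds / 2.0

def radial_velocity_from_doppler(doppler_hz, carrier_hz):
    """Radial velocity from the Doppler shift of a reflected signal:
    v = f_d * c / (2 * f_c), valid for v << c."""
    return doppler_hz * C / (2.0 * carrier_hz)
```

A 20 ns round trip corresponds to a target about 3 m away; a 400 Hz shift on a 60 GHz carrier corresponds to roughly 1 m/s of radial motion.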
twenty four. "Microsoft Patent | Spad array for intensity image capture and time of flight capture》
In one embodiment, the patent describes a system for facilitating intensity image capture and time-of-flight capture. Said system includes an image sensor array including a plurality of image sensor pixels, one or more processors, and one or more hardware storage devices storing instructions, said instructions being executable by said one or more processors, to configure said system to facilitate intensity image capture and time-of-flight capture operations by configuring said system to use said image sensor array to perform interleaved intensity image capture as well as time-of-flight capture.
25.《Qualcomm Patent | Virtual models for communications between user devices and external observers》
In one embodiment, a system and method for interaction between a self-driving vehicle and one or more external observers includes a virtual model of a driver of the self-driving vehicle. The virtual model may be generated by the self-driving vehicle and displayed to the one or more external observers, and may use a device worn by the external observers. The virtual model may use gestures or other visual cues to facilitate interaction between the external observer and the self-driving vehicle. The virtual model may be encrypted with features of the external observer, such as an image of the external observer's face, iris, or other representative features. A plurality of virtual models for a plurality of external observers may be used simultaneously for a plurality of communications while preventing interference due to possible overlap of the plurality of virtual models.
26.《Qualcomm Patent | Thermal management of an electronic device》
In one embodiment, the patent describes a system for managing a thermal profile of an electronic device. For example, the described method may include setting the electronic device to a first thermal profile based on a first scenario, receiving a scenario change indicating that the electronic device is associated with a second scenario, and setting the electronic device to a second thermal profile based on the second scenario. The setting of the electronic device to the second thermal profile may occur in an interactive mode in which user input is received or a non-interactive mode in which the electronic device automatically selects the second thermal profile.
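The scenario-to-profile switching, including the interactive override, can be sketched as a simple lookup. The scenario names, power limits, and fan settings are illustrative assumptions, not values from the patent:

```python
# Hypothetical mapping of usage scenarios to thermal profiles.
THERMAL_PROFILES = {
    "gaming": {"max_soc_power_w": 8.0, "fan": "high"},
    "video": {"max_soc_power_w": 4.0, "fan": "low"},
    "idle": {"max_soc_power_w": 1.5, "fan": "off"},
}

def select_profile(scenario, interactive=False, user_choice=None):
    """Pick the thermal profile for a scenario. In interactive mode a
    valid user choice overrides the automatic selection; otherwise the
    device selects based on the scenario, falling back to idle."""
    if interactive and user_choice in THERMAL_PROFILES:
        return THERMAL_PROFILES[user_choice]
    return THERMAL_PROFILES.get(scenario, THERMAL_PROFILES["idle"])
```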
27.《Qualcomm Patent | Verbal communication in a virtual world》
In one embodiment, the patent describes a method for providing communication in a metaverse. Said method generally comprises receiving, via one or more microphones, voice data of a user corresponding to an Avatar; selecting at least one device from a plurality of devices associated with a plurality of other Avatars based at least in part on the strength of the user's voice message; and transmitting said voice data to the selected at least one device.
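One plausible reading of "selecting based on voice strength" is an audibility cutoff over virtual distance. The inverse-distance attenuation model and all dB figures below are my own illustrative assumptions; the patent only says selection depends on voice strength:

```python
import math

def select_recipients(speaker_pos, listeners, base_level_db=60.0,
                      audible_threshold_db=30.0):
    """Pick the devices whose avatars would plausibly hear the speaker,
    attenuating level by 6 dB per doubling of distance (inverse
    distance law for sound pressure)."""
    selected = []
    for device_id, pos in listeners.items():
        distance = max(math.dist(speaker_pos, pos), 1.0)  # clamp to avoid log(0)
        level_db = base_level_db - 20.0 * math.log10(distance)
        if level_db >= audible_threshold_db:
            selected.append(device_id)
    return selected
```

A nearby avatar's device is selected; a device 100 virtual metres away falls below the audibility threshold and receives nothing.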
28.《Qualcomm Patent | Systems and methods of automated imaging domain transfer》
In one embodiment, the patent describes imaging systems and techniques. The imaging system receives, from an image sensor, image data of a user with a pose and/or facial expression. The image sensor captures a first set of images in a first electromagnetic (EM) frequency domain, such as the infrared and/or near-infrared domain. The imaging system generates a representation of the user in the first pose in a second EM frequency domain at least in part by inputting the images into one or more trained machine learning models. The representation of the user is based on image characteristics associated with at least a portion of image data of the user in the second EM frequency domain. The imaging system outputs the representation of the pose of the user in the second EM frequency domain.
29.《Google Patent | Closed loop photography》
In one embodiment, the patent describes improved techniques for generating photographs using a camera or other image capture device. Said techniques include the use of "soft" indicators, such as tactile, audio, flat-panel display, and other modalities, to warn the user of potential occlusions before the user activates a shutter trigger.
30.《Google Patent | Scanning projector pixel placement》
In one embodiment, the system described in the patent displays graphical content by modulating the intensity of one or more emitted beams while redirecting the emitted beams along a scanning path that includes a plurality of locations on a projection surface. The one or more emitted beams, each having a corresponding intensity, are emitted and redirected along the scanning path including at least some of the plurality of locations. During the redirection of the one or more emitted beams, the corresponding intensity of each beam is modulated according to timing information to display one or more pixels of an image at each of the at least some of the plurality of locations.
31.《Google Patent | Wearable devices with camera》
In one embodiment, the method described in the patent may comprise, based on a scenario of a wearable device, periodically capturing and storing a plurality of images by the wearable device, said plurality of images being stored in a buffer included in the wearable device; periodically erasing the least valuable image of the plurality of images from the buffer; receiving a request to view the plurality of images stored in the buffer; and, in response to the request, outputting the plurality of images stored in the buffer.
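The capture-and-evict buffer behaves like a bounded store that drops its lowest-value entry. A minimal sketch; the per-image "value" score (e.g. sharpness or interest) is an assumed input, since the patent does not define how value is computed:

```python
class ImageBuffer:
    """Bounded rolling buffer that evicts the least valuable image."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.images = []  # list of (value, image) pairs, capture order

    def add(self, value, image):
        """Store a new capture; when full, first erase the image
        with the lowest value score."""
        if len(self.images) >= self.capacity:
            self.images.remove(min(self.images, key=lambda pair: pair[0]))
        self.images.append((value, image))

    def view(self):
        """Return the stored images in capture order (the 'request
        to view' path of the method)."""
        return [image for _, image in self.images]
```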
32.《Google Patent | Smart eyewear with access point for data input/output》
In one embodiment, the eyewear described in the patent includes a frame configured to hold a first lens and a second lens, a first side support coupled to the frame using a first hinge, and a second side support coupled to the frame using a second hinge. At least one of the first side support and the second side support includes one or more electronic components, and at least one of the first hinge and the second hinge is electrically coupled to the at least one electronic component and serves as an electrical contact for accessing a signal from the electronic component. Accordingly, at least one of the first hinge and the second hinge is electrically connected to the at least one electronic component and forms an access point at the eyewear for accessing signals from the electronic component by connecting an external device to at least one of the first hinge and the second hinge.
33.《Samsung Patent | Retinal projection display device and phase profile optimization method thereof》
In one embodiment, the patent describes a retinal projection display device. The retinal projection display device includes a light source configured to emit light, a spatial light modulator configured to generate diffracted light by diffracting the emitted light, a holographic optical element configured to reflect the diffracted light by compounding the diffracted light into a plurality of complex wavefronts, and a field lens. Said field lens is configured to focus said plurality of complex wavefronts to a plurality of corresponding focal points in a viewing window, wherein said plurality of complex wavefronts overlap each other.
In one embodiment, the patent describes a multi-degree-of-freedom movable table comprising a plurality of joint portions. At least one of said plurality of joint portions has a joint structure comprising a first flexible member, a first rigid member, and a second rigid member, said first rigid member and said second rigid member being arranged spaced apart from each other with a first gap therebetween such that said first gap exposes a first surface of said first flexible member. Honeycomb patterns having shapes complementary to each other are provided at a first end of the first rigid member adjacent to the first gap and at a second end of the second rigid member adjacent to the first gap, respectively.
In one embodiment, the patent describes a method performed by a terminal in a wireless communication system. Said method comprises receiving, from a base station, configuration information associated with a set of a plurality of downlink resources, said set being configured to occur periodically; monitoring the plurality of downlink resources of said set in a time slot for receiving downlink data; and, if said downlink data is received in a downlink resource of said set in said time slot, skipping monitoring of the remaining one or more downlink resources of said set in said time slot.
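The monitor-then-skip behavior within one slot can be sketched as a simple early-exit walk over the set's resource occasions. Names and the occasion representation are illustrative assumptions:

```python
def occasions_monitored(set_occasions, received_on=None):
    """Walk the downlink resource occasions of one periodic set within
    a slot; once downlink data is received on an occasion, monitoring
    of the remaining occasions of the set in that slot is skipped."""
    monitored = []
    for occasion in set_occasions:
        monitored.append(occasion)
        if occasion == received_on:
            break  # data received: skip the rest of the set this slot
    return monitored
```

If data lands on the second of four occasions, the terminal monitors only the first two; with no reception it monitors the whole set.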
In one embodiment, the patent describes a method performed by a terminal supporting non-terrestrial network (NTN) communications in a wireless communication system. Said method may comprise identifying a synchronization raster for performing an initial access of the NTN communication, and receiving a synchronization signal block (SSB) from a satellite based on the identified synchronization raster, wherein the position of said synchronization raster may be identified by a global synchronization channel number (GSCN).
In one embodiment, the patent describes a method performed by a user equipment (UE) in a wireless communication system, said method comprising receiving, from a base station, AI model-related configuration information; performing monitoring of a first AI model of said UE for encoding and decoding channel state information (CSI); and reporting the monitoring results to the base station. The first AI model includes a first encoder and a first decoder of the UE, and the first decoder may be associated with a second AI model of the base station that includes a second decoder.
38.《Samsung Patent | Electronic device for providing ar/vr environment, and operation method thereof》
In one embodiment, the electronic device described in the patent may include a display, a processor, and a memory. The memory may store instructions to cause the processor to: execute a first application providing a stereoscopic screen; store in a first frame buffer a first execution screen generated by rendering a screen provided by said first application; while displaying said first execution screen, recognize a request for execution of a second application providing a non-stereoscopic screen; respond to the request by executing said second application; store a second execution screen generated by rendering a screen provided by said second application in said first frame buffer; store a third execution screen generated by stereoscopically rendering said second execution screen in a second frame buffer different from said first frame buffer; change the frame buffer referenced by the display from the first frame buffer to the second frame buffer; and display the third execution screen.
39.《Samsung Patent | Device and method to calibrate parallax optical element》
In one embodiment, the patent describes an electronic device comprising a display for outputting an image, a parallax optical element configured to provide light corresponding to the image to a plurality of viewpoints, an input interface configured to receive an input for calibrating the parallax optical element, and a processor. Said processor is configured to output, via said display, a patterned image generated by rendering a calibration pattern for a reference viewpoint, adjust at least one of a pitch parameter, a tilt angle parameter, and a positional offset parameter of said parallax optical element based on said input, and update the patterned image by re-rendering the calibration pattern based on the adjusted parameter.
In one embodiment, the patent describes an electronic device comprising a light source configured to output light, a pattern mask configured to alter a path of light transmitting a pattern through the pattern mask, an image sensor configured to receive light, and at least one processor. The processor is configured to obtain a coded image that is phase-modulated based on the light transmitted through said pattern mask, to obtain feature points of the eye from said coded image, and to obtain gaze information of a user based on said feature points.
41.《Samsung Patent | Method and apparatus for generating augmented reality view》
In one embodiment, a method of generating an AR view for a first user performed by a first electronic device comprises: obtaining an image of a space captured by a second electronic device and AR experience information about at least one AR object in the space, wherein said AR experience information corresponds to a second user; obtaining at least one path for generating the AR view based on said image of the space and said AR experience information; generating said AR view based on said at least one path; and outputting the AR view.
In one embodiment, the optimization method described in the patent includes: setting a first hologram profile as a variable; and performing an optimization loop a predetermined number of times, wherein the optimization loop includes encoding the first hologram profile into a binary hologram profile by using an ApprovSign function; calculating a field value of the holographic image on a display surface of the binary hologram profile by using a tiling function that accounts for higher-order diffraction noise of the holographic image; calculating an intensity of the holographic image on the display surface; calculating a loss function value based on a difference between the intensity of the holographic image and an intensity of a target image; and updating the first hologram profile to a second hologram profile based on the loss function value.
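The loop structure of this method (encode to binary, evaluate the field and intensity, compute a loss against the target, update) can be imitated with a toy optimizer. Everything here is a deliberate simplification of mine: a sign function stands in for the ApprovSign encoding, a direct DFT stands in for the tiled field computation, and a random-search update replaces the patent's update rule:

```python
import cmath
import random

def binarize(profile):
    """Encode a real-valued hologram profile into a binary (+1/-1)
    profile; a stand-in for the patent's ApprovSign-style encoding."""
    return [1.0 if v >= 0.0 else -1.0 for v in profile]

def intensity(binary_profile):
    """Far-field intensity of the binary profile via a direct DFT
    (a stand-in for the patent's tiled field computation)."""
    n = len(binary_profile)
    out = []
    for k in range(n):
        field = sum(binary_profile[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                    for j in range(n))
        out.append(abs(field) ** 2)
    return out

def loss(profile, target):
    """Squared-error loss between the achieved and target intensities."""
    achieved = intensity(binarize(profile))
    return sum((a - t) ** 2 for a, t in zip(achieved, target))

def optimize(profile, target, iterations=200, seed=0):
    """Toy optimization loop: keep a random perturbation of the
    hologram profile whenever it lowers the intensity loss."""
    rng = random.Random(seed)
    best = list(profile)
    best_loss = loss(best, target)
    for _ in range(iterations):
        candidate = [v + rng.gauss(0.0, 0.5) for v in best]
        candidate_loss = loss(candidate, target)
        if candidate_loss < best_loss:
            best, best_loss = candidate, candidate_loss
    return best, best_loss
```

By construction the loop never increases the loss, which mirrors the patent's iterate-until-converged structure even though the update rule differs.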
43.《Sony Patent | Dual sphere foot operated position-based controller》
In one embodiment, the patent describes a controller apparatus comprising two or more balls, said two or more balls being of sufficient size to be maneuvered by a person with a single foot. Three or more bearings support at least one ball from below and allow rotation relative to at least two axes. One or more position encoders are provided near a surface of each of the two or more balls. The encoders are configured to determine a rotational displacement of each of the two or more balls relative to the two or more axes.
In one embodiment, the patent describes methods for processing audio for a user. Said method includes recognizing a user's interaction region when the user interacts with the game in a real world environment. The real world environment is monitored to detect any changes that may affect the interaction region. In response to the detected changes, a volume of audio directed to one or both sides of the head unit is dynamically adjusted.
In one embodiment, the position and orientation of the controller is determined from a known configuration of the two or more light sources relative to each other and relative to the body of the controller, as well as from output signals of a dynamic vision sensor (DVS) generated in response to changes in light output from the light sources. The output signals indicate the time of an event at a corresponding photosensitive element in an array in the DVS and the array position of that photosensitive element. The position and orientation of one or more objects is determined from signals generated by two or more photosensitive elements, said signals being generated by other light reaching said two or more photosensitive elements.
In one embodiment, the DVS has an array of photosensitive elements of a known configuration and, in response to changes in light output from two or more light sources of the known configuration relative to each other and relative to the controller body, outputs a signal corresponding to an event at a corresponding photosensitive element in the array. The signals are indicative of the timing of the event and the array position of the corresponding photosensitive element. The filter selectively transmits light from the light source to the photosensitive elements in the array and selectively blocks other light from reaching those elements and vice versa. The processor determines the position and orientation of the controller based on the time of the event, the position of the array of corresponding photosensitive elements, and the known configuration of the light source, and determines the position and orientation of one or more objects based on signals generated by two or more photosensitive elements generated by other light.
47.《Sony Patent | Asynchronous dynamic vision sensor led ai tracking system and method》
In one embodiment, the tracking system described in the patent uses a dynamic vision sensor (DVS) operably coupled to a processor. The DVS has an array of photosensitive elements of known configuration and outputs signals corresponding to events at corresponding photosensitive elements in the array in response to light from two or more light sources mounted in a known configuration. The output signals indicate the timing of each event and the location of the corresponding photosensitive element. The processor determines an association between each event and the two or more corresponding light sources and uses the determined associations, the known configuration of the light sources, and the positions of the corresponding photosensitive elements in the array to fit the position and orientation of the controller.
48.《Sony Patent | Synchronous dynamic vision sensor led ai tracking system and method》
In one embodiment, the tracking system and method described in the patent include a processor and a controller operably coupled to said processor. Two or more light sources are mounted in a known configuration relative to each other and to the body of the controller, and are configured to flash in a predetermined time sequence. A dynamic vision sensor outputs signals corresponding to two or more events at two or more corresponding photosensitive elements in its array in response to changes in light output from the two or more light sources, the signals indicating the timing of the events and the positions of the corresponding photosensitive elements in the array. The processor determines the position and orientation of the controller based on the timing and positions of the two or more events and the known information.
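The core idea of such synchronous LED tracking — matching each DVS event to the LED whose scheduled flash is nearest in time — can be sketched as follows. All names, timings, and the tolerance value are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch: associating DVS events with LEDs that flash in a
# known, predetermined time sequence. An event is matched to the LED whose
# scheduled flash is closest to the event timestamp, within a tolerance.

def associate_events(events, blink_schedule, tolerance=0.002):
    """events: list of (timestamp_s, pixel) tuples from the DVS.
    blink_schedule: dict mapping led_id -> list of flash timestamps (s).
    Returns a list of (led_id, pixel) pairs; led_id is None when no
    scheduled flash falls within `tolerance` seconds of the event."""
    associations = []
    for t, pixel in events:
        best_led, best_dt = None, tolerance
        for led_id, flashes in blink_schedule.items():
            for flash_t in flashes:
                dt = abs(t - flash_t)
                if dt < best_dt:
                    best_led, best_dt = led_id, dt
        associations.append((best_led, pixel))
    return associations

schedule = {"led_a": [0.000, 0.010], "led_b": [0.005, 0.015]}
events = [(0.0001, (12, 40)), (0.0051, (30, 41)), (0.0099, (13, 40))]
print(associate_events(events, schedule))
# → [('led_a', (12, 40)), ('led_b', (30, 41)), ('led_a', (13, 40))]
```

Once each event carries an LED identity, the pixel positions of the identified LEDs and their known geometry on the controller body give the correspondences needed to fit a pose.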
In one embodiment, the method described in the patent may provide an enhanced augmented reality entertainment environment in which known or unknown physical objects may be recognized and used as reference locations for visual elements, and the visual elements may then be rendered in the augmented reality environment. The described method may be initialized in response to user input, and the user input may select the physical object to be used for recognition. Said method may be performed in real time, i.e., in response to the user providing input to the user interface, with each next step performed once the information and/or data required for the previous step has been identified.
In one embodiment, the patent describes a data processing apparatus comprising circuitry configured to receive an image of a user in an environment; detect an object in said image; execute one of a plurality of user-selectable processes associated with said object, each of said plurality of user-selectable processes being associated with a visual feature that hides said object in said image; and, after said one of said plurality of user-selectable processes has been executed, send data representing said image.
51.《Sony Patent | Single sphere foot operated position-based controller》
In one embodiment, the patent describes a foot-operated controller apparatus. Said apparatus includes a ball large enough to be maneuvered by a person using both feet, and a support device. The support device is configured to support the ball from below, limiting translation of the ball while allowing the ball to rotate about its center relative to at least two axes. One or more position encoders are disposed proximate a surface of the ball and are configured to determine the rotational displacement of the ball about the two or more axes.
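A position encoder resting against the sphere's surface reads out in encoder counts, which must be converted to ball rotation. A minimal sketch of that conversion, assuming a small encoder wheel rolling without slip on the ball (the function name and parameter values are illustrative, not from the patent):

```python
# Hypothetical sketch: converting a surface encoder reading into the ball's
# rotational displacement about one axis. A wheel of radius r rolling on the
# ball traces arc length s = theta_wheel * r; the ball of radius R then
# rotates by theta_ball = s / R.
import math

def ball_rotation(counts, counts_per_rev, wheel_radius, ball_radius):
    """Return the ball's rotation (radians) about one axis for a given
    encoder count, assuming no slip between wheel and ball."""
    wheel_angle = 2.0 * math.pi * counts / counts_per_rev
    return wheel_angle * wheel_radius / ball_radius

# One full wheel revolution (400 counts) with a 1 cm wheel on a 10 cm ball:
print(round(ball_rotation(400, 400, 0.01, 0.10), 4))  # → 0.6283 rad
```

Two such encoders on non-parallel axes are enough to recover rotation about the two axes the support device permits.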
52.《Sony Patent | Dynamic vision sensor tracking based on light source occlusion》
In one embodiment, the patent describes a tracking system comprising a processor, a controller, two or more light sources, and a dynamic vision sensor (DVS). The light sources have a known configuration relative to each other and to the controller and are turned on and off in a predetermined sequence. The DVS includes an array of photosensitive elements of known configuration and responds to a change in light from the light sources by outputting a signal corresponding to an event at a corresponding photosensitive element in the array. The signals indicate the timing of the event and the position of the corresponding photosensitive element. The processor determines an association between each event and one or more light sources and, based on that association, determines an occlusion of one or more light sources. The processor estimates the location of the object using the determined occlusion, the known configuration of the light sources, and the positions of the corresponding photosensitive elements in the array.
53.《Sony Patent | Dynamic vision sensor based eye and/or facial tracking》
In one embodiment, the patent describes a tracking system using one or more light sources to direct light toward one or more eyes of a user. A dynamic vision sensor (DVS) operably coupled to a processor is configured to observe the user's eye. The DVS has an array of photosensitive elements of a known configuration and, in response to a change in light from the light sources reflected from a portion of the user's eye, outputs a signal corresponding to an event at the corresponding photosensitive element in the array. The output signal includes information corresponding to the position of the corresponding photosensitive element in the array. The processor determines an association between each event and the corresponding particular light source and uses the determined association, the position of the light source relative to the user's eye, and the position of the corresponding photosensitive element in the array to fit the orientation of the user's eye.
54.《Snap Patent | Procedurally generating augmented reality content generators》
In one embodiment, the patent describes procedural generation of an augmented reality content generator. Said method may determine at least one primitive shape based on at least one graphical element in an AR facial pattern; generate a JavaScript Object Notation (JSON) file using the at least one primitive shape; and generate Internal Facial Makeup Format (IFM) data using the JSON file.
55.《Snap Patent | Fast ar device pairing using depth predictions》
In one embodiment, the patent describes a method for aligning the coordinate systems of separate AR devices. Said method comprises generating a predicted depth of a first point cloud by applying a pre-trained model to a first image generated by a first monocular camera of a first AR device and a first sparse 3D point generated by a first SLAM system at the first AR device; generating a predicted depth of a second point cloud by applying said pre-trained model to a second image generated by a second monocular camera of a second AR device and a second sparse 3D point generated by a second SLAM system at said second AR device; and determining a relative pose between said first AR device and said second AR device by registering said first point cloud with said second point cloud.
56.《Snap Patent | Ar assisted safe cycling》
In one embodiment, the patent describes a method of providing a proximity warning using a head-mounted device. Said method uses image processing techniques to determine the distance between the head-mounted device and an object of interest, such as a bicyclist, jogger, or vehicle. The speed of the head-mounted device is determined using a GPS receiver or other location component located in the head-mounted device or an associated user device. A braking distance is computed from the speed of the head-mounted device and compared to the distance to the object of interest. If the distance to the object of interest is less than the braking distance, the head-mounted device provides a warning notification.
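The speed-to-braking-distance comparison described above can be sketched with elementary kinematics. The deceleration and reaction-time values below are illustrative assumptions; the patent text does not specify how the braking distance is computed:

```python
# Hypothetical sketch of the proximity-warning check: braking distance is
# taken as the distance covered during the rider's reaction time plus the
# physical stopping distance v^2 / (2a).

def braking_distance(speed_mps, deceleration=4.0, reaction_time=1.0):
    """Braking distance in meters for a given speed (m/s), assuming a
    constant deceleration (m/s^2) and a fixed reaction time (s)."""
    return speed_mps * reaction_time + speed_mps ** 2 / (2 * deceleration)

def proximity_warning(speed_mps, object_distance_m):
    """Return True when the object is closer than the braking distance."""
    return object_distance_m < braking_distance(speed_mps)

# A cyclist at 8 m/s (~29 km/h): braking distance = 8*1 + 64/8 = 16 m.
print(proximity_warning(8.0, 12.0))  # → True  (warn)
print(proximity_warning(8.0, 20.0))  # → False (no warning)
```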
In one embodiment, the patent describes an augmented reality head-mounted display system comprising an eyepiece having a transparent emissive display. The eyepiece and the transparent emissive display are disposed in the optical path of a user's eye so as to transmit light into the user's eye to form an image. Due to the transparent nature of the display, the user can see the external environment through the transparent emissive display. The transparent emissive display comprises a plurality of emitters configured to emit light into the user's eye.
57.《Magic Leap Patent | Systems and methods for augmented reality》
In one embodiment, the patent describes a method for reducing errors in noisy data received from a high-frequency sensor. The method collects a first set of dynamic inputs from the high-frequency sensor, collects a correction input point from a low-frequency sensor, and fuses the received inputs with the data received from the low-frequency sensor. The correction input point is then applied, either by a full translation to the correction input point or by a damped movement toward it, to adjust the propagation path of a second set of dynamic inputs from the high-frequency sensor.
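The fusion step above can be sketched in one dimension: high-frequency deltas propagate a pose estimate, and each low-frequency correction point either snaps the estimate onto itself ("full translation") or pulls it only part of the way ("damped"). The blending factor and function names are assumptions for illustration, not values from the patent:

```python
# Hypothetical sketch of damped correction between a high-frequency sensor
# stream and occasional low-frequency correction points (1-D pose).

def propagate(pose, deltas):
    """Integrate a set of high-frequency sensor deltas into the pose."""
    for d in deltas:
        pose += d
    return pose

def apply_correction(pose, correction, damping=0.25):
    """damping=1.0 performs a full translation onto the correction point;
    smaller values move the estimate only part of the way toward it,
    avoiding visible jumps in the rendered content."""
    return pose + damping * (correction - pose)

pose = propagate(0.0, [0.1, 0.1, 0.1])    # high-frequency path  -> ~0.3
pose = apply_correction(pose, 0.7, 1.0)   # full translation     -> ~0.7
pose = propagate(pose, [0.1, 0.1])        # continue propagating -> ~0.9
pose = apply_correction(pose, 1.3, 0.25)  # damped pull toward 1.3 -> ~1.0
print(pose)
```

The damped branch trades correction speed for smoothness, which is why the patent distinguishes it from the full-translation branch.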
58.《Magic Leap Patent | Reverberation fingerprint estimation》
In one embodiment, the patent describes systems and methods for estimating acoustic characteristics of an environment. In the example method, a microphone of a wearable head device receives a first audio signal. An envelope of the first audio signal is determined, and a first reverberation time is estimated based on the envelope. A difference between the first reverberation time and a second reverberation time is determined, and a change in the environment is determined based on that difference. A speaker of the wearable head device presents a second audio signal, wherein the second audio signal is based on the second reverberation time.
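A common way to estimate a reverberation time from a decaying envelope is to fit a line to the envelope in decibels and extrapolate to a 60 dB drop (RT60). The sketch below follows that approach as an assumption; the patent text does not give a formula, and the function name and least-squares fit are illustrative:

```python
# Hypothetical sketch: estimating a reverberation time from a decaying
# signal envelope by fitting a line to the envelope level in dB and
# reading off the time needed for the level to fall by 60 dB.
import math

def reverberation_time(envelope, sample_rate, drop_db=60.0):
    """envelope: positive amplitude samples of a decaying sound.
    Returns the time (s) for the level to fall by `drop_db` decibels."""
    times = [i / sample_rate for i in range(len(envelope))]
    levels = [20.0 * math.log10(a) for a in envelope]
    n = len(envelope)
    mean_t = sum(times) / n
    mean_l = sum(levels) / n
    slope = (sum((t - mean_t) * (l - mean_l) for t, l in zip(times, levels))
             / sum((t - mean_t) ** 2 for t in times))  # dB per second (< 0)
    return drop_db / -slope

# Synthetic exponential decay losing 60 dB over exactly 0.5 s:
sr = 1000
env = [10 ** (-60.0 * (i / sr) / 0.5 / 20.0) for i in range(500)]
print(round(reverberation_time(env, sr), 3))  # → 0.5
```

Comparing two such estimates over time is then enough to detect the environment change the patent describes, e.g. the user walking from a small room into a hall.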
In one embodiment, the patent describes non-equally spaced light emitting devices that can be used in displays for augmented reality, virtual reality, and mixed reality environments. Specifically, the patent primarily relates to non-equally spaced small light emitting devices, such as Micro-LEDs. The patent also describes systems and methods for emitting a plurality of light beams through a plurality of panels, wherein the spacing of one panel is different from the spacing of the other panels. Each panel may include a corresponding array of light emitters, and the plurality of light beams may be combined by a combiner.
60.《HTC Patent | Virtual reality device》
In one embodiment, the virtual reality device described in the patent includes a main body portion, a plurality of first-type antennas, and a plurality of second-type antennas. The main body portion has a first-side eyeglass frame, a second-side eyeglass frame, and a connecting portion. The connecting portion is connected to the first-side eyeglass frame and the second-side eyeglass frame. The second-type antennas and the corresponding first-type antennas are provided on a first side of the first-side eyeglass frame, on a second side of the second-side eyeglass frame, and on the connecting portion, respectively. The first side of the first-side eyeglass frame is opposite the second side of the second-side eyeglass frame.
61.《HTC Patent | Speaker module and wearable device》
In one embodiment, a speaker module is configured for a wearable device. The speaker module includes a housing and a driver unit, the driver unit being used to generate sound. The sum of the sounds output from a front opening, a first rear opening, and a second rear opening of the housing has a directivity. A connection vector defined by the openings and an inverse of a connection normal vector are added to form a combination vector. A unit vector of the combination vector and a unit vector of a front normal vector facing outward from the front opening are added to form a sum vector. The direction of the sum vector is the direction of the summed sound.