Excerpts from new AR/VR patent applications published by the US Patent and Trademark Office on December 2, 2023
Source: XR Navigation Network
(XR Navigation Network, December 2, 2023) The U.S. Patent and Trademark Office recently published a batch of new AR/VR patents. Below is XR Navigation Network's summary, 66 entries in total (click a patent title for details). For more patent disclosures, visit the patent section of XR Navigation Network at https://patent.nweon.com/, or join the XR Navigation Network AR/VR patent exchange WeChat group (see the end of the article for details).
In one embodiment, the method described in the patent involves a first wireless communication device sending a stream classification service (SCS) request frame to a second wireless communication device, the SCS request frame including information about traffic to be communicated between the first wireless communication device and a third wireless communication device. The first, second, and third wireless communication devices share a wireless communication channel. The method may include, responsive to receiving an indication that a transmission opportunity (TXOP) of the second wireless communication device is shared with the first wireless communication device, the first wireless communication device communicating with the second wireless communication device on a first link during a first portion of the TXOP and communicating with the third wireless communication device on a second link during a second portion of the TXOP.
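A minimal sketch of the TXOP-splitting step in Python; the Link class, frame timings, and the 2 ms / 2 ms split are illustrative assumptions, not the patent's actual signaling, and the SCS exchange itself is omitted.

```python
from dataclasses import dataclass

@dataclass
class Link:
    """Stand-in for one wireless link; exchange_frame returns elapsed ms."""
    name: str
    frame_ms: float = 0.5

    def exchange_frame(self) -> float:
        # A real device would transmit/receive a frame here.
        return self.frame_ms

def run_shared_txop(duration_ms: float, first_portion_ms: float,
                    link_to_second: Link, link_to_third: Link) -> None:
    """Use the first portion of a shared TXOP on link 1, the rest on link 2."""
    elapsed = 0.0
    while elapsed < first_portion_ms:            # talk to the second device
        elapsed += link_to_second.exchange_frame()
    while elapsed < duration_ms:                 # talk to the third device
        elapsed += link_to_third.exchange_frame()

# The SCS request (not shown) has already advertised the link-2 traffic;
# the second device then shares a 4 ms TXOP, split 2 ms / 2 ms.
run_shared_txop(4.0, 2.0, Link("link1"), Link("link2"))
```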
In one embodiment, the patent-described method may include: receiving user input from a user specifying a visual media item for insertion into an artificial reality environment; responsive to receiving the user input, inserting the visual media item into the artificial reality environment; enabling the user to comment on the visual media item within the artificial reality environment by inserting the user's Avatar into the artificial reality environment; and enabling at least one additional user to view the artificial reality environment from at least one viewpoint.
3.《Meta Patent | Power management and distribution》
In one embodiment, the patent describes a system for managing power distribution between devices based on one or more applicable power policies. The system can receive power status data, including power condition data representing power conditions associated with a plurality of devices including a first device. Based on the power status data, the system may determine a second device to wirelessly provide power to the first device. The system can generate a power allocation command configured to cause a wireless power transfer from the second device to the first device, and send the power allocation command to at least one of the first device or the second device to initiate the wireless transfer of power from the second device to the first device.
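A minimal sketch of the donor-selection step, assuming a simple policy that picks the transmit-capable candidate with the most battery headroom; the field names and threshold are illustrative, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class PowerStatus:
    device_id: str
    battery_pct: float        # remaining charge, 0-100
    can_transmit_power: bool  # supports wireless power transfer

def select_power_source(receiver_id: str, statuses: list[PowerStatus],
                        min_donor_pct: float = 50.0) -> str | None:
    """Pick a donor device to wirelessly power the receiver.

    Policy (assumed): choose the transmit-capable device with the highest
    remaining charge, provided it exceeds a minimum threshold.
    """
    donors = [s for s in statuses
              if s.device_id != receiver_id
              and s.can_transmit_power
              and s.battery_pct >= min_donor_pct]
    if not donors:
        return None
    return max(donors, key=lambda s: s.battery_pct).device_id

# Example: the headset at 15% draws from the controller at 80%.
statuses = [PowerStatus("headset", 15.0, False),
            PowerStatus("controller", 80.0, True),
            PowerStatus("puck", 40.0, True)]
print(select_power_source("headset", statuses))  # -> "controller"
```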
In one embodiment, the patent describes a Micro LED display embedded with a phase change material that stores thermal energy. The phase change material can be configured to store thermal energy generated by the Micro LED display without a rise in temperature during the phase change. In other words, the phase change material can absorb and/or capture large amounts of heat generated by the Micro LED display while maintaining a substantially constant temperature.
In one embodiment, a computing system may receive a video that includes multiple image frames. For each of the image frames, the system can use a spatial attention encoder to generate image frame features corresponding to that frame. The system may then use a temporal attention decoder to generate predicted future features based on one or more of the image frame features corresponding to frames that precede the time associated with the predicted future features. The system can generate a future action anticipation based on the predicted future features. The future action anticipation corresponds to an expectation of a future action occurring after the action sequence observed in the video's image frames.
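A toy sketch of the encoder/decoder split using PyTorch attention layers; the dimensions, the mean-pooling, and the causal masking scheme are assumptions for illustration, not the patent's architecture.

```python
import torch
import torch.nn as nn

class FutureFeaturePredictor(nn.Module):
    """Spatial attention within each frame, then temporal attention over past frames."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, dim)  # maps attended features to a prediction

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (frames, patches_per_frame, dim) for one video clip
        frame_feats, _ = self.spatial(patches, patches, patches)
        frame_feats = frame_feats.mean(dim=1)          # one feature per frame
        seq = frame_feats.unsqueeze(0)                 # (1, frames, dim)
        # Causal mask: each step attends only to preceding frames.
        t = seq.shape[1]
        mask = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        out, _ = self.temporal(seq, seq, seq, attn_mask=mask)
        return self.head(out[:, -1])                   # predicted future feature

video = torch.randn(8, 16, 64)                 # 8 frames, 16 patches each
print(FutureFeaturePredictor()(video).shape)   # torch.Size([1, 64])
```

A downstream classifier over the predicted future feature would then produce the action anticipation itself.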
In one embodiment, the method described in the patent includes: determining that a user has performed a triggering action on a triggering object; generating artificial reality content in response to determining that the user has performed the triggering action; presenting the artificial reality content on a display; determining that the user has performed a detriggering action; and, in response to determining that the user has performed the detriggering action, terminating presentation of the artificial reality content.
In one embodiment, the first device described in the patent may include a processor, a first wireless network interface, and a second wireless network interface. The processor may be configured to establish a second wireless network via the second wireless network interface while associated with an access point on a first channel via the first wireless network interface. A group of one or more client devices may join the second wireless network on the first channel via the second wireless network interface. The processor may be configured to receive first information from the access point via the first wireless network interface, and to send the first information to the group of client devices via the second wireless network interface.
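A rough sketch of the relay behavior, assuming plain UDP sockets stand in for the two wireless interfaces; the addresses, port numbers, and client list are placeholders.

```python
import socket

# Placeholder addresses: the uplink socket faces the access point (first
# interface); the downlink socket serves the client group (second interface).
UPLINK_ADDR = ("0.0.0.0", 9000)
CLIENT_GROUP = [("192.168.4.10", 9001), ("192.168.4.11", 9001)]

def relay_once(uplink: socket.socket, downlink: socket.socket) -> None:
    """Receive one packet from the AP and fan it out to all clients."""
    data, _ = uplink.recvfrom(2048)
    for client in CLIENT_GROUP:
        downlink.sendto(data, client)

if __name__ == "__main__":
    uplink = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    uplink.bind(UPLINK_ADDR)
    downlink = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        relay_once(uplink, downlink)
```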
8. "Meta Patent | Compatible accessory module (Meta patent: compatible accessory module)》
In one embodiment, the patent describes a universally compatible accessory module that includes a haptic driver. Embodiments may also include a light-emitting diode (LED) driver; a battery; a plurality of LEDs controlled by the LED driver; and at least one mechanical actuator controlled by the haptic driver.
In one embodiment, the patent describes a cognitive load estimation system. The system includes an in-ear device (IED) configured to be placed within a user's ear canal. The IED includes a first set of functional near-infrared spectroscopy (fNIRS) photodiodes configured to capture first fNIRS signal data representative of hemodynamic changes in the user's brain; at least one electroencephalogram (EEG) electrode configured to capture electrical signals corresponding to the user's brain activity; and a controller configured to filter the first fNIRS signal data based in part on the electrical signals to generate filtered fNIRS data. The controller is also configured to estimate the user's cognitive load based on the filtered fNIRS signal data.
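A simplified sketch of one plausible filtering step, regressing an EEG-derived reference out of the fNIRS trace by least squares; the patent does not specify its filter, so both the regression and the power-based load score are assumptions.

```python
import numpy as np

def filter_fnirs(fnirs: np.ndarray, eeg: np.ndarray) -> np.ndarray:
    """Remove the component of the fNIRS signal explained by the EEG reference.

    fnirs: (samples,) optical signal; eeg: (samples,) electrical reference.
    Returns the residual after least-squares regression of eeg onto fnirs.
    """
    eeg = eeg - eeg.mean()
    fnirs = fnirs - fnirs.mean()
    beta = np.dot(eeg, fnirs) / np.dot(eeg, eeg)  # 1-D least-squares fit
    return fnirs - beta * eeg

def estimate_cognitive_load(filtered: np.ndarray) -> float:
    """Toy load score: mean power of the filtered fNIRS trace."""
    return float(np.mean(filtered ** 2))

t = np.linspace(0, 10, 1000)
eeg = np.sin(2 * np.pi * 10 * t)                        # 10 Hz reference
fnirs = 0.3 * eeg + 0.1 * np.sin(2 * np.pi * 0.1 * t)   # contaminated hemodynamics
print(estimate_cognitive_load(filter_fnirs(fnirs, eeg)))
```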
In one embodiment, the method described in the patent may include: determining a first one or more features from a first frame, the first frame including a target object; obtaining a first mask associated with the first frame, the first mask including an indication of the target object; generating foreground and background representations of the first frame based on the first mask and the first one or more features; determining a second one or more features from a second frame; and determining the location of the target object in the second frame based on the foreground and background representations of the first frame and the second one or more features.
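A bare-bones numpy sketch of the idea: summarize foreground/background features via the mask, then locate the target in the next frame by nearest-prototype matching. The feature extraction and the matching rule are illustrative assumptions, not the patent's method.

```python
import numpy as np

def fg_bg_prototypes(feats: np.ndarray, mask: np.ndarray):
    """feats: (H, W, C) per-pixel features; mask: (H, W) bool target mask."""
    fg = feats[mask].mean(axis=0)    # foreground prototype
    bg = feats[~mask].mean(axis=0)   # background prototype
    return fg, bg

def locate_target(feats2: np.ndarray, fg: np.ndarray, bg: np.ndarray):
    """Label each pixel of frame 2 by the closer prototype; return the centroid."""
    d_fg = np.linalg.norm(feats2 - fg, axis=-1)
    d_bg = np.linalg.norm(feats2 - bg, axis=-1)
    target = d_fg < d_bg
    ys, xs = np.nonzero(target)
    return (ys.mean(), xs.mean()) if len(ys) else None

H, W, C = 32, 32, 8
frame1 = np.random.rand(H, W, C); frame1[10:20, 10:20] += 1.0  # bright target
mask = np.zeros((H, W), bool); mask[10:20, 10:20] = True
frame2 = np.random.rand(H, W, C); frame2[12:22, 14:24] += 1.0  # target moved
fg, bg = fg_bg_prototypes(frame1, mask)
print(locate_target(frame2, fg, bg))  # centroid roughly at (16.5, 18.5)
```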
In one embodiment, the method described in the patent provides file delivery feedback and/or application feedback for specific application services. Example methods generally include communicating files with a user equipment (UE), with multiple packets per file, determining that a delivery failure has occurred for at least one file, and sending a notification of the delivery failure to a server entity.
In one embodiment, the patent describes a computer-implemented method that includes receiving, via a camera in a head-mounted display device, an image of an eye viewed through a vision correction lens; obtaining an inverse transform corresponding to the vision correction lens; applying the inverse transform to the received image to obtain an uncorrected image of the eye; and performing eye tracking based on the uncorrected image of the eye.
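A small sketch of applying an inverse transform to an eye image, assuming a single-coefficient radial distortion model; the patent's actual lens transform is not specified, so the model, coefficient, and nearest-neighbor sampling are stand-ins.

```python
import numpy as np

def undistort(img: np.ndarray, k1: float) -> np.ndarray:
    """Invert a simple radial distortion x_d = x_u * (1 + k1 * r^2).

    img: (H, W) grayscale eye image as seen through the corrective lens.
    For each output (undistorted) pixel we sample the corresponding
    distorted pixel, which approximates the inverse transform.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cx, cy = (w - 1) / 2, (h - 1) / 2
    xn, yn = (xs - cx) / cx, (ys - cy) / cy      # normalized coordinates
    r2 = xn**2 + yn**2
    xd = np.clip((xn * (1 + k1 * r2)) * cx + cx, 0, w - 1)
    yd = np.clip((yn * (1 + k1 * r2)) * cy + cy, 0, h - 1)
    return img[yd.round().astype(int), xd.round().astype(int)]

eye = np.random.rand(240, 320)
uncorrected = undistort(eye, k1=-0.15)  # feed this to the eye tracker
```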
In one embodiment, the patent describes a scanning mirror system that includes a mirror portion, a bending arm extending from the mirror portion, and a piezoelectric actuator support portion that supports a piezoelectric actuator including a piezoelectric membrane. The scanning mirror system also includes a transmission arm extending between the bending arm and the piezoelectric actuator support portion to transmit the motion of the piezoelectric membrane to the bending arm. The transmission arm is at least partially separated from the piezoelectric actuator support portion by a first gap. The scanning mirror system also includes an anchor portion at least partially separated from the piezoelectric actuator support portion by a second gap, the anchor portion being configured to anchor the scanning mirror system to another structure.
In one embodiment, the conferencing system can perform context determination and automatically select and apply the appropriate 2D or 3D user interface constructs for meeting participants. The system also enables client endpoints to switch display between 2D and 3D settings using different user interface structures. The system can leverage meeting agendas to perform context determination and identify meeting types based on agenda items, such as financial reports, brainstorming sessions, and social gatherings. In response, the system automatically applies 2D and 3D structures suitable for each agenda item.
15.《Microsoft Patent | Video compression》
In one embodiment, the decoder receives compressed red-green-blue-depth (RGBD) frames of a video. The decoder accesses a reference RGBD frame and re-projects it using its depth channel and camera pose to compute a re-projected version of the reference frame. The decoder then uses the re-projected version of the reference frame to decode the compressed RGBD frame.
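A compact numpy sketch of depth-based reprojection of a reference frame, assuming a pinhole intrinsic matrix and a known relative pose; the codec details (prediction, residual coding) are omitted.

```python
import numpy as np

def reproject(depth: np.ndarray, K: np.ndarray, R: np.ndarray, t: np.ndarray):
    """Reproject reference-frame pixels into the target camera.

    depth: (H, W) depth channel of the reference RGBD frame.
    K: (3, 3) camera intrinsics; R, t: relative pose (reference -> target).
    Returns (H, W, 2) pixel coordinates in the target frame, usable to warp
    the reference colors into a prediction for decoding.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)
    rays = pix @ np.linalg.inv(K).T          # back-project to camera rays
    pts = rays * depth[..., None]            # 3-D points in the reference frame
    pts = pts @ R.T + t                      # move into the target camera frame
    proj = pts @ K.T
    return proj[..., :2] / proj[..., 2:3]    # perspective divide

K = np.array([[500., 0, 160], [0, 500., 120], [0, 0, 1]])
depth = np.full((240, 320), 2.0)             # flat scene 2 m away
coords = reproject(depth, K, np.eye(3), np.array([0.05, 0, 0]))
```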
In one embodiment, the patent describes a system of automated visual indicators for the active speakers of a communication session who are displayed as 3D representations. Specific participants of the communication session can be displayed in the user interface as 3D representations, such as Avatars. The user interface may also include renderings of 2D images of other participants displayed in a gallery, for example a designated display area for active speakers. When a user displayed as a 3D representation starts speaking, the system can detect speaker activity by detecting audio signals from the user's device. In response, the system can then automatically add a complementary image of the user to the gallery. Complementary images can help viewers navigate complex user interface arrangements that display large numbers of Avatars.
In one embodiment, the patent describes a system for converting a user interface arrangement from the display of a two-dimensional image of a user to the presentation of a three-dimensional representation of the user. The system may start with a UI that includes a rendering of a user based on a 2D image file. The system can receive input configured to cause it to convert the displayed rendering of a selected user's 2D image to a rendering of the selected user's 3D representation. To display the selected user's 3D representation, the system uses permission data and a 3D model that defines position and orientation. The system allows users to switch between viewing modes so that they can interact with content using the most efficient type of hardware.
In one embodiment, the patent describes a system that includes a first device that presents image data, a second device that stores the image data, and a display panel that displays the image data stored in a memory. The first device renders multiple frames of image data, compresses the multiple frames into a single superframe, and transmits the single superframe. The second device receives the single superframe, decompresses the single superframe into multiple frames of image data, and stores the image data in the memory of the second device.
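A toy illustration of the superframe idea: several rendered frames are concatenated, compressed as one unit, then split back out on the receiving device. zlib and the fixed frame shape stand in for whatever codec and format the patent intends.

```python
import zlib
import numpy as np

def pack_superframe(frames: list[np.ndarray]) -> bytes:
    """Concatenate frames and compress them as a single superframe."""
    raw = np.stack(frames).astype(np.uint8).tobytes()
    return zlib.compress(raw)

def unpack_superframe(blob: bytes, n: int, h: int, w: int) -> list[np.ndarray]:
    """Decompress a superframe and split it back into n frames."""
    raw = np.frombuffer(zlib.decompress(blob), dtype=np.uint8)
    return list(raw.reshape(n, h, w))

frames = [np.full((120, 160), i, np.uint8) for i in range(4)]
blob = pack_superframe(frames)                    # sent by the first device
restored = unpack_superframe(blob, 4, 120, 160)   # stored by the second device
assert all((a == b).all() for a, b in zip(frames, restored))
```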
19.《Apple Patent | Environment for remote communication》
In one embodiment, the patent describes an electronic device including instructions for: receiving, by a first electronic device connected to a communication session, a request to present a virtual representation of a remote participant of the session while presenting an extended reality environment; obtaining the capabilities of the remote participant of the communication session; and, in response to receiving the request, presenting the virtual representation of the remote participant based on the obtained capabilities.
20.《Apple Patent | Group communications platform》
In one embodiment, a group communication platform facilitates sharing application environments with other users. The platform can receive a request to initiate a group session for a local user and a remote user. For the group session, an out-of-process network connection may be established over a system communication channel between a local computing device associated with the local user and a remote computing device associated with the remote user. A system call may be received from a local instance of a first application on the local computing device to transfer local data to a remote instance of the first application on the remote computing device via the out-of-process network connection. The local data may be transferred to the remote instance via the out-of-process network connection and the system communication channel, and may include state data for the local instance of the first application that updates the state of the remote instance.
In one embodiment, the patent describes a method that includes: instantiating into a synthetic reality (SR) setting a first object achiever (OE) associated with a first attribute and a second OE associated with a second attribute, wherein the first OE is encapsulated in the second OE; providing a first goal to the first OE based on the first and second attributes; providing a second goal to the second OE based on the second attribute, wherein the first goal and the second goal are associated with a time period between a first point in time and a second point in time; generating a first action set for the first OE based on the first goal and a second action set for the second OE based on the second goal; and presenting the SR setting for display during the time period, including the first action set performed by the first OE and the second action set performed by the second OE.
In one embodiment, a representation of the field of view of one or more cameras, including multiple objects in the physical environment, is displayed in an augmented reality user interface. In response to one or more first user inputs, the system places or moves a virtual object to a location in the representation of the field of view that corresponds to a physical location on or near the first surface of a first physical object. If the virtual object is located at a portion of the first surface that does not include other physical objects, or that includes only a portion of a physical object extending from the first surface by less than a threshold amount, the first virtual object is displayed in the representation of the field of view with a predetermined spatial relationship to the first surface.
In one embodiment, when the combined view is presented and an input of a first type is detected, the combined view is adjusted by increasing the visibility of the image of the virtual environment and decreasing the visibility of the image of the real environment. If an input of a second type is detected, the combined view is adjusted by reducing the visibility of the image of the virtual environment and increasing the visibility of the image of the real environment.
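A minimal sketch of the adjustment, treating visibility as per-layer alpha blending; the input-type labels and step size are placeholders for whatever inputs the patent actually distinguishes.

```python
import numpy as np

def adjust_combined_view(virtual: np.ndarray, real: np.ndarray,
                         alpha: float, input_type: str, step: float = 0.1):
    """Blend virtual over real; a first-type input favors the virtual image,
    a second-type input favors the real image."""
    if input_type == "first":
        alpha = min(1.0, alpha + step)   # more virtual, less real
    elif input_type == "second":
        alpha = max(0.0, alpha - step)   # more real, less virtual
    combined = alpha * virtual + (1 - alpha) * real
    return combined, alpha

virtual = np.ones((4, 4))
real = np.zeros((4, 4))
view, a = adjust_combined_view(virtual, real, alpha=0.5, input_type="first")
print(a)  # 0.6 -> the virtual environment is now more visible
```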
In one embodiment, the patent describes a method that includes: while displaying computer-generated content on a display in accordance with a first lock mode, determining that the electronic device changes from a first distance to a physical surface to a second distance from the physical surface; in accordance with a determination that the second distance meets lock mode change criteria, changing the display of the computer-generated content from the first lock mode to a second lock mode; and, in accordance with a determination that the second distance does not meet the lock mode change criteria, maintaining the display of the computer-generated content in accordance with the first lock mode. Examples of lock mode change criteria include occlusion criteria and distance criteria.
In one embodiment, a computer system facilitates manipulation of a three-dimensional environment relative to a viewpoint of a user of the computer system. In one embodiment, a computer system facilitates manipulation of virtual objects in a virtual environment. In one embodiment, a computer system facilitates manipulation of a three-dimensional environment relative to a reference point determined based on attention of a user of the computer system.
In one embodiment, the computer system provides non-visual feedback based on a determination that information about the user has been captured. As the user and/or the computer system move relative to each other, the computer system displays different portions of the user's representation. The computer system displays different three-dimensional content associated with different steps of the registration process. The computer system displays visual elements at different simulated depths, with the visual elements moving in simulated parallax to facilitate alignment of the user and the computer system. The computer system prompts the user to make a facial expression and displays a progress bar indicating progress in making the facial expression. The computer system adjusts dynamic audio output to indicate the amount of progress toward completing a step of the registration process.
26.《Apple Patent | Hinged wearable electronic devices》
In one embodiment, the electronic device may have rigid members, such as links in a band connected by swivel joints. In a headset, the headband can be folded to be stored within an interior area in the headset's housing. A headband with two parallel strap portions may be configured to serve as a support for a cell phone or other device with a display. Electronic devices may have multiple support structures connected by friction hinges.
In one embodiment, the system described in the patent may include a head-mounted device. The head mounted device may have a head mounted housing including a display. The headset housing may have a compressible opaque light seal. The light seal can have an annular shape and block stray ambient light around the periphery of the headset housing, ensuring that stray light does not interfere with the user's viewing of the image. Sensors can be provided in the light seal to measure facial expressions and collect other measurements. Information about the user's measured facial expression can be sent to an external device such that the external device can update the corresponding facial expression on the Avatar to reflect the user's current facial expression.
In one embodiment, the computer-implemented method described in the patent includes: detecting, by a first computer system, first content of a tangible instance of a first document; generating, by the first computer system, a first hash using the first content; sending, by the first computer system, the first hash for receipt by a second computer system; and receiving, by the first computer system, a response to the first hash generated by the second computer system, the response including information corresponding to a second document associated with the first content.
In one embodiment, a technique for compressing level of detail (LOD) data involves sharing texture image LODs between different mesh LODs for single-rate encoding. In one embodiment, a first texture image LOD corresponding to a first mesh LOD may be derived by refining a second texture image LOD corresponding to a second mesh LOD. This sharing is possible when the texture maps of the mesh LODs are compatible.
In one embodiment, the method described in the patent may include: capturing, by one or more of a plurality of sensors of a computing device, corresponding sensor data for a physical environment; generating, by a processor of the computing device, a cube representing the physical environment based on the corresponding sensor data, the cube including the corresponding sensor data, geometric data corresponding to the physical environment, and semantic data corresponding to the physical environment; and identifying and indexing, by the processor, multiple dimensions of the cube such that at least one of the dimensions can be searched, queried, and/or modified to display a visualization of the physical environment based on the search, query, and/or modification.
In one embodiment, the patent describes a method performed by a user equipment (UE) in a wireless communication system, including receiving from a base station a radio resource control (RRC) release message including first information for a small data transmission (SDT) configuration, sending small data to the base station based on the first information, and sending a random access (RA) report including SDT information to the base station.
In one embodiment, the patent describes a method performed by a network device in a communication system. The method includes: receiving from the access and mobility management function (AMF) a request message for a terminal's access to a network slice; obtaining the terminal's external network identifier from the unified data repository (UDR); determining, based on the terminal's external network identifier, whether the terminal is allowed to access the network slice; and sending an access control response message to the AMF.
In one embodiment, the patent describes a 5G or 6G communication system for supporting higher data transmission rates, and methods performed by user equipment in the communication system. The method includes determining a power headroom report (PHR) and/or a transmit signal power based on first information, and transmitting the PHR and/or transmitting the signal at the transmit signal power.
In one embodiment, the patent describes a method performed by a terminal in a wireless communication system. The method includes receiving from a base station a first message associated with triggering transmission of a second message, identifying based on the first message whether transmission of the second message associated with network energy saving is triggered, sending the second message to the base station in case its transmission is triggered, and receiving from the base station a third message associated with a network energy state of the base station configured based on the second message.
In one embodiment, the method described in the patent includes: a master node (MN) sending a secondary node (SN) addition request message to a target candidate SN; the MN receiving an addition request acknowledgment message from the target candidate SN; the MN sending a radio resource control (RRC) reconfiguration message to a user equipment (UE); the MN receiving an RRC reconfiguration complete message from the UE, the RRC reconfiguration complete message being sent after the UE accesses the PSCell of the candidate SCG; the MN sending an SN release request message to the source SN and receiving an SN release request acknowledgment message from the source SN; and the MN updating the saved UE history information and sending the UE history information to the target SN.
In one embodiment, the method described in the patent includes: generating an AGK for an ad hoc group; sending an ad hoc group communication request message including a list of IDs of multiple second terminals that are included in the ad hoc group and connected to a first server; receiving an ad hoc group communication return message signed by the first server; verifying the second terminals based on first authentication information related to the first terminal and second authentication information; performing encryption and signing operations on the AGK; sending an ad hoc group communication security material request message including the encrypted and signed AGK; receiving an ad hoc group communication response message in response to sending the security material request message; and performing secure communication.
In one embodiment, the patent describes an apparatus that includes a communication circuit and a processor operatively connected to the communication circuit. The processor is configured to, while connected to an access point (AP): obtain information, related to a first communication scheme, about an external electronic device connected to the AP for transmitting and/or receiving data between the external electronic device and the apparatus, together with information about the external electronic device related to a second communication scheme different from the first communication scheme; establish, based on the information related to the first communication scheme, a first session with the external electronic device supported by the first communication scheme; and, according to a selection, discover the external electronic device based on the information related to the second communication scheme.
In one embodiment, the patent describes a system that retrieves a user's hearing attributes and analyzes them to detect any hearing imbalance between the user's two ears. Audio generated by an interactive application is dynamically calibrated based on the detected hearing imbalance, producing calibrated audio. The interactive application's audio is forwarded to the first side of the headset directed toward the user's first ear, and the calibrated audio is sent to the second side of the headset directed toward the second ear. The hearing imbalance detected in the user is thus compensated for by the different audio provided to each side.
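A minimal sketch of per-ear compensation, assuming the imbalance is expressed as a single dB offset between ears; the patent's calibration model is unspecified, so the gain rule here is an assumption.

```python
import numpy as np

def calibrate_for_imbalance(audio: np.ndarray, imbalance_db: float):
    """Return (left, right) channels with the weaker ear boosted.

    audio: (samples,) mono stream from the interactive application.
    imbalance_db: positive means the right ear is weaker by that many dB.
    """
    gain = 10 ** (abs(imbalance_db) / 20)
    left, right = audio.copy(), audio.copy()
    if imbalance_db > 0:
        right *= gain     # boost audio sent to the weaker right ear
    elif imbalance_db < 0:
        left *= gain      # boost audio sent to the weaker left ear
    return left, right

tone = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 48000))
left, right = calibrate_for_imbalance(tone, imbalance_db=6.0)
```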
39.《Sony Patent | Dynamic audio optimization》
In one embodiment, the method described in the patent may store one or more audio profiles in memory, the profiles including one or more audio settings for one or more audio output devices available to the user. Incoming audio streams associated with the user's current interactive session can be monitored, as can real-world audio within the identified real-world space in which the user is located. One or more audio deviations may be detected based on a comparison of the incoming audio stream to the real-world audio within the identified space. The audio profile associated with the current interactive session can be automatically recalibrated to modify at least one of the audio settings. An audio output device can then process the audio of the incoming audio stream according to the at least one modified audio setting.
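A rough sketch of deviation detection and recalibration, comparing RMS levels of the session stream against ambient audio; the margin threshold and the single "volume" setting are stand-ins for the patent's profile structure.

```python
import numpy as np

def rms_db(signal: np.ndarray) -> float:
    """Root-mean-square level of a signal in decibels."""
    return 20 * np.log10(np.sqrt(np.mean(signal ** 2)) + 1e-12)

def recalibrate(profile: dict, stream: np.ndarray, ambient: np.ndarray,
                margin_db: float = 10.0) -> dict:
    """Raise the profile volume if ambient noise drowns out the stream."""
    deviation = rms_db(ambient) + margin_db - rms_db(stream)
    if deviation > 0:                  # stream not sufficiently above the noise
        profile = dict(profile)
        profile["volume_db"] = profile["volume_db"] + deviation
    return profile

profile = {"volume_db": -20.0}
stream = 0.05 * np.random.randn(48000)     # quiet session audio
ambient = 0.2 * np.random.randn(48000)     # loud room
print(recalibrate(profile, stream, ambient))
```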
In one embodiment, the method described in the patent includes the following operations: recording the voices of players participating in a session of a game; analyzing a game state generated by execution of the session, wherein analyzing the game state identifies the context of the game; analyzing the recorded speech using the identified game context and a speech recognition model to identify the textual content of the recorded speech; and applying the identified textual content as game input for the session.
In one embodiment, the method described in the patent includes: executing an interactive application, wherein executing the interactive application includes rendering video and audio of a virtual environment, the video being presented on a display viewed by a user and the audio being presented through a headset; executing the interactive application in response to user input generated from the user's interaction with the presented video and audio; receiving environmental input from at least one sensor that senses the local environment in which the user is located; analyzing the environmental input to identify activity occurring in the local environment; and, in response to identifying the activity, adjusting the level of active noise reduction applied by the headset.
In one embodiment, the information processing device described in the patent includes a control unit that controls the display of a virtual space. The control unit performs control to obtain communication information of one or more other users in another virtual space and to present the obtained communication information through virtual objects arranged in the virtual space.
In one embodiment, the patent describes a wearable data processing device including: one or more attachment members for attaching the wearable data processing device to part of a user's limb; one or more sensors for generating user input data in response to one or more user inputs; a wireless communication circuit for sending the user input data to an external device and receiving control data based on the user input data from the external device; a processing circuit for generating one or more output signals based on the control data; and an output unit for outputting the one or more output signals.
In one embodiment, the patent describes a process that includes rendering a virtual reality scene on a head-mounted display; receiving sensor data from one or more sensors; identifying the object location of a sound-producing object in real-world space; generating a media cue in the virtual reality scene rendered in the head-mounted display; and presenting the media cue at a virtual location related to the object's location in real-world space.
In one embodiment, the patent describes a method that includes providing a user with access to play a game in a gaming session via a game controller, and accessing the user's profile model during the gaming session. The profile model is a machine learning model used to predict gaming skill from selected gameplay contexts within the game. The method also includes detecting an in-game context during the gaming session for which the profile model predicts that the user lacks the gaming skill to advance in the game, and activating haptic cues on the game controller, where vibrations to specific areas hint at the type of input to make using the game controller to progress through the game.
46.《Sony Patent | Cooperative and coached gameplay》
In one embodiment, the patent describes a method for cooperative or coached gameplay in a virtual environment. A memory may store a content control profile regarding a set of control inputs associated with actions in the virtual environment. Requests for cooperative gameplay of a digital content title may be received from a set of one or more users associated with different origin devices. In response to the request, at least one Avatar can be generated for the interactive session. Multiple control inputs can be received from the different origin devices and combined into a combined set of control inputs; generating the combined set may be based on the content control profile. Virtual actions associated with the Avatar can then be controlled within the virtual environment based on the combined set of control inputs.
47.《Sony Patent | Gaze-based coordination of virtual effects indicators》
In one embodiment, the patent describes a gaze-based method for providing virtual effect indicators associated with directional sounds. Gaze data can be tracked through a camera associated with the client device to identify the focal point within the three-dimensional virtual environment on which one or both of the player's eyes are focused. When the focal point indicated by the gaze data does not move toward the source location of a received directional sound within the three-dimensional virtual environment, a virtual effect indicator associated with the type of the directional sound is generated.
In one embodiment, the patent describes a method that includes generating game state data during a game; determining a game context at the current point of play based on the game state data; and inputting the game state data and game context into an artificial intelligence (AI) model, where the AI model is trained to identify one or more user immersion levels that define the user's engagement in the game. The method further includes using the AI model to determine the user's immersion level at the current point in the game and automatically generating indicators presented to the player's surroundings, wherein the indicators provide notification that the player should not be interrupted.
In one embodiment, the patent describes a method for using a user's eye gaze to provide navigation assistance. The method includes receiving input from a user device during a gameplay session of a game, and capturing the user's eye gaze during gameplay to identify the area of a scene of the video game associated with the eye gaze. Gaze navigation is then activated to move the focus of the scene to the area identified using the captured eye gaze. Gaze navigation is triggered automatically, without said input from the user device.
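A small sketch of gaze-driven focus movement, mapping a gaze point to a coarse screen region and easing the scene focus toward it; the grid size and smoothing factor are invented for illustration.

```python
def gaze_region(gaze_xy, screen_w, screen_h, cols=3, rows=3):
    """Map a gaze point in pixels to a coarse grid region of the scene."""
    col = min(int(gaze_xy[0] / screen_w * cols), cols - 1)
    row = min(int(gaze_xy[1] / screen_h * rows), rows - 1)
    return row, col

def move_focus(focus_xy, gaze_xy, smoothing=0.2):
    """Ease the scene focus toward the gazed point (no controller input)."""
    return (focus_xy[0] + smoothing * (gaze_xy[0] - focus_xy[0]),
            focus_xy[1] + smoothing * (gaze_xy[1] - focus_xy[1]))

focus = (960, 540)                    # current scene focus (screen center)
gaze = (1500, 300)                    # player is looking top-right
print(gaze_region(gaze, 1920, 1080))  # -> (0, 2)
focus = move_focus(focus, gaze)       # focus drifts toward the gaze
```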
In one embodiment, the patent describes an information processing device that can assist Synecoculture. The device obtains an ecosystem object indicating an ecosystem component that constitutes an ecosystem of farmland mixed with a variety of vegetation, and a task object indicating a task to perform for that ecosystem component. The ecosystem object is displayed in AR at a position in a predetermined background space corresponding to the real position of the associated ecosystem component, and the task object is likewise displayed in AR in the background space.
In one embodiment, the patent describes a method of limiting motion blur in a visual tracking system. The method includes accessing a first image generated by an optical sensor of the visual tracking system, identifying camera operating parameters of the optical sensor for the first image, determining the motion of the optical sensor for the first image, determining a motion blur level of the first image based on the camera operating parameters and the motion of the optical sensor, and adjusting the camera operating parameters of the optical sensor based on the motion blur level.
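A back-of-the-envelope sketch of the blur estimate and the adjustment step: blur in pixels is approximated as angular velocity × exposure × focal length, and the exposure is shortened when predicted blur exceeds a budget. The formula is a common approximation and the threshold is assumed, not the patent's exact method.

```python
def motion_blur_px(ang_vel_rad_s: float, exposure_s: float,
                   focal_px: float) -> float:
    """Approximate blur streak length in pixels for a rotating camera."""
    return ang_vel_rad_s * exposure_s * focal_px

def adjust_exposure(exposure_s: float, ang_vel_rad_s: float, focal_px: float,
                    max_blur_px: float = 1.0, min_exposure_s: float = 1e-4):
    """Shorten the exposure so predicted blur stays within the budget."""
    blur = motion_blur_px(ang_vel_rad_s, exposure_s, focal_px)
    if blur > max_blur_px:
        exposure_s = max(min_exposure_s,
                         max_blur_px / (ang_vel_rad_s * focal_px))
    return exposure_s

# Headset rotating at 2 rad/s, 4 ms exposure, 450 px focal length:
print(motion_blur_px(2.0, 0.004, 450))   # ~3.6 px of blur
print(adjust_exposure(0.004, 2.0, 450))  # exposure cut to ~1.1 ms
```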
In one embodiment, the patent describes a method that includes accessing video data comprising images of a demonstrator creating a tutorial, the tutorial depicting the demonstrator applying a beauty product to a part of the demonstrator's body. The method includes processing the video data to identify changes to the demonstrator's body part resulting from application of the beauty product, and identifying the beauty product in response to the identified changes. The method further includes retrieving information about the beauty product and causing the information to be presented on a display device.
In one embodiment, the patent describes a method that applies 3D effects to image data and depth data based on an augmented reality content generator; generates a segmentation mask based on the image data; performs background inpainting and blurring of the image data using at least the segmentation mask to generate background-inpainted image data; generates a packed depth map based at least in part on a depth map of the depth data; and generates, using a processor, a message including information related to the applied 3D effect, the image data, and the depth data.
In one embodiment, the method described in the patent includes: receiving input via a graphical user interface (GUI) specifying a plurality of image transformation parameters; accessing a set of sample source images; modifying the set of sample source images based on the plurality of image transformation parameters to generate a set of sample target images; training a machine learning model to generate a given target image from a given source image by establishing a relationship between the set of sample source images and the set of sample target images; and automatically generating an augmented reality experience from the trained machine learning model.
In one embodiment, the method described in the patent includes: receiving a first service procedure including a plurality of procedural steps for servicing a vehicle; identifying at least one of the plurality of procedural steps to supplement with supplementary service information; receiving information about vehicles that share one or more attributes with the vehicle; determining at least one piece of supplementary service information to supplement the at least one identified procedural step; and providing a supplemented service procedure that includes the first service procedure and the at least one piece of supplementary service information for the at least one identified procedural step.
55.《Snap Patent | AR-based virtual keyboard》
In one embodiment, the patent describes a gesture-based text input user interface for an augmented reality device. The AR system detects a start-text-input gesture made by a user of the AR system, generates a virtual keyboard user interface including a virtual keyboard with a plurality of virtual keys, and provides the virtual keyboard user interface to the user. The AR system uses one or more cameras to determine the user's selection of one or more selected virtual keys among the plurality of virtual keys and generates input text data based on the one or more selected virtual keys. The AR system provides the input text data to the user via the AR system's display.
In one embodiment, the patent describes a wearable device that can present an audible or visual representation of an audio file to a user; generate an audio mix based on the user's gestures; generate a visualization of the audio mix based on the user's gestures and the audio mix; convey an audio signal representing the audio mix to a speaker; and convey a video signal representing the visualization to a display.
In one embodiment, the patent describes a system and method for presenting an output audio signal to a listener located at a first location in a virtual environment. The method includes receiving an input audio signal and, for each of a plurality of sound sources in the virtual environment, determining a corresponding first intermediate audio signal according to the position of that sound source in the virtual environment; each first intermediate audio signal is associated with a first bus. For each of the plurality of sound sources, a corresponding second intermediate audio signal is also determined, corresponding to the reverberation of the input audio signal in the virtual environment; it is determined according to the position of the corresponding sound source as well as the acoustic characteristics of the virtual environment, and each second intermediate audio signal is associated with a second bus. The output audio signal is presented to the listener through the first bus and the second bus.
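A toy two-bus mixer in numpy: a distance-attenuated direct bus plus a crude reverb bus, summed into the output. The 1/r attenuation law and the exponential-tail "reverb" are stand-ins for the patent's acoustic model.

```python
import numpy as np

def render_two_bus(sources, listener_pos, sr=48000, rt60=0.5):
    """sources: list of (signal, position) pairs; positions are 3-vectors."""
    n = max(len(s) for s, _ in sources)
    direct_bus = np.zeros(n)
    reverb_bus = np.zeros(n)
    tail = np.exp(-6.9 * np.arange(int(rt60 * sr)) / (rt60 * sr))  # decay envelope
    for sig, pos in sources:
        dist = np.linalg.norm(np.asarray(pos) - np.asarray(listener_pos))
        direct_bus[:len(sig)] += sig / max(dist, 1.0)       # 1/r direct path
        wet = np.convolve(sig, tail)[:n] * 0.1 / max(dist, 1.0) ** 0.5
        reverb_bus += wet                                    # room response
    return direct_bus + reverb_bus                           # output signal

sr = 48000
click = np.zeros(sr); click[0] = 1.0                 # impulse source
out = render_two_bus([(click, (2.0, 0, 0))], listener_pos=(0, 0, 0))
```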
58.《Magic Leap Patent | Dual mode ported speaker》
In one embodiment, the patent describes systems and methods for rendering audio using an audio system that supports multiple modes of operation. The components of the audio system are configured to operate in different modes. For example, the audio system is configured to operate in a first mode and a second mode. The audio system may operate in the first mode or the second mode based on applications running on the system or signals generated by the system.
In one embodiment, the patent describes a voice user interface (VUI) and methods for operating the VUI. The VUI is configured to receive and process verbal and non-verbal input. For example, the VUI receives an audio signal, and the VUI determines whether the audio input includes verbal and/or non-verbal input. Upon determination that the audio signal includes non-verbal input, the VUI causes the system to perform an action associated with the non-verbal input.
In one embodiment, the patent describes a method for managing and displaying virtual content in a mixed reality environment, with each piece of virtual content managed independently by its respective application on a one-to-one basis and presented to the system in a bounded volume referred to here as a "prism". Each prism has characteristics and properties that allow an application to manage and display the prism in the mixed reality environment, so that the application can manage the placement and display of its virtual content in mixed reality by managing the prism itself.
In one embodiment, the patent describes an architecture for selectively coupling one or more optical streams from a multiplexed optical stream into a waveguide. The multiplexed optical stream can carry light rays with different characteristics (such as different wavelengths and/or different polarizations). The waveguide may include an in-coupling element that can selectively couple one or more optical streams from the multiplexed optical stream into the waveguide while transmitting one or more other optical streams of the multiplexed optical stream.
62.《Lumus Patent | Active optical engine》
In one embodiment, the device described in the patent includes at least one processor. The processor is configured to determine a target out-coupling surface, identify the optical path to the target out-coupling surface, identify the active wave plate corresponding to the optical path, determine a target state of the active wave plate corresponding to the optical path, set the active wave plate to the identified target state, and cause the projection device to project a light beam including an image field component along the identified optical path.
In one embodiment, the patent describes a method for generating images in a near-eye display. The method may include operating a light source to emit an image as incident light. The light source may be configured such that the incident light received by a light reflective element compensates for the colored reflectivity of the light reflective element. The method may include coupling the incident light into a light-transmissive substrate such that the light is trapped by total internal reflection between a first major surface and a second major surface of the substrate, and coupling the light out of the substrate by the light reflective element having the colored reflectivity.
In one embodiment, the patent describes an optical waveguide comprising: a substrate arranged to guide an optical image inside the substrate from an inlet to an outlet of the optical waveguide via a plurality of reflections; a surface relief grating at the inlet or outlet of the optical waveguide that guides the image in the optical waveguide via diffraction and includes a plurality of grooves; and a coating disposed on the surface relief grating that at least partially fills at least one groove. The at least one groove is provided at an edge of the surface relief grating where the image coincides with the at least one groove after a first reflection of the image inside the substrate, the coating having optical properties that converge toward the optical properties of the substrate.
In one embodiment, the method described in the patent includes: determining candidate gratings; determining grating combinations based on the candidate gratings, wherein each grating combination includes at least one of the candidate gratings and the candidate gratings within each combination differ from one another; determining a first diffraction response map of a first multiplexed grating corresponding to a first grating combination; determining a first luminous intensity map of the first multiplexed grating corresponding to the first grating combination by modifying the first diffraction response map based on at least one parameter of a light engine; and determining a first reconstructed image corresponding to the light engine by processing a template image based on the first luminous intensity map.
In one embodiment, the display device control method described in the patent includes: obtaining, by a processor, information related to the display device, the information including processor usage, display status, the wearing status of the display, noise data of the heat dissipation unit, rotation speed data of the heat dissipation unit, or any combination thereof; and controlling, by the processor, at least one of the system sound and the heat dissipation unit according to the information related to the display device, wherein the system sound is configured to be generated by a sound output unit.