WO2002063601A1 - Method and system to present immersion virtual simulations using three-dimensional measurement - Google Patents

Method and system to present immersion virtual simulations using three-dimensional measurement

Info

Publication number
WO2002063601A1
WO2002063601A1 (PCT/US2002/003433)
Authority
WO
WIPO (PCT)
Prior art keywords
user
image
display
virtual
control
Prior art date
Application number
PCT/US2002/003433
Other languages
French (fr)
Inventor
Abbas Rafii
Cyrus Bamji
Cheng-Feng Sze
Original Assignee
Canesta, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canesta, Inc.
Publication of WO2002063601A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0421Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means by interrupting or reflecting a light beam, e.g. optical touch-screen
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Arrangement of adaptations of instruments
    • B60K35/60
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • B60K2360/785
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/46Indirect determination of position data
    • G01S17/48Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye

Definitions

  • the present invention relates generally to so-called virtual simulation methods and systems, and more particularly to creating simulations using three-dimensionally acquired data so as to appear to immerse the user in what is being simulated, and to permit the user to manipulate real objects by interacting with a virtual object.
  • So-called virtual reality systems have been computer implemented to mimic a real or a hypothetical environment.
  • a user or player may wear a glove or a body suit that contains sensors to detect movement, and may wear goggles that present a computer rendered view of a real or virtual environment.
  • User movement can cause the viewed image to change, for example to zoom left or right as the user turns.
  • the imagery may be projected rather than viewed through goggles worn by the user.
  • rules of behavior or interaction among objects in the virtual imagery being viewed are defined and adhered to by the computer system that controls the simulation.
  • aircraft flight simulators may be implemented in which a pilot trainee (e.g., a user) views a computer-rendered three-dimensional representation of the environment while manipulating controls similar to those found on an actual aircraft. As the user manipulates the controls, the simulated aircraft appears to react, and the three-dimensional environment is made to change accordingly. The result is that the user interacts with the rendered objects in the viewed image.
  • U.S. patent no. 5,168,531 to Sigel (1992) entitled “Real-time Recognition of Pointing Information From Video” discloses a luminosity-based two-dimensional information acquisition system.
  • Sigel attempts to recognize the occurrence of a predefined object in an image by receiving image data that is convolved with a set of predefined functions, in an attempt to define occurrences of elementary features characteristic of the predefined object.
  • Sigel's reliance upon luminosity data requires a user's hand to exhibit good contrast against a background environment to avoid confusing the recognition algorithm used.
  • Two-dimensional data acquisition systems such as disclosed by Korth in U.S. patent no. 5,767,842 (1998) entitled "Method and Device for Optical Input of Commands or Data" use video cameras to image the user's hand or body. In some applications the images can be combined with computer-generated images of a virtual background or environment. Techniques including edge and shape detection and tracking, object and user detection and tracking, color and gesture tracking, motion detection, brightness and hue detection are sometimes used to try to identify and track user action. In a game application, a user could actually see himself or herself throwing a basketball in a virtual basketball court, for example, or shooting a weapon towards a virtual target. Such systems are sometimes referred to as immersion systems.
  • two-dimensional data acquisition systems only show user motion in two dimensions, e.g., x-axis and y-axis but not also z-axis.
  • thus if the user in real life would use a back and forth motion to accomplish a task, e.g., to throw a ball, in a two-dimensional system the user must instead substitute a sideways motion, to accommodate the limitations of the data acquisition system.
  • in a training application, if the user were to pick up a component, rotate the component and perhaps move the component backwards and forwards, the acquisition system would be highly challenged to capture all gestures and motions.
  • such systems do not provide depth information, and such data as is acquired is luminosity-based and is highly sensitive to ambient light and contrast conditions. An object moved against a background of similar color and contrast would be very difficult to track using such prior art two-dimensional acquisition systems. Further, such prior art systems can be expensive to implement in that considerable computational power is required to attempt to resolve the acquired images.
  • Prior art systems that attempt to acquire three-dimensional data using multiple two-dimensional video cameras similarly require substantial computing power, good ambient lighting conditions, and suffer from the limitation that depth resolution is limited by the distance separating the multiple cameras. Further, the need to provide multiple cameras adds to the cost of the overall system. What is needed is a virtual simulation system in which a user can view and manipulate computer-generated objects and thereby control actual objects, preferably without requiring the user to wear sensor-implemented devices. Further, such system should permit other persons to see the virtual objects that are being manipulated. Such system should not require multiple image acquiring cameras (or equivalent) and should function in various lighting environments and should not be subject to inaccuracy due to changing ambient light and/or contrast. Such system should use Z-values (distance vector measurements) rather than luminosity data to recognize user interaction with system-created virtual images.
  • the present invention provides such a system.
  • the present invention provides computer simulations in which user-interaction with computer-generated images of objects to be manipulated is captured in three-dimensions, without requiring the user to wear sensors.
  • the images may be projected using conventional methods including liquid crystal displays and micro-mirrors.
  • a computer system renders objects that are preferably viewed in a heads-up display (HUD).
  • the display may indeed include goggles, a monitor, or other display equipment.
  • the HUD might be a rendering of a device for the car, e.g., a car radio, that is visible to the vehicle driver looking toward the vehicle windshield.
  • the driver would move a hand close as if to "touch” or otherwise manipulate the projected image of an on/off switch in the image.
  • the driver would “move” the projected image of a volume control.
  • the physical location and movement of the driver's fingers in interacting with the computer-generated images in the HUD are determined non-haptically in three dimensions by a three-dimensional range finder within the system.
  • the three-dimensional data acquisition system operates preferably by transmitting light signals, e.g., energy in the form of laser pulses, modulated light beams, etc.
  • return time-of-flight measurements between transmitted energy and energy reflected or returned from an object can provide (x,y,z) axis position information as to the presence and movement of objects.
  • objects can include a user's hand, fingers, perhaps a held baton, in a sense-vicinity to virtual objects that are projected by the system.
  • such virtual objects may be projected to appear on (or behind or in front of) a vehicle windshield.
  • ambient light is not relied upon in obtaining the three-dimensional position information, with the result that the system does not lose positional accuracy in the presence of changing light or contrast environments.
  • modulated light beams could instead be used.
  • the three-dimensional range output data is used to change the computer-created image in accordance with the user's hand or finger (or other) movement. If the user's hand or finger (or other) motion "moves" a virtual sliding radio volume control to the right within the HUD, the system will cause the virtual image of the slider to be moved to the right. At the same time, the volume on the actual radio in the vehicle will increase, or whatever device parameter is to be thus controlled. Range finding information is collected non-haptically, e.g., the user need not actually touch anything for (x,y,z) distance sensing to result.
  • the HUD system can also be interactive in the sense of displaying dynamic images as required.
  • a segment of the HUD might be motor vehicle gages, which segment is not highlighted unless the user's fingers are moved to that region.
  • the system can automatically create and highlight certain images when deemed necessary by the computer, for example a flashing "low on gas" image might be projected without user request.
  • a CRT or LCD display can be used to display a computer rendering of objects that may be manipulated with a user's fingers, for example a virtual thermostat to control home temperature. "Adjusting" the image of the virtual thermostat will in fact cause the heating or cooling system for the home to be readjusted.
  • Advantageously such display(s) can be provided where convenient to users, without regard to where physical thermostats (or other controls) may actually have been installed.
  • the user may view an actual object being remotely manipulated as a function of user movement, or may view a virtual image that is manipulated as a function of user movement, which system-detected movement causes an actual object to be moved.
  • the present invention may also be used to implement training systems.
  • the present invention presents virtual images that a user can interact with to control actual devices. Onlookers may see what is occurring in that the user is not required to wear sensor-equipped clothing, helmets, gloves, or goggles.
  • FIG. 1 depicts a heads-up display of a user-immersible computer simulation, according to the present invention
  • FIG. 2A is a generic block diagram showing a system with which the present invention may be practiced
  • FIG. 2B depicts clipping planes used to detect user-proximity to virtual images displayed by the present invention
  • FIGS. 3A-3C depict use of a slider-type virtual control, according to the present invention.
  • FIG. 3D depicts exemplary additional images created by the present invention
  • FIGS. 3E and 3F depict use of a rotary-type virtual control, according to the present invention.
  • FIGS. 3G, 3H, and 3I depict the present invention used in a manual training type application
  • FIGS. 4A and 4B depict reference frames used to recognize virtual rotation of a rotary-type virtual control, according to the present invention.
  • FIGS. 5A and 5B depict user-zoomable virtual displays useful to control a GPS device, according to the present invention.
  • Fig. 1 depicts a heads-up display (HUD) application of a user-immersible computer simulation system, according to the present invention.
  • the present invention 10 is shown mounted in the dashboard or other region of a motor vehicle 20 in which there is seated a user 30.
  • system 10 computer-generates and projects imagery onto or adjacent an image region 40 of front windshield 50 of vehicle 20. Image projection can be carried out with conventional systems such as LCDs, or micro-mirrors.
  • user 30 can look ahead through windshield 50 while driving vehicle 20, and can also see any image(s) that are projected into region 40 by system 10.
  • system 10 may properly be termed a heads-up display system.
  • Also shown in Fig. 1 are the three reference x,y,z axes. As described later herein with reference to Fig. 2B, region 40 may be said to be bounded in the z-axis by clipping planes.
  • since system 10 knows what virtual objects (if any) are displayed in image region 40, the interaction between the user's finger and such images may be determined. Detection in the present invention occurs non-haptically, that is, it is not required that the user's hand or finger or pointer actually make physical contact with a surface or indeed anything in order to obtain the (x,y,z) coordinates of the hand, finger, or pointer.
  • Fig. 1 depicts a device 60 having at least one actual control 70 also mounted in vehicle 20, device 60 shown being mounted in the dashboard region of the vehicle.
  • Device 60 may be an electronic device such as a radio, CD player, telephone, a thermostat control or window control for the vehicle, etc.
  • system 10 can project one or more images, including an image of device 60 or at least a control 70 from device 60.
  • Exemplary implementations for system 10 may be found in co-pending U.S. patent application 09/401,059 filed 22 September 1999 entitled "CMOS-Compatible Three-Dimensional Image Sensor IC", in co-pending U.S. patent application 09/502,499 filed 11 February 2000 entitled "Method and Apparatus for Creating a Virtual Data Entry Device", and in co-pending U.S. patent application 09/727,529 filed 28 November 2000 entitled "CMOS-Compatible Three-Dimensional Image Sensor IC". In that a detailed description of such systems may be helpful, applicants refer to and incorporate by reference each said pending U.S. patent application.
  • System 100 preferably collects data at a frame rate of at least ten frames per second, and preferably thirty frames per second. Resolution in the x-y plane is preferably in the 2 cm or better range, and in the z-axis is preferably in the 1 cm to 5 cm range.
  • a less suitable candidate for a multi-dimensional imaging system might be along the lines of U.S. patent no. 5,767,842 to Korth (1998) entitled “Method and Device for Optical Input of Commands or Data”.
  • Korth proposes the use of conventional two-dimensional TV video cameras in a system to somehow recognize what portion of a virtual image is being touched by a human hand. But Korth's method is subject to inherent ambiguities arising from his reliance upon relative luminescence data, and upon an adequate source of ambient lighting.
  • By contrast, the applicants' referenced co-pending applications disclose a true time-of-flight three-dimensional imaging system in which neither luminescence data nor ambient light is relied upon.
  • Fig. 2A is an exemplary system showing the present invention in which the range finding system is similar to that disclosed in the above-referenced co-pending U.S. patent applications. Other non-haptic three-dimensional range finding systems could instead be used, however.
  • system 100 is a three-dimensional range finding system that is augmented by sub-system 110, which generates and can project via an optical system 120 computer-created object images such as 130A, 130B. Such projection may be carried out with LCDs or micro-mirrors, or with other components known in the art.
  • the images created can appear to be projected upon the surface of windshield 50, in front of, or behind windshield 50.
  • the remainder of system 100 may be as disclosed in the exemplary patent applications.
  • An array 140 of pixel detectors 150 and their individual processing circuits 160 is provided preferably on an IC 170 that includes most if not all of the remainder of the overall system.
  • a typical size for the array might be 100x100 pixel detectors 150 and an equal number of associated processing circuits 160.
  • An imaging light source such as a laser diode 180 emits energy via lens system 190 toward the imaging region 40. At least some of the emitted energy will be reflected from the surface of the user's hand, finger, a held baton, etc., back toward system 100, and can enter collection lens 200.
  • a phase-detection based ranging scheme could be employed.
  • the time interval from the start of a pulse of emitted light energy from source 180 to when some of the reflected energy is returned via lens 200 to be detected by a pixel diode detector in array 140 is measured.
  • This time-of-flight measurement can provide the vector distance to the location on the windshield, or elsewhere, from which the energy was reflected.
  • locations of the surface of the finger may, if desired, also be detected and determined.
  • System 100 preferably provides computer functions and includes a microprocessor or microcontroller system 210 that preferably includes a control processor 220, a data processor 230, and an input/output processor 240.
  • IC 170 preferably further includes memory 250 having random access memory (RAM) 260, read-only memory (ROM) 270, and memory storing routine(s) 280 used by the present invention to calculate vector distances, user finger movement velocity and movement direction, and relationships between projected images and location of a user's finger(s).
  • Circuit 290 provides timing, interface, and other support functions.
  • each preferably identical pixel detector 150 can generate data from which to calculate Z distance to a point p1(t) in front of windshield 50, on the windshield surface, or behind windshield 50, or to an intervening object.
  • each pixel detector preferably simultaneously acquires two types of data that are used to determine Z distance: distance time delay data, and energy pulse brightness data.
  • Delay data is the time required for energy emitted by emitter 180 to travel at the speed of light to windshield 50 or, if closer, a user's hand or finger or other object, and back to sensor array 140 to be detected.
  • Brightness is the total amount of signal generated by detected pulses as received by the sensor array. It will be appreciated that range finding data is obtained without touching the user's hand or finger with anything, e.g., the data is obtained non-haptically.
  • region 40 may be considered to be bounded in the z-axis direction by a front clipping plane 292 and by a rear clipping plane 294.
  • Rear clipping plane 294 may coincide with the z-axis distance from system 100 to the inner surface of windshield 50 (or other substrate in another application).
  • the z-axis distance separating planes 292 and 294 represents the proximity range within which a user's hand or forefinger is to be detected with respect to interaction with a projected image, e.g. 130B.
  • the tip of the user's forefinger is shown as passing through plane 292 to "touch" image 130B, here projected to appear intermediate the two clipping planes.
  • clipping planes 292 and 294 will be curved and the region between these planes can be defined as an immersion frustum 296.
  • As suggested by Fig. 2B, image 130B may be projected to appear within immersion frustum 296, or to appear behind (or outside) the windshield. If desired, the image could be made to appear in front of the frustum.
  • the upper and lower limits of region 40 are also bounded by frustum 296 in that when the user's hand is on the car seat or on the car roof, it is not necessary that system 100 recognize the hand position with respect to any virtual image, e.g., 130B, that may be presently displayed. It will be appreciated that the relationship shown in Fig. 2B is a very intuitive way to provide feedback in that the user sees the image of a control 130B, reaches towards and appears to manipulate the control.
  • Three-dimensional range data is acquired by system 100 from examination of time-of-flight information between signals emitted by emitter 180 via optional lens 190 and reflected signals returned via collection lens 200 to detector array 140.
  • system 100 knows a priori the distance and boundaries of frustum 296 and can detect when an object such as a user's forefinger is within the space bounded by the frustum.
  • Software 280 recognizes when a finger or other object is detected within this range, and system 100 is essentially advised of potential user intent to interact with any displayed images.
  • system 100 can display a menu of image choices when an object such as a user's finger is detected within frustum 296. (For example, in Fig. 3D, display 130D could show icons rather than buttons, one icon to bring up a cellular telephone dialing display, another icon to bring up a map display, another icon to bring up vehicle control displays, etc.)
  • Software 280 attempts to recognize objects (e.g., user's hand, forefinger, perhaps arm and body, head, etc.) within frustum 296, and can detect shape (e.g., perimeter) and movement (e.g., derivative of positional coordinate changes). If desired, the user may hold a passive but preferably highly reflective baton to point to regions in the virtual display. Although system 100 preferably uses time-of-flight z-distance data only, luminosity information can aid in discerning objects and object shapes and positions.
  • Software 280 could cause a display that includes virtual representations of portions of the user's body. For example if the user's left hand and forefinger are recognized by system 100, the virtual display in region 40 could include a left hand and forefinger. If the user's left hand moved in and out or left and right, the virtual image of the hand could move similarly. Such application could be useful in a training environment, for example where the user is to pick up potentially dangerous items and manipulate them in a certain fashion. The user would view a virtual image of the item, and would also view a virtual image of his or her hand grasping the virtual object, which virtual object could then be manipulated in the virtual space in frustum 296.
  • Figs. 3A, 3B, and 3C show portion 40 of an exemplary HUD display, as used by the embodiment of Fig. 1, in which system 100 projects image 130A as a slider control, perhaps a representation or token for an actual volume control 80 on an actual radio 70 within vehicle 20.
  • As the virtual slider bar 300 is "moved" to the right, it is the function of the present invention to command the volume of radio 70 to be increased, or if image 130A is a thermostat, to command the temperature within vehicle 20 to change, etc.
  • a system 100 projected image of a rotary knob type control 130B having a finger indent region 310 is also depicted in Fig. 3A.
  • in Fig. 3A, optionally none of the projected images is highlighted, in that the user's hand is not sufficiently close to region 40 to be sensed by system 100.
  • assume in Fig. 3B that the user's forefinger 320 has been moved towards windshield 50 (as depicted in Fig. 1), and indeed is within sense region 40.
  • the (x,y,z) coordinates of at least a portion of forefinger 320 are sufficiently close to the virtual slider bar 300 to cause the virtual slider bar and the virtual slider control image 130A to be highlighted by system 100.
  • the image may turn red as the user's forefinger "touches" the virtual slider bar.
  • the vector relationship in three dimensions between the user's forefinger and region 40 is determined substantially in real-time by system 100, or by any other system able to reliably calculate distance coordinates in three axes.
  • system 100 calculates the forefinger position, calculates that the forefinger is sufficiently close to the slider bar position to move the slider bar, and projects a revised image into region 40, wherein the slider bar has followed the user's forefinger.
  • Commands are issued over electrical bus lead 330 (see Fig. 2A), which is coupled to control systems in vehicle 20 including all devices 70 that are desired to at least have the ability to be virtually controlled, according to the present invention. Since system 100 is projecting an image associated, for example, with radio 70, the volume in radio 70 will be increased as the user's forefinger slides the computer rendered image of the slider bar to the right. Of course if the virtual control image 130A were, say, bass or treble, then bus lead 330 would command radio 70 to adjust bass or treble accordingly.
  • system 100 will store that location and continue to project, as desired by the user or as pre-programmed, that location for the slider bar image. Since the projected images can vary, it is understood that upon re-displaying slider control 130A at a later time (e.g., perhaps seconds or minutes or hours later), the slider bar will be shown at the last user-adjusted position, and the actual control function in device 70 will be set to the same actual level of control.
  • in Fig. 3D, assume that no images are presently active in region 40, e.g., the user is not or has not recently moved his hand or forefinger into region 40. But assume that system 100, which is coupled to various control systems and sensors via bus lead 330, now realizes that the gas tank is nearly empty, or that tire pressure is low, or that oil temperature is high. System 100 can now automatically project an alert or warning image 130C, e.g., "ALERT" or perhaps "LOW TIRE PRESSURE", etc. As such, it will be appreciated that what is displayed in region 40 by system 100 can be both dynamic and interactive.
  • Fig. 3D also depicts another HUD display, a virtual telephone dialing pad 130D, whose virtual keys the user may "press" with a forefinger.
  • device 70 may be a cellular telephone coupled via bus lead 330 to system 100.
  • routine(s) 280 within system 100 knows a priori the location of each virtual key in the display pad 130D, and it is a straightforward task to discern when an object, e.g., a user's forefinger, is in close proximity to region 40, and to any (x,y,z) location therein.
  • When a forefinger hovers over a virtual key for longer than a predetermined time, perhaps 100 ms, the key may be considered as having been "pressed".
  • the "hovering" aspect may be determined, for example, by examining the first derivative of the (x(t),y(t),z(t)) coordinates of the forefinger. When this derivative is zero, the user's forefinger has no velocity and indeed is contacting the windshield and can be moved no further in the z-axis. Other techniques may instead be used to determine location of a user's forefinger (or other hand portion), or a pointer held by the user, relative to locations within region 40.
  • Virtual knob 130B may be "grasped" by the user's hand, using for example the right thumb 321, the right forefinger 320, and the right middle finger 322, as shown in Fig. 3E.
  • By "grasped" it is meant that the user simply reaches for the computer-rendered and projected image of knob 130B as though it were a real knob.
  • virtual knob 130B is rendered in a highlight color (e.g., as shown by Fig. 3E) when the user's hand (or other object) is sufficiently close to the area of region 40 defined by knob 130B; otherwise, as in Fig. 3A, knob 130B might be rendered in a pale color, since no object is in close proximity to that portion of the windshield.
  • software 280 recognizes from acquired three-dimensional range finding data that an object (e.g., a forefinger) is close to the area of region 40 defined by virtual knob 130B. Accordingly in Fig. 3E, knob 130B is rendered in a more discernable color and/or with bolder lines than is depicted in Fig. 3A.
  • System 100 can compute and/or approximate the rotation angle θ using any of several approaches.
  • in a first approach, the exact rotation angle θ is determined as follows. Let the pre-rotation (e.g., Fig. 3E) and post-rotation (e.g., Fig. 3F) positions of the three fingertip contact points be known from the acquired range data.
  • the axis of rotation is approximately normal to the plane of the triangle defined by the three fingertip contact points (thumb 321, forefinger 320, and middle finger 322).
  • the (x,y,z) coordinates of point p can be calculated by the following formula:
  • angle θ can be calculated as follows:
  • system 100 may approximate rotation angle θ using a second approach, in which an exact solution is not required.
  • in this second approach it is desired to ascertain the direction of rotation (clockwise or counter-clockwise) and to approximate the magnitude of the rotation (a generic computational sketch of such an approximation appears after this Definitions list).
  • the z-axis extends from system 100, and the x-axis and y-axis are on the plane of the array of pixel diode detectors 140.
  • Line L may be represented by the following equation:
  • the clockwise or counter-clockwise direction of rotation may be defined by the following criterion:
  • the magnitude of the rotation angle θ may be approximated as follows:
  • A more simplified approach may be used in Fig. 3E, where user 30 may use a fingertip to point to virtual indentation 310 in the image of circular knob 130B.
  • the fingertip may now move clockwise or counter-clockwise about the rotation axis of knob 130B, with the result that system 100 causes the image of knob 130B to be rotated to track the user's perceived intended movement of the knob, and an actual controlled parameter on device 70 or vehicle 20 is varied accordingly.
  • the relationship between user manipulation of a virtual control and variation in an actual parameter of an actual device may be linear or otherwise, including linear in some regions of control and intentionally nonlinear in other regions.
  • Software 280 may of course use alternative algorithms, executed by computer system 210, to determine angular rotation of virtual knobs or other images rendered by computing system 210 and projected via lens 190 onto windshield or other area 50. As noted, computing system 210 will then generate the appropriate commands, coupled via bus 330 to device(s) 70 and/or vehicle 20.
  • Figs. 3G and 3H depict use of the present invention as a virtual training tool in which a portion of the user's body is immersed in the virtual display.
  • the virtual display 40' may be presented on a conventional monitor rather than in an HUD fashion.
  • system 100 can output video data and video drive data to a monitor, using techniques well known in the art.
  • in Figs. 3G and 3H a simple task is shown. The user, whose hand is depicted as 302, is to grasp a virtual object, shown as 130H (for example a small test tube containing a highly dangerous substance), and to carefully tilt the object so that its contents pour out into a target region, e.g., a virtual beaker 130I.
  • in Fig. 3G, the user's hand, which is detected and imaged by system 100, is depicted as 130G in the virtual display.
  • virtual hand 130G is shown as a stick figure, but a more realistic image may be rendered by system 100.
  • in Fig. 3H, the user's real hand 302 has rotated slightly counter-clockwise, and the virtual display 40' shows virtual object 130H and virtual hand 130G similarly rotated slightly counter-clockwise.
  • System 100 can analyze movements of the actual hand 302 to determine whether such movements were sufficiently carefully executed.
  • the virtual display could of course depict the pouring-out of contents, and if the accuracy of the pouring were not proper, the spilling of contents.
  • Object 130H and/or its contents might, for example, be highly radioactive, and the user's hand motions might be practice to operate a robotic control that will grasp and tilt an actual object whose virtual representation is shown as 130H.
  • use of the present invention permits practice sessions without the risk of any danger to the user. If the user "spills" the dangerous contents or "drops" the held object, there is no harm, unlike a practice session with an actual object and actual contents.
  • Fig. 3I depicts the present invention used in another training environment.
  • the user (whose hand is depicted as 302) perhaps actually holds a tool 400 to be used in conjunction with a second tool 410.
  • the user is being trained to manipulate a tool 400' to be used in conjunction with a second tool 410', where tool 400' is manipulated by a robotic system 420, 430 (analogous to device 70) under control of system 100, responsive to user-manipulation of tool 400.
  • Robotically manipulated tools 400', 410' are shown behind a pane 440, which may be a protective pane of glass, or which may be opaque, to indicate that tools 400', 410' cannot be directly viewed by the user.
  • tools 400', 410' may be at the bottom of the ocean, or on the moon, in which case communication bus 330 would include radio command signals. If the user can indeed view tools 400', 410' through pane 440, there would be no need for a computer-generated display. However if tools 400', 410' cannot be directly viewed, then a computer-generated display 40' could be presented. In this display, 130G could now represent the robotic arm 420 holding actual tool 400'. It is understood that as the user manipulates tool 400 (although manipulation could occur without tool 400), system 100 via bus 330 causes tool 400' to be manipulated robotically. Feedback to the user can occur visually, either directly through pane 440 or via display 40', or in terms of instrumentation that substantially in real-time tells the user what is occurring with tools 400', 410'.
  • Fig. 5A depicts a HUD virtual display created and projected by system 100 upon region 40 of windshield 50, in which system 70 is a global positioning system (GPS) device, or perhaps a computer storing zoomable maps.
  • image 130E is shown as a roadmap having a certain resolution.
  • a virtual scroll-type control 130F is presented to the right of image 130E, and a virtual image zoom control 130A is also shown.
  • Scroll control 130F is such that a user's finger can touch a portion of the virtual knob, e.g., perhaps a north-east portion, to cause projected image 130E to be scrolled in that compass direction.
  • Zoom control 130A, shown here as a slider bar, permits the user to zoom the image in or out using a finger to "move" virtual slider bar 300. If desired, zoom control 130A could of course be implemented as a rotary knob or other device, capable of user manipulation.
  • in Fig. 5B, the user has already touched and "moved" virtual slider bar 300 to the right, which, as shown by the indicia portion of image 130A, has zoomed in image 130E.
  • the image, now denoted 130E, has greater resolution and provides more detail.
  • when system 100 detects the user's finger (or pointer or other object) near bar 300, the detected three-dimensional (x,y,z) data permits knowing what level of zoom is desired.
  • System 100 then outputs on bus 330 the necessary commands to cause GPS or computer system 70 to provide a higher resolution map image. Because system 100 can respond substantially in real-time, there is little perceived lag between the time the user's finger "slides” bar 300 left or right and the time map image 130E is zoomed in or out. This feedback enables the user to rapidly cause the desired display to appear on windshield 50, without requiring the user to divert attention from the task of driving vehicle 20, including looking ahead, right through the images displayed in region 40, to the road and traffic ahead.
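
The exact rotation formulas referenced in the rotary-knob discussion above (Figs. 3E, 3F, 4A, and 4B) are not reproduced in this text. The Python sketch below, referenced from the second-approach discussion, shows one generic way to recover the direction and approximate magnitude of knob rotation from the three fingertip positions before and after the gesture; the math, function name, and example values are an assumed reconstruction for illustration, not the patent's own formulas.

```python
# Assumed reconstruction (not the patent's formulas): approximate the
# rotation of a "grasped" virtual knob from the three fingertip positions
# before and after the gesture. The rotation axis is taken as the normal
# to the plane of the fingertip triangle, and a signed angle is measured
# between corresponding edge vectors projected onto that plane.

import numpy as np

def knob_rotation(before, after):
    """before, after: 3x3 arrays of fingertip (x, y, z) rows
    (thumb, forefinger, middle finger). Returns a signed angle in
    radians; positive means counter-clockwise about the triangle normal."""
    b = np.asarray(before, dtype=float)
    a = np.asarray(after, dtype=float)
    normal = np.cross(b[1] - b[0], b[2] - b[0])
    normal /= np.linalg.norm(normal)
    v0 = b[1] - b[0]                       # thumb-to-forefinger edge, before
    v1 = a[1] - a[0]                       # same edge, after
    v0 = v0 - normal * np.dot(v0, normal)  # project both edges onto the
    v1 = v1 - normal * np.dot(v1, normal)  # plane of the fingertip triangle
    denom = np.linalg.norm(v0) * np.linalg.norm(v1)
    cos_t = np.dot(v0, v1) / denom
    sin_t = np.dot(np.cross(v0, v1), normal) / denom
    return float(np.arctan2(sin_t, cos_t))

# Example: fingertips rotated about 20 degrees counter-clockwise.
theta = np.radians(20.0)
rot_z = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
tips = np.array([[0.00, 0.00, 0.55],   # thumb
                 [0.04, 0.01, 0.55],   # forefinger
                 [0.02, 0.05, 0.55]])  # middle finger
print(np.degrees(knob_rotation(tips, tips @ rot_z.T)))  # approximately 20
```

The sign of the computed angle gives the clockwise or counter-clockwise decision the second approach calls for, and its magnitude can then be mapped, linearly or otherwise, onto the controlled parameter of device 70.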

Abstract

A virtual simulation system generates an image of a virtual control on a display that may be a heads-up display in a vehicle. The system uses three-dimensional range finding data to determine when a user is sufficiently close to the virtual control to 'manipulate' the virtual control. The user 'manipulation' is sensed non-haptically by the system, which causes the displayed control image to move in response to user manipulation. System output is coupled, linearly or otherwise, to an actual device having a parameter that is adjusted substantially in real-time by user-manipulation of the virtual image. System generated displays can be dynamic and change appearance when a user's hand is in close proximity; displays can disappear until needed, or can include menus and icons to be selected by the user, who points towards or touches the virtual images. System generated images can include a representation of the user for use in a training or gaming system.

Description

METHOD AND SYSTEM TO PRESENT IMMERSION VIRTUAL SIMULATIONS USING THREE-DIMENSIONAL MEASUREMENT
RELATION TO PREVIOUSLY FILED APPLICATION Priority is claimed from U.S. provisional patent application, serial number 60/180,473 filed 3 February 2000, and entitled "User Immersion in Computer Simulations and Applications Using 3-D Measurement", Abbas Rafii and Cyrus Bamji, applicants.
FIELD OF THE INVENTION The present invention relates generally to so-called virtual simulation methods and systems, and more particularly to creating simulations using three-dimensionally acquired data so as to appear to immerse the user in what is being simulated, and to permit the user to manipulate real objects by interacting with a virtual object.
BACKGROUND OF THE INVENTION So-called virtual reality systems have been computer implemented to mimic a real or a hypothetical environment. In a computer game context, for example, a user or player may wear a glove or a body suit that contains sensors to detect movement, and may wear goggles that present a computer rendered view of a real or virtual environment. User movement can cause the viewed image to change, for example to zoom left or right as the user turns. In some applications, the imagery may be projected rather than viewed through goggles worn by the user. Typically rules of behavior or interaction among objects in the virtual imagery being viewed are defined and adhered to by the computer system that controls the simulation. U.S. patent no. 5,963,891 to Walker (1999) entitled "System for Tracking Body Movements in a Virtual Reality System" discloses a system in which the user must wear a data-gathering body suit. U.S. patent no. 5,337,758 to Moore (1994) entitled "Spine Motion Analyzer and Method" discloses a sensor-type suit that can include sensory transducers and gyroscopes to relay back information as to the position of a user's body.
In training type applications, aircraft flight simulators may be implemented in which a pilot trainee (e.g., a user) views a computer-rendered three-dimensional representation of the environment while manipulating controls similar to those found on an actual aircraft. As the user manipulates the controls, the simulated aircraft appears to react, and the three-dimensional environment is made to change accordingly. The result is that the user interacts with the rendered objects in the viewed image.
But the necessity to provide and wear sensor-implemented body suits, gloves, helmets, or the necessity to wear goggles can add to the cost of a computer simulated system, and can be cumbersome to the user. Not only is freedom of motion restricted by such sensor-implemented devices, but it is often necessary to provide such devices in a variety of sizes, e.g., large-sized gloves for adults, medium-sized gloves, small-sized gloves, etc. Further, only the one user wearing the body suit, glove, helmet, or goggles can utilize the virtual system; onlookers for example see essentially nothing. An onlooker not wearing such sensor-laden garments cannot participate in the virtual world being presented and cannot manipulate virtual objects.
U.S. patent no. 5,168,531 to Sigel (1992) entitled "Real-time Recognition of Pointing Information From Video" discloses a luminosity-based two-dimensional information acquisition system. Sigel attempts to recognize the occurrence of a predefined object in an image by receiving image data that is convolved with a set of predefined functions, in an attempt to define occurrences of elementary features characteristic of the predefined object. But Sigel's reliance upon luminosity data requires a user's hand to exhibit good contrast against a background environment to avoid confusing the recognition algorithm used.
Two-dimensional data acquisition systems such as disclosed by Korth in U.S. patent no. 5,767,842 (1998) entitled "Method and Device for Optical Input of Commands or Data" use video cameras to image the user's hand or body. In some applications the images can be combined with computer-generated images of a virtual background or environment. Techniques including edge and shape detection and tracking, object and user detection and tracking, color and gesture tracking, motion detection, brightness and hue detection are sometimes used to try to identify and track user action. In a game application, a user could actually see himself or herself throwing a basketball in a virtual basketball court, for example, or shooting a weapon towards a virtual target. Such systems are sometimes referred to as immersion systems.
But two-dimensional data acquisition systems only show user motion in two dimensions, e.g., x-axis and y-axis but not also z-axis. Thus if the user in real life would use a back and forth motion to accomplish a task, e.g., to throw a ball, in a two-dimensional system the user must instead substitute a sideways motion, to accommodate the limitations of the data acquisition system. In a training application, if the user were to pick up a component, rotate the component and perhaps move the component backwards and forwards, the acquisition system would be highly challenged to capture all gestures and motions. Also, such systems do not provide depth information, and such data as is acquired is luminosity-based and is highly sensitive to ambient light and contrast conditions. An object moved against a background of similar color and contrast would be very difficult to track using such prior art two-dimensional acquisition systems. Further, such prior art systems can be expensive to implement in that considerable computational power is required to attempt to resolve the acquired images.
Prior art systems that attempt to acquire three-dimensional data using multiple two-dimensional video cameras similarly require substantial computing power, good ambient lighting conditions, and suffer from the limitation that depth resolution is limited by the distance separating the multiple cameras. Further, the need to provide multiple cameras adds to the cost of the overall system. What is needed is a virtual simulation system in which a user can view and manipulate computer-generated objects and thereby control actual objects, preferably without requiring the user to wear sensor-implemented devices. Further, such system should permit other persons to see the virtual objects that are being manipulated. Such system should not require multiple image acquiring cameras (or equivalent) and should function in various lighting environments and should not be subject to inaccuracy due to changing ambient light and/or contrast. Such system should use Z-values (distance vector measurements) rather than luminosity data to recognize user interaction with system-created virtual images.
The present invention provides such a system.
SUMMARY OF THE INVENTION The present invention provides computer simulations in which user-interaction with computer-generated images of objects to be manipulated is captured in three-dimensions, without requiring the user to wear sensors. The images may be projected using conventional methods including liquid crystal displays and micro-mirrors.
A computer system renders objects that are preferably viewed in a heads-up display (HUD). Although neither goggles nor special viewing equipment is required by the user in an HUD embodiment, in other applications the display may indeed include goggles, a monitor, or other display equipment. In a motor vehicle application, the HUD might be a rendering of a device for the car, e.g., a car radio, that is visible to the vehicle driver looking toward the vehicle windshield. To turn the virtual radio on, the driver would move a hand close as if to "touch" or otherwise manipulate the projected image of an on/off switch in the image. To change volume, the driver would "move" the projected image of a volume control. There is substantially instant feedback between the parameter change in the actual device, e.g., loudness of the radio audio, as perceived (e.g., heard) by the user, and user "movement" of the virtual control. To change stations, the driver would "press" the projected image of a frequency control until the desired station is heard, whereupon the virtual control would be released by the user. Other displayed images may include warning messages concerning the state of the vehicle, or other environment, or GPS-type map displays that the user can control.
The physical location and movement of the driver's fingers in interacting with the computer-generated images in the HUD are determined non-haptically in three dimensions by a three-dimensional range finder within the system. The three-dimensional data acquisition system operates preferably by transmitting light signals, e.g., energy in the form of laser pulses, modulated light beams, etc. In a preferred embodiment, return time-of-flight measurements between transmitted energy and energy reflected or returned from an object can provide (x,y,z) axis position information as to the presence and movement of objects. Such objects can include a user's hand, fingers, perhaps a held baton, in a sense-vicinity to virtual objects that are projected by the system. In an HUD application, such virtual objects may be projected to appear on (or behind or in front of) a vehicle windshield. Preferably ambient light is not relied upon in obtaining the three-dimensional position information, with the result that the system does not lose positional accuracy in the presence of changing light or contrast environments. In other applications, modulated light beams could instead be used.
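The time-of-flight relationship described above reduces to simple arithmetic: the round-trip delay of an emitted pulse gives the radial distance to the reflecting surface, and the detecting pixel's known viewing direction converts that distance into an (x,y,z) position. The short Python sketch below illustrates the calculation; the function names and the example numbers are illustrative assumptions, not values taken from the patent.

```python
# Illustrative sketch of pulsed time-of-flight ranging: a round-trip delay
# becomes a radial distance, and the pixel's viewing direction turns that
# distance into an (x, y, z) point. Names and numbers are assumptions.

C = 299_792_458.0  # speed of light, m/s

def radial_distance(round_trip_seconds):
    """Distance to the reflecting surface; the pulse travels out and back."""
    return C * round_trip_seconds / 2.0

def to_xyz(distance_m, unit_direction):
    """Scale the pixel's (approximately unit) viewing direction by the
    measured distance to obtain an (x, y, z) position."""
    dx, dy, dz = unit_direction
    return (distance_m * dx, distance_m * dy, distance_m * dz)

# Example: an echo returning after 6 nanoseconds corresponds to about 0.9 m.
d = radial_distance(6.0e-9)
print(round(d, 3), to_xyz(d, (0.0, 0.2, 0.98)))
```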
When the user's hand (or other object evidencing user-intent) is within a sense-frustum range of the projected object, the three-dimensional range output data is used to change the computer-created image in accordance with the user's hand or finger (or other) movement. If the user's hand or finger (or other) motion "moves" a virtual sliding radio volume control to the right within the HUD, the system will cause the virtual image of the slider to be moved to the right. At the same time, the volume on the actual radio in the vehicle will increase, or whatever device parameter is to be thus controlled. Range finding information is collected non-haptically, e.g., the user need not actually touch anything for (x,y,z) distance sensing to result. The HUD system can also be interactive in the sense of displaying dynamic images as required. A segment of the HUD might be motor vehicle gages, which segment is not highlighted unless the user's fingers are moved to that region. On the other hand, the system can automatically create and highlight certain images when deemed necessary by the computer, for example a flashing "low on gas" image might be projected without user request.
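A minimal sketch of the sensing logic just described: test whether a detected fingertip lies inside the immersion frustum, and, if it is riding the virtual slider, map the slider position onto an actual device parameter such as radio volume. The frustum bounds, coordinate values, and the linear volume mapping are assumptions chosen only for illustration.

```python
# Illustrative sketch (assumed geometry, names, and mapping): decide whether
# a fingertip is inside the sensing frustum, then map a virtual slider
# position onto an actual device parameter such as radio volume.

from dataclasses import dataclass

@dataclass
class Frustum:
    z_front: float  # distance to the front clipping plane, meters
    z_rear: float   # distance to the rear clipping plane (e.g., windshield)
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, point):
        x, y, z = point
        return (self.z_front <= z <= self.z_rear and
                self.x_min <= x <= self.x_max and
                self.y_min <= y <= self.y_max)

def slider_to_volume(finger_x, track_x0, track_x1, vol_min=0, vol_max=40):
    """Linearly map the fingertip's x position along the slider track to a
    volume step; a nonlinear mapping could be substituted."""
    frac = (finger_x - track_x0) / (track_x1 - track_x0)
    frac = min(1.0, max(0.0, frac))  # clamp to the ends of the track
    return round(vol_min + frac * (vol_max - vol_min))

# Example: a fingertip at (0.05, 0.10, 0.55) m lies inside the frustum,
# about two thirds of the way along a 30 cm slider track.
frustum = Frustum(z_front=0.45, z_rear=0.60,
                  x_min=-0.20, x_max=0.20, y_min=-0.05, y_max=0.25)
tip = (0.05, 0.10, 0.55)
if frustum.contains(tip):
    print("volume command:", slider_to_volume(tip[0], -0.15, 0.15))
```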
In other applications, a CRT or LCD display can be used to display a computer rendering of objects that may be manipulated with a user's fingers, for example a virtual thermostat to control home temperature. "Adjusting" the image of the virtual thermostat will in fact cause the heating or cooling system for the home to be readjusted. Advantageously such display(s) can be provided where convenient to users, without regard to where physical thermostats (or other controls) may actually have been installed. In a factory training application, the user may view an actual object being remotely manipulated as a function of user movement, or may view a virtual image that is manipulated as a function of user movement, which system-detected movement causes an actual object to be moved.
The present invention may also be used to implement training systems. In its various embodiments, the present invention presents virtual images that a user can interact with to control actual devices. Onlookers may see what is occurring in that the user is not required to wear sensor-equipped clothing, helmets, gloves, or goggles.
Other features and advantages of the invention will appear from the following description in which the preferred embodiments have been set forth in detail, in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 depicts a heads-up display of a user-immersible computer simulation, according to the present invention; FIG. 2A is a generic block diagram showing a system with which the present invention may be practiced;
FIG. 2B depicts clipping planes used to detect user-proximity to virtual images displayed by the present invention;
FIGS. 3A-3C depict use of a slider-type virtual control, according to the present invention;
FIG. 3D depicts exemplary additional images created by the present invention;
FIGS. 3E and 3F depict use of a rotary-type virtual control, according to the present invention;
FIGS. 3G, 3H, and 3I depict the present invention used in a manual training type application;
FIGS. 4A and 4B depict reference frames used to recognize virtual rotation of a rotary-type virtual control, according to the present invention; and
FIGS. 5A and 5B depict user-zoomable virtual displays useful to control a GPS device, according to the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT Fig. 1 depicts a heads-up display (HUD) application of a user-immersible computer simulation system, according to the present invention. The present invention 10 is shown mounted in the dashboard or other region of a motor vehicle 20 in which there is seated a user 30. Among other functions, system 10 computer-generates and projects imagery onto or adjacent an image region 40 of front windshield 50 of vehicle 20. Image projection can be carried out with conventional systems such as LCDs, or micro-mirrors. In this embodiment, user 30 can look ahead through windshield 50 while driving vehicle 20, and can also see any image(s) that are projected into region 40 by system 10. In this embodiment, system 10 may properly be termed a heads-up display system. Also shown in Fig. 1 are the three reference x,y,z axes. As described later herein with reference to Fig. 2B, region 40 may be said to be bounded in the z-axis by clipping planes.
User 30 is shown as steering vehicle 20 with the left hand while the right hand is near or touching a point p1(t) on or before an area of windshield within a detection range of system 10. By "detection range" it is meant that system 10 can determine in three dimensions the location of point p1(t) as a function of time (t) within a desired proximity to image region 40. Thus, p1(t) may be uniquely defined by coordinates p1(t) = (x1(t), y1(t), z1(t)). Because system 10 has three-dimensional range finding capability, it is not required that the hand of user 30 be covered with a sensor-laden glove, as in many prior art systems. Further, since system 10 knows what virtual objects (if any) are displayed in image region 40, the interaction between the user's finger and such images may be determined. Detection in the present invention occurs non-haptically, that is, it is not required that the user's hand or finger or pointer actually make physical contact with a surface or indeed anything in order to obtain the (x,y,z) coordinates of the hand, finger, or pointer.
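Because the system reports p1(t) frame after frame, the fingertip's approximate velocity is simply the difference between successive samples divided by the frame interval; elsewhere the text treats a fingertip that hovers over a virtual key for roughly 100 ms as having "pressed" it. The sketch below is one assumed way to express that dwell test; the speed threshold and the 30 frames-per-second interval are illustrative, not values from the patent.

```python
# Illustrative sketch: flag a "press" when the tracked point p1(t) has stayed
# nearly motionless for a dwell interval on the order of 100 ms. The speed
# threshold, frame interval, and function name are assumptions.

import math

def is_press(samples, frame_dt=1.0 / 30.0, dwell_s=0.10, speed_thresh=0.05):
    """samples: recent (x, y, z) positions of p1(t), oldest first, one per
    frame. Returns True when the fingertip speed (m/s) has stayed below
    speed_thresh for at least dwell_s seconds."""
    needed = max(2, int(round(dwell_s / frame_dt)) + 1)
    if len(samples) < needed:
        return False
    recent = samples[-needed:]
    for p_prev, p_next in zip(recent, recent[1:]):
        if math.dist(p_prev, p_next) / frame_dt > speed_thresh:
            return False
    return True

# Example: four nearly identical samples at 30 frames per second -> a press.
trace = [(0.05, 0.10, 0.55)] * 4
print(is_press(trace))  # True
```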
Fig. 1 depicts a device 60 having at least one actual control 70 also mounted in vehicle 20, device 60 shown being mounted in the dashboard region of the vehicle. Device 60 may be an electronic device such as a radio, CD player, telephone, a thermostat control or window control for the vehicle, etc. As will be described, system 10 can project one or more images, including an image of device 60 or at least a control 70 from device 60.
Exemplary implementations for system 10 may be found in co-pending U.S. patent application 09/401,059 filed 22 September 1999 entitled "CMOS-Compatible Three-Dimensional Image Sensor IC", in co-pending U.S. patent application 09/502,499 filed 11 February 2000 entitled "Method and Apparatus for Creating a Virtual Data Entry Device", and in co-pending U.S. patent application 09/727,529 filed 28 November 2000 entitled "CMOS-Compatible Three-Dimensional Image Sensor IC". In that a detailed description of such systems may be helpful, applicants refer to and incorporate by reference each said pending U.S. patent application. The systems described in these patent applications can be implemented in a form factor sufficiently small to fit into a small portion of a vehicle dashboard, as suggested by Fig. 1 herein. Further, such systems consume low operating power and can provide real-time (x,y,z) information as to the proximity of a user's hand or finger to a target region, e.g., region 40 in Fig. 1. System 100, as used in the present invention, preferably collects data at a frame rate of at least ten frames per second, and preferably thirty frames per second. Resolution in the x-y plane is preferably in the 2 cm or better range, and in the z-axis is preferably in the 1 cm to 5 cm range.
A less suitable candidate for a multi-dimensional imaging system might be along the lines of U.S. patent no. 5,767,842 to Korth (1998) entitled "Method and Device for Optical Input of Commands or Data". Korth proposes the use of conventional two-dimensional TV video cameras in a system to somehow recognize what portion of a virtual image is being touched by a human hand. But Korth's method is subject to inherent ambiguities arising from his reliance upon relative luminescence data and upon an adequate source of ambient lighting. By contrast, the applicants' referenced co-pending applications disclose a true time-of-flight three-dimensional imaging system in which neither luminescence data nor ambient light is relied upon.
However implemented, the present invention preferably utilizes a small form factor, preferably inexpensive, imaging system that can find range distances in three dimensions, substantially in real-time, in a non-haptic fashion. Fig. 2A is an exemplary system showing the present invention in which the range finding system is similar to that disclosed in the above-referenced co-pending U.S. patent applications. Other non-haptic three-dimensional range finding systems could instead be used, however. In Fig. 2A, system 100 is a three-dimensional range finding system that is augmented by sub-system 110, which generates and can project, via an optical system 120, computer-created object images such as 130A, 130B. Such projection may be carried out with LCDs or micro-mirrors, or with other components known in the art. In the embodiment shown, the images created can appear to be projected upon the surface of windshield 50, in front of windshield 50, or behind windshield 50.
The remainder of system 100 may be as disclosed in the exemplary patent applications. An array 140 of pixel detectors 150 and their individual processing circuits 160 is provided preferably on an IC 170 that includes most if not all of the remainder of the overall system. A typical size for the array might be 100x100 pixel detectors 150 and an equal number of associated processing circuits 160. An imaging light source such as a laser diode 180 emits energy via lens system 190 toward the imaging region 40. At least some of the emitted energy will be reflected from the surface of the user's hand, finger, a held baton, etc., back toward system 100, and can enter collection lens 200. Alternatively, rather than use pulses of energy, a phase-detection based ranging scheme could be employed.
The time interval from the start of a pulse of emitted light energy from source 180, via lens system 190, to when some of the reflected energy is returned via lens 200 and detected by a pixel diode detector in array 140 is measured. This time-of-flight measurement can provide the vector distance to the location on the windshield, or elsewhere, from which the energy was reflected. Clearly if a human finger (or other object) is within the imaging region 40, locations of the surface of the finger may, if desired, also be detected and determined.
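The underlying range computation can be illustrated with a short sketch (illustrative only; the function name, constant, and example timing are assumptions rather than elements of the application): the measured round-trip time is halved and multiplied by the speed of light to give the one-way distance to the reflecting surface.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def round_trip_time_to_range(t_seconds):
    """Convert a measured emit-to-detect round-trip time into the one-way
    distance from the sensor to the reflecting surface. Illustrative of the
    time-of-flight principle only; a real pixel circuit would also correct
    for electronic delays and use brightness data to qualify the sample."""
    return SPEED_OF_LIGHT_M_PER_S * t_seconds / 2.0

# Example: a reflection detected ~6.7 ns after emission corresponds to a
# surface roughly 1 m from the sensor array.
print(round_trip_time_to_range(6.7e-9))   # ~1.0 (metres)
```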
System 100 preferably provides computer functions and includes a microprocessor or microcontroller system 210 that preferably includes a control processor 220, a data processor 230, and an input/output processor 240. IC 170 preferably further includes memory 250 having random access memory (RAM) 260, read-only memory (ROM) 270, and memory storing routine(s) 280 used by the present invention to calculate vector distances, user finger movement velocity and movement direction, and relationships between projected images and the location of a user's finger(s). Circuit 290 provides timing, interface, and other support functions.
Within array 140, each preferably identical pixel detector 150 can generate data from which to calculate the Z distance to a point p1(t) in front of windshield 50, on the windshield surface, or behind windshield 50, or to an intervening object. In the disclosed applications, each pixel detector preferably simultaneously acquires two types of data that are used to determine Z distance: distance time delay data, and energy pulse brightness data. Delay data is the time required for energy emitted by emitter 180 to travel at the speed of light to windshield 50 or, if closer, to a user's hand or finger or other object, and back to sensor array 140 to be detected. Brightness is the total amount of signal generated by detected pulses as received by the sensor array. It will be appreciated that range finding data is obtained without touching the user's hand or finger with anything, e.g., the data is obtained non-haptically.
As shown in Fig. 2B, region 40 may be considered to be bounded in the z-axis direction by a front clipping plane 292 and by a rear clipping plane 294. Rear clipping plane 294 may coincide with the z-axis distance from system 100 to the inner surface of windshield 50 (or other substrate in another application). The z-axis distance separating planes 292 and 294 represents the proximity range within which a user's hand or forefinger is to be detected with respect to interaction with a projected image, e.g., 130B. In Fig. 2B, the tip of the user's forefinger is shown as passing through plane 292 to "touch" image 130B, here projected to appear intermediate the two clipping planes.
In reality, clipping planes 292 and 294 will be curved, and the region between these planes can be defined as an immersion frustum 296. As suggested by Fig. 2B, image 130B may be projected to appear within immersion frustum 296, or to appear behind (or outside) the windshield. If desired, the image could be made to appear in front of the frustum. The upper and lower limits of region 40 are also bounded by frustum 296: when the user's hand is on the car seat or on the car roof, it is not necessary that system 100 recognize the hand position with respect to any virtual image, e.g., 130B, that may be presently displayed. It will be appreciated that the relationship shown in Fig. 2B provides very intuitive feedback, in that the user sees the image of a control 130B, reaches towards it, and appears to manipulate the control.
Three-dimensional range data is acquired by system 100 from examination of time-of-flight information between signals emitted by emitter 180 via optional lens 190, and return signals entering optional lens 200 and detected by array 140. System 100 knows a priori the distance and boundaries of frustum 296, and can detect when an object such as a user's forefinger is within the space bounded by the frustum. When software 280 recognizes that a finger or other object is within this range, system 100 is essentially advised of potential user intent to interact with any displayed images. Alternatively, system 100 can display a menu of image choices when an object such as a user's finger is detected within frustum 296. (For example, in Fig. 3D, display 130D could show icons rather than buttons: one icon to bring up a cellular telephone dialing display, another icon to bring up a map display, another icon to bring up vehicle control displays, etc.)
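A minimal sketch of such a proximity test follows, treating the immersion frustum as a simple box even though, as noted above, the actual clipping planes will be curved; the class, field names, and example coordinates are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class ImmersionFrustum:
    """Simplified, box-shaped stand-in for the immersion frustum: the region
    of interest is bounded in z by the front and rear clipping planes and in
    x-y by the extent of the projected image region."""
    z_front: float   # distance from the sensor to the front clipping plane
    z_rear: float    # distance to the rear clipping plane (e.g. windshield)
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, point):
        x, y, z = point
        return (self.z_front <= z <= self.z_rear and
                self.x_min <= x <= self.x_max and
                self.y_min <= y <= self.y_max)

# A fingertip reported at (0.10, 0.25, 0.55) m would flag potential user
# intent to interact with whatever virtual controls are currently displayed.
frustum = ImmersionFrustum(z_front=0.45, z_rear=0.65,
                           x_min=-0.3, x_max=0.3, y_min=0.0, y_max=0.4)
print(frustum.contains((0.10, 0.25, 0.55)))   # True
```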
Software 280 attempts to recognize objects (e.g., the user's hand, forefinger, perhaps arm and body, head, etc.) within frustum 296, and can detect shape (e.g., perimeter) and movement (e.g., the derivative of positional coordinate changes). If desired, the user may hold a passive but preferably highly reflective baton to point to regions in the virtual display. Although system 100 preferably uses time-of-flight z-distance data only, luminosity information can aid in discerning objects and object shapes and positions.
Software 280 could cause a display that includes virtual representations of portions of the user's body. For example, if the user's left hand and forefinger are recognized by system 100, the virtual display in region 40 could include a left hand and forefinger. If the user's left hand moved in and out or left and right, the virtual image of the hand could move similarly. Such an application could be useful in a training environment, for example where the user is to pick up potentially dangerous items and manipulate them in a certain fashion. The user would view a virtual image of the item, and would also view a virtual image of his or her hand grasping the virtual object, which virtual object could then be manipulated in the virtual space in frustum 296.
Figs. 3A, 3B, and 3C show portion 40 of an exemplary HUD display, as used by the embodiment of Fig. 1, in which system 100 has projected image 130A, a slider control, perhaps a representation or token for an actual volume control 80 on an actual radio 70 within vehicle 20. As the virtual slider bar 300 is "moved" to the right, it is the function of the present invention to command the volume of radio 70 to increase or, if image 130A is a thermostat, to command the temperature within vehicle 20 to change, etc. Also depicted in Fig. 3A is a system 100 projected image of a rotary knob type control 130B having a finger indent region 310.
In Fig. 3A, optionally none of the projected images is highlighted, in that the user's hand is not sufficiently close to region 40 to be sensed by system 100. Note, however, in Fig. 3B that the user's forefinger 320 has been moved towards windshield 50 (as depicted in Fig. 1), and indeed is within sense region 40. Further, the (x,y,z) coordinates of at least a portion of forefinger 320 are sufficiently close to the virtual slider bar 300 to cause the virtual slider bar and the virtual slider control image 130A to be highlighted by system 100. For example, the image may turn red as the user's forefinger "touches" the virtual slider bar. It is understood that the vector relationship in three dimensions between the user's forefinger and region 40 is determined substantially in real-time by system 100, or by any other system able to reliably calculate distance coordinates in three axes. In Fig. 3B the slider bar image has been "moved" to the right, e.g., as the user's forefinger moves left to right on the windshield, system 100 calculates the forefinger position, determines that the forefinger is sufficiently close to the slider bar position to move the slider bar, and projects a revised image into region 40, wherein the slider bar has followed the user's forefinger.
At the same time, commands are issued over electrical bus lead 330 (see Fig. 2A), which is coupled to control systems in vehicle 20, including all devices 70 that are desired to at least have the ability to be virtually controlled according to the present invention. Since system 100 is projecting an image associated, for example, with radio 70, the volume in radio 70 will be increased as the user's forefinger slides the computer-rendered image of the slider bar to the right. Of course, if the virtual control image 130A were, say, bass or treble, then bus lead 330 would command radio 70 to adjust bass or treble accordingly. Once the virtual slider bar image 300 has been "moved" to a desirable location by the user's forefinger, system 100 will store that location and continue to project, as desired by the user or as pre-programmed, that location for the slider bar image. Since the projected images can vary, it is understood that upon re-displaying slider control 130A at a later time (e.g., perhaps seconds or minutes or hours later), the slider bar will be shown at the last user-adjusted position, and the actual control function in device 70 will be set to the same actual level of control.
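By way of illustration only, the following sketch shows one way the slider interaction above could be mapped to a device command. The dictionary fields, grab tolerance, and the send_command stand-in are assumptions made for this sketch, not elements disclosed in the application.

```python
def send_command(device, parameter, value):
    """Stand-in for the command that would be written onto bus lead 330."""
    print(f"{device}.{parameter} <- {value:.2f}")

def slider_value_from_finger(finger_x, slider_x_min, slider_x_max):
    """Map the fingertip's x position along the virtual slider track to a
    normalized control value in 0.0..1.0, clamped to the track ends."""
    value = (finger_x - slider_x_min) / (slider_x_max - slider_x_min)
    return min(1.0, max(0.0, value))

def update_virtual_slider(finger_xyz, slider):
    """If the fingertip is close enough to the slider bar, move the bar to
    follow the finger and emit a command for the actual device."""
    x, y, z = finger_xyz
    near_bar = (abs(y - slider["y"]) < slider["grab_tolerance"] and
                abs(z - slider["z"]) < slider["grab_tolerance"])
    if near_bar:
        value = slider_value_from_finger(x, slider["x_min"], slider["x_max"])
        slider["value"] = value                    # redraw the bar here
        send_command(slider["device"], slider["parameter"], value)
    return slider

slider = {"device": "radio 70", "parameter": "volume", "value": 0.3,
          "x_min": -0.2, "x_max": 0.2, "y": 0.1, "z": 0.6,
          "grab_tolerance": 0.05}
update_virtual_slider((0.10, 0.12, 0.58), slider)   # radio 70.volume <- 0.75
```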
Turning to Fig. 3D, assume that no images are presently active in region 40, e.g., the user has not recently moved his hand or forefinger into region 40. But assume that system 100, which is coupled to various control systems and sensors via bus lead 330, now realizes that the gas tank is nearly empty, or that tire pressure is low, or that oil temperature is high. System 100 can now automatically project an alert or warning image 130C, e.g., "ALERT" or perhaps "LOW TIRE PRESSURE", etc. As such, it will be appreciated that what is displayed in region 40 by system 100 can be both dynamic and interactive.
Fig. 3D also depicts another HUD display, a virtual telephone dialing pad 130D, whose virtual keys the user may "press" with a forefinger. In this instance, device 70 may be a cellular telephone coupled via bus lead 330 to system 100. As the user's forefinger touches a virtual key, the actual telephone 70 can be dialed. Software, e.g., routine(s) 280, within system 100 knows a priori the location of each virtual key in the display pad 130D, and it is a straightforward task to discern when an object, e.g., a user's forefinger, is in close proximity to region 40, and to any (x,y,z) location therein. When a forefinger hovers over a virtual key for longer than a predetermined time, perhaps 100 ms, the key may be considered as having been "pressed". The "hovering" aspect may be determined, for example, by examining the first derivative of the (x(t),y(t),z(t)) coordinates of the forefinger. When this derivative is zero, the user's forefinger has no velocity, and indeed is contacting the windshield and can be moved no further in the z-axis. Other techniques may instead be used to determine the location of a user's forefinger (or other hand portion), or a pointer held by the user, relative to locations within region 40.
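A sketch of such a dwell-based "press" test follows. The dwell time echoes the 100 ms example above, while the speed threshold, sampling format, and function names are assumptions made for illustration.

```python
import math

HOVER_SECONDS = 0.10     # the example dwell time above (~100 ms)
SPEED_EPSILON = 0.01     # m/s below which the fingertip is treated as at rest

def detect_key_press(samples, key_region):
    """samples: chronological list of (t, (x, y, z)) fingertip observations.
    key_region: function returning True when an (x, y, z) point lies over a
    given virtual key. Returns True once the fingertip has remained over the
    key, essentially motionless, for at least HOVER_SECONDS; the speed test
    approximates the 'first derivative is zero' criterion in the text."""
    dwell_start = None
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        dt = t1 - t0
        speed = math.dist(p0, p1) / dt if dt > 0 else float("inf")
        if key_region(p1) and speed < SPEED_EPSILON:
            if dwell_start is None:
                dwell_start = t1
            if t1 - dwell_start >= HOVER_SECONDS:
                return True
        else:
            dwell_start = None
    return False
```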
Referring to Fig. 3E, assume that the user wants to "rotate" virtual knob 130B, perhaps to change frequency on a radio, to adjust the driver's seat position, to zoom in or zoom out on a projected image of a road map, etc. Virtual knob 130B may be "grasped" by the user's hand, using for example the right thumb 321, the right forefinger 320, and the right middle finger 322, as shown in Fig. 3E. By "grasped" it is meant that the user simply reaches for the computer-rendered and projected image of knob 130B as though it were a real knob. In a preferred embodiment, virtual knob 130B is rendered in a highlight color (e.g., as shown by Fig. 3E) when the user's hand (or other object) is sufficiently close to the area of region 40 defined by knob 130B. Thus in Fig. 3A, knob 130B might be rendered in a pale color, since no object is in close proximity to that portion of the windshield. But in Fig. 3E, software 280 recognizes from acquired three-dimensional range finding data that an object (e.g., a forefinger) is close to the area of region 40 defined by virtual knob 130B. Accordingly, in Fig. 3E, knob 130B is rendered in a more discernable color and/or with bolder lines than is depicted in Fig. 3A. In Fig. 3E, the three fingers noted will "contact" virtual knob 130B at three points, denoted a1 (thumb tip position), a2 (forefinger tip position), and a3 (middle fingertip position). With reference to Figs. 4A and 4B, analysis can be carried out by software 280 to recognize the rotation of virtual knob 130B that is shown in Fig. 3F, to recognize the magnitude of the rotation, and to translate such data into commands coupled via bus 330 to actual device(s) 70.
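The grasp detection itself can be sketched as a simple proximity count; the radius, names, and three-fingertip threshold are illustrative assumptions rather than limitations of the disclosure.

```python
import math

GRASP_RADIUS = 0.06   # metres; how close a fingertip must be to the knob

def knob_grasped(fingertips, knob_center):
    """Return (grasped, contact_points). The knob is treated as 'grasped'
    (and would be rendered highlighted) when at least three tracked
    fingertips lie within GRASP_RADIUS of the projected knob position;
    the first three such points serve as a1, a2, a3 for the analysis below."""
    contacts = [p for p in fingertips
                if math.dist(p, knob_center) <= GRASP_RADIUS]
    return len(contacts) >= 3, contacts[:3]
```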
Consider the problem of determining the rotation angle θ of virtual knob 130B given coordinates for three points a1, a2, and a3, representing the perceived tips of the user's fingers before rotation. System 100 can compute and/or approximate the rotation angle θ using any of several approaches. In a first approach, the exact rotation angle θ is determined as follows. Let the pre-rotation (e.g., Fig. 3E position) points be denoted

a1 = (X1, Y1, Z1), a2 = (X2, Y2, Z2), a3 = (X3, Y3, Z3),

and let

ā1 = (X̄1, Ȳ1, Z̄1), ā2 = (X̄2, Ȳ2, Z̄2), ā3 = (X̄3, Ȳ3, Z̄3)

be the respective coordinates after rotation through angle θ, as shown in Fig. 3F. In Figs. 3E and 3F and 4A and 4B, rotation of the virtual knob is shown in a counter-clockwise direction.

Referring to Fig. 4A, the center of rotation may be considered to be point p = (xp, yp, zp), whose coordinates are unknown. The axis of rotation is approximately normal to the plane of the triangle defined by the three fingertip contact points a1, a2, and a3. The (x,y,z) coordinates of point p can be calculated by the following formula:

[formula reproduced as equation images imgf000017_0003 through imgf000017_0005 in the original filing]

If the rotation angle θ is relatively small, angle θ can be calculated as follows:

[formula reproduced as equation image imgf000018_0001 in the original filing]
Alternatively, system 100 may approximate rotation angle θ using a second approach, in which an exact solution is not required. In this second approach, it is desired to ascertain the direction of rotation (clockwise or counter-clockwise) and to approximate the magnitude of the rotation.

Referring now to Fig. 4C, let point c = (cx, cy, cz) be the center of the triangle defined by the three pre-rotation points a1, a2, and a3. The following formula may now be used:

cx = (X1 + X2 + X3) / 3
cy = (Y1 + Y2 + Y3) / 3
cz = (Z1 + Z2 + Z3) / 3

Again, as shown in Fig. 1, the z-axis extends from system 100, and the x-axis and y-axis are on the plane of the array of pixel diode detectors 140. Let L be a line passing through points a1 and c, and let Lp be the projection of line L onto the x-y plane. Line L may be represented by the following equation:

L(x, y) = -((cy - Y1) / (cx - X1))·(x - X1) + y - Y1 = 0

The clockwise or counter-clockwise direction of rotation may be defined by the following criterion: rotation is clockwise if

[condition reproduced as equation image imgf000019_0001 in the original filing]

and rotation is counter-clockwise if

L(cx, cy)·L(X2, Y2) > 0

[see equation image imgf000019_0002 in the original filing]. In operation, a software algorithm, perhaps part of routine(s) 280, executed by computer sub-system 210 selects two of the above points, passes line L through those points, and uses the above criterion to define the direction of rotation. The magnitude of rotation may be approximated by defining di, the distance between ai and āi, as follows:

di = sqrt((X̄i - Xi)² + (Ȳi - Yi)² + (Z̄i - Zi)²), for i = 1, 2, 3.

The magnitude of the rotation angle θ may be approximated as follows:

θ ≅ k(d1 + d2 + d3), where k is a system constant that can be adjusted.
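A brief sketch of this second, approximate approach follows. It computes the centroid of the pre-rotation fingertips, approximates the rotation magnitude as k(d1 + d2 + d3), and infers the direction from the sign of a two-dimensional cross product, used here as a simple stand-in for the line-side criterion above; the function names and default constant are assumptions.

```python
import math

def approximate_knob_rotation(pre, post, k=1.0):
    """Approximate direction and magnitude of a virtual-knob rotation from
    three fingertip positions sampled before (pre) and after (post) the
    user's motion; pre and post are lists of three (x, y, z) tuples."""
    # Centroid of the three pre-rotation fingertip points (x-y only needed).
    cx = sum(p[0] for p in pre) / 3.0
    cy = sum(p[1] for p in pre) / 3.0

    # Magnitude: sum of fingertip displacements, scaled by system constant k.
    d = [math.dist(a, b) for a, b in zip(pre, post)]
    theta = k * sum(d)

    # Direction: sign of the z component of the cross product between the
    # centroid-to-fingertip vector and that fingertip's displacement, both
    # projected onto the x-y plane of the sensor array.
    (x1, y1, _), (x1b, y1b, _) = pre[0], post[0]
    cross_z = (x1 - cx) * (y1b - y1) - (y1 - cy) * (x1b - x1)
    direction = "counter-clockwise" if cross_z > 0 else "clockwise"
    return direction, theta
```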
The analysis described above is somewhat generalized, to enable remote tracking of the rotation of any three points. A more simplified approach may be used in Fig. 3E, where user 30 may use a fingertip to point to virtual indentation 310 in the image of circular knob 130B. The fingertip may then move clockwise or counter-clockwise about the rotation axis of knob 130B, with the result that system 100 causes the image of knob 130B to be rotated to track the user's perceived intended movement of the knob. At the same time, an actual controlled parameter on device 70 (or vehicle 20) is varied, proportionally to the user's movement of the knob image. As in the other embodiments, the relationship between user manipulation of a virtual control and variation in an actual parameter of an actual device may be linear or otherwise, including linear in some regions of control and intentionally nonlinear in other regions.
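For this simplified single-fingertip case, the tracking can be sketched as an incremental angle measured about the knob's axis in the display plane, then mapped (linearly, or intentionally nonlinearly in part of the range, as noted above) onto the actual device parameter. The function names, gain, and nonlinear break point are assumptions for illustration.

```python
import math

def track_indent_rotation(knob_center, prev_xy, curr_xy):
    """Return the incremental rotation (radians) implied by a fingertip that
    moves from prev_xy to curr_xy about the knob's rotation axis; inputs are
    (x, y) positions in the display plane."""
    a0 = math.atan2(prev_xy[1] - knob_center[1], prev_xy[0] - knob_center[0])
    a1 = math.atan2(curr_xy[1] - knob_center[1], curr_xy[0] - knob_center[0])
    delta = a1 - a0
    # Unwrap so a small physical motion never reads as a near-full turn.
    if delta > math.pi:
        delta -= 2.0 * math.pi
    elif delta < -math.pi:
        delta += 2.0 * math.pi
    return delta

def map_rotation_to_parameter(value, delta, gain=0.05, nonlinear=False):
    """Update an actual device parameter (normalized 0.0..1.0) from a
    rotation increment; the optional nonlinear branch illustrates a control
    law that is linear over most of the range but finer near the top."""
    step = gain * delta
    if nonlinear and value > 0.8:
        step *= 0.25
    return min(1.0, max(0.0, value + step))
```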
Software 280 may of course use alternative algorithms, executed by computer system 210, to determine angular rotation of virtual knobs or other images rendered by computing system 210 and projected via lens 190 onto windshield or other area 50. As noted, computing system 210 will then generate the appropriate commands, coupled via bus 330 to device(s) 70 and/or vehicle 20.
Figs. 3G and 3H depict use of the present invention as a virtual training tool in which a portion of the user's body is immersed in the virtual display. In this application, the virtual display 40' may be presented on a conventional monitor rather than in an HUD fashion. As such, system 100 can output video data and video drive data to a monitor, using techniques well known in the art. For ease of illustration, a simple task is shown. Suppose the user, whose hand is depicted as 302, is to be trained to pick up an object, whose virtual image is shown as 130H (for example a small test tube containing a highly dangerous substance), and to carefully tilt the object so that its contents pour out into a target region, e.g., a virtual beaker 130I. In Fig. 3G, the user's hand, which is detected and imaged by system 100, is depicted as 130G in the virtual display. For ease of illustration, virtual hand 130G is shown as a stick figure, but a more realistic image may be rendered by system 100. In Fig. 3H, the user's real hand 302 has rotated slightly counter-clockwise, and the virtual image 40' shows virtual object 130H and virtual hand 130G similarly rotated slightly counter-clockwise.
The sequence can be continued such that the user must "pour out" the virtual contents of object 130H into the target object 130I without spilling. System 100 can analyze movements of the actual hand 302 to determine whether such movements were sufficiently carefully executed. The virtual display could of course depict the pouring-out of contents and, if the pouring were not sufficiently accurate, the spilling of contents. Object 130H and/or its contents (not shown) might, for example, be highly radioactive, and the user's hand motions might be practice for operating a robotic control that will grasp and tilt an actual object whose virtual representation is shown as 130H. However, use of the present invention permits practice sessions without any risk of danger to the user. If the user "spills" the dangerous contents or "drops" the held object, there is no harm, unlike a practice session with an actual object and actual contents.
Fig. 3I depicts the present invention used in another training environment. In this example, user 302 perhaps actually holds a tool 400 to be used in conjunction with a second tool 410. In reality the user is being trained to manipulate a tool 400' to be used in conjunction with a second tool 410', where tool 400' is manipulated by a robotic system 420, 430 (analogous to device 70) under control of system 100, responsive to user manipulation of tool 400. Robotically manipulated tools 400', 410' are shown behind a pane 440, which may be a protective pane of glass, or which may be opaque, to indicate that tools 400', 410' cannot be directly viewed by the user. For example, tools 400', 410' may be at the bottom of the ocean, or on the moon, in which case communication bus 330 would include radio command signals. If the user can indeed view tools 400', 410' through pane 440, there would be no need for a computer-generated display. However, if tools 400', 410' cannot be directly viewed, then a computer-generated display 40' could be presented. In this display, 130G could now represent the robotic arm 420 holding actual tool 400'. It is understood that as the user 302 manipulates tool 400 (although manipulation could occur without tool 400), system 100 via bus 330 causes tool 400' to be manipulated robotically. Feedback to the user can occur visually, either directly through pane 440 or via display 40', or in terms of instrumentation that in substantially real-time tells the user what is occurring with tools 400', 410'.
Thus, a variety of devices 70 may be controlled with system 100. Fig. 5A depicts a HUD virtual display created and projected by system 100 upon region 40 of windshield 50, in which system 70 is a global positioning satellite (GPS) system, or perhaps a computer storing zoomable maps. In Fig. 5A, image 130E is shown as a roadmap having a certain resolution. A virtual scroll-type control 130F is presented to the right of image 130E, and a virtual image zoom control 130A is also shown. Scroll control 130F is such that a user's finger can touch a portion of the virtual knob, e.g., perhaps a north-east portion, to cause projected image 130E to be scrolled in that compass direction. Zoom control 130A, shown here as a slider bar, permits the user to zoom the image in or out using a finger to "move" virtual slider bar 300. If desired, zoom control 130A could of course be implemented as a rotary knob or other device capable of user manipulation.
In Fig. 5B, the user has already touched and "moved" virtual slider bar 300 to the right, which, as shown by the indicia portion of image 130A, has zoomed in image 130E. Thus the image, now denoted 130E, has greater resolution and provides more detail. As system 100 detects the user's finger (or pointer or other object) near bar 300, the detected three-dimensional (x,y,z) data permits system 100 to know what level of zoom is desired. System 100 then outputs on bus 330 the necessary commands to cause GPS or computer system 70 to provide a higher resolution map image. Because system 100 can respond substantially in real-time, there is little perceived lag between the time the user's finger "slides" bar 300 left or right and the time map image 130E is zoomed in or out. This feedback enables the user to rapidly cause the desired display to appear on windshield 50, without requiring the user to divert attention from the task of driving vehicle 20, including looking ahead, right through the images displayed in region 40, to the road and traffic ahead.
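As an illustration of the zoom mapping only (the exponential form and the limits are assumptions; the disclosure requires merely that the relationship may be linear or otherwise), the normalized slider position can be converted to a zoom factor as follows:

```python
def zoom_from_slider(value, min_zoom=1.0, max_zoom=16.0):
    """Convert the normalized zoom-slider position (0.0..1.0) into a map
    zoom factor; an exponential mapping makes equal finger movements feel
    like equal zoom steps."""
    return min_zoom * (max_zoom / min_zoom) ** value

# Sliding the virtual bar from 0.25 to 0.75 quadruples the zoom factor, so
# system 100 would request a correspondingly higher-resolution map image
# from the GPS or map computer over bus lead 330.
print(zoom_from_slider(0.25), zoom_from_slider(0.75))   # 2.0  8.0
```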
Modifications and variations may be made to the disclosed embodiments without departing from the subject and spirit of the invention as defined by the following claims.

Claims

WHAT IS CLAIMED IS:
1. A method of presenting a virtual simulation to control an actual device, the method comprising the following steps:
(a) generating a display including an image of a control to change a parameter of said device;
(b) sensing (x,y,z) axes proximity of a user to said image on said display; (c) determining non-haptically from data sensed at step (b), user intended movement of said image of said control; and
(d) outputting a signal coupleable to said actual device to control said parameter as a function of sensed user intended movement of said image of said control.
2. The method of claim 1, wherein at step (a), said display is a heads-up-display.
3. The method of claim 1, wherein step (b) includes sensing using time-of-flight data.
4. The method of claim 1, wherein step (c) includes modifying said display to represent movement of said control created by said user.
5. The method of claim 1, wherein step (a) includes generating an image of a slider control.
6. The method of claim 1, wherein step (a) includes generating an image of a rotary control.
7. The method of claim 1, wherein step (a) includes generating an image including a menu of icons selectable by said user.
8. The method of claim 1, wherein said actual device is selected from a group consisting of (i) an electronic entertainment device, (ii) radio, (iii) a cellular telephone, (iv) a heater system, (v) a cooling system, (vi) a motorized system.
9. The method of claim 1, wherein at step (a) said display is generated only after detection of a user in close proximity to an area whereon said display is presentable.
10. The method of claim 9, further including displaying a user-alert warning responsive to a parameter of said device, independently of user proximity to said area.
11. The method of claim 1, wherein said display is a heads-up-display in a motor vehicle operable by a user, and said device is selected from a group consisting of (i) said motor vehicle, and (ii) an electronic accessory disposed in said motor vehicle.
12. The method of claim 11, wherein said device is a global position satellite system, said display includes a map, and said control is user-operable to change displayed appearance of said map.
13. A method of presenting a virtual simulation, the method comprising the following steps: (a) generating a display including a virtual image of an object;
(b) non-haptically sensing in three-dimensions proximity of at least a portion of a user's body to said display;
(c) modifying said display substantially in real-time to include a representation of said user's body; and (d) modifying said display to depict substantially in real-time said representation of said user's body manipulating said object.
14. The method of claim 13, wherein said manipulating is part of a regime to train said user to manipulate a real object represented by said virtual image.
15. A virtual simulation system, comprising: an imaging sub-system to generate a display including an image; a detection sub-system to non-haptically detect in three-dimensions proximity of a portion of an object to a region of said display; and said imaging sub-system modifying said image in response to detected proximity of said portion of said object.
16. The system of claim 15, wherein said image is a representation of a control, said object is a portion of a user's hand, and said proximity includes user manipulation of said image; further including: a system outputting a signal coupleable to a real device having a parameter variable in response to said user manipulation of said image.
17. The system of claim 15, wherein: said system is a heads-up-system; said display is presentable on a windshield of a motor vehicle; and said image includes an image of a control.
18. The system of claim 17, wherein: said system includes a circuit outputting a command signal responsive to said detection of said proximity, said command signal coupleable to a device selected from a group consisting of (a) an electrically-controllable component of said motor vehicle, (b) an electrically-controllable electronic device disposed in said motor vehicle.
19. The system of claim 18, wherein said device is a global positioning satellite (GPS) system, wherein said image is a map generated by said GPS system, and said image is a control to change appearance of said image of said map.
20. The system of claim 17, wherein said detection sub-system operates independently of ambient light.
PCT/US2002/003433 2001-02-05 2002-02-05 Method and system to present immersion virtual simulations using three-dimensional measurement WO2002063601A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/777,778 US20020140633A1 (en) 2000-02-03 2001-02-05 Method and system to present immersion virtual simulations using three-dimensional measurement
US09/777,778 2001-02-05

Publications (1)

Publication Number Publication Date
WO2002063601A1 true WO2002063601A1 (en) 2002-08-15

Family

ID=25111243

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/003433 WO2002063601A1 (en) 2001-02-05 2002-02-05 Method and system to present immersion virtual simulations using three-dimensional measurement

Country Status (2)

Country Link
US (1) US20020140633A1 (en)
WO (1) WO2002063601A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1496426A2 (en) * 2003-07-11 2005-01-12 Siemens Aktiengesellschaft Method for the inputting of parameters of a parameter field
EP1501007A3 (en) * 2003-07-23 2006-05-31 Bose Corporation System and method for accepting a user control input
EP1798588A1 (en) * 2005-12-13 2007-06-20 GM Global Technology Operations, Inc. Operating system for functions in a motor vehicle
US7415352B2 (en) 2005-05-20 2008-08-19 Bose Corporation Displaying vehicle information
US7768876B2 (en) 2004-12-21 2010-08-03 Elliptic Laboratories As Channel impulse response estimation
EP1477351B1 (en) * 2003-05-15 2014-01-08 Webasto AG Vehicle roof equipped with an operating device for electrical vehicle components
FR3005751A1 (en) * 2013-05-17 2014-11-21 Thales Sa HIGH HEAD VISOR WITH TOUCH SURFACE
EP2821884A1 (en) * 2013-07-01 2015-01-07 Airbus Operations GmbH Cabin management system having a three-dimensional operating panel
CN109842808A (en) * 2017-11-29 2019-06-04 深圳光峰科技股份有限公司 Control method, projection arrangement and the storage device of projection arrangement

Families Citing this family (152)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7831358B2 (en) * 1992-05-05 2010-11-09 Automotive Technologies International, Inc. Arrangement and method for obtaining information using phase difference of modulated illumination
US7614008B2 (en) 2004-07-30 2009-11-03 Apple Inc. Operation of a computer with touch screen interface
US9239673B2 (en) 1998-01-26 2016-01-19 Apple Inc. Gesturing with a multipoint sensing device
US9292111B2 (en) 1998-01-26 2016-03-22 Apple Inc. Gesturing with a multipoint sensing device
US7834855B2 (en) 2004-08-25 2010-11-16 Apple Inc. Wide touchpad on a portable computer
US8479122B2 (en) 2004-07-30 2013-07-02 Apple Inc. Gestures for touch sensitive input devices
US7133812B1 (en) 1999-08-30 2006-11-07 Ford Global Technologies, Llc Method of parametic design of an instrument panel support structure
US6760693B1 (en) 2000-03-29 2004-07-06 Ford Global Technologies, Llc Method of integrating computer visualization for the design of a vehicle
US7158923B1 (en) 2000-03-29 2007-01-02 Ford Global Technologies, Llc Method of integrating product information management with vehicle design
US7761269B1 (en) * 2000-04-14 2010-07-20 Ford Global Technologies, Llc System and method of subjective evaluation of a vehicle design within a virtual environment using a virtual reality
LU90675B1 (en) * 2000-11-10 2002-05-13 Iee Sarl Device control method
US6917907B2 (en) 2000-11-29 2005-07-12 Visteon Global Technologies, Inc. Method of power steering hose assembly design and analysis
GB2370818B (en) * 2001-01-03 2004-01-14 Seos Displays Ltd A simulator
US6943774B2 (en) * 2001-04-02 2005-09-13 Matsushita Electric Industrial Co., Ltd. Portable communication terminal, information display device, control input device and control input method
US6968073B1 (en) 2001-04-24 2005-11-22 Automotive Systems Laboratory, Inc. Occupant detection system
JP4613449B2 (en) * 2001-06-01 2011-01-19 セイコーエプソン株式会社 Output service providing system, virtual object management terminal, moving object, virtual object management terminal program, moving object program, and output service providing method
US8035612B2 (en) 2002-05-28 2011-10-11 Intellectual Ventures Holding 67 Llc Self-contained interactive video display system
US6801187B2 (en) 2001-06-22 2004-10-05 Ford Global Technologies, Llc System and method of interactive evaluation and manipulation of a geometric model
US7091964B2 (en) 2001-11-30 2006-08-15 Palm, Inc. Electronic device with bezel feature for receiving input
US7069202B2 (en) 2002-01-11 2006-06-27 Ford Global Technologies, Llc System and method for virtual interactive design and evaluation and manipulation of vehicle mechanisms
US6641054B2 (en) * 2002-01-23 2003-11-04 Randall L. Morey Projection display thermostat
US7174280B2 (en) 2002-04-23 2007-02-06 Ford Global Technologies, Llc System and method for replacing parametrically described surface features with independent surface patches
US7710391B2 (en) 2002-05-28 2010-05-04 Matthew Bell Processing an image utilizing a spatially varying pattern
US20050122308A1 (en) * 2002-05-28 2005-06-09 Matthew Bell Self-contained interactive video display system
JP4125931B2 (en) * 2002-08-26 2008-07-30 株式会社ワコー Rotation operation amount input device and operation device using the same
US20070135943A1 (en) * 2002-09-18 2007-06-14 Seiko Epson Corporation Output service providing system that updates information based on positional information, terminal and method of providing output service
US7693702B1 (en) * 2002-11-01 2010-04-06 Lockheed Martin Corporation Visualizing space systems modeling using augmented reality
WO2004055776A1 (en) * 2002-12-13 2004-07-01 Reactrix Systems Interactive directed light/sound system
US20040200505A1 (en) * 2003-03-14 2004-10-14 Taylor Charles E. Robot vac with retractable power cord
US20050010331A1 (en) * 2003-03-14 2005-01-13 Taylor Charles E. Robot vacuum with floor type modes
US20040211444A1 (en) * 2003-03-14 2004-10-28 Taylor Charles E. Robot vacuum with particulate detector
US7801645B2 (en) 2003-03-14 2010-09-21 Sharper Image Acquisition Llc Robotic vacuum cleaner with edge and object detection system
US7805220B2 (en) 2003-03-14 2010-09-28 Sharper Image Acquisition Llc Robot vacuum with internal mapping system
JP2004334590A (en) * 2003-05-08 2004-11-25 Denso Corp Operation input device
US7046151B2 (en) * 2003-07-14 2006-05-16 Michael J. Dundon Interactive body suit and interactive limb covers
US7382356B2 (en) 2003-09-15 2008-06-03 Sharper Image Corp. Input unit for games and musical keyboards
EP1667874B1 (en) * 2003-10-03 2008-08-27 Automotive Systems Laboratory Inc. Occupant detection system
CN1902930B (en) 2003-10-24 2010-12-15 瑞克楚斯系统公司 Method and system for managing an interactive video display system
US8448083B1 (en) * 2004-04-16 2013-05-21 Apple Inc. Gesture control of multimedia editing applications
US20080018671A1 (en) * 2004-06-07 2008-01-24 Sharp Kabushiki Kaisha Information Display Control Device, Navigation Device, Controlling Method Of Information Display Control Device, Control Program Of Information Display Control Device, And Computer-Readable Storage Medium
US8466893B2 (en) * 2004-06-17 2013-06-18 Adrea, LLC Use of a two finger input on touch screens
US7653883B2 (en) * 2004-07-30 2010-01-26 Apple Inc. Proximity detector in handheld device
US8381135B2 (en) 2004-07-30 2013-02-19 Apple Inc. Proximity detector in handheld device
DE102005010843B4 (en) * 2005-03-07 2019-09-19 Volkswagen Ag Head-up display of a motor vehicle
US9128519B1 (en) 2005-04-15 2015-09-08 Intellectual Ventures Holding 67 Llc Method and system for state-based control of objects
US8081822B1 (en) 2005-05-31 2011-12-20 Intellectual Ventures Holding 67 Llc System and method for sensing a feature of an object in an interactive video display
US8098277B1 (en) 2005-12-02 2012-01-17 Intellectual Ventures Holding 67 Llc Systems and methods for communication between a reactive video system and a mobile communication device
US8279168B2 (en) * 2005-12-09 2012-10-02 Edge 3 Technologies Llc Three-dimensional virtual-touch human-machine interface system and method therefor
US20070173355A1 (en) * 2006-01-13 2007-07-26 Klein William M Wireless sensor scoring with automatic sensor synchronization
US20080015061A1 (en) * 2006-07-11 2008-01-17 Klein William M Performance monitoring in a shooting sport using sensor synchronization
US8970501B2 (en) 2007-01-03 2015-03-03 Apple Inc. Proximity and multi-touch sensor detection and demodulation
US20080252596A1 (en) * 2007-04-10 2008-10-16 Matthew Bell Display Using a Three-Dimensional vision System
US10198958B2 (en) * 2007-05-04 2019-02-05 Freer Logic Method and apparatus for training a team by employing brainwave monitoring and synchronized attention levels of team trainees
WO2009035705A1 (en) 2007-09-14 2009-03-19 Reactrix Systems, Inc. Processing of gesture-based user interactions
US8159682B2 (en) 2007-11-12 2012-04-17 Intellectual Ventures Holding 67 Llc Lens system
US7998004B2 (en) * 2008-01-24 2011-08-16 Klein William M Real-time wireless sensor scoring
FR2928468B1 (en) * 2008-03-04 2011-04-22 Gwenole Bocquet DEVICE FOR NON-TOUCH INTERACTION WITH AN IMAGE NOT BASED ON ANY SUPPORT
US8259163B2 (en) 2008-03-07 2012-09-04 Intellectual Ventures Holding 67 Llc Display with built in 3D sensing
JP2009245392A (en) * 2008-03-31 2009-10-22 Brother Ind Ltd Head mount display and head mount display system
ES2364875T3 (en) * 2008-05-20 2011-09-15 Fiat Group Automobiles S.P.A. ELECTRONIC SYSTEM TO INDUCE OCCUPANTS OF A VEHICLE TO BUCK THE SEAT BELTS.
US8595218B2 (en) 2008-06-12 2013-11-26 Intellectual Ventures Holding 67 Llc Interactive display management systems and methods
US8957835B2 (en) 2008-09-30 2015-02-17 Apple Inc. Head-mounted display apparatus for retaining a portable electronic device with display
EP2356809A2 (en) * 2008-12-10 2011-08-17 Siemens Aktiengesellschaft Method for transmitting an image from a first control unit to a second control unit and output unit
EP2389664A1 (en) * 2009-01-21 2011-11-30 Georgia Tech Research Corporation Character animation control interface using motion capture
US9417700B2 (en) 2009-05-21 2016-08-16 Edge3 Technologies Gesture recognition systems and related methods
US8890650B2 (en) * 2009-05-29 2014-11-18 Thong T. Nguyen Fluid human-machine interface
JP5218354B2 (en) * 2009-09-16 2013-06-26 ブラザー工業株式会社 Head mounted display
KR101651568B1 (en) 2009-10-27 2016-09-06 삼성전자주식회사 Apparatus and method for three-dimensional space interface
DE102009046376A1 (en) * 2009-11-04 2011-05-05 Robert Bosch Gmbh Driver assistance system for automobile, has input device including manually operated control element that is arranged at steering wheel and/or in area of instrument panel, where area lies in direct vicinity of wheel
US20110175918A1 (en) * 2010-01-21 2011-07-21 Cheng-Yun Karen Liu Character animation control interface using motion capure
NL2004333C2 (en) * 2010-03-03 2011-09-06 Ruben Meijer Method and apparatus for touchlessly inputting information into a computer system.
US8396252B2 (en) 2010-05-20 2013-03-12 Edge 3 Technologies Systems and related methods for three dimensional gesture recognition in vehicles
JP5012957B2 (en) * 2010-05-31 2012-08-29 株式会社デンソー Vehicle input system
US8508347B2 (en) * 2010-06-24 2013-08-13 Nokia Corporation Apparatus and method for proximity based input
KR20120000663A (en) * 2010-06-28 2012-01-04 주식회사 팬택 Apparatus for processing 3d object
US8582866B2 (en) 2011-02-10 2013-11-12 Edge 3 Technologies, Inc. Method and apparatus for disparity computation in stereo images
US8666144B2 (en) 2010-09-02 2014-03-04 Edge 3 Technologies, Inc. Method and apparatus for determining disparity of texture
WO2012030872A1 (en) 2010-09-02 2012-03-08 Edge3 Technologies Inc. Method and apparatus for confusion learning
US8655093B2 (en) 2010-09-02 2014-02-18 Edge 3 Technologies, Inc. Method and apparatus for performing segmentation of an image
US8633979B2 (en) 2010-12-29 2014-01-21 GM Global Technology Operations LLC Augmented road scene illustrator system on full windshield head-up display
FR2971066B1 (en) 2011-01-31 2013-08-23 Nanotec Solution THREE-DIMENSIONAL MAN-MACHINE INTERFACE.
US8970589B2 (en) 2011-02-10 2015-03-03 Edge 3 Technologies, Inc. Near-touch interaction with a stereo camera grid structured tessellations
US8723789B1 (en) 2011-02-11 2014-05-13 Imimtek, Inc. Two-dimensional method and system enabling three-dimensional user interaction with a device
US9857868B2 (en) 2011-03-19 2018-01-02 The Board Of Trustees Of The Leland Stanford Junior University Method and system for ergonomic touch-free interface
US8840466B2 (en) 2011-04-25 2014-09-23 Aquifi, Inc. Method and system to create three-dimensional mapping in a two-dimensional game
US8686943B1 (en) 2011-05-13 2014-04-01 Imimtek, Inc. Two-dimensional method and system enabling three-dimensional user interaction with a device
JP5694883B2 (en) * 2011-08-23 2015-04-01 京セラ株式会社 Display device
US20130106916A1 (en) * 2011-10-27 2013-05-02 Qing Kevin Guo Drag and drop human authentication
US9672609B1 (en) 2011-11-11 2017-06-06 Edge 3 Technologies, Inc. Method and apparatus for improved depth-map estimation
US8887043B1 (en) * 2012-01-17 2014-11-11 Rawles Llc Providing user feedback in projection environments
US8854433B1 (en) 2012-02-03 2014-10-07 Aquifi, Inc. Method and system enabling natural user interface gestures with an electronic system
US9459781B2 (en) 2012-05-09 2016-10-04 Apple Inc. Context-specific user interfaces for displaying animated sequences
US9098739B2 (en) 2012-06-25 2015-08-04 Aquifi, Inc. Systems and methods for tracking human hands using parts based template matching
US9111135B2 (en) 2012-06-25 2015-08-18 Aquifi, Inc. Systems and methods for tracking human hands using parts based template matching using corresponding pixels in bounded regions of a sequence of frames that are a specified distance interval from a reference camera
US8836768B1 (en) 2012-09-04 2014-09-16 Aquifi, Inc. Method and system enabling natural user interface gestures with user wearable glasses
WO2014049787A1 (en) * 2012-09-27 2014-04-03 パイオニア株式会社 Display device, display method, program, and recording medium
US20140181759A1 (en) * 2012-12-20 2014-06-26 Hyundai Motor Company Control system and method using hand gesture for vehicle
US9092665B2 (en) 2013-01-30 2015-07-28 Aquifi, Inc Systems and methods for initializing motion tracking of human hands
US9129155B2 (en) 2013-01-30 2015-09-08 Aquifi, Inc. Systems and methods for initializing motion tracking of human hands using template matching within bounded regions determined using a depth map
FR3002052B1 (en) 2013-02-14 2016-12-09 Fogale Nanotech METHOD AND DEVICE FOR NAVIGATING A DISPLAY SCREEN AND APPARATUS COMPRISING SUCH A NAVIGATION
JP6102330B2 (en) * 2013-02-22 2017-03-29 船井電機株式会社 projector
US10721448B2 (en) 2013-03-15 2020-07-21 Edge 3 Technologies, Inc. Method and apparatus for adaptive exposure bracketing, segmentation and scene organization
US9298266B2 (en) 2013-04-02 2016-03-29 Aquifi, Inc. Systems and methods for implementing three-dimensional (3D) gesture based graphical user interfaces (GUI) that incorporate gesture reactive interface objects
US9372344B2 (en) * 2013-04-08 2016-06-21 TaiLai Ting Driving information display device
FR3008198B1 (en) * 2013-07-05 2016-12-02 Thales Sa VISUALIZATION DEVICE COMPRISING A SCREEN-CONTROLLED TRANSPARENCY SCREEN WITH A HIGH CONTRAST
US9798388B1 (en) 2013-07-31 2017-10-24 Aquifi, Inc. Vibrotactile system to augment 3D input systems
DE102013013166A1 (en) 2013-08-08 2015-02-12 Audi Ag Car with head-up display and associated gesture operation
CN110850705B (en) 2013-09-03 2021-06-29 苹果公司 Crown input for wearable electronic device
US11068128B2 (en) 2013-09-03 2021-07-20 Apple Inc. User interface object manipulations in a user interface
JP2015060296A (en) * 2013-09-17 2015-03-30 船井電機株式会社 Spatial coordinate specification device
WO2015047242A1 (en) * 2013-09-25 2015-04-02 Schneider Electric Buildings Llc Method and device for adjusting a set point
US20150116200A1 (en) * 2013-10-25 2015-04-30 Honda Motor Co., Ltd. System and method for gestural control of vehicle systems
US20150187143A1 (en) * 2013-12-26 2015-07-02 Shadi Mere Rendering a virtual representation of a hand
US9507417B2 (en) 2014-01-07 2016-11-29 Aquifi, Inc. Systems and methods for implementing head tracking based graphical user interfaces (GUI) that incorporate gesture reactive interface objects
US9619105B1 (en) 2014-01-30 2017-04-11 Aquifi, Inc. Systems and methods for gesture based interaction with viewpoint dependent user interfaces
WO2015200890A2 (en) * 2014-06-27 2015-12-30 Apple Inc. Reduced size user interface
US20160011667A1 (en) * 2014-07-08 2016-01-14 Mitsubishi Electric Research Laboratories, Inc. System and Method for Supporting Human Machine Interaction
JP3194297U (en) * 2014-08-15 2014-11-13 リープ モーション, インコーポレーテッドLeap Motion, Inc. Motion sensing control device for automobile and industrial use
WO2016036416A1 (en) 2014-09-02 2016-03-10 Apple Inc. Button functionality
WO2016036509A1 (en) 2014-09-02 2016-03-10 Apple Inc. Electronic mail user interface
WO2016036541A2 (en) 2014-09-02 2016-03-10 Apple Inc. Phone user interface
DE102014224599A1 (en) * 2014-12-02 2016-06-02 Robert Bosch Gmbh Method for operating an input device, input device
US9554207B2 (en) 2015-04-30 2017-01-24 Shure Acquisition Holdings, Inc. Offset cartridge microphones
US9565493B2 (en) 2015-04-30 2017-02-07 Shure Acquisition Holdings, Inc. Array microphone system and method of assembling the same
EP3153950A1 (en) * 2015-10-08 2017-04-12 Funai Electric Co., Ltd. Input device
JP6726674B2 (en) * 2015-10-15 2020-07-22 マクセル株式会社 Information display device
US11010972B2 (en) 2015-12-11 2021-05-18 Google Llc Context sensitive user interface activation in an augmented and/or virtual reality environment
CN105894889B (en) * 2016-05-09 2018-06-12 合肥工业大学 A kind of multidimensional adjustable automobile handling maneuver simulation and the what comes into a driver's control method of test system
DK201770423A1 (en) 2016-06-11 2018-01-15 Apple Inc Activity and workout updates
US10367948B2 (en) 2017-01-13 2019-07-30 Shure Acquisition Holdings, Inc. Post-mixing acoustic echo cancellation systems and methods
US10585525B2 (en) 2018-02-12 2020-03-10 International Business Machines Corporation Adaptive notification modifications for touchscreen interfaces
US11875012B2 (en) 2018-05-25 2024-01-16 Ultrahaptics IP Two Limited Throwable interface for augmented reality and virtual reality environments
WO2019231632A1 (en) 2018-06-01 2019-12-05 Shure Acquisition Holdings, Inc. Pattern-forming microphone array
US11297423B2 (en) 2018-06-15 2022-04-05 Shure Acquisition Holdings, Inc. Endfire linear array microphone
US10402081B1 (en) 2018-08-28 2019-09-03 Fmr Llc Thumb scroll user interface element for augmented reality or virtual reality environments
US11435830B2 (en) 2018-09-11 2022-09-06 Apple Inc. Content-based tactile outputs
US11310596B2 (en) 2018-09-20 2022-04-19 Shure Acquisition Holdings, Inc. Adjustable lobe shape for array microphones
US10884614B1 (en) * 2018-11-30 2021-01-05 Zoox, Inc. Actuation interface
US10895918B2 (en) * 2019-03-14 2021-01-19 Igt Gesture recognition system and method
US11303981B2 (en) 2019-03-21 2022-04-12 Shure Acquisition Holdings, Inc. Housings and associated design features for ceiling array microphones
JP2022526761A (en) 2019-03-21 2022-05-26 シュアー アクイジッション ホールディングス インコーポレイテッド Beam forming with blocking function Automatic focusing, intra-regional focusing, and automatic placement of microphone lobes
US11558693B2 (en) 2019-03-21 2023-01-17 Shure Acquisition Holdings, Inc. Auto focus, auto focus within regions, and auto placement of beamformed microphone lobes with inhibition and voice activity detection functionality
US11445294B2 (en) 2019-05-23 2022-09-13 Shure Acquisition Holdings, Inc. Steerable speaker array, system, and method for the same
WO2020243471A1 (en) 2019-05-31 2020-12-03 Shure Acquisition Holdings, Inc. Low latency automixer integrated with voice and noise activity detection
US11297426B2 (en) 2019-08-23 2022-04-05 Shure Acquisition Holdings, Inc. One-dimensional array microphone with improved directivity
US20210191514A1 (en) * 2019-12-18 2021-06-24 Catmasters LLC Virtual Reality to Reality System
US11552611B2 (en) 2020-02-07 2023-01-10 Shure Acquisition Holdings, Inc. System and method for automatic adjustment of reference gain
US11706562B2 (en) 2020-05-29 2023-07-18 Shure Acquisition Holdings, Inc. Transducer steering and configuration systems and methods using a local positioning system
GB2597492B (en) * 2020-07-23 2022-08-03 Nissan Motor Mfg Uk Ltd Gesture recognition system
CN116918351A (en) 2021-01-28 2023-10-20 舒尔获得控股公司 Hybrid Audio Beamforming System
US11315335B1 (en) 2021-03-30 2022-04-26 Honda Motor Co., Ltd. Mixed-reality interaction with touch device
US11893212B2 (en) 2021-06-06 2024-02-06 Apple Inc. User interfaces for managing application widgets

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4818048A (en) * 1987-01-06 1989-04-04 Hughes Aircraft Company Holographic head-up control panel
US5486840A (en) * 1994-03-21 1996-01-23 Delco Electronics Corporation Head up display with incident light filter
US5721679A (en) * 1995-12-18 1998-02-24 Ag-Chem Equipment Co., Inc. Heads-up display apparatus for computer-controlled agricultural product application equipment
US5831584A (en) * 1995-07-28 1998-11-03 Chrysler Corporation Hand calibration system and virtual display selection for vehicle simulator
US5990865A (en) * 1997-01-06 1999-11-23 Gard; Matthew Davis Computer interface device
US6115128A (en) * 1997-09-17 2000-09-05 The Regents Of The Univerity Of California Multi-dimensional position sensor using range detectors

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4818048A (en) * 1987-01-06 1989-04-04 Hughes Aircraft Company Holographic head-up control panel
US5486840A (en) * 1994-03-21 1996-01-23 Delco Electronics Corporation Head up display with incident light filter
US5831584A (en) * 1995-07-28 1998-11-03 Chrysler Corporation Hand calibration system and virtual display selection for vehicle simulator
US5721679A (en) * 1995-12-18 1998-02-24 Ag-Chem Equipment Co., Inc. Heads-up display apparatus for computer-controlled agricultural product application equipment
US5990865A (en) * 1997-01-06 1999-11-23 Gard; Matthew Davis Computer interface device
US6115128A (en) * 1997-09-17 2000-09-05 The Regents Of The Univerity Of California Multi-dimensional position sensor using range detectors

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1477351B1 (en) * 2003-05-15 2014-01-08 Webasto AG Vehicle roof equipped with an operating device for electrical vehicle components
EP1496426A2 (en) * 2003-07-11 2005-01-12 Siemens Aktiengesellschaft Method for the inputting of parameters of a parameter field
EP1496426A3 (en) * 2003-07-11 2006-09-20 Siemens Aktiengesellschaft Method for the inputting of parameters of a parameter field
EP1501007A3 (en) * 2003-07-23 2006-05-31 Bose Corporation System and method for accepting a user control input
US8072842B2 (en) 2004-12-21 2011-12-06 Elliptic Laboratories As Channel impulse response estimation
US7768876B2 (en) 2004-12-21 2010-08-03 Elliptic Laboratories As Channel impulse response estimation
US8305843B2 (en) 2004-12-21 2012-11-06 Elliptic Laboratories As Channel impulse response estimation
US8531916B2 (en) 2004-12-21 2013-09-10 Elliptic Laboratories As Channel impulse response estimation
US7415352B2 (en) 2005-05-20 2008-08-19 Bose Corporation Displaying vehicle information
EP1798588A1 (en) * 2005-12-13 2007-06-20 GM Global Technology Operations, Inc. Operating system for functions in a motor vehicle
FR3005751A1 (en) * 2013-05-17 2014-11-21 Thales Sa HIGH HEAD VISOR WITH TOUCH SURFACE
EP2821884A1 (en) * 2013-07-01 2015-01-07 Airbus Operations GmbH Cabin management system having a three-dimensional operating panel
CN109842808A (en) * 2017-11-29 2019-06-04 深圳光峰科技股份有限公司 Control method, projection arrangement and the storage device of projection arrangement
WO2019104830A1 (en) * 2017-11-29 2019-06-06 深圳光峰科技股份有限公司 Projection device and control method therefor, and storage device

Also Published As

Publication number Publication date
US20020140633A1 (en) 2002-10-03

Similar Documents

Publication Publication Date Title
US20020140633A1 (en) Method and system to present immersion virtual simulations using three-dimensional measurement
US7920071B2 (en) Augmented reality-based system and method providing status and control of unmanned vehicles
US9152248B1 (en) Method and system for making a selection in 3D virtual environment
JP6116064B2 (en) Gesture reference control system for vehicle interface
CN110647237A (en) Gesture-based content sharing in an artificial reality environment
JP2022535316A (en) Artificial reality system with sliding menu
EP3283938B1 (en) Gesture interface
CN108027657A (en) Context sensitive user interfaces activation in enhancing and/or reality environment
EP2558924B1 (en) Apparatus, method and computer program for user input using a camera
CN113050802A (en) Method, system and device for navigating in a virtual reality environment
WO2001088679A2 (en) Browser system and method of using it
CN116719415A (en) Apparatus, method, and graphical user interface for providing a computer-generated experience
EP3118722B1 (en) Mediated reality
US20170371523A1 (en) Method for displaying user interface of head-mounted display device
WO2017021902A1 (en) System and method for gesture based measurement of virtual reality space
CN112068757B (en) Target selection method and system for virtual reality
US9310851B2 (en) Three-dimensional (3D) human-computer interaction system using computer mouse as a 3D pointing device and an operation method thereof
US20070277112A1 (en) Three-Dimensional User Interface For Controlling A Virtual Reality Graphics System By Function Selection
WO2003003185A1 (en) System for establishing a user interface
JP4678428B2 (en) Virtual space position pointing device
US20070200847A1 (en) Method And Device For Controlling A Virtual Reality Graphic System Using Interactive Techniques
CN117043722A (en) Apparatus, method and graphical user interface for map
Knödel et al. Navidget for immersive virtual environments
JP4186742B2 (en) Virtual space position pointing device
Schwald et al. Controlling virtual worlds using interaction spheres

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP