WO2002048642A2 - Method for enhancing performance in a system utilizing an array of sensors that sense at least two-dimensions - Google Patents

Method for enhancing performance in a system utilizing an array of sensors that sense at least two-dimensions

Info

Publication number
WO2002048642A2
Authority
WO
WIPO (PCT)
Prior art keywords
virtual
coordinates
input device
user
distortion
Prior art date
Application number
PCT/US2001/045420
Other languages
French (fr)
Other versions
WO2002048642A3 (en)
Inventor
Cheng-Feng Sze
Original Assignee
Canesta, Inc.
Priority date
Filing date
Publication date
Application filed by Canesta, Inc.
Priority to AU2002243265A1
Publication of WO2002048642A2
Publication of WO2002048642A3

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
    • G06F3/0426 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected, tracking fingers with respect to a virtual keyboard projected or printed on the surface
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus

Abstract

In a three-dimensional data acquisition system (10), coordinate transformation and geometric error correction are avoided by representing data in a sensor array (80) coordinate system (i, j, k) rather than a conventional (x, y, z) coordinate system. A preferably point-sized sub-region (120) is defined for each potential region of interest for a virtual input device (100) subject to interaction with a user-controlled object (110). The (i, j, k) coordinate system used relates to raw data, and correction for several types of geometric error and optical lens error are avoided by determining interaction with such raw coordinate data. As a result, substantial processing overhead may be avoided.

Description

METHOD FOR ENHANCING PERFORMANCE IN A SYSTEM UTILIZING AN ARRAY OF SENSORS THAT SENSE AT LEAST TWO-DIMENSIONS
RELATIONSHIP TO PENDING APPLICATIONS Priority is claimed from co-pending U.S. provisional patent application serial no. 60/252,166 filed 19 November 2000, entitled "A Simple Technique for Reducing Computation Cost and Overcoming Geometric Error & Lens Distortion in 3-D Sensor Applications".
FIELD OF THE INVENTION The invention relates generally to systems that use an array of sensors to detect distances in at least two dimensions, and more specifically to enhancing performance of such systems by reducing computational overhead and by overcoming geometric error, elliptical error, and lens distortion.
BACKGROUND OF THE INVENTION Fig. 1 depicts a generic three-dimensional sensing system 10 that includes a pulsed light source 20 some of whose emissions 30 strike a target object 40 and are reflected back as optical energy 50. Some of the reflected energy 50 passes through a lens 60 and is collected by at least some three-dimensional sensors 70i,j in a sensor array 80, where i,j represent indices. An electronics system 90 coordinates operation of system 10 and carries out signal processing of sensor-received data. An exemplary such system is described in U.S. patent application serial no. 09/401,059 "CMOS-Compatible Three-dimensional Image Sensor IC", now U.S. patent no. (2001).
Within array 80, each imaging sensor 70i,j (and its associated electronics) calculates total time of flight (TOF) from when a light pulse left source 20 to when energy reflected from target object 40 is detected by array 80. Surface regions of target 40 are typically identified in (x,y,z) coordinates. Different (x,y,z) regions of target object 40 will be imaged by different ones of the imaging sensors 70i,j in array 80. Data representing TOF and/or brightness of the returned optical energy is collected by each sensor element (i,j) in the array, and may be referred to as the data at sensor pixel detector (i,j). Typically, for each pulse of optical energy emitted by source 20, a frame of three-dimensional image data may be collected by system 10.
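By way of illustration only (this sketch is not part of the patent disclosure), the following Python fragment shows how a round-trip TOF value at a pixel maps to a distance; the constant and function names are assumptions for the example.

```python
# A minimal sketch: convert a pixel's round-trip time of flight into distance.
C = 299_792_458.0  # speed of light, m/s

def tof_to_distance(round_trip_seconds: float) -> float:
    """Convert a round-trip TOF measurement into a one-way distance to the imaged surface."""
    return C * round_trip_seconds / 2.0

# Example: a round trip of about 6.67 ns corresponds to a surface roughly 1 m away.
print(tof_to_distance(6.67e-9))  # ~1.0 (meters)
```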
Fig. 2 depicts one potential application for system 10, in which system 10 attempts to detect the spatial location of interaction between a virtual input device 100 (here shown as a virtual keyboard) and a user-controlled object 110 (here shown as a user's hand). Virtual input device 100 may be simply an image of an input device, such as a keyboard. As the user 110 "types" on the image, system 10 attempts to discern in the (x,y,z) coordinate system which keys on an actual keyboard would have been typed upon by the user.
Typically an image representing the surface of virtual input device 100 will have been stored in (x,y,z) coordinates in memory within system 10. For example in Fig. 2, the user's left forefinger is shown contacting (or typing upon) the region of the virtual input device where the "ALT" character would be located on an actual keyboard. In essence, regions of contact or at least near contact between user-controlled object 110 and the virtual input device 100 are determined using TOF information. Pixel detector information from sensor array 80 would then be translated to (x,y,z) coordinates, typically on a per-frame of data acquired basis. After then determining what region of device 100 was contacted, the resultant data (e.g., here the key scancode for the ALT key) would be output, if desired, as DATA to an accessory device, perhaps a small computer. An example of such an application as shown in Fig. 2 may be found in co-pending U.S. patent application serial no. 09/502,499 entitled "CMOS-Compatible Three-dimensional Image Sensor IC", assigned to assignee herein.
Unfortunately several error mechanisms are at work in the simplified system of Fig. 2. For example, geometric error or distortion is present in the raw data acquired by the sensor array. Referring to Figs. 3A and 3B, geometric or distortion error arises from use of distance measurement D at pixel (i,j) as the z-value at pixel (i,j). It is understood that the z-value is distance along the z-axis from the target object 40 or 110 to the optical plane of the imaging sensor array 80. It is known in the art to try to compensate for geometric error by transforming the raw data into (x,y,z) coordinates using a coordinate transformation that is carried out on a per-pixel basis. Such coordinate transformation is a transformation from one coordinate system into another coordinate system.
Fig. 3C depicts another and potentially more serious geometric error, namely so-called elliptical error. Elliptical error results from approximating imaging regions of interest as lying on planes orthogonal to an optical axis of system 10, rather than lying on surfaces of ellipsoids whose foci are optical emitter 20 and optical sensor array 80. Elliptical error is depicted in Fig. 3C with reference to points A and B, which are equal light travel distances from optical energy emitter 20 shown in Fig. 1. Referring to Fig. 3C, optical source 20 and sensor array 80 are spaced-apart vertically (in the figure) a distance 2c. Further, points A and B each have the same light traveling distance 2d, e.g., r1 + r2 = 2d, and r1' + r2' = 2d. In mapping distance values to planes in a three-dimensional grid, points A and B, which have the same distance value from the optical plane, may in fact map to different planes on the three-dimensional grid. Thus while points A and B both lie on the same elliptical curve Ec, point A lies on plane Pa while point B lies on a parallel plane Pb, a bit farther from the optical plane than is plane Pa. Thus to properly determine (x,y,z) coordinate information for point A and point B requires a further correction.
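The following Python sketch illustrates the elliptical-error geometry under an assumed layout (emitter and sensor lying in the optical plane, separated by 2c): two points with identical total light travel 2d can lie at different depths from that plane. The names and numbers are illustrative only, not taken from the patent.

```python
import math

# Points with equal total light travel 2d lie on an ellipse whose foci are the emitter
# and the sensor; their depth z from the optical plane depends on lateral offset y.
def depth_on_ellipse(d: float, c: float, y: float) -> float:
    """Depth z of a point on the ellipse r1 + r2 = 2d at lateral offset y from the axis."""
    return math.sqrt((d * d - c * c) * (1.0 - (y * y) / (d * d)))

d, c = 0.50, 0.05          # total path 2d = 1.0 m, emitter/sensor spacing 2c = 0.1 m
print(depth_on_ellipse(d, c, 0.00))   # point A, on axis  -> ~0.4975 m
print(depth_on_ellipse(d, c, 0.20))   # point B, off axis -> ~0.4560 m
# Same measured travel distance, different true depth planes: the elliptical error.
```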
Unfortunately, computational overhead or cost associated with various coordinate transformations and other corrections may be high. For example assume that array 80 includes 100 rows and 100 columns of pixel detectors 70i,j (e.g., 1 ≤ i ≤ 100, 1 ≤ j ≤ 100). Thus, a single frame of three-dimensional data acquired for each pulse of energy from emitter 20 includes information from 10,000 pixels. In this example, correcting for geometric or distortion error requires performing 10,000 coordinate transformations for each frame of data acquired. If the frame rate is 30 frames per second, the computational requirement just for the coordinate transformations will be 300,000 coordinate transformations performed within each second.
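To make the overhead concrete, the sketch below (Python, assuming a simple pinhole back-projection model that the patent does not specify) shows the kind of per-pixel transformation a conventional approach would run 10,000 times per frame, or 300,000 times per second at 30 frames per second.

```python
import math

# A sketch of the prior-art per-pixel correction: each raw radial distance D at pixel
# (i, j) is back-projected along that pixel's ray to obtain (x, y, z).
def pixel_to_xyz(i: int, j: int, D: float, cx=50.0, cy=50.0, f=100.0):
    """Back-project one pixel's distance reading into world coordinates (assumed model)."""
    dx, dy, dz = (i - cx) / f, (j - cy) / f, 1.0     # ray through pixel (i, j)
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (D * dx / norm, D * dy / norm, D * dz / norm)

def transform_frame(frame):
    """A 100 x 100 frame costs 10,000 transforms; at 30 frames/s that is 300,000 per second."""
    return [[pixel_to_xyz(i + 1, j + 1, frame[i][j]) for j in range(100)]
            for i in range(100)]
```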
In addition to the sheer number of transformations required to be calculated per second, coordinate transformation typically involves use of floating-point calculation and/or memory to store transformation tables. Thus, the necessity to perform a substantial number of coordinate transformations can be computationally intensive and can require substantial memory resources. However, in applications where system 10 is an embedded system, the available computational power and available memory may be quite low. But even if the overhead associated with increased computational power and memory is provided to carry out coordinate transformation, correction of geometric error does not correct for distortion created by lens 60.
Lens distortion is present on almost every optical lens, and is more evident on less expensive lenses. Indeed, if system 10 is mass produced and lens 60 is not a high quality lens, the problem associated with lens distortion cannot generally be ignored. Fig. 4A depicts a cross-hatch image comprising horizontal and vertical lines. Fig. 4B depicts the image of Fig. 4A as viewed through a lens having substantial barrel distortion, while Fig. 4C depicts the image of Fig. 4A as viewed through a lens having substantial pincushion distortion. Barrel distortion and pincushion distortion are two common types of lens distortion. An additional type of lens distortion is fuzziness, e.g., imaged parallel lines may not necessarily be distorted to bow out (Fig. 4B) or bow in (Fig. 4C), yet the resultant image is not optically sharp but somewhat fuzzy. It is known in the art to correct non-linear lens distortion such as barrel and pincushion lens distortion using non-linear numerical transformation methods that are carried out on a per-pixel basis. While such transformation can indeed compensate for such non-linear lens distortion, the computational overhead cost can be substantial. Further, in an embedded application characterized by low computational power, the ability to correct for these two types of lens distortion may simply not be available. (Correction for fuzziness lens distortion is not addressed by the present invention.)
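For reference, the sketch below shows a one-term radial model of the kind often used to describe barrel and pincushion distortion; it is an assumption for illustration, not part of the patent. Conventional correction must invert such a mapping for every pixel of every frame, which is the overhead the present invention avoids.

```python
# A sketch of a one-term radial lens-distortion model (assumed, not from the patent).
def distort_point(xu: float, yu: float, k1: float, cx=50.0, cy=50.0):
    """Ideal pixel position -> distorted position; k1 < 0 displaces off-axis points
    inward (barrel-like), k1 > 0 outward (pincushion-like)."""
    dx, dy = xu - cx, yu - cy
    r2 = dx * dx + dy * dy
    return cx + dx * (1.0 + k1 * r2), cy + dy * (1.0 + k1 * r2)

# Example: with barrel-like distortion a point near the image corner is pulled inward.
print(distort_point(95.0, 95.0, k1=-1e-5))   # roughly (93.2, 93.2)
```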
Thus, for use with a system having an array of detectors, defined in (i,j,k) coordinate space, to acquire at least two-dimensional information representing user-controlled object interaction with a virtual input device, traditionally represented in (x,y,z) coordinate space, there is a need for a new method of analysis. Preferably such method should examine regions of the virtual input device and statically transform sub-regions of potential interest into (i,j,k) detector array coordinates. Determination as to what regions or sub-regions of the virtual input device have been interacted with by a user-controlled object may then advantageously be carried out in (i,j,k) domain space.
Further, there is a need for a method to reduce computational overhead associated with correction of geometric error, non-linear barrel and pincushion type lens distortion, and elliptical error in such a system that acquires at least two-dimensional data. Preferably such method should be straightforward in its implementation and should not substantially contribute to the cost or complexity of the overall system.
The present invention provides such a method.
SUMMARY OF THE INVENTION The present invention provides a methodology to simplify operation and reduce analysis overhead in a system that acquires at least two-dimensional information using a lens and an array of detectors. The information acquired represents interaction of a user-controlled object with a virtual input device that may be represented in conventional (x,y,z) space coordinates. The information is acquired preferably with detectors in an array that may be represented in (i,j,k) array space coordinates.
In one aspect, the invention defines sub-regions, preferably points, within the virtual input device to reduce the computational overhead and memory associated with correcting for geometric error, elliptical error, and non-linear barrel and pincushion type lens distortion in a system that acquires at least two-dimensional information using a lens and an array of detectors. Geometric error correction is addressed by representing data in the sensor array coordinate system (i,j,k) rather than in the conventional (x,y,z) coordinate system. Thus, data is represented by (i,j,k), where (i,j) identifies pixel (i,j) and k represents distance. Distance k is the distance from an active light source (e.g., a light source emitting optical energy) to the imaged portion of the target object, plus the return distance from the imaged portion of the target object to pixel (i,j). In the absence of an active light source, k is the distance between pixel (i,j) and the imaged portion of the target object.
Advantageously using the sensor coordinate system avoids having to make coordinate transformations, thus reducing computational overhead, cost, and memory requirements. Further, since the sensor coordinate system relates to raw data, geometric error correction is not applicable, and no correction for geometric error is needed.
In another aspect, the present invention addresses non-linear barrel and pincushion type lens distortion effects by simply directly using distorted coordinates of an object, e.g., a virtual input device, to compensate for such lens distortion. Thus, rather than employ computationally intensive techniques to correct for such lens distortion on the image data itself, computation cost is reduced by simply eliminating correction of such lens distortion upon the data. In a virtual keyboard input device application, since the coordinates of all the virtual keys are distorted coordinates, if the distorted image of a user-controlled object, e.g., a fingertip, is in close proximity to the distorted coordinate of a virtual key, the fingertip should be very close to the key. This permits using distorted images to identify key presses by using distorted coordinates of the virtual keys. Thus, using distorted key coordinates permits software associated with the virtual input device keyboard application to overcome non-linear pincushion and barrel type lens distortion without directly compensating for the lens distortion.
Other features and advantages of the invention will appear from the following description in which the preferred embodiments have been set forth in detail, in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 depicts a generic three-dimensional data acquisition system, according to the prior art;
FIG. 2 depicts the system of Fig. 1 in an application that tries to detect interaction between a virtual input device and a user-controlled object;
FIGS. 3A-3C depict the nature of geometric error in a three-dimensional imaging system, according to the prior art;
FIG. 4A depicts a cross-hatch image as viewed with a perfect lens;
FIG. 4B depicts the image of Fig. 4A as viewed through a lens that exhibits barrel distortion, according to the prior art;
FIG. 4C depicts the image of Fig. 4A as viewed through a lens that exhibits pincushion distortion, according to the prior art;
FIG. 5 depicts an electronics control system and use of distorted coordinates, according to the present invention; and
FIGS. 6 and 7 depict a virtual mouse/trackball and virtual pen application using distorted coordinates, according to the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
In overview, Fig. 5 depicts a system 10' in which energy transmitted from emitter 20 impinges regions of interest within the field of view of system 10', and is at least partially reflected back via an optional lens 60 to an array 80 of preferably pixel detectors 70i,j. A region of interest may be considered to be a zone that ranges from the vertical plane of virtual input device 100 upward for perhaps 12 mm or so. In practice, system 10' seeks to learn when and where a user-controlled object (e.g., hand 110, stylus or other object held by the user 40) approaches the surface of a virtual input device (e.g., a virtual keyboard 100, and/or a virtual trackpad 100', and/or a virtual trackball 100") within the region of interest and thus interacts with the virtual input device. When the y-axis distance between the plane of the virtual input device and the user-controlled object is essentially zero, then the user-controlled object may be said to have intersected or otherwise interacted with a region, e.g., a virtual key in Fig. 5, on the virtual input device. Interaction is sensed, for example, by pixel photodetectors 70i,j within a detector array 80. Conventionally detector array 80 is definable in (i,j,k) coordinate space, whereas the virtual input device is definable in (x,y,z) coordinate space.
The notion of a virtual input device is very broad. For example and without limitation, a user-controlled object (e.g., 110) may be moved across virtual device 100' to "write" or "draw" and/or to manipulate an object, e.g., a cursor on a display, or to select a menu, etc. The virtual trackball/mouse device 100" may be manipulated with a user's hand to produce movement, e.g., of a cursor on a display, to select a menu on a display, to manipulate a drawing, etc. Thus not only is the concept of a virtual input device broad, but interactions of a user-controlled object with a virtual input device are also broad in their scope.
Referring still to Fig. 5, system 10' includes electronics 90' that preferably comprises at least memory 150, a processor unit (CPU) 160, and software 170 that may be stored or loadable into memory 150. CPU 160 may be an embedded processor, if desired, for example a 48 MHz 80186 class processor. CPU 160 preferably executes software stored or loadable in MEM 150, e.g., a portion of software 170, to determine interaction information.
In one aspect of the present invention, preferably point-sized sub-regions (e.g., sub-regions 120 in device 100, sub-regions 120' in device 100', sub-regions 120" in device 100") are defined at areas or regions of interest within the virtual input device. Without limitation, the effective area of the sub-region 120 is preferably a single point, e.g., a very small fraction of the effective area of the virtual key, typically much less than 1% of the virtual key area. In Fig. 5, a point-sized sub-region 120 preferably is defined at a central location of each virtual key on the virtual keyboard device. For ease of illustration, only a few sub-regions 120 are shown, but it is understood that each region of interest will include at least one sub-region 120. For the case of a virtual trackpad input device 100', an array of sub-regions 120' will be defined with a granularity or pitch appropriate for the desired interaction resolution. For a virtual trackball input device 100", a three-dimensional array of sub-regions 120" will be defined. (See also Fig. 6.)
In another aspect, the present invention carries out an inverse-transformation of the sub-regions 120, 120', 120" from (x,y,z) coordinate space associated with the virtual input device to (i,j,k) coordinate space associated with the array 80 of pixel photodiode detectors 70i,j. Since the gross location of the virtual input device is known a priori, inverse-transformation of the sub-regions from (x,y,z) to (i,j,k) coordinate space may be done statically. Such operation may be carried out by CPU 160 and a portion of software 170 (or other software) in or storable in memory 150. The inverse-transformation may be done each time system 10' is powered-on, or perhaps periodically, e.g., once per hour, if needed. The inverse-transformed (i,j,k) coordinates define a distorted coordinate system. However, rather than attempt to correct or compensate for system distortion, the present invention preferably operates using distorted coordinate information.
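A minimal sketch of this static inverse-transformation step follows, assuming a simple projection model and hypothetical key positions (the patent specifies neither); each virtual key's (x,y,z) sub-region is mapped once to a distorted sensor-space coordinate (i,j,k).

```python
import math

# A sketch only: the projection model, focal length, image center, and key positions
# are assumptions; the patent does not specify how the static mapping is computed.
def project_to_sensor(x, y, z, f=100.0, cx=50.0, cy=50.0):
    """Map a world-space point to (pixel i, pixel j, round-trip path length k)."""
    i = int(round(cx + f * x / z))
    j = int(round(cy + f * y / z))
    k = 2.0 * math.sqrt(x * x + y * y + z * z)   # out-and-back path for active lighting
    return i, j, k

# Done once at power-on (or periodically): build the distorted-coordinate table.
KEY_SUBREGIONS = {"ALT": (-0.10, 0.00, 0.30), "SPACE": (0.00, 0.00, 0.28)}  # assumed (x,y,z), meters
DISTORTED_KEYS = {name: project_to_sensor(*p) for name, p in KEY_SUBREGIONS.items()}
print(DISTORTED_KEYS)
```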
As such, detection data is represented by (i,j,k), where (i,j) is detection pixel (i,j) and k is distance. Distance k is the distance from an active light source (e.g., a light source emitting optical energy) to the imaged portion of the target object, plus the return distance from the imaged portion of the target object to pixel (i,j). In the absence of an active light source, k is the distance between pixel (i,j) and the imaged portion of the target object.
Use of this sensor coordinate system reduces computation cost by avoiding coordinate transformation, as the system now relates consistently to the raw data, and avoids any need to correct for geometric error. Referring to Fig. 3B, in a real-world system the relationship between Z and D may be non-linear, yet use of distorted (i,j,k) coordinate space information by the present invention essentially factors out geometric error associated with optical lenses such as lens 60.
Thus, system 10' determines interaction between user-controlled object 110, 140 and virtual input device 100, 100', and/or 100" using mathematical operations that are carried out in the (i,j,k) coordinate system. As shown herein, the use of distorted data can substantially factor out various distortion errors that would require compensation or correction in prior art systems.
Advantageously the amount of storage in memory 150 needed to retain the inverse-transformation distortion coordinate information is relatively small and may be pre-calculated. For example, if the virtual input device 100 is a virtual keyboard having perhaps 100 virtual keys, one byte or so of information may be required to store the distortion coordinate information for each virtual key. Thus, as little as perhaps 100 bytes might suffice to store coordinate information for the system of Fig. 5. If, on the other hand, the virtual input device is a virtual trackball/mouse 100" definable as a multi-dimensional virtual cube, more memory may be required. In Fig. 6, if the grid comprises, for example, 100 lines in the x-axis, 100 lines in the y-axis, and perhaps 10 lines in the z-axis, 100,000 points of intersection are defined. Assuming about one byte of data per intersection point, perhaps 100 Kbytes of storage would be required to retain distortion coordinates for the virtual input cubic space device shown in Fig. 6. In the system shown in Fig. 7, depending upon the desired granularity of resolution, if 200x200 regions are defined in the virtual work surface, about 40 Kbytes of memory might be required to store the relevant distortion coordinates.
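The storage estimates above amount to the simple arithmetic sketched below (assuming, as the text does, roughly one byte per stored distorted coordinate).

```python
# Illustrative arithmetic only: storage needed for each virtual input device of Figs. 5-7.
BYTES_PER_POINT = 1

keyboard_keys  = 100                 # Fig. 5: one sub-region per virtual key
trackball_cube = 100 * 100 * 10      # Fig. 6: grid intersections in the virtual cube
trackpad_plane = 200 * 200           # Fig. 7: grid regions on the virtual work surface

print(keyboard_keys  * BYTES_PER_POINT)   # ~100 bytes
print(trackball_cube * BYTES_PER_POINT)   # ~100,000 bytes (~100 Kbytes)
print(trackpad_plane * BYTES_PER_POINT)   # ~40,000 bytes (~40 Kbytes)
```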
Software 170 upon execution by CPU 160 enables the various systems 10' to detect when an "intersection" between a region of interest in a virtual input device and a tip portion of a user-controlled object occurs. For example, software 170 might discern in Fig. 5, from TOF data expressed in the (i,j,k) coordinate system into which the virtual keys have been inverse-transformed from (x,y,z), that the left "ALT" key is being contacted by a user-controlled object. More specifically, interaction between the distorted coordinate location of the tip of user-controlled object 110 and the distorted coordinate location of sub-region 120 defined in a central region of the virtual "ALT" key would be detected and processed. In this example, software 170 could command system 10' to output as DATA a key scancode representing the left "ALT" key. In a virtual pen application, such as shown in Fig. 7, software 170 could output as DATA the locus of the points traversed by the tip of the pen upon the virtual workpad or trackpad.
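A minimal sketch of such intersection detection follows, assuming the distorted-key table from the earlier sketch and hypothetical scancode values and thresholds; it is illustrative only, not the patent's software 170.

```python
# A sketch only: SCANCODES, the thresholds, and the table format are assumptions;
# DISTORTED_KEYS (from the earlier sketch) maps key name -> (i, j, k).
SCANCODES = {"ALT": 0x38, "SPACE": 0x39}   # hypothetical example values

def detect_keypress(tip_ijk, distorted_keys, max_pixel_dist=2, max_k_dist=0.01):
    """Return the scancode of the virtual key whose distorted (i, j, k) coordinate
    the user-controlled object's tip is near, or None if no key is close enough."""
    ti, tj, tk = tip_ijk
    for name, (ki, kj, kk) in distorted_keys.items():
        if (abs(ti - ki) <= max_pixel_dist and abs(tj - kj) <= max_pixel_dist
                and abs(tk - kk) <= max_k_dist):
            return SCANCODES.get(name)
    return None
```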
Thus it is understood that according to the present invention, individual detectors 70i,j within array 80 are defined in terms of a sensor (i,j,k) coordinate system, whereas the virtual input keyboard device 100, virtual trackpad device 100', and virtual trackball device 100" are originally definable in terms of conventional (x,y,z) coordinates. But rather than use the conventional (x,y,z) coordinate system to determine user-controlled object interaction with one or more virtual input devices, and then have to transform coordinates, the present invention simply represents acquired data using the (i,j,k) coordinate system of sensor array 80, admittedly a distorted coordinate system.
Note that correction of elliptical error is not required in the present invention. This advantageous result follows because each pixel 70i,j in sensor array 80 only receives light from a particular direction when optical lens 60 is disposed in front of the sensor array. Thus, referring to Fig. 3C, point A and point B associated with user-controlled object 40 or 110 and/or virtual input device 100 will be imaged in different pixels within the sensor array.
The present invention defines a distorted coordinate for each region of interest in the virtual input device 100. Referring to Fig. 5, since the virtual keyboard device 100 and the virtual trackpad device 100' are shown as being planar, regions of interest associated with these devices will lie essentially on the x-z plane. By contrast, the virtual mouse/trackball input device 100" is three dimensional, and regions of interest associated with device 100" will lie within a virtual volume, e.g., a cube, a sphere, etc.
In system 10' shown in Fig. 5 the virtual input device is a virtual keyboard 100 and a distorted coordinate is defined for a sub-region 120 of each virtual key. Without limitation, the effective area of the sub-region 120 is preferably a single point, e.g., a very small fraction of the effective area of the virtual key, typically much less than 1% of the virtual key area. As will be described, this approach enables system 10' to overcome non-linear pincushion and barrel type distortion associated with lens 60. Rather than correct such non-linear lens distortions on image data, the present invention simply directly uses the distorted coordinates 120 of the virtual keys to overcome the distortion. Since lens distortion for data is not per se corrected, there is a substantial saving in computational cost and overhead. The preferably point sub-regions for the virtual trackpad or writing pad 100' are denoted 120' and will lie on the x-z plane of device 100'. The preferably point sub-regions for the virtual trackball/mouse input device 100" are denoted 120" and will occupy a three-dimensional volume defined by the virtual "size" of input device 100".
It will be appreciated that Fig. 5 depicts what may be termed a true three-dimensional imaging system. However, if the range of y-axis positions is intentionally limited (e.g., only small excursions from the x-z plane of the virtual input device are of interest), then system 10' might be referred to as perhaps a 2.5-dimensional system.
Note that in the virtual keyboard application of Fig. 5, since every virtual key's coordinates are distorted coordinates, if the distorted image of a user-controlled object (40, 110), e.g., a fingertip, is very close to or at the distorted coordinate 120 of a virtual key, then the fingertip should indeed be very close to the location of the virtual key. Thus, a distorted image is used to identify fingertip "presses" of virtual keys by using distorted key coordinates. In this fashion, software associated with use of the virtual keyboard application (e.g., perhaps a computer coupled to receive output DATA from system 10') can overcome lens distortions without having to correct for lens distortion during the data processing.
A small object 120 is defined preferably in the center of each key to define that key's distorted coordinate. The image of the various small objects 120 is collected by the imaging sensors (pixels) in sensor array 80, and from the collected image, data representing the small object regions 120 is available.
As noted, such data are the distorted coordinate of the target virtual key.
In Fig. 5, assume it is desired to locate the coordinate of the left "ALT" key on the virtual keyboard. A small object 120 (small relative to the "size" of the virtual "ALT" key) is defined at the actual position of the virtual "ALT" key and its image data is collected from the three-dimensional sensor array 80.
Suppose that the image of the "ALT" key small target 120 falls upon pixel (s,t) with a distance value d. This means that the coordinate of the virtual "ALT" key is (s,t,d). But since the coordinate of the "ALT" key is collected from actual data, the (s,t,d) coordinate automatically reflects lens distortions associated with that virtual key position. In the same manner, the distorted coordinates for all virtual keys (which is to say for all regions of interest on a virtual input device) are defined.
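The sketch below illustrates one way such a distorted coordinate could be read directly from acquired data, assuming a hypothetical frame format and a brightest-pixel heuristic for locating the small target; none of these details come from the patent.

```python
# A sketch only: frame[s][t] holds the distance reading and brightness[s][t] the returned
# intensity; whichever pixel images the small target most strongly, together with that
# pixel's distance d, becomes the key's distorted coordinate (s, t, d).
def calibrate_key(frame, brightness, key_name, table):
    """Record the distorted (s, t, d) coordinate of one virtual key into table."""
    rows, cols = len(frame), len(frame[0])
    s, t = max(((i, j) for i in range(rows) for j in range(cols)),
               key=lambda p: brightness[p[0]][p[1]])   # assumed: brightest pixel = target
    table[key_name] = (s, t, frame[s][t])
    return table[key_name]
```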
Just as system 10' may be used in a variety of applications, applicant's use of distorted coordinates for small regions of interest on a virtual input device may be used in many applications. In Fig. 6, a cube-like three-dimensional region 130 is defined that may represent a virtual mouse or virtual trackball input device 100", while in Fig. 7 a planar region 100' is defined as a virtual work surface useable with a pen-like device 110, upon which virtual writing 140 may be traced.
In the more general case of Fig. 6, the three-dimensional region 130 is partitioned into a three-dimensional grid. The above-described method is then used to define a distorted coordinate for each point of intersection 120 of grid lines within the cube. In Fig. 7 the same technique is applied to the two-dimensional grid shown.
In such applications, when the distorted image of the user-controlled object (e.g., the fingertip, the pen tip, etc.) is close to the distorted coordinate 120 of some intersection point in the virtual input device, the user-controlled object is identified as being at the intersection point. This permits tracing movement of the user-controlled object over or through the region of interest in the virtual input device.
It will be appreciated that in a mouse or trackball or virtual writing pad application, tracing a locus of user-controlled object movement may require knowledge of when the object no longer is in contact with the virtual device. Raw data is obtained in (i,j,k) coordinates for regions of interest defined by preferably pin-point sub-regions 120. The only transformation from (i,j,k) to (x,y,z) carried out by the present invention will involve coordinates indicating virtual contact between the user-controlled object and the virtual input device. Thus, if a user moves a fingertip 110 or a stylus to trace a locus of points 140 on a virtual trackpad 100' (see Fig. 7), it is only necessary to transform (i,j,k) to (x,y,z) coordinates for those relatively few locations where there is virtual contact with the virtual trackpad, or where there is movement across the virtual surface of the virtual trackpad. Thus, rather than translate thousands of pixels per frame of acquired image, the number of (i,j,k) to (x,y,z) transformations is substantially less, e.g., only for the pin-point regions 120' where contact occurs.
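A minimal sketch of this sparse back-transformation follows, reusing the assumed pinhole model from the earlier sketches and treating emitter and sensor as co-located so that the one-way range is k/2; it is illustrative only, not the patent's implementation.

```python
import math

# A sketch only: invert the assumed pinhole mapping for the handful of samples where
# virtual contact with the trackpad is detected, to trace the written locus 140.
def sensor_to_world(i, j, k, f=100.0, cx=50.0, cy=50.0):
    """Convert one distorted (i, j, k) sample back to (x, y, z) under the assumed model."""
    dx, dy, dz = (i - cx) / f, (j - cy) / f, 1.0   # ray through pixel (i, j)
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    r = k / 2.0                                    # one-way range along the pixel's ray
    return (r * dx / norm, r * dy / norm, r * dz / norm)

def trace_locus(contact_samples):
    """contact_samples: the few (i, j, k) tuples flagged as virtual contact in a frame."""
    return [sensor_to_world(i, j, k) for (i, j, k) in contact_samples]
```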
Modifications and variations may be made to the disclosed embodiments without departing from the scope and spirit of the invention as defined by the following claims.

Claims

CLAIMS:
1. A method to reduce computation in a system that uses an array of sensors definable in (i,j,k) coordinates to detect at least two-dimensional data representing distance between the array and a region of interest definable in (x,y,z) coordinates, the region of interest including at least one region in which a portion of a user-controlled object can interact with a portion of a virtual input device, the method including the following steps:
(A) calculating and transforming from (x,y,z) coordinates into said (i,j,k) coordinates a distortion coordinate for a sub-portion of said region of interest; and (B) using distortion coordinates calculated at step (A) to determine in said (i,j,k) coordinates distance between an image portion of a target object and a portion of said region of interest.
2. The method of claim 1, wherein step (B) includes determining when said (i,j,k) coordinates are zero; wherein when said (i,j,k) coordinates are zero, a contact interaction between a portion of said user-controlled object and at least a sub-region of said virtual input device has occurred.
3. The method of claim 1, wherein at step (A) said sub-portion is a point.
4. The method of claim 1, wherein: said sub-portion includes a point defined on said virtual input device; and step (A) includes at least one of (i) calculating and (ii) transforming distortion coordinates.
5. The method of claim 4, further including storing distortion coordinates so calculated.
6. The method of claim 1, wherein step (A) includes statically doing at least one of (i) calculating and (ii) transforming distortion coordinates.
7. The method of claim 1, wherein said system includes an optical lens associated with said array; wherein distortion effect upon distance determined at step (B) due to at least one lens distortion selected from non-linear pincushion type distortion and non-linear barrel type distortion is reduced.
8. The method of claim 1, wherein: said distortion coordinates are calculated for sub-regions on said virtual input device.
9. The method of claim 1, wherein said user-controlled object includes a human user's finger.
10. The method of claim 1, wherein said user-controlled object includes a user-held stylus.
11. The method of claim 1, wherein said virtual input device includes a virtual keyboard.
12. The method of claim 1, wherein said virtual input device includes at least one of a virtual mouse and a virtual trackball.
13. The method of claim 1, wherein said virtual input device includes at least one of a virtual writing pad and a virtual trackpad.
14. The method of claim 1, wherein at step (B) said distance is determined using time-of-flight data.
15. The method of claim 1, wherein: said virtual input device includes a virtual touch pad across whose virtual surface said user-controlled object may be moved; further including transforming (i,j,k) coordinates to (x,y,z) coordinates for locations where virtual contact is detected between said virtual surface and a portion of said user-controlled object.
16. The method of claim 1, wherein: said virtual input device includes a virtual touch pad across whose virtual surface said user-controlled object may be moved; further including transforming (i,j,k) coordinates to (x,y,z) coordinates for locations where movement is detected across said virtual surface by a portion of said user-controlled object.
17. A sub-system to reduce computation in a system that uses an array of sensors definable in (i,j,k) coordinates to detect at least two-dimensional data representing distance between the array and a region of interest definable in (x,y,z) coordinates, the sub-system including: means for transforming point sub-region locations defined within said region of interest from (x,y,z) coordinate system data to (i,j,k) coordinate system data, and calculating in said (i,j,k) coordinate system a distortion coordinate for each said point sub-region; and means for using distortion coordinates so calculated to determine in said (i,j,k) coordinates distance between a sensor in said array and a portion of said region of interest.
18. The sub-system of claim 17, wherein said system is a time-of-flight system.
19. The sub-system of claim 17, wherein said user-controlled object includes at least one of a human finger and a user-held stylus.
20. The sub-system of claim 17, wherein said virtual input device includes at least one of a virtual keyboard, a virtual mouse, a virtual trackball, and a virtual writing pad.
PCT/US2001/045420 2000-11-19 2001-11-19 Method for enhancing performance in a system utilizing an array of sensors that sense at least two-dimensions WO2002048642A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2002243265A AU2002243265A1 (en) 2000-11-19 2001-11-19 Method for enhancing performance in a system utilizing an array of sensors that sense at least two-dimensions

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US25216600P 2000-11-19 2000-11-19
US60/252,166 2000-11-19

Publications (2)

Publication Number Publication Date
WO2002048642A2 true WO2002048642A2 (en) 2002-06-20
WO2002048642A3 WO2002048642A3 (en) 2003-03-13

Family

ID=22954877

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/045420 WO2002048642A2 (en) 2000-11-19 2001-11-19 Method for enhancing performance in a system utilizing an array of sensors that sense at least two-dimensions

Country Status (3)

Country Link
US (1) US6690354B2 (en)
AU (1) AU2002243265A1 (en)
WO (1) WO2002048642A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017052465A1 (en) * 2015-09-23 2017-03-30 Razer (Asia-Pacific) Pte. Ltd. Trackpads and methods for controlling a trackpad
CN110502095A (en) * 2018-05-17 2019-11-26 宏碁股份有限公司 The three dimensional display for having gesture sensing function

Families Citing this family (118)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7831358B2 (en) * 1992-05-05 2010-11-09 Automotive Technologies International, Inc. Arrangement and method for obtaining information using phase difference of modulated illumination
US6701005B1 (en) 2000-04-29 2004-03-02 Cognex Corporation Method and apparatus for three-dimensional object segmentation
US6639684B1 (en) * 2000-09-13 2003-10-28 Nextengine, Inc. Digitizer using intensity gradient to image features of three-dimensional objects
KR20030072591A (en) * 2001-01-08 2003-09-15 브이케이비 인코포레이티드 A data input device
DE10294159D2 (en) * 2001-09-07 2004-07-22 Me In Gmbh operating device
JP3920067B2 (en) * 2001-10-09 2007-05-30 株式会社イーアイティー Coordinate input device
EP1315120A1 (en) * 2001-11-26 2003-05-28 Siemens Aktiengesellschaft Pen input system
EP1540641A2 (en) * 2002-06-26 2005-06-15 VKB Inc. Multifunctional integrated image sensor and application to virtual interface technology
US7920718B2 (en) * 2002-09-05 2011-04-05 Cognex Corporation Multi-zone passageway monitoring system and method
US7176438B2 (en) * 2003-04-11 2007-02-13 Canesta, Inc. Method and system to differentially enhance sensor dynamic range using enhanced common mode reset
EP1614159B1 (en) * 2003-04-11 2014-02-26 Microsoft Corporation Method and system to differentially enhance sensor dynamic range
JP2007515859A (en) * 2003-10-31 2007-06-14 ヴィーケービー インコーポレイテッド Optical device for projection and sensing of virtual interfaces
US8326084B1 (en) 2003-11-05 2012-12-04 Cognex Technology And Investment Corporation System and method of auto-exposure control for image acquisition hardware using three dimensional information
US7623674B2 (en) * 2003-11-05 2009-11-24 Cognex Technology And Investment Corporation Method and system for enhanced portal security through stereoscopy
US7227535B1 (en) 2003-12-01 2007-06-05 Romano Edwin S Keyboard and display for a computer
WO2005072358A2 (en) * 2004-01-28 2005-08-11 Canesta, Inc. Single chip red, green, blue, distance (rgb-z) sensor
US7711179B2 (en) * 2004-04-21 2010-05-04 Nextengine, Inc. Hand held portable three dimensional scanner
KR100636483B1 (en) 2004-06-25 2006-10-18 삼성에스디아이 주식회사 Transistor and fabrication method thereof and light emitting display
US20060045174A1 (en) * 2004-08-31 2006-03-02 Ittiam Systems (P) Ltd. Method and apparatus for synchronizing a transmitter clock of an analog modem to a remote clock
US8160363B2 (en) * 2004-09-25 2012-04-17 Samsung Electronics Co., Ltd Device and method for inputting characters or drawings in a mobile terminal using a virtual screen
WO2006090386A2 (en) * 2005-02-24 2006-08-31 Vkb Inc. A virtual keyboard device
US20070019099A1 (en) * 2005-07-25 2007-01-25 Vkb Inc. Optical apparatus for virtual interface projection and sensing
US20070019103A1 (en) * 2005-07-25 2007-01-25 Vkb Inc. Optical apparatus for virtual interface projection and sensing
US8111904B2 (en) 2005-10-07 2012-02-07 Cognex Technology And Investment Corp. Methods and apparatus for practical 3D vision system
US7995834B1 (en) 2006-01-20 2011-08-09 Nextengine, Inc. Multiple laser scanner
US8086971B2 (en) 2006-06-28 2011-12-27 Nokia Corporation Apparatus, methods and computer program products providing finger-based and hand-based gesture commands for portable electronic device applications
US8316324B2 (en) * 2006-09-05 2012-11-20 Navisense Method and apparatus for touchless control of a device
KR20080044017A (en) * 2006-11-15 2008-05-20 삼성전자주식회사 Touch screen
US8126260B2 (en) * 2007-05-29 2012-02-28 Cognex Corporation System and method for locating a three-dimensional object using machine vision
US8726194B2 (en) 2007-07-27 2014-05-13 Qualcomm Incorporated Item selection using enhanced control
US8027029B2 (en) 2007-11-07 2011-09-27 Magna Electronics Inc. Object detection and tracking system
US20090278977A1 (en) * 2008-05-12 2009-11-12 Jin Li Method and apparatus providing pre-distorted solid state image sensors for lens distortion compensation
JP5251482B2 (en) * 2008-12-18 2013-07-31 セイコーエプソン株式会社 Input device and data processing system
US8427438B2 (en) * 2009-03-26 2013-04-23 Apple Inc. Virtual input tools
GB2485489A (en) * 2009-09-09 2012-05-16 Mattel Inc A system and method for displaying, navigating and selecting electronically stored content on a multifunction handheld device
KR101851264B1 (en) 2010-01-06 2018-04-24 주식회사 셀루온 System and Method for a Virtual Multi-touch Mouse and Stylus Apparatus
US9798518B1 (en) * 2010-03-26 2017-10-24 Open Invention Network Llc Method and apparatus for processing data based on touch events on a touch sensitive device
US10191609B1 (en) 2010-03-26 2019-01-29 Open Invention Network Llc Method and apparatus of providing a customized user interface
TWI423096B (en) * 2010-04-01 2014-01-11 Compal Communication Inc Projecting system with touch controllable projecting picture
US8892594B1 (en) 2010-06-28 2014-11-18 Open Invention Network, Llc System and method for search with the aid of images associated with product categories
JP2012053532A (en) * 2010-08-31 2012-03-15 Casio Comput Co Ltd Information processing apparatus and method, and program
US20120297339A1 (en) * 2011-01-27 2012-11-22 Kyocera Corporation Electronic device, control method, and storage medium storing control program
US9857868B2 (en) 2011-03-19 2018-01-02 The Board Of Trustees Of The Leland Stanford Junior University Method and system for ergonomic touch-free interface
US8928589B2 (en) * 2011-04-20 2015-01-06 Qualcomm Incorporated Virtual keyboards and methods of providing the same
US8840466B2 (en) 2011-04-25 2014-09-23 Aquifi, Inc. Method and system to create three-dimensional mapping in a two-dimensional game
CN103186255B (en) * 2011-12-27 2016-08-10 中国电信股份有限公司 Based on gyroscope to light target moving processing method and system, user terminal
JP5799817B2 (en) * 2012-01-12 2015-10-28 富士通株式会社 Finger position detection device, finger position detection method, and computer program for finger position detection
JP5814147B2 (en) * 2012-02-01 2015-11-17 パナソニック インテレクチュアル プロパティ コーポレーション オブアメリカPanasonic Intellectual Property Corporation of America Input device, input control method, and input control program
US8854433B1 (en) 2012-02-03 2014-10-07 Aquifi, Inc. Method and system enabling natural user interface gestures with an electronic system
KR20130115750A (en) * 2012-04-13 2013-10-22 포항공과대학교 산학협력단 Method for recognizing key input on a virtual keyboard and apparatus for the same
WO2013175389A2 (en) * 2012-05-20 2013-11-28 Extreme Reality Ltd. Methods circuits apparatuses systems and associated computer executable code for providing projection based human machine interfaces
US8934675B2 (en) 2012-06-25 2015-01-13 Aquifi, Inc. Systems and methods for tracking human hands by performing parts based template matching using images from multiple viewpoints
US9111135B2 (en) 2012-06-25 2015-08-18 Aquifi, Inc. Systems and methods for tracking human hands using parts based template matching using corresponding pixels in bounded regions of a sequence of frames that are a specified distance interval from a reference camera
US9305229B2 (en) 2012-07-30 2016-04-05 Bruno Delean Method and system for vision based interfacing with a computer
US8497841B1 (en) 2012-08-23 2013-07-30 Celluon, Inc. System and method for a virtual keyboard
US8836768B1 (en) 2012-09-04 2014-09-16 Aquifi, Inc. Method and system enabling natural user interface gestures with user wearable glasses
US9129155B2 (en) 2013-01-30 2015-09-08 Aquifi, Inc. Systems and methods for initializing motion tracking of human hands using template matching within bounded regions determined using a depth map
US9092665B2 (en) 2013-01-30 2015-07-28 Aquifi, Inc Systems and methods for initializing motion tracking of human hands
US9298266B2 (en) 2013-04-02 2016-03-29 Aquifi, Inc. Systems and methods for implementing three-dimensional (3D) gesture based graphical user interfaces (GUI) that incorporate gesture reactive interface objects
CN105308535A (en) * 2013-07-15 2016-02-03 英特尔公司 Hands-free assistance
US9798388B1 (en) 2013-07-31 2017-10-24 Aquifi, Inc. Vibrotactile system to augment 3D input systems
US9507417B2 (en) 2014-01-07 2016-11-29 Aquifi, Inc. Systems and methods for implementing head tracking based graphical user interfaces (GUI) that incorporate gesture reactive interface objects
US9619105B1 (en) 2014-01-30 2017-04-11 Aquifi, Inc. Systems and methods for gesture based interaction with viewpoint dependent user interfaces
US10419723B2 (en) 2015-06-25 2019-09-17 Magna Electronics Inc. Vehicle communication system with forward viewing camera and integrated antenna
US10137904B2 (en) 2015-10-14 2018-11-27 Magna Electronics Inc. Driver assistance system with sensor offset correction
US11027654B2 (en) 2015-12-04 2021-06-08 Magna Electronics Inc. Vehicle vision system with compressed video transfer via DSRC link
US10703204B2 (en) 2016-03-23 2020-07-07 Magna Electronics Inc. Vehicle driver monitoring system
US10571562B2 (en) 2016-03-25 2020-02-25 Magna Electronics Inc. Vehicle short range sensing system using RF sensors
US10534081B2 (en) 2016-05-02 2020-01-14 Magna Electronics Inc. Mounting system for vehicle short range sensors
US10040481B2 (en) 2016-05-17 2018-08-07 Magna Electronics Inc. Vehicle trailer angle detection system using ultrasonic sensors
US10768298B2 (en) 2016-06-14 2020-09-08 Magna Electronics Inc. Vehicle sensing system with 360 degree near range sensing
WO2018007995A1 (en) 2016-07-08 2018-01-11 Magna Electronics Inc. 2d mimo radar system for vehicle
US10239446B2 (en) 2016-07-13 2019-03-26 Magna Electronics Inc. Vehicle sensing system using daisy chain of sensors
US10708227B2 (en) 2016-07-19 2020-07-07 Magna Electronics Inc. Scalable secure gateway for vehicle
US10237509B1 (en) 2016-08-05 2019-03-19 Apple Inc. Systems with keyboards and head-mounted displays
US10641867B2 (en) 2016-08-15 2020-05-05 Magna Electronics Inc. Vehicle radar system with shaped radar antennas
US10852418B2 (en) 2016-08-24 2020-12-01 Magna Electronics Inc. Vehicle sensor with integrated radar and image sensors
US10836376B2 (en) 2016-09-06 2020-11-17 Magna Electronics Inc. Vehicle sensing system with enhanced detection of vehicle angle
US10677894B2 (en) 2016-09-06 2020-06-09 Magna Electronics Inc. Vehicle sensing system for classification of vehicle model
US10347129B2 (en) 2016-12-07 2019-07-09 Magna Electronics Inc. Vehicle system with truck turn alert
US10462354B2 (en) 2016-12-09 2019-10-29 Magna Electronics Inc. Vehicle control system utilizing multi-camera module
US10703341B2 (en) 2017-02-03 2020-07-07 Magna Electronics Inc. Vehicle sensor housing with theft protection
US11536829B2 (en) 2017-02-16 2022-12-27 Magna Electronics Inc. Vehicle radar system with radar embedded into radome
US10782388B2 (en) 2017-02-16 2020-09-22 Magna Electronics Inc. Vehicle radar system with copper PCB
US11142200B2 (en) 2017-02-23 2021-10-12 Magna Electronics Inc. Vehicular adaptive cruise control with enhanced vehicle control
US10884103B2 (en) 2017-04-17 2021-01-05 Magna Electronics Inc. Calibration system for vehicle radar system
US10870426B2 (en) 2017-06-22 2020-12-22 Magna Electronics Inc. Driving assistance system with rear collision mitigation
CN108638969A (en) 2017-06-30 2018-10-12 麦格纳电子(张家港)有限公司 The vehicle vision system communicated with trailer sensor
US10877148B2 (en) 2017-09-07 2020-12-29 Magna Electronics Inc. Vehicle radar sensing system with enhanced angle resolution using synthesized aperture
US10962638B2 (en) 2017-09-07 2021-03-30 Magna Electronics Inc. Vehicle radar sensing system with surface modeling
US11150342B2 (en) 2017-09-07 2021-10-19 Magna Electronics Inc. Vehicle radar sensing system with surface segmentation using interferometric statistical analysis
US10962641B2 (en) 2017-09-07 2021-03-30 Magna Electronics Inc. Vehicle radar sensing system with enhanced accuracy using interferometry techniques
US10933798B2 (en) 2017-09-22 2021-03-02 Magna Electronics Inc. Vehicle lighting control system with fog detection
US11391826B2 (en) 2017-09-27 2022-07-19 Magna Electronics Inc. Vehicle LIDAR sensor calibration system
US11486968B2 (en) 2017-11-15 2022-11-01 Magna Electronics Inc. Vehicle Lidar sensing system with sensor module
US10816666B2 (en) 2017-11-21 2020-10-27 Magna Electronics Inc. Vehicle sensing system with calibration/fusion of point cloud partitions
US11167771B2 (en) 2018-01-05 2021-11-09 Magna Mirrors Of America, Inc. Vehicular gesture monitoring system
US11112498B2 (en) 2018-02-12 2021-09-07 Magna Electronics Inc. Advanced driver-assistance and autonomous vehicle radar and marking system
US11199611B2 (en) 2018-02-20 2021-12-14 Magna Electronics Inc. Vehicle radar system with T-shaped slot antennas
US11047977B2 (en) 2018-02-20 2021-06-29 Magna Electronics Inc. Vehicle radar system with solution for ADC saturation
US11808876B2 (en) 2018-10-25 2023-11-07 Magna Electronics Inc. Vehicular radar system with vehicle to infrastructure communication
US11683911B2 (en) 2018-10-26 2023-06-20 Magna Electronics Inc. Vehicular sensing device with cooling feature
US11638362B2 (en) 2018-10-29 2023-04-25 Magna Electronics Inc. Vehicular radar sensor with enhanced housing and PCB construction
US11454720B2 (en) 2018-11-28 2022-09-27 Magna Electronics Inc. Vehicle radar system with enhanced wave guide antenna system
US11096301B2 (en) 2019-01-03 2021-08-17 Magna Electronics Inc. Vehicular radar sensor with mechanical coupling of sensor housing
US11332124B2 (en) 2019-01-10 2022-05-17 Magna Electronics Inc. Vehicular control system
US11294028B2 (en) 2019-01-29 2022-04-05 Magna Electronics Inc. Sensing system with enhanced electrical contact at PCB-waveguide interface
US11609304B2 (en) 2019-02-07 2023-03-21 Magna Electronics Inc. Vehicular front camera testing system
US11267393B2 (en) 2019-05-16 2022-03-08 Magna Electronics Inc. Vehicular alert system for alerting drivers of other vehicles responsive to a change in driving conditions
WO2021142442A1 (en) 2020-01-10 2021-07-15 Optimus Ride, Inc. Communication system and method
US10983567B1 (en) 2020-01-28 2021-04-20 Dell Products L.P. Keyboard magnetic guard rails
US10989978B1 (en) 2020-01-28 2021-04-27 Dell Products L.P. Selectively transparent and opaque keyboard bottom
US10929016B1 (en) 2020-01-28 2021-02-23 Dell Products L.P. Touch calibration at keyboard location
US10990204B1 (en) 2020-01-28 2021-04-27 Dell Products L.P. Virtual touchpad at keyboard location
US10983570B1 (en) 2020-01-28 2021-04-20 Dell Products L.P. Keyboard charging from an information handling system
US11586296B2 (en) 2020-01-28 2023-02-21 Dell Products L.P. Dynamic keyboard support at support and display surfaces
US11823395B2 (en) 2020-07-02 2023-11-21 Magna Electronics Inc. Vehicular vision system with road contour detection feature
US11749105B2 (en) 2020-10-01 2023-09-05 Magna Electronics Inc. Vehicular communication system with turn signal identification

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5767842A (en) * 1992-02-07 1998-06-16 International Business Machines Corporation Method and device for optical input of commands or data
US6043805A (en) * 1998-03-24 2000-03-28 Hsieh; Kuan-Hong Controlling method for inputting messages to a computer
US6133774A (en) * 1999-03-05 2000-10-17 Motorola Inc. Clock generator and method therefor
US6266078B1 (en) * 1998-01-09 2001-07-24 Canon Kabushiki Kaisha Image forming apparatus

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4988981B1 (en) * 1987-03-17 1999-05-18 Vpl Newco Inc Computer data entry and manipulation apparatus and method
US5933132A (en) * 1989-11-07 1999-08-03 Proxima Corporation Method and apparatus for calibrating geometrically an optical computer input system
US5448263A (en) * 1991-10-21 1995-09-05 Smart Technologies Inc. Interactive display system
GB2289756B (en) * 1994-05-26 1998-11-11 Alps Electric Co Ltd Space coordinates detecting device and input apparatus using same
US6104387A (en) * 1997-05-14 2000-08-15 Virtual Ink Corporation Transcription system
US6252598B1 (en) * 1997-07-03 2001-06-26 Lucent Technologies Inc. Video hand image computer interface
US6522312B2 (en) * 1997-09-01 2003-02-18 Canon Kabushiki Kaisha Apparatus for presenting mixed reality shared among operators
US6323942B1 (en) * 1999-04-30 2001-11-27 Canesta, Inc. CMOS-compatible three-dimensional image sensor IC
US6614422B1 (en) * 1999-11-04 2003-09-02 Canesta, Inc. Method and apparatus for entering data using a virtual input device
US6512838B1 (en) * 1999-09-22 2003-01-28 Canesta, Inc. Methods for enhancing performance and data acquired from three-dimensional image systems

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5767842A (en) * 1992-02-07 1998-06-16 International Business Machines Corporation Method and device for optical input of commands or data
US6266078B1 (en) * 1998-01-09 2001-07-24 Canon Kabushiki Kaisha Image forming apparatus
US6043805A (en) * 1998-03-24 2000-03-28 Hsieh; Kuan-Hong Controlling method for inputting messages to a computer
US6133774A (en) * 1999-03-05 2000-10-17 Motorola Inc. Clock generator and method therefor

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017052465A1 (en) * 2015-09-23 2017-03-30 Razer (Asia-Pacific) Pte. Ltd. Trackpads and methods for controlling a trackpad
US10599236B2 (en) 2015-09-23 2020-03-24 Razer (Asia-Pacific) Pte. Ltd. Trackpads and methods for controlling a trackpad
TWI709879B (en) 2015-09-23 2020-11-11 新加坡商雷蛇(亞太)私人有限公司 Trackpads and methods for controlling a trackpad
CN110502095A (en) * 2018-05-17 2019-11-26 宏碁股份有限公司 The three dimensional display for having gesture sensing function
CN110502095B (en) * 2018-05-17 2021-10-29 宏碁股份有限公司 Three-dimensional display with gesture sensing function

Also Published As

Publication number Publication date
WO2002048642A3 (en) 2003-03-13
AU2002243265A1 (en) 2002-06-24
US6690354B2 (en) 2004-02-10
US20020060669A1 (en) 2002-05-23

Similar Documents

Publication Publication Date Title
US6690354B2 (en) Method for enhancing performance in a system utilizing an array of sensors that sense at least two-dimensions
CN104423731B (en) Apparatus of coordinate detecting, the method for coordinate measurement and electronic information plate system
US20110267264A1 (en) Display system with multiple optical sensors
AU2007329152B2 (en) Interactive input system and method
JP4820285B2 (en) Automatic alignment touch system and method
US6512838B1 (en) Methods for enhancing performance and data acquired from three-dimensional image systems
CA2760729C (en) Disambiguating pointers by imaging multiple touch-input zones
US20240045518A1 (en) Method for correcting gap between pen coordinate and display position of pointer
US9971455B2 (en) Spatial coordinate identification device
KR20110015461A (en) Multiple pointer ambiguity and occlusion resolution
US20120319945A1 (en) System and method for reporting data in a computer vision system
US20030226968A1 (en) Apparatus and method for inputting data
WO2012006716A1 (en) Interactive input system and method
WO2019136989A1 (en) Projection touch control method and device
CN108369470A (en) Improved stylus identification
CN112050751B (en) Projector calibration method, intelligent terminal and storage medium
WO2019171635A1 (en) Operation input device, operation input method, anc computer-readable recording medium
CN116974400B (en) Screen touch recognition method, device, equipment and storage medium
KR102587998B1 (en) pressure sensor calibration method of an optical digital pen by use of tilt angle
WO2020027813A1 (en) Cropping portions of images

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP