|Publication number||US4922336 A|
|Application number||US 07/405,253|
|Publication date||May 1, 1990|
|Filing date||Sep 11, 1989|
|Priority date||Sep 11, 1989|
|Inventors||Roger R. A. Morton|
|Original Assignee||Eastman Kodak Company|
The invention relates to a system that is able to display an object in three dimensions and a method for use therein.
The need to display a three-dimensional object to a viewer can occur in a variety of applications, such as in medical imaging, graphics (e.g. computer aided engineering, computer aided design and the like) or entertainment. Currently, photographic as well as presently available electronic display technologies, such as cathode ray tube (CRT) monitors, liquid crystal displays, plasma panels and the like, rely on displaying an object in only two dimensions.
Specifically, conventional photographic film is itself a flat two-dimensional medium which is unable to record the three-dimensional information inherent in a scene. In particular, if one object is placed partially in front of another object in a scene and a photograph is then taken at a zero degree offset angle from the front of that scene, the resulting photograph will show a front view of that scene wherein one object appears partially in front of the other object but will fail to reveal the separation between these two objects. The depth information inherent in the scene will simply not be recorded in this instance. By contrast, some depth information can be photographically recorded if another photograph were taken at an angle offset from the centerline of the image, thereby providing a two-dimensional perspective image of both objects in which one dimension partially conveys the depth information. However, if the relative offset angle between the images depicted in the two photographs is excessive, then it becomes very difficult for a viewer to properly construct an adequate mental three-dimensional image of the photographed scene. By comparing two photographed images of the same scene taken from slightly different perspectives, where each perspective approximately matches that seen by the corresponding eye of a human observer, as would be the case with stereoscopic photography, a pair of human eyes, when simultaneously viewing the resulting photographs, is able, in conjunction with the visual center in the viewer's brain, to combine the images such that the viewer visually perceives an acceptable three-dimensional view of the scene. Unfortunately, stereoscopic cameras require two separate lenses and slaved shutter assemblies and are thus mechanically complex. Furthermore, stereoscopic cameras and associated equipment are not widely available.
In addition, stereoscopic prints and slides require special viewing equipment that places both of the two photographed images, which collectively form a single stereoscopic image, at the correct separation and distance from the eyes of a viewer in order to produce a relatively accurate stereoscopic image. Unfortunately, this viewing equipment tends to be cumbersome to use and quite bulky. For these reasons, stereoscopic photography, which has been in existence for quite some time, has seen relatively little use. In addition, stereoscopic images do not provide parallax effects, i.e. where an image of an object changes in response to the changing perspective imparted to a viewer as that viewer moves his or her head from one side of the object to the other. Since parallax provides important depth cues, omission of parallax effects restricts the three-dimensional accuracy inherent in a stereoscopic image. U.S. Pat. No. 4,649,425 (issued to Pund on Mar. 10, 1987) describes an electronically based stereoscopic system in which the system monitors the position of a viewer and adjusts the current position of each individual image based upon the detected position of the viewer. This system, similar to the manual stereoscopic approaches, also fails to display parallax effects.
Electronic technologies for capturing an image of a scene predominantly rely on scanning that image using well-known two-dimensional raster scan techniques. As such, the scanned image only provides a two-dimensional view of a three-dimensional scene and thereby, in a similar fashion to a photographed image, suppresses depth information from the scanned scene. An electronic display that relies on the use of two individual display elements to achieve a stereoscopic effect would suffer the same loss of parallax as such photographic displays. Relatively new electronic imaging technologies, such as tomography and the like, are able to produce a cross-sectional view through an object, such as a cross-sectional view of a patient taken transverse to that patient's spine.
Given these limitations in the technologies currently used to capture an image and/or stereoscopically display that image, it appears that the art has endeavored to display a three-dimensional object (or scene) primarily from a sequence of two-dimensional sectional images thereof, which have either been photographically or electronically obtained, rather than to display the object through stereoscopic techniques.
The simplest of all approaches known in the art that attempts to produce a three-dimensional representation of an object is simply for a viewer to arrange a "stack" of photographic images, with each image being of a different section of the object. All the sections are taken in the same direction through the object. These images would be arranged in a particular sequence with the image of an uppermost section of the object being on the top of the stack, followed by images of sections occurring at increasing depths from the top, and ending with the image of the lowest section of the object located at the bottom of the stack. With this stack, the viewer would then quickly "thumb" through these images, i.e. successively view each image in the sequence for only a fraction of a second. In this manner, the viewer, using the persistence associated with his or her vision, would, if the images are sequenced at approximately the right speed, see each image superimposed onto the previous image(s) and thereby perceive a crude three-dimensional depiction of the object. Unfortunately, this method is highly dependent upon the skill of the user. To obtain even a crude three-dimensional display, the speed at which the images are sequenced must be chosen sufficiently fast to avoid flicker but yet sufficiently slow to enable the viewer's eye to perceive and then retain each image. As one can readily appreciate, this technique is very cumbersome to use and, depending upon the skill of each viewer, produces very inaccurate and highly inconsistent results.
With this in mind, the art has turned to various techniques that attempt to mechanize the display of a sequence of photographic images in order to reliably produce three-dimensional images. One such technique, disclosed in U.S. Pat. Nos. 3,428,393 (issued to de Montebello on Feb. 18, 1969) and 3,462,213 (issued to de Montebello on Aug. 19, 1969), relies on projecting each image in a pre-defined sequence of photographic "sectional" images through a rotating spirally or helically shaped projection screen that has a transmissive and diffusive surface. Unfortunately, the systems disclosed in these patents are physically large, mechanically relatively complex and, owing to the need to transmit light through a diffusive screen, exhibit a loss of image brightness. A similar system is disclosed in U.S. Pat. No. 4,294,523 (issued Oct. 13, 1981 to Woloshuk et al) in which each image in a sequence of two-dimensional "sectional" images is arranged in a film strip and then momentarily illuminated. The resulting image is then projected through a transmissive projection screen that moves in synchronism with the rate at which the individual images are illuminated. Disadvantageously, the system is mechanically large and also, by virtue of projecting an image through a screen, suffers a loss of image brightness. Another, basically similar system, though described for use as a three-dimensional radar display, is disclosed in U.S. Pat. No. 3,202,985 (issued to Perkins et al on Aug. 24, 1965 -- hereinafter referred to as the '985 Perkins et al patent) wherein a succession of individual images is projected onto a rotating spherical spiral screen where the radius of the screen at a current rotational angle and at a given moment corresponds to the angular orientation of a radar antenna at that moment.
Because the system disclosed in the '985 Perkins et al patent requires a mirror, which directs a point source of illumination, to mechanically move at a relatively high speed to sweep out a volume on the rotating screen in a controlled three-dimensional manner and in synchronism with the three-dimensional movement of the radar antenna, this system is rather complex and difficult to implement. Another technique as disclosed in U.S. Pat. No. 4,297,009 (issued to Mezzrich et al on Oct. 27, 1981) relies on placing a sequence of two-dimensional images, specifically transparencies, of varying sectional views of an object along an annular region of a disk. The disk is then rotated at a relatively high speed with all images being successively illuminated one at a time and displayed on a rotating spiral screen. The resulting image seen by the viewer appears to be a three-dimensional composite of the individual two-dimensional images. Unfortunately, this technique is hampered by a rather limited viewing angle.
Therefore, a specific need exists in the art for a three-dimensional display system that at least reduces, if not eliminates, the deficiencies associated with three-dimensional display systems known in the art.
Accordingly, an object of the present invention is to provide a display system that generates a three-dimensional image (such as for an object) which has sufficient visual cues, including focusing, stereoscopic and parallax effects, such that the resulting image is relatively accurate.
A specific object is to provide such a system that produces relatively accurate three-dimensional images on a consistent basis, regardless of the skill level of the viewer.
Another specific object is to provide such a system that does not exhibit the same loss of image brightness or possess as limited a viewing angle as do various prior art systems.
Another specific object is to provide such a system that is relatively simple to implement and is neither cumbersome nor physically bulky.
These and other objects are accomplished in accordance with the teachings of my inventive three-dimensional display system which has: a helical reflector adapted for rotation about its central axis; an anamorphic lens coaxially aligned with and adapted for rotation in unison with the helical reflector; a two-dimensional display arranged to project light therefrom through the anamorphic lens onto the helical reflector; and a display generator, typically a graphic workstation, which itself has: means, responsive to a database of stored information which contains a three-dimensional description of an object, for determining each successive one in a series of incrementally spaced pre-defined helical sections taken through the object; means, responsive to a current position of a viewer and to the database, for ascertaining those surfaces of the object which, if displayed, would be visible to the viewer; means for determining a series of intersecting segments, wherein each segment in the series occurs as an intersection between a corresponding one of the helical sections and at least one of the visible surfaces of the object; and means for successively and selectively projecting each intersecting segment in the series onto the two-dimensional display, illustratively a projection CRT display, in response to a current position of the helical reflector, whereby as the helical reflector rotates the segments projected thereon sweep out a focused three-dimensional, typically "ghost-like", volume that depicts the object.
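The coordinated "means for" steps recited above, determining successive helical sections, culling for visible surfaces, and intersecting the two, can be summarized in a minimal Python sketch. All names (`generate_segments`, `z_range`, the simplified segment representation) are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the display-generator pipeline summarized above.
# Surfaces are reduced to a name plus a vertical extent for illustration.

def helical_section_heights(pitch, n_sections):
    """Incrementally spaced vertical offsets, one per helical section."""
    return [k * pitch / n_sections for k in range(n_sections)]

def intersect_section(surface, z):
    """Clip a surface (here, a simple vertical extent) at height z."""
    z_min, z_max = surface["z_range"]
    return {"surface": surface["name"], "z": z} if z_min <= z <= z_max else None

def generate_segments(surfaces, pitch, n_sections, visible_from):
    """Yield the intersecting segment(s) to paint at each rotational step."""
    visible = [s for s in surfaces if visible_from(s)]
    for z in helical_section_heights(pitch, n_sections):
        for s in visible:
            seg = intersect_section(s, z)
            if seg is not None:
                yield seg
```

In the actual system each yielded segment would be vectorized and painted onto the projection CRT in synchronism with the reflector's rotational position.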
In accordance with a preferred embodiment of the invention, the display system also includes a focusing lens that is coaxially aligned with said anamorphic lens and situated in an optical path between said anamorphic lens and the two-dimensional display. The focusing lens in conjunction with the anamorphic lens focuses light incident thereon from a given point on the two-dimensional display onto a pre-defined point on the helical reflector. In addition, the anamorphic lens is constructed from a planar "array" of individual small lenses that are molded or otherwise affixed into a common holder. In order to ensure that the light projected from each point on the screen of the two-dimensional display through the focusing and anamorphic lenses and onto the helical reflector is in focus at only one corresponding point on the reflector, the individual lenses vary in cross-section, and hence in magnifying power, based upon the distance, and hence the focal length, between each individual lens situated in the anamorphic lens and the closest surface thereto, in the axial direction, of the helical reflector. Those individual lenses located relatively close to the surface of the reflector have a concave cross-sectional shape and possess relatively lower magnifying powers than the individual lenses which are located relatively far from the surface of the reflector and which possess a convex cross-sectional shape. Those individual lenses that are situated within the anamorphic lens and intermediate the convex and concave lenses have relatively flat surfaces (rectangular cross-sectional shape) and thereby possess magnifying powers intermediate to those associated with the concave and convex lenses.
Furthermore, a video camera and video processor are used to provide coordinates associated with the current position of the viewer. By appropriately varying the three-dimensional display of the object in response to changes in the current position of the viewer, the inventive system imparts parallax effects into the displayed object. Inasmuch as the inventive system provides stereoscopic, focus and also parallax effects as part of the three-dimensional display, this display is more accurate and more realistic than that which could be produced through three-dimensional display systems known in the art.
The teachings of the present invention may be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
FIG. 1 depicts a block diagram of an embodiment of three-dimensional image display system 5 constructed in accordance with the teachings of the present invention;
FIG. 2 depicts a top view of one embodiment of anamorphic lens 40 that forms part of system 5 shown in FIG. 1;
FIG. 3 depicts a perspective view of the embodiment of anamorphic lens 40 shown in FIG. 2;
FIG. 4 depicts the typical cross-sectional shapes of the different types of individual lenses that collectively form the embodiment of anamorphic lens 40 shown in FIGS. 2 and 3;
FIG. 5 depicts a high level flowchart of Main Routine 500 that executes within display generator 15 shown in FIG. 1 in order to generate a three-dimensional image of illustratively an object; and
FIG. 6 depicts an axial cross-sectional view of an alternate embodiment of the helical reflector, here cone shaped helical reflector 600.
To facilitate understanding, identical reference numerals have been used to denote identical elements that are common to various figures.
After reading the following description those skilled in the art will clearly realize that my inventive system will find use in a wide variety of display applications, such as in medical imaging, computer graphics, entertainment and others. For the sake of simplicity, the following discussion will describe the invention in the context of displaying a simple three-dimensional geometric object, i.e. an inclined rectangular cylinder.
In contrast to the art, my inventive system relies on displaying a three-dimensional image of an object by projecting a succession of intersecting segments from a two-dimensional display, such as a projection cathode ray tube (CRT) display, through both a focusing lens and an anamorphic lens onto a spatially corresponding portion of a rotating helical reflector. Each segment, such as for example a line segment, is formed by the intersection of a visible outside surface of the object with a corresponding helical section taken through the object. The anamorphic lens, which rotates in unison with the helical reflector, ensures, in conjunction with the focusing lens, that each dot produced on the screen of the projection CRT display will be focused only on one given point along the reflector. By painting each such segment on a different portion of the reflector, the reflector will sweep out an accurately focused displayed volume as it rotates and thereby impart depth, i.e. a "z" dimension, to the image. As a result, a three-dimensional self-radiant "ghost-like" image of the object is displayed on the reflector and located about the central axis thereof. Furthermore, the image can be depicted as having a substantially solid appearance if it is appropriately "rendered" with simulated light sources using rendering software packages such as "Renderman" available from Pixar Inc. Based upon the number of successive intersecting segments that are painted on the reflector during each of its rotations, the three-dimensional image can be made to appear as a "wire-frame" outline of the contours and edges of the object or as a filled volume therefor. The spacings between individual "wires" decrease in systems having increased rotational speeds and corresponding frame rates. The displayed image contains appropriate visual information, such as stereoscopic, parallax and focus effects, such that it provides a relatively accurate three-dimensional depiction of the object.
With the above in mind, FIG. 1 depicts a block diagram of an embodiment of three-dimensional image display system 5 constructed in accordance with the teachings of my invention. As shown, system 5 contains display generator 15 which displays a succession of single vectors on a screen of projection CRT 20. The display generator calculates, using well-known three-dimensional graphical techniques, each vector as being a segment, e.g. a line segment, of intersection between the visible surfaces of an object, here the object being an inclined right rectangular cylinder, that is to be displayed and one of a series of parallel helical sections taken through that object. To do so, display generator 15 is first supplied, as represented by line 14, from an external source with a database that describes a three-dimensional space filling volumetric representation of the object that is to be displayed. This representation can be fabricated through any one of a variety of three-dimensional graphics techniques that are well-known in the art. One such technique represents a three-dimensional object through a series of polygons, hence producing a "polygon mesh". In this regard, see J. D. Foley et al, Fundamentals of Interactive Computer Graphics, (© 1982, Addison-Wesley Publishing Co.), pages 505-511. Regardless of the specific manner in which the object is represented, a sequence of successive parallel helical sections, each taken at an incrementally different vertical height through the object but with the same pitch and axial orientation as that of the helical reflector, is calculated by display generator 15. These sections can either be calculated in real-time while a visual three-dimensional display is being produced or calculated at some earlier time and then stored on, for example, a hard disk (not specifically shown) that forms a part of the display generator for subsequent access during formation of that display. As discussed below, the intersecting segments, e.g. 
line segments, between each of these sections and the surfaces that are then currently visible to a viewer are calculated by the display generator. Each intersecting segment is then vectorized and, through a vector generator (well-known and also not specifically shown) located internal to display generator 15, causes a corresponding two-dimensional vector to be painted onto the display screen of projection CRT 20. Display generator 15 can be any one of a variety of commercially available graphics workstations, such as illustratively a model 4300 workstation manufactured by Sun Microsystems, Inc., that is capable of driving projection CRT 20 at a sufficiently high speed. However, it will be appreciated that special purpose display system designs using similar technologies which provide higher frame rates will allow higher rotational speeds and hence better image quality.
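The geometry of a single helical section, whose axial height rises linearly with polar angle so as to match the reflector's pitch and orientation, can be sketched as follows. The function name and units are illustrative assumptions, not taken from the patent.

```python
import math

def helix_height(theta, pitch, z_offset=0.0):
    """Axial height of a helical section at polar angle theta (radians).
    The section shares the reflector's pitch: height rises by `pitch`
    over one full turn, starting from the section's vertical offset."""
    return z_offset + pitch * (theta % (2.0 * math.pi)) / (2.0 * math.pi)
```

Successive sections would differ only in `z_offset`, incremented by the pitch divided by the number of sections per rotation.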
The optical path of system 5 contains projection CRT 20, focusing lens 30, anamorphic lens 40 and helical reflector 50. Helical reflector 50 is a reflector formed of a single turn of a constant radius spiral, such as a single turn of an "Archimedes screw". The radius is not critical and should be chosen to approximately match that of anamorphic lens 40. Inasmuch as the size of the viewing area is determined by the radius of these components, these components should be sized in order to provide a comfortably large viewing area. This reflector is fabricated around shaft 55. In order to rotate the reflector, in illustratively the direction shown by arrows 67, about its central axis, one end of shaft 55 is mechanically coupled to drive shaft 65 of actuator 60. This actuator is typically a well-known servo-controlled DC motor that can rotate at a constant controlled speed. The actuator typically contains an internal tachometer (not specifically shown) which provides an analog signal proportional to the speed of drive shaft 65. This signal is applied, over leads 75, to drive circuitry 70 which, in response to this signal and a speed setting signal--such as an analog voltage level from a control potentiometer (not shown)--appearing on lead 73, generates suitable drive signals over leads 77 to cause the armature of actuator 60 to continuously rotate at the selected speed. Alternatively, actuator 60 could also be implemented using an appropriate stepping motor. Anamorphic lens 40 is mounted to the end of shaft 55 situated opposite to that connected to actuator 60. As such, this lens is synchronized to and rotates in unison with reflector 50. 
To eliminate any distortion of the three-dimensional image caused by an air atmosphere, such as by smoke, dust or the like, the optical components, including lenses 30, 40 and helical reflector 50, can all be enclosed in a sealed enclosure that is completely filled with a suitably clear stable gaseous atmosphere, for example nitrogen, argon, helium or the like. The enclosure can be transparent to afford viewing from any direction or contain a transparent viewing port. To further minimize any optical distortion resulting from an air atmosphere, the screen of the projection CRT would likely abut against an external surface of lens 30 with the lens itself being situated in an appropriate wall of the enclosure.
The magnification provided by anamorphic lens 40 varies from one side of the lens to the other, as does its focal length, in contrast to focusing lens 30 which has a fixed focal length. Together, anamorphic lens 40 and focusing lens 30 collect and focus the light emanating from each point on the screen of projection CRT 20 onto an appropriate point on helical reflector 50. As can be seen from FIG. 1, the vertical distance between anamorphic lens 40 and the closest surface thereto, in the axial direction, of the helical reflector, and hence the focal length therebetween, changes from one side of lens 40, e.g. the right side as shown in the figure, to the other, i.e. the left side. Accordingly, to ensure that the light projecting from CRT 20 and through lenses 30 and 40 onto each point on the helical reflector is always in focus at only that point on the reflector, the magnification provided by lens 40 varies from one side of this lens to the other. To simplify this lens, it is merely shown in FIG. 1 as having a rectangular side view. Lens 40 can be fabricated through either one of two methods. First, lens 40, as described in detail below in conjunction with FIGS. 2, 3 and 4, can be made as a planar "array" of closely spaced individual small lenses molded or otherwise affixed into a common holder, such as that made from illustratively plastic or a similar material. Each of these small lenses situated within the left side of lens 40 has a convex shape; while each of the small lenses situated within the right side of lens 40 has a concave shape. As such, each of the small lenses on the left side of lens 40 provides a relatively higher degree of magnification than each such lens situated on the right side of lens 40. 
Each of the individual lenses situated within anamorphic lens 40 and intermediate to the convex and concave lenses has relatively flat surface contours and hence provides a magnifying power intermediate to that provided by the concave and convex lenses. Alternatively, lens 40 could be fabricated from a single piece of optical glass with continuous surfaces but with a suitably changing cross-sectional area in order to provide the necessary variation in magnification with positional variations across the face of the lens. However, fabricating such a single piece lens is certainly considerably more difficult than assembling a planar "array" of suitably small lenses into a common frame. Moreover, if the local magnification provided by anamorphic lens 40, and specifically the magnification provided by each individual lens that forms lens 40, is sufficiently strong, then this can advantageously eliminate the need to use a separate focusing lens, such as focusing lens 30, in system 5. Accordingly, the light projected by display 20 would be directly incident onto the anamorphic lens.
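Under a simple thin-lens approximation (an assumption for illustration; the patent gives no lens prescriptions), the focal length and magnification each individual lens must provide follow from the conjugate equation 1/f = 1/u + 1/v, where u is the fixed object distance from the CRT side and v is the position-dependent distance down to the reflector surface:

```python
def required_focal_length(u, v):
    """Thin-lens focal length needed to image a point at object
    distance u onto the reflector at image distance v (1/f = 1/u + 1/v)."""
    return (u * v) / (u + v)

def magnification(u, v):
    """Lateral magnification magnitude for the same conjugate distances.
    Larger v (reflector surface farther below the lens) means higher
    magnification, consistent with the convex/concave split described."""
    return v / u
```

This simplified model only illustrates why the required optical power must vary across the face of lens 40 as the reflector surface recedes.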
Lens 40 also contains positioning indicia 45 uniformly distributed around its periphery. These indicia, in conjunction with position encoder 10, provide a train of position pulses on lead 12. These pulses are counted by display generator 15 as lens 40 rotates in order to determine the current angular orientation of the lens. The specific form of these indicia, which is not critical, i.e. whether they are locally magnetic or reflective areas, notches or otherwise, depends upon the particular technology chosen for encoder 10. These indicia and the encoder also provide a "home" pulse over leads 12 such that the display generator can synchronize itself to a known position of lens 40 and helical reflector 50 at system start-up. Alternatively, a position encoder could be mounted within actuator 60 with suitable position feedback signals being applied, through suitable conditioning circuitry (not shown), to display generator 15 in lieu of using position encoder 10.
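The pulse counting and home-pulse synchronization performed by the display generator might be sketched as follows; the class and method names are hypothetical illustrations.

```python
class PositionTracker:
    """Tracks the angular orientation of the anamorphic lens from
    encoder pulses; a sketch, not the patent's actual circuitry."""

    def __init__(self, pulses_per_revolution):
        self.ppr = pulses_per_revolution
        self.count = 0

    def on_pulse(self):
        # One positioning indicium has passed the encoder.
        self.count = (self.count + 1) % self.ppr

    def on_home_pulse(self):
        # Re-synchronize to the known start-up orientation.
        self.count = 0

    def angle_degrees(self):
        return 360.0 * self.count / self.ppr
```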
Now, with the inventive system thus far described, the present discussion will now shift to address the manner in which the inventive system generates a three-dimensional image and then address the remaining elements in the system, namely video camera 85 and video processor 80 and their use.
Since any three-dimensional object reflects light in three dimensions, any three-dimensional image of that object must also reflect light on the same three-dimensional basis as does the object. This is accomplished through my inventive system which paints each vector in a sequence--the sequence being the intersecting line segments between visible surfaces of the object and successive helical sections thereof--on projection CRT 20 and from there, via lenses 30 and 40, onto reflector 50 in synchronism with the corresponding incremental rotational position of both lens 40 and helical reflector 50.
Specifically, as the helical reflector (and the anamorphic lens) rotates, its position is tracked by display generator 15 in response to the position pulses appearing over leads 12. At each incremental rotational position of the helical reflector, the display generator paints a different vector on the screen of CRT display 20. This vector is an intersecting line segment between a particular helical section of the object and the visible surfaces thereof. As a result, through focusing lens 30 and anamorphic lens 40, every point on this segment is focused onto corresponding points located along the helical reflector. For example, if point 23 were illuminated on the screen of projection CRT 20, then the resulting illumination at this point, indicated simply by light rays 25 and 27, propagates through focusing lens 30 and anamorphic lens 40 and, as light rays 35 and 37, would then be focused on point 57 located on helical reflector 50. The light would then be reflected off the reflector in illustratively the direction of light ray 110 towards eye 90 of a human viewer. Now, as the helical reflector continues to rotate, different vectors are painted onto the screen of projection CRT 20 based upon the position of the reflector and anamorphic lens 40. As the helical reflector continues to incrementally rotate, each of the vectors would be singly and successively painted onto the screen of projection CRT 20 and thereby onto different regions of the reflector. As can be appreciated, each different vector, which corresponds to a particular section through the object, would be painted onto a corresponding line segment on the helical reflector as the reflector rotated. The light reflected from the reflector into the viewer's eye would, due to the persistence associated with human vision, fill an apparent visual volume, e.g. 
inclined rectangular cylinder 100, that is centered about the longitudinal axis of the reflector and is an accurate though "ghost-like" self-radiant depiction of the object being displayed. Projection CRT 20 only paints one vector at a time, though, again due to the persistence of human vision, the screen of the CRT would appear blurred as a three-dimensional image is being generated. It is the helical reflector, together with lenses 30 and 40, that extracts the necessary depth information from the sequence of two-dimensional vectors painted onto the CRT. In order to generate a substantially flicker-free three-dimensional display, the helical reflector should repeatedly paint the image at a rate of at least 25 Hz, which corresponds to a minimum rotational speed of 1500 revolutions/minute (rpm) for the reflector. In addition, a number of successive intersecting line segments of the object should be generated for each rotation of the reflector. If relatively few intersecting line segments (vectors), such as illustratively fewer than approximately eight such line segments, of the object are generated with each rotation of the reflector, then the resulting three-dimensional image will increasingly appear as a "wire-frame" outline of the contours and edges of the object. However, as the number of such line segments painted on the reflector increases per each rotation thereof, the three-dimensional image of the object will increasingly appear to be "filled in". For example, if a 4 inch (approximately 10 centimeter) high three-dimensional image is to be generated using forty helical sections for each rotation of the helical reflector, then, for uniformity, these sections, and the resulting intersecting line segments that collectively form the image, would be spaced apart by 0.1 inch (approximately 0.25 centimeters). Clearly, as the number of these line segments increases, the spacing therebetween will decrease thereby causing the image to progressively "fill in". 
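The timing arithmetic in this paragraph, a 25 Hz refresh giving 1500 rpm, forty sections across a 4 inch image giving 0.1 inch spacing, and (as computed below) eight vectors per rotation giving a 200 frames/second CRT rate, can be captured directly:

```python
def min_rpm(refresh_hz):
    """Rotational speed needed so the reflector repaints the full
    image at the given refresh rate (one image per revolution)."""
    return refresh_hz * 60.0

def section_spacing(image_height, sections_per_rotation):
    """Uniform vertical spacing between intersecting line segments."""
    return image_height / sections_per_rotation

def required_frame_rate(vectors_per_rotation, refresh_hz):
    """CRT frame rate needed to paint every vector on each rotation."""
    return vectors_per_rotation * refresh_hz
```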
With eight vectors being painted onto the projection CRT for each rotation of the reflector, the display generator is required to possess a video frame rate of 8*25 or 200 frames/second. This frame rate is easily obtainable with currently available graphics workstations of the type described above. The resulting three-dimensional display provides both stereoscopic and focus effects.
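The timing and spacing figures quoted above follow directly from three small relations, sketched here with the document's own numbers (25 Hz refresh, eight vectors per rotation, a 4 inch image in forty sections):

```python
def reflector_rpm(refresh_hz: float) -> float:
    """Rotational speed needed so each full rotation repaints the image."""
    return refresh_hz * 60.0

def frame_rate(vectors_per_rev: int, refresh_hz: float) -> float:
    """Vectors the display generator must paint per second."""
    return vectors_per_rev * refresh_hz

def section_spacing(image_height: float, sections_per_rev: int) -> float:
    """Uniform axial spacing between successive intersecting line segments."""
    return image_height / sections_per_rev

assert reflector_rpm(25.0) == 1500.0     # 25 Hz -> 1500 rpm
assert frame_rate(8, 25.0) == 200.0      # 8 vectors/rev at 25 Hz
assert section_spacing(4.0, 40) == 0.1   # 4 in / 40 sections = 0.1 in
```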
The image quality of the displayed three-dimensional object can be enhanced by modifying the shape of helical reflector 50 to have a cone form, specifically as shown by helical reflector 600, depicted in an axial cross-sectional view in FIG. 6. This is achieved by making the periphery of the cross-section of the cone shaped helical reflector lower than the location of its center. The advantage of this approach is that both the brightness and the line structure of the image are improved. The brightness is improved due to the increasingly direct reflection of light emanating from the reflective surface of the modified helical reflector; the line structure is improved because each frame can now represent a larger segment of the displayed three-dimensional image, such as segment 601, rather than a mere intersecting line segment. Owing to the different shape of reflector 600 relative to that of reflector 50, and specifically the changed focal length between each individual lens that forms anamorphic lens 40 and the corresponding point on the surface of reflector 600, the optical characteristics of the individual lenses that form the anamorphic lens would need to be suitably changed.
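The cone form can be sketched as a helical rise combined with a radial droop that lowers the periphery below the center of each cross-section; the function name and the droop coefficient are hypothetical, chosen only to illustrate the geometry:

```python
def cone_helix_height(theta_deg: float, r: float,
                      pitch: float, droop: float) -> float:
    """Height of a cone-form helical surface at rotation angle
    theta_deg and radius r: the helical rise minus a radial droop,
    so the rim sits below the center, as with reflector 600."""
    return pitch * (theta_deg % 360.0) / 360.0 - droop * r

# At any fixed angle, the center (r = 0) is higher than the rim:
center = cone_helix_height(90.0, 0.0, 10.0, 0.3)
rim = cone_helix_height(90.0, 5.0, 10.0, 0.3)
```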
Now, in order to add parallax effects to the displayed three-dimensional object, video camera 85 shown in FIG. 1 detects the current position of the viewer. The video camera continuously scans an area within which the viewer is situated and generates appropriate video signals. The video output of the camera is applied to video processor 80. This processor, in conjunction with internal frame comparison and motion detection circuitry (well known and not specifically shown), determines whether the viewer has moved and, if so, the current position of the viewer. The processor supplies the viewer's position in coordinate fashion, e.g. as an (x,y) coordinate pair, over leads 83 for application to display generator 15. Using this positional information, display generator 15 determines, using standard well-known three-dimensional graphics techniques, those external surfaces of the object that would then be visible to the viewer and thereafter re-calculates the intersections between these surfaces and the helical sections of the object which are then displayed. Accordingly, as the viewer moves from side to side or in other ways that would affect the surfaces of the object that he or she sees, the coordinates produced by video processor 80 correspondingly change. This change, in turn, causes the three-dimensional display of the object to change in synchronism with changes in the viewer's perspective. By injecting parallax information into the displayed three-dimensional object, the resulting display becomes not only more accurate but also more realistic than it would otherwise be.
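The patent leaves the visibility determination to "standard well-known three-dimensional graphics techniques"; one such standard technique is a back-face test, sketched below under the assumption that each surface carries an outward normal (all names are illustrative):

```python
def surface_visible(normal, centroid, viewer):
    """Back-face test: a surface faces the viewer when its outward
    normal has a positive component along the direction from the
    surface's centroid to the viewer's position."""
    view = tuple(v - c for v, c in zip(viewer, centroid))
    return sum(n * d for n, d in zip(normal, view)) > 0.0

# A face whose normal points along +x is visible from x > 0 ...
front = surface_visible((1, 0, 0), (0, 0, 0), (5, 0, 0))
# ... and hidden once the viewer moves to the opposite side.
back = surface_visible((1, 0, 0), (0, 0, 0), (-5, 0, 0))
```

Re-running such a test whenever leads 83 report new viewer coordinates is what lets the displayed surfaces follow the viewer's movement.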
Having now described the system, FIG. 2 depicts a top view of one embodiment, specifically the planar "array" embodiment, of anamorphic lens 40. As described above, lens 40 contains small individual lenses 200 arranged in a closely packed formation. Hole 240 lies at the center of lens 40 for use in mounting the lens to one end of shaft 55 (see FIG. 1). The perspective view of lens 40 shown in FIG. 3 shows, as described above, a necessary positional variation in the contour and hence magnification provided by the individual lenses that collectively form lens 40. Specifically, lenses 210, typified by lens 2101, which are located on the right side of lens 40, are all concave in cross-section, such as that illustratively shown in the cross-sectional view of lens 2101 in FIG. 4. In contrast, lenses 230, typified by lens 2301, which are located on the left side of lens 40, are all convex in cross-section, such as that illustratively shown in FIG. 4 for lens 2301. Convex lenses 230 provide a greater degree of magnification than do concave lenses 210. Moreover, the individual lenses that form concave lenses 210 have somewhat different surface contours (i.e. here different degrees of concavity) amongst themselves and hence different relative magnifications and focal lengths due to the different relative distances between each of these individual lenses and the surface of the helical reflector. Similar surface contour variations, although in terms of convexity, occur within convex lenses 230. Lenses 220, typified by lens 2201, situated between concave lenses 210 and convex lenses 230, provide varying degrees of magnification between that associated with a convex lens and that associated with a concave lens and therefore have an approximately flat contour on each optical surface thereof, such as that shown for the cross-section of lens 2201 depicted in FIG. 4. All the lenses are contained within common frame 250, as shown in FIG. 3.
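The dependence of each element's focal length on its distance to the reflector surface can be illustrated with the thin-lens relation 1/f = 1/d_o + 1/d_i, where d_o is the distance to the CRT screen and d_i the distance to the element's patch of the helical reflector. This is a textbook approximation, not the patent's design procedure, and the distances below are made up for illustration:

```python
def element_focal_length(d_obj: float, d_img: float) -> float:
    """Thin-lens focal length needed to focus a screen at distance
    d_obj onto a reflector point at distance d_img (same units)."""
    return 1.0 / (1.0 / d_obj + 1.0 / d_img)

# Elements above the high (near) part of the helix need a shorter
# focal length than those above the low (far) part:
near = element_focal_length(30.0, 5.0)
far = element_focal_length(30.0, 15.0)
```

This is why the array's elements must vary in contour across the lens, as FIG. 3 shows.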
Having discussed the hardware that forms my inventive system, I will now conclude the discussion by addressing the software. Accordingly, FIG. 5 depicts a high level flowchart of Main Routine 500 that executes within display generator 15, shown in FIG. 1, in order to generate a three-dimensional image of an illustrative object on my inventive system.
Upon entry into routine 500, which occurs upon a specific command to the display generator to generate a three-dimensional image, execution first proceeds to block 510. This block, when executed, loads a database, as described above, that contains three-dimensional volumetric data describing the object that is to be displayed. This database can be obtained from an external source and is generally loaded into an appropriate file on a hard disk drive located internal to the display generator. Once this database has been completely loaded, execution proceeds to block 520 which reads the current position of the viewer, in (x,y) coordinate form. This position is supplied, as described above and shown in FIG. 1, over leads 83 by video processor 80. After the coordinate position has been read, execution passes, as shown in FIG. 5, to block 530. This block then calculates, using the database and these coordinates, and again through use of well-known three-dimensional graphics techniques, those surfaces of the object that would be visible to the viewer and the reflection characteristics dictated by the available illumination that will be incident on the object. After this occurs, block 540 is executed. This block calculates a sequence of intersecting line segments between the visible surfaces of the object and each of a series of parallel helical sections of the object, these sections representing incrementally different rotational positions of the helical reflector. This block, if processing time permits, can calculate the helical sections through the object in real-time, using its three-dimensional representation in the database and each successive position of a desired helical section. Alternatively, these sections can be calculated in advance, pre-stored and merely accessed by block 540 during a subsequent formation of a three-dimensional display. 
These helical sections are calculated, using any of a variety of well-known three-dimensional graphics techniques, to have the same pitch and axial orientation as that of the helical reflector. If anamorphic lens 40 were constructed from a single piece of optical glass, it would likely introduce a degree of anamorphic distortion into each displayed intersecting line segment (vector) projected therethrough. As such, once block 540 has calculated each intersecting line segment, this block would then correct that line segment for the expected anamorphic distortion in order to appropriately compensate the displayed image. Substantially no such distortion is expected to occur if the anamorphic lens is constructed using a planar "array" of small lenses, as discussed in detail above. In any event, once the sequence of intersecting line segments has been completely determined, execution proceeds to block 550. If the three-dimensional image is to be displayed in color, then block 550 obtains appropriate color information for each intersecting line segment from the database of the object. On the other hand, if a monochromatic image is to be generated, then no such color information is obtained by this block. Thereafter, execution proceeds to block 560 which, when executed, instructs the vector generator located within display generator 15 (see FIG. 1) to draw a single vector, of an appropriate color(s), on projection CRT 20 for each intersecting line segment in the sequence generated by block 540 and in synchronism with the rotational position of the helical reflector. As such, this block will cause successive vectors to be continuously painted on the screen of projection CRT 20 which, in conjunction with rotating helical reflector 50, will extract "z dimension" information from these vectors and result in a continuous three-dimensional image of the object.
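Block 540's section sweep can be sketched as follows, with a point cloud standing in for the visible-surface data. The function names, the point-per-centimeter toy object, and the use of a point cloud rather than surface geometry are all illustrative assumptions:

```python
def helical_sections(points, sections: int, pitch: float):
    """Sweep of pitch-matched helical sections (in the spirit of
    block 540): for each incremental rotational position, keep the
    visible-surface points lying on that section's cutting height."""
    tol = pitch / sections / 2.0      # half the inter-section spacing
    segments = []
    for i in range(sections):
        z = pitch * i / sections      # section height matches reflector pitch
        segments.append([p for p in points if abs(p[2] - z) <= tol])
    return segments

# A toy "object": one surface point per centimeter of height.
surface = [(0.0, 0.0, float(z)) for z in range(10)]
segments = helical_sections(surface, sections=40, pitch=10.0)
```

Each non-empty entry of `segments` corresponds to one vector that block 560 would paint at the matching rotational position of the reflector.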
Clearly, those skilled in the art can readily appreciate that although display generator 15 has been described as a workstation which typically operates in a high speed serial fashion, this generator can alternatively utilize parallel processing. Specifically, as the display generator instructs its internal video generator, such as through a dedicated peripheral input/output (I/O) processor, to paint a specific vector on the projection CRT, the generator can through a separate internal graphics processor calculate the next successive intersecting line segment in the sequence and so on. At the same time, a direct memory access (DMA) operation can be underway, through separate specialized memory I/O circuitry, to transfer necessary information from the database for the object and/or from other associated files to random access memory located within the display generator, on a "look ahead" type basis, for subsequent use by the graphics processor in calculating or accessing the next successive helical section. Operating in this fashion saves processing time and provides a faster response.
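The look-ahead overlap described above can be sketched in software with a bounded queue between a producer thread and the painting loop; this is only an analogy for the dedicated graphics processor and DMA circuitry, and all names are hypothetical:

```python
import queue
import threading

def pipelined_display(compute_segment, paint_vector, n_segments, lookahead=4):
    """While vector i is being painted, later segments are already
    being computed: a bounded queue models the look-ahead handoff
    between a graphics processor and the video generator."""
    q = queue.Queue(maxsize=lookahead)

    def graphics_processor():
        for i in range(n_segments):
            q.put((i, compute_segment(i)))   # computation runs ahead of painting
        q.put(None)                          # end-of-sequence sentinel

    threading.Thread(target=graphics_processor, daemon=True).start()
    painted = []
    while (item := q.get()) is not None:
        index, segment = item
        painted.append(paint_vector(index, segment))
    return painted

# Toy run: "computing" a segment squares its index; "painting" returns it.
painted = pipelined_display(lambda i: i * i, lambda idx, seg: seg, 5)
```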
In addition, although the preferred embodiment has been shown with the projection CRT situated above helical reflector 50, the reflector and lenses 30 and 40 can merely be inverted such that the projection CRT can be built into a desk associated with the workstation used to implement display generator 15 with these optical elements and actuator 60 positioned above the CRT. Since the size of these optical components is not much larger than the viewing volume, these components can be made quite compact, if need be.
Furthermore, although the preferred embodiment includes a video camera and an associated video processor for tracking the position of a viewer and generating coordinates therefor, any one of a variety of other well-known sensors and associated processing circuitry can be used to generate these coordinates.
Although one embodiment of the present invention has been shown and described in detail herein along with certain variants thereof, many other varied embodiments that incorporate the teachings of the invention may be easily constructed by those skilled in the art.
The present invention is useful in display systems, particularly for displaying three-dimensional images. The invention advantageously provides more accurate and brighter three-dimensional images than those generally obtainable using mechanized three-dimensional display systems known in the art.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US2189374 *||May 17, 1937||Feb 6, 1940||Surbeck Leighton Homer||Apparatus for forming three dimensional images|
|US3202985 *||Sep 26, 1961||Aug 24, 1965||Gen Electric||True three-dimensional display system|
|US3324760 *||Dec 30, 1964||Jun 13, 1967||Robert B Collender||Three dimensional unaided viewing apparatus|
|US3428393 *||Nov 5, 1965||Feb 18, 1969||Montebello Roger Lannes De||Optical dissector|
|US3462213 *||Aug 26, 1968||Aug 19, 1969||Roger Lannes De Montebello||Three-dimensional optical display apparatus|
|US3682553 *||Sep 19, 1968||Aug 8, 1972||Optics Technology Inc||Apparatus for acquiring and laying real time 3-d information|
|US3956833 *||Sep 13, 1974||May 18, 1976||The United States Of America As Represented By The United States National Aeronautics And Space Administration||Vehicle simulator binocular multiplanar visual display system|
|US4130832 *||Jul 11, 1977||Dec 19, 1978||Bolt Beranek And Newman Inc.||Three-dimensional display|
|US4294523 *||Oct 19, 1979||Oct 13, 1981||The Zyntrax Corporation||Three-dimensional display apparatus|
|US4297009 *||Jan 11, 1980||Oct 27, 1981||Technicare Corporation||Image storage and display|
|US4414565 *||Oct 15, 1980||Nov 8, 1983||The Secretary Of State For Defence In Her Britannic Majesty's Government Of The United Kingdom Of Great Britain And Northern Ireland||Method and apparatus for producing three dimensional displays|
|US4649425 *||Jan 16, 1986||Mar 10, 1987||Pund Marvin L||Stereoscopic display|
|US4737921 *||Jun 3, 1985||Apr 12, 1988||Dynamic Digital Displays, Inc.||Three dimensional medical image display system|
|US4853769 *||Jun 16, 1987||Aug 1, 1989||Massachusetts Institute Of Technology||Time multiplexed auto-stereoscopic three-dimensional imaging system|
|JPS60257695A *||Title not available|
|1||*||J. D. Foley et al, Fundamentals of Interactive Computer Graphics, (©1982, Addison-Wesley Publishing Co.), pp. 505-511.|
|2||J. D. Foley et al, Fundamentals of Interactive Computer Graphics, (©1982, Addison-Wesley Publishing Co.), pp. 505-511.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US5042909 *||Sep 17, 1990||Aug 27, 1991||Texas Instruments Incorporated||Real time three dimensional display with angled rotating screen and method|
|US5072215 *||Dec 21, 1989||Dec 10, 1991||Brotz Gregory R||Three-dimensional imaging system|
|US5082350 *||Sep 19, 1989||Jan 21, 1992||Texas Instruments Incorporated||Real time three dimensional display system for displaying images in three dimensions which are projected onto a screen in two dimensions|
|US5162787 *||May 30, 1991||Nov 10, 1992||Texas Instruments Incorporated||Apparatus and method for digitized video system utilizing a moving display surface|
|US5208501 *||Jul 3, 1991||May 4, 1993||Texas Instruments Incorporated||Rim driven stepper motor and method of operation RIM driven|
|US5220452 *||Aug 7, 1991||Jun 15, 1993||Texas Instruments Incorporated||Volume display optical system and method|
|US5257345 *||May 11, 1990||Oct 26, 1993||International Business Machines Corporation||Computer system and method for displaying functional information with parallax shift|
|US5347433 *||Jun 11, 1992||Sep 13, 1994||Sedlmayr Steven R||Collimated beam of light and systems and methods for implementation thereof|
|US5347644 *||Jun 11, 1992||Sep 13, 1994||Sedlmayr Steven R||Three-dimensional image display device and systems and methods for implementation thereof|
|US5394202 *||Jan 14, 1993||Feb 28, 1995||Sun Microsystems, Inc.||Method and apparatus for generating high resolution 3D images in a head tracked stereo display system|
|US5526146 *||Jun 24, 1993||Jun 11, 1996||International Business Machines Corporation||Back-lighting system for transmissive display|
|US5568314 *||Nov 30, 1994||Oct 22, 1996||Terumo Kabushiki Kaisha||Image display apparatus|
|US5606454 *||Feb 17, 1993||Feb 25, 1997||Texas Instruments Incorporated||Volume display system and method for inside-out viewing|
|US5644427 *||Feb 7, 1995||Jul 1, 1997||Terumo Kabushiki Kaisha||Image display apparatus|
|US5646640 *||Jan 31, 1995||Jul 8, 1997||Texas Instruments Incorporated||Method for simulating views of a scene using expanded pixel data to reduce the amount of pixel data recalculations|
|US5734416 *||Dec 20, 1995||Mar 31, 1998||Nec Corp.||Stereoscopic picture display unit|
|US5745197 *||Oct 20, 1995||Apr 28, 1998||The Aerospace Corporation||Three-dimensional real-image volumetric display system and method|
|US5774261 *||Sep 23, 1994||Jun 30, 1998||Terumo Kabushiki Kaisha||Image display system|
|US5815314 *||Oct 15, 1997||Sep 29, 1998||Canon Kabushiki Kaisha||Image display apparatus and image display method|
|US5818399 *||Feb 8, 1995||Oct 6, 1998||Terumo Kabushiki Kaisha||Image communication apparatus|
|US5854613 *||May 31, 1996||Dec 29, 1998||The United States Of America As Represented By The Secretary Of The Navy||Laser based 3D volumetric display system|
|US5976017 *||Apr 9, 1997||Nov 2, 1999||Terumo Kabushiki Kaisha||Stereoscopic-image game playing apparatus|
|US6011580 *||Jun 6, 1995||Jan 4, 2000||Terumo Kabushiki Kaisha||Image display apparatus|
|US6052100 *||Sep 10, 1997||Apr 18, 2000||The United States Of America Represented By The Secretary Of The Navy||Computer controlled three-dimensional volumetric display|
|US6054817 *||Apr 4, 1996||Apr 25, 2000||Blundell; Barry George||Three dimensional display system|
|US6115058 *||Jan 16, 1998||Sep 5, 2000||Terumo Kabushiki Kaisha||Image display system|
|US6115059 *||Jan 16, 1998||Sep 5, 2000||Korea Institute Of Science And Technology||Method and system for providing a multiviewing three-dimensional image using a moving aperture|
|US6302542||Feb 20, 1999||Oct 16, 2001||Che-Chih Tsao||Moving screen projection technique for volumetric three-dimensional display|
|US6487020 *||Sep 23, 1999||Nov 26, 2002||Actuality Systems, Inc||Volumetric three-dimensional display architecture|
|US6587159 *||May 13, 1999||Jul 1, 2003||Texas Instruments Incorporated||Projector for digital cinema|
|US6697034||Jan 2, 2001||Feb 24, 2004||Craig Stuart Tashman||Volumetric, stage-type three-dimensional display, capable of producing color images and performing omni-viewpoint simulated hidden line removal|
|US6765566||Dec 22, 1998||Jul 20, 2004||Che-Chih Tsao||Method and apparatus for displaying volumetric 3D images|
|US6900779 *||Jan 26, 2001||May 31, 2005||Zheng Jason Geng||Method and apparatus for an interactive volumetric three dimensional display|
|US7284865 *||Aug 22, 2006||Oct 23, 2007||Tung-Chi Lee||Scrolling device with color separation and projection system incorporating same|
|US7286993||Jul 23, 2004||Oct 23, 2007||Product Discovery, Inc.||Holographic speech translation system and method|
|US7432932 *||May 15, 2001||Oct 7, 2008||Nintendo Co., Ltd.||External memory system having programmable graphics processor for use in a video game system or the like|
|US7857700||Sep 12, 2003||Dec 28, 2010||Igt||Three-dimensional autostereoscopic image display for a gaming apparatus|
|US7878910||Sep 13, 2005||Feb 1, 2011||Igt||Gaming machine with scanning 3-D display system|
|US8388146 *||Aug 1, 2010||Mar 5, 2013||T-Mobile Usa, Inc.||Anamorphic projection device|
|US9030652 *||Dec 2, 2012||May 12, 2015||K-Space Associates, Inc.||Non-contact, optical sensor for synchronizing to free rotating sample platens with asymmetry|
|US20010043224 *||May 15, 2001||Nov 22, 2001||A/N Inc.||External memory system having programmable graphics processor for use in a video game system or the like|
|US20050037843 *||Aug 11, 2003||Feb 17, 2005||William Wells||Three-dimensional image display for a gaming apparatus|
|US20050038663 *||Jul 23, 2004||Feb 17, 2005||Brotz Gregory R.||Holographic speech translation system and method|
|US20050059487 *||Sep 12, 2003||Mar 17, 2005||Wilder Richard L.||Three-dimensional autostereoscopic image display for a gaming apparatus|
|US20070060390 *||Sep 13, 2005||Mar 15, 2007||Igt||Gaming machine with scanning 3-D display system|
|US20110157558 *||Jul 2, 2010||Jun 30, 2011||Tung-Chi Lee||Beam splitting apparatus|
|US20120026376 *||Aug 1, 2010||Feb 2, 2012||T-Mobile Usa, Inc.||Anamorphic projection device|
|US20120162216 *||Dec 22, 2011||Jun 28, 2012||Electronics And Telecommunications Research Institute||Cylindrical three-dimensional image display apparatus and method|
|US20130141711 *||Dec 2, 2012||Jun 6, 2013||K-Space Associates, Inc.||Non-contact, optical sensor for synchronizing to free rotating sample platens with asymmetry|
|US20170013251 *||Jul 10, 2015||Jan 12, 2017||Samuel Arledge Thigpen||Three-dimensional projection|
|CN102004387A *||Sep 15, 2010||Apr 6, 2011||中国科学院自动化研究所||Full screen projection system with double helix screen|
|CN102004387B||Sep 15, 2010||Nov 7, 2012||中国科学院自动化研究所||Full screen projection system with double helix screen|
|DE19613618A1 *||Apr 4, 1996||Oct 9, 1997||Ernst Hartmut Professor Dr||Stereo image display method with reference point visualising|
|DE19613618C2 *||Apr 4, 1996||May 2, 2002||Deutsche Telekom Ag||Vorrichtung zur Einbeziehung einer gegenüber einer Anzeigeeinrichtung veränderbaren Relativlage eines Bezugspunktes in die Visualisierung von auf der Anzeigeeinrichtung wiedergegebenen Darstellung|
|EP0470800A1 *||Aug 5, 1991||Feb 12, 1992||Texas Instruments Incorporated||Volume display development system|
|EP0470801A1 *||Aug 5, 1991||Feb 12, 1992||Texas Instruments Incorporated||Apparatus and method for volume graphics display|
|EP0491284A1 *||Dec 12, 1991||Jun 24, 1992||Texas Instruments Incorporated||Volume display system and method for inside-out viewing|
|EP1330791A1 *||Oct 17, 2001||Jul 30, 2003||Actuality Systems, Inc.||Rasterization of lines in a cylindrical voxel grid|
|EP1330791A4 *||Oct 17, 2001||May 4, 2005||Actuality Systems Inc||Rasterization of lines in a cylindrical voxel grid|
|WO1995021397A1 *||Feb 7, 1995||Aug 10, 1995||Terumo Kabushiki Kaisha||Image display apparatus|
|WO1995021398A1 *||Feb 7, 1995||Aug 10, 1995||Terumo Kabushiki Kaisha||Image display apparatus|
|WO1996031986A1 *||Apr 4, 1996||Oct 10, 1996||Barry George Blundell||Improvements in a three-dimensional display system|
|U.S. Classification||348/51, 348/E13.056, 348/E13.023, 348/E13.059, 348/42, 348/E13.05, 348/E13.058|
|International Classification||G09F19/18, G02B27/22, H04N13/00, H04N13/04|
|Cooperative Classification||H04N13/0289, H04N13/0497, G02B27/2285, H04N13/0278, G09F19/18, H04N13/0493, H04N13/0459, H04N13/0477|
|European Classification||H04N13/04V3, H04N13/04P, H04N13/02E1, H04N13/04Y, H04N13/04T5, G02B27/22V2, G09F19/18|
|Sep 11, 1989||AS||Assignment|
Owner name: EASTMAN KODAK COMPANY, NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.;ASSIGNOR:MORTON, ROGER R. A.;REEL/FRAME:005135/0294
Effective date: 19890909
|Sep 20, 1993||FPAY||Fee payment|
Year of fee payment: 4
|Sep 29, 1997||FPAY||Fee payment|
Year of fee payment: 8
|Sep 28, 2001||FPAY||Fee payment|
Year of fee payment: 12