US 20100302348 A1
The following patent relates to an overall hardware configuration that produces an enhanced spatial television-like viewing experience. Unlike normal television, with this system the viewer is able to control both the viewing direction and relative position of the viewer with respect to the movie action. In addition to a specific hardware configuration, this patent also relates to a new video format which makes possible this virtual-reality-like experience.
1) An interactive image capture and display system comprising
a) an image input means including an array of electronic image capture devices distributed in a horizontal plane such that their fields of view partially overlap and collectively cover a full 360 degrees; and
b) an image storage and playback means compatible with existing television standards;
c) a signal processing means including
1) a means of producing graphical imagery depicting a panoramic image such that said panoramic image is composed of a plurality of smaller image sections;
2) a means for cropping, distorting and aligning individual images produced by the said image capture devices to produce an overall 360 degree panoramic image with negligible distortion and overlap between the individual image sections and wherein each pixel in the resulting 360 degree panoramic image has the same effective width, where each pixel subtends an equal horizontal angle to the center of said panoramic image;
3) a means for generating an image representing a subset of the said 360 degree panoramic image, whereby the azimuth and elevation of the center of said subset is adjustable by user control;
4) a means for selectively combining and geometrically altering real time imagery from said capture devices and prerecorded imagery to create a composite augmented reality experience;
5) a means for determining the correct location of said image sections within said 360 degree panoramic image utilizing additional information present in the source media;
6) a means for inserting tracking information to describe at least the current orientation of said array of electronic image capture devices into an outgoing video stream;
7) a means for encoding multi-track audio such that it maintains compatibility with standard video storage, playback and transmission systems; and
8) a means for producing orientation-sensitive audio in real-time, utilizing multi-track audio information and controlled by coordinates of a viewport within said panoramic image;
d) an image output means capable of outputting an image in a format compatible with existing television standards;
e) an audio output means capable of outputting at least 2 channels of audio;
f) a display means including at least one display device;
g) a user control means including an input device allowing the user to control said signal processing means; and
h) a tracking means capable of measuring at least azimuth and elevation of said array of electronic image capture devices.
2) The system according to
3) A system according to
4) A system according to
5) A system according to
6) A system according to
7) A system according to
8) A system according to
9) A system according to
10) A system according to 5 further comprising within one or more scan lines of a standard video image, additional coded data providing
a) audio information for generation of four or more real-time audio tracks; and
b) data descriptive of a number of employed audio tracks, an employed audio data format, an employed audio sampling rate, and track synchronization, whereby said signal processing means can decode the audio information into position and orientation sensitive sound.
11) A system according to
12) A system according to
a) means for mathematically combining information about azimuth and elevation of a viewer; and
b) means for encoding multi-track audio for use with standard video storage and transmission systems such that the combined information can be subsequently decoded by specific hardware to produce a left and right audio channel with spatially correct three-dimensional audio for the left and right ears of a viewer.
13) A system according to
14) A system according to
15) A system according to
a) a tracking device for continuously calculating a viewer's physical position; and
b) means for varying the position of a viewpoint within a three-dimensional virtual space responsive to said position.
16) A system according to
17) A system according to
18) A system according to
19) A system according to
a) one or more video digitizing modules;
b) one or more memory areas selected from the group consisting of ARM, VRM, and TM;
c) digital processing means for
1) altering address mapping of data held in at least one of ARM and VRM so as to effectively move graphical information from one location to another therein; and
2) mathematically combining and altering data from both a source location and a destination location, thereby achieving the functions of compositing and transformation; and
d) one or more video generation modules.
20) A system according to
21) A system according to
22) A system according to
a) means for displaying imagery;
b) means for placing said real-time video imagery into ARM and source information from said video playback means into VRM; and
c) means for combining imagery from ARM and VRM according to a pattern of data held in TM into a composite image before display.
23) A system according to
a) means for displaying imagery;
b) means for placing source information from said video playback means into ARM and VRM; and
c) means for combining imagery from ARM and VRM according to a translation map included in the source media.
24) The system according to
a) means for displaying imagery;
b) means for placing source information from said video playback means into ARM and VRM; and
c) means for combining imagery from ARM and VRM in accordance with a geometric interpretation of said real-time video imagery.
25) A system according to
26) A system according to
27) A system according to
28) A system according to
29) A system according to
a) a plurality of reflective targets placed at predetermined coordinates;
b) a plurality of on-axis light sources strobed in synchronization with the capture rate of said array of electronic image capture devices; and
c) means for computing absolute angular and spatial data based on said predetermined coordinates and relative angular and spatial data determined by said array of electronic image capture devices.
30) A system according to
31) A system according to
32) A system according to
This application is a continuation of application Ser. No. 09/891,733, filed Jun. 25, 2001.
The following patent relates to an overall hardware configuration that produces an enhanced spatial television-like viewing experience. Unlike normal television, with this system the viewer is able to control both the viewing direction and relative position of the viewer with respect to the movie action. In addition to a specific hardware configuration, this patent also relates to a new video format which makes possible this virtual-reality-like experience. Additionally, several proprietary video compression standards are also defined which facilitate this goal. The VTV system is designed to be an intermediary technology between conventional two-dimensional cinematography and true virtual reality. There are several stages in the evolution of the VTV system, ranging from, in its most basic form, a panoramic display system to, in its most sophisticated form, a full object-based virtual reality system utilizing animated texture maps and featuring live actors and/or computer-generated characters in a full “environment aware” augmented reality system.
As can be seen in
In a more sophisticated configuration, as shown in
The wireless HMD is connected to the VTV processor by virtue of a wireless data link, the “Cybernet link”. In its most basic form this link is capable of transmitting video information from the VTV processor to the HMD and transmitting tracking information from the HMD to the VTV processor. In its most sophisticated form the Cybernet link would transmit video information both to and from the HMD in addition to transferring tracking information from the HMD to the VTV processor. Additionally, certain components of the VTV processor may be incorporated in the remote HMD, thus reducing the data transfer requirement through the Cybernet link. This wireless data link can be implemented in a number of different ways utilizing either analog or digital video transmission (in either an uncompressed or a digitally compressed format) with a secondary digitally encoded data stream for tracking information. Alternately, a purely digital unidirectional or bidirectional data link which carries both of these channels could be incorporated. The actual medium for data transfer would probably be microwave or optical; however, either transfer medium may be utilized as appropriate. The preferred embodiment of this system is one which utilizes on-board panoramic cameras fitted to the HMD in conjunction with image analysis hardware on board the HMD, or possibly on the VTV base station, to provide real-time tracking information. To further improve system accuracy, retroflective markers may also be utilized in the “real world environment”. In such a configuration, switchable light sources placed near to the optical axis of the on-board cameras would be utilized in conjunction with these cameras to form a “differential image analysis” system. Such a system features considerably higher recognition accuracy than one utilizing direct video images alone.
Ultimately, the VTV system will transfer graphic information utilizing a “universal graphics standard”. Such a standard will incorporate an object based graphics description language which achieves a high degree of compression by virtue of a “common graphics knowledge base” between subsystems. This patent describes in basic terms three levels of progressive sophistication in the evolution of this graphics language.
These three compression standards will for the purpose of this patent be described as:
In its most basic format the VTV system can be thought of as a 360 Degree panoramic display screen which surrounds the viewer.
This “virtual display screen” consists of a number of “video Pages”. Encoded in the video image is a “Page key code” which instructs the VTV processor to place the graphic information into specific locations within this virtual display screen. As a result of this ability to place images dynamically, it is possible to achieve the effective equivalent of both high resolution and high frame rates without significant sacrifice to either. Unlike conventional cinematography, in which the key elements (which are generally moving) are located in the primary scene, the majority of a panoramic image is generally static; only the sections of the image which are rapidly changing require rapid image updates.
In its most basic form the VTV graphics standard consists of a virtual 360 degree panoramic display screen upon which video images can be rendered from an external video source such as VCR, DVD, satellite, camera or terrestrial television receiver such that each video frame contains not only the video information but also information that defines its location within the virtual display screen. Such a system is remarkably versatile as it provides not only variable resolution images but also frame rate independent imagery. That is to say, the actual update rate within a particular virtual image (entire virtual display screen) may vary within the display screen itself. This is inherently accomplished by virtue of each frame containing its virtual location information. This allows active regions of the virtual image to be updated quickly at the nominal perception cost of not updating sections on the image which have little or no change. Such a system is shown in
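The page-placement idea described above can be sketched in software. This is only an illustrative model, not the patented hardware: the page dimensions, page count and key-code layout below are assumptions chosen for the example.

```python
# Hypothetical sketch of page-based placement within a 360-degree
# "virtual display screen". Page geometry and codes are illustrative.

PAGE_W, PAGE_H = 160, 120      # size of one video page (assumed)
PAGES_ACROSS = 8               # assumed: 8 pages span the full 360 degrees

def make_screen():
    """Panoramic buffer: PAGE_H rows of PAGES_ACROSS * PAGE_W pixels."""
    return [[0] * (PAGE_W * PAGES_ACROSS) for _ in range(PAGE_H)]

def place_page(screen, page_code, frame):
    """Copy one decoded frame into the slot named by its Page key code.
    Only pages that actually changed need to be updated each field,
    which is how the standard trades resolution against frame rate."""
    x0 = (page_code % PAGES_ACROSS) * PAGE_W
    for y in range(PAGE_H):
        for x in range(PAGE_W):
            screen[y][x0 + x] = frame[y][x]
    return screen
```

Because each frame carries its own placement code, a rapidly changing page can be resent every field while static pages are left untouched, exactly as the text describes.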
To further improve the realism of the imagery, the basic VTV system can be enhanced to the format shown in
Due to constant variation of absolute planes of reference, mobile camera applications (either HMD based or Pan-Cam based) require additional tracking information for azimuth and elevation of the camera system to be included with the visual information in order that the images can be correctly decoded by the VTV graphics engine. In such a system, absolute camera azimuth and elevation becomes part of the image frame information. There are several possible techniques for the interpretation of this absolute reference data. Firstly, the coordinate data could be used to define the origins of the image planes within the memory during the memory writing process. Unfortunately, this approach will tend to result in remnant image fragments being left in memory from previous frames with different alignment values. A more practical solution is simply to write the video information into memory with an assumed reference point of 0 azimuth, 0 elevation. This video information is then correctly displayed by correcting the display viewport for the camera angular offsets. One possible data format for such a system is shown in
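The "write at 0,0 and correct the viewport" approach above can be illustrated with a small sketch. The function name and sign convention are assumptions for the example; the patent does not fix a particular convention.

```python
def corrected_viewport(view_az, view_el, cam_az, cam_el):
    """Imagery is written into memory as if the camera were at
    0 azimuth / 0 elevation; at display time the viewport is offset
    by the recorded camera angles so the scene appears world-stable.
    Azimuth wraps at 360 degrees; elevation is clamped (assumed)."""
    az = (view_az - cam_az) % 360.0
    el = max(-90.0, min(90.0, view_el - cam_el))
    return az, el
```

For instance, if the viewer looks at world azimuth 10 while the camera was panned to 30 at capture time, the stored imagery must be read from azimuth 340 of the buffer.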
In addition to 360 Degree panoramic video, the VTV standard also supports either 4 track (quadraphonic) or 8 track (octaphonic) spatial audio. A virtual representation of the 4 track system is shown in
In its most basic form, the VTV standard encodes the multi-track audio channels as part of the video information in a digital/analogue hybrid format as shown in
In addition to this pre-scaling of the digital information, an audio control bit (AS) is included in each field (at line 21). This control bit sets the audio buffer sequence to 0 when it is set. This provides a way to synchronize the 4 or 8 track audio information so that the correct track is always being updated from the current data regardless of the sequence of the video Page updates.
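The AS-bit synchronization rule can be sketched as a tiny state update. This is a minimal model assuming one audio track is delivered per field and rotated round-robin; the actual buffer layout in the VTV format is not specified here.

```python
def next_audio_track(current_track, as_bit, num_tracks=4):
    """Advance the round-robin audio track counter. When the AS control
    bit (carried at line 21 of the field) is set, the sequence resets to
    track 0, so the decoder stays in step with the encoder regardless
    of the order in which video Pages were updated."""
    if as_bit:
        return 0
    return (current_track + 1) % num_tracks
```

The same logic extends to the 8-track (octaphonic) case by passing `num_tracks=8`.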
In more sophisticated multimedia data formats such as computer AV files and digital television transmissions, these additional audio tracks could be stored in other ways which may be more efficient or otherwise advantageous.
It should be noted that, in addition to its use as an audiovisual device, this spatial audio system/standard could also be used in an audio-only mode by combining a suitable compact tracking device with a set of cordless headphones to realize a spatial-audio system for advanced hi-fi equipment.
In addition to this simplistic graphics standard, there are a number of enhancements which can be used alone or in conjunction with the basic VTV graphics standard. These three graphics standards will be described in detail in subsequent patents; however, for the purpose of this patent, they are known as:
The first two standards relate to the definitions of spatial graphics objects, whereas the third graphics standard relates to a complete VR environment definition language which utilizes the first two standards as a subset and incorporates additional environment definitions and control algorithms.
The VTV graphics standard (in its basic form) can be thought of as a control layer above that of the conventional video standard (NTSC, PAL etc.). As such, it is not limited purely to conventional analog video transmission standards. Using basically identical techniques, the VTV standard can operate with the HDTV standard as well as many of the computer graphics and industry audiovisual standards.
The VTV graphics processor is the heart of the VTV system. In its most basic form this module is responsible for the real-time generation of the graphics which are output to the display device (either conventional TV/HDTV or HMD), in addition to digitizing raw graphics information input from a video media provision device such as a VCR, DVD, satellite, camera or terrestrial television receiver. More sophisticated versions of this module may render graphics in real time from a “universal graphics language” passed to it via the Internet or other network connection. In addition to this digitizing and graphics rendering task, the VTV processor can also perform image analysis. Early versions of this system will use this image analysis function for the purpose of determining tracking coordinates of the HMD. More sophisticated versions of this module will, in addition to providing this tracking information, also interpret the real world images from the HMD as physical three-dimensional objects. These three-dimensional objects will be defined in the universal graphics language and can then be recorded, communicated to similar remote display devices via the Internet or other network, or alternatively be replaced by other virtual objects of similar physical size, thus creating a true augmented reality experience.
The VTV hardware itself consists of a group of sub modules as follows:
The exact configuration of these modules is dependent upon other external hardware. For example, if digital video sources are used then the video digitizing module becomes relatively trivial and may consist of no more than a group of latches or a FIFO buffer. However, if composite or Y/C video inputs are utilized then additional hardware is required to convert these signals into digital format. Additionally, if a digital HDTV signal is used as the video input source then an HDTV decoder is required as the front end of the system (as HDTV signals cannot be processed in compressed format).
In the case of a field based video system such as analogue TV, the basic operation of the VTV graphics engine is as follows:
As can be seen in
Although having two input and two output stages improves the versatility of the design, the memory pool style of design means that the system can function with either one or two input and/or output stages (although with reduced capabilities) and as such the presence of either one or two input or output stages in a particular implementation should not limit the generality of the specification.
For ease of design, high-speed static RAM was utilized as the video memory in the prototype device. However, other memory technologies may be utilized without limiting the generality of the design specification.
In the preferred embodiment, the digital processing hardware would take the form of one or more field programmable logic arrays or a custom ASIC. The advantage of using field programmable logic arrays is that the hardware can be updated at any time. The main disadvantage of this technology is that it is not quite as fast as an ASIC. Alternatively, high speed conventional digital processors may also be utilized to perform this image analysis and/or graphics generation task.
As previously described, certain sections of this hardware may be incorporated in the HMD, possibly even to the point at which the entire VTV hardware exists within the portable HMD device. In such a case the VTV base station hardware would act only as a link between the HMD and the Internet or other network, with all graphics image generation, image analysis and spatial object recognition occurring within the HMD itself.
Note: the low order bits of the viewport address generator are run through a look-up table address translator for the X and Y image axes, which imposes barrel distortion on the generated images. This provides the correct image distortion for the current field of view for the viewport. This hardware is not shown explicitly in
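The address-translation idea in the note above can be sketched as building a per-axis look-up table. The radial model and coefficient below are assumptions for illustration; the actual tables would be matched to the optics of the display or camera.

```python
def build_distortion_lut(size, k=0.15):
    """Build a one-axis look-up table mapping viewport addresses to
    source addresses with a simple radial (barrel) model. In hardware
    this table sits on the low order bits of the viewport address
    generator; the coefficient k here is purely illustrative."""
    half = size / 2.0
    lut = []
    for i in range(size):
        r = (i - half) / half              # normalized position, -1..1
        r_d = r * (1.0 + k * r * r)        # cubic radial distortion term
        j = int(round(half + r_d * half))  # back to address space
        lut.append(min(size - 1, max(0, j)))
    return lut
```

Separate X and Y tables of this form let the address generator warp the viewport on the fly without per-pixel trigonometry.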
It should be noted that only viewport-0 is affected by the translation engine (Warp Engine); viewport-1 is read out undistorted. This is necessary when using the superimpose and overlay augmented reality modes because VR-video material being played from storage has already been “flattened” (i.e. pincushion distorted) prior to being stored, whereas the live video from the panoramic cameras on the HMD requires distortion correction prior to being displayed by the system in Augmented Reality mode. After this preliminary distortion, images recorded by the panoramic cameras in the HMD should be geometrically accurate and suitable for storage as new VR material in their own right (i.e. they can become VR material). One of the primary roles of the Warp Engine is then to provide geometry correction and trimming of the panoramic cameras on the HMD. This includes the complex task of providing a seamless transition between camera views.
As can be seen in
The VTV system has two basic modes of operation. Within these two modes there also exist several sub modes. The two basic modes are as follows:
In augmented reality mode 1, selective components of “real world imagery” are overlaid upon a virtual reality background. In general, this process involves first removing all of the background components from the “real world” imagery. This can be easily done by using differential imaging techniques, i.e. by comparing current “real world” imagery against a stored copy taken previously and detecting differences between the two. After the two images have been correctly aligned, the regions that differ are new or foreground objects and those that remain the same are static background objects. This is the simplest of the augmented reality modes and is generally not sufficiently interesting, as most of the background will be removed in the process. It should be noted that, when operated in mobile Pan-Cam (telepresence) or augmented reality mode, the augmented reality memory will generally be updated in sequential Page order (i.e. updated in whole system frames) rather than random Page updates. This is because constant variations in the position and orientation of the panoramic camera system during filming will probably cause mismatches in the image Pages if they are handled separately.
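The differential imaging step described above reduces, in its simplest form, to a per-pixel comparison against the stored background. The threshold value and grayscale representation below are assumptions for the sketch.

```python
def foreground_mask(current, background, threshold=16):
    """Differential imaging sketch: pixels that differ from the stored
    background copy by more than a threshold are flagged as new or
    foreground objects; unchanged pixels are static background.
    Images are modeled as 2-D lists of grayscale values (assumed)."""
    return [[abs(c - b) > threshold for c, b in zip(c_row, b_row)]
            for c_row, b_row in zip(current, background)]
```

In practice the two images must first be aligned (as the text notes) and the mask cleaned of noise, but the core of the technique is exactly this comparison.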
Augmented reality mode 2 differs from mode 1 in that, in addition to automatically extracting foreground and moving objects and placing these in an artificial background environment, the system also utilizes the Warp Engine to “push” additional “real world” objects into the background. In addition to simply adding these “real world” objects into the virtual environment the Warp Engine is also capable of scaling and translating these objects so that they match into the virtual environment more effectively. These objects can be handled as opaque overlays or transparencies.
Augmented reality mode 3 differs from mode 2 in that, in this case, the Warp Engine is used to “pull” the background objects into the foreground to replace “real world” objects. As in mode 2, these objects can be translated and scaled and can be handled as either opaque overlays or transparencies. This gives the user the ability to “match” the physical size and position of a “real world” object with a virtual object. By doing so, the user is able to interact and navigate within the augmented reality environment as they would in the “real world” environment. This mode is probably the most likely mode to be utilized for entertainment and gaming purposes, as it would allow a Hollywood production to be brought into the user's own living room.
Clearly, the key to making augmented reality modes 2 and 3 operate effectively is a fast and accurate optical tracking system. Theoretically, it is possible for the VTV processor to identify and track “real world” objects in real-time. However, this is a relatively complex task, particularly as object geometry changes greatly with changes in the viewer's physical position within the “real world” environment, and as such, simple auto correlation type tracking techniques will not work effectively. In such a situation, tracking accuracy can be greatly improved by placing several retroflective targets on key elements of the objects in question. Such retroflective targets can easily be identified by utilizing relatively simple differential imaging techniques.
Virtual reality mode is a functionally simpler mode than the previous augmented reality modes. In this mode “pre-filmed” or computer-generated graphics are loaded into augmented reality memory on a random Page by Page basis. This is possible because the virtual camera planes of reference are fixed. As in the previous examples, virtual reality memory is loaded with a fixed or dynamic background at a lower resolution. The use of both foreground and background image planes makes possible more sophisticated graphics techniques such as motion parallax.
The versatility of virtual reality memory (background memory) can be improved by utilizing an enhanced form of “blue-screening”. In such a system, a sample of the “chroma-key” color is provided at the beginning of each scan line in the background field (area outside of the active image area). This provides a versatile system in which any color is allowable in the image. Thus, by surrounding individual objects with the “transparent” chroma-key color, problems and inaccuracies associated with the “cutting and pasting” of this object by the Warp Engine are greatly reduced. Additionally, the use of “transparent” chroma-keyed regions within foreground virtual reality images allows easy generation of complex sharp edged and/or dynamic foreground regions with no additional information overhead.
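The enhanced blue-screening scheme above can be sketched per scan line: the key-color sample rides at the start of the line, outside the active area, so any color remains usable inside the image. The tolerance value and single-channel pixels are assumptions for the example.

```python
def composite_scanline(fg_line, bg_line, tolerance=8):
    """Per-scanline chroma-key sketch. The first pixel of the
    foreground line (outside the active image area) carries that
    line's chroma-key sample; active pixels matching it within a
    tolerance are treated as transparent and replaced by background.
    Pixels are modeled as single grayscale values (assumed)."""
    key = fg_line[0]                      # key sample at start of line
    out = []
    for f, b in zip(fg_line[1:], bg_line):
        out.append(b if abs(f - key) <= tolerance else f)
    return out
```

Because the key is resampled on every line, the "transparent" color can change from line to line without reserving any color globally, which is the versatility the text claims.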
As can be seen in the definition of the graphics standard, additional Page placement and tracking information is required for the correct placement and subsequent display of the imagery captured by mobile Pan-Cam or HMD based video systems. Additionally, if spatial audio is to be recorded in real-time then this information must also be encoded as part of the video stream. In the case of computer-generated imagery, this additional video information can easily be inserted at the render stage. However, in the case of live video capture, this additional tracking and audio information must be inserted into the video stream prior to recording. This can effectively be achieved through a graphics processing module hereinafter referred to as the VTV encoder module.
In the case of imagery collected by mobile panoramic camera systems, the images are first processed by a VTV encoder module. This device provides video distortion correction and also inserts video Page information, orientation tracking data and spatial audio into the video stream. This can be done without altering the video standard, thereby maintaining compatibility with existing recording and playback devices. Although this module could be incorporated within the VTV processor, having this module as a separate entity is advantageous for use in remote camera applications where the video information must ultimately be either stored or transmitted through some form of wireless network.
For any mobile panoramic camera system such as a “Pan-Cam” or HMD based camera system, tracking information must comprise part of the resultant video stream in order that an “absolute” azimuth and elevation coordinate system be maintained. In the case of computer-generated imagery this data is not required as the camera orientation is a theoretical construct known to the computer system at render time.
The basic tracking system of the VTV HMD utilizes on-board panoramic video cameras to capture the required 360 degree visual information of the surrounding real world environment. This information is then analyzed by the VTV processor (whether it exists within the HMD or as a base station unit) utilizing computationally intensive yet relatively algorithmically simple techniques such as auto correlation. Examples of a possible algorithm are shown in
The simple tracking system outlined in
An alternative implementation of this tracking system is possible utilizing a similar image analysis technique to track a pattern on the ceiling to achieve spatial positioning information, and simple “tilt sensors” to detect angular orientation of the HMD/Pan-Cam system. The advantage of this system is that it is considerably simpler and less expensive than the full six axis optical tracker previously described. The fact that the ceiling is at a constant distance and known orientation from the HMD greatly simplifies the optical system, the quality of the required imaging device and the complexity of the subsequent image analysis. As in the previous six-axis optical tracking system, this spatial positioning information is inherently in the form of relative movement only. However, the addition of “absolute reference points” allows such a system to re-calibrate its absolute references and thus achieve an overall absolute coordinate system. This absolute reference point calibration can be achieved relatively easily utilizing several different techniques. The first, and perhaps simplest, technique is to use color sensitive retroflective spots as previously described. Alternately, active optical beacons (such as LED beacons) could also be utilized. A further alternative absolute reference calibration system which could be used is based on a bi-directional infrared beacon. Such a system would communicate a unique code between the HMD and the beacon, such that calibration would occur only once each time the HMD passed under any of these “known spatial reference points”. This is required to avoid “dead tracking regions” within the vicinity of the calibration beacons due to multiple origin resets.
The basic auto correlation technique used to locate movement within the image can be simplified into reasonably straightforward image processing steps. Firstly, rotation detection can be simplified into a group of lateral shifts (up, down, left, right) symmetrical around the center of the image (the optical axis of the camera). Additionally, these “sample points” for lateral movement do not necessarily have to be very large. They do, however, have to contain unique picture information. For example, a blank featureless wall will yield no useful tracking information; however, an image with high contrast regions such as edges of objects or bright highlight points is relatively easily tracked. Taking this thinking one step further, it is possible to first reduce the entire image into highlight points/edges. The image can then be processed as a series of horizontal and vertical strips such that auto correlation regions are bounded between highlight points/edges. Additionally, small highlight regions can very easily be tracked by comparing previous image frames against current images and determining the “closest possible fit” between the images (i.e. minimum movement of highlight points). Such techniques are relatively easy and well within the capabilities of most moderate speed micro-processors, provided some of the image pre-processing overhead is handled by hardware.