Publication number: US 3725563 A
Publication type: Grant
Publication date: Apr 3, 1973
Filing date: Dec 23, 1971
Priority date: Dec 23, 1971
Inventor: Brian J. Woycechowsky
Original Assignee: The Singer Company
External links: USPTO, USPTO Assignment, Espacenet
Method of perspective transformation in scanned raster visual display
US 3725563 A
Abstract
A general method of providing perspective transformations in a visual display system having an image generated by a scanned raster device such as a CRT or television projector is shown. The television display is a window out of which an observer views a simulated picture of terrain. The line of sight from the observer passing through the instantaneous spot position on the window is used to find a ground intersection point, the location of this point on an image source, such as film, is found and a video signal representing that image is generated by positioning the scan of an image pick up device to that point on the image source.
Description

United States Patent [11] 3,725,563
Woycechowsky [45] Apr. 3, 1973

[54] METHOD OF PERSPECTIVE TRANSFORMATION IN SCANNED RASTER VISUAL DISPLAY

[75] Inventor: Brian J. Woycechowsky, Binghamton, N.Y.
[73] Assignee: The Singer Company, Binghamton, N.Y.
[22] Filed: Dec. 23, 1971
[21] Appl. No.: 211,372
[63] Related U.S. Application Data: continuation-in-part of Ser. No. 134,238, Apr. 15, 1971, abandoned
[52] U.S. Cl.: 35/10.2, 35/12 N
[51] Int. Cl.: G09b 9/08; H04n 3/30; G01s 7/20
[58] Field of Search: 178/DIG. 20, DIG. 35; 35/10.2, 35/12 N; 235/186; 343/7.9

References Cited, UNITED STATES PATENTS:
3,098,929  7/1963   Kirchner       343/7.9 X
3,261,912  7/1966   Hemstreet      178/DIG. 20
3,060,596  10/1962  Tucker et al.  35/10.2

Primary Examiner: Malcolm A. Morrison
Assistant Examiner: R. Stephen Dildine, Jr.
Attorney: Francis L. Masselle et al.

19 Claims, 11 Drawing Figures

[Drawing sheets 1-10 not reproduced. Recoverable FIG. 1 labels: RASTER COMPUTER, SYNC., SIMULATOR COMPUTER, IMAGE SOURCE, SCANNED RASTER DISPLAY DEVICE.]

METHOD OF PERSPECTIVE TRANSFORMATION IN SCANNED RASTER VISUAL DISPLAY

This invention relates to visual systems in general and more particularly to a method and apparatus for raster shaping to alter a perspective point in a visual system, and is a continuation-in-part of application Ser. No. 134,238 filed Apr. 15, 1971, now abandoned.

Visual systems for use in aircraft simulators and other types of trainers have gained widespread use due to the increased cost of training in an actual aircraft or other vehicle or device. In general, four basic types of visual systems have been used, one of which is a camera model system in which a probe containing a TV camera is moved over a scale terrain model in accordance with computed attitude and position of the simulator. The resulting image is displayed to the trainee with a TV projector or CRT.

A second type of system is the film-based system in which a predetermined path is flown by an aircraft and a motion picture recorded. The motion picture is then shown to the trainee as he flies the same path. Deviations may be simulated by optical distortion as is shown in patents granted to H. S. Hemstreet such as U.S. Pat. No. 3,233,508 granted on Feb. 8, 1966 and U.S. Pat. No. 3,261,912 granted on July 19, 1966. Also disclosed therein is a variation of the system of the present invention in which the film image is viewed by a TV camera and the resulting image projected via TV projector or CRT. Distortion in that case is accomplished by raster shaping.

A third type of system is a scan-transparency system wherein an image is generated by scanning a transparency containing orthophotographic information. The information generated is displayed via TV as in the previous example. Such a system is shown in U.S. Pat. No. 3,439,105, granted to W. C. Ebeling et al. on Apr. 15, 1969.

A fourth type of system is a digital image generation system. In such a system image information is stored in a computer which selects the desired information for display in a TV type display.

A variation of the film based system viewed by a TV camera is a film based system scanned by a flying spot scanner to generate an image. The film based system and the scan transparency system have an important aspect in common: the recorded information on them is from a specific viewpoint. To produce a scene as it would appear if viewed from another viewpoint requires raster shaping. Although the camera model and digital systems do not have this restriction, there may be cases where it is desired to cause a change in viewpoint by raster shaping rather than moving the camera probe or reconstructing the digital image. For example, in the former case problems arise as the probe gets close to the model. In the latter, construction of images uses considerable computer time.

The present invention provides a system which may be used for raster shaping in any visual system where it is desired to transform an image containing information as viewed from one viewpoint to an image which appears as if viewed from another viewpoint.

It is the object of this invention to provide a general system which controls the shape of a raster in a visual simulation system such that a desired perspective change is achieved.

Other objects of the invention will in part be obvious and will in part appear hereinafter.

The invention accordingly comprises the several steps and the relation of one or more of such steps with respect to each of the others, and the apparatus embodying features of construction, combinations of elements and arrangement of parts which are adapted to effect such steps, all as exemplified in the following detailed disclosure, and the scope of the invention will be indicated in the claims.

For a fuller understanding of the nature and objects of the invention reference should be had to the following detailed descriptions taken in connection with the accompanying drawings, in which:

FIG. 1 is a block diagram of a preferred embodiment of the invention in combination with an aircraft simulator;

FIG. 2 is a flow diagram of a preferred set of equations for use with the invention;

FIG. 3 is a perspective view of the relationship between an observer's view through the display and the view on the image source;

FIG. 4 is a schematic view of a first type of scanned raster device;

FIG. 5 is a schematic view of a second type of scanned raster device;

FIG. 6 is a block diagram of a preferred embodiment of a raster computer for implementing the equations of FIG. 2;

FIG. 7 is a block diagram of a modification to the embodiment of FIG. 6 for compensating for image roll in those systems where it is desirable to roll the image before raster shaping is introduced;

FIG. 8 is a block diagram showing a second form of the equations of FIG. 2;

FIG. 9 is a block diagram of the implementation of the equations of FIG. 8;

FIG. 10 is a block diagram of the equations of FIG. 8 in rate rather than position form; and

FIG. 11 is a block diagram of a third form which the equations of FIG. 2 may take.

FIG. 1 is a basic representation of the systems with which the present invention may be used. Block 11 is an image source. It may be an image recorded on a frame of film, the image picked up by a probe in a camera model system, a digitally generated image, an orthophoto or other image. Block 13 is a scanned raster device. It may be a TV camera viewing image source 11 or a flying spot scanner device scanning image source 11 to produce a video signal.

Display 15 may be one or more TV projectors, CRTs, laser projectors or other similar devices capable of projecting a video signal. Raster computer 17 is the system of the present invention which shapes the raster of device 13 to obtain the desired perspective. Sync generator 19 provides sync commands to synchronize the scans of raster device 13 and display 15.

In each case the image presented by block 11 will represent a scene as it would appear from a predetermined viewpoint. If it is film, it will be as viewed from the location of the taking camera; if a probe image, it will depend on the probe position and attitude. Likewise, if an orthophoto it will appear as a map view from a certain altitude, and if digitally generated it will represent a view based on computer inputs. In each case, however, the viewpoint of the image is known.

Examining the balance of FIG. 1 will further show the problem to be solved by the present invention. Display 15 is in a position to be viewed by a trainee in simulator cockpit 21. This cockpit will contain controls and instruments duplicating those of the actual aircraft being simulated. Control movements will be supplied as inputs to computer 23 which will use these inputs in equations of motion to compute the aircraft state vector (position, attitude, velocity). From this computed data, outputs from computer 23 drive the instruments in cockpit 21 such as the altimeter, airspeed indicator, etc.

The state vector information of the aircraft is available in computer 23 and may be used along with the information concerning the viewpoint from which the image was made by raster computer 17 in determining proper raster shape. This viewpoint information is contained in block 11 and is provided to raster computer 17 and/or simulator computer 23. These two computers work together, as will be explained later.

The information may, for example, be recorded on the film and picked up by a device in block 11 in a film based system. In a camera model system the position and attitude of the probe will be available. In a scan transparency system, the scale of the orthophoto will be known; and in a computer generated image, the inputs used in constructing the image will be known. Thus, computers 17 and 23 have available the state vector of the simulated aircraft and the state vector of the image present in image source 11. This information will of course be constantly updated as the simulator flies and as the image changes due to film advancement, probe movement, etc.

A third type of information is used in the present invention. This is the instantaneous position of the scanning spot on the display as referenced to the eyepoint of the trainee. This information is known indirectly through sync generator 19 which controls the scanning of the spot on display 15. For an explanation of how the display raster may be made quite accurate, see U.S. application Ser. No. 130,217 filed by R. F. H. McCoy et al. on Apr. 1, 1971 and assigned to the same assignee as the present invention.

In general terms, it is known from sync generator 19 when the sweep is started, and the characteristics of the sweep are known. From this information it is possible to compute the instantaneous spot position, as will be shown in more detail later.

Using these items of information, i.e., the aircraft state vector, the image source state vector, and the instantaneous spot position, along with the relationship between the aircraft body axes and the display axes, it is possible to compute the intersection of a line from the pilot's nominal eye-point passing through the instantaneous spot position with the ground, to then determine where or if that point intersects the image source, and then to position the scan of device 13 to that spot.
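The geometric step just summarized, intersecting the line of sight through the instantaneous spot with the ground, can be sketched numerically. The following is an illustrative reconstruction, not the patent's analog mechanization: it assumes level flight with the window aligned to the horizontal frame (so every rotation matrix is the identity), a flat ground plane, and all names are invented for the example.

```python
# Hedged numerical sketch of the ground-intersection step.  Assumes level
# flight with the window aligned to the horizontal frame (all rotations are
# identity) and a flat ground plane; names are illustrative.

def ground_intersection(eye_alt, tan_psi_w, tan_theta_w):
    """Sight line through the spot (1, tan psi_w, tan theta_w) on a window a
    unit distance ahead of the eyepoint; returns the flat-ground hit point in
    the horizontal frame relative to the eyepoint, or None above the horizon."""
    d1, d2, d3 = 1.0, tan_psi_w, tan_theta_w   # direction "lines" of the ray
    if d3 >= 0:                                # at or above the horizon
        return None
    t = -eye_alt / d3                          # stretch ray until it drops eye_alt
    return (t * d1, t * d2)

print(ground_intersection(1000.0, 0.2, -0.5))  # -> (2000.0, 400.0)
```

A spot looking 0.5 units down per unit forward from 1,000 units of altitude strikes the ground 2,000 units ahead, as expected from similar triangles.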

FIG. 2 shows a flow diagram of the computations. From the state vector of the simulated aircraft, the rotation of the aircraft with respect to a horizontal frame of reference is known. These rotations are theta_s, the simulated pitch angle; phi_s, the simulated roll angle; and psi_s, the simulated heading angle. From these angles, computer 23 of FIG. 1 may compute the sines and cosines of the angles; and from the sines and cosines, the direction cosines of the simulated aircraft body to ground reference axes. This computation is shown in block 25 of FIG. 2 and results in a matrix of direction cosines, the a_ij.
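As a concrete illustration of the block-25 style of computation, the sketch below builds a 3x3 direction-cosine matrix from heading, pitch and roll. The heading-pitch-roll rotation order used here is the conventional aerospace one and is an assumption; the patent does not spell out its sequence, and all names are illustrative.

```python
import math

# Illustrative direction-cosine matrix from Euler angles (heading psi,
# pitch theta, roll phi), assuming the conventional aerospace rotation
# order; this is a sketch, not the patent's stated formulation.

def direction_cosines(psi, theta, phi):
    sp, cp = math.sin(psi), math.cos(psi)
    st, ct = math.sin(theta), math.cos(theta)
    sf, cf = math.sin(phi), math.cos(phi)
    return [
        [ct * cp,                ct * sp,               -st     ],
        [sf * st * cp - cf * sp, sf * st * sp + cf * cp, sf * ct],
        [cf * st * cp + sf * sp, cf * st * sp - sf * cp, cf * ct],
    ]

a = direction_cosines(0.0, 0.0, 0.0)
print(a == [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])  # -> True
```

Zero heading, pitch and roll give the identity, and each row stays unit length for any angles, as direction cosines must.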

Computer 23 may also compute the direction cosines of the window axes referenced to the body axes, the omega_ij, from psi_w, the window heading with respect to the body axes; theta_w, the window pitch with respect to the body axes; and phi_w, the window roll with respect to the body axes. The computation required to evaluate the omega_ij corresponds to the a_ij computation shown in block 25. If the window axes are fixed with respect to the body axes, the omega_ij are constant and therefore need not be continuously computed. The evaluation of the omega_ij is indicated in block 27.

In general, the simulated eyepoint is located some distance away from the simulated center of gravity. In situations where the eyepoint displacement is significant (e.g., takeoff and landing situations for transport aircraft), the eyepoint displacement with respect to the center of gravity must be taken into account. The components of eyepoint displacement with respect to the center of gravity are referenced to the horizontal frame of reference by multiplying the body axes coordinates of the eyepoint by the a_ij matrix. The evaluation of the horizontal frame components of the eyepoint with respect to the center of gravity (x_EP, y_EP, z_EP) is shown in block 29. Eyepoint altitude with respect to the horizontal plane of reference, h_EP, is also computed in block 29 by subtracting z_EP from the altitude of the simulated aircraft, h_s.
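The block-29 step can be sketched as a plain matrix-vector multiply followed by the altitude subtraction. The convention that the matrix rows map body components into the horizontal frame is an assumption, as are all the names below.

```python
# Sketch of the block-29 eyepoint computation: rotate the body-frame offset
# of the eyepoint from the center of gravity into the horizontal frame, then
# form eyepoint altitude h_EP = h_s - z.  The row convention of the matrix
# and every name here are illustrative assumptions.

def eyepoint_horizontal(a, eye_body, h_s):
    x, y, z = (sum(a[i][j] * eye_body[j] for j in range(3)) for i in range(3))
    return x, y, h_s - z

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# eyepoint 10 units forward of and 2 units above the cg (body z points down)
print(eyepoint_horizontal(identity, (10.0, 0.0, -2.0), 500.0))  # -> (10.0, 0.0, 502.0)
```

With an identity attitude, an eyepoint 2 units above the center of gravity simply raises the eyepoint altitude by 2.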

Horizontal frame of reference components of eyepoint position relative to image position (delta-x and delta-y) are found by respectively adding x_EP and y_EP to the horizontal frame components of the simulated aircraft's center of gravity (x_s and y_s) and then subtracting the image position coordinates (x_I and y_I). This computation is shown in block 31 of FIG. 2.

The altitude associated with a frame of film (h_I), or with a probe in a camera model system, etc., and the image attitude, represented by psi_I, theta_I and phi_I, are provided directly to the raster computer 17 of FIG. 1 by the image source 11.

The remainder of the computations must be done in raster computer 17 of FIG. 1, which is an analog computer, because the computations are being done for an instantaneous spot position. The angles psi_w and theta_w, representing the coordinates of the instantaneous spot position as viewed from the pilot's eyepoint, are generated in a manner to be described later. For a rectangular window located a unit distance from a nominal pilot's eyepoint, the window axes coordinates of the instantaneous spot position are 1, tan psi_w, tan theta_w. These quantities are transformed to the body axes in block 33 through the use of the omega_ij matrix to obtain the direction lines of a line passing through the instantaneous spot position referenced to the body frame (e_1, e_2, e_3). If the screen is spherical rather than planar, this line is defined by the direction cosines cos psi_w cos theta_w, sin psi_w cos theta_w, and sin theta_w, where psi_w and theta_w are the respective window referenced longitude and latitude of the instantaneous spot position. These direction lines must be transformed to the horizontal reference system. This is done by multiplying them by the a_ij matrix in block 35. The omega_ij matrix and a_ij matrix may be combined into a single matrix prior to entering the analog computer, in which case blocks 33 and 35 would be combined.

Referring now to FIG. 3, point 37 is a fixed point on the ground which is the reference for x_s, y_s and x_I, y_I. The X and Y position of the simulated eyepoint 39 with respect to the image axes 41, delta-x and delta-y, are also shown. Line 43 is the line passing through the instantaneous spot 45 on display face 47. Since the direction lines of line 43 and the eyepoint altitude have been obtained, it is now possible to find the horizontal components (h_EP d_1/d_3 and h_EP d_2/d_3) of line 43. This computation is done in block 51 of FIG. 2, where they are added to the horizontal components of the eyepoint with respect to the image position. The results of the computation done in block 51 of FIG. 2 are the horizontal components of the ground intersection point 49 of line 43 with respect to the image position. If the ground is assumed to be horizontal, the vertical component of the ground intersection point 49 with respect to the image position is the altitude of the image position, h_I. However, these components are multiplied by d_3 in block 51 in order to avoid divisions by d_3. It can be seen from FIG. 2 that when the image coordinates are obtained in block 59, the multiplications of these components by d_3 are cancelled.
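The scaling trick in block 51, carrying every component multiplied by d_3 rather than dividing by it, can be verified numerically: the common factor cancels when the final ratio is formed. All values below are arbitrary stand-ins.

```python
import math

# Numerical check of the block-51 scaling trick: instead of dividing by d3
# (awkward in analog hardware, and dangerous as d3 nears zero), every
# component is carried multiplied by d3, and the factor cancels when the
# block-59 ratio is taken.  All values are arbitrary stand-ins.

h_ep = 1200.0                    # eyepoint altitude (illustrative)
dx, dy = 50.0, -20.0             # eyepoint-to-image horizontal offsets
d1, d2, d3 = 0.9, 0.1, -0.4      # direction lines of the sight line

x_gp = h_ep * d1 / d3 + dx       # "divided" form of the components ...
y_gp = h_ep * d2 / d3 + dy
x_gp_d3 = h_ep * d1 + dx * d3    # ... and the d3-scaled form
y_gp_d3 = h_ep * d2 + dy * d3

# The ratio taken later (block 59) is identical either way: d3 cancels.
print(math.isclose(y_gp / x_gp, y_gp_d3 / x_gp_d3))   # -> True
```

This is why the analog computer never has to perform the risky division by d_3 itself.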

Now that the ground intersection point 49 of FIG. 3 is known, referenced to the image axes 41, it is only necessary to determine the image plane coordinates of the intersection of line 53 (the line from the image axis origin 41 to ground intersection point 49) and the image plane 55. This is done in blocks 57 and 59. Block 57 transforms x_GP d_3, y_GP d_3 and z_GP d_3 into a frame having two of its axes in the image plane 55 using horizontal frame to image frame direction cosines. These direction cosines are defined in terms of the trigonometric functions of psi_I, theta_I and phi_I, just as the a_ij are made up of terms containing trigonometric functions of psi_s, theta_s and phi_s. The final step is shown in block 59. By similar triangles the Y and Z coordinates in the image plane, y_f and z_f, are found from x_I d_3, y_I d_3 and z_I d_3. In a film system, f is the focal length of the taking camera, and hence block 59 shows the value of f multiplying y_I d_3 / x_I d_3 and z_I d_3 / x_I d_3. If the image is obtained from a camera viewing a model, f will again be the appropriate camera focal length. In a digitally generated image, this value will be stored, since it is used in the generation of the image.
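The similar-triangles step of block 59 reduces to two ratios scaled by the focal length. The sketch below is illustrative; the names are invented, and feeding in d_3-scaled components would give the same result since the common factor cancels in the ratios.

```python
# Sketch of block 59's similar-triangles step: a point with image-frame
# components (x_I, y_I, z_I) lands on the image plane at y_f = f*y_I/x_I
# and z_f = f*z_I/x_I, f being the taking camera's focal length.  Names
# are illustrative, not the patent's.

def film_coords(f, x_i, y_i, z_i):
    return f * y_i / x_i, f * z_i / x_i

print(film_coords(0.05, 200.0, 40.0, -10.0))   # -> (0.01, -0.0025)
```

A ground point 200 units out, 40 right and 10 below the image axis maps, with a 0.05-unit focal length, to a film point 0.01 right and 0.0025 below the film center.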

Knowing where the ground intersection point is located on the image source 11, it is only necessary to position the spot of scanning device 13 so that it intersects that point on the image. For example, if device 13 of FIG. 1 is a flying spot scanner and the image is on film, a system such as FIG. 4 would be used. Flying spot scanner 61 will have an electron gun 63 and horizontal and vertical deflection plates 65 (only the vertical plates are shown). Electrons emitted by gun 63 will be deflected by plates 65 and impinge on the face of the flying spot scanner, which is coated with phosphor. The light emitted by the phosphor surface will pass through film 67 and be collected by lens 69 to be imaged on photomultiplier tube 71, which provides a video signal to display 15 of FIG. 1. The relationship between the voltage on plates 65 and the resulting spot position is well known. Thus, it is only necessary to scale the values of y_f and z_f obtained in FIG. 2 so that the proper voltages are input to the plates.

Thus, as a spot moves across display 15 of FIG. 1, its associated instantaneous ground intersection point will be computed and used to find where that spot is on the film. This information will then be used to drive the flying spot scanner spot to that point, resulting in the proper ground point being displayed on display 15 for all points in time. The same system would be used if 67 were an orthophoto rather than a normal frame of movie film. The raster shape would differ, but the computation and driving of the spot would be the same.

In a camera model system, y_f and z_f are the positions on the camera tube of the instantaneous ground intersection point. Thus, it is only necessary to drive the scan on the camera tube to that point which corresponds to the instantaneous line of sight associated with a display CRT's electron beam.

If the system is one where a TV camera is viewing an image as shown in FIG. 5, one additional step is necessary. The image 73, which could be a projected film image, a computer generated image, or other image on a screen (or CRT), is imaged on camera tube 75 through lens 77. Since the position on image 73 is known, but not the position on tube 75, it is necessary to multiply y_f and z_f by the ratio of image to object distance in the system to obtain the values used in scanning tube 75.

FIG. 6 shows a typical embodiment of raster computer 17 of FIG. 1. Sweep generator 81 will have an input on line 83 from the sync generator 19 of FIG. 1 to synchronize it with the display 15. If the display is planar, as assumed for block 33 of FIG. 2, the sweeps generated represent tan theta_w and tan psi_w. This may be done by generating a normal TV type linear sweep since, with the distance to the center of the display fixed, the tangents of theta_w and psi_w will correspond directly to the X and Y positions of the spot on the display. If a spherical display is involved, sines and cosines of theta_w and psi_w and the direction cosines cos psi_w cos theta_w, sin psi_w cos theta_w, and sin theta_w must be generated. Apparatus to generate such scans is disclosed in U.S. application Ser. No. 108,446 filed by T. Cwynar et al. on Jan. 21, 1971.
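The two sweep modes can be sketched as follows. For a planar display, a linear ramp in screen position directly represents the tangent of the viewing angle; for a spherical display, the direction cosines themselves are needed. The sample count and half-angle below are illustrative.

```python
import math

# Sketch of sweep generator 81's two modes.  For a planar display a linear
# voltage ramp *is* tan(angle), since spot position on a flat screen a fixed
# distance away is the tangent of the viewing angle.  For a spherical
# display the direction cosines themselves must be generated.  The sample
# count and half-angle are illustrative.

def planar_sweep(samples, half_angle):
    # linear ramp in screen position == tangent of the viewing angle
    limit = math.tan(half_angle)
    return [-limit + 2 * limit * i / (samples - 1) for i in range(samples)]

def spherical_direction(psi_w, theta_w):
    return (math.cos(psi_w) * math.cos(theta_w),
            math.sin(psi_w) * math.cos(theta_w),
            math.sin(theta_w))

ramp = planar_sweep(5, math.radians(30))
print(len(ramp), ramp[0] + ramp[-1])   # -> 5 0.0  (ramp symmetric about zero)
```

The spherical direction vector is always unit length, which is what makes it a set of direction cosines.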

The outputs of sweep generator 81 are inputs to block 85, a transformation apparatus. This apparatus comprises three servos, each driving sine-cosine potentiometers. The three servos correspond to psi_w, theta_w and phi_w and are driven by inputs corresponding to these values from computer 23. The computation done in this block is equivalent to that of blocks 27 and 33 of FIG. 2 combined. The servo driven potentiometers are connected together to perform the required multiplications. A system which describes how such multiplications are performed is shown in U.S. Pat. No. 3,003,252 granted to E. G. Schwarm on Oct. 10, 1961.

The outputs from block 85 are inputs to a similar transformation block 87 which has servo inputs psi_s, theta_s and phi_s. This block will do the computations of the combined blocks 25 and 35 of FIG. 2. Two of the outputs of block 87, d_1 and d_2, are multiplied by h_EP obtained from computer 23 in multipliers 89 and 91 respectively. Values of delta-x, delta-y and h_I, also obtained from computer 23, are respectively multiplied by the third output of block 87 (d_3) in multipliers 93, 95 and 97. (All multipliers may be Analog Devices Model 422J or their equivalent.)

In summing amplifier 99 the h_EP d_1 output from multiplier 89 is added to the delta-x d_3 output of multiplier 93, and in summing amplifier 101 the h_EP d_2 output of multiplier 91 is added to the delta-y d_3 output of multiplier 95. The h_I d_3 output of multiplier 97 and the outputs of amplifiers 99 and 101 (h_EP d_1 + delta-x d_3 and h_EP d_2 + delta-y d_3) form the x_GP d_3, y_GP d_3 and z_GP d_3 of block 51 of FIG. 2.

These three signals are inputs to block 103, another transformation block similar to blocks 85 and 87, wherein the computations of block 57 of FIG. 2 are performed, resulting in x_I d_3, y_I d_3 and z_I d_3. The servo inputs from computer 23 in this case are psi_I, theta_I and phi_I. The y_I d_3 and x_I d_3 are provided as inputs to divider 105, and z_I d_3 and x_I d_3 to divider 107. By scaling using normal analog techniques, the constant f of block 59 of FIG. 2 may be included in this computation, thus causing dividers 105 and 107 to have respective outputs representing the y_f and z_f of block 59 of FIG. 2. These outputs are then used as inputs to scanned raster device 13 of FIG. 1. The dividers used may be constructed using the instructions given on the data sheet for Analog Devices Multiplier Model 422J published by Analog Devices of Norwood, Mass.

As shown in FIG. 6, the matrix multiplications are done using servo multipliers. It should be noted that the omega_ij matrix of block 27 of FIG. 2 and the a_ij matrix of block 25 may be multiplied in the simulator computer, in which case only one set of angles and thus only one block 85 or 87 would be required in the embodiment of FIG. 6. It is also possible to compute the required sines and cosines in computer 23 and perform the matrix multiplications using additional multipliers similar to blocks 89, 91, etc.

Various modifications may be made without departing from the principles of the invention. One such modification is shown in FIG. 7. Because of screen shape it is often desirable to roll the image optically. However, since the equations implicitly take roll into account, if optical roll is used, derotation in the raster computer is required.

Basically, the circuits of FIG. 7 perform the function of a resolver, transforming the coordinates y_f and z_f in one axis system to the coordinates y_f' and z_f' in an axis system rotated an angle phi_D from the original system. Values of sin phi_D and cos phi_D are obtained from computer 23, and the values y_f sin phi_D, y_f cos phi_D, z_f sin phi_D and z_f cos phi_D are obtained in multipliers 111, 113, 115 and 117. In summing amplifier 119, y_f' is found by adding z_f sin phi_D and y_f cos phi_D, and in amplifier 121, z_f' is found by adding z_f cos phi_D and -y_f sin phi_D. (Signs are inverted through amplifiers 119 and 121.) In this manner optical roll, for example, is compensated for in the raster computer output.
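The resolver operation is a plane rotation of the two output coordinates. The sketch below shows one plausible sign convention; the patent's amplifiers fix their own signs by inversion, so the convention here is an assumption, as are the names.

```python
import math

# Sketch of the FIG. 7 derotation resolver: rotate the raster-computer
# outputs by the display roll angle phi_D so that an optically rolled image
# receives a counter-rotated raster.  The sign convention is one plausible
# choice, not necessarily the patent's.

def derotate(y_f, z_f, phi_d):
    s, c = math.sin(phi_d), math.cos(phi_d)
    return y_f * c + z_f * s, z_f * c - y_f * s

y2, z2 = derotate(1.0, 0.0, math.pi / 2)
print(round(y2, 12), round(z2, 12))   # -> 0.0 -1.0
```

A quarter-turn roll carries a pure y deflection entirely into the (negative) z channel, which is the expected behavior of a resolver.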

An examination of FIG. 6 shows that a relatively large number of multiplications and transformations must be done in the raster computer. Each function performed contributes to the noise in the system with the analog multipliers causing the greatest problems because of internal noise generation. Thus, it is desirable to have as few functions performed in the raster computer as possible.

The only variables changing at a rate which requires the use of analog computations are psi_w and theta_w. It was previously noted that blocks 33 and 35 may be combined by doing further computation in the digital computer. It is possible to go even further and combine not only blocks 33 and 35 but also 51 and 57 to end up with one matrix multiplication. Such an arrangement is shown in FIG. 8.

Only three blocks of computation are shown being done at fast computation rates in the analog raster computer. Sweep generator 81 provides psi_w and theta_w to block 123, where g_1, g_2 and g_3 are computed. The equations of block 33 of FIG. 2 are for a flat display, and tangent functions are used. In block 123 the equations for a spherical display are used. If block 123 were computing for a flat display, the equations would be g_1 = 1, g_2 = tan psi_w and g_3 = tan theta_w. These quantities go to block 125 where A, B and C are computed from the g's and the pi_ij's. These two computations replace all those shown in blocks 33, 35, 51 and 57 of FIG. 2. The pi_ij's are found in the digital computer 23 using the quantities in the above mentioned blocks of FIG. 2. The final block 127 corresponds to block 59 of FIG. 2. The precise way of combining all the various transformations is not shown, as it will be well within the capability of those skilled in the art to derive the equations for the pi_ij's.
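The idea of pre-multiplying all the fixed per-frame transformations into one matrix, leaving only a single matrix-vector multiply and two divisions at spot rate, can be checked numerically. The matrices below are arbitrary stand-ins for the real transformations, chosen with exactly representable values so the two computation orders agree bit for bit.

```python
# Sketch of the FIG. 8 idea: the slowly varying transformations can be
# pre-multiplied digitally into one 3x3 matrix (the pi_ij), leaving the
# fast analog side only A, B, C = pi . g and two divisions.  The matrices
# are arbitrary stand-ins, not the patent's actual transformations.

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

m1 = [[1.0, 0.0, 0.0], [0.0, 0.5, 0.0], [0.25, 0.0, 1.0]]  # stand-in transform
m2 = [[2.0, 0.0, 1.0], [0.0, 1.0, 0.0], [0.0, 0.0, 4.0]]   # stand-in transform
pi = matmul(m2, m1)               # combined once, in the digital computer
g = [1.0, 0.5, -0.25]             # per-spot quantities from the sweep

step_by_step = matvec(m2, matvec(m1, g))
combined = matvec(pi, g)
print(combined == step_by_step)   # -> True
```

This is precisely why only psi_w and theta_w need fast analog handling: everything else folds into the pi_ij ahead of time.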

The implementation of these equations is shown in FIG. 9. Sweep generator 81 is the type previously described in connection with FIG. 6. In block 129 the g's are obtained, using the types of multipliers previously mentioned in describing FIG. 6 to obtain g_1 and g_2, and an operational amplifier to invert sin theta_w for g_3. Blocks 131 are multiplying digital-to-analog converters such as Model 2254 available from Data Device Corporation of Hicksville, N.Y. In the implementation of FIG. 6 the quantities developed by the computer 23 were required to be converted to analog quantities before being used. This resulted in any noise on the analog lines being further amplified by the analog multipliers. By using the digital signals directly as multiplying D/A inputs, significant noise reduction is possible. The multiplied pi_ij g_j quantities are summed in amplifiers 133 to obtain A, B and C. The final outputs y_C and z_C are obtained by dividing B and C by A in block 135. (Basically the same computation as was done in blocks 105 and 107 of FIG. 6.)

It may be that the noise reduction of the systems of FIGS. 8 and 9 is not sufficient for some purposes. The equations for a system which uses the integration of rates are shown in FIG. 10. Since positions will be obtained using analog integrators, a filtering effect will result which should further reduce noise. The equations shown are essentially the rate equivalents of the position equations of FIG. 8.

Block 137, where the rates of the g's are computed; block 139, where the rates of A, B and C are computed; and block 141, where the rates of y_C and z_C are computed, are the equivalents of blocks 123, 125 and 127 of FIG. 8. In addition, a block 142 wherein the g's are computed for use in blocks 137 and 139 is required. And as indicated, digital computer 23 computes both the pi_ij's and their rates. The final step of integration, which provides the filtering to reduce noise, is shown in block 143.

As with any integration, initial values are required. The method of obtaining these values is shown in blocks 145, 147 and 149 in the lower part of FIG. 10. The system is initialized for each horizontal line. Thus in block 145 the g's are computed for a line beginning at a psi_w value of 30 degrees in a particular embodiment. (In other embodiments another proper constant defining the azimuth of the starting position would be used.) Thus g's for each line, based on the constant 30 degrees and the theta_w associated with a given line, are computed. In block 147 the starting-point values of A, B and C are computed, and in block 149 the starting values (y_C)_0 and (z_C)_0 are computed. These three blocks are the same as blocks 123, 125 and 127 of FIG. 8 except that, instead of computing continuous values, they only compute the initial starting point of horizontal lines.

Initialization might also be done only each field or frame if the integrators used are accurate enough. A line by line initialization, however, assures that each line will start at the same azimuth independent of integrator accuracy. It should also be noted that the initial values need not be computed in real-time and may thus be precomputed and stored. A particular implementation of these equations is not shown, as the techniques of FIGS. 6 and 8, along with other well known analog methods, may be used in implementation, as will be recognized by those skilled in the art.
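The rate-integration scheme with per-line initialization can be sketched in discrete form: the deflection is obtained by integrating its rate across each scan line, with the integrator reset to a precomputed starting value at the start of every line so drift cannot accumulate from line to line. The stand-in "signal" and all names below are illustrative.

```python
# Discrete sketch of the FIG. 10 scheme: integrate the deflection rate
# across each scan line, resetting the integrator to a precomputed starting
# value at the start of every line.  The position function is an arbitrary
# stand-in; names are illustrative.

def scan_field(lines, samples, start, stop, position_of):
    field = []
    dt = (stop - start) / (samples - 1)
    for _ in range(lines):
        y = position_of(start)            # per-line initialization
        row = [y]
        for i in range(1, samples):
            t = start + i * dt
            rate = (position_of(t) - position_of(t - dt)) / dt  # stand-in rate
            y += rate * dt                # analog integrator, discretized
            row.append(y)
        field.append(row)
    return field

field = scan_field(3, 4, -30.0, 30.0, lambda az: 2.0 * az)
print(field[0])   # -> [-60.0, -20.0, 20.0, 60.0]
```

Because each line is re-initialized, every row of the field is identical: no error carried out of one line can contaminate the next, which is the point of the line-by-line reset.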

These last two sets of equations, although offering many advantages, have certain disadvantages in cost due to the large number of technically sophisticated components. Another set of equations, which provides a raster computer simpler and more noise-free than that of FIG. 6, is shown in FIG. 11. This set of equations allows the type of servo multipliers described in connection with FIG. 6 to be used in matrix multiplications. It will be recognized that the pi_ij used in the equations of FIGS. 8 and 10 do not lend themselves to use with servos, and thus multiplying D/A converters were required.

In block 123 the g's are computed as before (in FIGS. 8 and 10). In block 151 the d's are computed in a manner similar to that done in block 33 of FIG. 2 (block 87 of FIG. 6). Here, in effect, the omega's of block 33 and the a's of block 35 of FIG. 2 have been combined into a matrix composed of terms in a single combined set of angles psi, theta and phi. Block 153 is essentially the same as block 51 of FIG. 2. Additional digital computer computations have been used to provide X, Y and H to eliminate some of the analog multiplications associated with block 51 of FIG. 2. Block 155 is the same as block 57 of FIG. 2 except that, instead of finding film image plane coordinates, the scanned raster coordinates are found directly. (This is also true in the equations of FIGS. 8 and 10.) The final step in block 157 corresponds directly to that of block 59 of FIG. 2, again with the exception that y_C and z_C rather than y_f and z_f are obtained. (The f subscript denotes film image plane coordinates and the C subscript scanned raster device coordinates.)

Implementation is essentially the same as that shown in FIG. 6. One of blocks 85 or 87 will be eliminated, since ψ, θ, φ and ψ_w, θ_w, φ_w have been combined into ψ_c, θ_c and φ_c. Multipliers 89 and 91 are eliminated since the d terms are added directly to the products of multipliers 93 and 95 (93, 95 and 97 will now have as inputs X, Y and H respectively), and the final circuit output will be y_C and z_C rather than y_f and z_f, since the inputs to block 103 will be ψ_c, θ_c and φ_c rather than ψ, θ and φ. The equations above assume that the relationship between the center of the window and the angles ψ and θ remains fixed. Such would be the case with a single fixed display window and in some cases where the center of the display (meaning here the imagery displayed) is allowed to move.

However, in certain types of systems the equations described above will have to be varied to achieve the result of always defining the line of sight from the observer's eye through the instantaneous spot position. For example, in the type of system described in application Ser. No. 66729, filed by R. F. H. McCoy on Aug. 25, 1970, wherein a total wide-angle spherical display is made up of tiers of narrow-angle displays, the display raster will generally be made to trace circles of latitude. The center of a high-resolution image to be displayed is capable of being positioned anywhere on the display and may be defined at each point in time by latitude and longitude increments referenced to the fixed display frame. The ψ_w and θ_w will then define the spot position with respect to the center of the moving window. To reference ψ_w and θ_w to this fixed frame, it is then only necessary to add the latitude and longitude (of the center of the moving window) respectively to ψ_w and θ_w, and then take the sines and cosines of the resulting angular sums in order to find the direction cosines of the instantaneous line of sight.

At this point a more detailed explanation seems in order, particularly in view of the changes required in the equations of FIG. 8 and those following. In FIG. 8 et seq., where the g terms are computed, the ψ_w and θ_w would have to be changed to (ψ_s + ψ_w) and (θ_s + θ_w), where ψ_s and θ_s represent the respective longitude and latitude of the center of the moving window. In practice it has been found difficult to combine these angles and then take their sines and cosines. This difficulty may be overcome by using the well-known trigonometric relationships for finding the sine and cosine of the sum of two angles. Doing this, however, requires that three additional g terms be computed.

The additional terms to be computed are:

g_7 = cos ψ_s sin θ_s
g_8 = sin ψ_s sin θ_s
g_9 = cos θ_s

These are then multiplied by the w's (which must be appropriately altered in such a way that properly takes the sines and cosines of ψ_w and θ_w into account) in block 125 of FIG. 8 to result in the following equations:

c = w_1 g_1 + w_2 g_2 + w_3 g_3 + w_4 g_4 + w_5 g_5 + w_6 g_6

These additional terms will of course require additional hardware computing elements, which may be constructed in the same manner as shown in FIG. 9.
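The angle-sum expansion described above can be sketched numerically; the names psi_s, theta_s (window-center longitude and latitude) and psi_w, theta_w (in-window increments) are illustrative stand-ins for the patent's symbols:

```python
import math

def expanded_sines_cosines(psi_s, psi_w, theta_s, theta_w):
    """Sines and cosines of the summed angles computed with the standard
    angle-sum identities, rather than by forming the angular sums and
    then taking sines and cosines directly (which the text notes is
    difficult to do in the analog hardware)."""
    sin_psi = math.sin(psi_s) * math.cos(psi_w) + math.cos(psi_s) * math.sin(psi_w)
    cos_psi = math.cos(psi_s) * math.cos(psi_w) - math.sin(psi_s) * math.sin(psi_w)
    sin_theta = math.sin(theta_s) * math.cos(theta_w) + math.cos(theta_s) * math.sin(theta_w)
    cos_theta = math.cos(theta_s) * math.cos(theta_w) - math.sin(theta_s) * math.sin(theta_w)
    return sin_psi, cos_psi, sin_theta, cos_theta

# The expansion must agree with taking sines/cosines of the sums directly.
vals = expanded_sines_cosines(0.3, 0.05, -0.2, 0.01)
direct = (math.sin(0.35), math.cos(0.35), math.sin(-0.19), math.cos(-0.19))
```

Each product on the right is a quantity the hardware already has (sines and cosines of the individual angles), which is why the identity trades one hard operation for several easy ones at the cost of the extra g terms.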

Thus a general method, and a number of specific implementations of that method, for changing the apparent perspective of an image have been disclosed; the method is of general application in visual systems utilizing scanned raster devices. A general set of equations and a straightforward implementation were first shown, and then various improvements resulting in increased efficiency and noise reduction were disclosed.

Although specific systems which are useful in flight simulators have been disclosed herein, the invention may be used in similar applications such as space simulators, ship simulators, driver trainers, etc.

What is claimed is:

1. In a display system for presenting to an observer a desired simulated scene of the earth's surface as viewed from the observer's viewpoint, comprising an image source depicting a portion of the earth's surface as viewed from an image viewing point, at least part of which scene contains the same information as that contained in the desired scene; a device with a controllable first spot for scanning the image source to develop a video signal; a display located within the observer's field of view to form a simulated window through which the observer may view said simulated scene, said display being of the type formed by scanning a second spot across the display to form a raster and modulating said second spot with the video signal developed by said device, a method of driving said first spot to obtain an image of the desired scene in proper perspective on said display comprising:

a. determining the simulated point of intersection with the earth's surface of a line from the observer passing through the instantaneous position of the second scanning spot on the display window;

b. determining the location on the image source of the depiction thereon of said earth intersection point; and

c. positioning the first spot so that the video signal developed corresponds to said location on said image source.
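Steps (a) through (c) can be illustrated with a small numeric sketch, assuming a flat-earth geometry, a vertical unit-focal-length image source, and illustrative coordinates; none of these specifics are from the patent:

```python
# Toy model of the claimed three-step method. Assumptions (illustrative,
# not the patent's implementation): observer at height 100 above a flat
# ground plane z = 0, image source is a vertical photograph taken from
# a viewpoint at height 50, unit focal length.

def ground_intersection(eye, direction):
    """Step (a): intersect the line of sight with the ground plane z = 0."""
    ex, ey, ez = eye
    dx, dy, dz = direction
    if dz >= 0:
        raise ValueError("line of sight does not reach the ground")
    t = -ez / dz
    return (ex + t * dx, ey + t * dy)

def image_source_location(ground_pt, img_viewpoint, h_img):
    """Step (b): locate that ground point on a vertical photo taken
    from img_viewpoint at height h_img (unit focal length assumed)."""
    gx, gy = ground_pt
    vx, vy = img_viewpoint
    return ((gx - vx) / h_img, (gy - vy) / h_img)

# Step (c) would then drive the pickup-device spot to these coordinates.
eye = (0.0, 0.0, 100.0)
spot_direction = (1.0, 0.2, -0.5)        # through the instantaneous spot
g = ground_intersection(eye, spot_direction)       # approx (200.0, 40.0)
loc = image_source_location(g, (150.0, 0.0), 50.0)
```

Repeating this per raster spot, at video rates, is exactly what the analog raster computers in the description implement.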

2. The invention according to claim 1 wherein said image source is optically rotated to simulate rotation of said display window and further including the step of compensating for said rotation whereby the scanning first spot will generally follow a path more nearly approximating a normal raster.

3. The invention according to claim 1 wherein the steps of determining said earth intersection point and said location on said image source comprise:

a. determining a first set of direction cosines of the trainer body axes to a horizontal reference axis system;

b. using said direction cosines to compute the components of said observer's eye position with respect to the simulated center of gravity in said horizontal reference axis system;

c. determining a second set of direction cosines of the display window axes to said body axes;

d. determining from the scan waveforms of said second spot the direction of a line from the observer to the instantaneous position of said second spot in the window axes frame;

e. using said second set of direction cosines to determine the direction of a line from the origin of said window axes passing through said second spot with respect to said body axes;

f. using said first set of direction cosines to reference the direction of said line to the horizontal reference axes;

g. determining the location of said observer's eyepoint with respect to said image viewing point;

h. determining from said line referenced to said horizontal axes and the altitude and location of said observer's eyepoint with respect to said image viewing point the intersection point with the earth's surface referenced to said image viewing point with respect to the horizontal axes system;

i. determining the direction of a line from said image viewing point to said intersection point with respect to an axes system referenced to the image source;

j. using the direction of said line referenced to said image source axes to determine the location on said image source of the depiction of said earth intersection point.
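Steps (a) through (j) amount to chaining direction-cosine transformations and then intersecting the resulting ray with the ground. A minimal sketch, assuming simple rotations about the vertical axis and illustrative angles (real trainer attitudes would use full three-angle direction-cosine matrices):

```python
import math

def rot_z(a):
    """Direction-cosine matrix for a rotation about the vertical axis
    (a stand-in for a full attitude direction-cosine matrix)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(m, v):
    """Apply a 3x3 matrix to a 3-vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

# Step (d): spot direction in the window frame (z component downward).
spot_window = (1.0, 0.1, -0.3)

# Steps (e)-(f): window -> body -> horizontal via the two sets of
# direction cosines (angles here are illustrative).
window_to_body = rot_z(0.05)       # second set of direction cosines
body_to_horizontal = rot_z(0.2)    # first set of direction cosines
spot_body = apply(window_to_body, spot_window)
spot_horizontal = apply(body_to_horizontal, spot_body)

# Steps (g)-(h): intersect the ray from the eyepoint with the ground.
eye = (0.0, 0.0, 80.0)
t = -eye[2] / spot_horizontal[2]
ground = (eye[0] + t * spot_horizontal[0], eye[1] + t * spot_horizontal[1])
```

Steps (i) and (j) would then repeat the same kind of transformation in the image-source frame, as sketched for claim 1 above.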

4. The invention according to claim 3 wherein said display is a wide angle spherical display having a fixed frame of reference, only a relatively small portion of which is modulated by said video signal, the center of said portion is movable and may be defined by a latitude and longitude, said second set of direction cosines are the fixed display frame to body axes direction cosines; and the direction of said line in said display frame is obtained by adding the scan waveforms of said second spot to said latitude and longitude.

5. The invention according to claim 3 wherein the steps of determining said first set of direction cosines, using said first set of direction cosines to compute said observer's eye position, determining said second set of direction cosines, and determining the location of said observer's eyepoint with respect to said image viewing point are performed in a digital computer and the remaining steps are performed by analog computing means.

6. The invention according to claim 5 wherein the results of the digital computation are combined into a third set of direction cosines so that the steps of determining the direction of a line through said second spot, referencing said spot to the horizontal reference axes, and determining the intersection point on the earth's surface are combined in the digital computer and said third set of direction cosines is then used in an analog computer to determine, from said third set and the direction lines of said second spot with respect to the window axes, the direction of a line from said image viewing point.

7. The invention according to claim 5 wherein computations are done using instantaneous position information.

8. The invention according to claim 5 wherein computations in the analog computer are done by the integration of rate information developed in the digital computer and further including a step to periodically initialize the analog computer.

9. The invention according to claim 8 wherein said initialization step is done for each horizontal scan of said second spot.

10. The invention according to claim 3 wherein said first set of direction cosines and said second set of direction cosines are computed and combined in a digital computer to form a fourth set of direction cosines and said fourth set is then used to reference said direction lines of said second spot to the horizontal axes.

11. The invention according to claim 10 wherein the image is rolled optically and further including the step of determining by computation in the digital computer an image source axes rolled by the amount of image roll and using said axes in determining the direction of said line from said image viewing point.

12. In a display system for presenting to an observer a desired simulated scene of the earth's surface as viewed from the observer's viewpoint, comprising an image source depicting a portion of the earth's surface as viewed from an image viewing point, at least part of which scene contains the same information as that contained in the desired scene; a device with a controllable first spot for scanning the image source to develop a video signal; a display located within the observer's field of view to form a simulated window through which the observer may view said simulated scene, said display being of the type formed by scanning a second spot across the display to form a raster and modulating said second spot with the video signal developed by said device, apparatus for driving said first spot to obtain an image of the desired scene in proper perspective on said display comprising:

a. means for determining the simulated point of intersection with the earth's surface of a line from the observer passing through the instantaneous position of the second scanning spot on the display window;

b. means for determining the location on the image source of the depiction thereon of said earth intersection point; and

c. means for positioning the first spot so that the video signal developed corresponds to said location on said image source.

13. The invention according to claim 12 wherein said display is used in combination with a fixed-base vehicle trainer and said observer is the trainee.

14. The invention according to claim 13 wherein said trainer is an aircraft simulator.

15. The invention according to claim 12 wherein said image source is a frame of a motion picture and said image source viewing point is the point from which said frame was taken.

16. The invention according to claim 15 wherein said device is a TV camera on which said frame is imaged.

17. The invention according to claim 15 wherein said device is a flying spot scanner, pickup photomultiplier tube and associated optics and wherein said device is arranged to scan said frame.

18. The invention according to claim 12 wherein said image source is the image obtained from an optical probe viewing a model, said model is a portion of the earth's surface and said device is a TV camera on which said image is focused.

19. The invention according to claim 12 wherein said image source is an orthophoto and said device is a flying spot scanner with associated pickup and optics arranged to scan said orthophoto.

UNITED STATES PATENT OFFICE
CERTIFICATE OF CORRECTION

Patent No. 3,725,563    Dated April 3, 1973
Inventor(s): Brian J. Woycechowsky

It is certified that error appears in the above-identified patent and that said Letters Patent are hereby corrected as shown below:

Column 5, line 15, change "If" to --With--; line 16, delete "is assumed to be"; line 32, change "This" to --These--. Column 6, line 13, change "sign" to --scan--; line 33, change "tangenets" to --tangents--; line 38, after "U.S." insert --Patent No. 3,688,098 issued on an--; line 39, change "108446" to --108,446--; change "filed by" to --of--; and change "on" to --filed--; line 40, after "1971" delete the period and insert --and assigned to the same assignee as the present invention--. Column 11, claim 1, line 9, after "signal" change the comma to a semicolon; line 23, change "depicting" to --depiction--.

Signed and sealed this 8th day of January 1974.

(SEAL) Attest:

EDWARD M. FLETCHER, JR.
Attesting Officer

RENE D. TEGTMEYER
Acting Commissioner of Patents

Patent Citations
Cited Patent | Filing date | Publication date | Applicant | Title
US3060596 * | Aug 22, 1961 | Oct 30, 1962 | Dalto Electronics Corp | Electronic system for generating a perspective image
US3098929 * | Jan 2, 1959 | Jul 23, 1963 | Gen Electric | Electronic contact analog simulator
US3261912 * | Apr 8, 1965 | Jul 19, 1966 | Gen Precision Inc | Simulated viewpoint displacement apparatus
Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US3892051 * | Oct 31, 1973 | Jul 1, 1975 | Gen Electric | Simulated collimation of computer generated images
US3943344 * | Jun 26, 1974 | Mar 9, 1976 | Tokyo Shibaura Electric Co., Ltd. | Apparatus for measuring the elevation of a three-dimensional foreground subject
US4208719 * | Aug 10, 1978 | Jun 17, 1980 | The Singer Company | Edge smoothing for real-time simulation of a polygon face object system as viewed by a moving observer
US4241519 * | Jan 25, 1979 | Dec 30, 1980 | The Ohio State University Research Foundation | Flight simulator with spaced visuals
US4263726 * | Apr 12, 1979 | Apr 28, 1981 | Redifon Simulation Limited | Visual display apparatus
US4276029 * | May 17, 1979 | Jun 30, 1981 | The Ohio State University | Visual cue simulator
US4283765 * | May 25, 1979 | Aug 11, 1981 | Tektronix, Inc. | Graphics matrix multiplier
US4296930 * | Jul 5, 1977 | Oct 27, 1981 | Bally Manufacturing Corporation | TV game apparatus
US4475172 * | Jun 18, 1981 | Oct 2, 1984 | Bally Manufacturing Corporation | Audio/visual home computer and game apparatus
US4500879 * | Jan 6, 1982 | Feb 19, 1985 | Smith Engineering | Circuitry for controlling a CRT beam
US4521196 * | Jun 9, 1982 | Jun 4, 1985 | Giravions Dorand | Method and apparatus for formation of a fictitious target in a training unit for aiming at targets
US4827438 * | Mar 30, 1987 | May 2, 1989 | Halliburton Company | Method and apparatus related to simulating train responses to actual train operating data
US4853883 * | Nov 9, 1987 | Aug 1, 1989 | Nickles Stephen K | Apparatus and method for use in simulating operation and control of a railway train
US4982345 * | Jan 23, 1989 | Jan 1, 1991 | International Business Machines Corporation | Interactive computer graphics display system processing method for identifying an operator selected displayed object
US5247356 * | Feb 14, 1992 | Sep 21, 1993 | Ciampa John A | Method and apparatus for mapping and measuring land
US5253051 * | Mar 5, 1991 | Oct 12, 1993 | McManigal Paul G | Video artificial window apparatus
US5684937 * | Jun 7, 1995 | Nov 4, 1997 | Oxaal, Ford | Method and apparatus for performing perspective transformation on visible stimuli
US5936630 * | Mar 7, 1997 | Aug 10, 1999 | Oxaal, Ford | Method of and apparatus for performing perspective transformation of visible stimuli
US7366359 | Jul 8, 2005 | Apr 29, 2008 | Grandeye, Ltd. | Image processing of regions in a wide angle video camera
US7450165 | Apr 30, 2004 | Nov 11, 2008 | Grandeye, Ltd. | Multiple-view processing in wide-angle video camera
US7528881 | Apr 30, 2004 | May 5, 2009 | Grandeye, Ltd. | Multiple object processing in wide-angle video camera
US7529424 | Apr 30, 2004 | May 5, 2009 | Grandeye, Ltd. | Correction of optical distortion by image processing
US7542035 | Jun 25, 2003 | Jun 2, 2009 | Ford Oxaal | Method for interactively viewing full-surround image data and apparatus therefor
US7787659 | Aug 6, 2008 | Aug 31, 2010 | Pictometry International Corp. | Method and apparatus for capturing, geolocating and measuring oblique images
US7873238 | Aug 29, 2007 | Jan 18, 2011 | Pictometry International Corporation | Mosaic oblique images and methods of making and using same
US7893985 | Mar 15, 2005 | Feb 22, 2011 | Grandeye Ltd. | Wide angle electronic camera with improved peripheral vision
US7894531 | Feb 15, 2006 | Feb 22, 2011 | Grandeye Ltd. | Method of compression for wide angle digital video
US7990422 | Jul 19, 2005 | Aug 2, 2011 | Grandeye, Ltd. | Automatically expanding the zoom capability of a wide-angle video camera
US7991226 | Oct 12, 2007 | Aug 2, 2011 | Pictometry International Corporation | System and process for color-balancing a series of oblique images
US7995799 | Aug 10, 2010 | Aug 9, 2011 | Pictometry International Corporation | Method and apparatus for capturing geolocating and measuring oblique images
US8145007 | Apr 15, 2008 | Mar 27, 2012 | Grandeye, Ltd. | Image processing of regions in a wide angle video camera
US8385672 | Apr 30, 2008 | Feb 26, 2013 | Pictometry International Corp. | System for detecting image abnormalities
US8401222 | May 22, 2009 | Mar 19, 2013 | Pictometry International Corp. | System and process for roof measurement using aerial imagery
US8405732 | Jun 17, 2011 | Mar 26, 2013 | Grandeye, Ltd. | Automatically expanding the zoom capability of a wide-angle video camera
US8427538 | May 4, 2009 | Apr 23, 2013 | Oncam Grandeye | Multiple view and multiple object processing in wide-angle video camera
US8477190 | Jul 7, 2010 | Jul 2, 2013 | Pictometry International Corp. | Real-time moving platform management system
US8520079 | Feb 14, 2008 | Aug 27, 2013 | Pictometry International Corp. | Event multiplexer for managing the capture of images
US8531472 | Dec 3, 2007 | Sep 10, 2013 | Pictometry International Corp. | Systems and methods for rapid three-dimensional modeling with real façade texture
US8588547 | Aug 5, 2008 | Nov 19, 2013 | Pictometry International Corp. | Cut-line steering methods for forming a mosaic image of a geographical area
US8593518 | Jan 31, 2008 | Nov 26, 2013 | Pictometry International Corp. | Computer system for continuous oblique panning
US8649596 * | Jul 12, 2011 | Feb 11, 2014 | Pictometry International Corp. | System and process for color-balancing a series of oblique images
US8723951 | Nov 23, 2005 | May 13, 2014 | Grandeye, Ltd. | Interactive wide-angle video server
US20120183217 * | Jul 12, 2011 | Jul 19, 2012 | Stephen Schultz | System and process for color-balancing a series of oblique images
Classifications
U.S. Classification: 434/43, 348/123, 708/2, 708/442
International Classification: G09B9/02, G09B9/04, G09B9/05, G09B9/06, G09B9/30, G06T17/40
Cooperative Classification: G09B9/302
European Classification: G09B9/30B2
Legal Events
Date | Code | Event | Description
Oct 24, 1989 | AS | Assignment
Owner name: CAE-LINK CORPORATION, A CORP. OF DE.
Free format text: MERGER;ASSIGNORS:LINK FLIGHT SIMULATION CORPORATION, A DE CORP.;LINK TACTICAL MILITARY SIMULATION CORPORATION, A CORP. OF DE;LINK TRAINING SERVICES CORPORATION, A CORP. OF DE (MERGED INTO);AND OTHERS;REEL/FRAME:005252/0187
Effective date: 19881130

Aug 23, 1988 | AS | Assignment
Owner name: LINK FLIGHT SIMULATION CORPORATION, KIRKWOOD INDUS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.;ASSIGNOR:SINGER COMPANY, THE, A NJ CORP.;REEL/FRAME:004998/0190
Effective date: 19880425