EP0205252A1 - Video signal processing - Google Patents

Video signal processing

Info

Publication number
EP0205252A1
EP0205252A1 EP86303298A
Authority
EP
European Patent Office
Prior art keywords
coordinate values
input
effect
visual effect
linear
Prior art date
Legal status
Granted
Application number
EP86303298A
Other languages
German (de)
French (fr)
Other versions
EP0205252B1 (en)
Inventor
Morgan William Amos David
David John Hedley
Current Assignee
Sony Corp
Original Assignee
Sony Corp
Priority date
Filing date
Publication date
Priority claimed from GB08511649A (GB2174861A)
Priority claimed from GB8511648A (GB2174860B)
Application filed by Sony Corp filed Critical Sony Corp
Priority to AT86303298T (ATE46404T1)
Publication of EP0205252A1
Application granted
Publication of EP0205252B1
Legal status: Expired

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation


Abstract

A method of and apparatus for processing video signals to achieve a visual effect corresponding to that which would be achieved if an input two-dimensional image were projected onto a three-dimensional surface is disclosed. A video signal is formed into digitized sample values each having an input pixel address comprising X and Y coordinate values. X and Y scaling multipliers (α, β) and Z coordinate values corresponding to the respective pixel addresses on the three-dimensional surface to which each of the input pixel addresses will be moved in achieving the visual effect are stored in a mapping store (20). Multipliers (22, 24) multiply the X and Y coordinate values of each input pixel address by the corresponding X and Y scaling multipliers (α, β), and the resulting scaled X and Y coordinate values and the associated Z coordinate values are supplied to a perspective transformation device (33) to derive the output X and Y coordinate values corresponding to the visual effect.

Description

  • This invention relates to methods of and circuits for video signal processing. More particularly, but not exclusively, this invention relates to video signal processing circuits which are suitable for use in special effects equipment in high definition video systems. Embodiments of the invention are able to achieve a visual effect corresponding to that which would be obtained if an input two-dimensional image were projected onto a three-dimensional surface. Reference is also made to linear manipulation of an image in two or three dimensions.
  • The standard television signal transmitted in the United Kingdom is a PAL signal of a 625-lines per frame, 50-fields per second system, and the PAL, NTSC and SECAM signals transmitted in other countries use similar or slightly lower line frequencies (for example 525 lines per frame), and similar or slightly higher field frequencies (for example 60 fields per second). While there is no immediate prospect of significant changes in these transmitted signals, there is an increasing requirement for higher definition video systems. Such systems can be used, for example, in film-making, in closed circuit television systems, in satellite communication systems and in studio use generally. One such proposed high definition video system uses 1125 lines per frame and 60 fields per second. This proposed system also uses a 5:3 aspect ratio instead of the 4:3 aspect ratio now usual for television receivers.
  • The special effects which can be applied to a video signal are well known. Thus, for example, images on a cathode ray tube can be off-set (moved in any direction), scaled (expanded or compressed in size), rolled (rotated) in two or three dimensions and so on.
  • One way of achieving such special effects, which will be referred to in more detail below, involves converting an input analog video signal into digitized sample values each having a pixel address, modifying the resulting individual pixel addresses to achieve the required special effect, storing the sample values at the modified pixel addresses in a field memory, and reading from the field memory to derive the sample values for reconversion into the required output analog signal.
  • The effects to be achieved by special effects equipment can in general be divided into two types: those which do not bend or twist the image plane, that is to say linear effects, which may nonetheless be three-dimensional, and those which do distort the image plane by projecting the image onto a three-dimensional shape, that is to say non-linear effects. An example of a three-dimensional linear effect is tilting the image plane with perspective, as in a tumble or flip. An example of a three-dimensional non-linear effect is the projection of the input image plane onto the surface of a cone.
  • Two of the processes involved in producing three-dimensional effects, whether linear or non-linear, are: transformation of the initial two-dimensional pixel addresses to pixel addresses in three-dimensional space, and then perspective transformation back onto the two-dimensional viewing plane.
  • For linear effects, the required two or three-dimensional pixel addresses can be derived by matrix calculation as used, for example, in computer graphics. However, substantial modification of the techniques is necessary to achieve operation in real time as required in a television system. For non-linear effects, there is a requirement for methods and circuits which can not only achieve the required effect, but can also do so at the required speeds and without requiring hardware which is too extensive or too complex. This forms the subject of the present application.
  • According to the present invention there is provided a method of processing video signals to achieve a visual effect corresponding to that which would be achieved if an input two-dimensional image were projected onto a three-dimensional surface, the method comprising:
    • forming a video signal corresponding to said input two-dimensional image into digitized sample values each having an input pixel address comprising X and Y coordinate values; characterised by:
    • storing X and Y scaling multipliers and Z coordinate values corresponding to the respective pixel addresses on said three-dimensional surface to which each of said input pixel addresses will be moved in achieving said visual effect;
    • deriving from store the corresponding said X and Y scaling multipliers and Z coordinate value for each said input pixel address and multiplying said X and Y coordinate values of each said input pixel address by the corresponding said X and Y scaling multipliers respectively; and
    • supplying the resulting scaled X and Y coordinate values and the associated Z coordinate value to a perspective transformation device to derive the required output X and Y coordinate values corresponding to said visual effect.
  • According to the present invention there is also provided a circuit for processing video signals to achieve a visual effect corresponding to that which would be achieved if an input two-dimensional image were projected onto a three-dimensional surface, the circuit comprising:
    • means for providing a video signal corresponding to said input two-dimensional image in the form of digitized sample values each having an input pixel address comprising X and Y coordinate values; characterised by:
    • a mapping store for storing X and Y scaling multipliers and Z coordinate values corresponding to the respective pixel addresses on said three-dimensional surface to which each of said input pixel addresses will be moved in achieving said visual effect;
    • multiplier means for multiplying said X and Y coordinate values of each said input pixel address by the corresponding said X and Y scaling multipliers derived from said mapping store; and
    • a perspective transformation device to which the resulting scaled X and Y coordinate values and the associated Z coordinate values are supplied and which derives output X and Y coordinate values corresponding to said visual effect.
  • The invention will now be described by way of example with reference to the accompanying drawings, throughout which like elements are referred to by like references, and in which:
    • Figure 1 shows in simplified block form part of a special effects equipment for a high definition video system;
    • Figure 2 shows in simplified block form the central part of an embodiment of video signal processing circuit according to the present invention;
    • Figure 3 shows a computer simulation in which an initially-flat 81-point grid is progressively mapped onto the surface of a sphere;
    • Figures 4A, 4B and 4C show diagrammatically the progressive mapping of an initially-flat image onto the surface of a cylinder; and
    • Figure 5 shows in more detailed, but still simplified, block form the embodiment of Figure 2;
    • Figure 6 shows in more detailed, but still simplified, block form part of the embodiment of Figure 2; and
    • Figure 7 shows in block form another part of the embodiment of Figure 2.
  • Before describing the embodiment, part of the overall arrangement of an example of a special effects equipment for the high definition video system outlined above will be briefly described with reference to Figure 1. Basically, the special effects equipment comprises two field memories, a field zero memory 1 and a field one memory 2, together with a write address generator 3 and a read address generator 4. These elements are interconnected by switches 5, 6, 7 and 8, each of which is operated at the field frequency. Input data in the form of digitized sample values supplied to an input terminal 9 are selectively supplied by way of the switch 5 to the field zero memory 1 or the field one memory 2. Output data for supply to an output terminal 10 are selectively derived by the switch 6 from the field zero memory 1 or the field one memory 2. The write address generator 3 and the read address generator 4 are selectively and alternately connected to the field zero memory 1 and the field one memory 2 by the switches 7 and 8.
  • In operation of this special effects equipment, an input analog signal is sampled 2048 times per horizontal scan line and the resulting sample values are pulse code modulation coded into 8-bit words to form the input digital data which are supplied to the input terminal 9. Writing proceeds alternately in the field zero memory 1 and the field one memory 2 in dependence on the position of the switch 5 and under the control of the write address generator 3. The necessary complex address calculations which are required so as not only to achieve simple writing and reading of the individual digital signals into and out of the appropriate memory 1 or 2, but also to modify the pixel addresses in the cathode ray tube screen raster so as to achieve the required special effect are under control of a signal supplied to the write address generator 3 by way of an input terminal 11. When a complete field has been written in the memory 1 or 2, the switches 5 to 8 change position and the digital signals stored in that memory 1 or 2 are then sequentially read out under the control of the read address generator 4, which in turn is controlled by a signal supplied by way of an input terminal 12, and supplied to the output terminal 10, while the digital signals for the next field are written in the other memory 2 or 1.
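  • As an illustrative sketch only (not part of the original disclosure), the ping-pong operation of the two field memories can be expressed in C. The buffer dimensions and the write_address callback are hypothetical stand-ins for the memories 1 and 2 and the write address generator 3:

```c
#include <stdint.h>

#define SAMPLES_PER_LINE 2048   /* as stated: 2048 samples per scan line            */
#define LINES_PER_FIELD  562    /* hypothetical: roughly half of 1125 lines/frame   */

typedef uint8_t Field[LINES_PER_FIELD][SAMPLES_PER_LINE];

static Field field_mem[2];      /* field zero memory 1 and field one memory 2       */

/* One field period: the incoming field is written into one memory at addresses
 * modified by the write address generator, while the previous field is read out
 * sequentially from the other memory.                                              */
void process_field(int parity,
                   const uint8_t *input_samples,                 /* terminal 9      */
                   uint8_t *output_samples,                      /* terminal 10     */
                   void (*write_address)(long n, int *x, int *y))/* generator 3     */
{
    Field *write_mem = &field_mem[parity & 1];        /* switches 5 and 7           */
    Field *read_mem  = &field_mem[(parity & 1) ^ 1];  /* switches 6 and 8           */

    long n = 0;
    for (int line = 0; line < LINES_PER_FIELD; line++) {
        for (int s = 0; s < SAMPLES_PER_LINE; s++, n++) {
            int x, y;
            write_address(n, &x, &y);                 /* effect-modified address    */
            (*write_mem)[y][x] = input_samples[n];
            output_samples[n] = (*read_mem)[line][s]; /* sequential read-out        */
        }
    }
}
```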
  • The present invention is particularly concerned with the way in which the write address generator 3 performs the complex address calculations which are required to achieve a special effect, most particularly where that special effect is a three-dimensional non-linear effect.
  • Referring now to Figure 2, the central part of the embodiment of video signal processing circuit to be described comprises a two-dimensional to three-dimensional mapping store 20 having two main inputs designated X and Y and three main outputs designated α, β and Z. To the X input, input X addresses of pixels are supplied by way of a multiplier 21, these input X addresses also being supplied to a multiplier 22 connected to the α output. Input Y addresses of pixels are supplied to the Y input by way of a multiplier 23, these input Y addresses also being supplied to a multiplier 24 connected to the β output. A multiplier 25 is connected to the Z output. X, Y and Z outputs respectively are derived from the multipliers 22, 24 and 25. The multipliers 21, 23 and 25 can be controlled by an X scaling factor signal, a Y scaling factor signal and a Z scaling factor signal, respectively.
  • The mapping store 20 is a random access memory (RAM) operating as a look-up table, and is preloaded with data corresponding to the special effect which is to be achieved. Thus, the mapping store 20 contains instructions as to how to map X and Y coordinates corresponding to the pixel addresses in the raster of the two-dimensional input image to three-dimensional space. For each sample position there are stored three parameters: α and β, which are the X and Y scaling multipliers, and Z, which is the absolute depth coordinate. Considering for a moment just one dimension, the effect on each pixel address in a horizontal line scan of the raster in achieving the required special effect is likely to be the horizontal movement of that pixel address to a different address. This change of address can be effected by multiplying the X coordinate of the original address by a scaling multiplier. In practice, the required special effect is likely to affect each pixel address by movement in two dimensions, so multiplication of both the X and Y coordinates of the original address of the pixel by respective scaling multipliers is likely to be required. As, therefore, each pixel address is supplied to the X and Y inputs of the mapping store 20, the mapping store 20 operates to access the appropriate scaling multipliers α and β for that pixel address and supply them to the α and β outputs. Additionally, however, it is likely that the required special effect will necessitate movement of the pixel address in the third or depth direction also, so a further operation of the mapping store 20 is to access and supply to the Z output thereof, the Z coordinate of the address corresponding to the pixel address designated by the X and Y coordinates of the input addresses and to the required special effect.
  • The scaling multipliers α and β corresponding to the input pixel address are therefore supplied to the multipliers 22 and 24 respectively, which also receive the input X and Y addresses respectively of the input pixel. The multipliers 22 and 24 therefore scale the input X and Y addresses to the required new values which, together with the Z address derived from the mapping store 20, are supplied to the respective outputs.
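  • A minimal C sketch of the Figure 2 data path for a single pixel may help here; the MapEntry record and the map_pixel function are hypothetical names, and the store is assumed to hold one {α, β, Z} entry per sample position, as described above:

```c
/* One entry of the mapping store 20: the X and Y scaling multipliers and     */
/* the absolute depth coordinate stored for each input sample position.       */
typedef struct { double alpha, beta, z; } MapEntry;

/* Single-step (non-progressive) case: the store is addressed by the input    */
/* coordinates, the multipliers 22 and 24 scale those same coordinates, and   */
/* the Z value is passed straight through.                                    */
void map_pixel(const MapEntry *store, int width,
               int x_in, int y_in,
               double *x_out, double *y_out, double *z_out)
{
    MapEntry e = store[y_in * width + x_in];   /* look-up table access         */
    *x_out = e.alpha * x_in;                   /* multiplier 22                */
    *y_out = e.beta  * y_in;                   /* multiplier 24                */
    *z_out = e.z;                              /* depth coordinate from store  */
}
```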
  • The purpose of the multipliers 21, 23 and 25 will now be explained. The foregoing description assumes that the transition from the two-dimensional image to the three-dimensional non-linear effect is to be achieved in one step. Commonly, however, it is required that the special effect be achieved progressively over a succession of fields.
  • Referring to Figure 3, suppose that the required special effect is the progressive change of an initial two-dimensional image so that the image appears to be being wrapped around the surface of a sphere. Figure 3 shows a computer simulation of this special effect in which an initially-flat 81-point grid is progressively mapped onto the surface of a sphere. The first diagram shows the initial flat image and each of the 81 points appears as a rectangle, although it should be remembered that in the digital video signal each of these rectangles simply represents a pixel and so corresponds to a single sample value having X and Y addresses. Following through the diagrams successively will indicate how the sample values move as the special effect is progressively achieved. It will also be seen from the later diagrams, and in particular the final diagram, that as the special effect proceeds the situation arises that there are sample values having the same pixel address so far as the X and Y coordinates are concerned, but different Z addresses. In other words, some sample values have moved behind others. If a transparent effect is required, then both these sample values can be used in forming the output video signal, but if a solid effect is required, then the sample value nearer the viewing plane, that is having the smaller Z address, can be selected simply by comparison of the Z addresses of sample values having the same X and Y addresses.
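  • The selection of the nearer sample for a solid effect amounts to a depth comparison per output address. A hedged sketch, assuming a hypothetical per-pixel depth buffer initialised to a large value at the start of each field:

```c
#include <float.h>
#include <stdint.h>

/* When a solid (non-transparent) effect is required and two sample values    */
/* map to the same output X and Y address, keep the one with the smaller Z    */
/* address, i.e. the one nearer the viewing plane.  'depth' is a hypothetical */
/* buffer, one double per output pixel, reset to DBL_MAX each field.          */
void store_solid_sample(uint8_t *frame, double *depth, long addr,
                        uint8_t sample, double z)
{
    if (z < depth[addr]) {       /* nearer than whatever is already stored     */
        depth[addr] = z;
        frame[addr] = sample;
    }
}
```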
  • Consideration of the final diagram in Figure 3 will also show that the data to be loaded into the mapping store 20 can in this case be derived mathematically by calculating the way in which the positions of the individual pixel addresses have to change in moving from the initial two-dimensional image to the final three-dimensional image. Likewise, the necessary data can be calculated mathematically in the case of any special effect which involves mapping the initial two-dimensional image onto a three-dimensional surface which can readily be expressed in mathematical terms, such as the surface of a sphere, cylinder or cone. In the case, however, of a more complex surface such as the surface of a table or a telephone, then an additional computer analysis to map the surface in coordinate terms will first be required, and from this the computer can then calculate the data necessary for loading into the mapping store 20.
  • In an actual equipment a variety of different sets of α and β scaling multipliers and Z coordinates which have been pre-calculated are stored on disc for down-loading into the mapping store 20 when required for use.
  • Returning now to Figure 2 and the question of progressively mapping the initial two-dimensional image onto the three-dimensional surface, this is achieved by the use of further scaling multipliers, which, to avoid confusion with the scaling multipliers α and β stored in the mapping store 20, will be called scaling factors, and in particular the X, Y and Z scaling factor signals which are supplied to the multipliers 21, 23 and 25 respectively. The effect of the X and Y scaling factor signals is initially to concentrate the input X and Y addresses in the centre of the two-dimensional area which is to be mapped onto the three-dimensional shape. The X and Y scaling factor signals then change progressively so that the addresses are in effect spread outwards to the boundaries of the three-dimensional shape. This spreading takes place progressively by progressively altering the values of the X and Y scaling factor signals.
  • This may be seen by reference to Figures 4A to 4C, which show the input, map and output at initial, intermediate and final stages respectively in mapping an initial two-dimensional image onto the surface of a cylinder. Throughout the progression the address of the pixel A at top centre of the initial image remains unchanged both by the scaling factors and the mapping, so the output address of the pixel A is the same as the input address. However, the address of a pixel B initially at top right is brought close to the centre by operation of the scaling factors in the initial stage shown in Figure 4A, so the output address of pixel B is not at this initial stage substantially changed by the scaling multipliers in the mapping. At the intermediate stage shown in Figure 4B, the address of the pixel B is not brought so close to the centre, so it is affected to some extent by the scaling multipliers in the mapping, and the output address shows the pixel B starting to move away from its original position as the initially flat plane of the image starts to curve to take up the shape of a cylinder. In the final stage shown in Figure 4C, the scaling factors do not affect the input address of the pixel B, so the scaling multipliers in the mapping take full effect on this address and the output address of the pixel B is moved substantially, and in fact to the position which it is to occupy on the final cylindrically-shaped image.
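  • On one reading of Figure 2 and claim 2, the scaling factors alter only the addresses used to access the mapping store, while the looked-up multipliers still act on the unscaled input address. The C sketch below reflects that reading; the lookup callback, the 0.0-to-1.0 range of the factors and the centre-relative coordinates are assumptions, and MapEntry is the {α, β, Z} record from the earlier sketch:

```c
typedef struct { double alpha, beta, z; } MapEntry;   /* as in the sketch above */

/* sx, sy and sz are the X, Y and Z scaling factor signals applied by the     */
/* multipliers 21, 23 and 25, assumed to grow from near 0.0 to 1.0 over       */
/* successive fields; pixel coordinates are assumed to be measured from the   */
/* centre of the image, so a small factor concentrates the store addresses    */
/* at the centre of the area being mapped.                                    */
void progressive_map(MapEntry (*lookup)(int x, int y),   /* mapping store 20  */
                     int x_in, int y_in,
                     double sx, double sy, double sz,
                     double *x_out, double *y_out, double *z_out)
{
    /* Multipliers 21 and 23 scale the addresses used to access the store.    */
    MapEntry e = lookup((int)(sx * x_in), (int)(sy * y_in));

    /* Multipliers 22 and 24 apply the looked-up multipliers to the unscaled  */
    /* input address; multiplier 25 scales the derived Z coordinate.          */
    *x_out = e.alpha * x_in;
    *y_out = e.beta  * y_in;
    *z_out = e.z * sz;
}
```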
  • It will be noted that in this case, as in the case of mapping onto the surface of a sphere as shown in Figure 3, the addresses of some pixels have moved behind others. As mentioned above, if a transparent effect is required, then both these pixels can be used in forming the output video signal, but if a solid effect is required, then the pixel nearer the viewing plane, that is the one having the smaller Z address, can be selected simply by comparison of the Z addresses of the pixels having the same X and Y addresses.
  • Referring now to Figure 5, this shows the embodiment in more detail, although still in simplified block form. This figure shows, in addition to the mapping store 20 and the multipliers 22, 24 and 25 (the multipliers 21 and 23 not being shown merely to simplify the figure): a pre-map matrix 30, a multiplexer 31, a post-map matrix 32, a perspective transformation device 33 and a further multiplexer 34. The input X and Y addresses are supplied to respective inputs of the pre-map matrix 30, the multipliers 22 and 24 as previously described, respective inputs of the multiplexer 31, and respective inputs of the multiplexer 34. Additionally, the multiplexer 31 receives a zero Z input address, which is the Z address corresponding to all X and Y addresses on the initial two-dimensional image. Three further inputs of the multiplexer 31 respectively receive the outputs of the multipliers 22, 24 and 25, these being the X, Y and Z addresses respectively corresponding to the pixels of the input video data mapped in accordance with the required special effect.
  • The multiplexer 31 is operative under control of a linear/non-linear effect control signal to select either the initial input addresses or the input addresses after mapping, that is to say to include the non-linear special effect or not as required. The three output addresses supplied by the multiplexer 31, designated the X', Y' and Z' addresses, are supplied to the post-map matrix 32, which supplies output X", Y" and Z" addresses to the perspective transformation device 33. The perspective transformation device 33 also receives a perspective viewing distance control signal and supplies output addresses designated X and Y to the multiplexer 34. The multiplexer 34 also receives the input X and Y addresses, and under control of an effect/no-effect control signal supplies either: the input X and Y addresses unchanged; the output X and Y addresses derived in accordance with the required non-linear special effect and by the post-map matrix 32 and the perspective transformation device 33; or, if the multiplexer 31 is controlled so that the input X and Y addresses by-pass the pre-map matrix 30 and the mapping store 20, the output X and Y addresses derived by the post-map matrix 32 and the perspective transformation device 33 only.
  • The pre-map matrix 30 is operative to cause any one or any combination of the two-dimensional effects of off-setting (shifting or translating the image in its plane), scaling (expanding or compressing the image) and rolling (rotating). Thus, for example, rotation about a non- central point involves the combination of off-setting to the point, rotation about that point and off-setting back to the initial position. These effects are all well known, and the necessary matrices are described in "Computer Graphics and Applications" by Dennis Harris, Chapman and Hall Computing 1984. To achieve each individual effect a 3 x 2 matrix is sufficient, but a third line is added simply to make the matrices 3 x 3, so that any two or more matrices can readily be multiplied together to give the required combination of effects. This multiplication is done in a microcomputer and when required, the resultant matrix is down-loaded as a set of coefficients for the pre-map matrix 30, which comprises multipliers and adders.
  • The post-map matrix 32 is operative to cause any one or any combination of the three-dimensional effects of off-setting (which may be in two-dimensions only), scaling, rolling, pitching and yawing. These effects also are all well known, and the necessary matrices are described in "Computer Graphics and Applications" referred to above. In this case to achieve each individual effect a 4 x 3 matrix is sufficient, but a fourth line is added simply to make the matrices 4 x 4, so that any two or more matrices can readily be multiplied together to give the required combination of effects. This multiplication is done in a microcomputer and when required, the resultant matrix is down-loaded as a set of coefficients for the post-map matrix 32, which comprises multipliers and adders. The images must be processed in real time, that is, all the processing necessary for each field must be performed at the video field rate, which, in the present example is 60 fields per second. It is not possible for a computer to perform the processing at the required high speed, so the post-map matrix 32 comprises a hybrid arrangement comprising a high speed microprocessor and a hardware matrix circuit. Basically, the microprocessor is required to calculate the coefficients of a single 4 x 4 matrix which is, if necessary, a composite matrix combining the matrices corresponding respectively to two or more of off-setting, scaling, rolling, pitching or yawing. The rather simpler pre-map matrix 30 is realised in a similar way, these two matrices 30 and 32 being described in more detail below.
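  • The matrix combination performed in the microcomputer is ordinary 4 x 4 matrix multiplication. A minimal sketch, with hypothetical type and function names:

```c
/* A 4 x 4 matrix type and the multiplication used to combine two or more     */
/* effect matrices into a single composite matrix before its coefficients     */
/* are down-loaded to the post-map matrix hardware.                           */
typedef struct { double m[4][4]; } Mat4;

Mat4 mat4_multiply(Mat4 a, Mat4 b)
{
    Mat4 r;
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            r.m[i][j] = 0.0;
            for (int k = 0; k < 4; k++)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
        }
    return r;
}
```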
  • The perspective transformation device 33 introduces geometrical perspective by adapting the X" and Y" addresses in dependence on the Z" addresses and on the selected viewing distance. This again is a known technique and the way it is done is described in "Computer Graphics and Applications" referred to above. The need to effect perspective transformation will be understood from a very simple example. Suppose an initial two-dimensional rectangular image is hinged rearwards about a horizontal axis coinciding with the top edge of the image. As this is done each pixel in the image will acquire a Z address (which will be zero for pixels lying along the axis), but initially the length of the bottom edge of the image will remain the same as the length of the top edge of the image. In other words there will be no perspective effect to delude the eye that the movement is in three dimensions. The function of the perspective transformation device 33 is to add the required geometrical perspective effect, which in the above simple example involves shortening the bottom edge, and progressively shortening the intervening horizontal lines.
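  • The exact formula used by the perspective transformation device 33 is not given here; the following sketch shows the standard construction it alludes to, assuming a viewing distance d and a Z address that increases away from the viewer, with the origin on the viewing axis:

```c
/* Standard perspective division, purely illustrative: points with larger Z   */
/* (further from the viewing plane) are pulled towards the centre line, which */
/* is what shortens the bottom edge in the hinged-image example above.        */
void perspective_transform(double x, double y, double z, double d,
                           double *x_out, double *y_out)
{
    double s = d / (d + z);   /* assumed convention: z grows away from viewer */
    *x_out = x * s;
    *y_out = y * s;
}
```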
  • Various modifications to and extensions of the above-described methods and apparatus are of course possible without departing from the appended claims. As an example of an extension, it is possible to arrange for progressions from one three-dimensional non-linear effect to another. For example, an initial two-dimensional image may have been mapped onto the surface of a cylinder and may then be further mapped onto the surface of a sphere. To do this two mapping stores are required, or alternatively one substantially enlarged mapping store. Briefly, the method involves deriving X and Y coordinate values multiplied by the respective α and β scaling multipliers, and the associated Z coordinate values, both for the "cylindrical" image and the "spherical" image. The resulting X, Y and Z addresses are then averaged, the averaging being progressively weighted to give a progressive transformation of the image from "cylindrical" to "spherical" over a selected time interval.
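  • A hedged sketch of the weighted averaging between the two mappings; the weight w and the function name are illustrative only, the text stating merely that the averaging is progressively weighted:

```c
/* Blend the addresses produced by the "cylindrical" and "spherical" mappings */
/* for the same input pixel; w moves from 0.0 to 1.0 over the selected time   */
/* interval to progress from one effect to the other.                         */
void blend_mappings(double x_cyl, double y_cyl, double z_cyl,
                    double x_sph, double y_sph, double z_sph,
                    double w,
                    double *x_out, double *y_out, double *z_out)
{
    *x_out = (1.0 - w) * x_cyl + w * x_sph;
    *y_out = (1.0 - w) * y_cyl + w * y_sph;
    *z_out = (1.0 - w) * z_cyl + w * z_sph;
}
```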
  • In a further aspect, the present invention is also concerned with the way in which the write address generator 3 of Figure 1 performs the complex address calculations which are required to achieve a special effect, most particularly where that special effect is a two or three-dimensional linear effect, and more particularly with the form and operation of the pre-map matrix 30 and the post-map matrix 32.
  • As described above, the pre-map matrix 30 is operative to cause any one or any combination of the two-dimensional effects of off-setting (shifting or translating the image in its plane), scaling (expanding or compressing the image) and rolling (rotating). To achieve each individual effect a 3 x 2 matrix is sufficient, but a third line is added simply to make the matrices 3 x 3, so that any two or more matrices can readily be multiplied together to give the required combination of effects. This multiplication is done in a microcomputer and when required, the resultant matrix is down-loaded as a set of coefficients for the pre-map matrix 30, which comprises multipliers and adders.
  • Also as described above, the post-map matrix 32 is operative to cause any one or any combination of the three-dimensional effects of off-setting (which may be in two-dimensions only), scaling, rolling, pitching and yawing. In this case to achieve each individual effect a 4 x 3 matrix is sufficient, but a fourth line is added simply to make the matrices 4 x 4, so that any two or more matrices can readily be multiplied together to give the required combination of effects, such as rolling about an off-set point, which involves off-setting, rolling and off-setting back, in which case, the three appropriate matrices are multiplied together to give a single matrix corresponding to the required effect. This multiplication is done in a microcomputer and when required, the resultant matrix is down-loaded as a set of coefficients for the post-map matrix 32, which comprises multipliers and adders.
  • An example of the post-map matrix 32 will now be described with reference to Figures 6 and 7. From this it will be readily apparent how the rather simpler pre-map matrix 30 can be realised in a similar way.
  • As mentioned above, the mathematical operations which the post-map matrix 32 has to perform to achieve the three-dimensional effects of off-setting, scaling, rolling, yawing and pitching are known, for example, from "Computer Graphics and Applications" referred to above. However, in the present case the images must be processed in real time, that is, all the processing necessary for each field must be performed at the video field rate, which, in the present example is 60 fields per second. It is not possible for a computer to perform the processing at the required high speed, so the embodiment comprises a hybrid arrangement comprising a high speed microprocessor and a hardware matrix circuit. Basically, the microprocessor is required to calculate the coefficients of a single 4 x 4 matrix which is, if necessary, a composite matrix combining the matrices corresponding respectively to two or more of off-setting, scaling, rolling, pitching or yawing.
  • Suppose that the three-dimensional input address of a pixel is x, y, z and that the output address, after a required manipulation, of that pixel is x new, y new, z new. In the general case, therefore, where the precise manipulation has not yet been specified:
    \[ x_{new} = a_1 x + b_1 y + c_1 z + d_1 \]
    \[ y_{new} = a_2 x + b_2 y + c_2 z + d_2 \]
    \[ z_{new} = a_3 x + b_3 y + c_3 z + d_3 \]
    where a1, a2, a3, b1, b2, b3, c1, c2, c3, d1, d2 and d3 are coefficients determined by the manipulation to be performed. Writing the above three equations in matrix form:
    \[ \begin{bmatrix} x_{new} \\ y_{new} \\ z_{new} \end{bmatrix} = \begin{bmatrix} a_1 & b_1 & c_1 & d_1 \\ a_2 & b_2 & c_2 & d_2 \\ a_3 & b_3 & c_3 & d_3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \]
    To make the centre matrix 4 x 4, so as to permit multiplication of such matrices, this can be re-written as:
    \[ \begin{bmatrix} x_{new} \\ y_{new} \\ z_{new} \\ 1 \end{bmatrix} = \begin{bmatrix} a_1 & b_1 & c_1 & d_1 \\ a_2 & b_2 & c_2 & d_2 \\ a_3 & b_3 & c_3 & d_3 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \]
  • In the case of off-setting, suppose that the image is to be off-set by distances Ox, Oy, Oz respectively in the three dimensions. Then:
    \[ x_{new} = x + O_x \]
    \[ y_{new} = y + O_y \]
    \[ z_{new} = z + O_z \]
    and in this case the 4 x 4 matrix becomes:
    \[ \begin{bmatrix} 1 & 0 & 0 & O_x \\ 0 & 1 & 0 & O_y \\ 0 & 0 & 1 & O_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \]
  • In the case of scaling, suppose that the image is to be scaled by scaling coefficients Sx, Sy, Sz respectively in the three dimensions. Then:
    \[ x_{new} = S_x x \]
    \[ y_{new} = S_y y \]
    \[ z_{new} = S_z z \]
    and in this case the 4 x 4 matrix becomes:
    \[ \begin{bmatrix} S_x & 0 & 0 & 0 \\ 0 & S_y & 0 & 0 \\ 0 & 0 & S_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]
  • In the case of rolling (rotation about the Z axis, that is, in the plane of the image), suppose that the image is to be rolled through an angle θ. Then:
    \[ x_{new} = x \cos\theta - y \sin\theta \]
    \[ y_{new} = x \sin\theta + y \cos\theta \]
    \[ z_{new} = z \]
    and in this case the 4 x 4 matrix becomes:
    \[ \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]
  • In the case of pitching (rotation about the horizontal X axis), suppose that the image is to be pitched through an angle θ. Then:
    \[ x_{new} = x \]
    \[ y_{new} = y \cos\theta - z \sin\theta \]
    \[ z_{new} = y \sin\theta + z \cos\theta \]
    and in this case the 4 x 4 matrix becomes:
    \[ \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]
  • In the case of yawing (rotation about the vertical Y axis), suppose that the image is to be yawed through an angle θ. Then:
    \[ x_{new} = x \cos\theta + z \sin\theta \]
    \[ y_{new} = y \]
    \[ z_{new} = -x \sin\theta + z \cos\theta \]
    and in this case the 4 x 4 matrix becomes:
    \[ \begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]
  • Any one or any combination of the above three-dimensional linear effects can be achieved by first selecting the appropriate matrix, or in the case of a combination selecting the appropriate matrices, substituting the required values of the parameters into the matrix or matrices, and, in the case of a combination, multiplying the resulting matrices together. This first step is carried out by a microprocessor 40 shown in Figure 6 under control of a program stored in a program memory 41 and selection inputs which specify which of the off-setting, scaling, rolling, pitching and yawing effects are required and, in the appropriate cases, the off-set distances, the scaling coefficients and the roll, pitch and yaw angles. Under control of the program, the microprocessor 40 then selects the appropriate 4 x 4 matrix or matrices, substitutes the parameters and, where necessary, multiplies the resulting matrices together to provide in each case an output matrix comprising the required coefficients a1 to d3, which are supplied to respective outputs by way of latch circuits 42. During each field period of the video signal to be processed the microprocessor 40 performs the above operations so that the required coefficients a1 to d3 are available for use in the next field period.
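  • Purely as an illustration of the microprocessor's task, the following C sketch builds two of the effect matrices given above and combines them for rolling about an off-set point, reusing the Mat4 type and mat4_multiply function from the earlier sketch; the function names are hypothetical:

```c
#include <math.h>

/* Off-set and roll matrices laid out as in the 4 x 4 matrices given above;   */
/* the top three rows of the composite matrix are the coefficients            */
/* a1 b1 c1 d1, a2 b2 c2 d2 and a3 b3 c3 d3 latched for use in the next field.*/
Mat4 offset_matrix(double ox, double oy, double oz)
{
    Mat4 r = {{{1, 0, 0, ox},
               {0, 1, 0, oy},
               {0, 0, 1, oz},
               {0, 0, 0, 1}}};
    return r;
}

Mat4 roll_matrix(double theta)          /* rotation in the X-Y plane          */
{
    double c = cos(theta), s = sin(theta);
    Mat4 r = {{{c, -s, 0, 0},
               {s,  c, 0, 0},
               {0,  0, 1, 0},
               {0,  0, 0, 1}}};
    return r;
}

/* Off-set to the point (px, py), roll through theta, and off-set back,       */
/* giving a single composite matrix for the whole combined effect.            */
Mat4 roll_about_point(double px, double py, double theta)
{
    Mat4 m = mat4_multiply(offset_matrix(px, py, 0.0), roll_matrix(theta));
    return mat4_multiply(m, offset_matrix(-px, -py, 0.0));
}
```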
  • The coefficients a1 to d3 are supplied to the hardware matrix circuit shown in block form in Figure 7. The matrix circuit comprises nine multipliers 50 to 58 and nine adders 59 to 67. The output x new of each pixel in a field is derived by supplying the input coordinates x, y and z of that pixel to the multipliers 50, 51 and 52 respectively, where they are multiplied by the coefficients a1, b1 and c1 respectively down-loaded from the microprocessor 40 at the end of the previous field. The outputs of the multipliers 50 and 51 are added by the adder 59, the output of the adder 59 is added to the output of the multiplier 52 by the adder 60, and the output of the adder 60 is added to the coefficient d1 by the adder 61. The output of the adder 61 is x new. The outputs y new and z new are derived similarly.
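  • The per-pixel arithmetic of the matrix circuit is then simply the three equations given earlier; a sketch in C follows (the grouping of multipliers 53 to 58 and adders 62 to 67 with the y and z outputs is assumed):

```c
/* Arithmetic of the hardware matrix circuit of Figure 7 for one pixel:       */
/* three multipliers and three adders per output coordinate, using the        */
/* coefficients a1..d3 latched at the end of the previous field.              */
void matrix_circuit(const double a[3], const double b[3],
                    const double c[3], const double d[3],
                    double x, double y, double z,
                    double *x_new, double *y_new, double *z_new)
{
    *x_new = a[0]*x + b[0]*y + c[0]*z + d[0];   /* multipliers 50-52, adders 59-61            */
    *y_new = a[1]*x + b[1]*y + c[1]*z + d[1];   /* presumably multipliers 53-55, adders 62-64 */
    *z_new = a[2]*x + b[2]*y + c[2]*z + d[2];   /* presumably multipliers 56-58, adders 65-67 */
}
```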
  • The three outputs x new, y new and z new correspond to the X", Y" and Z" addresses and are supplied to the perspective transformation device 33 shown in Figure 5.
  • As mentioned above, the form and operation of the pre-map matrix 30 is similar, but with the simplification that because only the two-dimensional linear effects of off-setting, scaling and rolling are involved, the matrices are 3 x 3, and the matrix circuit is correspondingly simplified.
  • According to the present invention there is also provided a method of processing video signals to achieve a visual effect involving linear manipulation of an image, the method comprising:
    • storing in a microprocessor memory a respective n x n matrix corresponding to each said linear manipulation;
    • controlling said microprocessor to select said matrix, or for a visual effect involving a combination of two or more said linear manipulations said matrices, corresponding to said visual effect;
    • controlling said microprocessor to substitute into the selected matrix or matrices the respective parameters of said visual effect;
    • for a visual effect involving a combination of two or more said linear manipulations, controlling said microprocessor to multiply the substituted matrices together;
    • deriving from the resulting matrix the coefficients required to calculate from input pixel addresses of said image the output pixel addresses corresponding to said visual effect; and
    • supplying said coefficients and signals representing the coordinates of said input pixel addresses to a hardware matrix circuit which derives therefrom the signals representing the coordinates of the output pixel address corresponding to said visual effect.
  • And according to the present invention there is also provided a circuit for processing video signals to achieve a visual effect involving linear manipulation of an image, the circuit comprising:
    • a microprocessor comprising a memory in which is stored a respective n x n matrix corresponding to each said linear manipulation;
    • means for supplying selection inputs to said microprocessor to control said microprocessor to select said matrix, or for a visual effect involving a combination of two or more said linear manipulations, said matrices corresponding to said visual effect, to substitute into the selected matrix or matrices the respective parameters of said visual effect, for a visual effect involving a combination of two or more said linear manipulations to multiply the substituted matrices together, and to derive from the resulting matrix the coefficients required to calculate from input pixel addresses of said image the output pixel addresses corresponding to said visual effect; and
    • a hardware matrix circuit to which are supplied said coefficients and signals representing the coordinates of said input pixel addresses and which is operative to derive therefrom the output pixel addresses corresponding to said visual effect.
  • Although described in relation to the above high definition video system, it will be realised that the invention can equally well be applied to the processing of any video signal which can be expressed in the necessary digital form.
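To make the coefficient flow concrete, the following hypothetical, self-contained C example builds the twelve coefficients for a roll combined with an off-set and then transforms one sample pixel address exactly as the hardware matrix circuit would; every numeric value in it is invented for illustration.

```c
/* Hypothetical end-to-end illustration: a 30-degree roll followed by an
 * off-set of (100, 50, 0), applied to one input pixel address.        */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double pi  = 3.14159265358979323846;
    const double ang = 30.0 * pi / 180.0;          /* roll angle (invented) */
    const double dx = 100.0, dy = 50.0, dz = 0.0;  /* off-set (invented)    */

    /* Coefficients a1 to d3: the first three rows of (off-set x roll)
     * on a column-vector convention.                                  */
    double a1 = cos(ang), b1 = -sin(ang), c1 = 0.0, d1 = dx;
    double a2 = sin(ang), b2 =  cos(ang), c2 = 0.0, d2 = dy;
    double a3 = 0.0,      b3 =  0.0,      c3 = 1.0, d3 = dz;

    /* One input pixel address (x, y, z). */
    double x = 240.0, y = 120.0, z = 0.0;

    /* The nine multiplications and nine additions of the matrix circuit. */
    double x_new = a1 * x + b1 * y + c1 * z + d1;
    double y_new = a2 * x + b2 * y + c2 * z + d2;
    double z_new = a3 * x + b3 * y + c3 * z + d3;

    printf("(%.1f, %.1f, %.1f) -> (%.2f, %.2f, %.2f)\n",
           x, y, z, x_new, y_new, z_new);
    return 0;
}
```

In the system described, this per-pixel arithmetic is what the dedicated multiplier and adder hardware performs in real time; the software form is given only to make the data flow easy to follow.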

Claims (10)

1. A method of processing video signals to achieve a visual effect corresponding to that which would be achieved if an input two-dimensional image were projected onto a three-dimensional surface, the method comprising:
forming a video signal corresponding to said input two-dimensional image into digitized sample values each having an input pixel address comprising X and Y coordinate values;

characterised by:
storing X and Y scaling multipliers and Z coordinate values corresponding to the respective pixel addresses on said three-dimensional surface to which each of said input pixel addresses will be moved in achieving said effect; deriving from store the corresponding said X and Y scaling multipliers and Z coordinate value for each said input pixel address and multiplying said X and Y coordinate values of each said input pixel address by the corresponding said X and Y scaling multipliers respectively; and
supplying the resulting scaled X and Y coordinate values and the associated Z coordinate value to a perspective transformation device (33) to derive the required output X and Y coordinate values corresponding to said effect.
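By way of illustration, the data flow set out in claim 1 can be modelled compactly in software. In the C sketch below the layout of the store entries, the viewing distance and the function names are all assumptions, and a simple pin-hole divide stands in for the perspective transformation device (33).

```c
/* Illustrative model of the claim 1 data flow: per-pixel X and Y scaling
 * multipliers and a Z coordinate value are read from a mapping store,
 * the input address is scaled, and a simple perspective projection
 * (an assumption, standing in for device (33)) yields the output.     */
typedef struct { double kx, ky, z; } MapEntry;     /* one store entry   */

static void project_pixel(const MapEntry *map, int width,
                          int x, int y, double view_dist,
                          double *x_out, double *y_out)
{
    const MapEntry *e = &map[y * width + x];       /* store look-up     */
    double xs = x * e->kx;                         /* scaled X          */
    double ys = y * e->ky;                         /* scaled Y          */
    /* Assumed pin-hole projection; the surface is taken to lie in front
     * of the viewpoint so the divisor never reaches zero.             */
    double w = view_dist / (view_dist + e->z);
    *x_out = xs * w;
    *y_out = ys * w;
}
```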
2. A method according to claim 1 further comprising multiplying said X and Y coordinate values of each said input pixel address by respective progressively changing X and Y scaling factors and then multiplying said X and Y coordinate values by said respective X and Y scaling multipliers corresponding to the resulting scaled X and Y coordinate values, and multiplying each said derived Z coordinate value by a respective progressively changing Z scaling factor, whereby said X and Y coordinate values and said derived Z coordinate values are changed progressively so that said output X and Y coordinate values move progressively from the positions in said input two-dimensional image to the final positions corresponding to said effect.
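A simplified software reading of claim 2, in which the factors fx, fy and fz change progressively from field to field, might look as follows; the nearest-entry store addressing, the entry layout (as in the sketch after claim 1) and all names are assumptions, and any interpolation the real system applies is omitted.

```c
/* Simplified model of claim 2: the input address is scaled by the
 * progressive factors, the mapping store is addressed at the scaled
 * position, and the derived Z value is scaled by the Z factor.        */
typedef struct { double kx, ky, z; } MapEntry;     /* as in the claim 1 sketch */

static void progressive_map(const MapEntry *map, int width, int height,
                            double x, double y,
                            double fx, double fy, double fz,
                            double *xs, double *ys, double *zs)
{
    double px = x * fx, py = y * fy;     /* progressively scaled address */
    int ix = (int)px, iy = (int)py;      /* nearest store entry          */
    if (ix < 0) ix = 0; else if (ix >= width)  ix = width - 1;
    if (iy < 0) iy = 0; else if (iy >= height) iy = height - 1;
    const MapEntry *e = &map[iy * width + ix];
    *xs = px * e->kx;                    /* multipliers looked up at the scaled address */
    *ys = py * e->ky;
    *zs = e->z * fz;                     /* derived Z scaled progressively */
}
```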
3. A method according to claim 1 of processing video signals to achieve a visual effect corresponding to that which would be achieved if said input two-dimensional image were projected onto said three-dimensional surface and then a further visual effect corresponding to that which would be achieved if said input two-dimensional image as projected onto said three-dimensional surface were transformed as if projected onto a further, different, three-dimensional surface, the method further comprising: storing further X and Y scaling multipliers and further Z coordinate values corresponding to the respective pixel addresses on said further three-dimensional surface to which each of said input pixel addresses will be moved in achieving said further effect;
deriving from store the corresponding said further X and Y scaling multipliers and further Z coordinate value for each said input pixel address and multiplying said X and Y coordinate values of each said input pixel address by the corresponding said further X and Y scaling multipliers respectively;
deriving progressively weighted averages of said resulting scaled X and Y coordinate values and the associated Z coordinate value and the resulting further scaled X and Y coordinate values and the associated further Z coordinate value; and
supplying the resulting averaged X and Y coordinate values and the associated averaged Z coordinate value to said perspective transformation device (33) to derive said required output X and Y coordinate values corresponding to a progressive transformation from said effect to said further effect.
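The progressive transformation of claim 3 reduces, before the perspective stage, to a weighted average of the two sets of scaled coordinates. A minimal sketch, in which the weight t and the function name are illustrative:

```c
/* Illustrative blend for claim 3: the scaled coordinates for the first
 * surface and for the further surface are combined with a progressively
 * changing weight t, with t = 0 giving the first effect and t = 1 the
 * further effect.                                                      */
static void blend_mappings(double x1, double y1, double z1,   /* first effect   */
                           double x2, double y2, double z2,   /* further effect */
                           double t,                          /* 0..1           */
                           double *x, double *y, double *z)
{
    *x = (1.0 - t) * x1 + t * x2;
    *y = (1.0 - t) * y1 + t * y2;
    *z = (1.0 - t) * z1 + t * z2;
}
```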
4. A method according to claim 1 of processing video signals also to achieve a linear visual effect involving linear manipulation of an image, the method further comprising:
storing in a microprocessor memory (41) a respective n x n matrix corresponding to each said linear manipulation;
controlling said microprocessor (40) to select said matrix, or for a linear visual effect involving a combination of two or more said linear manipulations said matrices, corresponding to said linear visual effect; controlling said microprocessor (40) to substitute into the selected matrix or matrices the respective parameters of said linear visual effect;
for a linear visual effect involving a combination of two or more said linear manipulations, controlling said microprocessor (40) to multiply the substituted matrices together;
deriving from the resulting matrix the coefficients required to calculate from input pixel addresses of said image the output pixel addresses corresponding to said linear visual effect; and
supplying said coefficients and signals representing the coordinates of said input pixel addresses to a hardware matrix circuit (50 to 67) which derives therefrom the signals representing the coordinates of the output pixel address corresponding to said linear visual effect.
5. A circuit for processing video signals to achieve a visual effect corresponding to that which would be achieved if an input two-dimensional image were projected onto a three-dimensional surface, the circuit comprising:
means for providing a video signal corresponding to said input two-dimensional image in the form of digitized sample values each having an input pixel address comprising X and Y coordinate values;

characterised by:
a mapping store (20) for storing X and Y scaling multipliers and Z coordinate values corresponding to the respective pixel addresses on said three-dimensional surface to which each of said input pixel addresses will be moved in achieving said effect;
multiplier means (22, 24) for multiplying said X and Y coordinate values of each said input pixel address by the corresponding said X and Y scaling multipliers derived from said mapping store (20); and
a perspective transformation device (33) to which the resulting scaled X and Y coordinate values and the associated Z coordinate values are supplied and which derives output X and Y coordinate values corresponding to said effect.
6. A circuit according to claim 5 comprising first, second, third, fourth and fifth multipliers (21, 23, 22, 24, 25), said X and Y coordinate values of each said input pixel address being multiplied in said first and second multipliers (21, 23) respectively by respective progressively changing X and Y scaling factors and said X and Y coordinate values then being multiplied in said third and fourth multipliers (22, 24) respectively by said respective derived X and Y scaling multipliers corresponding to the resulting scaled X and Y coordinate values, and each said Z coordinate being multiplied in said fifth multiplier (25) by a respective progressively changing Z scaling factor, whereby said X and Y coordinate values and said derived Z coordinate values are changed progressively so that said output X and Y coordinate values move progressively from the positions in said input two-dimensional image to the final positions corresponding to said effect.
7. A circuit according to claim 5 for processing video signals to achieve a visual effect corresponding to that which would be achieved if said input two-dimensional image were projected onto said three-dimensional surface and then a further visual effect corresponding to that which would be achieved if said input two-dimensional image as projected onto said three-dimensional surface were transformed as if projected onto a further, different, three-dimensional surface, the circuit comprising:
a mapping store (20) for storing further X and Y scaling multipliers and further Z coordinate values corresponding to the respective pixel addresses on said further three-dimensional surface to which each of said input pixel addresses will be moved in achieving said further effect;
multiplier means (22, 24) for multiplying said X and Y coordinate values of each said input pixel address by the corresponding said further X and Y scaling multipliers respectively; and
means for deriving progressively weighted averages of said resulting scaled X and Y coordinate values and the associated Z coordinate value and the resulting further scaled X and Y coordinate values and the associated further Z coordinate value;
the resulting averaged X and Y coordinate values and the associated averaged Z coordinate value being supplied to said perspective transformation device (33) to derive said required output X and Y coordinate values corresponding to a progressive transformation from said effect to said further effect.
8. A circuit according to claim 5, claim 6 or claim 7 further comprising a first matrix (30) to which said X and Y coordinate values of each said input pixel address are supplied to effect off-setting or scaling or rotating or any combination thereof, of said input two-dimensional image.
9. A circuit according to any one of claims 5 to 8 further comprising a second matrix (32) to which said resulting scaled X and Y coordinate values and the associated Z coordinate values or said resulting averaged X and Y coordinate values and the associated averaged Z coordinate values are supplied to effect off-setting or scaling or rolling or pitching or yawing or any combination thereof, of the image corresponding to said effect or said further effect.
10. A circuit according to claim 5 for processing video signals also to achieve a linear visual effect involving linear manipulation of an image, the circuit further comprising:
a microprocessor (40) comprising a memory (41) in which is stored a respective n x n matrix corresponding to each said linear manipulation; means for supplying selection inputs to said microprocessor (40) to control said microprocessor (40) to select said matrix, or for a linear visual effect involving a combination of two or more said linear manipulations, said matrices corresponding to said linear visual effect, to substitute into the selected matrix or matrices the respective parameters of said linear visual effect, for a linear visual effect involving a combination of two or more said linear manipulations to multiply the substituted matrices together, and to derive from the resulting matrix the coefficients required to calculate from input pixel addresses of said image the output pixel addresses corresponding to said linear visual effect; and
a hardware matrix circuit (50 to 67) to which are supplied said coefficients and signals representing the coordinates of said input pixel addresses and which is operative to derive therefrom the output pixel addresses corresponding to said linear visual effect.
EP86303298A 1985-05-08 1986-04-30 Video signal processing Expired EP0205252B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AT86303298T ATE46404T1 (en) 1985-05-08 1986-04-30 VIDEO SIGNAL PROCESSING.

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB8511649 1985-05-08
GB08511649A GB2174861A (en) 1985-05-08 1985-05-08 Video signal special effects generator
GB8511648A GB2174860B (en) 1985-05-08 1985-05-08 Methods of and circuits for video signal processing
GB8511648 1985-05-08

Publications (2)

Publication Number Publication Date
EP0205252A1 true EP0205252A1 (en) 1986-12-17
EP0205252B1 EP0205252B1 (en) 1989-09-13

Family

ID=26289227

Family Applications (1)

Application Number Title Priority Date Filing Date
EP86303298A Expired EP0205252B1 (en) 1985-05-08 1986-04-30 Video signal processing

Country Status (5)

Country Link
US (1) US4682217A (en)
EP (1) EP0205252B1 (en)
JP (1) JPH0752925B2 (en)
CA (1) CA1254650A (en)
DE (1) DE3665639D1 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0681275B2 (en) * 1985-04-03 1994-10-12 ソニー株式会社 Image converter
GB2181929B (en) * 1985-10-21 1989-09-20 Sony Corp Methods of and apparatus for video signal processing
GB8613447D0 (en) * 1986-06-03 1986-07-09 Quantel Ltd Video image processing systems
US4841292A (en) * 1986-08-11 1989-06-20 Allied-Signal Inc. Third dimension pop up generation from a two-dimensional transformed image display
US4875097A (en) * 1986-10-24 1989-10-17 The Grass Valley Group, Inc. Perspective processing of a video signal
GB8706348D0 (en) * 1987-03-17 1987-04-23 Quantel Ltd Electronic image processing systems
JP2951663B2 (en) * 1987-08-05 1999-09-20 ダイキン工業株式会社 Texture mapping apparatus and method
DE3843232A1 (en) * 1988-12-22 1990-06-28 Philips Patentverwaltung CIRCUIT ARRANGEMENT FOR GEOMETRIC IMAGE TRANSFORMATION
US4975770A (en) * 1989-07-31 1990-12-04 Troxell James D Method for the enhancement of contours for video broadcasts
JP2773354B2 (en) * 1990-02-16 1998-07-09 ソニー株式会社 Special effect device and special effect generation method
US5214512A (en) * 1991-02-11 1993-05-25 Ampex Systems Corporation Keyed, true-transparency image information combine
US5231499A (en) * 1991-02-11 1993-07-27 Ampex Systems Corporation Keyed, true-transparency image information combine
US5379370A (en) * 1992-07-17 1995-01-03 International Business Machines Corporation Method and apparatus for drawing lines, curves, and points coincident with a surface
GB2270243B (en) * 1992-08-26 1996-02-28 Namco Ltd Image synthesizing system
US5670984A (en) * 1993-10-26 1997-09-23 Xerox Corporation Image lens
US5630140A (en) * 1995-01-23 1997-05-13 Tandem Computers Incorporated Ordered and reliable signal delivery in a distributed multiprocessor
JP3480648B2 (en) * 1996-11-12 2003-12-22 ソニー株式会社 Video signal processing apparatus and video signal processing method
KR102115930B1 (en) * 2013-09-16 2020-05-27 삼성전자주식회사 Display apparatus and image processing method
CN111439594B (en) * 2020-03-09 2022-02-18 兰剑智能科技股份有限公司 Unstacking method and system based on 3D visual guidance

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3261912A (en) * 1965-04-08 1966-07-19 Gen Precision Inc Simulated viewpoint displacement apparatus
US3602702A (en) * 1969-05-19 1971-08-31 Univ Utah Electronically generated perspective images
US4414565A (en) * 1979-10-16 1983-11-08 The Secretary Of State For Defence In Her Britannic Majesty's Government Of The United Kingdom Of Great Britain And Northern Ireland Method and apparatus for producing three dimensional displays
US4489389A (en) * 1981-10-02 1984-12-18 Harris Corporation Real time video perspective digital map display

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4124871A (en) * 1977-08-31 1978-11-07 International Business Machines Corporation Image data resolution change apparatus and process utilizing boundary compression coding of objects
US4432009A (en) * 1981-03-24 1984-02-14 Rca Corporation Video pre-filtering in phantom raster generating apparatus
GB2119594A (en) * 1982-03-19 1983-11-16 Quantel Ltd Video processing systems

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
E.D.N. ELECTRICAL DESIGN NEWS, vol. 28, no. 18, September 1983, pages 119-124, Boston, Massachusetts, US; K. SHOEMAKER: "Understand the basics to implement 3-D graphics" *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0208448A2 (en) 1985-07-09 1987-01-14 Sony Corporation Methods of and circuits for video signal processing
GB2220540A (en) * 1988-06-07 1990-01-10 Thomson Video Equip Device for the digital processing of images to obtain special geometrical effects
GB2220540B (en) * 1988-06-07 1992-08-19 Thomson Video Equip Device for the digital processing of images to obtain special geometrical effects

Also Published As

Publication number Publication date
EP0205252B1 (en) 1989-09-13
JPS62283784A (en) 1987-12-09
JPH0752925B2 (en) 1995-06-05
US4682217A (en) 1987-07-21
CA1254650A (en) 1989-05-23
DE3665639D1 (en) 1989-10-19

Similar Documents

Publication Publication Date Title
EP0205252B1 (en) Video signal processing
US4751660A (en) Determining orientation of transformed image
EP0221704B1 (en) Video signal processing
CA1228945A (en) Method for producing a geometrical transformation on a video image and devices for carrying out said method
US4965844A (en) Method and system for image transformation
US4965745A (en) YIQ based color cell texture
US6493467B1 (en) Image processor, data processor, and their methods
EP0283159B1 (en) Electronic image processing
JP3190762B2 (en) Digital video special effects device
JPH0458231B2 (en)
EP0560533B1 (en) Localized image compression calculating method and apparatus to control anti-aliasing filtering in 3-D manipulation of 2-D video images
US5327501A (en) Apparatus for image transformation
US4899295A (en) Video signal processing
GB2174861A (en) Video signal special effects generator
US6020932A (en) Video signal processing device and its method
JP2973573B2 (en) Image conversion device
GB2174860A (en) Video signal special effects generator
US5150213A (en) Video signal processing systems
US5220428A (en) Digital video effects apparatus for image transposition
US5488428A (en) Video special effect generating apparatus
EP0268359B1 (en) Method and apparatus for processing video image signals
GB2215552A (en) Video signal transformation
JPS61237171A (en) Image converter
JP2840690B2 (en) Wipe pattern generator and video signal processor
JPS6282778A (en) Picture special effecting device

Legal Events

Date Code Title Description
PUAI: Public reference made under article 153(3) EPC to a published international application that has entered the European phase (Free format text: ORIGINAL CODE: 0009012)
AK: Designated contracting states (Kind code of ref document: A1; Designated state(s): AT DE FR GB NL)
17P: Request for examination filed (Effective date: 19870502)
17Q: First examination report despatched (Effective date: 19881123)
GRAA: (expected) grant (Free format text: ORIGINAL CODE: 0009210)
AK: Designated contracting states (Kind code of ref document: B1; Designated state(s): AT DE FR GB NL)
REF: Corresponds to ref document number 46404 (Country of ref document: AT; Date of ref document: 19890915; Kind code of ref document: T)
REF: Corresponds to ref document number 3665639 (Country of ref document: DE; Date of ref document: 19891019)
ET: Fr: translation filed
PLBE: No opposition filed within time limit (Free format text: ORIGINAL CODE: 0009261)
STAA: Information on the status of an EP patent application or granted EP patent (Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT)
26N: No opposition filed
REG: Reference to a national code (Ref country code: GB; Ref legal event code: IF02)
PGFP: Annual fee paid to national office [announced via postgrant information from national office to epo] (Ref country code: NL; Payment date: 20050403; Year of fee payment: 20)
PGFP: Annual fee paid to national office [announced via postgrant information from national office to epo] (Ref country code: FR; Payment date: 20050408; Year of fee payment: 20)
PGFP: Annual fee paid to national office [announced via postgrant information from national office to epo] (Ref country code: AT; Payment date: 20050413; Year of fee payment: 20)
PGFP: Annual fee paid to national office [announced via postgrant information from national office to epo] (Ref country code: GB; Payment date: 20050427; Year of fee payment: 20)
PGFP: Annual fee paid to national office [announced via postgrant information from national office to epo] (Ref country code: DE; Payment date: 20050428; Year of fee payment: 20)
PG25: Lapsed in a contracting state [announced via postgrant information from national office to epo] (Ref country code: GB; Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION; Effective date: 20060429)
PG25: Lapsed in a contracting state [announced via postgrant information from national office to epo] (Ref country code: NL; Free format text: LAPSE BECAUSE OF EXPIRATION OF PROTECTION; Effective date: 20060430)
REG: Reference to a national code (Ref country code: GB; Ref legal event code: PE20)
NLV7: Nl: ceased due to reaching the maximum lifetime of a patent (Effective date: 20060430)