US 6804415 B2

Abstract

The invention relates to a method for providing an image to be displayed on a screen (10) such that a viewer in any current spatial position O in front of the screen (10) can watch the image with only a minimal perspective deformation. According to the invention this is achieved by estimating the current spatial position O of a viewer in relation to a fixed predetermined position Q representing a viewing point in front of the screen (10) from which the image could be watched without any perspective deformation, and by providing the image by applying a variable perspective transformation to an originally generated image in response to said estimated current position O of the viewer, such that the viewer in said position O is enabled to watch the image without a perspective deformation.

Claims (15)

1. Method for providing an image to be displayed on a screen (10), in particular on a TV screen or on a computer monitor, the method being characterized by the following steps of:

estimating a current spatial position O of a viewer in relation to a fixed predetermined position Q representing a viewing point in front of the screen (10) from which the image could be watched without any perspective deformation; and

providing the image by applying a variable perspective transformation to an originally generated image in response to said estimated current position O of the viewer, such that the viewer in said position O is enabled to watch the image without a perspective deformation.
2. The method according to claim 1, wherein said perspective transformation is carried out according to:

x = (a·u + b·v + c) / (g·u + h·v + i)
y = (d·u + e·v + f) / (g·u + h·v + i)

wherein:
u,v are the co-ordinates of a pixel of the original image before transformation;
x,y are the co-ordinates of the pixel of the provided image after the transformation; and
a,b,c,d,e,f,g,h and i are variable coefficients defining the transformation and being adapted in response to the estimated current position O of the viewer.
3. The method according to
4. The method according to
5. The method according to
wherein:
L_d is the fixed distance between the position Q and the centre of an area A of the image when being displayed on the screen;
R_ij, with i=1–3 and j=1–3, are the coefficients of a rotation matrix for rotating the pixels;
t_x, t_y and t_z are the coefficients of a translation vector; and
wherein the rotation matrix and the translation vector are calculated according to the estimated current spatial position O of the viewer in relation to the position Q.
6. The method according to
wherein:
R_11 = R_33 = cos φ, R_13 = −sin φ, R_31 = sin φ; and
t_x = t_y = t_z = 0,
with: tan φ = x_0 / z_0,
wherein x_0 and z_0 are co-ordinates of the viewer position in a co-ordinate system having Q as origin.

7. The method according to
8. The method according to
calculating preliminary scale factors s_i, i=1–8, by solving the following 8 linear equation systems:
wherein the x,y vectors on the right side of each of said linear systems represent the fictive co-ordinates of a corner of the original image on the screen, wherein the x,y vectors on the left side respectively represent the co-ordinates of a corner of the provided image actually displayed on the screen, and wherein the co-ordinates x_right, x_left, y_bottom and y_top are predetermined; and
selecting the minimal one of said calculated preliminary scale factors s_i as the optimal scale factor.
9. The method according to
10. The method according to
11. The method according to
12. Apparatus (1) for providing an image to be displayed on a screen (10), in particular on a TV screen or on a computer monitor, the apparatus being characterized by:

an estimation unit (20) for outputting a positioning signal representing an estimation of a current spatial position O of a viewer in front of the screen (10) in relation to a fixed predetermined position Q representing a viewing point from which the image on the screen (10) can be watched without any perspective deformation; and

a correcting unit (30′) for applying a variable perspective transformation to an originally generated image in response to said positioning signal such that the viewer in the position O is enabled to watch the image without said perspective deformation.

13. The apparatus according to
wherein:
u,v are the co-ordinates of a pixel of the original image before transformation;
x,y are the co-ordinates of the pixel of the provided image after the transformation; and
a,b,c,d,e,f,g,h and i are variable coefficients defining the transformation and being adapted in response to the estimated current position O of the viewer such that the viewer in any current spatial position O is enabled to watch the image on the screen (10) without perspective deformations.

14. The apparatus (1) according to

15. The apparatus (1) according to

Description

The invention relates to a method and an apparatus for providing an image to be displayed on a screen, in particular on a TV screen or on a computer monitor, according to the preambles of claims 1 and 12.

Such a method and apparatus are known in the art. However, in prior art it is also known that perspective deformations may occur when an image is displayed on a screen, depending on the current position of a viewer watching the screen. That phenomenon shall now be explained in detail by referring to FIG. 3.

FIG. 3 is based on the assumption that a 2-dimensional image of a 3-dimensional scene is either taken by a camera, e.g. a TV camera, or generated by a computer graphic program, e.g. a computer game. Moreover, an assumption is made about the location of the centre of the projection P and the rectangular viewport S of the original image, wherein P and S relate to the location where the original image is generated, e.g. a camera, but not necessarily to the location where it is later watched by a viewer. P and S are considered to form a fictive first pyramid as shown in FIG. 3.

The original image might be generated by the camera or by the computer program by using different transformations known in the art. One example for such a transformation is the change of the viewing point from which a particular scene is watched by the real or virtual camera. Another example for such a transformation is the following one, used in computer graphic applications for correcting texture mapping. Such a transformation may be described according to the following equation:

x = (a·u + b·v + c) / (g·u + h·v + i)
y = (d·u + e·v + f) / (g·u + h·v + i)   (1)

wherein:
the term (g·u + h·v + i) represents a division per pixel;
u,v are the co-ordinates of a pixel of the image before transformation;
x,y are the co-ordinates of the pixel of the image after the transformation; and
a,b,c,d,e,f,g,h and i are variable coefficients being individually defined by the graphic program.
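The rational form of equation (1) can be sketched in a few lines of code. The following is an illustrative example, not part of the patent; the function name and the coefficient values are chosen here for demonstration only.

```python
# Illustrative sketch (not from the patent): applying the projective
# transformation of equation (1) to a single pixel co-ordinate (u, v).

def apply_perspective(coeffs, u, v):
    """Map pixel (u, v) to (x, y) via x=(au+bv+c)/w, y=(du+ev+f)/w, w=gu+hv+i."""
    a, b, c, d, e, f, g, h, i = coeffs
    w = g * u + h * v + i          # the per-pixel division term of equation (1)
    if abs(w) < 1e-12:
        raise ValueError("pixel maps to infinity (w == 0)")
    return (a * u + b * v + c) / w, (d * u + e * v + f) / w

# The identity transformation (a=e=i=1, all others 0) leaves the pixel unchanged;
# scaling all coefficients by the same factor changes nothing, since the
# numerators and the denominator scale together.
identity = (1, 0, 0, 0, 1, 0, 0, 0, 1)
print(apply_perspective(identity, 3.0, 4.0))
```

Note the homogeneous character of the coefficients: only their ratios matter, which is why nine coefficients describe an eight-degree-of-freedom transformation.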
However, irrespective as to whether the original image has been generated by conducting such transformations or not, or as to whether the image has been generated by a camera or by a computer program, there is only one spatial position Q in the location where the image is later watched after its generation, i.e. in front of a screen, from which the image can be watched without any perspective deformation. Said position Q is fixed in relation to the position of the screen.

The position Q is illustrated in FIG. 3 as the top of a second fictive pyramid which is restricted by a rectangular area A of the image when being displayed on the screen. More specifically, the first and the second pyramid are similar if the following two conditions are fulfilled simultaneously:

a) Q lies on a line L which is orthogonal to the area A of the displayed image and goes through the centre of A; and

b) the distance between Q and the centre of A is such that the top angles of the two pyramids are equal.

If condition a) is not fulfilled there will be an oblique second pyramid; if condition b) is not fulfilled there will be an erroneous perspective shortening in case of occlusion, i.e. different objects of the original 3D scene get false relative apparent depths. The case that condition a) is not fulfilled is more annoying to the viewer than the case that condition b) is not fulfilled.

Expressed in other words, if the current position O of the viewer watching the image on the screen deviates from the position Q, the viewer watches the image with perspective deformations.

In prior art a suboptimal approach is known to overcome these disadvantages by adapting the displayed image to the current position of the viewer. More specifically, that approach proposes to rotate the physical screen by hand or by an electric motor such that condition a) is fulfilled; e.g. Bang & Olufsen sells a TV having a motor for rotating the screen. According to that approach the rotation of the screen is controlled in response to the distance |O−Q| between the position O of the viewer and the fixed position Q.
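Conditions a) and b) above amount to a simple geometric test on the viewer position. The helper below is hypothetical (not from the patent); it assumes the area A is centred at the origin with its normal along the z-axis, and that `q_dist` is the ideal viewing distance |Q − centre(A)|.

```python
import math

# Hypothetical helper (not from the patent): check whether a viewer position O
# satisfies condition a) (O on the orthogonal line L through the centre of A)
# and condition b) (O at the ideal distance), under the stated assumptions.

def pyramid_conditions(o, q_dist, tol=1e-6):
    x, y, z = o
    cond_a = abs(x) < tol and abs(y) < tol                    # on the line L
    cond_b = abs(math.sqrt(x*x + y*y + z*z) - q_dist) < tol   # correct distance
    return cond_a, cond_b

# A viewer exactly at Q fulfils both conditions; an off-axis viewer at the
# wrong distance fulfils neither.
print(pyramid_conditions((0.0, 0.0, 2.0), 2.0))
print(pyramid_conditions((1.0, 0.0, 2.0), 2.0))
```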
Rotation of the screen by hand is inconvenient for the viewer, and rotation by motor is expensive and vulnerable. Moreover, condition b) cannot be fulfilled by that approach.

Starting from that prior art, it is the object of the invention to improve a method and an apparatus for providing an image to be displayed on a screen such that the application of the method is more convenient to a user or a viewer of the image.

Said object is solved by the method according to claim 1.

Advantageously, said perspective transformation enables a viewer in any current spatial position O in front of the screen to watch the image on the screen without perspective deformations. Consequently, the visibility of the displayed image and in particular the readability of displayed text is improved. Said transformation is convenient for the viewer because he does not become aware of an application of the transformation when he is watching the images on the screen. There is no physical movement of the screen as in the prior art. Moreover, the implementation of the transformation can be realised cheaply and usually no maintenance is required. The application of said method is in particular helpful for large-screen TVs or large monitors.

According to an embodiment of the invention, the perspective transformation of the original image advantageously includes a rotation and/or a translation of the co-ordinates of at least one pixel of the originally generated image. In that case an exact transformation of the position of the viewer into the ideal position Q can be achieved and perspective deformations can be completely eliminated.

Preferably, the estimation of the position O of the viewer is done by tracking the head or the eyes of the viewer. Alternatively, said estimation is done by estimating the current position of a remote control used by the viewer for controlling the screen.
Preferably, the method steps of the method according to the invention are carried out in real-time, because in that case the viewer is not optically disturbed when watching the image on the screen.

Further advantageous embodiments of the method according to the invention are subject matter of the dependent claims.

The object of the invention is further solved by an apparatus according to claim 12.

Advantageously, the estimation unit and/or the correcting unit is included in a TV set or alternatively in a computer. In these cases, no additional units are required which would otherwise have to be placed close to the TV set or to the computer.

The object of the invention is further solved by the subject matter of claim

In the following, a preferred embodiment of the invention will be described by referring to the accompanying figures, wherein:

FIG. 1 shows an apparatus for carrying out a method according to the invention;
FIG. 2 illustrates the watching of an image on a screen without perspective deformations according to the invention; and
FIG. 3 illustrates the watching of an image on a screen from an ideal position Q without any perspective deformations as known in the art.

FIG. 1 shows an apparatus for providing an image to be displayed on a screen. The apparatus comprises an estimation unit and a correcting unit. The correcting unit applies the transformation to the original image. The transformation is represented by formula (1) known in the art as described above.

The usage of said transformation does not change the fixed position Q from which the image can be watched after its generation without any perspective deformations. It is important to note that the location where the original image is generated and the location where said image is later watched on a TV screen or on a monitor are usually different. Moreover, at the time when the original image is generated, a current or actual position O of a viewer in front of the screen when watching the image is not known and can thus not be considered when generating the original image.
Based on that situation the invention teaches another application of the known transformation according to equation (1). More specifically, according to the invention said transformation is used to enable a viewer to watch the image not only from the position Q but from any arbitrary position O in front of the screen.

More specifically, according to the invention the variable coefficients a,b,c,d,e,f,g,h and i of said transformation are adapted in response to the currently estimated position O of the viewer. The transformation with the coefficients adapted in this way is subsequently applied to the original image in order to provide the image to be displayed on the screen. Said displayed image can be watched by the viewer from any position in front of the screen without perspective deformations.

A method for carrying out the adaptation will now be explained in detail by referring to FIG. 2:

1. Defining a co-ordinate system with Q as origin, in which the x-axis lies in a horizontal direction, in which the y-axis lies in the vertical direction, and in which the z-axis also lies in the horizontal direction, leading from the position Q through the centre of the area A of the image displayed on the screen.

2. The parameters u and v as used in the transformation according to equation (1) relate to an only two-dimensional Euclidian co-ordinate system having its origin in the centre of the area A. For later being able to calculate the coefficients a-i of the transformation, the co-ordinates u and v are transformed from said two-dimensional Euclidian space into a three-dimensional Euclidian space having Q as origin according to the following equation:

(u, v) → (u, v, L_d)   (2)

wherein L_d is the fixed distance between the position Q and the centre of the area A.

3. The co-ordinates (u, v, L_d) are extended to homogeneous co-ordinates (u, v, L_d, 1).

4. Subsequently, an Euclidian transformation T is calculated to change the co-ordinate system such that the viewer position O is made the centre of the new co-ordinate system.
Said Euclidian transformation T is in general calculated according to:

T = | R_11 R_12 R_13 t_x |
    | R_21 R_22 R_23 t_y |
    | R_31 R_32 R_33 t_z |
    | 0    0    0    1   |

wherein:
the vector [x_0, y_0, z_0, 1] represents the estimated position O of the viewer in homogeneous co-ordinates;
R_ij are the coefficients of a rotation matrix;
t_x, t_y and t_z are the coefficients of a translation vector; and
the vector [0, 0, 0, 1] represents the origin of the new co-ordinate system corresponding to the position O of the viewer.

5. The found transformation T is now applied to all the pixels of the rectangular area A, i.e. to the pixel co-ordinates of the displayed image, according to:

[x′, y′, z′, 1]ᵀ = T · [u, v, L_d, 1]ᵀ   (5)

Equation 5 represents a transformation of the pixel co-ordinates [u, v, L_d, 1] into the co-ordinate system having the position O as origin.

6. In the next method step, the transformed pixel positions resulting from equation 5 are transformed by applying a perspective image transformation to them. Said perspective image transformation serves for transforming the transformed co-ordinates [x′, y′, z′, 1] back into the two-dimensional co-ordinate system of the screen.

7. The vector [u, v] …

8. Thus, equation 6 …

It is important to note that equation 8″ corresponds to equation 1.

9. Consequently, the variable coefficients a-i of the transformation according to equation 1 can be calculated by comparing equation 6, rewritten in the form of equation 8″, with equation 1.

Method steps 1 to 9 and in particular equation 9 are in general known in the art from texture mapping in computer graphics applications. However, in difference to the prior art, according to the present invention the coefficients a-i according to equation 9 are calculated inter alia from the estimated current position of the viewer O in front of the screen.

Equation 9 represents an exact transformation including in general a rotation and a translation, such that it compensates for any arbitrary position O of the viewer.

In the following, two simple examples for applying the perspective transformation, i.e. for calculating the coefficients a-i, according to the present invention are provided.

In a first example it is assumed that the viewer O stays on the line L connecting the centre of the area A of the screen with the position Q in front of the screen. In that case a rotation of the image to be displayed is obsolete; only a translative transformation is required.
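The pipeline of steps 1 to 9 can be condensed into a short sketch. Since steps 6 to 8 are only fragmentarily preserved in the source, the projection step here is an assumption: a pixel (u, v) on the screen plane at depth L_d is moved into the viewer's co-ordinate system by R and t and then projected back onto a plane at distance L_d, which yields coefficients in exactly the rational form of equation (1). The function names are illustrative, not the patent's.

```python
# Sketch under stated assumptions (not a verbatim implementation of the patent):
# derive the equation-(1) coefficients a-i from a rotation R, a translation t
# and the fixed viewing distance L_d.

def coefficients_from_pose(R, t, L_d):
    """R: 3x3 rotation as nested lists, t: (tx, ty, tz). Returns (a, ..., i)."""
    (r11, r12, r13), (r21, r22, r23), (r31, r32, r33) = R
    tx, ty, tz = t
    # p = (u, v, L_d) is rotated/translated, then projected back through z:
    #   x = L_d * (r11*u + r12*v + r13*L_d + tx) / (r31*u + r32*v + r33*L_d + tz)
    a, b, c = L_d * r11, L_d * r12, L_d * (r13 * L_d + tx)
    d, e, f = L_d * r21, L_d * r22, L_d * (r23 * L_d + ty)
    g, h, i_ = r31, r32, r33 * L_d + tz
    return (a, b, c, d, e, f, g, h, i_)

def apply_eq1(coeffs, u, v):
    a, b, c, d, e, f, g, h, i_ = coeffs
    w = g * u + h * v + i_
    return (a * u + b * v + c) / w, (d * u + e * v + f) / w

# With the identity rotation and zero translation the viewer already sits at Q
# and the mapping degenerates to the identity, as expected.
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_eq1(coefficients_from_pose(I3, (0, 0, 0), 2.0), 0.5, -0.25))
```

Note that the coefficients are only defined up to a common factor, so dividing all nine by L_d would give the same mapping.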
With equation 1 and the calculated coefficients a-i it is possible to compensate for an eye position O of the viewer on the line L closer to the area A than the position Q, but in that case some of the outer area of the received image is not displayed. Or it is possible to compensate for an eye position O of the viewer on the line L further away from the area A than the position Q, but in that case some of the outer area of the screen is not used.

In the second example it is assumed that the viewer O does not stay on the line L but has a distance to the centre of the area A which corresponds to the distance between Q and said centre. Consequently, the translation coefficients t_x, t_y and t_z in equation 9 can be ignored and only the correction for the horizontal rotation needs to be considered. The required perspective transformation follows from the pyramid with O as top and with a line through the position O of the viewer and the centre of the screen as centre line. According to FIG. 2, a rotation around the position Q is required to get the position O of the viewer onto the line L, after the position O is projected on a horizontal plane through Q (thus ignoring the y-co-ordinate of the position O). This gives:

tan φ = x_0 / z_0

wherein x_0 and z_0 are co-ordinates of the viewer position in the co-ordinate system having Q as origin.

The rotation around Q changes the distance to the centre of A, but after the rotation a scaling is required to avoid that either a part of the transformed image is outside the display screen or that a large part of the display screen is not used.

The scale factor can be computed by requiring that the perspective projections of the four corners of the original rectangle A are on the four boundaries of the rectangle, with P(s,u,v) being a rational perspective transformation derived from the right side of equation 12 in the same way as equation 8′ is derived from equation 6. This gives eight scale factors s_i, i=1–8.
The smallest one should be used as the final or optimal scale factor in equation 12, ensuring that the area of the transformed image completely fits into the area of the screen on which it shall be displayed.

The co-ordinates x and y on both sides of equations 13 to 20 represent co-ordinates of the image area A in the two-dimensional Euclidian space. Equations 13 to 20 express conditions or requirements for the position of the corners of a new area A_new of the image after transformation. E.g. in equation 13 it is required that the x-co-ordinate of the transformed corner of the original rectangle A lies on the predetermined boundary x_right.

Returning back to equation 1 and FIG. 2, it shall be pointed out that an application of the perspective transformation according to equation 1 onto an originally generated image ensures that the pyramid formed by the position Q and the original area A of the image on the screen is reproduced for the viewer in the position O, i.e. the viewer watches the image as if he were watching it from the position Q.

The proposed solution may be implemented in future TVs which are extended with means for special graphic effects, including cheap hardware for carrying out the required calculations according to the invention at only little additional cost.
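The second example, a pure horizontal rotation followed by a fit-to-screen scaling, can be sketched as follows. This is an illustrative reading, not the patent's exact equations 12 to 20: the eight linear corner systems are simplified here to eight per-corner, per-axis ratio factors, of which the smallest is kept, matching the "select the minimal preliminary scale factor" rule. The helper names are hypothetical.

```python
import math

# Sketch under stated assumptions: viewer displaced horizontally at the ideal
# distance, so only a rotation by phi about the vertical axis through Q is
# needed (tan(phi) = x0/z0, as in the second example), then a uniform scale
# chosen so the transformed image just fits the predetermined screen rectangle.

def rotation_coeffs(x0, z0, L_d):
    """Equation-(1) coefficients for a pure rotation about the y-axis."""
    phi = math.atan2(x0, z0)
    c, s = math.cos(phi), math.sin(phi)
    # R11 = R33 = cos(phi), R13 = -sin(phi), R31 = sin(phi), R22 = 1, t = 0
    return (L_d * c, 0.0, -L_d * L_d * s,
            0.0, L_d, 0.0,
            s, 0.0, L_d * c)

def optimal_scale(coeffs, corners, x_left, x_right, y_bottom, y_top):
    """Smallest of the (up to eight) per-corner scale factors so that every
    transformed corner stays inside the screen rectangle."""
    a, b, c, d, e, f, g, h, i_ = coeffs
    factors = []
    for u, v in corners:
        w = g * u + h * v + i_
        x, y = (a * u + b * v + c) / w, (d * u + e * v + f) / w
        if x:
            factors.append((x_right if x > 0 else x_left) / x)
        if y:
            factors.append((y_top if y > 0 else y_bottom) / y)
    return min(factors)

# For a viewer on the line L (x0 = 0) the rotation is the identity, and a
# unit square of corners inside a twice-as-large screen gets scale factor 2.
coeffs = rotation_coeffs(0.0, 2.0, 2.0)
corners = [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]
print(optimal_scale(coeffs, corners, -2.0, 2.0, -2.0, 2.0))
```

Taking the minimum of the candidate factors is what guarantees, as the text states, that the transformed image fits entirely onto the screen rather than merely touching one boundary while overflowing another.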