§ 1. BACKGROUND OF THE INVENTION
§ 1.1 Field of the Invention
The present invention concerns techniques for enhancing the resolution of images, such as fonts, line drawings, or black-and-white or full-color images for example, to be rendered on a patterned output device, such as a flat panel video monitor or an LCD video monitor for example.
§ 1.2 Related Art
The present invention may be used in the context of patterned output devices such as flat panel video monitors, or LCD video monitors for example. In particular, the present invention may be used as a part of processing to produce higher resolution images, such as more legible text for example, on LCD video monitors. Although the structure and operation of display devices in general, and flat panel display devices, such as LCD monitors for example, in particular, are known by those skilled in the art, they are discussed in § 1.2.1 below for the reader's convenience. Then, known ways of rendering text, line art and graphics on such displays are discussed in §§ 1.2.2, 1.2.3 and 1.2.4 below.
§ 1.2.1 Display Devices
Color display devices have become the principal display devices of choice for most computer users. Color is typically displayed on a monitor by operating the display device to emit light (such as a combination of red, green, and blue light for example) which results in one or more colors being perceived by the human eye.
Although color video monitors in general, and LCD video monitors in particular, are known to those skilled in the art, they are introduced below for the reader's convenience. In § 1.2.1.1 below, cathode ray tube (or CRT) video monitors are first introduced. Then, in § 1.2.1.2 below, LCD video monitors are introduced.
§ 1.2.1.1 CRT Video Monitors
Cathode ray tube (CRT) display devices include phosphor coatings which may be applied as dots in a sequence on the screen of the CRT. A different phosphor coating is normally associated with the generation of different colors, such as red, green, and blue for example. Consequently, repeated sequences of phosphor dots are defined on the screen of the video monitor. When a phosphor dot is excited by a beam of electrons, it will generate its associated color, such as red, green and blue for example.
The term “pixel” is commonly used to refer to one spot in a group of spots, such as a rectangular grid of thousands of such spots for example. The spots are selectively activated to form an image on the display device. In most color CRTs, a single triad of red, green and blue phosphor dots cannot be uniquely selected. Consequently, the smallest possible pixel size will depend on the focus, alignment and bandwidth of the electron guns used to excite the phosphor dots. The light emitted from one or more triads of red, green and blue phosphor dots, in various arrangements known for CRT displays, tends to blend together, giving, at a distance, the appearance of a single colored light source.
In color displays, the intensity of the light emitted from the additive primary colors (such as red, green, and blue) can be varied to achieve the appearance of almost any desired color pixel. Adding no color, i.e., emitting no light, produces a black pixel. Adding 100 percent of all three (3) colors produces a white pixel.
Having introduced color CRT video monitors, color LCD video monitors are now introduced in § 1.2.1.2 below.
§ 1.2.1.2 LCD Video Monitors
Portable computing devices (also referred to generally as computing appliances or untethered computing appliances) often use liquid crystal displays (LCDs) or other flat panel display devices, instead of CRT displays. This is because flat panel displays tend to be smaller and lighter than CRT displays. In addition, flat panel displays are well suited for battery powered applications since they typically consume less power than comparably sized CRT displays. Further, LCD flat panel monitors are even becoming more popular in the desktop computing environment.
Color LCD displays are examples of display devices which distinctly address elements (referred to herein as pixel sub-components, pixel sub-elements, or simply, emitters) to represent each pixel of an image being displayed. Normally, each pixel element of a color LCD display includes three (3) non-square elements. More specifically, each pixel element may include adjacent red, green and blue (RGB) pixel sub-components. Thus, a set of RGB pixel sub-components together define a single pixel element.
Known LCD displays generally include a series of RGB pixel sub-components which are commonly arranged to form stripes along the display. The RGB stripes normally run the entire length of the display in one direction. The resulting RGB stripes are sometimes referred to as “RGB striping”. Common LCD monitors used for computer applications, which are wider than they are tall, tend to have RGB vertical stripes. Naturally, however, some LCD monitors may have RGB horizontal stripes.
FIG. 1 illustrates a known LCD screen 100 comprising pixels arranged in a plurality of rows (R1-R12) and columns (C1-C16). That is, a pixel is defined at each row-column intersection. Each pixel includes a red pixel sub-component, depicted with moderate stippling, a green component, depicted with dense stippling, and a blue component, depicted with sparse stippling. FIG. 2 illustrates the upper left hand portion of the known display 100 in greater detail. Note how each pixel element, such as the (R2, C4) pixel element for example, comprises three (3) distinct sub-elements or sub-components: a red sub-component 206, a green sub-component 207 and a blue sub-component 208. In the exemplary display illustrated, each known pixel sub-component 206, 207, 208 is ⅓, or approximately ⅓, the width of a pixel while being equal, or approximately equal, in height to the height of a pixel. Thus, when combined, the three ⅓ width, full height, pixel sub-components 206, 207, 208 define a single pixel element.
As illustrated in FIG. 1, one known arrangement of RGB pixel sub-components 206, 207, 208 forms what appear to be vertical color stripes on the display 100. Accordingly, the arrangement of ⅓ width color sub-components 206, 207, 208, in the known manner illustrated in FIGS. 1 and 2, exhibits what is sometimes called “vertical striping”.
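The vertically striped geometry just described can be sketched as follows; the column indexing and function name are illustrative only, not part of any known display interface.

```python
# Model of the vertically striped RGB layout of FIGS. 1 and 2: each pixel
# is divided into three full-height sub-components, each 1/3 pixel wide.
def subpixel_bounds(column):
    """Return the (left, right) x-extents, in pixel units, of the R, G and B
    sub-components of the pixel in the given column (0-based)."""
    left = float(column)
    third = 1.0 / 3.0
    return {
        "red":   (left,             left + third),
        "green": (left + third,     left + 2 * third),
        "blue":  (left + 2 * third, left + 1.0),
    }

bounds = subpixel_bounds(3)
# The red sub-component of the pixel in column C4 (index 3) spans
# x = 3.0 to x = 3 1/3; the blue sub-component ends at x = 4.0.
```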
In known systems, the RGB pixel sub-components are generally used as a group to generate a single colored pixel corresponding to a single sample of the image to be represented. More specifically, in known systems, luminous intensity values for all the pixel sub-components of a pixel element are generated from a single sample of the image to be rendered.
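The single-sample approach just described might be sketched as follows; the emitter naming is illustrative.

```python
# In the known approach described above, one image sample drives all three
# sub-components of a pixel, so the pixel displays a single blended color.
def render_pixel_from_sample(sample_rgb):
    """Map one (r, g, b) image sample onto a pixel's three emitters."""
    r, g, b = sample_rgb
    # All three luminous intensity values come from the SAME sample point;
    # the sub-components are not treated as independent image samples.
    return {"red_emitter": r, "green_emitter": g, "blue_emitter": b}

pixel = render_pixel_from_sample((0.2, 0.5, 0.9))
```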
Having introduced the general structure and operation of known LCD displays, known techniques for rendering text on such LCD displays, as well as perceived shortcomings of such known techniques, are introduced in § 1.2.2 below. Then, known techniques for rendering line art or images on such LCD displays, as well as perceived shortcomings of such known techniques, are introduced in § 1.2.3 below. Finally, rendering graphics is introduced in § 1.2.4 below.
§ 1.2.2 Rendering Text on Displays
The expression of textual information using font sets is introduced in § 1.2.2.1 below. Then, the rendering of textual information using so-called pixel precision, and perceived shortcomings of doing so, are introduced in § 1.2.2.2 below.
§ 1.2.2.1 Font Sets
A “font” is a set of characters of the same typeface (such as Times Roman, Courier New, etc.), the same style (such as italic), the same weight (such as bold) and, strictly speaking, the same size. Characters may include symbols, such as the “Parties MT”, “Webdings”, and “Wingdings” symbol groups found on the Word™ word processor from Microsoft Corporation of Redmond, Wash. for example. A “typeface” is a specific named design of a set of printed characters (e.g., Helvetica Bold Oblique) that has a specified obliqueness (i.e., degree of slant) and stroke weight (i.e., line thickness). Strictly speaking, a typeface is not the same as a font, which is a specific size of a specific typeface (such as 12-point Helvetica Bold Oblique). However, since some fonts are “scalable”, the terms “font” and “typeface” may sometimes be used interchangeably. A “typeface family” is a group of related typefaces. For example, the Helvetica family may include Helvetica, Helvetica Bold, Helvetica Oblique and Helvetica Bold Oblique.
Many modern computer systems use font outline technology, such as scalable fonts for example, to facilitate the rendering and display of text. TrueType™ fonts from Microsoft Corporation of Redmond, Wash. are an example of such technology. In such systems, various font sets, such as “Times New Roman,” “Onyx,” “Courier New,” etc. for example, may be provided. The font set normally includes an analytic outline representation, such as a series of contours for example, for each character which may be displayed using the provided font set. The contours may be straight lines or curves for example. Curves may be defined by a series of points that describe second order Bezier-splines for example. The points defining a curve are typically numbered in consecutive order. The ordering of the points may be important. For example, the character outline may be “filled” to the right of curves when the curves are followed in the direction of increasing point numbers. Thus the analytic character outline representation may be defined by a set of points and mathematical formulas.
The point locations may be described in “font units” for example. A “font unit” may be defined as the smallest measurable unit in an “em” square, which is an imaginary square that is used to size and align glyphs (a “glyph” can be thought of as a character). FIG. 3 illustrates an “em” square 310 around a character outline 320 of the letter Q. Historically, an “em” was approximately equal to the width of a capital M. Further, historically, glyphs could not extend beyond the em square. More generally, however, the dimensions of an “em” square are those of the full body height 340 of a font plus some extra spacing. This extra spacing was provided to prevent lines of text from colliding when typeset without extra leading. Further, in general, portions of glyphs can extend outside of the em square. The coordinates of the points defining the lines and curves (or contours) may be positioned relative to a baseline 330 (Y coordinate=0). The portion of the character outline 320 above the baseline 330 is referred to as the “ascent” 342 of the glyph. The portion of the character outline 320 below the baseline 330 is referred to as the “descent” 344 of the glyph. Note that in some languages, such as Japanese for example, the characters sit on the baseline, with no portion of the character extending below the baseline.
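The vertical metrics just described can be illustrated with a small sketch; the particular ascent and descent values are hypothetical examples for a 2048 units-per-em font.

```python
# Sketch of em-square vertical metrics: the ascent lies above the baseline,
# the descent below it, and their sum approximates the full body height.
def body_height(ascent, descent):
    """Full body height, in font units, of a font whose ascent and descent
    are given (descent expressed as a positive distance below the baseline)."""
    return ascent + descent

# Hypothetical metrics for a 2048 units-per-em font:
height = body_height(ascent=1638, descent=410)
# 1638 + 410 = 2048 font units, i.e. one em.
```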
The stored outline character representation normally does not represent space beyond the maximum horizontal and vertical boundaries of the character (also referred to as “white space” or “side bearings”). Therefore, the stored character outline portion of a character font is often referred to as a black body (or BB). A font generator is a program for transforming character outlines into bitmaps of the style and size required by an application. Font generators (also referred to as “rasterizers”) typically operate by scaling a character outline to a requested size and can often expand or compress the characters that they generate.
In addition to stored black body character outline information, a character font normally includes black body size, black body positioning, and overall character width information. Black body size information is sometimes expressed in terms of the dimensions of a bounding box used to define the vertical and horizontal borders of the black body.
Certain terms used to define a character are now defined with reference to FIG. 4, which illustrates character outlines of the letters A and I 400. Box 408 is a bounding box which defines the size of the black body 407 of the character (A). The total width of the character (A), including white space to be associated with the character (A), is denoted by an advance width (or AW) value 402. The advance width typically starts to a point left of the bounding box 408. This point 404 is referred to as the left side bearing point (or LSBP). The left side bearing point 404 defines the horizontal starting point for positioning the character (A) relative to a current display position. The horizontal distance 410 between the left end of the bounding box 408 and the left side bearing point 404 is referred to as the left side bearing (or LSB). The left side bearing 410 indicates the amount of white space to be placed between the left end of the bounding box 408 of a current character (A) and the right side bearing point of the preceding character (not shown). The point 406 to the right of the bounding box 408 at the end of the advance width 402 is referred to as the right side bearing point (or RSBP). The right side bearing point 406 defines the end of the current character (A) and the point at which the left side bearing point 404′ of the next character (I) should be positioned. The horizontal distance 412 between the right end of the bounding box 408 and the right side bearing point 406 is referred to as the right side bearing (or RSB). The right side bearing 412 indicates the amount of white space to be placed between the right end of the bounding box 408 of a current character (A) and the left side bearing point 404′ of the next character (I). Note that the left and right side bearings may have zero (0) or negative values. 
Note also that in characters used in Japanese and other Far Eastern languages, metrics analogous to advance width, left side bearing and right side bearing—namely, advance height (AH), top side bearing (TSB) and bottom side bearing (BSB)—may be used.
As discussed above, a scalable font file normally includes black body size, black body positioning, and overall character width information for each supported character. The black body size information may include horizontal and vertical size information expressed in the form of bounding box 408 dimensions. The black body positioning information may be expressed as a left side bearing value 410. Overall character width information may be expressed as an advance width 402.
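The horizontal metrics of FIG. 4 can be tied together in a short sketch. The function and values are illustrative, but the relationships follow the definitions above: the advance width equals the left side bearing plus the black body width plus the right side bearing, and the next character's left side bearing point sits at the current character's right side bearing point.

```python
# Sketch of the horizontal character metrics of FIG. 4 (names illustrative).
def layout(pen_x, advance_width, left_side_bearing, black_body_width):
    """Return the black body's left edge, the right side bearing, and the
    next pen position (the next character's left side bearing point)."""
    bb_left = pen_x + left_side_bearing
    right_side_bearing = advance_width - left_side_bearing - black_body_width
    next_pen_x = pen_x + advance_width
    return bb_left, right_side_bearing, next_pen_x

bb_left, rsb, next_pen = layout(pen_x=100, advance_width=14,
                                left_side_bearing=2, black_body_width=10)
# bb_left = 102, rsb = 2, next_pen = 114
```

Note that, as the text observes, the side bearings may be zero or negative; the arithmetic above holds in those cases as well.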
§ 1.2.2.2 Rendering Text to Pixel Precision
In the following, known techniques for rendering text on an output device such as a display (or printer) are described in § 1.2.2.2.1. Then, an example illustrating round-off errors which may occur when using such known techniques is described in § 1.2.2.2.2.
§ 1.2.2.2.1 Technique for Rendering Text
FIG. 5 is a high level diagram of processes that may be performed when an application requests that text be rendered on a display device. Basically, as will be described in more detail below, text may be rendered by: (i) loading a font and supplying it to a rasterizer; (ii) scaling the font outline based on the point size and the resolution of the display device; (iii) applying hints to the outline; (iv) filling the grid fitted outline with pixels to generate a raster bitmap; (v) scanning for dropouts (optional); (vi) caching the raster bitmap; and (vii) transferring the raster bitmap to the display device.
In the case of scaling fonts, the font unit coordinates used to define the position of points defining contours of a character outline are scaled to device specific pixel coordinates. That is, when the resolution of the em square is used to define a character outline, before that character can be displayed, it must be scaled to reflect the size, transformation and the characteristics of the output device on which it is to be rendered. The scaled outline describes the character outline in units that reflect the absolute unit of measurement used to measure pixels of the output device, rather than the relative system of measurement of font units per em. Specifically, with known techniques, values in the em square are converted to values in the pixel coordinate system in accordance with the following formula:
pixel coordinate value=(character outline size×point size×device resolution)/(72 points per inch×font units per em)

where the character outline size is in font units, and output device resolution is in pixels/inch.
The resolution of the output device may be specified by the number of dots or pixels per inch (dpi). For example, a VGA video monitor may be treated as a 96 dpi device, a laser printer may be treated as a 300 dpi device, an EGA video monitor may be treated as a 96 dpi device in the horizontal (X) direction, but a 72 dpi device in the vertical (Y) direction. The font units per em may (but need not) be chosen to be a power of two (2), such as 2048 (=211) for example.
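The conversion just given can be sketched as follows, using the standard 72 points per inch and a 2048 font-unit em; the function name is illustrative.

```python
# Sketch of the font-unit to pixel scaling described above (standard
# TrueType-style conversion).
def font_units_to_pixels(value_font_units, point_size, resolution_dpi,
                         units_per_em=2048):
    """Scale a font-unit coordinate to device pixels.

    72 points per inch relates the point size to physical inches; the device
    resolution (pixels/inch) then converts inches to pixels.
    """
    return value_font_units * point_size * resolution_dpi / (72 * units_per_em)

# A full-em distance (2048 font units) at 12 pt on a 96 dpi display:
pixels = font_units_to_pixels(2048, 12, 96)
# 2048 * 12 * 96 / (72 * 2048) = 16.0 pixels
```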
FIG. 5 is a high level diagram of processes which may be performed by a known text rendering system. As shown in FIG. 5, an application process 510, such as a word processor or contact manager for example, may request that text be displayed and may specify a point size for the text. Although not shown in FIG. 5, the application process 510 may also request a font name, background and foreground colors and a screen location at which the text is to be rendered. The text and, if applicable, the point size, 512 are provided to a graphics display interface (or GDI) process (or more generally, a graphics display interface) 522. The GDI process 522 uses display information 524 (which may include such display resolution information as pixels per inch on the display) and character information 525 (which may include character outline information represented as points defining a sequence of contours such as lines and curves, advance width information and left side bearing information) to generate glyphs (or to access cached glyphs which have already been generated). Glyphs may include a bitmap of a scaled character outline (or a bounding box 408 containing black body 407 information), advance width 402 information, and left side bearing 410 information. Each of the bits of the bitmap may have associated red, green and blue luminous intensity values. The graphics display interface process 522 is described in more detail in § 1.2.2.2.1.1 below. The graphics display interface process 522, the display information 524, and the glyph cache 526 may be a part of, and effected by, an operating system, such as the Windows® CE or Windows NT® operating systems (from Microsoft Corporation of Redmond, Wash.) for example.
Glyphs (also referred to as digital font representations) 528′ or 528, either from the glyph cache 526 or from the graphics display interface process 522, are then provided to a display driver management process (or more generally, a display driver manager) 535. The display driver management process 535 may be a part of a display (or video) driver 530. Typically, a display driver 530 may be software which permits a computer operating system to communicate with a particular video display. Basically, the display driver management process 535 may invoke a color palette selection process 538. These processes 535 and 538 serve to convert the character glyph information into the actual pixel intensity values. The display driver management process 535 receives, as input, glyphs and display information 524′. The display information 524′ may include, for example, foreground/background color information, color palette information and pixel value format information.
The processed pixel values may then be forwarded as video frame part(s) 540 along with screen (and perhaps window) positioning information (e.g., from the application process 510 and/or operating system), to a display (video) adapter 550. A display adapter 550 may include electronic components that generate a video signal sent to the display 560. A frame buffer process 552 may be used to store the received video frame part(s) in a screen frame buffer 554 of the display adapter 550. Using the screen frame buffer 554 allows a single image of, e.g., a text string, to be generated from glyphs representing several different characters. The video frame(s) from the screen frame buffer 554 is then provided to a display adaptation process 558 which adapts the video for a particular display device. The display adaptation process 558 may also be effected by the display adapter 550.
Finally, the adapted video is presented to the display device 560, such as an LCD display for example, for rendering.
Having provided an overview of a text rendering system, the graphics display interface process 522 is now described in more detail in § 1.2.2.2.1.1 below. The processes which may be performed by the display driver are then described in more detail in § 1.2.2.2.1.2 below.
§ 1.2.2.2.1.1 Graphics Display Interface
FIG. 6 illustrates processes that may be performed by a graphics display interface (or GDI) process 522, as well as data that may be used by the GDI process 522. As shown in FIG. 6, the GDI process 522 may include a glyph cache management process (or more generally, a glyph cache manager) 610 which accepts text, or more specifically, requests to display text, 512. The request may include the point size of the text. The glyph cache management process 610 forwards this request to the glyph cache 526. If the glyph cache 526 includes the glyph corresponding to the requested text character, it provides it for downstream processing. If, on the other hand, the glyph cache 526 does not have the glyph corresponding to the requested text character, it so informs the glyph cache management process 610 which, in turn, submits a request to generate the needed glyph to the type rasterization process (or more generally, a type rasterizer) 620. Basically, a type rasterization process 620 may be effected by hardware and/or software and converts a character outline (which may, recall, include points which define contours such as lines and curves based on mathematical formulas) into a raster (that is, a bitmapped) image. Each pixel of the bitmap image may have a color value and a brightness for example. A type rasterization process is described in § 1.2.2.2.1.1.1 below.
§ 1.2.2.2.1.1.1 Rasterizer
To reiterate, the type rasterization process 620 basically transforms character outlines into bitmapped images. The scale of the bitmap may be based on the point size of the font and the resolution (e.g., pixels per inch) of the display device 560. The text, font, and point size information may be obtained from the application 510, while the resolution of the display device 560 may be obtained from a system configuration or display driver file or from monitor settings stored in memory by the operating system. The display information 524 may also include foreground/background color information, gamma values, color palette information and/or display adapter/display device pixel value format information. To reiterate, this information may be provided from the graphics display interface 522 in response to a request from the application process 510. If, however, the background of the text requested is to be transparent (as opposed to opaque), the background color information is whatever is being rendered on the display (such as a bitmap image or other text for example) and is provided from the display device 560 or the video frame buffer 554.
Basically, the rasterization process may include two (2) or three (3) sub-steps or sub-processes. First, the character outline is scaled using a scaling process 622. This process is described below. Next, the scaled image generated by the scaling process 622 may be placed on a grid and have portions extended or shrunk using a hinting process 626. This process is also described below. Then, an outline fill process 628 is used to fill the grid-fitted outline to generate a raster bitmap. This process is also described below.
When scaling fonts in conventional systems such as TrueType™ from Microsoft Corporation of Redmond, Wash., the font unit coordinates used to define the position of points defining contours of a character outline were scaled to device specific pixel coordinates. That is, since the resolution of the em square was used to define a character outline, before that character could be displayed, it was scaled to reflect the size, transformation and the characteristics of the output device on which it was to be rendered. Recall that the scaled outline describes the character outline in units that reflect the absolute unit of measurement used to measure pixels of the output device, rather than the relative system of measurement of font units per em. Thus, recall that values in the em square were converted to values in the pixel coordinate system in accordance with the following formula:
pixel coordinate value=(character outline size×point size×device resolution)/(72 points per inch×font units per em)

where the character outline size is in font units, and output device resolution is in pixels/inch.
Recall that the resolution of an output device may be specified by the number of dots or pixels per inch (dpi).
The purpose of hinting (also referred to as “instructing a glyph”) is to ensure that critical characteristics of the original font design are preserved when the glyph is rendered at different sizes and on different devices. Consistent stem weights, consistent “color” (that is, in this context, the balance of black and white on a page or screen), even spacing, and avoiding pixel dropout are common goals of hinting. In the past, uninstructed, or unhinted, fonts would generally produce good quality results at sufficiently high resolutions and point sizes. However, for many fonts, legibility may become compromised at smaller point sizes on lower resolution displays. For example, at low resolutions, with few pixels available to describe the character shapes, features such as stem weights, crossbar widths and serif details can become irregular or inconsistent, or can even be missed completely.
Basically, hinting may involve “grid placement” and “grid fitting”. Grid placement is used to align a scaled character within a grid that is used by a subsequent outline fill process 628, in a manner intended to optimize the accurate display of the character using the available sub-pixel elements. Grid fitting involves distorting character outlines so that the character better conforms to the shape of the grid. Grid fitting ensures that certain features of the glyphs are regularized. Since the outlines are only distorted at a specified number of smaller sizes, the contours of the fonts at high resolutions remain unchanged and undistorted.
In grid placement, sub-pixel element boundaries may be treated as boundaries along which characters can, and should, be aligned or boundaries to which the outline of a character should be adjusted.
Other known hinting instructions may also be carried out on the scaled character outline.
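Grid fitting of a vertical stem, as described above, might be sketched as follows. The snapping rule and minimum-width safeguard are a simplified illustration only, not the actual TrueType hinting instructions.

```python
# Sketch of grid fitting: a stem's edges are moved to nearby grid
# boundaries so the stem renders at a consistent whole-unit width.
def grid_fit_stem(left_edge, right_edge, grid=1.0):
    """Snap both edges of a stem to grid boundaries, keeping at least one
    grid unit of width so the stem cannot drop out entirely."""
    fitted_left = round(left_edge / grid) * grid
    fitted_right = round(right_edge / grid) * grid
    if fitted_right <= fitted_left:
        fitted_right = fitted_left + grid   # avoid a zero-width stem (dropout)
    return fitted_left, fitted_right

# A stem from x = 3.3 to x = 4.4 snaps to x = 3.0 .. 4.0 on a whole-pixel grid.
stem = grid_fit_stem(3.3, 4.4)
```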
In an implementation of anti-aliased text for TrueType™ fonts supported in Windows NT™ 4, the hinted image 627 is overscaled four (4) times in both the X and Y directions. The image is then sampled. That is, for every physical pixel, which is represented by a 4-by-4 portion of the grid in the overscaled image, the blend factor alpha is computed by counting the squares having centers which lie within the glyph outline and dividing the result by 16. As a result, the foreground/background blend factor alpha is expressed as k/16 and is computed for every pixel. This whole process is also called standard anti-aliasing filtering. Unfortunately, however, such standard anti-aliasing tends to blur the image. A similar implementation exists in Windows 95 and Windows 98; the only difference is that the image is overscaled two (2) times in both X and Y, so that alpha for every pixel is expressed as k/4, where k is the number of squares within the glyph outline.
The outline fill process 628 basically determines whether the center of each pixel is enclosed within the character outline. If the center of a pixel is enclosed within the character outline, that pixel is turned ON. Otherwise, the pixel is left OFF. The problem of “pixel dropout” may occur whenever a connected region of a glyph interior contains two ON pixels that cannot be connected by a straight line that passes through only those ON pixels. Pixel dropout may be overcome by looking at an imaginary line segment connecting two adjacent pixel centers, determining whether the line segment is intersected by both an on-transition contour and an off-transition contour, determining whether the two contour lines continue in both directions to cut other line segments between adjacent pixel centers and, if so, turning pixels ON.
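The pixel-center fill rule can be sketched as follows; the circular “outline” is an illustrative stand-in for a character contour, and the predicate-based interface is an assumption of this sketch.

```python
# Sketch of the center-sample outline fill described above: a pixel is ON
# if and only if its center falls inside the character outline.
def fill(width, height, inside):
    """Rasterize a region using the pixel-center rule; `inside(x, y)` tests
    membership in the character outline."""
    bitmap = []
    for row in range(height):
        bits = []
        for col in range(width):
            # Pixel centers sit at half-integer coordinates.
            bits.append(1 if inside(col + 0.5, row + 0.5) else 0)
        bitmap.append(bits)
    return bitmap

# A disc of radius 1.6 centered in a 4x4 grid stands in for a glyph interior.
bitmap = fill(4, 4, lambda x, y: (x - 2) ** 2 + (y - 2) ** 2 <= 1.6 ** 2)
```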
The rasterized glyphs are then cached in glyph cache 526. Caching glyphs is useful. More specifically, since most Latin fonts have only about 200 characters, a reasonably sized cache makes the speed of the rasterizer almost meaningless. This is because the rasterizer runs once, for example when a new font or point size is selected. Then, the bitmaps are transferred out of the glyph cache 526 as needed.
The scaling process 622 of the known system just described may introduce certain rounding errors. Constraints are enforced by (i) scaling the size and positioning information included in a character font as a function of the point size and device resolution as just described above, and (ii) then rounding the size and positioning values to integer multiples of the pixel size used in the particular display device. Using pixel size units as the minimum (or “atomic”) distance unit produces what is called “pixel precision” since the values are accurate to the size of one (1) pixel.
Rounding size and positioning values of character fonts to pixel precision introduces changes, or errors, into displayed images. Each of these errors may be up to ½ a pixel in size (assuming that values less than ½ a pixel are rounded down and values greater than or equal to ½ a pixel are rounded up). Thus, the overall width of a character may be less precise than desired since the character's AW is (may be) rounded. In addition, the positioning of a character's black body within the total horizontal space allocated to that character may be sub-optimal since the left side bearing is (may be) rounded. At small point sizes, the changes introduced by rounding using pixel precision can be significant.
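A small sketch of pixel-precision rounding and the error it introduces; the round-half-up rule matches the assumption stated above, and the function name is illustrative.

```python
import math

# Sketch of pixel precision: values are rounded to integer multiples of the
# pixel size, so each rounded value can be off by up to 1/2 pixel.
def to_pixel_precision(value_pixels):
    """Round a size or position to whole pixels; values of exactly .5 round
    up, as assumed in the text."""
    return math.floor(value_pixels + 0.5)

advance_width = 7.4                      # ideal advance width, in pixels
rounded = to_pixel_precision(advance_width)
error = rounded - advance_width
# rounded = 7, error = -0.4 pixels; at small point sizes such per-character
# errors are a large fraction of the character width and accumulate across
# a line of text.
```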
§ 1.2.3 Rendering Line Drawings
As was the case when scaling character outlines, when rendering line drawings, the boundaries between the (black) line portions and the (white) background are typically forced to correspond to pixel boundaries. This may be done by rounding the position values of the (black) line portions to integer multiples of the pixel size used in the particular display device. Referring to FIG. 7, this may be done by a scaling process 710 which accepts analytic image information 702 and generates pixel resolution digital image information 728. To reiterate, using pixel size units as the minimum (or “atomic”) positioning unit produces what is called “pixel precision” since the position values are accurate to the size of one (1) pixel.
Rounding position values for line drawings to pixel precision introduces changes, or errors, into displayed images. Each of these errors may be up to ½ a pixel in size (assuming that values less than ½ a pixel are rounded down and values greater than or equal to ½ a pixel are rounded up). Thus, the overall width of a line section may be less precise than desired since the width or weight of the line is (may be) rounded.
§ 1.2.4 Rendering Graphics
Similar to text and line drawings, certain graphics, represented analytically or by a resolution higher than that of a display device 560, may have to be scaled and rounded to correspond to the resolution of the display device 560. Referring to FIG. 7, this may be done by a scaling process 710 which accepts ultra resolution digital image information 704 and generates pixel resolution digital image information. Thus, rounding errors can be introduced here as well.
§ 1.2.5 Unmet Needs
In view of the errors introduced when character values, line drawings, or high-resolution or analytic graphics are rounded to pixel precision, as introduced above, methods and apparatus are needed to improve character spacing and positioning, to increase the legibility and perceived quality of text, to improve the resolution of line drawings, and/or to improve the resolution of images. Such methods and apparatus should not blur the image, as occurs when standard anti-aliasing is used.
§ 2. SUMMARY OF THE INVENTION
The present invention improves the resolution of images (either analog images, analytic images, or images having a higher resolution than that of a display device) to be rendered on patterned displays. In one aspect of the present invention, an overscaling or oversampling process may accept analytic character information, such as contours for example, and a scale factor or grid, and overscale or oversample the analytic character information to produce an overscaled or oversampled image. The overscaled or oversampled image generated has a higher resolution than the display upon which the character is to be rendered. If, for example, the display is an RGB-striped LCD monitor, the ultra-resolution image may have a resolution corresponding to the sub-pixel component resolution of the display, or an integer multiple thereof. For example, if a vertically striped RGB LCD monitor is to be used, the ultra-resolution image 704 may have a pixel resolution in the Y direction and a ⅓ (or 1/(3N), where N is an integer) pixel resolution in the X direction. If, on the other hand, a horizontally striped RGB LCD monitor is to be used, the ultra-resolution image may have a pixel resolution in the X direction and a ⅓ (or 1/(3N)) pixel resolution in the Y direction. Then, a process for combining displaced samples of the ultra-resolution image 624′ may be used to generate another ultra-resolution image (or an image with sub-pixel information) which is then cached. The cached character information may then be accessed by a compositing process which uses foreground and background color information.
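For a vertically striped RGB display, the oversampling described above can be sketched as follows. This is an illustrative sketch only: the glyph interval, sample positions, and function names are assumptions, not the specification's method. It samples a stand-in analytic image at three positions per pixel in the X direction (⅓-pixel precision), then groups each run of three samples so that each sample can drive one of the pixel's R, G, and B sub-pixel components.

```python
# Hypothetical sketch: 3x horizontal oversampling of analytic character
# information for a vertically striped RGB display.
def inside_glyph(x: float) -> bool:
    """Stand-in for analytic character information: a 'black body'
    covering the interval [1.2, 3.8) in pixel units."""
    return 1.2 <= x < 3.8

def oversample_row(width_px: int, factor: int = 3):
    """Take `factor` coverage samples per pixel along one scan line
    (pixel precision in Y, 1/3-pixel precision in X when factor == 3)."""
    return [1.0 if inside_glyph((i + 0.5) / factor) else 0.0
            for i in range(width_px * factor)]

def to_subpixels(samples):
    """Group each run of three samples into one pixel's (R, G, B)
    sub-pixel coverage values."""
    return [tuple(samples[i:i + 3]) for i in range(0, len(samples), 3)]

row = oversample_row(5)       # 15 samples across 5 pixels
pixels = to_subpixels(row)    # 5 pixels, each with R, G, B coverage
```

The resulting per-sub-pixel coverage values are what a downstream compositing process would blend with foreground and background colors.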
An analytic image, such as a line drawing for example, may be applied to the oversampling/overscaling process as was the case with the character analytic image. However, since the analytic image may have different units than that of the character analytic image, the scale factor applied may be different. In any event, the downstream processes may be similarly applied.
Since an ultra-resolution image is already “digitized”, that is, not merely mathematically expressed contours or lines between points, it may be applied directly to a process for combining displaced samples of the ultra-resolution image to generate another ultra-resolution image (or an image with sub-pixel information). Downstream processing may then be similarly applied.
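One simple way to combine displaced samples of an ultra-resolution scan line is a sliding weighted average, in which each output value is formed from a window of input samples displaced by one sample position for each successive output. The sketch below is illustrative only; the (1, 2, 1) weights are an assumption for demonstration and are not taken from the specification.

```python
# Hypothetical sketch: combining displaced samples of an ultra-resolution
# scan line. Each output value averages a sample with its immediate
# neighbors, clamping at the ends of the line. Weights are illustrative.
WEIGHTS = (1, 2, 1)  # assumed low-pass weights, sum = 4

def combine_displaced(samples):
    """Replace each sample with a weighted average of itself and its two
    neighbors; the window shifts by one sample per output position."""
    out = []
    n = len(samples)
    for i in range(n):
        left = samples[max(i - 1, 0)]
        right = samples[min(i + 1, n - 1)]
        total = WEIGHTS[0] * left + WEIGHTS[1] * samples[i] + WEIGHTS[2] * right
        out.append(total / sum(WEIGHTS))
    return out
```

Applied to a hard edge such as `[0, 0, 4, 4]`, this yields intermediate values at the transition, which is the sub-pixel information the downstream compositing process can exploit.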
In one embodiment of the present invention, the functionality of the overscaling/oversampling process and the processes for combining displaced samples may be combined into a single-step analytic-to-digital sub-pixel resolution conversion process.
§ 3. BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1 and 2 illustrate vertical striping in a conventional RGB LCD display device.
FIGS. 3 and 4 illustrate certain font technology terms.
FIG. 5 illustrates processes that may be performed in a font or character rendering system in which the present invention may be implemented.
FIG. 6 illustrates processes that may be performed in a graphics display interface.
FIG. 7 illustrates processes that may be performed in a line art or graphics rendering system in which the present invention may be implemented.
FIG. 8 illustrates processes that may be used to effect various aspects of the present invention.
FIG. 9 illustrates an overscaling process operating on character outline information.
FIG. 10 is a block diagram of a computer architecture which may be used to implement various aspects of the present invention.
FIG. 11 illustrates the operation of an ideal analog to digital sub-pixel conversion method.
FIG. 12 is a high level flow diagram of that method.
FIG. 13 illustrates the operation of a disfavored downsampling method.
FIG. 14 is a high level flow diagram of that method.
FIG. 15 illustrates the operation of a method for deriving sub-pixel element information from color scan lines.
FIG. 16 is a high level flow diagram of that method.
FIG. 17 illustrates the operation of an alternative method for deriving sub-pixel element information from color scan lines.
FIG. 18 is a high level flow diagram of that method.
FIG. 19 illustrates the operation of a method for deriving sub-pixel element information from blend coefficient information, as well as foreground and background color information.
FIG. 20 is a high level flow diagram of that method.
FIG. 21 illustrates the operation of a method for deriving sub-pixel element information from blend coefficient samples, as well as foreground and background color information.
FIG. 22 is a high level flow diagram of that method.
FIG. 23 illustrates the operation of an alternative method for deriving sub-pixel element information from blend coefficient samples, as well as foreground and background color information.
FIG. 24 is a high level flow diagram of that method.
FIG. 25 illustrates the operation of a method for deriving sub-pixel element information from blend coefficient samples, as well as foreground and background color information, where the foreground and/or background color information may vary based on the position of a pixel within the image.
FIG. 26 is a high level flow diagram of that method.
FIG. 27 illustrates the operation of an alternative method for deriving sub-pixel element information from blend coefficient samples, as well as foreground and background color information, where the foreground and/or background color information may vary based on the position of a pixel within the image.
FIG. 28 is a high level flow diagram of that method.
FIG. 29 is a high level block diagram of a machine which may be used to implement various aspects of the present invention.
FIG. 30 illustrates samples derived from a portion of an overscaled character outline.
FIG. 31 illustrates the operations of alternative sample combination techniques.