WO1982004146A1 - Pictorial information processing technique - Google Patents

Pictorial information processing technique

Info

Publication number
WO1982004146A1
WO1982004146A1 (application PCT/US1982/000671)
Authority
WO
WIPO (PCT)
Prior art keywords
picture
display
elements
pels
pel
Prior art date
Application number
PCT/US1982/000671
Other languages
French (fr)
Inventor
Western Electric Co
James Richard Fleming
William Armand Frezza
Gerald Steven Soloway
Original Assignee
Western Electric Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Western Electric Co
Priority to AU86809/82A (AU8680982A)
Publication of WO1982004146A1

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M11/00 Telephonic communication systems specially adapted for combination with other electrical systems
    • H04M11/08 Telephonic communication systems specially adapted for combination with other electrical systems specially adapted for optional reception of entertainment or informative matter
    • H04M11/085 Telephonic communication systems specially adapted for combination with other electrical systems specially adapted for optional reception of entertainment or informative matter using a television receiver, e.g. viewdata system
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/39 Control of the bit-mapped memory
    • G09G5/393 Arrangements for updating the contents of the bit-mapped memory
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/41 Bandwidth or redundancy reduction

Definitions

  • the present invention relates generally to the processing of pictorial information and, in particular, to a technique for encoding information describing a picture in a manner that permits its display on devices having different resolution characteristics.
  • Viewdata or “videotex” are generic terms which have been used to describe systems enabling two-way, interactive communication between the user and the data base, generally communicating via telephone lines and using an ordinary but specially adapted television set for display of pictorial information or text.
  • Teletext is another generic term used to describe one-way communication from the data base to the user, with transmission being accomplished in portions of the broadcast television spectrum and display again being on a specially adapted television screen. Both system types must have a large range of flexibility, since a number of alternatives exist with respect to various system components.
  • a television set may be a preferred display device for home use
  • different terminals, such as bi-level (plasma) displays, may access the data base in an office environment.
  • other communication channels such as dedicated coaxial, twisted pair cable, or satellite or land-based radio may interconnect the users who input or output information from the data base, and each type of channel has its own requirements and limitations.
  • pictorial information is processed using a unique encoding procedure and a corresponding decoding and display procedure which is efficient and which enables flexibility including display of the picture on a device independent of the device's resolution characteristics.
  • attributes of each element of a picture or image are specified by a group of N bits.
  • the bits are assembled to form M-bit active portions of a series of data words (bytes) .
  • M and N need not be integer multiples, i.e., evenly divisible by each other.
  • Some of the words also define the size of the elements with respect to a so-called "unit screen coordinate space" (the horizontal and vertical dimensions - dx and dy, respectively - of each element can be different) , the order in which the elements are arranged within the active portion of the unit screen (e.g., left to right or vice versa) , the value of N, as well as information specifying the boundary values or dimensions of the active area and the starting point in the active area for the first pel in the sequence being processed.
  • the attributes of each picture element are extracted from the data word stream and the location of the pel is determined with respect to the unit screen coordinate space.
  • the frame memory includes an array of storage locations which are read sequentially in order to actually display the picture on a display device. During mapping, if a succeeding pel is entered in one or more previously written storage locations in the frame memory, the existing contents of that memory element are overwritten.
  • FIG. 1 is an overall block diagram of an image display system which employs the principles of the present invention
  • FIG. 2 illustrates the spatial relationships of the picture elements and scan lines in a picture being encoded
  • FIG. 3 is a block diagram of one embodiment of pictorial information encoding apparatus arranged to implement the processing technique of the present invention
  • FIG. 4 illustrates a series of bytes used in communication between the apparatus of FIG. 3 and the terminal of FIG. 1;
  • FIG. 5 illustrates an example of the types and sequence of the information encoded in the apparatus of FIG. 3 in accordance with the present invention
  • FIG. 6 graphically illustrates the coordinate space established by the unit screen and active drawing area used to define encoded pictorial information
  • FIG. 7 is a flowchart illustrating the steps in a procedure in accordance with the present invention for drawing or locating the information representing the encoded picture with respect to the unit screen;
  • FIGS. 8 and 9 illustrate the mapping of logical pels of different sizes onto an array representing the physical display screen
  • FIG. 10 is a flowchart illustrating the steps in a procedure for mapping as illustrated in FIGS. 8 and 9.
  • Such a system includes a data processor 1 having bidirectional access to data bus 2.
  • Timing module 3 provides the clock signals required for system operation.
  • Timing module 3 also provides timing signals on video data bus 8 for use by video memory 4 and by digital to video converter 6.
  • Data processor 1 may be a microprocessor comprising program memory, read only memory 9 and scratch pad or random access memory 10. Data processor 1 responds to user input from a keyboard, light pen or other data input devices well known in the art. In its application with a viewdata or teletext terminal, data processor 1 may also respond to input provided from a remote or centralized data base such as one located at a television broadcast station or a provider of viewdata services. This input is received via communication line 11 and modem 13 or as a broadcast signal 12 via RF receiver 14 to communication interface 15. Input-output device 16 operates to control data from communications interface 15 or from peripheral device interface 17.
  • Data bus 2 is a bidirectional conduit through which data processor 1 controls video memory 4, timing generator 3 and video controller 6.
  • bus structures may be adapted for use in the present invention. Whichever specific bus structure is chosen, the bus generally comprises address capability, a data path and control lines which may include interrupt, reset, clock (for synchronous use), wait (for asynchronous use), and bus request lines.
  • Timing module 3 may provide the timing signals on both the data bus 2 and the video data bus 8 and may comprise a chain of programmable logic circuits, digital dividers and counters for providing required timing signal outputs. These may, of course, be incorporated into data processor 1. For operation of video data bus 8, a number of different timing signals are required. Horizontal and vertical drive signals are provided in accordance with horizontal and field rates respectively. A dot clock signal is provided at the dot frequency (picture element or PEL rate) of the system. An odd/even field signal indicates if the odd or even field is to be displayed in an interlaced system. A composite blanking signal indicates if video is being displayed or if vertical or horizontal retrace is occurring. Also, a group clock signal or other signals may be provided.
  • the group clock signal indicates when to access data for a new group of picture element data from memory.
  • picture element data in video memory having a slow access time may be serially provided in groups of 4, 8 or 16 picture elements.
  • a parallel data transmission scheme is possible, potentially increasing the requirements for leads of video data bus 8.
  • Digital to video converter 6 accepts digital image information from the video data bus 8, pel-by-pel, and converts the digital information, if necessary, to analog form for presentation on video display 7.
  • the video converter may be in modular form and comprises three components: (1) color map memory, (2) digital to analog conversion and sample and hold circuits, if required by video display 7, and (3) a standard composite video encoder (for example, for providing NTSC standard video or red, green, blue RGB outputs) and RF modulator (if necessary, for antenna lead-in access to a television set display 7) .
  • Video display 7 may either be a monitor or a television set or it may be other forms of graphic display known in the art, including a liquid crystal display, an LED display or a plasma panel display.
  • Pictorial information which is to be displayed on CRT 7 or any other display device within a terminal such as that shown in FIG. 1 is encoded for display by a user who may be remotely located from the terminal.
  • the user is furnished with certain information to guide him in making critical choices in the encoding procedure, but is advantageously allowed a great deal of flexibility.
  • the encoded information may be transmitted to the terminal of FIG. 1 (via communications line 11 or as a broadcast signal 12) , decoded in a manner described hereinafter, and eventually displayed on CRT 7 independent of its resolution characteristics.
  • An image or picture as shown in FIG. 2 can be encoded, for example, using the system shown in FIG. 3.
  • Camera 301 is arranged to scan picture 201 along a series of generally parallel scan lines, such as lines 202-1, 202-2 and 202-3, at a scan rate determined by a control signal applied to camera 301 on line 302a.
  • the spacing between adjacent lines, shown greatly exaggerated in FIG. 2, is also controllable by the user, via a control signal on line 302b.
  • the output of camera 301 on line 310 is an electrical signal which represents the attributes of picture 201, such as its luminance and chrominance, and it is these attributes which must be encoded, in the correct spatial relation, to enable reconstruction and display of picture 201.
  • the camera output is sampled by sample and hold circuit 303, at a rate determined by an input on line 304, and each such sample is converted from analog to digital form in an A/D converter 305.
  • Each parallel output from converter 305 on lines 306-1 to 306-n thus represents the attributes of a particular element (pel) of picture 201, and these multibit values are entered in a random access memory 310 via a processor 308.
  • the addresses at which the picture information is stored in memory 310 are provided by the processor, which may be part of a microcomputer or minicomputer such as a Digital Equipment Corp. PDP 11/23. Numerous alternative techniques exist for obtaining information describing elements in a picture, and these can also be used to obtain the values entered in memory 310.
  • The picture elements which are defined as a result of the scanning and sampling processes are illustrated in FIG. 2.
  • sampling may occur at instants marked by points 212a, 212b and 212c.
  • Corresponding vertically displaced sampling points 213a, 213b and 213c are also shown on scan line 202-3.
  • the picture element 222b corresponding to sampling point 212b is a rectangle of width x (measured in the direction of the scan lines) and height y (measured in the direction perpendicular to the scan lines), centered at point 212b.
  • the system user is enabled to independently define, together with the encoded picture attributes, information indicating the width and height dimensions of the picture elements relative to the overall size of a so-called "unit screen", which is described hereafter. This allows considerable flexibility when the picture is displayed, as will be illustrated by several examples.
  • the relative width and height values dx and dy are set at 1/P and 1/L, respectively, where P is the number of samples (of width x) on each of L scan lines (spaced from each other by distance y).
  • This establishes a 1 to 1 correspondence between each element of picture 201 and the elements, called logical pels, in the unit screen, and this insures that when the unit screen is subsequently mapped onto the physical display (such as CRT 7) the entire picture will be reproduced.
  • dx is still set at 1/P (since the full screen width is used) but dy is reduced to 1/2L, to compress the relative heights of each pel that will be displayed.
  • Information indicating the coordinates of a starting point Xs, Ys on the unit screen where the picture display is to begin (in this case, at the midpoint) is also encoded, as are coordinates (X0, Xb) and (Y0, Yb) indicating the vertical and horizontal boundaries of a rectangular "active region" within the unit screen, to which display is confined. This active area will be described more fully below.
  • dx and dy are signed, two's complement numbers, i.e., binary decimals where the most significant bit represents the digit just to the right of the decimal point.
  • when dx is positive, scanning is from left to right, while negative dx indicates the reverse; when dy is positive, scanning proceeds from bottom to top, and a negative dy indicates conventional top down scanning.
  • if dx is specified as 0.001010, the leftmost bit indicates a positive number of decimal value 1/8 + 1/32 = 0.15625, so scanning is from left to right and the width of each pel relative to the unit screen is 15.625%.
  • Information specifying the sign and magnitude of dx and dy, the values of other user specified variables such as Xs, Ys, X0, Xb, Y0 and Yb, as well as other control information described below, are entered in memory 310.
  • the input may be made on a keyboard 312, and entered via processor 308 in certain preselected locations in memory 310 which are later accessed when the information is read from the memory, assembled, and transmitted to the terminal of FIG. 1.
  • the data read from memory 310 may be conditioned for application to communication line 11 using a conventional modem 315.
  • One example illustrating the arrangement of data in accordance with the present invention, as it is read from memory 310 and applied to communication line 11, is shown in FIGS. 4 and 5.
  • the data consists of a series of multibit words (bytes) of a fixed predetermined length "agreed upon" in advance by the encoder of FIG. 3 and the decoder described hereinafter, at least with respect to each communication session.
  • a 7- or 8-bit code may be used, and the latter is illustrated in FIG. 4. It is noted that the basic coding scheme described herein is built upon the framework established by CCITT Recommendation S.100.
  • the first information conveyed (block 501) describes the sign and magnitude of dx and dy, thus defining the "logical pel" size and the scanning direction used to obtain the picture information. This information may require use of the active M-bit portions of several bytes.
  • the next type of user specified information (block 504) is called a packing count N, which is an unsigned integer that describes the number of consecutive bits to be taken from the active portions of successive picture attribute words (block 505) to make up a specification of a single pel.
  • M and N need not be integer multiples of each other, so that a given pel may be described in terms of bits derived from more than one byte. This arrangement is advantageous, in that flexibility is provided in the specification of the picture attribute information and the number of bits transmitted is used efficiently.
  • code words can be used to describe percentage luminance values or color values can be described in terms of the primary colors or in terms of a table look-up scheme in which certain pre- agreed codes are communicated in advance, to both the encoder and display terminal.
  • the codes can represent either a predefined series of colors or alternatively color information in a predefined specification scheme.
  • the color specifications for the pels in picture 201 are represented sequentially in the active portions of the bytes (such as bytes 505-1 and 505-2), without regard to byte boundaries.
  • the bits are used, N at a time, to define the color; for this purpose, the bits are organized into three-tuples representing the three primary colors, and the order of interpretation of the bits is green-red-blue.
  • the color specification for a single pel may contain multiple three-tuples, depending on the packing count N.
  • the first three-tuple contains the MSBs, and the last contains the LSBs of the three primaries.
  • in some instances, it is desirable that the packing count N is not an integer multiple of three, so that each primary is not specified to an equal accuracy; for example, if the packing count N = 4, the color specification for each pel would look like GRBG, and the bits in the active portion of the bytes would look like GRBGGRBG....
  • the word string of FIG. 5 is assembled in the correct sequence under control of processor 308, which includes instructions for sequentially reading from memory 310 the contents of the preselected registers which contain the appropriate user specified variables, followed by the registers which contain the pel attribute (color) information.
  • the instruction set is straightforward, and many alternatives will be apparent to those skilled in the art.
  • the active portion of the unit screen is a rectangular portion 602 of the screen, as, for example, the rectangle having vertices at coordinates (X0, Y0), (Xb, Y0), (Xb, Yb) and (X0, Yb) as shown in FIG. 6. It is convenient to define X0 and Y0 as the boundaries away from which scanning occurs: for dx positive, X0 is to the left of Xb, but for negative dx, the positions are opposite.
  • for positive dy, Y0 is the bottom of the active area and Yb the top, with the opposite for negative dy.
  • scanning is from left to right and bottom to top.
  • the values of all four variables lie between zero and one (i.e., 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1). In the most normal case, when it is desired that the encoded picture fill substantially the entire screen available on the display device, the active area will be co-extensive with the unit screen, when the display device is square.
  • the upper left corner (0,1) of the unit screen would be mapped outside of the physical display area; by limiting the active area, the y coordinate of approximately 0.75 on the unit screen would correspond to the top of the display, and all information would be displayed.
  • the X and Y coordinates of the starting point of the display with respect to the unit screen may be independently specified by the user.
  • when broadcast transmission is used, the broadcast signal is demodulated in receiver 14 before being input to interface 15.
  • Functions performed in the interface include demultiplexing, error correction, retiming and so on, and are well known to those skilled in the art.
  • the output of interface 15 is coupled through input/output circuit 16 and the information shown in FIG. 5 is applied to various registers in random access memory 10, under control of data processor 1.
  • the terminal includes appropriate program information stored in ROM 9 to recognize the order in which the information is presented, and to correctly interpret control signals encoded together with the pictorial data.
  • the decoding procedure of the present invention includes the following steps. First, the location of each picture element with respect to the unit screen is determined. Second, each pel location on the unit screen is mapped to one or more corresponding locations in a video memory 4, and the attributes describing the pel written in the location(s). Next, the contents of memory 4 are read sequentially. Depending on the type of display device used, the memory output may be converted to conventional video format in converter 6.
  • FIG. 7 illustrates the steps followed in accordance with the present invention in order to locate each received pel with respect to the coordinates of the unit screen. This procedure in effect determines where on the unit screen each logical pel belongs: each location is expressed as coordinates x, y referenced to the unit screen (an illustrative code sketch of this placement loop, and of the mapping of FIG. 10, follows this list).
  • the user specified values X0, Xb, Y0 and Yb which define the active area on the unit screen are initialized, as are Xs and Ys, the starting x and y coordinates.
  • This information is obtained from the input information, blocks 502 and 503, and stored in specified locations in RAM 10.
  • the next step, 730, reads the logical pel size dx and dy from the incoming data, and stores these values in RAM 10. If both dx and dy are zero (step 731), a default condition is recognized, whereupon dx and dy are set (step 732) to the smallest positive definable value permitted by the number of bits which define those variables.
  • step 702 the packing count N is read (as obtained from block 504) and stored in memory.
  • the series of input bytes containing information describing the picture attributes (block 505 in FIG. 5) are processed to re-assemble the active bits, N at a time, with each group of N bits representing the attributes of a single logical pel at coordinates x, y on the unit screen.
  • step 706 a process variable C is set equal to N, the number of bits which define the attributes of each pel.
  • steps 707-710 the bits in the active portions of picture attribute words (block 505 in FIG. 5) are assembled, N at a time.
  • Step 711 compares the present X coordinate to Xb, to determine if the X-direction boundary of the active drawing area has been met or exceeded. If not, the logical pel is drawn or mapped into memory 4 (step 712) in a manner described hereinafter, at the currently specified x and y coordinates. The value of x is then incremented by the value dx (which may be positive or negative) of the logical pel (step 713) and the process repeated from step 706. If dx is positive, the next pel lies to the right of the previous pel, but if dx is negative, the next pel lies to the left.
  • Step 711 produces a positive branch which, in step 714, sets Y to Y + dy (dy may be positive or negative), thus incrementing the scan line position, and X is set to X0, thus restarting the X position on the new line. If dy is positive, the next line is above the previous line, but when dy is negative, it is below the previous line. After being incremented, the new value Y is compared to Yb (step 715) to insure that the horizontal boundary (top or bottom) of the active area has not been met or exceeded.
  • if the boundary has not been met or exceeded, the pel is drawn (step 716) in the same manner as in step 712, to be explained below. However, by convention, if any bits are left in the byte presently being processed, they are discarded (step 717) and the next byte read (step 713) before returning to step 706.
  • When the branch at step 715 indicates that the top or bottom of the active area has been met or exceeded, the entire display lying within the area of the screen defined by the active display area is scrolled by -dy (step 719). This is done by sending a control signal to memory 4 to perform a mass shift of the appropriate portion of its contents in a vertical direction. Thus, if writing is proceeding down the screen, scrolling is up, and vice versa. Thereafter, in step 722, the value of Y is reduced by dy, to in effect cancel the last Y direction increment in step 714, and the pel is drawn in step 716.
  • the picture has been fully processed, and the variables X and Y are reset in step 750 prior to exiting from the process of FIG. 7.
  • the X and Y values used for resetting are the coordinates of the lower left corner of the active drawing region. This allows subsequent drawing operations to commence from a precisely specified point.
  • steps 712 and 716 in FIG. 7 are performed, the N-bit attribute value (color, etc.) for the present pel, just extracted from the byte sequence being processed, is written or mapped into one or more storage locations in video memory 4 of FIG. 1.
  • This mapping procedure will be explained by reference to FIGS. 8-10, the first two of which graphically illustrate mapping with different logical pel sizes, and the latter of which is a flowchart of the steps used in mapping.
  • Video memory 4 is usually a frame memory (RAM) having a rectilinear array of storage locations or registers, each capable of storing the attributes of one display element (display pel) that can be presented on display device 7. It is anticipated that memory 4 will be made as large as possible, given constraints of economy and the inherent resolution capability of the display 7, so that a reasonable visual resolution may be obtained. As an example, if display 7 is a CRT, a frame memory for storing an array of 256 by 256 display pels would be common. Alternatively, if display 7 is a bi-level device such as a plasma panel, the on-off attribute of each pel is represented by only one bit, and a greater resolution (say, 512 by 512 pels) can be expected.
  • RAM frame memory
  • A portion of the array (801) of storage locations in video memory 4 is represented in FIGS. 8 and 9 as a plurality of individual rectangles positioned in a series of rows and a series of columns. Where the picture attributes are represented by multiple bits (N > 1), memory 4 may include several memory planes, one for each bit. The other memory planes are illustrated in FIG. 8 by showing small portions of two other arrays 810 and 820.
  • to write the attributes of a logical pel such as pel 802 in FIG. 8 into the appropriate storage location in memory 4, the logical pel is mapped or superimposed on the array of FIG. 8 or 9.
  • Three sets of variables control the mapping process: (1) the current X, Y position of the pel; (2) its size, specified by dx and dy; and (3) the size of the memory array.
  • the geometric alignment of the X, Y position within the logical pel is predefined by convention; for example, the location used may be the lower left corner of the logical pel.
  • a similar mapping is next performed to locate the coordinates Xa1, Ya1 of the diagonally opposite corner of the logical pel with respect to the array. This is done by realizing that the coordinates of this far corner in the unit screen are x + dx and y + dy.
  • the storage locations into which the attribute value for that pel are to be entered can be determined, using several different strategies.
  • any portion of a storage location in the array is "covered” or included in the mapping, the pel attributes are written into that location.
  • This strategy assures that each "logical pel” will always map to at least one (but possibly many) display pels.
  • Other strategies could depend on the percentage of coverage.
  • the attributes for pel 802 are entered in the 9 storage locations in columns 3, 4 and 5 at rows 4, 5 and 6. These are shown shaded in FIG. 8.
  • the attributes for pel 902 of FIG. 9, when similarly processed, are loaded in one memory location in column 3, line 3. If the next pel being processed (as pel 803 in FIG.
  • mapping procedure just described can also be explained by reference to the flowchart of FIG. 10. To repeat, during each mapping operation, it is desired to identify the coordinates of one or more storage locations in video memory 4 that correspond to the logical pel being processed. This is done as a function of the coordinates X, Y of the logical pel, with respect to the unit screen, its size dx, dy, also with respect to the unit screen, and the total number W and H of storage locations (one for each display pel in device 7) in each row and column of the memory array, respectively.
  • the current values of x and y are saved in registers so that they may be correctly restored at the end of the routine.
  • the coordinates x1, y1 with respect to the unit screen of the corner of the logical pel diagonally opposite to the corner at location x, y are calculated, using x1 = x + dx and y1 = y + dy.
  • in steps 1006 and 1007 the coordinates of both corners are mapped to the storage location array, defining coordinates Xa, Ya and Xa1, Ya1, respectively.
  • the coordinates Xa, Ya are determined from Equations (1) and (2) above. In these multiplications, any fractions are truncated, so that Xa, Ya maps "down and to the left" as shown by arrow 850 in FIG. 8.
  • the equations used correspond to Equations (1) and (2), applied to the far-corner coordinates x1 and y1.
  • the value of Xa is next incremented (step 1010) by one, and a test performed (step 1011) to see if the current value of Xa is less than the value Xa1. If it is, the write process of step 1009 is repeated for the incremented value. This corresponds to the storage location in row 4, column 4 in FIG. 8.
  • when Xa is not less than Xa1, the value of Xa previously saved in step 1008 is restored (step 1012) and Ya is then incremented by one in step 1013.
  • a test is then made (step 1014) to insure that the new Ya is less than Ya1. If so, the procedure returns to step 1008, and the process repeats for the storage locations in the next row (row 5 in FIG. 8).
  • when Ya is not less than Ya1, the original values of x and y are restored (step 1015) and the mapping procedure for the current logical pel has been completed.
  • the entire process of FIG. 10 is then repeated in order to enter the attributes of the corresponding locations in the memory array. As mentioned before, overwriting of some information may occur.
  • the technique for processing pictorial information for display described above is primarily intended for use in conjunction with a videotex or other computer-based information system which can operate in a variety of modes and which can accommodate a variety of information types other than pictures.
  • This information can include text, line drawings, mosaics, special purpose characters and the like.
  • the terminal of FIG. 1 will include operating and support programs, usually stored in ROM 9, for performing various functions such as mode control (i.e., text, line drawing modes, et cetera), input/output control, and so on.
  • the control bit portions of each word shown in FIG. 4 contain information recognized and used in controlling overall system operation, such as switching between modes, activating input/output sequences, and so on.
  • a remapping of the stored pel attributes can change the colors of the display, the texture of the picture, or its relative dimensions, to give just a few examples.
  • picture information can come from a facsimile terminal, a video tape scanner, or other device capable of representing a picture as a raster scanned sequence of attributes including but not limited to black/white values, gray levels, luminance/chrominance values, et cetera.
  • display devices can easily be substituted for the CRT of FIG. 1.
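The placement procedure of FIG. 7 and the mapping procedure of FIG. 10 outlined in the items above can be summarized in a short sketch. The Python below is illustrative only: the function and variable names (place_pels, map_pel, scroll, frame, W, H) are not taken from the patent, the default "smallest definable" size assumes six fraction bits, the bit re-assembly and discard-remaining-bits convention are assumed to be handled upstream, and the cell-coverage rule is approximated with a floor/ceiling bound so that every logical pel writes at least one storage location.

    from math import ceil

    def map_pel(x, y, dx, dy, W, H):
        """Return every (column, row) of a W-by-H frame-memory array whose cell
        is at least partly covered by the logical pel with corner (x, y) and
        size (dx, dy) on the unit screen (cf. FIGS. 8-10)."""
        lo_x, hi_x = sorted((x, x + dx))            # order the corners for the sweep
        lo_y, hi_y = sorted((y, y + dy))
        xa, ya = int(lo_x * W), int(lo_y * H)       # truncate "down and to the left"
        xa1 = max(xa + 1, ceil(hi_x * W))           # guarantee at least one cell
        ya1 = max(ya + 1, ceil(hi_y * H))
        return [(col, row)
                for row in range(max(ya, 0), min(ya1, H))
                for col in range(max(xa, 0), min(xa1, W))]

    def scroll(frame, amount):
        """Placeholder for the mass vertical shift of frame-memory contents that
        is performed when the Y boundary of the active area is met or exceeded."""
        pass

    def place_pels(attributes, dx, dy, x0, xb, y0, yb, xs, ys, frame, W, H):
        """Walk a sequence of per-pel attribute values across the active area of
        the unit screen, in the spirit of the FIG. 7 flowchart."""
        if dx == 0 and dy == 0:            # default condition (steps 731-732)
            dx = dy = 2 ** -6              # smallest definable value, assuming six fraction bits
        x, y = xs, ys
        for value in attributes:
            if (dx > 0 and x >= xb) or (dx < 0 and x <= xb):
                y += dy                    # X boundary met or exceeded: start the next line
                x = x0
                if (dy > 0 and y >= yb) or (dy < 0 and y <= yb):
                    scroll(frame, -dy)     # Y boundary met or exceeded: scroll, cancel the increment
                    y -= dy
            for col, row in map_pel(x, y, dx, dy, W, H):
                frame[row][col] = value    # later pels overwrite earlier contents
            x += dx
        return x0, y0                      # reset point: a corner of the active drawing region

A frame initialised as frame = [[0] * W for _ in range(H)] and a list of attribute values produced by the unpacking step would drive this loop; an actual terminal performs the same operations directly on video memory 4.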

Abstract

A technique for processing pictorial information to enable its display on a desired display device independent of the latter's resolution characteristics includes a unique encoding procedure and a corresponding decoding and display procedure. Attributes of each element of a picture or image (such as its color) are specified by a group of N bits. The bits are assembled to form M-bit active portions of a series of bytes. M and N need not be integrally related. Some of the words also define the size of the elements with respect to a unit display space (the horizontal and vertical dimensions of each element can be different), the order in which the elements are arranged in the picture (e.g., left to right or vice versa) and the value of N. The bytes are decoded and the picture displayed by locating each pel in the unit screen coordinate space, mapping each location to a corresponding storage location in an array formed by a frame memory, writing the attributes of each pel in one or more of the storage locations, and sequentially reading the contents of the memory and applying the output to the display device. Apparatus for practicing the processing technique includes a random access memory (10) arranged to receive encoded pictorial information via input/output circuit (16), a read-only memory (9) arranged to store program information defining the various steps of the present invention, and a data processor (1) for effecting the specified operation. During the display processing sequence, information describing the desired picture is entered in video memory (4) for eventual presentation on display device (7) via a digital to video converter (6).

Description

PICTORIAL INFORMATION PROCESSING TECHNIQUE
Background of the invention
1. Field of the Invention The present invention relates generally to the processing of pictorial information and, in particular, to a technique for encoding information describing a picture in a manner that permits its display on devices having different resolution characteristics. 2. Description of the Prior Art
Computer-based information systems have now evolved to the stage where it is both desirable and feasible to allow access to the wealth of information stored in private or public data bases to the public, using commonly available display devices and communicating via existing channels. "Viewdata" or "videotex" are generic terms which have been used to describe systems enabling two-way, interactive communication between the user and the data base, generally communicating via telephone lines and using an ordinary but specially adapted television set for display of pictorial information or text. "Teletext" is another generic term used to describe one-way communication from the data base to the user, with transmission being accomplished in portions of the broadcast television spectrum and display again being on a specially adapted television screen. Both system types must have a large range of flexibility, since a number of alternatives exist with respect to various system components. For example, although a television set may be a preferred display device for home use, different terminals, such as bi-level (plasma) displays, may access the data base in an office environment. Additionally, other communication channels, such as dedicated coaxial, twisted pair cable, or satellite or land-based radio may interconnect the users who input or output information from the data base, and each type of channel has its own requirements and limitations.
In view of the fact that different types of equipment and facilities must interact to achieve satisfactory overall results, several attempts have been made to standardize the manner in which information (primarily pictorial as opposed to text) is encoded and decoded. The success of these systems must be measured against several parameters. First, the procedure used to encode the pictorial information must make reasonably efficient use of the bandwidth available in the communication channel and the processing capability of the microprocessor usually located in the user's terminal. Second, the users of the system must, during both encoding and display operations, have a high degree of control and flexibility in specifying how the information will be processed. Finally, the techniques used must recognize that different equipment - particularly displays - will be used, some having nonstandard resolution and other capabilities, and that all must operate satisfactorily using the same encoding/decoding strategy. While the various available systems have been somewhat successful in achieving the capabilities just described, it still remains that the efficiency, flexibility, interchangeability and universality of the technique can be improved. Summary of the Invention
In accordance with the present invention, pictorial information is processed using a unique encoding procedure and a corresponding decoding and display procedure which is efficient and which enables flexibility including display of the picture on a device independent of the device's resolution characteristics.
During encoding, attributes of each element of a picture or image (such as its color) are specified by a group of N bits. The bits are assembled to form M-bit active portions of a series of data words (bytes) . M and N need not be integer multiples, i.e., evenly divisible by each other. Some of the words also define the size of the elements with respect to a so-called "unit screen coordinate space" (the horizontal and vertical dimensions - dx and dy, respectively - of each element can be different) , the order in which the elements are arranged within the active portion of the unit screen (e.g., left to right or vice versa) , the value of N, as well as information specifying the boundary values or dimensions of the active area and the starting point in the active area for the first pel in the sequence being processed. During the decoding, the attributes of each picture element are extracted from the data word stream and the location of the pel is determined with respect to the unit screen coordinate space. This is done by locating the first pel at the specified starting point and locating succeeding pels in adjacent positions along a series of parallel scan lines, in the order specified. During this part of the decoding procedure, when a pel being processed lies in a position on the unit screen which meets or exceeds the boundaries of the active area, a predetermined incrementing procedure is followed.
After the location of each pel with respect to the unit screen has been determined, the attributes of that pel are mapped to and written in one or more storage locations in a frame memory, and the procedure repeated for the next pel in the sequence. The frame memory includes an array of storage locations which are read sequentially in order to actually display the picture on a display device. During mapping, if a succeeding pel is entered in one or more previously written storage locations in the frame memory, the existing contents of that memory element are overwritten. Brief Description of the Drawing
The present invention will be more clearly appreciated by consideration of the following detailed description when read in light of the accompanying drawing in which:
FIG. 1 is an overall block diagram of an image display system which employs the principles of the present invention;
FIG. 2 illustrates the spatial relationships of the picture elements and scan lines in a picture being encoded;
FIG. 3 is a block diagram of one embodiment of pictorial information encoding apparatus arranged to implement the processing technique of the present invention;
FIG. 4 illustrates a series of bytes used in communication between the apparatus of FIG. 3 and the terminal of FIG. 1;
FIG. 5 illustrates an example of the types and sequence of the information encoded in the apparatus of FIG. 3 in accordance with the present invention;
FIG. 6 graphically illustrates the coordinate space established by the unit screen and active drawing area used to define encoded pictorial information; FIG. 7 is a flowchart illustrating the steps in a procedure in accordance with the present invention for drawing or locating the information representing the encoded picture with respect to the unit screen;
FIGS. 8 and 9 illustrate the mapping of logical pels of different sizes onto an array representing the physical display screen; and
FIG. 10 is a flowchart illustrating the steps in a procedure for mapping as illustrated in FIGS. 8 and 9.
Detailed Description The present invention will be better appreciated when first put in proper perspective with respect to an overall image display system which embodies the present invention. As shown in FIG. 1, such a system includes a data processor 1 having bidirectional access to data bus 2. Timing module 3 provides the clock signals required for system operation. Timing module 3 also provides timing signals on video data bus 8 for use by video memory 4, and by digital to video converter 6.
Data processor 1 may be a microprocessor comprising program memory, read only memory 9 and scratch pad or random access memory 10. Data processor 1 responds to user input from a keyboard, light pen or other data input devices well known in the art. In its application with a viewdata or teletext terminal, data processor 1 may also respond to input provided from a remote or centralized data base such as one located at a television broadcast station or a provider of viewdata services. This input is received via communication line 11 and modem 13 or as a broadcast signal 12 via RF receiver 14 to communication interface 15. Input-output device 16 operates to control data from communications interface 15 or from peripheral device interface 17.
Data bus 2 is a bidirectional conduit through which data processor 1 controls video memory 4, timing generator 3 and video controller 6. Several bus structures may be adapted for use in the present invention. Whichever specific bus structure is chosen, the bus generally comprises address capability, a data path and control lines which may include interrupt, reset, clock (for synchronous use), wait (for asynchronous use), and bus request lines.
Timing module 3 may provide the timing signals on both the data bus 2 and the video data bus 8 and may comprise a chain of programmable logic circuits, digital dividers and counters for providing required timing signal outputs. These may, of course, be incorporated into data processor 1. For operation of video data bus 8, a number of different timing signals are required. Horizontal and vertical drive signals are provided in accordance with horizontal and field rates respectively. A dot clock signal is provided at the dot frequency (picture element or PEL rate) of the system. An odd/even field signal indicates if the odd or even field is to be displayed in an interlaced system. A composite blanking signal indicates if video is being displayed or if vertical or horizontal retrace is occurring. Also, a group clock signal or other signals may be provided. The group clock signal indicates when to access data for a new group of picture element data from memory. For example, picture element data in video memory having a slow access time may be serially provided in groups of 4, 8 or 16 picture elements. On the other hand, a parallel data transmission scheme is possible, potentially increasing the requirements for leads of video data bus 8. Digital to video converter 6 accepts digital image information from the video data bus 8, pel-by-pel, and converts the digital information, if necessary, to analog form for presentation on video display 7. The video converter may be in modular form and comprises three components: (1) color map memory, (2) digital to analog conversion and sample and hold circuits, if required by video display 7, and (3) a standard composite video encoder (for example, for providing NTSC standard video or red, green, blue RGB outputs) and RF modulator (if necessary, for antenna lead-in access to a television set display 7). Video display 7 may either be a monitor or a television set or it may be other forms of graphic display known in the art, including a liquid crystal display, an LED display or a plasma panel display. Pictorial information which is to be displayed on
CRT 7 or any other display device within a terminal such as that shown in FIG. 1 is encoded for display by a user who may be remotely located from the terminal. The user is furnished with certain information to guide him in making critical choices in the encoding procedure, but is advantageously allowed a great deal of flexibility. When desired, the encoded information may be transmitted to the terminal of FIG. 1 (via communications line 11 or as a broadcast signal 12) , decoded in a manner described hereinafter, and eventually displayed on CRT 7 independent of its resolution characteristics.
An image or picture as shown in FIG. 2 can be encoded, for example, using the system shown in FIG. 3. Camera 301 is arranged to scan picture 201 along a series of generally parallel scan lines, such as lines 202-1, 202-2 and 202-3, at a scan rate determined by a control signal applied to camera 301 on line 302a. The spacing between adjacent lines, shown greatly exaggerated in FIG. 2, is also controllable by the user, via a control signal on line 302b. The output of camera 301 on line 310 is an electrical signal which represents the attributes of picture 201, such as its luminance and chrominance, and it is these attributes which must be encoded, in the correct spatial relation, to enable reconstruction and display of picture 201. The camera output is sampled by sample and hold circuit 303, at a rate determined by an input on line 304, and each such sample is converted from analog to digital form in an A/D converter 305. Each parallel output from converter 305 on lines 306-1 to 306-n thus represents the attributes of a particular element (pel) of picture 201, and these multibit values are entered in a random access memory 310 via a processor 308. The addresses at which the picture information is stored in memory 310 are provided by the processor, which may be part of a microcomputer or minicomputer such as a Digital Equipment Corp. PDP 11/23. Numerous alternative techniques exist for obtaining information describing elements in a picture, and these can also be used to obtain the values entered in memory 310. The picture elements which are defined as a result of the scanning and sampling processes are illustrated in FIG. 2. For example, when picture 201 is scanned along line 202-2, sampling may occur at instants marked by points 212a, 212b and 212c. Corresponding vertically displaced sampling points 213a, 213b and 213c are also shown on scan line 202-3. The picture element 222b corresponding to sampling point 212b is a rectangle of width x (measured in the direction of the scan lines) and height y (measured in the direction perpendicular to the scan lines), centered at point 212b. Its vertical edges are located halfway between point 212b and the preceding and succeeding points 212a and 212c, respectively, while its horizontal edges lie halfway between scan line 202-2 and the preceding and following lines 202-1 and 202-3, respectively. Thus, it is seen that pel 222b abuts the preceding and succeeding pels 222a and 222c along common vertical sides, and also abuts pels 221b and 223b on preceding and succeeding scan lines along common horizontal edges. The picture elements thus form a generally rectilinear array which completely covers and defines the picture 201 being encoded.
In accordance with the present invention, the system user is enabled to independently define, together with the encoded picture attributes, information indicating the width and height dimensions of the picture elements relative to the overall size of a so-called "unit screen", which is described hereafter. This allows considerable flexibility when the picture is displayed, as will be illustrated by several examples.
In the "normal" case, when it is desired that the picture 201 fill substantially the entire screen in the display device, the relative width and height values dx and dy are set at 1/P and 1/L, respectively, where P is the number of samples (of width x) on each of L scan lines (spaced from each other by distance y). This establishes a 1 to 1 correspondence between each element of picture 201 and the elements, called logical pels, in the unit screen, and this insures that when the unit screen is subsequently mapped onto the physical display (such as CRT 7) the entire picture will be reproduced. If it is desired that only the bottom half of the display be used, dx is still set at 1/P (since the full screen width is used) but dy is reduced to 1/2L, to compress the relative heights of each pel that will be displayed. Information indicating the coordinates of a starting point Xs, Ys on the unit screen where the picture display is to begin (in this case, at the midpoint) is also encoded, as are coordinates (X0, Xb) and (Y0, Yb) indicating the vertical and horizontal boundaries of a rectangular "active region" within the unit screen, to which display is confined. This active area will be described more fully below.
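As a rough numerical illustration of this sizing rule (the source resolution below is an arbitrary example, not a value taken from the patent):

    # Relative logical-pel size for a source picture of P samples per line and L lines.
    P, L = 320, 240                  # illustrative source resolution
    dx_full, dy_full = 1 / P, 1 / L  # picture fills the whole unit screen
    dy_bottom_half = 1 / (2 * L)     # same picture compressed into the bottom half;
                                     # dx stays 1/P because the full screen width is still used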
While the previously described scanning procedure used in camera 301 is conventional, it is sometimes desirable for the user to scan from right to left (instead of from left to right) and/or from bottom to top. Either of these situations is communicated to the terminal of FIG. 1 by treating dx and dy as signed, two's complement numbers, i.e., binary decimals where the most significant bit represents the digit just to the right of the decimal point. When dx is positive, scanning is from left to right, while negative dx indicates the reverse. When dy is positive, scanning proceeds from bottom to top, and a negative dy indicates conventional top down scanning. Thus, as an example, if dx is specified as 0.001010, the leftmost bit indicates a positive number of decimal value
1/8 + 1/32 = 0.15625, indicating that scanning is from left to right and that the width of each pel relative to the unit screen is 15.625%. Similarly, if dy is specified as 1.000101, the scanning lines proceed from top to bottom (dy negative) and the pel height is 1/16 + 1/64 = 0.078125.
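A minimal sketch of this interpretation, reproducing the two worked values just given; the function name pel_dimension and the textual form of the bit field are conveniences for illustration, with the leading bit read as the sign and the remaining bits as a binary fraction.

    def pel_dimension(bits: str) -> float:
        """Decode a dx or dy field written as '<sign bit>.<fraction bits>',
        reading the leading bit as the sign and the remaining bits as a
        binary fraction, as in the worked examples above."""
        sign_bit, fraction = bits.split(".")
        magnitude = sum(int(b) * 2 ** -(i + 1) for i, b in enumerate(fraction))
        return -magnitude if sign_bit == "1" else magnitude

    assert pel_dimension("0.001010") == 0.15625    # positive dx: left to right, 15.625 % of screen width
    assert pel_dimension("1.000101") == -0.078125  # negative dy: top down, 7.8125 % of screen height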
Information specifying the sign and magnitude of dx and dy, the values of other user specified variables such as Xs, Ys, X0, Xb, Y0 and Yb, as well as other control information described below, are entered in memory 310. The input may be made on a keyboard 312, and entered via processor 308 in certain preselected locations in memory 310 which are later accessed when the information is read from the memory, assembled, and transmitted to the terminal of FIG. 1. The data read from memory 310 may be conditioned for application to communication line 11 using a conventional modem 315. If no information has been entered in certain memory locations, a default condition is recognized in the processor, whereupon predetermined "common" values may be used to indicate the user specified variables involved. One example illustrating the arrangement of data in accordance with the present invention, as it is read from memory 310 and applied to communication line 11, is shown in FIGS. 4 and 5. The data consists of a series of multibit words (bytes) of a fixed predetermined length "agreed upon" in advance by the encoder of FIG. 3 and the decoder described hereinafter, at least with respect to each communication session. Typically, a 7- or 8-bit code may be used, and the latter is illustrated in FIG. 4. It is noted that the basic coding scheme described herein is built upon the framework established by CCITT
Recommendation S.100, "International Information Exchange for Interactive Videotex," Yellow Book, Volume VII.2, Geneva, 1980. Within each word, such as words 401, 402, 403, M bits (M is shown as 6) are considered to be active, while the remaining bits (two in FIG. 4) are used for parity, error correction and other control purposes.
The order or sequence in which words in FIG. 4 describe the user specified variables (such as but not limited to dx, dy, X0, Xb, Y0 and Yb) must also be agreed in advance between the encoder and the terminal. The CRC Technical Note identified above and incorporated herein by reference describes one protocol or standard which may be followed. However, the present invention is not limited or confined to use with a particular standard, and the illustrative order shown in FIG. 5 is assumed to have been agreed upon by the encoder of FIG. 3 and the decoder portion of the terminal of FIG. 1.
In FIG. 5, the first information conveyed (block 501) describes the sign and magnitude of dx and dy, thus defining the "logical pel" size and the scanning direction used to obtain the picture information. This information may require use of the active M-bit portions of
several bytes (such as 8-bit words 501-1 through 501-5). Next, information (block 502) is entered indicating the coordinates Xs, Ys, in the unit screen, of the starting point of the display. Again, several bytes 502-1 through 502-3 may specify this information. The starting coordinates are followed, in turn, by coordinate definitions (block 503) of the active portion of the unit screen that will be used to display the picture attributes which follow. Each of the four values may illustratively require several bytes 503-1, 503-2, et cetera.
The next type of user specified information (block 504) is called a packing count N, which is an unsigned integer that describes the number of consecutive bits to be taken from the active portions of successive picture attribute words (block 505) to make up a specification of a single pel. M and N need not be integer multiples of each other, so that a given pel may be described in terms of bits derived from more than one byte. This arrangement is advantageous, in that flexibility is provided in the specification of the picture attribute information and the number of bits transmitted is used efficiently.
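The effect of the packing count on the byte stream can be sketched as follows. This is illustrative only: the routine and its argument names are assumptions, and the choice of which M bit positions of each word are "active" is taken here to be the low-order bits.

    def unpack_pels(words, M=6, N=4):
        """Re-assemble N-bit pel attribute values from the M active bits of
        each incoming word, ignoring byte boundaries."""
        bits = []
        for word in words:
            # take the M low-order bits as the active portion, most significant first
            bits.extend((word >> shift) & 1 for shift in range(M - 1, -1, -1))
        pels = []
        for i in range(0, len(bits) - N + 1, N):
            value = 0
            for b in bits[i:i + N]:
                value = (value << 1) | b
            pels.append(value)
        return pels

    # Three 6-bit active portions (18 bits) yield four complete 4-bit pel values;
    # the two leftover bits wait for the next word.
    print(unpack_pels([0b101010, 0b110011, 0b001100], M=6, N=4))   # -> [10, 11, 3, 3]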
Many alternatives exist concerning the description of pel attributes. For example, code words can be used to describe percentage luminance values, or color values can be described in terms of the primary colors or in terms of a table look-up scheme in which certain pre-agreed codes are communicated in advance, to both the encoder and display terminal. The codes can represent either a predefined series of colors or alternatively color information in a predefined specification scheme.
An example of the manner in which the active portions of the bytes in block 505 specify a particular attribute - color - of the pels being processed will be helpful. As mentioned previously, the color specifications for the pels in picture 201 are represented sequentially in the active portions of the bytes (such as bytes 505-1 and 505-2), without regard to byte boundaries. In one arrangement, the bits are used, N at a time, to define the color; for this purpose, the bits are organized into three-tuples representing the three primary colors, and the order of interpretation of the bits is green-red-blue. The color specification for a single pel may contain multiple three-tuples, depending on the packing count N. For N > 3, the first three-tuple contains the MSBs, and the last contains the LSBs of the three primaries. For example, if the packing count is N = 6, the color specification for each pel would look like GRBGRB, and each primary color would be specified to two bits of accuracy. In some instances, it is desirable that the packing count N is not an integer multiple of three, so that each primary is not specified to an equal accuracy. For example, if the packing count
N = 4, the color specification for each pel would look like GRBG. The bits in the active portion of the bytes would look like GRBGGRBG....
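A short sketch of this green-red-blue interpretation follows; it is illustrative only, and the function name is an assumption. For N = 6 it yields two bits per primary, and for N = 4 it yields two bits of green and one bit each of red and blue, matching the GRBGRB and GRBG examples above.

```c
/*
 * Sketch only: split an N-bit pel value, packed as repeated green-red-blue
 * three-tuples with the first tuple holding the MSBs, into the three
 * primaries.  Example: N = 6 ("GRBGRB") gives 2 bits per primary;
 * N = 4 ("GRBG") gives 2 bits of green, 1 bit each of red and blue.
 */
static void unpack_grb(unsigned long value, unsigned N,
                       unsigned *green, unsigned *red, unsigned *blue)
{
    unsigned g = 0, r = 0, b = 0;
    for (unsigned i = 0; i < N; i++) {
        /* take bits from most significant to least significant */
        unsigned bit = (unsigned)((value >> (N - 1 - i)) & 1u);
        switch (i % 3) {                 /* tuple order: green, red, blue */
        case 0: g = (g << 1) | bit; break;
        case 1: r = (r << 1) | bit; break;
        case 2: b = (b << 1) | bit; break;
        }
    }
    *green = g; *red = r; *blue = b;
}
```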
The word string of FIG. 5 is assembled in the correct sequence under control of processor 308, which includes instructions for sequentially reading from memory 310 the contents of the preselected registers which contain the appropriate user specified variables, followed by the registers which contain the pel attribute (color) information. The instruction set is straightforward, and many alternatives will be apparent to those skilled in the art.
Display of the pictorial information encoded using the procedure just described will be appreciated by first considering the "unit screen", mentioned earlier and illustrated in FIG. 6, which is defined as the virtual address space 601 within which the picture is to be displayed, having dimensions 0 to 1 in both the horizontal (x) and vertical (y) directions. The active portion of the unit screen, defined by the user selected variables X0, Xb, Y0 and Yb, is a rectangular portion 602 of the screen, as, for example, the rectangle having vertices at coordinates (X0,Y0), (Xb,Y0), (Xb,Yb) and (X0,Yb) as shown in FIG. 6. It is convenient to define X0 and Y0 as the boundaries away from which scanning occurs: for dx positive, X0 is to the left of Xb, but for negative dx, the positions are opposite. Similarly, for positive dy, Y0 is the bottom of the active area and Yb the top, with the opposite for negative dy. In the example shown in FIG. 6, scanning is from left to right and bottom to top. The values of all four variables lie between zero and one (i.e., 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1).
In the most normal case, when it is desired that the encoded picture fill substantially the entire screen available on the display device, the active area will be co-extensive with the unit screen, when the display device is square. When the physical display has the conventional 4:3 rectangular aspect ratio and scanning is from left to right and top to bottom, active area settings of X0 = 0, Xb = .999..., Y0 = 0.75 and Yb = 0.0 would fill the entire screen. This is because the mapping procedure from the unit screen to the physical display, which will be described more fully below, is arranged so that the origin (0,0) of the unit screen is mapped to the lower left corner of the physical display, and the lower right corner (1,0) of the unit screen is mapped to the lower right corner of the display. With this mapping fixed, the upper left corner (0,1) of the unit screen would be mapped outside of the physical display area; by limiting the active area, the y coordinate of approximately 0.75 on the unit screen corresponds to the top of the display, and all information is displayed.
Several further examples will illustrate the relationship between the unit screen, the active area, and the logical pel size, all of which are user defined. In these examples, it is assumed that dx is positive and dy negative, and that the physical display is square, so that the 3/4 active display area limitation mentioned above for a rectangular display device is not needed. For display of the entire picture in the bottom half of the screen, the user would set X0 = 0, Xb = .999..., Y0 = 0.5 and
Yb = 0.0. If the height dy of the logical pel is 1/L (where L is the number of scan lines in the original picture) and is not divided by a factor of 2, then only the top half of the picture would be displayed in the bottom half of the display, but it will not be compressed in the vertical direction. As will be described below, scrolling (line by line movement of the displayed picture in the vertical direction) can be used to display the second half of the picture. As an alternative, if dy = 1/2L, the entire picture would appear in the bottom half of the display, compressed 2:1 in the vertical direction. As a second example, if it is desired to display the picture in the upper left quadrant of the display, then the active area is defined as X0 = 0, Xb = 0.5, Y0 = .999... and Yb = 0.5. To properly scale the picture, both dimensions of the logical pel should also be reduced by a factor of 2, i.e., dy = 1/2L and dx = 1/2P, where L and P are the number of lines per frame and pels per line in the original picture, respectively. It is also to be noted that Xs and Ys, the X and
Y coordinates of the starting point of the display with respect to the unit screen, may be independently specified by the user. In the first example given above, Xs = 0 and Ys = 0.5 would be appropriate. In the second example (active area in the upper left quadrant), Xs = 0 and Ys = .999... would be used.
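The two example set-ups just described can be collected into a single parameter block, shown below as a sketch. The structure and field names are not taken from the patent, and the source picture size of P = 256 pels per line and L = 240 lines is assumed purely so that the example is self-contained.

```c
/* Illustrative only: the user specified variables carried in FIG. 5.   */
struct DrawingParams {
    double dx, dy;          /* logical pel width and height (signed)    */
    double Xs, Ys;          /* starting point on the unit screen        */
    double X0, Xb, Y0, Yb;  /* boundaries of the active drawing area    */
};

enum { P = 256, L = 240 }; /* assumed pels per line and lines per frame */

/* Example 1: entire picture in the bottom half of a square display.    */
static const struct DrawingParams bottom_half = {
    .dx = 1.0 / P, .dy = -1.0 / (2 * L),  /* dy negative: top-to-bottom */
    .Xs = 0.0, .Ys = 0.5,
    .X0 = 0.0, .Xb = 0.999, .Y0 = 0.5, .Yb = 0.0
};

/* Example 2: entire picture in the upper left quadrant of the display. */
static const struct DrawingParams upper_left = {
    .dx = 1.0 / (2 * P), .dy = -1.0 / (2 * L),
    .Xs = 0.0, .Ys = 0.999,
    .X0 = 0.0, .Xb = 0.5, .Y0 = 0.999, .Yb = 0.5
};
```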
The ability of the user, in accordance with the invention, to specify the active area dimensions (which need not be quadrants but can be any fractions of the unit screen) independently of the logical pel size gives much flexibility in achieving certain types of special effects. However, appropriate coordination between these values is normally required.
When encoded pictorial information is received at the terminal of FIG. 1 via communication line 11, it is appropriately demodulated in modem 13 and applied to communications interface 15. In instances where radio
transmission is used, the broadcast signal is demodulated in receiver 14 before being input to interface 15. Functions performed in the interface include demultiplexing, error correction, retiming and so on, and are well known to those skilled in the art. The output of interface 15 is coupled through input/output circuit 16 and the information shown in FIG. 5 is applied to various registers in random access memory 10, under control of data processor 1. For this purpose, the terminal includes appropriate program information stored in ROM 9 to recognize the order in which the information is presented, and to correctly interpret control signals encoded together with the pictorial data. A description of the general characteristics of a videotex system is contained in a simultaneously filed application entitled "Terminal
Generation of Dynamically Redefinable Character Sets" filed by the same inventors as in the present application.
As the user specified inputs and the picture attributes are entered in RAM 10, they may be concurrently processed for display, or they may be stored for later use. In either event, the decoding procedure of the present invention includes the following steps. First, the location of each picture element with respect to the unit screen is determined. Second, each pel location on the unit screen is mapped to one or more corresponding locations in a video memory 4, and the attributes describing the pel are written in the location(s). Next, the contents of memory 4 are read sequentially. Depending on the type of display device used, the memory output may be converted to conventional video format in converter 6.
Finally, the video signal is applied to display device 7.
The flowchart of FIG. 7 illustrates the steps followed in accordance with the present invention in order to locate each received pel with respect to the coordinates of the unit screen. This procedure in effect determines where on the unit screen each logical pel belongs: each location is expressed as coordinates x, y referenced to the unit screen.
In the first step 701, the user specified values X0, Xb, Y0 and Yb which define the active area on the unit screen are initialized, as are Xs and Ys, the starting x and y coordinates. This information is obtained from the input information, blocks 502 and 503, and stored in specified locations in RAM 10. The next step, 730, reads the logical pel size dx and dy from the incoming data, and stores these values in RAM 10. If both dx and dy are zero (step 731), a default condition is recognized, whereupon dx and dy are set (step 732) to the smallest positive definable value permitted by the number of bits which define those variables. For example, if dx includes a sign bit and 7 two's complement magnitude bits, then the smallest definable value is 1/128. If either dx or dy is non-zero, then in step 702, the packing count N is read (as obtained from block 504) and stored in memory. A branch step 703 next determines if a default condition exists: if N = 0, a default packing count (for example, N = 64) is set in step 704; otherwise, the processing proceeds to step 706.
In the remaining portion of the process shown in FIG. 7, the series of input bytes containing information describing the picture attributes (block 505 in FIG. 5) is processed to re-assemble the active bits, N at a time, with each group of N bits representing the attributes of a single logical pel at coordinates x, y on the unit screen.
In step 706, a process variable C is set equal to N, the number of bits which define the attributes of each pel. Thereafter, in steps 707-710, the bits in the active portions of picture attribute words (block 505 in FIG. 5) are assembled, N at a time. Specifically, step 707 is a branch point depending upon whether C = 0. If not, and bits in the last byte extracted from the incoming datastream still remain to be processed (step 708), the next bit is read from the register which contained the byte. The bits that have been read are assembled in an in-process register, and C is then decremented to C = C-1 (step 710). If no bits are left in the byte being processed, but it is determined in step 720 that other bytes are available, the next one is read (step 721) and the process continues at step 709.
When C = 0 occurs, N bits have been assembled in the in-process register. Step 711 compares the present X coordinate to Xb, to determine if the X-direction boundary of the active drawing area has been met or exceeded. If not, the logical pel is drawn or mapped into memory 4 (step 712) in a manner described hereinafter, at the currently specified x and y coordinates. The value of x is then incremented by the value dx (which may be positive or negative) of the logical pel (step 713) and the process repeated from step 706. If dx is positive, the next pel lies to the right of the previous pel, but if dx is negative, the next pel lies to the left.
When the value of X equals or exceeds the value of Xb, the active area boundary has been met. Step 711 produces a positive branch which, in step 714, sets Y to Y+dy (dy may be positive or negative), thus incrementing the scan line position, and X is set to X0, thus restarting the X position on the new line. If dy is positive, the next line is above the previous line, but when dy is negative, it is below the previous line. After being incremented, the new value of Y is compared to Yb (step 715) to insure that the horizontal boundary (top or bottom) of the active area has not been met or exceeded. If not, the pel is drawn (step 716) in the same manner as in step 712, to be explained below. However, by convention, if any bits are left in the byte presently being processed, they are discarded (step 717) and the next byte read (step 718) before returning to step 706.
When the branch at step 715 indicates that the top or bottom of the active area has been met or exceeded, the entire display lying within the area of the screen defined by the active display area is scrolled by -dy (step 719). This is done by sending a control signal to memory 4 to perform a mass shift of the appropriate portion of its contents in a vertical direction. Thus, if writing is proceeding down the screen, scrolling is up, and vice-versa. Thereafter, in step 722, the value of Y is reduced by dy, to in effect cancel the last Y direction increment in step 714, and the pel is drawn in step 716.
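Putting steps 706 through 722 together, the loop below is a rough, illustrative sketch of the FIG. 7 placement procedure, reusing the BitSource, next_pel_attribute() and DrawingParams definitions from the earlier sketches. draw_pel() and scroll_active_area() are placeholders for the mapping of steps 712/716 (a version of which is sketched later in connection with FIG. 10) and the scrolling of step 719; the default handling of steps 703-704 and 731-732 and the bit discarding of step 717 are omitted, and the advance of x after the step 716 draw is an assumption added to keep the sketch self-consistent.

```c
/* Placeholders for routines sketched or described elsewhere.             */
void draw_pel(unsigned long *mem, double x, double y,
              double dx, double dy, unsigned long attr);
void scroll_active_area(unsigned long *mem, const struct DrawingParams *p,
                        double amount);

static int x_boundary_reached(double x, const struct DrawingParams *p)
{
    return (p->dx >= 0.0) ? (x >= p->Xb) : (x <= p->Xb);    /* step 711  */
}

static int y_boundary_reached(double y, const struct DrawingParams *p)
{
    return (p->dy >= 0.0) ? (y >= p->Yb) : (y <= p->Yb);    /* step 715  */
}

/* Rough sketch of the FIG. 7 loop (steps 706-722).                       */
static void place_pels(BitSource *src, unsigned N,
                       const struct DrawingParams *p, unsigned long *mem)
{
    double x = p->Xs, y = p->Ys;
    long attr;

    while ((attr = next_pel_attribute(src, N)) >= 0) {   /* steps 706-710 */
        if (!x_boundary_reached(x, p)) {
            draw_pel(mem, x, y, p->dx, p->dy, (unsigned long)attr); /* 712 */
            x += p->dx;                                             /* 713 */
            continue;
        }
        y += p->dy;                      /* step 714: move to the next line */
        x = p->X0;                       /*           and restart X         */
        if (y_boundary_reached(y, p)) {  /* step 715                        */
            scroll_active_area(mem, p, -p->dy);   /* step 719               */
            y -= p->dy;                           /* step 722               */
        }
        draw_pel(mem, x, y, p->dx, p->dy, (unsigned long)attr);     /* 716  */
        x += p->dx;    /* assumed: advance past the pel drawn at step 716   */
    }
    /* Step 750 would now reset x and y to the lower left corner of the
     * active drawing area.                                                */
}
```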
When all bytes have been read, the picture has been fully processed, and the variables X and Y are reset in step 750 prior to exiting from the process of FIG. 7. The X and Y values used for resetting are the coordinates of the lower left corner of the active drawing region. This allows subsequent drawing operations to commence from a precisely specified point.
When steps 712 and 716 in FIG. 7 are performed, the N-bit attribute value (color, etc.) for the present pel, just extracted from the byte sequence being processed, is written or mapped into one or more storage locations in video memory 4 of FIG. 1. This mapping procedure will be explained by reference to FIGS. 8-10, the first two of which graphically illustrate mapping with different logical pel sizes, and the latter of which is a flowchart of the steps used in mapping.
Video memory 4 is usually a frame memory (RAM) having a rectilinear array of storage locations or registers, each capable of storing the attributes of one display element (display pel) that can be presented on display device 7. It is anticipated that memory 4 will be made as large as possible, given constraints of economy and the inherent resolution capability of the display 7, so that a reasonable visual resolution may be obtained. As an example, if display 7 is a CRT, a frame memory for storing an array of 256 by 256 display pels would be common. Alternatively, if display 7 is a bi-level device such as a plasma panel, the on-off attribute of each pel is represented by only one bit, and a greater resolution (say, 512 by 512 pels) can be expected.
A portion of the array (801) of storage locations in video memory 4 is represented in FIGS. 8 and 9 as a plurality of individual rectangles positioned in a series of rows and a series of columns. Where the picture attributes are represented by multiple bits (N > 1), memory 4 may include several memory planes, one for each bit. The other memory planes are illustrated in FIG. 8 by showing small portions of two other arrays 810 and 820.
When it is desired to enter the attributes of a logical pel (such as pel 802 in FIG. 8) into the appropriate storage location in memory 4, the logical pel is mapped or superimposed on the array of FIG. 8 or 9. Three sets of variables control the mapping process: (1) the current X, Y position of the pel; (2) its size, specified by dx and dy; and (3) the size of the memory array. In determining position, the geometric alignment of the X, Y position within the logical pel is predefined by convention. For example, the location used is (a) the lower left corner if dx and dy are both positive, (b) the lower right corner if dx is negative and dy is positive, (c) the upper left corner if dx is positive and dy is negative, and (d) the upper right corner if dx and dy are both negative. Thus, in FIGS. 8 and 9, assuming that dx and dy are both positive, a relatively large pel 802 in FIG. 8 and a relatively small pel 902 in FIG. 9 have commonly located lower left hand corners at coordinates X = 9/32, Y = 10/21.
In order to determine the coordinates Xa, Ya of the lower left hand corner of a storage location in array 801 which corresponds to a pel in the unit screen at coordinates X, Y, a linear mapping is performed from the unit screen to the array, such that
Xa = W · X and (1)
Ya = H · Y (2)
where H is the total number of storage locations in each column of the array and W is the total number of storage locations in each row of the array. In FIGS. 8 and 9, W = 8 and H = 7. Thus, for this example,
Xa = 8 · (9/32) = 2.25 and
Ya = 7 · (10/21) = 3.33...
A similar mapping is next performed to locate the coordinates Xa1, Ya1 of the diagonally opposite corner of the logical pel with respect to the array. This is done by realizing that the coordinates of this far corner in the unit screen are X + dx and Y + dy. In the example shown in FIG. 8, dx = 9/32 and dy = 7/21. Thus, using Equations (1) and (2):
Xa1 = W · (X + dx) = 8 · (9/32 + 9/32) = 4.5 and
Ya1 = H · (Y + dy) = 7 · (10/21 + 7/21) = 5.66...
After the coordinates of the diagonally opposite corners of the logical pel have been mapped to the memory array, the storage locations into which the attribute value for that pel is to be entered can be determined, using several different strategies.
In accordance with one strategy, if any portion of a storage location in the array is "covered" or included in the mapping, the pel attributes are written into that location. This strategy assures that each "logical pel" will always map to at least one (but possibly many) display pels. Other strategies could depend on the percentage of coverage. Using the "any portion" strategy in FIG. 8, the attributes for pel 802 are entered in the 9 storage locations in columns 3, 4 and 5 at rows 4, 5 and 6. These are shown shaded in FIG. 8. The attributes for pel 902 of FIG. 9, when similarly processed, are loaded in a single memory location in column 3, row 4. If the next pel being processed (such as pel 803 in FIG. 8) "covers" any previously written storage locations (such as those in column 5, rows 4, 5 and 6), the information stored at those locations is overwritten. If either the width or the height of the logical pel is specified as zero, only a line will be drawn. If both the width and height of the "logical pel" are specified as 0, the "logical pel" reduces to a dimensionless drawing point. This invokes the default condition explained in connection with FIG. 7.
The mapping procedure just described can also be explained by reference to the flowchart of FIG. 10. To repeat, during each mapping operation, it is desired to identify the coordinates of one or more storage locations in video memory 4 that correspond to the logical pel being processed. This is done as a function of the coordinates X, Y of the logical pel, with respect to the unit screen, its size dx, dy, also with respect to the unit screen, and the total number W and H of storage locations (one for each display pel in device 7) in each row and column of the memory array, respectively.
In the first step (1000), the current values of X and Y are saved in registers so that they may be correctly restored at the end of the routine. In the next step 1001, the coordinates X1, Y1, with respect to the unit screen, of the corner of the logical pel diagonally opposite to the corner at location X, Y are calculated, using
X1 = X + dx and (3)
Y1 = Y + dy. (4)
In the next steps 1002 and 1003, a determination is made as to whether dx or dy is negative. If dx is negative, X and X1 are switched (step 1004) and, if dy is negative, Y and Y1 are switched (step 1005). This is done so that the truncation and boosting in steps 1006 and 1007
are always in the correct sense. Next, in steps 1006 and 1007, the coordinates of both corners are mapped to the storage location array, defining coordinates Xa, Ya and Xa1, Ya1, respectively. The coordinates Xa, Ya are determined from Equations (1) and (2) above. In these multiplications, any fractions are truncated, so that Xa, Ya maps "down and to the left" as shown by arrow 850 in FIG. 8. When Xa1 and Ya1 are mapped in step 1007, the equations used correspond to Equations (1) and (2), namely:
Xa1 = W · X1 (5)
Ya1 = H · Y1 (6)
However, if any fractions result from these multiplications, the result is boosted to the next higher integer. This has the effect of mapping the diagonally opposite corner "up and to the right" as shown by arrow 860 in FIG. 8. After truncation and boosting in the example of FIG. 8, Xa = 2; Ya = 3; Xa1 = 5 and Ya1 = 6.
With the mapping of the diagonal corners thus complete, all that remains in the procedure is to identify the storage locations in the memory array into which the attributes are to be written. First, the value of Xa is saved in a register (step 1008) and the attribute value being processed is then written (step 1009) in the storage location at Xa, Ya. In the example of FIG. 8, this would be the location in row 4, column 3, since by convention coordinates (2, 3) specify the lower left corner of this location.
The value of Xa is next incremented (step 1010) by one, and a test performed (step 1011) to see if the current value of Xa is less than the value Xa1. If it is, the write process of step 1009 is repeated for the incremented value. This corresponds to the storage location in row 4, column 4 in FIG. 8. When Xa is not less than Xa1, the value of Xa previously saved in step 1008 is restored (step 1012) and Ya is then incremented by one in step 1013. A test is then made (step 1014) to insure that the new Ya is less than Ya1. If so, the procedure returns to step 1008, and the process repeats for the storage locations in the next row (row 5 in FIG. 8). When Ya is not less than Ya1, the original values of X and Y are restored (step 1015) and the mapping procedure for the current logical pel has been completed. The entire process of FIG. 10 is then repeated for the next logical pel, in order to enter its attributes in the corresponding locations in the memory array. As mentioned before, overwriting of some information may occur.
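The procedure of FIG. 10 can be summarized in a short routine. The sketch below is illustrative only: it assumes a frame memory represented as a plain H-by-W array of attribute words (the 8-by-7 array of the FIG. 8 example), uses floor() for the truncation of step 1006 and ceil() for the boosting of step 1007, and writes the attribute into every storage location that the logical pel covers, even partially.

```c
#include <math.h>

enum { W_LOCS = 8, H_LOCS = 7 };  /* array size of the FIG. 8/9 example  */

/*
 * Sketch of FIG. 10: map one logical pel at unit-screen position (x, y)
 * with signed size (dx, dy) onto the storage array and write its N-bit
 * attribute value into every location covered, even partially.
 */
static void map_logical_pel(unsigned long mem[H_LOCS][W_LOCS],
                            double x, double y, double dx, double dy,
                            unsigned long attr)
{
    double x1 = x + dx, y1 = y + dy;                 /* step 1001         */
    if (dx < 0.0) { double t = x; x = x1; x1 = t; }  /* steps 1002, 1004  */
    if (dy < 0.0) { double t = y; y = y1; y1 = t; }  /* steps 1003, 1005  */

    int xa  = (int)floor(W_LOCS * x);                /* step 1006: truncate */
    int ya  = (int)floor(H_LOCS * y);
    int xa1 = (int)ceil(W_LOCS * x1);                /* step 1007: boost    */
    int ya1 = (int)ceil(H_LOCS * y1);

    for (int row = ya; row < ya1; row++)             /* steps 1008-1014     */
        for (int col = xa; col < xa1; col++)
            if (row >= 0 && row < H_LOCS && col >= 0 && col < W_LOCS)
                mem[row][col] = attr;                /* step 1009: write    */
}
```

With the FIG. 8 numbers (X = 9/32, Y = 10/21, dx = 9/32, dy = 7/21), this gives xa = 2, ya = 3, xa1 = 5 and ya1 = 6, so the attribute lands in the nine locations of columns 3-5 and rows 4-6, as in the shaded portion of the figure.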
It may be desirable, depending on the relationship between the "logical pel" dimensions and the physical pel dimensions on an individual display device, to perform a pre-execution rescaling of both the "logical pel" dimensions and the active drawing area dimensions to avoid certain types of distortions that will result from a mismatch. The goal of this type of rescaling would be to make the "logical pel" dimensions become either integer multiples or integer fractions of the corresponding physical pel dimensions. (The active drawing area dimensions would have to be scaled equivalently in order to maintain line synchronism in the image.) This type of rescaling is terminal dependent, but is permissible as long as the "logical pel" dimensions and active drawing area dimensions are restored to their prescaled values after the picture has been processed and displayed. In the rescaling, care must be taken to insure that the resultant image lies within the original active drawing area.
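As one concrete possibility, offered only as an assumption about how a particular terminal might carry out such a rescaling, a logical pel dimension could be snapped to the nearest integer multiple or integer fraction of the corresponding physical pel dimension before drawing, with the original value restored afterwards.

```c
#include <math.h>

/*
 * Illustrative only: snap a logical pel dimension (a signed fraction of
 * the unit screen) to the nearest integer multiple or integer fraction
 * of the physical pel dimension of a particular display.  The active
 * drawing area would have to be rescaled by the same factor, and the
 * original values restored once the picture has been displayed.
 */
static double snap_to_physical(double logical, double physical)
{
    double mag = fabs(logical);
    double snapped;

    if (mag == 0.0)
        return logical;   /* the dimensionless-pel default case is left alone */

    if (mag >= physical) {
        /* integer multiple of the physical pel dimension */
        snapped = physical * floor(mag / physical + 0.5);
    } else {
        /* integer fraction: physical / k for the nearest integer k >= 1 */
        double k = floor(physical / mag + 0.5);
        snapped = physical / (k >= 1.0 ? k : 1.0);
    }
    return (logical < 0.0) ? -snapped : snapped;
}
```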
The technique for processing pictorial information for display described above is primarily intended for use in conjunction with a videotex or other computer-based information system which can operate in a variety of modes and which can accommodate a variety of information types other than pictures. This information can include text, line drawings, mosaics, special purpose characters and the like. For this reason, the terminal of FIG. 1 will include operating and support programs, usually stored in ROM 9, for performing various functions such as mode control (i.e., text, line drawing modes, et cetera), input/output control, and so on. The control bit portions of each word shown in FIG. 4 contain information recognized and used in controlling overall system operation, such as switching between modes, activating input/output sequences, and so on. By virtue of the flexibility built into the terminal of FIG. 1, it is possible for the picture entered in memory 4 to be further processed prior to display. A remapping of the stored pel attributes can change the colors of the display, the texture of the picture, or its relative dimensions, to give just a few examples.
Various modifications and adaptations of the present invention will be readily recognized by those skilled in the art. For this reason, it is intended that the invention be limited only by the appended claims. For example, while a video camera is shown as the input device in FIG. 3, picture information can come from a facsimile terminal, a video tape scanner, or other device capable of representing a picture as a raster scanned sequence of attributes including but not limited to black/white values, gray levels, luminance/chrominance values, et cetera. Similarly, it is to be clearly understood that many different display devices can easily be substituted for the CRT of FIG. 1.

Claims

1. A method for displaying a picture on a display device comprising the steps of: defining said picture in terms of a plurality of generally rectangular logical picture elements (pels) having a predefined width dx and a predefined height dy, with respect to a unit screen coordinate space, each of said pels having an associated N-bit code word indicating its visual attributes; mapping the location of a first one of said logical pels from said unit screen coordinate space to a corresponding location in a display memory which includes an array of discrete storage locations arranged in a plurality of rows and columns; writing the N-bit code word associated with said first logical pel in ones of said storage locations identified in said mapping step; repeating said mapping and writing steps for each of said logical pels defining said picture; and sequentially applying the contents of said display memory to a display device.
2. The method defined in claim 1 wherein said defining step includes: generating a series of samples each representing the attributes of a particular element of said picture, said elements lying along a plurality of generally parallel raster scan lines; converting each of said samples to an N-bit digital word; and sequentially assembling each of said words in the
M-bit active portions of a stream of bytes, where M and N need not be integer multiples.
3. The method of claim 2 wherein said defining step further includes: encoding an indication of the value of N; encoding the values of said predefined width dx and said predefined height dy as signed numbers which
indicate: (1) the direction in which said picture is scanned, and (2) the values of said width dx and height dy as a fraction of a unit screen, such that said picture is entirely displayed as long as dx = 1/P and dy = 1/L, where P is the number of said samples on each of said scan lines and L is the number of lines scanned.
4. The method of claim 3 wherein said defining step further includes specifying horizontal and vertical boundaries of an active area within said unit screen, said active area defining a portion of said display memory within which said picture will be mapped, and said mapping step includes:
(1) locating said first one of said logical pels at a predetermined starting location in said active area, (2) locating the next one of said logical pels at a location spaced dx from said starting location,
(3) repeating said last mentioned locating step (2) until one of said vertical boundaries of said active region is met or exceeded, (4) locating the next one of said logical pels at a position adjoining the other one of said vertical boundaries but spaced dy from the line defined by the last processed pels, and
(5) repeating step (2) .
5. The method of claim 4 wherein said mapping step further includes scrolling the pels in said active area when one of said horizontal boundaries is met or exceeded in said last-mentioned locating step (4) , wherein scrolling includes shifting all priorly determined locations in the portion of said display memory defined by said active area by an amount (-dy) .
6. The method of claim 1 wherein said mapping step includes:
(a) determining the coordinates (X, Y) and (X1, Y1) of diagonally opposite corners of said pel with respect to said unit display space, where X1 = X + dx and Y1 = Y + dy,
(b) forming for each set of coordinates a corresponding set of coordinates (Xa, Ya) and (Xa1, Ya1) in said display memory array, where
Xa = W · X (1)
Ya = H · Y (2)
Xa1 = W · X1 and (3)
Ya1 = H · Y1 (4)
and where W is the number of storage locations in each of said rows and H is the number of storage locations in each of said columns.
7. The method of claim 6 wherein said forming step includes:
(a) truncating the fractional portion of the products formed using Equations (1) and (2) ; and (b) boosting to the next highest integer any fractional portion of the products formed using Equations (3) and (4) ; and said writing step includes identifying said storage locations using the coordinates formed in said boosting and truncating steps.
8. A method of encoding a picture for display on a display device, comprising the steps of: defining the attributes of each of a plurality of elements of said picture by an associated N-bit code word, said elements lying along a series of generally parallel scan lines which traverse said picture; sequentially assembling said N-bit code words in predefined portions of M-bit bytes, where M and N need not be integer multiples of each other; and encoding said bytes together with an indication of (1) the value of N, (2) the size of said elements, (3) the order in which ones of said elements on each of
said lines are arranged, and (4) the direction in which said scan lines traverse said picture.
9. The method of claim 8 wherein said encoding step further includes encoding said element size as a function of the number P of said picture elements that lie along each of L of said scan lines.
10. A method of displaying an encoded representation of a picture on a display device independent of the device's resolution characteristics, comprising the steps of:
(1) forming said representation by encoding (a) the attributes of a plurality of elements of said picture, said elements lying along a series of generally parallel scan lines, (b) the boundaries of the active area of a unit screen coordinate space which defines a portion of a display memory into which said pels are to be mapped, and
(c) the location of the first of said picture elements in said active area; (2) locating the coordinates of each of said pels with respect to said unit screen coordinate space;
(3) mapping the coordinates of each pel formed in the last mentioned step to a rectilinear array representing storage locations in said display memory; (4) writing said attributes of each pel into at least one of the storage locations mapped in said last mentioned step; and
(5) sequentially reading said attributes stored in said memory to form a signal for application to said display.
11. The method of claim 10 wherein said locating step includes:
(1) locating the next of said picture elements adjacent to said first picture element at a specified displacement dx along the same scan line;
(2) repeating the last mentioned locating step for successive picture elements until one of said boundaries of said active region is met or exceeded,
(3) locating the next of said picture elements at the opposite one of said boundaries on a scan line displaced from said same scan line by a specified displacement dy, and
(4) repeating steps (2) and (3) above until a different one of said boundaries is met or exceeded.
12. The method defined in claim 11 wherein said forming step includes encoding the sign and magnitude of each of dx and dy.
13. The method defined in claim 11 wherein said locating step further includes:
(5) shifting the locations of all previously processed picture elements in the portion of said display memory defined by said active area by an amount (-dy) when said different one of said boundaries is met or exceeded.
14. The invention defined in claim 10 wherein said mapping step includes:
(1) calculating the coordinates (X, Y) of one corner of each pel using the coordinates (X1, Y1) of the diagonally opposite corner, where
X1 = X + dx and
Y1 = Y + dy,
and where dx and dy are the width and height, respectively, of each of said picture elements expressed as a fraction of said unit screen coordinate space,
(2) multiplying each of said X, X1 coordinates by the number W of storage locations in each row of said array, and (3) multiplying each of said Y, Y1 coordinates by the number H of storage locations in each column of said array.
15. The method of claim 14 wherein said mapping step further includes: (4) truncating any fractional portion of the products X · W and Y · H to obtain first coordinates (Xa, Ya) in said array;
(5) boosting to the next higher integer any fractional portion of the products X1 · W and Y1 · H to obtain second coordinates (Xa1, Ya1) in said array; and
(6) selecting as said corresponding location all locations in said array within the rectangle having diagonal corners defined by said first and second coordinates (Xa, Ya) and (Xa1, Ya1).
16. A system for displaying a picture on a display device comprising: means for defining said picture in terms of a plurality of generally rectangular logical picture elements (pels) having a predefined width dx and a predefined height dy, with respect to a unit screen coordinate space, each of said pels having an associated N-bit code word indicating its visual attributes; means for mapping the location of each of said logical pels from said unit screen coordinate space to a corresponding location in a display memory which includes an array of discrete storage locations arranged in a plurality of rows and columns; means for writing the N-bit code word associated with each of said logical pels in ones of said storage locations identified by said mapping means; and means for sequentially applying the contents of said display memory to a display device.
17. The system defined in claim 16 wherein said defining means includes: means for generating a series of samples each representing the attributes of a particular element of said picture, said elements lying along a plurality of generally parallel raster scan lines; means for converting each of said samples to an
N-bit digital word; and means for sequentially assembling each of said words in the M-bit active portions of a stream of bytes, where M and N need not be integer multiples.
18. The system of claim 17 wherein said defining means further includes: means for encoding an indication of the value of N; and means for encoding the values of said predefined width dx and said predefined height dy as signed numbers which indicate: (1) the direction in which said picture is scanned, and (2) the values of said width dx and height dy as a fraction of a unit screen, such that said picture is entirely displayed as long as dx = 1/P and dy = 1/L, where P is the number of said samples on each of said scan lines and L is the number of lines scanned.
19. Apparatus for encoding a picture for display on a display device, comprising: means for defining the attributes of each of a plurality of elements of said picture by an associated N-bit code word, said elements lying along a series of generally parallel scan lines which traverse said picture; means for sequentially assembling said N-bit code words in predefined portions of M-bit bytes, where M and N need not be integer multiples of each other; and means for encoding said bytes together with an indication of (1) the value of N, (2) the size of said elements, (3) the order in which ones of said elements on each of said lines are arranged, and (4) the direction in which said scan lines traverse said picture.
20. A system for displaying an encoded representation of a picture on a display device independent of the device's resolution characteristics, comprising: (1) means for forming said representation by encoding (a) the attributes of a plurality of elements of said picture, said elements lying along a series of generally parallel scan lines,
(c) the location of the first of said picture elements in said active area;
(2) means for locating the coordinates of each of said pels with respect to said unit screen coordinate space;
(3) means for mapping the coordinates of each pel formed by the last mentioned means to a rectilinear array representing storage locations in said display memory;
(4) means for writing said attributes of each pel into at least one of the storage locations mapped in said last mentioned means; and
(5) means for sequentially reading said attributes stored in said memory to form a signal for application to said display.
PCT/US1982/000671 1981-05-19 1982-05-18 Pictorial information processing technique WO1982004146A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU86809/82A AU8680982A (en) 1981-05-19 1982-05-18 Pictorial information processing technique

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US06/265,069 US4454593A (en) 1981-05-19 1981-05-19 Pictorial information processing technique
US265069810519 1981-05-19

Publications (1)

Publication Number Publication Date
WO1982004146A1 true WO1982004146A1 (en) 1982-11-25

Family

ID=23008835

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1982/000671 WO1982004146A1 (en) 1981-05-19 1982-05-18 Pictorial information processing technique

Country Status (7)

Country Link
US (1) US4454593A (en)
EP (2) EP0079380A4 (en)
JP (1) JPS58500775A (en)
CA (1) CA1191636A (en)
ES (1) ES512307A0 (en)
GB (1) GB2100546A (en)
WO (1) WO1982004146A1 (en)

Families Citing this family (78)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS57195282A (en) * 1981-05-27 1982-11-30 Soodo Densanki System Kk Display method and apparatus by video signal
JPS58159184A (en) * 1982-03-17 1983-09-21 Nec Corp Picture turning device
US4536848A (en) * 1982-04-15 1985-08-20 Polaroid Corporation Method and apparatus for colored computer graphic photography
DE3381991D1 (en) * 1982-06-28 1990-12-20 Toshiba Kawasaki Kk IMAGE DISPLAY CONTROL DEVICE.
US4521770A (en) * 1982-08-30 1985-06-04 International Business Machines Corporation Use of inversions in the near realtime control of selected functions in interactive buffered raster displays
US4627004A (en) * 1982-10-12 1986-12-02 Image Resource Corporation Color image recording system and method for computer-generated displays
US5129061A (en) * 1982-11-10 1992-07-07 Wang Laboratories, Inc. Composite document accessing and processing terminal with graphic and text data buffers
US4516266A (en) * 1982-12-17 1985-05-07 International Business Machines Corporation Entity control for raster displays
US4800380A (en) * 1982-12-21 1989-01-24 Convergent Technologies Multi-plane page mode video memory controller
US4586158A (en) * 1983-02-22 1986-04-29 International Business Machines Corp. Screen management system
JPS59216190A (en) * 1983-05-24 1984-12-06 株式会社日立製作所 Display control system
US4570161A (en) * 1983-08-16 1986-02-11 International Business Machines Corporation Raster scan digital display system
US4633244A (en) * 1983-09-30 1986-12-30 Honeywell Information Systems Inc. Multiple beam high definition page display
US4651146A (en) * 1983-10-17 1987-03-17 International Business Machines Corporation Display of multiple data windows in a multi-tasking system
US4550315A (en) * 1983-11-03 1985-10-29 Burroughs Corporation System for electronically displaying multiple images on a CRT screen such that some images are more prominent than others
EP0989536B1 (en) 1983-12-26 2003-05-07 Hitachi, Ltd. Graphic pattern processing apparatus
FR2558618A1 (en) * 1984-01-24 1985-07-26 Duguet Jean Claude Method and device for computer-aided management
US4953229A (en) * 1984-02-16 1990-08-28 Konishiroku Photo Industry Co., Ltd. Image processing apparatus
US4599610A (en) * 1984-03-21 1986-07-08 Phillips Petroleum Company Overlaying information on a video display
EP0158209B1 (en) * 1984-03-28 1991-12-18 Kabushiki Kaisha Toshiba Memory control apparatus for a crt controller
US4751669A (en) * 1984-03-30 1988-06-14 Wang Laboratories, Inc. Videotex frame processing
DE163863T1 (en) * 1984-04-13 1986-04-30 Ascii Corp., Tokio/Tokyo VIDEO DISPLAY CONTROL UNIT TO DISPLAY MOVABLE PATTERNS.
US4633415A (en) * 1984-06-11 1986-12-30 Northern Telecom Limited Windowing and scrolling for a cathode-ray tube display
US4701865A (en) * 1984-06-25 1987-10-20 Data General Corporation Video control section for a data processing system
US4631692A (en) * 1984-09-21 1986-12-23 Video-7 Incorporated RGB interface
US6552730B1 (en) 1984-10-05 2003-04-22 Hitachi, Ltd. Method and apparatus for bit operational process
US5034900A (en) * 1984-10-05 1991-07-23 Hitachi, Ltd. Method and apparatus for bit operational process
JPH087748B2 (en) * 1984-10-11 1996-01-29 株式会社日立製作所 Document coloring device
US4704605A (en) * 1984-12-17 1987-11-03 Edelson Steven D Method and apparatus for providing anti-aliased edges in pixel-mapped computer graphics
US4757441A (en) * 1985-02-28 1988-07-12 International Business Machines Corporation Logical arrangement for controlling use of different system displays by main proessor and coprocessor
US4755810A (en) * 1985-04-05 1988-07-05 Tektronix, Inc. Frame buffer memory
NL8502642A (en) * 1985-09-27 1986-04-01 Oce Nederland Bv GRID IMAGE PROCESSOR.
US4837812A (en) * 1985-12-21 1989-06-06 Ricoh Company, Ltd. Dual connection mode equipped communication control apparatus
JPS62147842A (en) * 1985-12-21 1987-07-01 Ricoh Co Ltd Communication control equipment
US4769632A (en) * 1986-02-10 1988-09-06 Inmos Limited Color graphics control system
US4742350A (en) * 1986-02-14 1988-05-03 International Business Machines Corporation Software managed video synchronization generation
US4755814A (en) * 1986-02-21 1988-07-05 Prime Computer, Inc. Attribute control method and apparatus
US4868557A (en) * 1986-06-04 1989-09-19 Apple Computer, Inc. Video display apparatus
DE3739030B4 (en) * 1986-11-18 2005-03-31 Canon K.K. Mode changing facsimile installation - has controller connecting I=O devices to audio or digital line with engaged memory
JPS63174122A (en) * 1987-01-05 1988-07-18 コンピュータ・エツクス・インコーポレーテツド Computer human interface
FR2611942B1 (en) * 1987-02-25 1991-11-29 France Etat BROADBAND SERVER, PARTICULARLY FOR TRANSMISSION OF MUSIC OR IMAGES
US4729020A (en) * 1987-06-01 1988-03-01 Delta Information Systems System for formatting digital signals to be transmitted
DE3852149T2 (en) * 1987-06-19 1995-04-06 Toshiba Kawasaki Kk Cathode ray tube / plasma display control unit.
DE3852148T2 (en) * 1987-06-19 1995-04-06 Toshiba Kawasaki Kk Display mode switching system for a plasma display device.
US5351064A (en) * 1987-06-19 1994-09-27 Kabushiki Kaisha Toshiba CRT/flat panel display control system
EP0295690B1 (en) * 1987-06-19 1994-11-30 Kabushiki Kaisha Toshiba Display area control system for plasma display apparatus
EP0295689B1 (en) * 1987-06-19 1995-03-29 Kabushiki Kaisha Toshiba Display controller for CRT/plasma display apparatus
DE3787283T2 (en) * 1987-10-05 1994-02-24 Oce Nederland Bv Integral input-output system for raster scan printing unit.
US4967392A (en) * 1988-07-27 1990-10-30 Alliant Computer Systems Corporation Drawing processor for computer graphic system using a plurality of parallel processors which each handle a group of display screen scanlines
JP2909079B2 (en) * 1988-09-13 1999-06-23 株式会社東芝 Display control method
US5293485A (en) * 1988-09-13 1994-03-08 Kabushiki Kaisha Toshiba Display control apparatus for converting color/monochromatic CRT gradation into flat panel display gradation
JPH0362090A (en) * 1989-07-31 1991-03-18 Toshiba Corp Control circuit for flat panel display
CA2041819C (en) * 1990-05-07 1995-06-27 Hiroki Zenda Color lcd display control system
JP3152396B2 (en) * 1990-09-04 2001-04-03 株式会社東芝 Medical image display device
US5062136A (en) * 1990-09-12 1991-10-29 The United States Of America As Represented By The Secretary Of The Navy Telecommunications system and method
US5388201A (en) * 1990-09-14 1995-02-07 Hourvitz; Leonard Method and apparatus for providing multiple bit depth windows
GB2250615B (en) * 1990-11-21 1995-06-14 Apple Computer Apparatus for performing direct memory access with stride
US5345542A (en) * 1991-06-27 1994-09-06 At&T Bell Laboratories Proportional replication mapping system
JPH06318060A (en) * 1991-07-31 1994-11-15 Toshiba Corp Display controller
US5613053A (en) 1992-01-21 1997-03-18 Compaq Computer Corporation Video graphics controller with automatic starting for line draws
ATE154714T1 (en) * 1992-01-21 1997-07-15 Compaq Computer Corp CIRCUIT AND METHOD FOR DRAWING LINES IN A VIDEOGRAPHIC SYSTEM
EP0623232B1 (en) * 1992-01-21 1996-04-17 Compaq Computer Corporation Video graphics controller with improved calculation capabilities
US5502807A (en) * 1992-09-21 1996-03-26 Tektronix, Inc. Configurable video sequence viewing and recording system
US5583953A (en) * 1993-06-30 1996-12-10 Xerox Corporation Intelligent doubling for low-cost image buffers
US5684956A (en) * 1994-11-14 1997-11-04 Billings; Roger E. Data transmission system with parallel packet delivery
US6331856B1 (en) * 1995-11-22 2001-12-18 Nintendo Co., Ltd. Video game system with coprocessor providing high speed efficient 3D graphics and digital audio signal processing
US5903255A (en) * 1996-01-30 1999-05-11 Microsoft Corporation Method and system for selecting a color value using a hexagonal honeycomb
US20040078824A1 (en) * 1996-04-10 2004-04-22 Worldgate Communications Access system and method for providing interactive access to an information source through a television distribution system
US5999970A (en) * 1996-04-10 1999-12-07 World Gate Communications, Llc Access system and method for providing interactive access to an information source through a television distribution system
US6049539A (en) * 1997-09-15 2000-04-11 Worldgate Communications, Inc. Access system and method for providing interactive access to an information source through a networked distribution system
US6002407A (en) 1997-12-16 1999-12-14 Oak Technology, Inc. Cache memory and method for use in generating computer graphics texture
US6412015B1 (en) 1998-06-24 2002-06-25 New Moon Systems, Inc. System and method for virtualizing and controlling input and output of computer programs
ES2340136T3 (en) 1999-02-17 2010-05-31 Macdermid Incorporated METHOD TO POWER THE SOLDABILITY OF A SURFACE.
US7319700B1 (en) * 2000-12-29 2008-01-15 Juniper Networks, Inc. Communicating constraint information for determining a path subject to such constraints
JP2003084751A (en) * 2001-07-02 2003-03-19 Hitachi Ltd Display controller, microcomputer and graphic system
US7320531B2 (en) * 2003-03-28 2008-01-22 Philips Lumileds Lighting Company, Llc Multi-colored LED array with improved brightness profile and color uniformity
US7577749B1 (en) 2004-12-03 2009-08-18 Ux Ltd. Emulation of persistent HTTP connections between network devices
GB2472631B (en) 2009-08-13 2014-10-08 Otter Controls Ltd Electrical appliances

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3646571A (en) * 1970-03-18 1972-02-29 Typagraph Corp Method of encoding graphical data
US3973245A (en) * 1974-06-10 1976-08-03 International Business Machines Corporation Method and apparatus for point plotting of graphical data from a coded source into a buffer and for rearranging that data for supply to a raster responsive device
US4145754A (en) * 1976-06-11 1979-03-20 James Utzerath Line segment video display apparatus
US4177462A (en) * 1976-12-30 1979-12-04 Umtech, Inc. Computer control of television receiver display
US4262302A (en) * 1979-03-05 1981-04-14 Texas Instruments Incorporated Video display processor having an integral composite video generator
US4301514A (en) * 1979-04-16 1981-11-17 Hitachi, Ltd. Data processor for processing at one time data including X bytes and Y bits
US4303986A (en) * 1979-01-09 1981-12-01 Hakan Lans Data processing system and apparatus for color graphics display

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US30785A (en) * 1860-11-27 William s
US3976982A (en) * 1975-05-12 1976-08-24 International Business Machines Corporation Apparatus for image manipulation
JPS6037464B2 (en) * 1976-07-20 1985-08-26 大日本スクリ−ン製造株式会社 Variable magnification image duplication method
US4146877A (en) * 1977-05-26 1979-03-27 Zimmer Edward F Character generator for video display
US4204207A (en) * 1977-08-30 1980-05-20 Harris Corporation Video display of images with video enhancements thereto
US4342029A (en) * 1979-01-31 1982-07-27 Grumman Aerospace Corporation Color graphics display terminal

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3646571A (en) * 1970-03-18 1972-02-29 Typagraph Corp Method of encoding graphical data
US3973245A (en) * 1974-06-10 1976-08-03 International Business Machines Corporation Method and apparatus for point plotting of graphical data from a coded source into a buffer and for rearranging that data for supply to a raster responsive device
US4145754A (en) * 1976-06-11 1979-03-20 James Utzerath Line segment video display apparatus
US4177462A (en) * 1976-12-30 1979-12-04 Umtech, Inc. Computer control of television receiver display
US4303986A (en) * 1979-01-09 1981-12-01 Hakan Lans Data processing system and apparatus for color graphics display
US4262302A (en) * 1979-03-05 1981-04-14 Texas Instruments Incorporated Video display processor having an integral composite video generator
US4301514A (en) * 1979-04-16 1981-11-17 Hitachi, Ltd. Data processor for processing at one time data including X bytes and Y bits

Also Published As

Publication number Publication date
GB2100546A (en) 1982-12-22
CA1191636A (en) 1985-08-06
JPS58500775A (en) 1983-05-12
US4454593A (en) 1984-06-12
ES8308436A1 (en) 1983-09-01
EP0079380A4 (en) 1983-09-29
EP0079380A1 (en) 1983-05-25
EP0066981A1 (en) 1982-12-15
ES512307A0 (en) 1983-09-01

Similar Documents

Publication Publication Date Title
US4454593A (en) Pictorial information processing technique
CA1181880A (en) Terminal generation of dynamically redefinable character sets
US4597005A (en) Digital color photographic image video display system
US5838296A (en) Apparatus for changing the magnification of video graphics prior to display therefor on a TV screen
US5587928A (en) Computer teleconferencing method and apparatus
EP0135994A2 (en) Method for transmitting broadband information over a narrow band transmission medium
US4611227A (en) Decoder for digital information T.V. signal
JPH0326600B2 (en)
EP0239840B1 (en) Soft copy display of facsimile images
US6351292B1 (en) Apparatus and method for generating on-screen-display messages using line doubling
EP0932978B1 (en) Apparatus and method for generating on-screen-display messages using line doubling
AU8680982A (en) Pictorial information processing technique
CA2268124C (en) Apparatus and method for generating on-screen-display messages using one-bit pixels
EP0932977B1 (en) Apparatus and method for generating on-screen-display messages using field doubling
KR0184446B1 (en) Method for displaying the multi-screen of a video phone
AU719563C (en) Apparatus and method for generating on-screen-display messages using field doubling
Vivian Enhanced UK teletext level 4 alphageometrics
Tenne-Sens Telidon graphics and applications
JPH08140063A (en) Graphic data unit for digital television receiver and methodfor usage of digital television receiver
AU8584182A (en) Terminal generation of dynamically redefinable character sets
MXPA99003535A (en) Apparatus and method for generating on-screen-display messages using field doubling
MXPA99003537A (en) Apparatus and method for generating on-screen-display messages using line doubling
MXPA99003536A (en) Apparatus and method for generating on-screen-display messages using one-bit pixels
MXPA99003539A (en) Apparatus and method for generating on-screen-display messages using true color mode

Legal Events

Date Code Title Description
AK Designated states

Designated state(s): AU JP

AL Designated countries for regional patents

Designated state(s): BE DE FR NL SE

WWE Wipo information: entry into national phase

Ref document number: 1982902035

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1982902035

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1982902035

Country of ref document: EP