|Publication number||US6903751 B2|
|Application number||US 10/104,805|
|Publication date||Jun 7, 2005|
|Filing date||Mar 22, 2002|
|Priority date||Mar 22, 2002|
|Also published as||DE60334420D1, EP1361544A2, EP1361544A3, EP1361544B1, US20030179214|
|Inventors||Eric Saund, Thomas P. Moran, Daniel L. Larner, James V. Mahoney, David J. Fleet, Ashok C. Popat|
|Original Assignee||Xerox Corporation|
The following copending applications, U.S. application Ser. No. 10/104,523, filed Mar. 22, 2002, titled “Method and System for Interpreting Imprecise Object Selection Paths”, U.S. application Ser. No. 10/104,804, filed Mar. 22, 2002, titled “Method and System for Overloading Loop Selection Commands in a System for Selecting and Arranging Visible Material in Document Images”, and U.S. application Ser. No. 10/104,396, filed Mar. 22, 2002, titled “Method for Gestural Interpretation in a System for Selecting and Arranging Visible Material in Document Images”, are assigned to the same assignee as the present application. The entire disclosures of these copending applications are incorporated herein by reference.
The following U.S. patents are fully incorporated herein by reference: U.S. Pat. No. 5,548,700 to Bagley et al. (“Editing Text in an Image”); U.S. Pat. No. 5,553,224 to Saund et al. (“Method for Dynamically Maintaining Multiple Structural Interpretations in Graphics System”); U.S. Pat. No. 5,889,523 to Wilcox et al. (“Method and Apparatus for Dynamically Grouping a Plurality of Graphic Objects”); U.S. Pat. No. 5,974,198 to Hamburg et al. (“Adjustment Layers for Composited Image Manipulation”); U.S. Pat. No. 6,028,583 to Hamburg (“Compound Layers for Composited Image Manipulation”); U.S. patent application Ser. No. 09/199,699 (“Method and Apparatus for Separating Document Image Object Types” to Saund); and U.S. patent application Ser. No. 09/158,443 (“System and Method for Color Normalization of Board Images” to Saund et al.).
This invention relates generally to graphical image manipulation systems, and more particularly to a method for creating and editing electronic images of documents.
Two major classes of image editors are structured graphics, or vector-based editors, and digital paint, or raster-based editors. Structured graphics editors are suitable for editing graphic objects such as lines, curves, polygons, etc. Other types of images, such as photographs, are more suitably edited in “paint” style editors that preserve the full variation and tone of the markings in terms of a two-dimensional raster of pixel intensities. Paint style image editing programs support the import and editing of raster-format electronic images. Various means are provided for selecting image regions for further manipulation such as deleting, copying, moving, rotating, and scaling. These programs are designed for editing general photographic images, and they are limited in the degree of support they provide for the more specialized features and requirements of editing raster images of documents.
Paint style programs maintain an electronic canvas of pixel intensities. In some programs, the user is presented with a very simple usage model, which is easy to understand but offers limited functionality. In simple paint programs there is only one canvas layer. The process of selecting and moving image material causes the pixel values in one image region to replace the pixel values in a corresponding region at a different, destination location. Once such a procedure is completed, there is no notion of an image object that can be re-selected to replicate the previous selection operation. This shortcoming is particularly problematic when some pixel values are treated as transparent: in this case pixels with these values do not replace pixel values at the destination location, and two image objects can become intermingled and effectively inseparable.
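The single-layer move semantics described above can be sketched in a few lines (a hypothetical Python illustration for exposition only; the function and names are not from the patent):

```python
# Single-layer paint-style "move", operating on a 2D list of pixel
# values; 0 is treated as the transparent/background value.
TRANSPARENT = 0

def move_region(canvas, src, dst, size):
    """Copy a size x size region from src to dst, skipping transparent
    pixels, then clear the source. Non-transparent destination pixels
    that are not overwritten remain in place."""
    sy, sx = src
    dy, dx = dst
    # Read out the source region first, then erase it.
    patch = [[canvas[sy + r][sx + c] for c in range(size)]
             for r in range(size)]
    for r in range(size):
        for c in range(size):
            canvas[sy + r][sx + c] = TRANSPARENT
    # Write only the non-transparent pixels into the destination.
    for r in range(size):
        for c in range(size):
            if patch[r][c] != TRANSPARENT:
                canvas[dy + r][dx + c] = patch[r][c]
    return canvas
```

Because only non-transparent pixels are written, any marks already at the destination show through the moved region's transparent holes, and the two objects can no longer be separated.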
Other, more complex paint-style programs offer greater functionality but are much more difficult for users to understand and operate. In these programs multiple canvases represent different layers, where the topmost visible layer determines what is actually rendered as the visible image. Layers are dealt with explicitly by the user through complex sets of keyboard and mouse commands and auxiliary windows. Users can cause new empty layers to be created, and they can perform operations which cause regions of any source layer to be removed and copied to a different destination layer. The user maintains control over the ordering of layers. When users wish to move or modify an object, they must find the corresponding layer, then shift that layer's position over the base canvas. If the user wishes to move or modify several objects at once, a lengthy series of steps must be undertaken to get all of the objects onto a single layer, or a group of layers that treats the objects as a unified collection.
In both simple and complex paint programs, certain pixel color/intensities can be defined to be transparent so that pixels from layers beneath them are made visible, as is illustrated in FIG. 1. As shown in
In some applications, a specified pixel intensity, such as white, is predefined as being potentially a transparent value. In others, the user specifies a range of pixel colors/intensities to be treated as transparent on one or more layers. In still other applications, image processing operations may be applied to the image. If a high-pass filtering operation is available and applied by the user, it could regularize a mottled or blotchy background of a scanned document so that a small range of color/intensity values could be assigned to make the background behave transparently.
U.S. Pat. No. 5,548,700 to Bagley et al. titled “Editing Text in an Image” teaches a data structure and procedures for subdividing a document image into smaller raster image objects which collectively are rendered to a final image. However, Bagley et al. is directed to keyboard-based editing of images of printed text, rather than to mouse- or stylus-based editing of more general document images including handwritten scribbles and graphics.
U.S. Pat. No. 5,553,224 to Saund et al. titled “Method for Dynamically Maintaining Multiple Structural Interpretations in Graphics Systems” discloses an approach to maintaining a lattice grouping structure in curvilinear line art in which curvilinear segments with co-terminal endpoints are grouped according to their alignment and corner configurations. However, it does not pertain to arbitrary image objects, but only to curvilinear strokes.
U.S. Pat. No. 5,889,523 to Wilcox et al. titled “Method and Apparatus for Dynamically Grouping a Plurality of Graphic Objects” teaches a cluster tree for dynamically grouping a plurality of graphic objects. The cluster tree is based on a distance metric indicating a distance between a pair of graphic objects, with each level of the cluster tree defining a new cluster of graphic objects. The different cluster levels of the cluster tree containing a selected graphic object are displayable and can be modified to increase or decrease the cluster level of the cluster containing the selected graphic object.
U.S. Pat. No. 5,974,198 to Hamburg et al. titled “Adjustment Layers for Composited Image Manipulation” teaches the use of additional layers in the modification of composited images. Specifically, one or more adjustment layers are applied to an intermediate merged image, generated by compositing previous image layers, and the adjusted result is stored as a temporary image. The temporary image is then composited with the intermediate merged image. Any remaining image layers are then composited in with the intermediate merged image to generate a final merged image.
U.S. Pat. No. 6,028,583 to Hamburg titled “Compound Layers for Composited Image Manipulation” teaches a method for compositing a set of ordered image layers, in which a compound layer contains a plurality of image layers. Image layers under the compound layer are composited to generate a first intermediate image, the first intermediate image is composited with each image layer in the compound layer to generate a second intermediate image, the first intermediate image is composited with the second intermediate image according to the compound layer effect to generate a third intermediate image, and the third intermediate image is composited with any remaining image layers to generate a final image.
The present invention offers a new tool for computer assisted drawing, one that incorporates the advantages of paint style image editing programs with a simple and intuitive user interface to provide high functionality for editing document images.
Briefly stated, and in accordance with one aspect of the present invention, there is disclosed herein a graphical input and display system for creating and manipulating electronic images, permitting a user to manipulate elements of electronic images received from various image input sources. A processor, connected to the system, receives requests for various image editing operations and also accesses a memory structure. The system memory structure includes a user interaction module, which allows a user to enter new image material or select and modify existing image material to form primary image objects, as well as a grouping module, which maintains an unrestricted grouping structure, an output module, and data memory.
In another aspect of the invention, there is disclosed a method for organizing an electronic image entered on a display device into meaningful image objects. After unrestricted existing image material is selected, primary image objects are established in an unrestricted grouping structure. The image material is modified and the unrestricted grouping structure is reconstituted.
In yet another aspect of the invention, there is provided an article of manufacture in the form of a computer usable medium having computer readable program code embodied in the medium. When the program code is executed by the computer, the computer usable medium causes the computer to perform method steps for editing and manipulating an electronic image entered onto a display for the computer. The program code causes the computer to decompose the electronic image into primary image objects and also to organize the primary image objects into unrestricted groups of primary image objects such that each primary image object belongs to zero or more groups and each group contains not less than one primary image object. New primary image objects may be created and reorganized into one or more new groups of primary image objects in response to user manipulation of at least one primary image object.
In another aspect of the invention, there is provided a memory for storing data for access by a program being executed on a computer for creating and manipulating data representing an electronic image. The memory includes a lattice data structure, stored in the memory, for providing an unrestricted grouping structure defining the relationships between primary image objects and composite objects. A plurality of primary objects are also stored in the memory, with the primary objects being bitmap objects or curvilinear objects. A plurality of composite objects, with each composite object including at least one primary object, are also stored within the memory. A plurality of hyperlinks link the primary objects with either or both destination and source objects.
The foregoing and other features of the instant invention will be apparent and easily understood from a further reading of the specification, claims and by reference to the accompanying drawings in which:
Disclosed herein is a method and apparatus for editing a document image. In the following description numerous specific details are set forth, such as calculations of character spacings for performing deletion and insertion operations, in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the invention may be practiced without such specific details. In other instances, specific implementation details, such as parsing techniques for extracting characters from a document image, have not been shown in detail in order not to unnecessarily obscure the present invention.
It should be noted that a document image is simply a bit-mapped representation of an image obtained through a scanning process, video source, screen snapshot, digital camera, digital ink input device, or any other document source known in the art. The present invention could be used with any document having a bit-mapped representation. For example, frame grabbers are used to capture bit-mapped representations of images from a video source. Such bit-mapped representations can be edited on systems embodying the present invention. Further, the terms scanned document image, bit-mapped representation of an image, and bit-mapped image are used interchangeably herein and are taken to have the equivalent meaning.
As will become apparent in the description below, the present invention finds particular advantage in editing text and line art contained in an image. Documents which are faxed or which are copied on a digital copier typically involve images that contain primarily text and graphics. As described with respect to the prior art, it is common that in order to edit any of the text contained in the image, extraneous processing such as Optical Character Recognition (OCR) or the placement of image information into layers must be performed. As will become apparent, the present invention minimizes extraneous processing and provides added flexibility to defining both text and graphical image information so as to allow the editing of a wider range of textual and graphical data in an image.
An illustration of the use of the present invention is shown in
A number of terms are used herein to describe images and related structures, and the terms defined below have the meanings indicated throughout this application, including the claims.
“Character” means a discrete element that appears in a writing system. Characters can thus include not only alphabetic and numerical elements, but also punctuation marks, diacritical marks, mathematical and logical symbols, and other elements. More generally, characters can include, in addition to alphanumeric elements, phonetic, ideographic, or pictographic elements. A “character type” is a category of which a character may be an instance, such as the letter “a” or the number “3”.
A “word” is a set of one or more characters that is treated as a semantic unit in a language. A “text” is an arrangement of one or more lines of characters; the characters of a text may form words.
An “image” is a pattern of light. An image may include characters, words, and text as well as other features such as graphics.
A “data structure” is any combination of interrelated items of data. An item of data is “included” in a data structure when it can be accessed using the locations or data of other items in the data structure; the included item of data may be another data structure. Conversely, an item of data can be “removed” from a data structure by rendering it inaccessible, such as by deleting it. An “array of data” or “data array” or “array” is a data structure that includes items of data that can be mapped into an array. A “two-dimensional array” is a data array whose items of data can be mapped into an array having two dimensions.
A data structure can be “obtained” from another data structure by operations that produce the data structure using data in the other data structure. For example, an array can be “obtained” from another array by operations such as producing a smaller array that is the same as a part of the other array, producing a larger array that includes a part that is the same as the other array, copying the other array, or modifying data in the other array or in a copy of it.
A “data unit” is an item of data that is accessible as a unit within a data structure. An “array data unit” is a data unit that includes data sufficient to define an array; for example, an array data unit may include the defined array itself, a compressed or encoded form of the defined array, a pointer to the defined array, a pointer to a part of another array from which the defined array can be obtained, or pointers to a set of smaller arrays from which the defined array can be obtained.
Data “defines” an image when the data includes sufficient information to produce the image. For example, a two-dimensional array can define all or any part of an image, with each item of data in the array providing a value indicating the color of a respective location of the image. A “character-size array” is a two dimensional array that defines only one character or character-size element.
Each location or single picture element of an image may be called a “pixel.” Taken collectively, the pixels form the image. In an array defining an image in which each item of data provides a value, each value indicating the color of a location may be called a “pixel value”. Each pixel value is a bit in the “binary form” of the image, a gray-scale value in a “gray-scale form” of the image, or a set of color space coordinates in a “color coordinate form” of the image. The binary form, gray-scale form, and color coordinate form are each a two-dimensional array defining the image. In addition, pixel values can represent transparency. “White” or background pixels in a binary image may be treated as transparent, revealing any black pixels previously rendered into the display. Similarly, one or more values of a gray-scale image may be reserved to represent transparency. And a transparency channel, or “alpha” channel, can be associated with color pixels to represent the degree of transparency or opacity of the pixel's color value with respect to pixels “below”, or previously rendered into the display data structure.
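The per-pixel alpha blending alluded to here is conventionally the “source-over” rule; a minimal sketch follows (illustrative Python, not code from the patent):

```python
def over(src_rgb, src_alpha, dst_rgb):
    """Source-over compositing of one pixel: alpha 1.0 is fully opaque,
    0.0 is fully transparent, and intermediate values blend the source
    color with the color previously rendered below it."""
    return tuple(src_alpha * s + (1.0 - src_alpha) * d
                 for s, d in zip(src_rgb, dst_rgb))
```

A fully transparent source pixel leaves the destination color unchanged, which is exactly the behavior described for reserved transparent pixel values.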
“Bitmap” refers to bits stored in digital memory in a data structure that represents the pixels. As used herein, “bitmap” can refer to both a data structure for outputting black and white pixels, where each pixel either is on or off, as well as a “pixel map” having more information for each pixel, such as for color or gray scale pixels. “Resolution” refers to the size, shape, and separation of pixels of a displayed or printed image. For example, a displayed bitmap of very small pixels, closely spaced, has a greater resolution, i.e. greater detail, than a displayed bitmap having large pixels widely spaced. “Render” refers to the creation of a bitmap from an image description, such as a character outline.
A “Bitmap Object” is a raster image, plus an (x, y) coordinate indicating the positioning of the “Bitmap Object” on a visible electronic canvas. The pixels in a “Bitmap Object” may take any color values, or the value “transparent”. Transparency may alternatively be represented by an associated alpha binary raster image indicating which pixels are transparent. Any given source image may be represented as a single “Bitmap Object”, or as a collection of several component “Bitmap Objects”, appropriately positioned. These alternative representations may be unapparent to the user and not detectable by inspection when the result is displayed by displaying the collection of “Bitmap Objects” at their respective positions.
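A “Bitmap Object” as defined above pairs a raster with an (x, y) canvas position, and a collection of component objects renders identically to a single equivalent object. One way to sketch this (hypothetical Python; the class, field, and function names are illustrative, not from the patent):

```python
from dataclasses import dataclass

TRANSPARENT = None  # sentinel pixel value meaning "see-through"

@dataclass
class BitmapObject:
    x: int        # column of the object's top-left corner on the canvas
    y: int        # row of the object's top-left corner on the canvas
    pixels: list  # 2D list of pixel values; TRANSPARENT entries are skipped

def render(objects, width, height, background=0):
    """Paint a collection of BitmapObjects onto a fresh canvas; later
    objects paint over earlier ones. Objects are assumed to lie within
    the canvas bounds."""
    canvas = [[background] * width for _ in range(height)]
    for obj in objects:
        for r, row in enumerate(obj.pixels):
            for c, v in enumerate(row):
                if v is not TRANSPARENT:
                    canvas[obj.y + r][obj.x + c] = v
    return canvas
```

Rendering one two-pixel object or two adjacent one-pixel objects produces the same canvas, illustrating why the alternative representations are not detectable by inspection.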
“Raster” refers to the arrangement of pixels on an output device that creates an image by displaying an array of pixels arranged in rows and columns. Raster output devices include laser printers, computer displays, video displays, LCD displays, etc. “Coded” data is represented by a “code” that is designed to be more concise and to be more readily manipulated in a computing device than raw data, in, for example, bitmap form. “Non-coded” data is data that is not represented by a code. For example, the lowercase letter “a” can be represented as coded data, e.g., the number 97 in ASCII encoding, or as non-coded graphical or image data that could be used to create the appearance of “a” on an output device such as a display screen or printer. Fonts usually have one or more associated “encodings” that associate coded data with non-coded data.
A “version” of a first image is a second image produced using data defining the first image. The second image may be identical to the first image, or it may be modified by loss of resolution, by changing the data defining the first image, or by other processes that result in a modified version. A “view” of an image is a version of the image that is displayed to a user; a view can omit some details of the image or can be otherwise modified.
A “text editing operation” is an operation that assumes that the data on which it is performed defines lines of elements that can be treated as if it were text. Examples of text editing operations include inserting and deleting elements, changing a characteristic of an element such as typeface, changing alignment and spacing, cursor positioning, justification, moving characters or a cursor to a following line, searching for a character or sequence of characters, and so forth.
A “character level text editing operation” is a text editing operation that affects a character or character-size element in text being edited. Examples of character level text editing operations include inserting, deleting, changing, or positioning a character; positioning a cursor on a character; searching for a character; and so forth.
A “Primary Image Object” or “Primary Object” is a graphical element out of which larger graphical structures may be composed; it may include a Bitmap Object, but may also include other objects, such as a pen-stroke object. A “Primary Object” is not immutable and may be fragmented by being broken into smaller “Primary Objects” or enlarged by merging with other “Primary Objects”. A “Composite Object” is associated with a set of “Primary Objects” and thereby refers to individual elementary graphical entities or combinations of them. Under this interpretation, “Primary Objects” are directly associated with the rendered appearance of pixels in the image; “Composite Objects” refer to the physical appearance of the image only through the “Primary Objects” upon which they are constructed. The set of “Composite Objects” associated with an image constitutes the set of abstract objects by which the user gains access to perceptually coherent collections of image marks. Both types of object are attributed with the properties of spatial location, rough orientation, and size, plus miscellaneous other properties.
A “connected component” is a set of pixels within a data array defining an image, all of which are connected to each other through an appropriate rule, such as that they are neighbors of each other or are both neighbors of other members of the set. A connected component of a binary form of an image can include a connected set of pixels that have the same binary value, such as black. A “connected component set” or “component set” is a set of connected components that are treated as a unit. A character can therefore be a component set; for example, the letter “i” includes two connected components that are treated as a single character in English text; the connected components “form” the character. A “bounding box” for a character or other component set is a rectilinear region just large enough to include all the pixels in the component set, extending to the minimum and maximum extent in the vertical and horizontal directions.
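For concreteness, connected components under the 4-neighbor rule, and the bounding box of a component set, can be computed as follows (an illustrative Python sketch, not code from the patent):

```python
def connected_components(image):
    """Return the 4-neighbor connected components of black (1) pixels
    in a binary image, as a list of sets of (row, col) coordinates."""
    h, w = len(image), len(image[0])
    seen, components = set(), []
    for y in range(h):
        for x in range(w):
            if image[y][x] == 1 and (y, x) not in seen:
                stack, comp = [(y, x)], set()
                seen.add((y, x))
                while stack:  # iterative flood fill from the seed pixel
                    cy, cx = stack.pop()
                    comp.add((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] == 1
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                components.append(comp)
    return components

def bounding_box(component_set):
    """Smallest rectilinear region (xmin, ymin, xmax, ymax) enclosing
    every pixel of a component set."""
    ys = [y for y, _ in component_set]
    xs = [x for _, x in component_set]
    return min(xs), min(ys), max(xs), max(ys)
```

A one-pixel-wide “i” with a detached dot yields two components, which a component set then treats as one character with a single bounding box.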
The data used to produce a modified version of an image that includes text can include information about a character in the text. “Identity information” about a character is information identifying its character type, case, typeface, point size, or the like. To “recognize” a character means to obtain identity information about the character from a digital form of an image that includes the character. “Spatial information” about a character is information identifying its spatial characteristics, such as its size, shape, position, orientation, alignment with other characters, or the like. Although spatial information and identity information are not completely independent, spatial information about a character can be obtained from a two-dimensional array defining an image without recognizing the character.
Referring now to
Processor 310 is also connected to access program memory 350 and data memory 360. Program memory 350 includes data preparation module 352, user interaction module 354, grouping module 356, hyperlink module 357, and image output module 358. Data memory 360 includes image input data structure 362, parsed image data structure 364 and image output data structure 366.
In executing the routines of data preparation module 352, processor 310 loads data from image input device 320 into image input data structure 362, which is equivalent to a two-dimensional data array. Processor 310 then performs data preparation which prepares image objects and groups for convenient access by the user.
Data preparation module 352 makes use of several data structures and processing modules. As shown, parsed image data structure 364 includes one or more subsidiary data structures called image region data arrays. Each image region data array includes one or more array data units, each defining text and line art data, continuous tone or photographic data, or halftone data. Image region arrays are given representation by Primary Image Objects in the form of Bitmap Objects. Segmentation module 355 decomposes textual and graphical image material into smaller elementary Bitmap Objects or Primary Image Objects of other types.
Grouping module 356 is responsible for maintaining, and at times constructing, the lattice of relationships between Primary Image Objects and Composite Objects even as Primary Image Objects are split, moved, and merged. Grouping module 356 also contains automatic recognition routines to identify perceptually meaningful groups that should be represented by Composite Objects. Hyperlink module 357 establishes hyperlinks to and from arbitrary regions of electronic images reflecting image structure that may be perceptually salient to human users but not represented by independent data objects, and is discussed in more detail hereinbelow.
Some stages of data preparation involve decomposing textual and graphical image material into smaller Primary Image Objects, then performing grouping operations to form groups of fragments representing visually apparent structures. Under the control of a user option, these stages may or may not be performed automatically by the data preparation module 352, and these stages may also be invoked by the user through the User Interaction Module 354.
If the image is not to be treated as a photograph, a decision is made at step 420 as to whether to perform document image segmentation. If document image segmentation is to be applied to the image, then at step 425, document image segmentation processes known in the art are performed in which the image is segmented into image regions of three classes: text and graphics, continuous-tone/photographic, and halftone. Continuous-tone/photographic and halftone image regions are passed to step 430, where Bitmap Objects are created to represent them. These become Primary Image Objects 435 to be operated on through a user's editing commands by User Interaction Module 354. Text and line art/graphics regions are passed to step 440. If document image segmentation is not to be applied to the image as determined by a user controlled option at step 420, then the entire image is treated as text and line-art or graphics, as depicted by step 440.
At step 445 an image processing operation is performed to distinguish foreground from background pixels. In document images, foreground pixels are typically darker than the surrounding background. Various filtering operations, such as those disclosed in applicant's U.S. patent application Ser. No. 09/158,443, may be utilized to classify background pixels as such. At step 450 these pixel values are made “transparent” by setting appropriate transparency bits for these pixels.
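A simple way to realize steps 445 and 450 together is an intensity threshold that classifies lighter pixels as background and marks them transparent (a hypothetical Python sketch; the filtering operations cited in Ser. No. 09/158,443 are more elaborate):

```python
def make_background_transparent(gray, threshold=128):
    """Return an alpha bitmask for a gray-scale image: pixels darker
    than the threshold are foreground and get alpha 1 (visible);
    lighter pixels are background and get alpha 0 (transparent)."""
    return [[1 if v < threshold else 0 for v in row] for row in gray]
```

The resulting bitmask plays the role of the per-pixel transparency bits set at step 450; the threshold value here is purely illustrative.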
At optional step 455, a determination is made as to whether to break the processed source image into a multiplicity of elemental Bitmap Objects, each of which is a natural candidate for selection by users. For example, at step 460 the Bitmap Objects may be segmented into a larger number of smaller Bitmap Objects corresponding to character-size connected components of foreground pixels, and relatively straight segments of line art, as is described in U.S. patent application Ser. No. 09/199,699. Alternatively, the unfragmented Bitmap Objects may be passed to output step 465.
At step 470 a determination is made as to whether to perform an optional step 475 to group significant collections of elemental Bitmap Objects into Composite Objects, or groups. For example, the character-size Bitmap Objects forming words, lines of text, and columns of text would form desirable groups. Procedures for performing this grouping are described in more detail hereinbelow. These grouped Primary Image Objects and Composite Objects from step 475 are passed to step 485 as fragmented Primary Image Objects with transparent backgrounds organized into Composite Objects in a lattice structure, shown at 495. If the Primary Objects are not to be grouped, with resulting groups represented by Composite Objects, they are passed to step 480 as a plurality of fragmented Primary Image Objects with transparent backgrounds, shown at 490. As a result of the data preparation stage, one or more Bitmap Objects are created. Bitmap Objects representing text and graphical image material have their foreground pixels visible and their background pixels transparent, and Composite Objects are constructed.
During the user interaction stage, the user participates in an interaction cycle in which new image material may be entered by typing or drawing with the mouse or stylus, or may be copied from a clipboard data structure either from within the application or from an outside application. Alternatively, the user may select and modify existing image material, which is illustrated in FIG. 5. In
Referring now to
The automatic reconstitution of the grouping structure that the processor performs is shown in the flow diagram of FIG. 8 and is illustrated diagrammatically in FIG. 9. Referring first to
Referring back to
Referring now to
This method is illustrated in
The processor next locates Composite Objects supported by all selected Primary Objects and identifies these as fully supported Composite Objects at 1160. As illustrated here, fully supported Composite Object “CO1” is supported by Primary Objects “A”, “B”, “C” and “D”. At step 1170, the processor removes the support links from the selected Primary Objects “B” and “C” and replaces them with a support link to the new Primary Object “F”. The processor then locates Composite Objects that contain some but not all of the selected Primary Objects as well as other non-selected Primary Objects and identifies these as partially-supported Composite Objects at step 1180. In the example, partially-supported Composite Object “CO2” contains Primary Objects “A” and “B”. For each partially-supported Composite Object, the processor removes all support links to the Primary Objects, thus eliminating “CO2” from the grouping structure at step 1190. Alternatively, in the case in which a partially-supported Composite Object contains multiple non-selected Primary Objects, a choice may be made to either demolish or retain the partially-supported Composite Objects.
In the case in which the partially-supported Composite Objects are retained, only the support links to the selected Primary Objects are removed. For the purposes of this example, the partially-supported Composite Object contained only one Primary Object other than a member of the selected Primary Objects, resulting in the elimination of Composite Object “CO2” from the grouping structure upon removal of the selected Primary Object support link, since a Composite Object must contain more than one Primary Object. However, in those cases in which the partially-supported Composite Object contains a plurality of non-selected Primary Objects in addition to a subset of the selected Primary Objects, when the support links to the selected Primary Objects are removed, the partially-supported Composite Object survives as a Composite Object containing the remaining non-selected Primary Objects.
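The link-rewiring of steps 1160 through 1190 can be sketched as a single pass over the grouping structure. This is an illustrative sketch under assumed data structures (a dict mapping Composite Object names to sets of member names); the function name `replace_selected` is hypothetical.

```python
def replace_selected(composites, selected, new_obj):
    """Merge the `selected` Primary Objects into `new_obj`, updating
    Composite Object support links: fully supported composites swap the
    selected members for the new object; partially-supported composites
    drop their links to selected members and are demolished unless more
    than one Primary Object remains."""
    result = {}
    for name, members in composites.items():
        picked = members & selected
        if picked == selected:
            # Fully supported: replace the selected members with new_obj.
            result[name] = (members - selected) | {new_obj}
        elif picked:
            # Partially supported: remove links to selected members;
            # keep the composite only if >1 Primary Object survives.
            remaining = members - selected
            if len(remaining) > 1:
                result[name] = remaining
        else:
            result[name] = set(members)
    return result

lattice = {"CO1": {"A", "B", "C", "D"}, "CO2": {"A", "B"}}
new = replace_selected(lattice, {"B", "C"}, "F")
```

In this example “CO1” survives with members “A”, “D” and “F”, while “CO2” is eliminated because only “A” would remain, and a Composite Object must contain more than one Primary Object.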
Referring now to
Groups may be created in numerous ways, for example, the user may select a set of objects and establish them as a group that is independent of other groups of which these objects may be a member, through an explicit menu command. Alternatively, the user may select a set of objects and have the processor create a group automatically by virtue of the user's moving, rotating, scaling, or otherwise operating on the collection of objects. The processor may also create groups automatically by the application of image analysis processes that identify significant groups in the image. One approach to identifying groups of connected components that form words is illustrated in FIG. 13.
Several methods may be used to destroy groups. For example, a user may select a group and abolish it by an explicit menu command. Alternatively, the processor may automatically remove an object from a group when a user drags or moves an object sufficiently far from the other members of the group to which it belongs.
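The automatic removal of an object dragged far from its group can be sketched as a distance test. This is a hedged illustration only: the function name and the 150-pixel threshold are assumptions, not values from the patent.

```python
import math

def should_leave_group(obj_pos, member_positions, threshold=150.0):
    """Illustrative heuristic: an object dragged farther than `threshold`
    pixels from every other member of its group is removed from the group.
    Positions are (x, y) tuples; the threshold value is an assumption."""
    return all(math.dist(obj_pos, p) > threshold for p in member_positions)

# Dragged object is far from both remaining members: remove it from the group.
far = should_leave_group((400, 0), [(0, 0), (40, 30)])
near = should_leave_group((100, 0), [(0, 0), (40, 30)])
```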
It is noted that within this application reference will be made to “tapping”, “clicking on” or otherwise selecting an object. These words are intended to interchangeably refer to the act of selecting the object. The term tapping is generally used in reference to the physical act of touching the stylus of a pen-based computing system to the screen or tablet and shortly thereafter lifting the stylus from the screen (i.e. within a predetermined period of time) without moving the stylus any significant amount (i.e. less than a predetermined amount, as for example two pixels). This is a typical method of selecting objects in a pen-based computing system. The term “clicking on” is intended to be broader in scope and is intended to cover not only tapping, but also the action of selecting an object using a button associated with a mouse or track ball as well as the selection of an object using any other pointer device.
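The tap definition above (lift within a predetermined time, move less than a predetermined amount such as two pixels) can be sketched as a small classifier. The 300 ms limit is an assumed default; the two-pixel limit follows the example in the text.

```python
def classify_pen_event(down_pos, up_pos, duration_ms,
                       max_move_px=2, max_tap_ms=300):
    """Classify a stylus press/release pair as a 'tap' when the stylus is
    lifted within a time limit and has moved less than a small distance.
    The 300 ms default is an illustrative assumption."""
    dx = abs(up_pos[0] - down_pos[0])
    dy = abs(up_pos[1] - down_pos[1])
    moved = max(dx, dy)
    if moved < max_move_px and duration_ms <= max_tap_ms:
        return "tap"
    return "drag" if moved >= max_move_px else "press-and-hold"

tap = classify_pen_event((10, 10), (11, 10), 120)
drag = classify_pen_event((10, 10), (60, 40), 120)
```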
Any specific object may belong to numerous groups, with one method shown in
The priority queue of the groups identified according to the method of
Grouping structures may also be used to edit selections, as illustrated in FIG. 15. In this example, the user has created objects “A”, “B”, “C”, “D”, “E” and “F” at 1500, and object “C” happens to belong to a group containing objects “C” and “D”, but no other groups have been established. If the user wishes to select objects “A”, “B”, “E” and “F”, one approach is to select object “A” by clicking on object “A” at step 1510. Then, by holding down a particular key on the keyboard (for example the shift key) and clicking on another object, this object will be added to the set of selected objects, as is the case with “B” at step 1515, “E” at step 1520 and “F” at step 1525. Alternatively, the user could select all of the objects, perhaps by an encircling gesture, at step 1530 and then remove “C” and “D” individually by shift-clicking “C” at step 1535 and shift-clicking “D” at step 1540. Another alternative is to select all of the objects, perhaps by an encircling gesture, at step 1550, and then remove “C” and “D” as a group by shift-clicking “C” twice, as at steps 1555 and 1560. The first shift-click removes “C” from the selection. Subsequent shift-clicks on “C” de-select groups to which “C” belongs, leaving objects “A”, “B”, “E” and “F”, as shown in step 1560, as the remaining objects in the selection.
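The shift-click editing behavior of FIG. 15 can be sketched as follows. This is an illustrative sketch under assumed representations (selections and groups as sets of object names); the function name `shift_click` is hypothetical.

```python
def shift_click(selection, groups, clicked, last_clicked):
    """A first shift-click on an object toggles its membership in the
    selection; a repeated shift-click on the same object additionally
    de-selects every group containing it (illustrative sketch)."""
    selection = set(selection)
    if clicked != last_clicked:
        selection ^= {clicked}          # toggle membership in the selection
    else:
        for members in groups:          # repeated click: drop whole groups
            if clicked in members:
                selection -= members
    return selection

objs = {"A", "B", "C", "D", "E", "F"}
groups = [{"C", "D"}]
# Select all, then shift-click "C" twice (as at steps 1550-1560):
sel = shift_click(objs, groups, "C", last_clicked=None)   # removes "C"
sel = shift_click(sel, groups, "C", last_clicked="C")     # removes group {C, D}
```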
This invention utilizes these selection tools to establish hyperlinks between an object and a destination or source. Currently available tools support the formation of hyperlinks between structured image objects, or between simply shaped image regions, but there is no easy, convenient, and effective way to specify a link whose “hot” region is an arbitrarily-shaped but perceptually-salient image object. Using current tools, the user must select among an array of predefined geometric shapes for the region, including circle, rectangle, and polygon. Then the user must specify the parameters of the shape object, preferably through the use of a graphical user interface. This process can become tedious and problematic if a number of different hyperlinks need to be established for nearby and complexly shaped image regions.
The ability to establish unidirectional or bi-directional hyperlinks between objects and destinations or sources is provided by the selection tools described herein, which are based on image processing and analysis technology. Beginning with an undifferentiated image, certain primitive image objects are automatically defined, and certain salient groupings of these are established. Using simple mouse and keyboard operations the user can easily establish additional image objects as Primary Image Objects or Composite Image Objects. The user may then select these objects, and the complex regions they define, simply by clicking a mouse over them.
Under existing hyperlink standards, any given image location may or may not support multiple hyperlinks. For example, in a case of two overlapping hyperlink source polygons, if the user clicks in the intersection region, one or the other link will be followed depending on which region polygon occurs first in a file. In contrast to this, the subject invention provides a richer link structure than the conventional hyperlinked document formats. This invention permits the selection of multiple groups sharing the same patch of image. The user may cycle through selected objects pertaining to a given location by repeatedly clicking the mouse button. As an image viewer, this invention permits any selectable image object, including complex composite objects, to have its own independent hyperlink. These links can be followed by an action other than a left mouse button click, for example a double-click, right button click, or right button click followed by selection of the link through use of a pop-up menu.
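The click-to-cycle behavior, in which repeated clicks advance through the objects sharing a given image location, can be sketched as below. The function name and the list representation of the overlapping-object stack are illustrative assumptions.

```python
def next_selection(objects_at_point, current):
    """Cycle through the selectable objects (including composite groups)
    that share a clicked image location; repeated clicks advance through
    the list and wrap around. An illustrative sketch."""
    if not objects_at_point:
        return None
    if current not in objects_at_point:
        return objects_at_point[0]
    i = objects_at_point.index(current)
    return objects_at_point[(i + 1) % len(objects_at_point)]

stack = ["word", "line", "column"]   # groups sharing the same image patch
first = next_selection(stack, None)        # first click selects "word"
second = next_selection(stack, first)      # next click selects "line"
wrapped = next_selection(stack, "column")  # cycling wraps back to "word"
```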
A contribution of the present invention is the provision for managing a lattice structure which represents multiple possible groupings of primitive image objects which may include pen strokes and bitmap objects. For example, the user may easily substitute typed text for handwritten material such as handwritten notes, as illustrated in FIG. 17. In
The present invention augments this functionality in two ways. First, it incorporates the selection mechanisms described hereinabove. The system does not present the user with just a single choice of original image material to be replaced with typed text; instead, the user is able to choose exactly what image material is to be replaced. This is accomplished through use of any or all of the tools disclosed herein: rectangle dragging, freeform path dragging, polygon selection, selection of established primitive image objects with a single mouse click, selection of established groups of image objects with multiple mouse clicks, and editing of group structure by depressing a single prespecified key, such as the shift key, while performing selection operations. These operations make use of the lattice structure of relationships between primitive image objects and Composite Objects representing groupings of them. After image material is selected by any of these means, the user may commence typing text. Once text is entered, the selected image material is removed from the display and replaced with an image of the typed text.
It will be noted that this functionality applies also in systems where some sort of automatic character recognition is provided. In these cases, instead of the user typing text, the user may invoke a character recognition system which would be applied to just the image material selected by the user. In this way the user is able to simplify the job of any character recognition system by reducing the complexity of the image input material it is given to recognize, e.g. by isolating single words which OCR/ICR systems might recognize successfully in isolation but not when surrounded and intruded upon by extraneous image material.
Secondly, the present invention teaches a method for maintaining established group structure even while the user replaces source image material with typed text. The Primary Image Objects (e.g., Bitmap Objects) which are to be replaced by typed text may in many cases participate in groups, which are represented by Composite Objects. These groups should be preserved if possible even if the selected Bitmap Objects are removed and replaced with typed text. This is accomplished according to the method illustrated in the flow chart of FIG. 18. Here, typed text is entered into the display using a special kind of Bitmap Object called a Text String Bitmap Object. This is a Bitmap Object which is associated with a set of ASCII characters plus typography information such as font family, font size, font color, etc. The textual characters and typography information permit this Bitmap Object to be modified by the user in terms of its formatted textual appearance.
At step 1810, the input to the system may include Bitmap Objects, with a group structure represented by a lattice of Composite Objects, a Text String Bitmap Object (TSBO), and a listing of Selected Bitmap Objects the TSBO is to replace in the image display. This is illustrated in
If the selected image objects do not consist of a single Bitmap Object, then at step 1830, a Composite Object corresponding to the collection of selected Bitmap Objects is identified. This is illustrated in
Referring back to
Referring again to
Referring once more to
Again referring to
Returning again to
Referring now to
Referring again to
The result of the procedure described above is a reconfigured structure lattice, whereby the TSBO replaces the selected Bitmap Objects in the list of displayed image objects visible in the display, while groups involving the selected Bitmap Objects now become associated with the TSBO. This structure leaves “historical links”, which preserve the information about the original groupings. This permits the TSBO to be exchanged and the original Bitmap Objects it replaced to be restored, with all of their prior grouping structure.
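The reconfiguration summarized above, including the “historical links” that make the replacement reversible, can be sketched as a pair of functions. This is an illustrative sketch under assumed representations (a display list, groups as name-to-member-set dicts); the function names and the `history` record are hypothetical.

```python
def replace_with_tsbo(display_list, groups, selected, tsbo):
    """The Text String Bitmap Object (TSBO) replaces the selected Bitmap
    Objects in the display list, groups re-link to the TSBO, and a
    'historical link' records the originals and their prior grouping so
    they can be restored (illustrative sketch of FIG. 18)."""
    new_display = [o for o in display_list if o not in selected] + [tsbo]
    new_groups = {name: (members - selected) | {tsbo} if members & selected
                  else set(members)
                  for name, members in groups.items()}
    history = {tsbo: (set(selected), {n: set(m) for n, m in groups.items()})}
    return new_display, new_groups, history

def restore(display_list, history, tsbo):
    """Undo: swap the TSBO back for the original Bitmap Objects and
    their prior grouping structure."""
    originals, old_groups = history[tsbo]
    new_display = [o for o in display_list if o != tsbo] + sorted(originals)
    return new_display, old_groups

disp, grp, hist = replace_with_tsbo(["A", "B", "C"], {"CO1": {"A", "B"}},
                                    {"A", "B"}, "TSBO")
disp2, grp2 = restore(disp, hist, "TSBO")
```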
It will be noted that although this aspect of the invention is described with regard to replacing Bitmap Objects representing textual material with typed text represented in a Text String Bitmap Object, this procedure applies as well to purely graphical or line-art data, thus enabling groups of image primitives to be replaced with Formal Graphic Objects while maintaining prior grouping relationships. For example,
While the present invention has been illustrated and described with reference to specific embodiments, further modification and improvements will occur to those skilled in the art. For example, the editor described herein may be combined with a digital camera that interfaces to a computer to form a graphics/text tool usable by children as well as adults. Although discussed with reference to text and line art, the operations illustrated herein apply equally well to any type of image object. Additionally, “code” as used herein, or “program” as used herein, is any plurality of binary values or any executable, interpreted or compiled code which can be used by a computer or execution device to perform a task. This code or program can be written in any one of several known computer languages. A “computer”, as used herein, can mean any device which stores, processes, routes, manipulates, or performs like operation on data. It is to be understood, therefore, that this invention is not limited to the particular forms illustrated and that it is intended in the appended claims to embrace all alternatives, modifications, and variations which do not depart from the spirit and scope of this invention.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5506946 *||Oct 14, 1994||Apr 9, 1996||Electronics For Imaging, Inc.||Selective color correction|
|US5548700||Mar 29, 1993||Aug 20, 1996||Xerox Corporation||Editing text in an image|
|US5553224 *||Aug 4, 1993||Sep 3, 1996||Xerox Corporation||Method for dynamically maintaining multiple structural interpretations in graphics system|
|US5664180 *||Mar 20, 1995||Sep 2, 1997||Framework Technologies Corporation||Design tool for complex objects which links object structures of a design object in multiple design domains|
|US5687306 *||Nov 12, 1996||Nov 11, 1997||Image Ware Software, Inc.||Image editing system including sizing function|
|US5861886 *||Jun 26, 1996||Jan 19, 1999||Xerox Corporation||Method and apparatus for grouping graphic objects on a computer based system having a graphical user interface|
|US5889523||Nov 25, 1997||Mar 30, 1999||Fuji Xerox Co., Ltd.||Method and apparatus for dynamically grouping a plurality of graphic objects|
|US5912668||May 30, 1997||Jun 15, 1999||Sony Corporation||Controlling a screen display of a group of images represented by a graphical object|
|US5926186||Jan 21, 1997||Jul 20, 1999||Fujitsu Limited||Graphic editing apparatus and method|
|US5974198||Aug 26, 1996||Oct 26, 1999||Adobe Systems Incorporated||Adjustment layers for composited image manipulation|
|US6020895||Jun 10, 1997||Feb 1, 2000||Fujitsu Limited||Object editing method, object editing system and computer memory product|
|US6028583||Jan 16, 1998||Feb 22, 2000||Adobe Systems, Inc.||Compound layers for composited image manipulation|
|US6184860||Sep 27, 1994||Feb 6, 2001||Canon Kabushiki Kaisha||Image editing apparatus|
|US6459442 *||Dec 1, 1999||Oct 1, 2002||Xerox Corporation||System for applying application behaviors to freeform data|
|US6651221 *||Apr 5, 2000||Nov 18, 2003||Microsoft Corporation||System and methods for spacing, storing and recognizing electronic representations of handwriting, printing and drawings|
|US20020081040 *||Dec 21, 2000||Jun 27, 2002||Yoshiki Uchida||Image editing with block selection|
|US20020175948 *||May 23, 2001||Nov 28, 2002||Nielsen Eric W.||Graphical user interface method and apparatus for interaction with finite element analysis applications|
|US20030002733 *||Jun 29, 2001||Jan 2, 2003||Jewel Tsai||Multi-mode image processing method and a system thereof|
|US20030051255 *||Feb 25, 2002||Mar 13, 2003||Bulman Richard L.||Object customization and presentation system|
|US20030086127 *||Nov 4, 2002||May 8, 2003||Naoki Ito||Image processing apparatus and method, computer program, and computer readable storage medium|
|EP0637812A2||Aug 2, 1994||Feb 8, 1995||Xerox Corporation||Method for dynamically maintaining multiple structural interpretations in graphics system|
|EP0816999A2||Jun 25, 1997||Jan 7, 1998||Xerox Corporation||Method and apparatus for collapsing and expanding selected regions on work space on a computer controlled display system|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7154511 *||Oct 24, 2003||Dec 26, 2006||Microsoft Corporation||Fast rendering of ink|
|US7158675 *||Jun 28, 2002||Jan 2, 2007||Microsoft Corporation||Interfacing with ink|
|US7167585||Dec 16, 2005||Jan 23, 2007||Microsoft Corporation||Interfacing with ink|
|US7246321 *||Jul 12, 2002||Jul 17, 2007||Anoto Ab||Editing data|
|US7310769 *||Mar 12, 2003||Dec 18, 2007||Adobe Systems Incorporated||Text encoding using dummy font|
|US7421116 *||Mar 25, 2004||Sep 2, 2008||Hewlett-Packard Development Company, L.P.||Image processing methods and systems|
|US7715630||Dec 16, 2005||May 11, 2010||Microsoft Corporation||Interfacing with ink|
|US7725493||Mar 23, 2007||May 25, 2010||Palo Alto Research Center Incorporated||Optimization method and process using tree searching operation and non-overlapping support constraint requirements|
|US7765477||Nov 5, 2007||Jul 27, 2010||Adobe Systems Incorporated||Searching dummy font encoded text|
|US7876335 *||Jun 2, 2006||Jan 25, 2011||Adobe Systems Incorporated||Methods and apparatus for redacting content in a document|
|US7907141 *||Mar 23, 2007||Mar 15, 2011||Palo Alto Research Center Incorporated||Methods and processes for recognition of electronic ink strokes|
|US7925987||Jun 28, 2002||Apr 12, 2011||Microsoft Corporation||Entry and editing of electronic ink|
|US8014607||Mar 23, 2007||Sep 6, 2011||Palo Alto Research Center Incorporated||Method and apparatus for creating and editing node-link diagrams in pen computing systems|
|US8072472 *||Jun 26, 2006||Dec 6, 2011||Agfa Healthcare Inc.||System and method for scaling overlay images|
|US8074184 *||Nov 7, 2003||Dec 6, 2011||Microsoft Corporation||Modifying electronic documents with recognized content or other associated data|
|US8166388||Jun 28, 2002||Apr 24, 2012||Microsoft Corporation||Overlaying electronic ink|
|US8170380 *||May 30, 2008||May 1, 2012||Adobe Systems Incorporated||Method and apparatus for importing, exporting and determining an initial state for files having multiple layers|
|US8229220 *||Jul 6, 2007||Jul 24, 2012||Samsung Electronics Co., Ltd.||Image processing apparatus and image processing method|
|US8442319||Jul 10, 2009||May 14, 2013||Palo Alto Research Center Incorporated||System and method for classifying connected groups of foreground pixels in scanned document images according to the type of marking|
|US8452086||Jul 10, 2009||May 28, 2013||Palo Alto Research Center Incorporated||System and user interface for machine-assisted human labeling of pixels in an image|
|US8514447 *||Jun 6, 2006||Aug 20, 2013||Canon Kabushiki Kaisha||Image processing using first and second color matching|
|US8539385||May 28, 2010||Sep 17, 2013||Apple Inc.||Device, method, and graphical user interface for precise positioning of objects|
|US8539386 *||May 28, 2010||Sep 17, 2013||Apple Inc.||Device, method, and graphical user interface for selecting and moving objects|
|US8548280 *||Feb 14, 2011||Oct 1, 2013||Hewlett-Packard Development Company, L.P.||Systems and methods for replacing non-image text|
|US8612884||May 28, 2010||Dec 17, 2013||Apple Inc.||Device, method, and graphical user interface for resizing objects|
|US8649600||Jul 10, 2009||Feb 11, 2014||Palo Alto Research Center Incorporated||System and method for segmenting text lines in documents|
|US8677268||May 28, 2010||Mar 18, 2014||Apple Inc.||Device, method, and graphical user interface for resizing objects|
|US8766928||Apr 27, 2010||Jul 1, 2014||Apple Inc.||Device, method, and graphical user interface for manipulating user interface objects|
|US8768057||Nov 15, 2012||Jul 1, 2014||Palo Alto Research Center Incorporated||System and method for segmenting text lines in documents|
|US8780069||Jun 3, 2013||Jul 15, 2014||Apple Inc.||Device, method, and graphical user interface for manipulating user interface objects|
|US8799826||Sep 25, 2009||Aug 5, 2014||Apple Inc.||Device, method, and graphical user interface for moving a calendar entry in a calendar application|
|US8863016||Sep 25, 2009||Oct 14, 2014||Apple Inc.||Device, method, and graphical user interface for manipulating user interface objects|
|US8948509 *||Nov 15, 2012||Feb 3, 2015||Adobe Systems Incorporated||Blending with multiple blend modes for image manipulation|
|US8972879||Jul 30, 2010||Mar 3, 2015||Apple Inc.||Device, method, and graphical user interface for reordering the front-to-back positions of objects|
|US9081494||Jul 30, 2010||Jul 14, 2015||Apple Inc.||Device, method, and graphical user interface for copying formatting attributes|
|US9098182||Jul 30, 2010||Aug 4, 2015||Apple Inc.||Device, method, and graphical user interface for copying user interface objects between content regions|
|US9105094 *||Oct 1, 2013||Aug 11, 2015||Adobe Systems Incorporated||Image layers navigation|
|US9141594||Jan 6, 2011||Sep 22, 2015||Adobe Systems Incorporated||Methods and apparatus for redacting content in a document|
|US20030023644 *||Jul 12, 2002||Jan 30, 2003||Mattias Bryborn||Editing data|
|US20030214553 *||Jun 28, 2002||Nov 20, 2003||Microsoft Corporation||Ink regions in an overlay control|
|US20030215140 *||Jun 28, 2002||Nov 20, 2003||Microsoft Corporation||Interfacing with ink|
|US20030215142 *||Jun 28, 2002||Nov 20, 2003||Microsoft Corporation||Entry and editing of electronic ink|
|US20040054509 *||Sep 12, 2002||Mar 18, 2004||Breit Stephen R.||System and method for preparing a solid model for meshing|
|US20040066538 *||Oct 4, 2002||Apr 8, 2004||Rozzi William A.||Conversion of halftone bitmaps to continuous tone representations|
|US20050068312 *||Sep 26, 2003||Mar 31, 2005||Denny Jaeger||Method for programming a graphic control device with numeric and textual characters|
|US20050088464 *||Oct 24, 2003||Apr 28, 2005||Microsoft Corporation||Fast rendering of ink|
|US20050099398 *||Nov 7, 2003||May 12, 2005||Microsoft Corporation||Modifying electronic documents with recognized content or other associated data|
|US20050213848 *||Mar 25, 2004||Sep 29, 2005||Jian Fan||Image processing methods and systems|
|US20060093218 *||Dec 16, 2005||May 4, 2006||Microsoft Corporation||Interfacing with ink|
|US20060093219 *||Dec 16, 2005||May 4, 2006||Microsoft Corporation||Interfacing with ink|
|US20060274974 *||Jun 6, 2006||Dec 7, 2006||Canon Kabushiki Kaisha||Image processing method and image processing apparatus|
|US20070240076 *||Jun 25, 2007||Oct 11, 2007||Nokia Corporation||System and Method for Visual History Presentation and Management|
|US20070296736 *||Jun 26, 2006||Dec 27, 2007||Agfa Inc.||System and method for scaling overlay images|
|US20080013865 *||Jul 6, 2007||Jan 17, 2008||Samsung Electronics Co., Ltd.||Image processing apparatus and image processing method|
|US20090073188 *||Dec 17, 2007||Mar 19, 2009||James Williams||System and method of modifying illustrations using scaleable vector graphics|
|US20110181529 *||Jul 28, 2011||Jay Christopher Capela||Device, Method, and Graphical User Interface for Selecting and Moving Objects|
|US20120207390 *||Aug 16, 2012||Sayers Craig P||Systems and methods for replacing non-image text|
|US20130111380 *||May 2, 2013||Symantec Corporation||Digital whiteboard implementation|
|US20140095992 *||Mar 11, 2013||Apr 3, 2014||Microsoft Corporation||Grouping writing regions of digital ink|
|US20140133748 *||Nov 15, 2012||May 15, 2014||Adobe Systems Incorporated||Blending with multiple blend modes for image manipulation|
|US20150093029 *||Oct 1, 2013||Apr 2, 2015||Adobe Systems Incorporated||Image layers navigation|
|International Classification||G06T11/80, G06F3/12, H04N1/387, G06T11/60, G09G5/00, B41J5/30|
|Mar 22, 2002||AS||Assignment|
Owner name: XEROX CORPORATION, CONNECTICUT
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SAUND, ERIC;MORAN, THOMAS P.;LARNER, DANIEL L.;AND OTHERS;REEL/FRAME:012734/0723;SIGNING DATES FROM 20020319 TO 20020321
|Jul 30, 2002||AS||Assignment|
Owner name: BANK ONE, NA, AS ADMINISTRATIVE AGENT, ILLINOIS
Free format text: SECURITY AGREEMENT;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:013111/0001
Effective date: 20020621
|Oct 31, 2003||AS||Assignment|
Owner name: JPMORGAN CHASE BANK, AS COLLATERAL AGENT, TEXAS
Free format text: SECURITY AGREEMENT;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:015134/0476
Effective date: 20030625
|Oct 16, 2008||FPAY||Fee payment|
Year of fee payment: 4
|Nov 13, 2012||FPAY||Fee payment|
Year of fee payment: 8