Publication number | US6788801 B2 |

Publication type | Grant |

Application number | US 10/455,928 |

Publication date | Sep 7, 2004 |

Filing date | Jun 6, 2003 |

Priority date | Aug 19, 1999 |

Fee status | Lapsed |

Also published as | CA2284866A1, CA2284866C, EP1077570A2, EP1077570A3, US6654479, US20030198363 |


Inventors | Hong-Yuan Mark Liao, Chun-Shien Lu |

Original Assignee | Academia Sinica |


Patent Citations (14), Non-Patent Citations (16), Referenced by (10), Classifications (27), Legal Events (7)


US 6788801 B2

Abstract

A novel image protection scheme named “cocktail watermarking” improves over current spread-spectrum watermarking approaches. Two watermarks, which play complementary roles, are simultaneously embedded into an original image. The new watermarking scheme has the characteristic that, no matter what an attack is, at least one watermark typically survives well and can be detected. Results of extensive experiments indicate that our cocktail watermarking scheme is effective in resisting various attacks.

Claims (11)

1. A method for detecting information embedded in an image comprising:

accepting an image;

transforming the accepted image to form a plurality of transform coefficients which represent the accepted image;

accepting an original image;

transforming the original image to form a plurality of transform coefficients which represent the original image;

computing a difference between the transform coefficients of the original image and the transform coefficients of the accepted image;

determining a plurality of estimates of a watermark sequence, each estimate being determined from a different subset of the computed differences between transform coefficients;

computing a plurality of indicators that the watermark sequence was encoded in the accepted image, each indicator being associated with a different one of the determined estimates of the watermark sequence; and

determining an overall indicator that the watermark sequence was encoded in the accepted image from the plurality of indicators, wherein determining the plurality of estimates comprises determining a positive watermark estimator that estimates a positive watermark used to modify a subset of transform coefficients of the original image such that the magnitude of each of the coefficients is more likely to be increased than decreased.

2. The method of claim 1, further comprising receiving a watermark sequence and mapping data that indicates the location of the positive watermark in the watermark sequence.

3. The method of claim 2 in which determining the plurality of indicators comprises determining a positive watermark similarity indicator representing a similarity between the positive watermark estimator and the positive watermark in the watermark sequence.

4. The method of claim 1, in which transforming the accepted image to form the plurality of transform coefficients that represent the accepted image comprises using a wavelet transform to generate the transform coefficients.

5. The method of claim 1, in which transforming the original image to form the plurality of transform coefficients that represent the original image comprises using a wavelet transform to generate the transform coefficients.

6. A method for detecting information embedded in an image comprising:

accepting an image;

transforming the accepted image to form a plurality of transform coefficients which represent the accepted image;

accepting an original image;

transforming the original image to form a plurality of transform coefficients which represent the original image;

computing a difference between the transform coefficients of the original image and the transform coefficients of the accepted image;

determining a plurality of estimates of a watermark sequence, each estimate being determined from a different subset of the computed differences between transform coefficients;

computing a plurality of indicators that the watermark sequence was encoded in the accepted image, each indicator being associated with a different one of the determined estimates of the watermark sequence; and

determining an overall indicator that the watermark sequence was encoded in the accepted image from the plurality of indicators, wherein determining the plurality of estimates comprises determining a negative watermark estimator that estimates a negative watermark used to modify a subset of transform coefficients of the original image such that the magnitude of each of the coefficients is more likely to be decreased than increased.

7. The method of claim 6, further comprising receiving a watermark sequence and mapping data that indicates the location of the negative watermark in the watermark sequence.

8. The method of claim 7 in which determining the plurality of indicators comprises determining a negative watermark similarity indicator representing a similarity between the negative watermark estimator and the negative watermark in the watermark sequence.

9. A method comprising:

generating an indicator to indicate whether a watermark sequence is in an accepted image based on an estimate of a positive watermark in the accepted image and an estimate of a negative watermark in the accepted image,

the positive watermark being used to modify a subset of transform coefficients of an original image such that the magnitude of each of the transform coefficients is more likely to be increased than decreased, and

the negative watermark being used to modify a subset of transform coefficients of the original image such that the magnitude of each of the transform coefficients is more likely to be decreased than increased.

10. The method of claim 9, further comprising generating a positive watermark similarity indicator and a negative watermark similarity indicator, the positive watermark similarity indicator indicating a similarity between the positive watermark and the estimate of the positive watermark, the negative watermark similarity indicator indicating a similarity between the negative watermark and the estimate of the negative watermark.

11. The method of claim 9, further comprising receiving a watermark sequence and mapping data, the mapping data indicating the location of the positive watermark in the watermark sequence and the location of the negative watermark in the watermark sequence.

Description

This application is a divisional application and claims priority to U.S. application Ser. No. 09/377,236, filed Aug. 19, 1999, now U.S. Pat. No. 6,654,479, the contents of which are incorporated herein by reference.

This invention relates to embedding information in an image.

Conventionally, hand-written signatures, seals, or other markings are used by artists and authors to identify documents or images as their work, for example to claim a copyright or other ownership right on their work. However, digital technology for manipulating images has made it difficult to mark images in a way that the marking cannot be removed or obliterated. Additionally, owing to the popularity of the Internet, the use and transfer of digitized media including media bearing digitized images has increased. Therefore, it is imperative to protect works from intentional or unwitting use which is contrary to an owner's rights. A commonly used method for identifying a work is to insert a watermark into the original work. Watermarks which are embedded in an original work are expected to tolerate attacks of any kind. Detection of a valid watermark in a work enables an owner of the work to identify the work as their own. It is desirable to be able to detect such a watermark, even if the work is modified, for example by processing an image.

The invention is directed to a novel image protection scheme named “cocktail watermarking”. To improve over current spread-spectrum watermarking approaches, two watermarks, which play complementary roles, are simultaneously embedded into an original image. The new watermarking scheme has the characteristic that, no matter what an attack is, at least one watermark typically survives well and can be detected. Results of extensive experiments indicate that our cocktail watermarking scheme is effective in resisting various attacks.

In one aspect, in general, the invention is a method for adding information to a first image including the following steps. The method includes transforming the first image to form a set of transform coefficients which represent the image. A first subset of the transform coefficients is selected and each of this first subset is modified such that the magnitude of each of the coefficients is more likely to be increased than decreased. A second subset of the transform coefficients is selected and modified such that the magnitude of each of the coefficients is more likely to be decreased than increased. The method then includes forming a second image using the modified first and second subsets of transform coefficients.

The invention can include one or more of the following features:

Transforming the first image is done by computing a wavelet transform of the image, and the second image is formed by taking an inverse wavelet transform of modified wavelet transform coefficients.

The magnitude of each of the coefficients in the first and the second subsets is greater than a just noticeable difference value for that coefficient.

Modifying each of the first subset of transform coefficients includes increasing the magnitude of each of said first set of coefficients, and modifying each of the second subset of transform coefficients includes decreasing the magnitude of each of said second set of coefficients.

The method further includes computing a set of random numbers. Increasing the magnitude of each of the first subset of coefficients then includes increasing the magnitude of each of the coefficients according to a different one of the random numbers, and decreasing the magnitude of each of the second subset of coefficients includes decreasing the magnitude of each of the coefficients according to a different one of the random numbers.

The method can further include accepting a third image, which may be a processed version of the second image. The method then includes transforming the third image to form a set of transform coefficients which represent said third image and computing a difference between the transform coefficients of the first image and the transform coefficients of the third image. An indicator that the third image is a processed version of the second image is then determined from the computed difference.

In another general aspect of the invention, a method for detecting information embedded in an image includes the following. The method includes accepting an image and transforming the accepted image to form a set of transform coefficients which represent the accepted image. The method also includes accepting an original image and transforming the original image to form a set of transform coefficients which represent the original image. A difference between the transform coefficients of the original image and the transform coefficients of the accepted image is computed. Multiple estimates of a watermark sequence are determined such that each estimate is determined from a different subset of the computed differences between transform coefficients. Multiple indicators that the watermark sequence was encoded in the accepted image are computed, each indicator being associated with a different one of the determined estimates of the watermark sequence. The method then includes determining an overall indicator that the watermark sequence was encoded in the accepted image from the plurality of indicators.

Other features and advantages of the invention are apparent from the following description, and from the claims.

FIG. 1 is a block diagram showing encoding of a watermark and subsequent detection of the watermark;

FIG. 2A is a block diagram of an encoder, which encodes a watermark sequence into an original image;

FIG. 2B is a block diagram of a detector, which detects a watermark sequence encoded in an image; and

FIG. 3 is an illustration of steps of an encoding process.

Referring to FIG. 1, an original image I(x,y) **100** is processed by an encoder **110** in order to mark the image with a so-called “watermark” to produce a watermarked image I^{(m)}(x,y) **130**. This watermarked image is distributed, for example, over electronic distribution media or channels such as on magnetic disks or over the Internet. During distribution, watermarked image I^{(m)}(x,y) **130** may be modified either inadvertently or intentionally such that the resulting image is not identical to watermarked image I^{(m)}(x,y). Such a modification is often referred to as an “attack,” alluding to an intentional modification aimed at removing a watermark. Here we refer to an attack as any modification, intentional or not. In FIG. 1, this modification is represented by attack **150**, which takes watermarked image I^{(m)}(x,y) **130** and produces attacked watermarked image I^{*}(x,y) **170**. A detector **180** processes attacked watermarked image I^{*}(x,y) **170**, along with additional information produced during the encoding phase (described further below), to produce a scalar quantity, Sim **190**, which indicates whether the input to detector **180** is indeed a modified version of watermarked image I^{(m)}(x,y) **130**. That is, detector **180** determines whether its input is attacked watermarked image I^{*}(x,y) **170** as shown in FIG. 1, as opposed to a version of original image I(x,y) **100** that was not watermarked by encoder **110**, or that was marked with a different watermark. Detector **180** makes use of original image I(x,y) **100** as well as other information produced by encoder **110**, such as a random watermark sequence N **120** and a mapping m(x,y) **122** which identifies where in watermarked image I^{(m)}(x,y) **130** watermark sequence N **120** is “hidden.” This other information is not distributed along with the watermarked image, thereby making it difficult to remove the watermark from the distributed image.

A desirable property of the combination of encoder **110** and detector **180** is that the determination of whether the input to detector **180** is an attacked watermarked image should be robust to a variety of types of attacks **150**. Typical types of attacks **150** include median filtering, rescaling, sharpening, histogram equalization, dithering, compression, photocopying, and blurring. A property of many types of attacks is that the coefficients of a wavelet transform of an attacked image are either mostly increased in magnitude (that is, significantly more than one half of the coefficients are increased in magnitude), or are mostly decreased in magnitude, compared to the corresponding coefficients of the image prior to the attack. Although not limited to attacks with such characteristics, the approach embodied in this invention is particularly well matched to attacks with this property.
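This one-directional shift can be illustrated with a small numerical sketch (not from the patent; the blur-like and sharpen-like attack models below are stand-ins for real image operations):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=1000)            # stand-in wavelet coefficients of an image
H_blur = 0.7 * H                     # a blur-like attack attenuates coefficients
H_sharp = 1.3 * H + rng.normal(scale=0.01, size=1000)   # sharpening amplifies them

frac_up_blur = np.mean(np.abs(H_blur) > np.abs(H))
frac_up_sharp = np.mean(np.abs(H_sharp) > np.abs(H))
print(frac_up_blur)    # 0.0: magnitudes all decreased
print(frac_up_sharp)   # near 1.0: magnitudes mostly increased
```

Because the shift is overwhelmingly in one direction, a watermark designed to survive magnitude increases and a complementary one designed to survive decreases together cover both cases.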

Referring to FIG. 2A, encoder **110** includes a number of logical modules. An overall approach used in encoder **110** is to hide two complementary watermarks in original image I(x,y) **100** to produce watermarked image I^{(m)}(x,y) **130**. The complementary watermarks are chosen such that under typical attacks, at least one of the watermarks survives and is easily detectable by detector **180** (FIG. **1**). We refer to this general approach of applying two, or more, watermarks to an image as “cocktail” watermarking.

Encoder **110** accepts original image I(x,y) **100**. In the discussion that follows, original image **100** is made up of 128 by 128 grayscale pixels. In alternative embodiments, other sizes of images, and black-and-white or color images are processed using the same approach. Encoder **110** applies two watermarks to the original image I(x,y) **100** in the transform domain by modifying a selected subset of transform coefficients of the image to encode a watermark sequence. In this embodiment, encoder **110** uses a wavelet transform **210** to compute a wavelet representation made up of wavelet coefficients H(x,y) **212**. In other embodiments, other transforms are used, for example, a discrete cosine transform. After computing the wavelet representation, encoder **110** modifies a subset of wavelet coefficients H(x,y) **212** at a wavelet modulator **215** to produce a modified representation made up of modulated wavelet coefficients H^{(m)}(x,y) **216**. The encoder applies an inverse wavelet transform **220** to the modulated wavelet coefficients **216** to produce watermarked image I^{(m)}(x,y) **130**.

Turning now to FIG. 2B, detector **180** inputs attacked watermarked image I^{*}(x,y) **170**, which is either watermarked image I^{(m)}(x,y) **130** or an attacked version of that watermarked image. Detector **180** produces a scalar quantity Sim **190**, which indicates whether the image was indeed processed (watermarked) by encoder **110**. In order to compute Sim **190**, the detector makes use of original image I(x,y) **100**, attacked watermarked image I^{*}(x,y) **170**, as well as several other quantities computed by encoder **110** during the encoding process, which are described below.

Referring back to FIG. 2A, encoder **110** encodes watermark sequence N **120** into original image I(x,y) **100**. Encoder **110** applies the watermark sequence as two separate watermarks: as a positive watermark M^{(p)}(x,y) produced by a positive watermark generator **214**, and as a negative watermark M^{(n)}(x,y) produced by a negative watermark generator **218**. The outputs of watermark generators **214** and **218** are passed to wavelet modulator **215** which modifies wavelet coefficients H(x,y) **212** of the original image.

Watermark sequence N **120** is passed to the detector for use in determining whether the attacked watermarked image indeed encodes that watermark sequence. In addition, mapping m(x,y) **122** is passed from the encoder to the detector. This mapping identifies which wavelet coefficients were modified during the encoding stage. In addition, a scale factor w **124** is passed from the encoder to the detector. Scale factor w **124** is related to the degree to which watermark sequence N **120** is encoded into the original image.

Turning to FIG. 3, the process carried out by encoder **110** is illustrated as a sequence of three transformations. First, original image I(x,y) **100** is transformed using wavelet transform **210** (FIG. 2A) to produce wavelet coefficients H(x,y) **212**. The wavelet transform produces the same number of coefficients as in the original image, in this case 128 by 128. Using conventional wavelet transform techniques, the wavelet coefficients are arranged in terms of nested sets of coefficients, each associated with a different spatial scale: three sets of 64 by 64 coefficients **302** represent three orientations of a first spatial scale; three sets of 32 by 32 coefficients **304** represent the next scale; three sets of 16 by 16 coefficients **306** represent the next; and a final set of 16 by 16 coefficients **308** represents a remaining image at the final spatial scale. Although illustrated with the scale and orientation structure, wavelet coefficients H(x,y) are indexed by a “position” (x,y) where the x and y indices each range over 128 values spanning all the scales and orientations of the wavelet transform coefficients.
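The nested scale structure above can be reproduced with a minimal three-level Haar decomposition (an illustrative sketch only; the patent does not specify the wavelet basis, and `haar2d` is a hypothetical helper):

```python
import numpy as np

def haar2d(a):
    """One level of a 2-D Haar transform: returns the coarse band and the
    three orientation detail bands (horizontal, vertical, diagonal)."""
    # pairwise averages/differences along columns, then rows
    lo_r = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi_r = (a[:, 0::2] - a[:, 1::2]) / 2.0
    cA = (lo_r[0::2] + lo_r[1::2]) / 2.0
    cH = (lo_r[0::2] - lo_r[1::2]) / 2.0
    cV = (hi_r[0::2] + hi_r[1::2]) / 2.0
    cD = (hi_r[0::2] - hi_r[1::2]) / 2.0
    return cA, (cH, cV, cD)

img = np.zeros((128, 128))           # stand-in 128-by-128 image
bands = []
a = img
for _ in range(3):                   # three decomposition levels, as in the text
    a, details = haar2d(a)
    bands.append([d.shape for d in details])
print(bands)      # [[(64, 64)]*3, [(32, 32)]*3, [(16, 16)]*3]
print(a.shape)    # (16, 16): the remaining coarse image at the final scale
```

The 3 × (64² + 32² + 16²) detail coefficients plus the final 16 × 16 coarse band total 16,384 coefficients, matching the pixel count of the original image.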

Referring still to FIG. 3, in the next transformation, wavelet coefficients H(x,y) **212** are modulated by the encoder to produce H^{(m)}(x,y) **216**. In general, most of the coefficient values are unchanged in this transformation, thereby avoiding a significant degradation of the original image. A sequence of coefficients **322** (the positions of which are illustrated with the plus signs) are modulated according to the positive watermark, and a sequence of coefficients **320** (the positions of which are illustrated with the minus signs) are modulated according to the negative watermark. The selection of these sequences and the details of modulating the coefficients are described below. The positions of these modulated coefficients are encoded in mapping m(x,y) **122** which is passed from encoder **110** to detector **180**.

In the final transformation carried out by encoder **110**, modulated wavelet coefficients **216** are passed to inverse wavelet transform **220** to produce watermarked image I^{(m)}(x, y) **130**.

Turning back to FIG. 2A, wavelet coefficients H(x,y) **212** are passed to a coefficient selector **230** which determines the sequence of positions of coefficients to modulate **320** and **322** (see FIG. **3**). In order to reduce the perceptual effects of the encoding procedure, coefficient selector **230** chooses a subset of the wavelet coefficients such that each of the selected coefficients is greater in magnitude than the just noticeable difference (JND) for that coefficient. The just noticeable difference for a coefficient is the least amount by which the coefficient may be changed for the change to be perceptible in the corresponding image. In this embodiment which makes use of the wavelet transform, the JND for each coefficient is computed independently of the original image, and depends on the spatial scales of the wavelet coefficients. Of coefficients with sufficiently large magnitude, half are used for the positive watermark and half are used for the negative watermark. Coefficient selector **230** passes a length, k, which is one half the number of selected coefficients to a watermark sequence generator **232**.
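The selection and interleaving carried out by coefficient selector **230** can be sketched as follows (hedged: the flat JND map is a stand-in, since the patent's JND values depend on spatial scale, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(scale=3.0, size=(128, 128))   # stand-in wavelet coefficients
JND = np.full_like(H, 2.0)                   # hypothetical flat JND map

# keep only positions whose coefficient magnitude exceeds its JND
positions = np.argwhere(np.abs(H) > JND)
rng.shuffle(positions)                       # randomized sequence of selected positions
if len(positions) % 2:                       # drop one so the two halves are equal
    positions = positions[:-1]
k = len(positions) // 2                      # half for each watermark

# interleave: alternate elements go to the positive and negative watermarks
pos_positions = positions[0::2]
neg_positions = positions[1::2]
print(len(pos_positions) == len(neg_positions) == k)   # True
```

Restricting modulation to coefficients already larger than their JND keeps the embedded changes below the threshold of visibility, as the text explains.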

Watermark sequence generator **232** generates a random watermark sequence N=(n_{1}, . . . , n_{k}) **120**, each element of which is independently chosen from a Gaussian distribution with mean zero and variance 1 (i.e., n_{i}˜N(0,1)). Encoder **110** passes watermark sequence **120** to both positive watermark generator **214** and negative watermark generator **218** as well as subsequently to detector **180**.

Returning to coefficient selector **230**, after having selected the coefficients with sufficiently large magnitude, coefficient selector **230** determines a randomized sequence of those selected coefficients. Coefficient selector sends the positions and values of the sequence of coefficients to positive and negative watermark generators **214** and **218**, respectively. Each of the watermark generators uses alternating elements in the sequence. That is, the positive and negative watermarks are interleaved.

Positive watermark generator **214** generates positive watermark M^{(p)}(x,y) such that the magnitude of the corresponding selected wavelet coefficients is, in general, increased. On the other hand, negative watermark generator **218** generates negative watermark M^{(n)}(x,y) such that the magnitude of the corresponding selected wavelet coefficients is, in general, decreased.

Positive watermark generator **214** generates positive watermark M^{(p)}(x,y) as follows. First, it sorts watermark sequence N **120**. Values from the watermark sequence are used in turn: n_{bottom} refers to the largest (most positive) value in the sequence that has not yet been used, and n_{top} refers to the smallest (most negative) value that has not yet been used. For every other element of the coefficient sequence (x_{p},y_{p}) generated by coefficient selector **230** (that is, positions **322** in FIG. 3), positive watermark generator **214** computes:

M^{(p)}(x_{p},y_{p}) = w × JND(x_{p},y_{p}) × n_{bottom} if H(x_{p},y_{p}) ≥ 0, and M^{(p)}(x_{p},y_{p}) = w × JND(x_{p},y_{p}) × n_{top} otherwise.

In this way M^{(p)}(x_{p},y_{p}) will typically (but not necessarily, due to the random nature of N **120**) have the same sign as H(x_{p},y_{p}) and therefore when added to H(x_{p},y_{p}) will increase its magnitude.

Negative watermark generator **218** generates negative watermark M^{(n)}(x,y) in a complementary manner. For every other element of the coefficient sequence generated by coefficient selector **230**, that is, the coefficients not used by the positive watermark generator, (x_{n},y_{n}), negative watermark generator **218** computes:

M^{(n)}(x_{n},y_{n}) = w × JND(x_{n},y_{n}) × n_{top} if H(x_{n},y_{n}) ≥ 0, and M^{(n)}(x_{n},y_{n}) = w × JND(x_{n},y_{n}) × n_{bottom} otherwise,

so that M^{(n)}(x_{n},y_{n}) will typically (but not necessarily, due to the random nature of N) have the opposite sign from H(x_{n},y_{n}).

Positive watermark generator **214** and negative watermark generator **218** pass the indices of the selected elements of watermark sequence **120** to a mapping module **222** which generates mapping m(x,y) **122** such that m(x_{p},y_{p})=i at the position that uses n_{i }in the positive watermark and m(x_{n},y_{n})=−i at the position that uses n_{i }in the negative watermark.
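One plausible reading of the two generators and the mapping is sketched below. The w × JND scaling is inferred from the detector's normalization of the coefficient differences; `make_watermark` and its signature are illustrative names, not the patent's implementation:

```python
import numpy as np

def make_watermark(N, H, JND, positions, w, sign):
    """Build one watermark over the given coefficient positions.

    sign=+1 pushes each coefficient's magnitude up (positive watermark);
    sign=-1 pushes it down (negative watermark).  Values of N are consumed
    from the ends of the sorted sequence: n_bottom (largest unused) when the
    added term should be positive, n_top (most negative unused) otherwise.
    """
    order = np.argsort(N)          # indices of N from most negative to most positive
    top, bottom = 0, len(N) - 1
    M = {}                         # (x, y) -> watermark value
    mapping = {}                   # (x, y) -> 1-based index into N
    for (x, y) in positions:
        want_positive = (H[x, y] >= 0) == (sign > 0)
        if want_positive:
            i = order[bottom]; bottom -= 1     # n_bottom
        else:
            i = order[top]; top += 1           # n_top
        M[(x, y)] = w * JND[x, y] * N[i]
        mapping[(x, y)] = i + 1
    return M, mapping

# toy demo: each added value reinforces (sign=+1) or opposes (sign=-1) H's sign
N = np.array([-2.0, -1.0, 1.0, 2.0])
H = np.array([[5.0, -5.0], [6.0, -6.0]])
JND = np.ones((2, 2))
M, mapping = make_watermark(N, H, JND, [(0, 0), (0, 1), (1, 0), (1, 1)], 1.0, +1)
print(all(M[p] * H[p] > 0 for p in M))   # True: all magnitudes would increase
```

Storing `mapping` (signed ±i in the patent's m(x,y)) is what later lets the detector put each recovered difference back at the right index of the watermark sequence.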

Referring still to FIG. 2A, wavelet modulator **215** accepts positive and negative watermarks M^{(p)}(x_{p},y_{p}) and M^{(n)}(x_{n},y_{n}) and their positions. For each position to be modified by the positive watermark, wavelet modulator **215** computes

H^{(m)}(x_{p},y_{p}) = H(x_{p},y_{p}) + M^{(p)}(x_{p},y_{p})

and for each position to be modified by the negative watermark, it computes

H^{(m)}(x_{n},y_{n}) = H(x_{n},y_{n}) + M^{(n)}(x_{n},y_{n})

and leaves the remaining coefficients unchanged:

H^{(m)}(x,y) = H(x,y).

Referring now to FIG. 2B, detector **180** accepts attacked watermarked image I^{*}(x,y) **170**. Detector **180** also receives original image I(x,y) **100**, mapping m(x,y) **122** and watermark sequence N **120**. Detector **180** applies wavelet transform **260** to original image I(x,y) **100** to compute wavelet coefficients H(x,y) **262** and applies wavelet transform **264** to attacked watermarked image I^{*}(x,y) **170** to compute wavelet coefficients H^{*}(x,y) **266**. Wavelet transforms **260** and **264** perform the same function as wavelet transform **210** (FIG. 2A) in encoder **110**. Detector **180** then computes a difference between these sets of wavelet coefficients at module **270** by computing

DIFF(x,y) = (H^{*}(x,y) − H(x,y))/(JND(x,y) × w)

for each position in the transforms.

Detector **180** passes the computed difference to a positive watermark estimator **280** and a negative watermark estimator **284**. Positive watermark estimator **280** accepts mapping m(x,y) **122** to select the positions at which the watermark sequence was encoded as a positive watermark and determines N^{(p)*}, an estimate of watermark sequence **120** as encoded in the positive watermark. Specifically, n^{(p)*}_{i} = DIFF(x_{p},y_{p}) for the position (x_{p},y_{p}) that satisfies m(x_{p},y_{p})=i. Similarly, negative watermark estimator **284** computes N^{(n)*} such that n^{(n)*}_{i} = DIFF(x_{n},y_{n}) for the position (x_{n},y_{n}) that satisfies m(x_{n},y_{n})=−i.

Detector **180** computes a similarity between each of N^{(p)*} and N^{(n)*} and watermark sequence **120** to produce scalar similarities Sim^{(p)} and Sim^{(n)}, respectively. In particular, detector **180** computes

Sim^{(p)} = N·N^{(p)*}/sqrt(N^{(p)*}·N^{(p)*})

and

Sim^{(n)} = N·N^{(n)*}/sqrt(N^{(n)*}·N^{(n)*})

where · signifies an inner product between the corresponding sequences. Then detector **180** takes the maximum of Sim^{(p) }and Sim^{(n) }to determine Sim **190**. The larger the value of Sim **190**, the more certain that its input is indeed a modified version of watermarked image I^{(m)}(x,y) **130**.
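The detection path, from DIFF through the two similarity scores to Sim, can be sketched end to end (hypothetical placements and a mild additive-noise "attack"; the sign bookkeeping of the real generators is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
k = 32
N = rng.normal(size=k)                        # watermark sequence, n_i ~ N(0,1)
H = rng.normal(scale=4.0, size=(16, 16))      # stand-in original wavelet coefficients
JND = np.ones_like(H)                         # hypothetical flat JND map
w = 0.5                                       # scale factor shared with the detector

# hypothetical placements: position -> 1-based index into N
cells = [(r, c) for r in range(16) for c in range(16)]
pos_map = {cells[j]: j + 1 for j in range(k)}
neg_map = {cells[k + j]: j + 1 for j in range(k)}

# embed (each mapped position carries its value of N, scaled by w * JND)
H_m = H.copy()
for (x, y), i in list(pos_map.items()) + list(neg_map.items()):
    H_m[x, y] += w * JND[x, y] * N[i - 1]

H_att = H_m + rng.normal(scale=0.05, size=H.shape)   # mild additive "attack"

DIFF = (H_att - H) / (JND * w)

def estimate(mapping):
    """Reorder the differences at mapped positions back into sequence order."""
    est = np.empty(k)
    for (x, y), i in mapping.items():
        est[i - 1] = DIFF[x, y]
    return est

def similarity(est):
    return float(N @ est / np.sqrt(est @ est))

Sim = max(similarity(estimate(pos_map)), similarity(estimate(neg_map)))
print(Sim > 0.8 * np.linalg.norm(N))    # True: the watermark is detected
```

With no attack at all, both similarities would equal sqrt(N·N); the max over the two scores is what lets one watermark compensate when an attack destroys the other.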

In an alternative embodiment, detector **180** performs a relocation step prior to computing the difference between the wavelet coefficients of the original image and the attacked watermarked image. The relocation step involves the detector using H^{(m)}(x,y), the wavelet coefficients of the watermarked image (prior to attack), which it either receives from the encoder or alternatively that it recomputes from the original image it receives from the encoder. The coefficients of H^{(m)}(x,y) and H^{*}(x,y) are each sorted by magnitude and the coefficients of H^{*}(x,y) are relocated such that the k^{th }largest coefficient of H^{*}(x,y) is moved to the position of the k^{th }largest coefficient of H^{(m)}(x,y) for all positions in the transformed images.
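The relocation step can be sketched with magnitude-rank matching (a sketch; here the coefficients H^{(m)} of the unattacked watermarked image are assumed available at the detector, as the text allows):

```python
import numpy as np

def relocate(H_m, H_star):
    """Place the k-th largest-magnitude coefficient of H* at the position of
    the k-th largest-magnitude coefficient of H^(m), for every rank k."""
    out = np.empty_like(H_star)
    order_m = np.argsort(np.abs(H_m), axis=None)     # positions ranked by magnitude
    order_s = np.argsort(np.abs(H_star), axis=None)
    out.flat[order_m] = H_star.flat[order_s]
    return out

# a position-scrambling attack is undone exactly when the values are intact
rng = np.random.default_rng(3)
H_m = rng.normal(size=(8, 8))
H_att = rng.permutation(H_m.flatten()).reshape(8, 8)
print(np.allclose(relocate(H_m, H_att), H_m))   # True
```

For real attacks the values themselves are also perturbed, so relocation only improves the alignment of the differences rather than restoring them exactly.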

In experiments using the above approaches, a tiger image of size 128×128 was used for hiding watermarks. The length k of a hidden watermark sequence N depends on the original image and the wavelet-based visual model which determined the JND values for the wavelet coefficients. Using the tiger image, a total of 2714 wavelet coefficients of the possible 16,384 (=128^{2}) were selected by coefficient selector **230** (FIG. **2**A). The PSNR of the watermarked image was 34.5 dB. 32 different attacks **150** (FIG. 1) were used to test the watermarking approach. The results show that typically one of Sim^{(p)} or Sim^{(n)} is significantly greater than the other, indicating that one watermark may be destroyed while the other survives well. Some attacks severely damaged the watermarked image, but the embedded watermarks could still be extracted with a high detector response. Also, the detector response was generally increased by using the relocation step described above, as compared to not performing relocation.

It is to be understood that while the invention has been described in conjunction with the detailed description thereof, the foregoing description is intended only to illustrate particular embodiments of the invention and not to limit the scope of the invention, which is defined by the scope of the appended claims. Other aspects, advantages, and modifications are within the scope of the following claims.

Patent Citations

Cited Patent | Filing date | Publication date | Applicant | Title |
---|---|---|---|---|

US5809160 * | Nov 12, 1997 | Sep 15, 1998 | Digimarc Corporation | Method for encoding auxiliary data within a source signal |

US5905819 * | Feb 5, 1996 | May 18, 1999 | Eastman Kodak Company | Method and apparatus for hiding one image or pattern within another |

US5930369 * | Sep 10, 1997 | Jul 27, 1999 | Nec Research Institute, Inc. | Secure spread spectrum watermarking for multimedia data |

US5991426 * | Dec 18, 1998 | Nov 23, 1999 | Signafy, Inc. | Field-based watermark insertion and detection |

US6041143 * | Apr 14, 1998 | Mar 21, 2000 | Teralogic Incorporated | Multiresolution compressed image management system and method |

US6108434 * | Sep 12, 1997 | Aug 22, 2000 | Signafy, Inc. | Counteracting geometric distortions for DCT based watermarking |

US6154571 * | Jul 17, 1998 | Nov 28, 2000 | Nec Research Institute, Inc. | Robust digital watermarking |

US6240121 * | Jul 9, 1998 | May 29, 2001 | Matsushita Electric Industrial Co., Ltd. | Apparatus and method for watermark data insertion and apparatus and method for watermark data detection |

US6359998 * | Apr 23, 1998 | Mar 19, 2002 | 3Com Corporation | Method and apparatus for wavelet-based digital watermarking |

US6373974 * | Sep 30, 1998 | Apr 16, 2002 | Sharp Laboratories Of America, Inc. | Method for extracting multiresolution watermark images to determine rightful ownership |

US6385329 * | Jul 19, 2000 | May 7, 2002 | Digimarc Corporation | Wavelet domain watermarks |

US6587821 * | Nov 17, 1999 | Jul 1, 2003 | Digimarc Corp | Methods for decoding watermark data from audio, and controlling audio devices in accordance therewith |

US6625295 * | Sep 12, 1997 | Sep 23, 2003 | Purdue Research Foundation | Authentication of signals using watermarks |

WO1999036876A2 | Jan 20, 1999 | Jul 22, 1999 | Digimarc Corporation | Multiple watermarking techniques |

Non-Patent Citations

1. Antonini et al., "Image Coding Using Wavelet Transform", IEEE Transactions on Image Processing, vol. 1, Apr. 1992, pp. 205-220.
2. Barni et al., "Copyright protection of digital images by embedded . . . ", Image and Vision Computing, vol. 16, 1998, pp. 897-906.
3. Chun-Shien Lu et al., "Cocktail Watermarking on Images", Lecture Notes in Computer Science, Springer-Verlag, XP-002230493.
4. Cox et al., "Secure Spread Spectrum Watermarking for Multimedia", IEEE Transactions on Image Processing, vol. 6, Dec. 1997, pp. 1673-1686.
5. Craver et al., "Resolving Rightful Ownerships with Invisible Watermarking . . . ", IEEE Journal on Selected Areas in Communications, vol. 16, May 1998, pp. 573-586.
6. Delaigle et al., "Watermarking algorithm based on a human visual model", Signal Processing, vol. 66, 1998, pp. 319-335.
7. Hartung et al., "Spread Spectrum Watermarking: Malicious Attacks and Counterattacks", Telecommunications Laboratory, University of Erlangen-Nuremberg, Germany, 1999.
8. Hartung et al., "Watermarking of uncompressed and compressed video", Signal Processing, vol. 66, 1998, pp. 283-301.
9. J. Fridrich, "Combining low-frequency and spread spectrum watermarking", Center for Intelligent Systems, SUNY Binghamton, NY, and Mission Research Corporation, Albuquerque, NM.
10. Kundur et al., "Digital Watermarking Using Multiresolution Wavelet Decomposition", Dept. of Electrical and Computer Engineering, University of Toronto, Canada.
11. Lu et al., "A New Watermarking Technique for Multimedia Protection", Institute of Information Science, Academia Sinica, Taipei, Taiwan.
12. Petitcolas et al., "Attacks on copyright marking systems", Second Workshop on Information Hiding, Oregon, USA, Apr. 15-17, 1998, pp. 1-21.
13. Podilchuk et al., "Image-Adaptive Watermarking Using Visual Models", IEEE Journal on Selected Areas in Communications, vol. 16, May 1998, pp. 525-539.
14. Said et al., "A New, Fast, and Efficient Image Codec Based on Set Partitioning . . . ", IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, Jun. 1996, pp. 243-249.
15. Swanson et al., "Multiresolution Scene-Based Video Watermarking Using Perceptual Models", IEEE Journal on Selected Areas in Communications, vol. 16, May 1998, pp. 540-550.
16. Watson et al., "Visibility of Wavelet Quantization Noise", IEEE Transactions on Image Processing, vol. 6, Aug. 1997, pp. 1164-1174.

Referenced by

Citing Patent | Filing date | Publication date | Applicant | Title |
---|---|---|---|---|

US7391880 * | Jul 3, 2003 | Jun 24, 2008 | Digimarc Corporation | Color adaptive watermarking |

US7693300 * | Jun 24, 2008 | Apr 6, 2010 | Digimarc Corporation | Color image or video processing |

US8477990 | Mar 7, 2011 | Jul 2, 2013 | Digimarc Corporation | Reducing watermark perceptibility and extending detection distortion tolerances |

US8873797 | Jul 2, 2013 | Oct 28, 2014 | Digimarc Corporation | Reducing watermark perceptibility and extending detection distortion tolerances |

US8971567 | May 14, 2013 | Mar 3, 2015 | Digimarc Corporation | Reducing watermark perceptibility and extending detection distortion tolerances |

US9311687 | Oct 28, 2014 | Apr 12, 2016 | Digimarc Corporation | Reducing watermark perceptibility and extending detection distortion tolerances |

US20030068068 * | Sep 28, 2001 | Apr 10, 2003 | Nam-Deuk Kim | Content based digital watermarking using wavelet based directionality measures |

US20040125983 * | Jul 3, 2003 | Jul 1, 2004 | Reed Alastair M. | Color adaptive watermarking |

US20090003648 * | Jun 24, 2008 | Jan 1, 2009 | Reed Alastair M | Color Image or Video Processing |

US20110216936 * | Mar 7, 2011 | Sep 8, 2011 | Reed Alastair M | Reducing Watermark Perceptibility and Extending Detection Distortion Tolerances |

Classifications

U.S. Classification | 382/100, 375/E07.04, 375/E07.089, 713/176 |

International Classification | H04N7/26, G06T1/00, H04N1/32 |

Cooperative Classification | H04N19/467, H04N19/63, H04N1/32304, H04N21/23892, G06T2201/0081, G06T1/0071, G06T2201/0052, H04N1/3217, H04N1/32144, H04N1/32187, H04N21/8358 |

European Classification | H04N21/2389B, H04N21/8358, H04N1/32C19B2S, H04N1/32C19B6D, H04N1/32C19B2D, H04N7/26H30, H04N1/32C19, G06T1/00W6M, H04N7/26E10 |

Legal Events

Date | Code | Event | Description |
---|---|---|---|

Apr 28, 2008 | AS | Assignment | Owner name: TRANSPACIFIC INNOVATION, LLC, DELAWARE. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: ACADEMIA SINICA; REEL/FRAME: 020866/0153. Effective date: 20080301 |

Aug 15, 2008 | FPAY | Fee payment | Year of fee payment: 4 |

Aug 15, 2008 | SULP | Surcharge for late payment | |

Sep 23, 2011 | FPAY | Fee payment | Year of fee payment: 8 |

Apr 15, 2016 | REMI | Maintenance fee reminder mailed | |

Sep 7, 2016 | LAPS | Lapse for failure to pay maintenance fees | |

Oct 25, 2016 | FP | Expired due to failure to pay maintenance fee | Effective date: 20160907 |
