|Publication number||US7398000 B2|
|Application number||US 10/108,000|
|Publication date||Jul 8, 2008|
|Filing date||Mar 26, 2002|
|Priority date||Mar 26, 2002|
|Also published as||US8244101, US20030185541, US20080267584|
|Original Assignee||Microsoft Corporation|
This invention relates to television entertainment architectures and, in particular, to methods, program products, and data structures for identifying a segment of a digital video record.
Traditionally, during television programs, viewers have been limited in the ability to control the content of a program being viewed. For example, during a television commercial segment or any other segment that may not be of immediate interest, viewers have been either forced to view the entire segment or change the channel and wait for the segment to conclude. The advent of video cassette recorders (VCRs) allowed viewers greater control over the content of segments when the program was pre-recorded. In recent years some VCR systems have included a relatively unsophisticated one-touch commercial skip feature. The feature consists of little more than a mechanism for automatically fast-forwarding the playback of video data by thirty seconds. By pushing a single button, the VCR automatically advances the video tape by the estimated length of an average commercial segment. While this feature introduces the convenience of a one-touch skip, the length of the skip does not always correspond with the length of a segment that is not of immediate interest to the viewer and is particularly ill-suited for identifying many program transitions that do not have predictable durations.
The advent of digital video formats has allowed for many conveniences not considered practical for a traditional VCR system. Such digital video formats, in particular the Moving Pictures Experts Group (MPEG) and other video compression formats, allow for more sophisticated segment skips. For example, a viewer using a digital video data system that records digital video data in a digital video record on a hard disk or another mass storage device may skip or replay to predetermined scenes, without the time consuming fast forward or rewind of a video tape.
Although digital video systems can more conveniently jump from one portion of a video program to another without having to physically advance a tape, conventional digital video data systems have generally been limited to advancing between video segments at predetermined increments, such as thirty-second intervals. Thus, viewers of recorded video data, whether using VCR systems or digital video data systems, have generally been constrained to advancing the video playback in certain, restrictive ways. For instance, the viewer can cause the playback to be skipped ahead by thirty seconds. Alternatively, viewers who wish to advance the playback of a video program past one or more commercials to the beginning of the next non-commercial segment have been forced to place the VCR or digital video data system in a fast-forward mode and then visually identify, by trial and error, the position that represents a segment transition. Accordingly, for entertainment systems that are capable of playing back a stored video program received from a television broadcast system, there is a need for a technique to identify the presence of one or more commercials in the digital video record so that any playback of the digital video record can make use of this identified presence.
An entertainment system is disclosed that is capable of playing back a stored digital video record that includes frames of video data received from a television broadcast system. The system identifies commercials in the digital video record by approximating ranges of frames that are either commercials or non-commercials, approximating frames that are either a beginning or an ending of a commercial or a non-commercial, and associating as a commercial two of the approximated frames and one or more of the approximated ranges there between, until a predetermined percentage of the playback time of the digital video record is taken up by the associated commercials.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
The following discussion is directed to an entertainment system that is capable of playing back a stored digital video record that includes frames of video data received from a television broadcast system such as interactive TV networks, cable networks that utilize electronic program guides, and Web-enabled TV networks. The entertainment device can be a client device and can range from full-resource clients with substantial memory and processing resources, such as TV-enabled personal computers and TV recorders equipped with hard-disks, to low-resource clients with limited memory and/or processing resources, such as traditional set-top boxes.
The entertainment systems can be used, in accordance with an embodiment of the present invention, to skip or not skip playback of a portion of a digital video record, such as a commercial, which is in a recorded television (TV) Network broadcast program. Digital video formats are associated with video data compression techniques that compress video images into manageable volumes of data for processing the images on a microprocessor or home entertainment system. In a video compression format such as MPEG, the data encoding each frame contains quantifiable parameters defining various properties of the frame. In a typical video selection, such as a television program, the quantifiable parameters defining the frames of different portions of a digital video record vary with time. For example, the parameters of commercial portions of a digital video record are typically different from those of a television program portion of the digital video record. By analyzing the parameters it is possible to identify both transitions from one portion to a different and distinguishable portion, as well as a range of frames that make up the different portions of the digital video record.
The present invention identifies both transitions and ranges there between so as to allow for skipping, or not skipping, replay to a selected range between transitions. The positions in the video data that are likely candidates for being a range of frames that make up a commercial portion or a non-commercial portion of the digital video record are identified based on the observation of parameters in the video data. In order to approximate any such range of frames, the video data system can use a variety of parameters and techniques to identify “primary features” present in the digital video record. Additionally, the positions in the video data that are likely candidates for being transitions in and out of commercial or non-commercial portions of the digital video record are approximated based on the observation of various parameters in the video data. In order to identify such candidates for transitions, the video data system can use a variety of parameters and techniques to identify “secondary features” present in the digital video record.
The approximated frames, determined by one or more secondary features, and the approximated ranges, determined by one or more primary features, are subjected to various processes through which they are associated into commercials C1 through C3, seen in
The predetermined percentage of the playback time of the digital video record taken up by the associated commercials can be selected by a user of the entertainment system using a user interface and executable instructions executed by the entertainment system. Alternatively, the range can be preset in the entertainment system according to the type of programming likely for each channel, in a given television market, that the entertainment system is capable of receiving. As such, some channels may not have commercials while other channels may have more commercials than others. As a general rule, a default range for typical television markets is about 20% to about 30% of a typical hour of television programming in the United States.
Given the foregoing, the approximations are made in the digital video record by using respective thresholds for the primary and secondary features such that the digital video record can be separated into commercials C1, C2, and C3. The thresholds for the primary and secondary features are for segmenting the digital video record into commercials and non-commercials on the basis of the type of feature. Once obtained, the thresholds are normalized by subtracting the respective threshold values, so that the new threshold for each primary feature is at a value of zero. As such, a resultant positive value is indicative of a commercial and a resultant negative value is indicative of a non-commercial. The normalized primary features are then combined on a weight-per-primary-feature basis, and an overall threshold is used to perform another segmentation based on the combined compound primary feature. The threshold on the combined compound primary feature is then adjusted to arrive at a predetermined percentage of the digital video record (e.g. 20% to 30%). The weight that is used for each primary feature can be selected based upon the accuracy that each primary feature exhibits in approximating a range of frames that is either a commercial or a non-commercial. Similarly, the weight that is used for each secondary feature can be selected based upon the accuracy that each secondary feature exhibits in approximating frames that are either a beginning or an ending of a commercial or a non-commercial. The weights and thresholds are useful in obtaining greater certainty in the approximation of the transition frames and the ranges of frames so as to associate the same into commercials.
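As a rough illustration of the normalization and weighting just described, the following Python sketch combines per-feature scores into a compound value and then selects an overall threshold that yields a target commercial percentage. The function names, the linear weighting, and the rank-based threshold selection are illustrative assumptions, not the patent's specified implementation.

```python
def combine_primary_features(feature_values, thresholds, weights):
    """Normalize each primary feature by subtracting its threshold
    (so zero becomes the new per-feature threshold), then combine
    the normalized values on a weight-per-feature basis.  A positive
    result suggests a commercial; a negative one, a non-commercial."""
    combined = 0.0
    for value, threshold, weight in zip(feature_values, thresholds, weights):
        combined += weight * (value - threshold)
    return combined


def adjust_overall_threshold(combined_scores, target_fraction):
    """Pick the overall threshold on the combined compound feature so
    that roughly target_fraction of the scores fall at or above it,
    i.e. the associated commercials take up the target share of the
    record's playback time (e.g. 0.20 to 0.30)."""
    ranked = sorted(combined_scores, reverse=True)
    cutoff_index = max(0, int(len(ranked) * target_fraction) - 1)
    return ranked[cutoff_index]
```

With two features weighted 2:1, a value slightly over its threshold on the heavier feature outweighs an equal shortfall on the lighter one, which is the intended effect of weighting by per-feature accuracy.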
Different primary feature techniques, only some of which are given herein by way of example and not by way of limitation, can be used to approximate ranges of frames in a digital video record in which a commercial or a non-commercial occurs, where some primary feature techniques are better or more accurate than others at approximating ranges of frames for commercial or non-commercial segments. Similarly, different secondary feature techniques, only some of which are given herein by way of example and not by way of limitation, can be used to approximate a frame that is a transition in or out of a commercial or non-commercial, where some secondary feature techniques are better than others at approximating transition frames. Accordingly, the values resulting from each respective primary and secondary feature can be thresholded, normalized, and then weighted according to the accuracy and reliability thereof in the range and frame approximation process, to thereby arrive at and improve the selection of commercial and non-commercial segments in a digital video record.
Those secondary features that are more reliable can be more heavily weighted than less reliable secondary features so as to decide with greater certainty the approximated frames at which there is most likely a transition in or out of a commercial or a non-commercial. Alternatively, more and different types of secondary features can be used and can be weighted accordingly to then decide with still greater certainty on the approximated transition frames.
After approximations, which may be weighted as described above, the digital video record has remaining non-commercial segments seen in
During playback of the digital video record, the entertainment system can be preset, or the viewer can issue a request on demand, to skip any playback of the associated commercials therein to the next non-commercial portion. In response to the skip request, the system then automatically skips the playback to the next non-commercial portion that has been identified. As such, the playback of the video data can be skipped to a next non-commercial portion that may be temporally displaced from the current playback position by an arbitrary amount of time, rather than by a predetermined amount of time, such as thirty seconds. Moreover, the viewer can rapidly and conveniently skip through a commercial portion of the recorded video data without being required to place the video data system in a fast-forward mode, view the remaining portion of the video data in the fast-forward mode, and then resume the normal speed playback mode when the current commercial portion is completed, as has been required in many conventional systems. Alternatively, during playback of the digital video record, the entertainment system can be preset, or the viewer can issue a request on demand, to skip any playback of the associated non-commercials. The system then automatically plays back only the commercial portions by skipping the identified non-commercial portions.
One embodiment of the present invention is depicted by a process 200 seen in
Secondary features in the digital video record are located at block 212 of process 200. Secondary features occur in the stored frames of video data and can be used to identify a respective one or more transition frames in or out of a commercial or a non-commercial. Each transition frame can be approximated by a predetermined secondary threshold that is selected on the basis of the particular secondary feature. The approximated frames can be flagged for later use. At block 214, the starting frame and the ending frame of each segment are then re-identified with one of the approximated transition frames. This re-identification is based upon the predetermined secondary threshold and the chronological location of each transition frame with respect to the chronological location of the starting frame and the ending frame of the respective segment. Each segment can then be flagged at block 216 of process 200. The flags referred to above, or similar indicators, can be organized into a data structure that is stored in the entertainment system. The data structure can be used during a playback of the stored video data of the digital video record in order to skip commercials during a playback of the corresponding digital video record. Alternatively, the data structure can be used during a playback of the stored video data of the digital video record in order to skip only non-commercials and to play only commercials.
Many different attributes occurring in video data from Network broadcast television programming can be used as primary features. Any such primary feature now known, or yet to be understood, developed, or implemented, is intended to be used and is considered to be within the scope of one or more embodiments of the present invention. By way of example, and not by way of limitation, several primary features are discussed below.
One of the primary features that can occur in the digital video record is a substantially repeated portion of video data, which is typical of commercials that are played more than once in a television program. As such, this primary feature can be used to identify substantially identical sets of frames by comparing multiple sets of contiguous frames in the stored frames of video data to other such sets in the digital video record, or to sets of contiguous frames of video data in a pre-existing database of known commercials. Here an assumption is made that a viewer would not wish to watch identical segments in a digital video record, such as repeated commercials. Each substantially identical set of stored frames of video data is approximated as being a range, where the approximated range is approximated using the predetermined primary threshold selected for this particular primary feature. By way of illustration of a threshold for the primary feature of duplicate ranges, each range can be flagged to have a value that is representative of how close the range is to a typical length of a commercial (e.g. 60, 30, 15, or 10 seconds). As such, ranges that are duplicates and that have a playback duration of about 60, 30, 15, or 10 seconds could be flagged to have the highest value. Conversely, ranges that are duplicates but that have a playback duration that deviates from these typical commercial durations could be flagged with a lower value, depending on the degree of deviation. It may be desirable, depending upon the programming of the television market and/or television channel, to identify ranges that are within a particular duration range, such as more than 5 seconds and not more than 2 minutes, to ensure that ranges that are likely to be commercials will be flagged.
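The duration-based flagging just described might be sketched as follows; the scoring curve and the cutoff window are hypothetical, chosen only so that ranges at exactly a typical commercial length get the highest value and the value decays with deviation.

```python
# Typical US spot lengths named in the text, in seconds.
TYPICAL_COMMERCIAL_SECONDS = (60, 30, 15, 10)


def duplicate_range_score(duration_seconds,
                          min_duration=5.0, max_duration=120.0):
    """Flag a duplicated range with a value reflecting how close its
    playback duration is to a typical commercial length.  Ranges
    outside the plausible window (more than 5 s, not more than 2 min)
    score zero; an exact typical length scores 1.0, and the score
    decays as the duration deviates from the nearest typical length."""
    if not (min_duration < duration_seconds <= max_duration):
        return 0.0
    deviation = min(abs(duration_seconds - t)
                    for t in TYPICAL_COMMERCIAL_SECONDS)
    return 1.0 / (1.0 + deviation)
```

A 30-second duplicate scores 1.0, a 33-second one scores 0.25, and a 3-second one is excluded by the duration window.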
When the length of time between adjacent substantially identical sets, or between known commercials, is not more than a predetermined time length, the frames there between can also be approximated to be a range, such as where a commercial is between two previously identified commercials. Here, the starting and ending frames of the range can be, respectively, the ending frame of the chronologically first of the substantially identical sets and the starting frame of the chronologically last of the substantially identical sets.
Another way of identifying commercials in a digital video record is the primary feature of specific words or non-word indicators in closed captioning data. This primary feature can be located by comparing closed caption text corresponding to stored frames of video data to a pre-existing database of known commercial words, phrases, and non-word indicators. A comparison can be done to identify a match there between. For each match, a starting and an ending frame of a range can be set around the known commercial words and phrases as set by a particular predetermined primary margin. For example, a match might be found in a closed captioning stream of text that is a trade name or trademark, or on a phrase that is a telephone number that does not have a ‘555’ prefix. Other phrases can also be in the database, such as “operators are standing by”. The textual close proximity of well known trademarks and trade names can also be used in setting a range. Symbols or other non-word indicators can also be used to set a range.
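A minimal sketch of matching closed-caption text against a phrase database, together with the non-‘555’ telephone-number heuristic mentioned above. The seed phrases, function names, and phone-number pattern are illustrative assumptions; in a real system the database and the margin handling would be considerably richer.

```python
import re

# Hypothetical seed database of known commercial phrases.
COMMERCIAL_PHRASES = {
    "operators are standing by",
    "call now",
}


def find_caption_matches(caption_text, phrases=COMMERCIAL_PHRASES):
    """Return (offset, phrase) pairs for known commercial phrases in a
    closed-caption stream; a range of frames would then be set around
    each match using the predetermined primary margin."""
    text = caption_text.lower()
    hits = []
    for phrase in phrases:
        start = text.find(phrase)
        while start != -1:
            hits.append((start, phrase))
            start = text.find(phrase, start + 1)
    return sorted(hits)


def has_non555_phone(caption_text):
    """Telephone numbers whose exchange is not '555' suggest a real
    advertiser call-to-action rather than fictional program dialogue."""
    for match in re.finditer(r"\b(\d{3})-(\d{3})-\d{4}\b", caption_text):
        if match.group(2) != "555":
            return True
    return False
```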
Another way of identifying repeated video data in a digital video record that is indicative of a commercial is the primary feature of repeated closed captioning data. This primary feature can be located by comparing closed caption text corresponding to stored frames of video data to other such data in the digital video record or to corresponding sets of frames of video data in a pre-existing database of known commercials. This text comparison is done to identify a match there between. For each matching frame, the starting and ending frames of the respective range can be set to be separated from the respective matching frame by a particular predetermined primary margin. Matching a string of contiguous words and non-word control data in closed captioning from showing to showing of the same commercial is one such technique. For each range having a corresponding matching frame, the starting and ending frames of the range can be re-set, respectively, to the chronologically first and last starting and ending frames for all of the matching frames within the range. Another way to achieve the same result is to create one range that contains all the frames involved in the matched closed captioning data by setting the starting frame of the range to be the frame that is before the chronologically first frame of the match by the predetermined primary margin and setting the ending frame of the range to be the frame that is after the chronologically last frame of the match by the predetermined primary margin.
An entertainment system can be used to identify commercials in a television broadcast even when the entertainment system is not being used by a user to record or play back a digital video record. When the entertainment system is otherwise idle, one or more tuners in the entertainment system can be used to monitor one or more channels, and those channels can be analyzed for commercials. A database of known commercials can then be built up by and stored in the entertainment system for future use in identification of those commercials in a digital video record. By way of example, a database of commercials can be built by the entertainment system by use of its one or more tuners to monitor one channel with each tuner. This monitoring examines strings of text in the closed captioned data being broadcast. When a string of closed captioned text in one range of frames matches that of another, this indicates a commercial. When there are two sets of closed captioning data that each match another set of closed captioning data, and these two sets are separated in time by a duration typical of a commercial, the separating interval will also likely be a commercial. As such, all or a portion of the closed caption text of the likely commercial is then stored in the database. When the closed captioned text of a digital video record is compared to this database, matches to the database can be found and the match in the digital video record can be flagged. The flagged match can then be used for a variety of purposes, such as to prevent the showing of the same commercial in the digital video record upon playback, or to show only the commercials in the digital video record upon playback.
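The idle-time database building might be sketched like this, treating any caption string seen at two or more different broadcast times as a likely commercial. The data shapes (timestamped caption segments, a set of stored strings) are assumptions for illustration.

```python
from collections import defaultdict


def build_commercial_caption_db(caption_segments):
    """Sketch of idle-time monitoring: caption strings observed at
    more than one broadcast time are treated as likely commercials
    and stored for later matching against a digital video record.

    caption_segments: iterable of (start_time_seconds, caption_text).
    Returns the set of caption strings flagged as likely commercials."""
    seen = defaultdict(list)
    for start_time, text in caption_segments:
        seen[text].append(start_time)
    # Any caption string broadcast at two different times is flagged.
    return {text for text, times in seen.items() if len(times) > 1}
```

A fuller version would also flag the interval between two such matches when its duration is typical of a commercial, as the text describes.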
Another useful primary feature involves an evaluation of one or both of audio volume and active audio frequency bandwidth for changes. Either of these may be higher or lower for a commercial. As seen in
The quality of video data in a digital video record can be a useful primary feature. Some commercials are produced with higher quality than some non-commercials, such as rerun television programs. To detect an increase in quality, the maximum sharpness can be derived by applying an edge detection filter (EDF) over an entire frame in one or more frames of the digital video record. The output of the EDF, which gives a metric for sharpness, will have a magnitude at any given position in the frame. As such, the maximum value for all of the points in the frame can be found, which is the maximum effective resolution and is a sharpness measure. Accordingly, video sharpness as a whole can be used as a primary feature.
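A toy version of the maximum-sharpness measure: the text names an edge detection filter (EDF) without specifying one, so this sketch substitutes a simple horizontal/vertical gradient and takes the maximum response over all positions in a frame of luminance values.

```python
def max_sharpness(frame):
    """Maximum edge-filter response over a frame given as a list of
    rows of luminance values.  A plain first-difference gradient
    stands in for the EDF; the maximum magnitude over all positions
    serves as the sharpness measure for the frame."""
    best = 0
    rows, cols = len(frame), len(frame[0])
    for y in range(rows):
        for x in range(cols):
            gx = frame[y][x + 1] - frame[y][x] if x + 1 < cols else 0
            gy = frame[y + 1][x] - frame[y][x] if y + 1 < rows else 0
            best = max(best, abs(gx) + abs(gy))
    return best
```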
The amount of video data in the digital video record can be quite large. An entertainment system can be designed to process and analyze large amounts of video data. Alternatively, as is common to consumer electronics, the entertainment system may be designed to process a digital video record for recording and playback as efficiently as possible, due to lack of processing power or other demands on the entertainment system. It may be desirable to reduce the amount of video data in the digital video record that is processed and analyzed in order to identify primary features.
The visible area of the video data can be cropped as a form of data reduction. The outer periphery/edges of the video portion of the data are cropped, and an algorithm processes only the visible area of the video data. This is done because there is often more variation in the edges of video data than there is in the visible area. The cropping of the data can be done at the pixel resolution level. The edges are substantially dissimilar, especially at the top, where closed captioning data in an analog representation of the NTSC signal can leak down into the encoded MPEG signal. The cropping of the visible area ensures that the encoded MPEG signal is examined where practical.
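The cropping step could be sketched as below. The margin width is an assumption; in practice the top margin might be made larger specifically to discard closed-caption leakage.

```python
def crop_visible_area(frame, margin=8):
    """Drop the outer periphery of a frame (given as rows of pixel
    values) so analysis runs only on the stable visible area.  The
    top rows in particular can carry closed-caption leakage from the
    analog NTSC signal; the margin width here is illustrative."""
    return [row[margin:len(row) - margin]
            for row in frame[margin:len(frame) - margin]]
```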
The entertainment system can discard information in the video data in the digital video record on a per-pixel basis as another way to reduce the amount of video data to be processed. For instance, each pixel can be processed to further reduce the amount of data under analysis. In the case of Red-Green-Blue (RGB) video, each pixel has a Red component, a Green component, and a Blue component. The entertainment system can process each RGB pixel and obtain a single value per pixel that represents the brightness of the pixel. Alternatively, the entertainment system could process each pixel and obtain a value that represents the magnitude of the purple color of each pixel. Also, the entertainment system could discard the Green and Blue components and process only the magnitude of the Red of each pixel. Digital video systems generally operate in the YUV color space, rather than RGB, where the Y is the luminance of the pixel, and the U and V together represent the color of the pixel. In this case, the entertainment system could discard color on a per-pixel basis and process only the luminance information. Per-pixel processing, as described, can be performed by the entertainment system at any stage of other video data reduction processes, such as before or after a scale down process performed upon the video data of a digital video record, or before or after a TWA process, etc. The selection of which values to compute on a per-pixel basis can be driven by the desirability of the per-pixel values to be stable from showing to showing of the same video frame, but somewhat unstable from frame to frame as time progresses.
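The per-pixel reductions described above can be sketched in a few lines. The BT.601 luma weights used for the RGB case are a common convention, not one specified by the text.

```python
def rgb_to_brightness(pixels):
    """Collapse each RGB pixel to a single brightness value
    (ITU-R BT.601 luma weights), discarding color per pixel."""
    return [0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in pixels]


def yuv_keep_luminance(pixels):
    """For YUV video, simply discard the U and V color components
    and keep the Y (luminance) value of each pixel."""
    return [y for (y, u, v) in pixels]
```

The YUV path is the cheaper of the two, since the luminance already exists as a separate component and no arithmetic is required.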
By way of comparison, a scale down is a spatial operation, whereas TWA is a temporal operation. TWA, for instance, averages frames 0, 1, and 2 to obtain one (1) averaged frame. A scale down process takes the full size picture and reduces it down to a single pixel such that the picture is no longer recognizable. The single pixel from the scale down process would represent the overall color or the overall brightness of the entire frame, so in this case, there would be three pixels that represent frames 0, 1, and 2. A factor, such as the magnitude of a particular color, could be computed without doing a scale down process. However, the magnitude of a particular color might be determined by first performing a scale down process by averaging the full picture down to a single pixel and then examining the magnitude of the particular color of that one single remaining pixel.
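A minimal sketch contrasting the two operations: temporal averaging (TWA) combines corresponding pixels across several frames into one averaged frame, while a scale-down collapses a whole frame to a single value. Frames are represented here as flat lists of pixel values for simplicity.

```python
def temporal_average(frames):
    """TWA-style temporal operation: average corresponding pixels of
    several frames (e.g. frames 0, 1, and 2) into one averaged frame."""
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]


def scale_down_to_pixel(frame):
    """Scale-down spatial operation: reduce an entire frame to a
    single value representing its overall brightness or color."""
    return sum(frame) / len(frame)
```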
A quick and efficient means of locating identical segments in a digital video record is desirable for an entertainment system of limited processing power. A difference metric is a value that can be computed by comparing two segments on a frame by frame basis. A given pair of frames can only add to the value computed by the difference metric. When comparing two segments to each other, rather than process every pair of frames in the two segments, the entertainment system can process a subset thereof if a selected difference metric threshold is exceeded before the difference metric is fully computed. Early termination difference metrics are a quick way to reject or exclude non-match candidates in a digital video record (i.e. non-repeated video data) because the candidates are not close enough as determined by the selected difference metric threshold. Thus, every pair of frames need not be processed.
If all frames of the digital video record are to be used to find repeated video data, the offset of each frame of the digital video record is considered. Each offset in the digital video record is used as a base offset to determine the set of frames under consideration, excluding those base offsets that would otherwise attempt to use a frame outside the digital video record. For example, for offset 0, the frames 0+0, 0+10, and 0+30, or the frames 0, 10, and 30 might be considered. The offsets from the base offset (e.g. 0, 10, 30) do not change during analysis, whether that analysis spans multiple digital video records or is only performed within a single digital video record. The selected frames to be reviewed are then compared to a database of known commercials and an evaluation is made to see if these three frames match something in the database. If not, then the offset is incremented from zero to one so that Frame0 is now Frame1, Frame10 becomes Frame11, and Frame30 becomes Frame31. Then another evaluation is made to see if these three new frames match anything in the database. This incrementing continues, by one (1) to 2, 12, and 32, etc., until the last base offset is considered. Because comparing the digital video record to itself or to the database of known commercials can be computationally rigorous, it is faster to compare the digital video record to a data-reduced database of known commercials, where each reduced dataset in the database has a pointer to the rest of the corresponding frames, or a pointer to where the reduced versions of those frames can be located which are needed to do a more complete difference metric calculation. As such, the data-reduced database of known commercials can be small enough to be stored in a random access memory (RAM) of a consumer electronics entertainment system.
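The base-offset scan could look like the following sketch, where each frame value stands in for a data-reduced per-frame parameter and the known-commercial database is a set of sampled tuples. Exact tuple matching is an assumption made for brevity; a real system would compare with a difference metric and a tolerance.

```python
# Fixed offsets from the base offset, as in the text's example.
FRAME_OFFSETS = (0, 10, 30)


def find_matches(record_frames, known_signatures):
    """Slide a base offset through the record; at each base offset,
    sample the frames at the fixed offsets and look the resulting
    tuple up in a data-reduced database of known commercials.  Base
    offsets that would index past the end of the record are excluded."""
    matches = []
    last_base = len(record_frames) - max(FRAME_OFFSETS)
    for base in range(last_base):
        sample = tuple(record_frames[base + off] for off in FRAME_OFFSETS)
        if sample in known_signatures:
            matches.append(base)
    return matches
```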
By keeping the database small a hard drive need not be used, with its inherent time intensive constraints, and the processing time for the comparison algorithms can be faster than if a hard drive was required to store the database. Even if the database is too large to fit in available RAM, the database will occupy only a relatively small amount of hard drive space, and can be structured to require reading only a subset of the database when doing a single compare of a set of frames within a digital video record to the database.
The radius r of circle A represents the maximum value resulting from a full difference metric calculation below which two segments of video are considered substantially identical. The plotted difference metric can operate on other parameters, including the brightness of any particular color. Here a scale down process of the frames may first be desirable to reduce the amount of data and to limit the effect of frame alignment and frame shifts. A given algorithm for detecting the presence of duplicates need not explicitly compute a distance from a particular plotted point based at a base offset in the digital video record to each commercial in the spatial data structure. Rather, the algorithm can use a spatial data structure to avoid making computations on a per-known-commercial basis for each base offset. Such an algorithm only needs to compute the distance, and possibly the full difference metric, for those commercials that are not immediately excluded by the spatial data structure.
An alternative to a 2D circle application, discussed above, is a 3D sphere that can be made for all points equidistant by distance ‘r’ using three (3) frames, e.g. plotting Frame0, Frame15, and Frame30 for a point representing a commercial in the pre-existing data. A comparison of offsets in the digital video record to the volume of the sphere can be made to exclude non-matching candidates.
A full difference metric calculation requires processing all frames in two equal-length sections of the digital video record, which can impose too great a computational overhead. A faster computation is possible because the actual difference metric value is not needed unless it is less than the value above which the two segments are considered a non-match. In this faster approach, the squares of the differences of values in corresponding frames in the two segments of the digital video record are added together to make a partial calculation of the squared distance. This running sum may exceed the squared radius of the spatial figure (e.g., circle, sphere, or n-dimensional representation) before all of the terms that make up the squared distance have been added. Once the running sum exceeds the squared radius, the candidate can be excluded as a possible match and the remaining squared differences need not be added; the process can then move quickly to the next match candidate. This early termination version of the distance computation can greatly reduce the number of additions and multiplications, as compared to the full difference metric, because many match candidates quickly turn out to be non-matches. When all of the terms in the appropriate digital video record segment have been added and the squared distance does not exceed the squared radius of the spatial figure, the segment is a match to the commercial in the pre-existing database. The corresponding range of frames in the digital video record can then be flagged with the occurrence of the primary feature of a commercial repeat for further analysis under secondary features.
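The early-termination computation can be sketched as follows, assuming each segment has been reduced to a sequence of comparable per-frame values; the function name is hypothetical.

```python
def segments_match(segment_a, segment_b, r):
    """Accumulate squared per-frame differences and stop as soon as the
    running sum exceeds r**2, since the segments can then no longer match."""
    squared_radius = r * r  # compare squared sums against the squared radius
    total = 0.0
    for a, b in zip(segment_a, segment_b):
        total += (a - b) ** 2
        if total > squared_radius:
            return False  # excluded early; remaining terms are not added
    return True  # the full sum stayed within r**2: the segments match
```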
Process 700 moves from block 706 or 710 to block 712, where offsets in the list are added to the database. Then, at block 714, the offsets are used to collect parameters to create the commercial signature, which is the collection of parameters and the offset. At block 716, process 700 defines the spatial data structure volume using the commercial signature. Again referring to
Process 700, seen in
A process 800 in
With respect to block 808, because two or more commercials can be shown multiple times in the same sequence, the database contains not only the reduced form of each commercial, but also, if known, the length of the commercial and the offset of the first frame of the commercial relative to the first frame used in the commercial's signature. Whenever a match is found between a stream under analysis and a commercial in the database, if the database knows the length of that commercial, the match is shorter than that length but still long enough to be considered a commercial, and the non-match portion or portions (one or two) are also each long enough to be considered commercials, then a known commercial split is undertaken, as explained below.
A known commercial split takes one entry in the database and creates one or two additional new entries, depending on whether the non-match portion before the current match is long enough to be a commercial and whether the non-match portion after the current match is long enough to be a commercial. The first step is to duplicate the item that represents the known commercial in the database. The new item is updated to refer to one of the non-match portions of the preexisting item, and the preexisting item is updated to refer to the portion not referred to by the new item. If a second non-match portion of the preexisting item is long enough to be considered a commercial, the preexisting item is duplicated again, the second new item is updated to refer to the remaining non-match portion, and the preexisting item is updated to refer to the portion not referred to by the second new item. A signature is then created for each new item, and all new items are inserted into the spatial data structure as separate known commercials.
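The split might be sketched as follows, with each database entry reduced to a start frame and a length; `MIN_COMMERCIAL_FRAMES` and the entry layout are assumptions for illustration, and here the preexisting entry is left referring to the matched portion:

```python
MIN_COMMERCIAL_FRAMES = 300  # assumed minimum, e.g. 10 seconds at 30 fps

def split_known_commercial(entry, match_start, match_length):
    """Shrink `entry` to the matched portion and return new entries for
    each non-match portion long enough to be a commercial on its own."""
    new_entries = []
    before_length = match_start - entry['start']
    after_length = (entry['start'] + entry['length']) - (match_start + match_length)
    if before_length >= MIN_COMMERCIAL_FRAMES:
        new_entries.append({'start': entry['start'], 'length': before_length})
    if after_length >= MIN_COMMERCIAL_FRAMES:
        new_entries.append({'start': match_start + match_length,
                            'length': after_length})
    # The preexisting entry is updated to cover only the matched portion;
    # each new entry then gets its own signature and spatial insertion.
    entry['start'], entry['length'] = match_start, match_length
    return new_entries
```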
If the length of the commercial is not yet known by the database, the database is updated to record the length of the commercial as the length of the match, and the signature-relative offset of the commercial in the database is updated to reflect the start of the match.
Block 808 can employ audio data in addition to video data in determining similarity; if the audio is found to be significantly different, a false match can be avoided. Process 800 moves to block 812 after block 808, where match parameters of the stream are set to a value derived from the length of the match for the duration of the matched segment in the current stream. The match lengths can be near durations of 10, 15, 30, etc. seconds in order to obtain large values. Process 800 then moves to block 814, where a query is made as to whether all of the data has been analyzed for the commercial matching process, e.g., whether there are more candidates. If not, process 800 moves to block 820; if so, the next candidate is retrieved at block 816 and process 800 returns to block 808.
At block 820, a query is made as to whether all offsets have been considered. If so, process 800 terminates for the stream of video data under consideration at block 822; otherwise, block 819 increments the offset to the next offset for the stream under consideration and repeats the foregoing blocks by returning to block 805. Accordingly, video data in a digital video record can be efficiently matched to a database of known commercials to identify commercials. Also, processes 700, 701, and 800 can be used to compare video data in a digital video record against itself to identify repeated commercials in a digital video record.
Secondary features that occur in video data of a digital video record, as discussed above, can be used to approximate frames in a digital video record that are transitions into or out of a commercial or non-commercial. By contrast with a primary feature, which approximates a range of frames, a secondary feature approximates a single frame as the point of transition. Optionally, and in addition thereto, the approximated beginning and ending frames of each range can also be used as one such secondary feature.
Many different attributes occurring in video data from Network broadcast television programming can be used as secondary features. Any such secondary feature now known, or yet to be understood, developed, or implemented, is intended to be used and is contemplated to be within the scope of one or more embodiments of the present invention. By way of example, and not by way of limitation, several secondary features are discussed below.
Secondary features can be identified using video data analysis. One such secondary feature is the occurrence of a transition into or out of 3:2 pulldown. 3:2 pulldown is a technique that uses 59.94 fields per second to represent material that was originally 24 frames per second (24 fps). Motion pictures (movies) produced by studios are typically broadcast on Network television using 3:2 pulldown. If 3:2 pulldown is present, it indicates that the original source material was film; if absent, it indicates that the original source material was video. A change therefore indicates a transition from film to video or from video to film. If analysis of the digital video record located 30 seconds of 3:2 pulldown within a long, multiple-minute recording that otherwise lacked 3:2 pulldown, a primary feature would be deemed to have occurred, in that that portion of the digital video record would be estimated to be a 30-second commercial. However, because this phenomenon is not consistent in Network broadcast television, the 30 seconds may not in fact be a commercial. As such, shifts to or from 3:2 pulldown are more accurate at detecting transitions (i.e., a secondary feature) than ranges of frames (i.e., a primary feature).
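Assuming a per-frame indicator of whether 3:2 pulldown is detected (the detection itself is outside this sketch), flagging the transitions reduces to finding where the indicator flips:

```python
def pulldown_transitions(pulldown_flags):
    """Return the frame indices at which the 3:2 pulldown indicator
    changes state; each flip is a candidate film/video transition."""
    return [i for i in range(1, len(pulldown_flags))
            if pulldown_flags[i] != pulldown_flags[i - 1]]
```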
Another secondary feature occurring in video data is a change in the size and frequency of the frame types, including frame types I, P, and B. Changes in the size of the I, P, and/or B frames can be monitored and a threshold set to flag the occurrence of a secondary feature; similarly, changes in the frequency of I, P, and/or B frames in the digital video record can be monitored and thresholded. Because this type of secondary feature is not as reliable as the 3:2 pulldown secondary feature, where both are used to identify commercial and non-commercial segments, each can be weighted to provide better association of transitions and ranges into identified commercial segments.
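A rough stand-in for the monitoring described above, assuming per-frame encoded sizes are available for a given frame type; the window and threshold are hypothetical tuning parameters:

```python
def frame_size_changes(frame_sizes, window, threshold):
    """Flag frame indices where the average encoded size of the frames in
    the window after the index differs from the average over the window
    before it by more than `threshold`, marking a candidate secondary
    feature."""
    flagged = []
    for i in range(window, len(frame_sizes) - window):
        before = sum(frame_sizes[i - window:i]) / window
        after = sum(frame_sizes[i:i + window]) / window
        if abs(after - before) > threshold:
            flagged.append(i)
    return flagged
```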
Yet another secondary feature occurring in video data is a Fade-To-Black-And-Back (FTBAB) transition. Here, most of the pixels, or a high threshold thereof, go to a color at or near black, both going into and coming out of a commercial. This is a fairly reliable secondary feature and can be weighted accordingly where used with other types of secondary features.
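A fade-to-black frame test might look like the following, where the luma cutoff and the required fraction of dark pixels are assumed values:

```python
BLACK_LEVEL = 16       # assumed luma value at or below which a pixel is "black"
BLACK_FRACTION = 0.95  # assumed fraction of pixels that must be black

def is_black_frame(luma_pixels):
    """True when nearly all pixels are at or near black, as occurs during
    a Fade-To-Black-And-Back transition."""
    dark = sum(1 for p in luma_pixels if p <= BLACK_LEVEL)
    return dark / len(luma_pixels) >= BLACK_FRACTION
```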
Secondary features can also be identified using non-video data analysis. One such secondary feature is a change in the total active audio frequency band. Another is the audio cutting out substantially, a Go-To-Quiet-And-Back (GTQAB) occurrence, when going into and coming out of a commercial. A GTQAB secondary feature thus marks a transition from a non-commercial to a commercial or vice versa, and can be used to approximate the beginning or the end of each non-commercial or commercial segment. After a nearby primary feature is identified, a GTQAB occurrence can be identified to approximate the beginning and end of commercials and non-commercials.
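Locating GTQAB occurrences can be sketched as a search for sufficiently long runs of near-silent audio, assuming per-sample (or per-window) RMS levels are available; the threshold and minimum length are hypothetical parameters:

```python
def quiet_spans(rms_levels, quiet_threshold, min_length):
    """Return (start, end) index pairs of runs where the audio RMS level
    stays below `quiet_threshold` for at least `min_length` samples."""
    spans, start = [], None
    for i, level in enumerate(rms_levels):
        if level < quiet_threshold:
            if start is None:
                start = i  # a quiet run begins
        elif start is not None:
            if i - start >= min_length:
                spans.append((start, i))  # a long-enough quiet run ends
            start = None
    if start is not None and len(rms_levels) - start >= min_length:
        spans.append((start, len(rms_levels)))  # quiet run reaches the end
    return spans
```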
The foregoing discussion is directed towards Network broadcast digital video television programming that is recorded and analyzed. The present invention also contemplates and is applicable to Network broadcast analog video television programming that is digitized into frames of digital video data in a digital video record.
While aspects of the described methods, program products, and data structures can be used in any of these systems and for any types of client devices, they are described in the context of the following exemplary environment.
Program provider 102 includes an electronic program guide (EPG) database 110 and an EPG server 112. The EPG database 110 stores electronic files of program data 114 which is used to generate an electronic program guide (or, “program guide”) that can be separately multiplexed into a data stream. Program data includes program titles, ratings, characters, descriptions, actor names, station identifiers, channel identifiers, schedule information, and so on. The terms “program data” and “EPG data” are used interchangeably throughout this discussion. For discussion purposes, an electronic file maintains program data 114 that includes a program title 116, a program day or days 118 to identify which days of the week the program will be shown, and a start time or times 120 to identify the time that the program will be shown on the particular day or days of the week.
The EPG server 112 processes the EPG data prior to distribution to generate a published version of the program data which contains programming information for all channels for one or more days. The processing may involve any number of techniques to reduce, modify, or enhance the EPG data. Such processes might include selection of content, content compression, format modification, and the like. The EPG server 112 controls distribution of the published version of the program data from program data provider 102 to the content distribution system 104 using, for example, a file transfer protocol (FTP) over a TCP/IP network (e.g., Internet, UNIX, etc.). Alternatively, this distribution can be transmitted directly from a satellite to a local client satellite dish receiver for communication to a local client set top box.
Content provider 103 includes a content server 122 and stored content 124, such as movies, television programs, commercials, music, and similar audio and/or video content. Content provider 103, also known as a ‘headend’, performs video insertion from a content source and an advertising source, and then places the content with insertions onto a transmission link or a satellite uplink. Content server 122 controls distribution of the stored content 124 and EPG data from content provider 103 to the content distribution system 104. Additionally, content server 122 controls distribution of live content (e.g., content that was not previously stored, such as live feeds) and/or content stored at other locations to the content distribution system 104.
Content distribution system 104 contains a broadcast transmitter 126 and one or more content and program data processors 128. Broadcast transmitter 126 broadcasts signals, such as cable television signals, across broadcast network 108. Broadcast network 108 can include a cable television network, RF, microwave, satellite, and/or data network, such as the Internet, and may also include wired or wireless media using any broadcast format or broadcast protocol. Additionally, broadcast network 108 can be any type of network, using any type of network topology and any network communication protocol, and can be represented or otherwise implemented as a combination of two or more networks.
Content and program data processor 128 processes the content and program data received from content provider 102 prior to transmitting the content and program data across broadcast network 108. A particular content processor may encode, or otherwise process, the received content into a format that is understood by the multiple client devices 106(1), 106(2), . . . , 106(N) coupled to broadcast network 108. Although
Content distribution system 104 is representative of a headend service that provides EPG data, as well as content, to multiple subscribers. Each content distribution system 104 may receive a slightly different version of the program data that takes into account different programming preferences and lineups. The EPG server 112 creates different versions of EPG data (e.g., different versions of a program guide) that include those channels of relevance to respective headend services. Content distribution system 104 transmits the EPG data to the multiple client devices 106(1), 106(2), . . . , 106(N). In one implementation, for example, distribution system 104 utilizes a carousel file system to repeatedly broadcast the EPG data over an out-of-band (OOB) channel to the client devices 106.
Client devices 106 can be implemented in a number of ways. For example, a client device 106(1) receives broadcast content from a satellite-based transmitter via a satellite dish 130. Client device 106(1) is also referred to as a set-top box or a satellite receiving device. Client device 106(1) is coupled to a television 132(1) for presenting the content received by the client device (e.g., audio data and video data), as well as a graphical user interface. A particular client device 106 can be coupled to any number of televisions 132 and/or similar devices that can be implemented to display or otherwise render content. Similarly, any number of client devices 106 can be coupled to a television 132.
Client device 106(2) is also coupled to receive broadcast content from broadcast network 108 and provide the received content to associated television 132(2). Client device 106(N) is an example of a combination television 134 and integrated set-top box 136. In this example, the various components and functionality of the set-top box are incorporated into the television, rather than using two separate devices. The set-top box incorporated into the television may receive broadcast signals via a satellite dish (similar to satellite dish 130) and/or via broadcast network 108. In alternate implementations, client devices 106 may receive broadcast signals via the Internet or any other broadcast medium.
Each client 106 runs an electronic program guide (EPG) application that utilizes the program data. An EPG application enables a TV viewer to navigate through an onscreen program guide and locate television shows of interest to the viewer. With an EPG application, the TV viewer can look at schedules of current and future programming, set reminders for upcoming programs, and/or enter instructions to record one or more television shows.
Exemplary Client Device
Client device 106 receives one or more broadcast signals 410 from one or more broadcast sources, such as from a satellite or from a broadcast network. Client device 106 includes hardware and/or software for receiving and decoding broadcast signal 410, such as an NTSC, PAL, SECAM, MPEG, or other TV system video signal. Client device 106 also includes hardware and/or software for providing the user with a graphical user interface (GUI), which the user can employ for a variety of purposes. One such purpose is to allow the user to set a predetermined percentage of the playback time of the digital video record that will most likely be taken up by the associated commercials identified by the client device 106, for all or some of the channels that the client device 106 is capable of receiving. The user can also use the GUI, for example, to access various network services, configure the client device 106, and perform other functions.
Client device 106 is capable of communicating with other devices via one or more connections including a conventional telephone link 412, an ISDN link 414, a cable link 416, an Ethernet link 418, and a link 419 that is a DSL or an ADSL link. Client device 106 may use any one or more of the various communication links 412-419 at a particular instant to communicate with any number of other devices.
Client device 106 generates video signal(s) 420 and audio signal(s) 422, either of which can optionally be communicated to television 132 or to another video and/or audio output device. The video signals and audio signals can be communicated from client device 106 to television 132 (or other such output device) via an RF (radio frequency) link, S-video link, composite video link, component video link, or other communication link. Although not shown in
Client device 106 also includes one or more processors 304 and one or more memory components. Examples of possible memory components include a random access memory (RAM) 306, a disk drive 308, a mass storage component 310, and a non-volatile memory 312 (e.g., ROM, Flash, EPROM, EEPROM, etc.). Alternative implementations of client device 106 can include a range of processing and memory capabilities, and may include more or fewer types of memory components than those illustrated in
Processor(s) 304 process various instructions to control the operation of client device 106 and to communicate with other electronic and computing devices. The memory components (e.g., RAM 306, disk drive 308, storage media 310, and non-volatile memory 312) store various information and/or data such as content, EPG data, configuration information for client device 106, and/or graphical user interface information.
An operating system 314 and one or more application programs 316 may be stored in non-volatile memory 312 and executed on processor 304 to provide a runtime environment. A runtime environment facilitates extensibility of client device 106 by allowing various interfaces to be defined that, in turn, allow application programs 316 to interact with client device 106. In the illustrated example, an EPG application 318 is stored in memory 312 to operate on the EPG data and generate a program guide. The application programs 316 that may be implemented at client device 106 can include programs that can approximate and flag ranges of frames in a digital video record that are commercials or non-commercials, to approximate and flag frames in a digital video record that are either the beginning or end of a commercial or a non-commercial, and to associate and flag as a commercial two of the approximated frames that are a beginning or an ending of a commercial or a non-commercial and one or more of the approximated ranges there between, where the associated commercials make up a predetermined percentage of the playback time of the digital video record.
Other application programs 316 that may be implemented at client device 106 include a browser to browse the Web, an email program to facilitate electronic mail, and so on. Client device 106 can also include other components pertaining to a television entertainment system which are not illustrated in this example for simplicity purposes. For instance, client device 106 can include a user interface application and user interface lights, buttons, controls, etc. to facilitate viewer interaction with the device.
Client device 106 also includes a decoder 320 to decode a broadcast video signal, such as an NTSC, PAL, SECAM, MPEG or other TV system video signal. Alternatively, a decoder for client device 106 can be implemented, in whole or in part, as a software application executed by processor(s) 304. Client device 106 further includes a wireless interface 322, a network interface 324, a serial and/or parallel interface 326, and a modem 328. Wireless interface 322 allows client device 106 to receive input commands and other information from a user-operated input device, such as from a remote control device or from another IR, Bluetooth, or similar RF input device.
Network interface 324 and serial and/or parallel interface 326 allow client device 106 to interact and communicate with other electronic and computing devices via various communication links. Although not shown, client device 106 may also include other types of data communication interfaces to communicate with other devices. Modem 328 enables client device 106 to communicate with other electronic and computing devices via a conventional telephone line.
Client device 106 also includes an audio output 330 and a video output 332 that provide signals to a television or other device that processes and/or presents or otherwise renders the audio and video data. Although shown separately, some of the components of client device 106 may be implemented in an application specific integrated circuit (ASIC). Additionally, a system bus (not shown) typically connects the various components within client device 106. A system bus can be implemented as one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, or a local bus using any of a variety of bus architectures. By way of example, such architectures can include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnects (PCI) bus also known as a Mezzanine bus.
General reference is made herein to one or more client devices, such as client device 106. As used herein, “client device” means any electronic device having data communications, data storage capabilities, and/or functions to process signals, such as broadcast signals, received from any of a number of different sources.
Portions of the methods, program products, and data structures described herein may be implemented in any combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) or programmable logic devices (PLDs) could be designed or programmed to implement one or more portions of the methods, program products, and data structures.
Although the methods, program products, and data structures have been described in language specific to structural features and/or methodological steps, it is to be understood that the invention defined in the appended claims is not necessarily limited to the specific features or steps described. Rather, the specific features and steps are disclosed as preferred forms of implementing the claimed invention.
|Mar 26, 2002||AS||Assignment|
Owner name: MICROSOFT CORPORATION, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GREEN, DUSTIN;REEL/FRAME:012773/0304
Effective date: 20020320
|Apr 19, 2011||CC||Certificate of correction|
|Sep 21, 2011||FPAY||Fee payment|
Year of fee payment: 4
|Dec 9, 2014||AS||Assignment|
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034541/0477
Effective date: 20141014
|Dec 23, 2015||FPAY||Fee payment|
Year of fee payment: 8