
Publication number: US 20050198006 A1
Publication type: Application
Application number: US 11/063,559
Publication date: Sep 8, 2005
Filing date: Feb 24, 2005
Priority date: Feb 24, 2004
Also published as: CA2498364A1, CA2498364C, US8015159, US20080072256
Inventors: Trevor Boicey, Christopher Johnson
Original Assignee: Dna13 Inc.
System and method for real-time media searching and alerting
US 20050198006 A1
Abstract
A method and system for continually storing and cataloguing streams of broadcast content, allowing real-time searching and real-time results display of all catalogued video. A bank of video recording devices stores and indexes all video content on any number of broadcast sources. This video is stored along with the associated program information such as program name, description, airdate and channel. A parallel process obtains the text of the program, either from the closed captioning data stream, or by using a speech-to-text system. Once the text is decoded, stored, and indexed, users can then perform searches against the text, and view matching video immediately along with its associated text and broadcast information. Users can retrieve program information by other methods, such as by airdate, originating station, program name and program description. An alerting mechanism scans all content in real-time and can be configured to notify users by various means upon the occurrence of specified search criteria in the video stream. The system is preferably designed to be used on publicly available broadcast video content, but can also be used to catalog private video, such as conference speeches, or audio-only content such as radio broadcasts.
Claims(41)
1. A media monitoring system for receiving at least one video channel having corresponding closed captioned text in real time, comprising:
a media management system for continuously storing all the data of the at least one video channel locally and for extracting the corresponding closed captioned text into decoded text, the decoded text being provided to a global storage database, the media management system having
a search engine for comparing the decoded text against search terms to provide matching results, and
an indexing engine for indexing units of the decoded text by time; and
a user access system for receiving and displaying the matching results, the user access system transmitting a request for stored data corresponding to specific units of the decoded text from the media management system, the media management system providing said stored data corresponding to specific units of the decoded text in response to the request.
2. The media monitoring system of claim 1, wherein the media management system includes
a media server pod for receiving the at least one video channel and for locally storing the data of the at least one video channel, the media server pod including a closed caption decoder for extracting the corresponding closed captioned text into the decoded text,
an index server for receiving the decoded text from the media server pod over a first network, the index server having the indexing engine, and
a web server including the global storage database for storing the decoded text received from the index server over a second network, the web server having the search engine and a search term database for storing the search terms.
3. The media monitoring system of claim 2, wherein the media server pod includes
at least one media source for providing the at least one video channel, and
a media server in direct communication with the at least one media source for receiving the at least one video channel, the media server having a decoder for extracting the corresponding closed captioned text into decoded text from a vertical blanking interval of the at least one video channel, the media server including mass storage media for storing the data of the at least one video channel.
4. The media monitoring system of claim 3, wherein the media server includes a parser for generating the stored data corresponding to specific units of the decoded text.
5. The media monitoring system of claim 3, wherein the media server pod includes a plurality of media sources for providing a corresponding number of video channels.
6. The media monitoring system of claim 3, wherein the media server includes a video/audio compression system for compressing the data of the at least one video channel prior to storage onto the mass storage media.
7. The media monitoring system of claim 6, wherein the media server includes a speech-to-text system for converting audio signals corresponding to the at least one video channel into text.
8. The media monitoring system of claim 6, wherein the media server includes a text detector for detecting an absence of the corresponding closed captioned text, the text detector generating an alert indicating the absence of the corresponding closed captioned text.
9. The media monitoring system of claim 3, wherein the media source includes one of a satellite receiver, a cable box, an antenna, and a digital radio source.
10. The media monitoring system of claim 2, wherein the index server includes the global storage database.
11. The media monitoring system of claim 2, wherein the web server includes the global storage database.
12. The media monitoring system of claim 3, wherein the first network includes a wide area network.
13. The media monitoring system of claim 12, wherein the media management system further includes a second media server pod for receiving data of a different video channel, the second media server pod being in communication with the first network.
14. The media monitoring system of claim 13, wherein the media server pod and the second media server pod are geographically distant from each other.
15. The media monitoring system of claim 1, wherein the user access system includes a duplicate video clip detector for identifying the matching results that are duplicates of each other.
16. The media monitoring system of claim 2, wherein the user access system includes a user access device in communication with the web server over a third network, for receiving and displaying the matching results.
17. The media monitoring system of claim 16, wherein the user access system includes a fourth network in communication with the user access device and the media server pod, the user access device receiving said stored data corresponding to specific units of the decoded text in response to the request over the fourth network.
18. The media monitoring system of claim 16, wherein the user access device provides the search terms to the media management system.
19. A method for searching video data corresponding to at least one video channel collected and stored in a media monitoring system, comprising:
(a) providing search terms;
(b) comparing the search terms to stored closed captioned text corresponding to the video data, the closed captioned text being indexed by channel and time;
(c) displaying matching results from the step of comparing;
(d) requesting selected video data corresponding to one of the matching results; and
(e) providing the selected video data corresponding to one of the matching results.
20. The method of claim 19, wherein the step of requesting includes selecting a time indexed segment of the closed captioned text of one of the matching results.
21. The method of claim 20, wherein the step of selecting includes setting a video start time and a video end time for the selected time indexed segment.
22. The method of claim 21, wherein the step of providing the selected video data includes parsing the video data to correspond with the selected time indexed segment to provide the selected video data.
23. The method of claim 19, wherein the step of providing search terms includes storing the search terms.
24. The method of claim 19, wherein the search terms are provided to a web server over a first network, the web server executing the step of comparing and providing the matching results to a user access device for display over the first network.
25. The method of claim 24, wherein the step of providing the video data includes transferring the video data over a second network to the user access device.
26. The method of claim 24, wherein the step of providing the video data includes parsing the video data to provide a portion of the video data.
27. A method for automatic identification of video clips matching stored search terms comprising:
(a) continuously receiving and locally storing video data corresponding to at least one video channel in real time;
(b) extracting and globally storing the closed captioned text from the video data;
(c) indexing the closed captioned text by channel and time;
(d) comparing the stored closed captioned text to the stored search terms; and,
(e) providing match results of the closed captioned text matching the search terms, each match result having an optionally viewable video clip.
28. The method of claim 27, further including the steps of:
displaying the match results on a user access device, requesting the video clip corresponding to a selected match result, and displaying the video clip on the user access device.
29. The method of claim 28, wherein the step of requesting includes viewing the closed captioned text corresponding to the selected match result with time indices,
setting a video start time and a video end time, and,
providing a request having the video start time and the video end time, and channel information corresponding to the selected match result.
30. The method of claim 29, wherein the step of displaying the video clip includes
receiving the request, and,
parsing the video data to provide the video clip having the video start time and the video end time.
31. The method of claim 27, wherein the video data is compressed prior to being stored.
32. The method of claim 30, wherein the extracted closed captioned text is transmitted over a first network to an indexing server for indexing the closed captioned text by channel and time.
33. The method of claim 32, wherein the closed captioned text is transmitted over a second network for storage on a web server.
34. The method of claim 33, wherein the step of comparing is executed on the web server, and the match results are transmitted over a third network to a user access device.
35. The method of claim 34, wherein the step of displaying the video clip includes transmitting the video clip over a fourth network to the user access device.
36. The method of claim 27, wherein the step of comparing includes
(i) providing a segment of the closed captioned text,
(ii) iteratively obtaining search terms from the stored search terms for comparing to the segment of the closed captioned text until all of the stored search terms have been compared to the segment of the closed captioned text, and
(iii) storing details of all matches to the stored search terms as the match results.
37. The method of claim 36, wherein the step of storing includes selectively generating an alert when the segment of the closed captioned text matches the stored search terms.
38. The method of claim 37, wherein the step of selectively generating includes generating the alert only when the matching search term has an associated alerting status.
39. The method of claim 38, wherein the alert can include one of in-system alerts, mobile device activation, pager activation and automatic email generation.
40. The method of claim 27, wherein the step of extracting includes detecting an absence of the closed captioned text from the video data.
41. The method of claim 40, wherein the step of detecting includes generating an alert message when no closed captioned text is detected.
Description
    CROSS REFERENCE TO RELATED APPLICATIONS
  • [0001]
    This application claims the benefit of U.S. Provisional Application No. 60/546,954, filed Feb. 24, 2004, the entire contents of which are incorporated herein by reference.
  • FIELD OF THE INVENTION
  • [0002]
    The present invention relates generally to media monitoring systems. More particularly, the present invention relates to video media searching and alerting systems.
  • BACKGROUND OF THE INVENTION
  • [0003]
    Many businesses and organizations have an interest in what is being broadcast, but the volume of information available makes it prohibitive to monitor completely.
  • [0004]
The overwhelming majority of broadcast sources include closed captions, which have been used successfully to identify the subject matter of a video stream. Systems have been developed to monitor and act upon the closed captioned text. For example, such systems trigger on the basis of keywords and selectively record video for later viewing. However, such systems permit no refinement or cross-referencing of past video, and new searches apply only to subsequent video broadcasts.
  • [0005]
U.S. Pat. No. 5,481,296 is directed to a scanning method of monitoring video content using a predefined set of keywords. Based on a keyword, the system has the ability to monitor multiple streams and to re-tune reception devices in real time to selectively capture the matching video. The described system also attempts to selectively save video that has matched while removing segments that have not matched. The goal is to selectively record only the video that is desired.
  • [0006]
    U.S. Pat. No. 5,986,692 is directed to a system for generating a custom-tailored video stream. The system is designed to work unattended, watching video signals, extracting and collating those that are deemed to be of interest to a specific user. The system also defines filters that attempt to detect and discern specific components of a video signal that are unwanted. For example, opening credits are video components that are typically undesired.
  • [0007]
U.S. Pat. No. 6,061,056 is directed to a system that automatically monitors a video stream for desired content. Users enter their search parameters, and the system watches the applied video streams for matches. However, this system only records video when a match occurs. The user is then presented with a series of clips that were saved based on their matches. Any new searches or refinements to the query only take effect on future video. As well, any desired content that was not caught by the programmed search is lost forever. As an example, a user search for “Company A” may produce a result announcing a surprise merger of “Company A” and “Company B”. With the system as described in U.S. Pat. No. 6,061,056, a new search for “Company B” will only take effect on video occurring after the user adds this search. Therefore, the system is incapable of searching any records prior to the new search being executed, such as recent happenings leading up to the merger.
  • [0008]
    U.S. Pat. No. 6,266,094 is directed to a system of aggregating and distributing closed caption text over a distributed system. The system focuses on extensive scrubbing and preparation of closed caption text to enhance usability. However, the described system has no facility for archiving the video associated with the clip, nor does it present the program text to the user.
  • [0009]
    It is, therefore, desirable to provide a media monitoring system that can dynamically search archived media content and real-time media content with unlimited queries.
  • SUMMARY OF THE INVENTION
  • [0010]
    It is an object of the present invention to obviate or mitigate at least one disadvantage of previous media monitoring systems. In particular, it is an object of the present invention to provide a system and method for conducting real-time searches of recorded video, by comparing extracted closed captioned text of the video to predefined search parameters. Selected video segments time indexed to closed captioned text segments can be selectively viewed. The system searches real-time video and archived video.
  • [0011]
    In a first aspect, the present invention provides a media monitoring system for receiving at least one video channel having corresponding closed captioned text in real time. The media monitoring system includes a media management system and a user access system. The media management system continuously stores all the data of the at least one video channel locally and extracts the corresponding closed captioned text into decoded text. The decoded text is provided to a global storage database. The media management system further includes a search engine for comparing the decoded text against search terms to provide matching results, and an indexing engine for indexing units of the decoded text by time. The user access system receives and displays the matching results, and transmits a request for stored data corresponding to specific units of the decoded text from the media management system. The media management system then provides said stored data corresponding to specific units of the decoded text in response to the request.
  • [0012]
According to embodiments of the first aspect, the media management system can include a media server pod, an index server and a web server. The media server pod receives the at least one video channel and locally stores the data of the at least one video channel. The media server pod can include a closed caption decoder for extracting the corresponding closed captioned text into the decoded text. The index server receives the decoded text from the media server pod over a first network, and includes the indexing engine. The web server includes the global storage database for storing the decoded text received from the index server over a second network. The web server can include the search engine and a search term database for storing the search terms. The media server pod can include at least one media source for providing the at least one video channel, and a media server in direct communication with the at least one media source. The media server receives the at least one video channel, and has a decoder for extracting the corresponding closed captioned text into decoded text from a vertical blanking interval of the at least one video channel. The media server can further include mass storage media for storing the data of the at least one video channel.
  • [0013]
    In aspects of the present embodiment, the media server can include a parser for generating the stored data corresponding to specific units of the decoded text, and the media server pod can include a plurality of media sources for providing a corresponding number of video channels. The media server can include a video/audio compression system for compressing the data of the at least one video channel prior to storage onto the mass storage media.
  • [0014]
    According to further aspects of the present embodiment, the media server can include a speech-to-text system for converting audio signals corresponding to the at least one video channel into text, and a text detector for detecting an absence of the corresponding closed captioned text, such that the text detector generates an alert indicating the absence of the corresponding closed captioned text. In yet other aspects, the media source can include one of a satellite receiver, a cable box, an antenna, and a digital radio source. The index server can include the global storage database, or the web server can include the global storage database.
  • [0015]
    According to yet another embodiment of the present aspect, the first network can include a wide area network. The media management system can further include a second media server pod for receiving data of a different video channel, where the second media server pod is in communication with the first network. The media server pod and the second media server pod can be geographically distant from each other.
  • [0016]
In further embodiments of the present aspect, the user access system can include a duplicate video clip detector for identifying the matching results that are duplicates of each other, and a user access device in communication with the web server over a third network for receiving and displaying the matching results. The user access device can provide the search terms to the media management system. In an aspect of the present embodiments, the user access system can include a fourth network in communication with the user access device and the media server pod, where the user access device receives said stored data corresponding to specific units of the decoded text in response to the request over the fourth network.
  • [0017]
    In a second aspect, the present invention provides a method for searching video data corresponding to at least one video channel collected and stored in a media monitoring system. The method includes (a) providing search terms; (b) comparing the search terms to stored closed captioned text corresponding to the video data, the closed captioned text being indexed by channel and time; (c) displaying matching results from the step of comparing; (d) requesting selected video data corresponding to one of the matching results; and (e) providing the selected video data corresponding to one of the matching results.
  • [0018]
In an embodiment of the present aspect, the step of requesting includes selecting a time indexed segment of the closed captioned text of one of the matching results, the step of selecting includes setting a video start time and a video end time for the selected time indexed segment, and the step of providing the selected video data includes parsing the video data to correspond with the selected time indexed segment to provide the selected video data. The step of providing search terms can include storing the search terms. In yet another embodiment of the present aspect, the search terms are provided to a web server over a first network, where the web server executes the step of comparing and provides the matching results to a user access device for display over the first network. The step of providing the video data includes transferring the video data over a second network to the user access device, and the step of providing the video data can include parsing the video data to provide a portion of the video data.
  • [0019]
    In a third aspect, the present invention provides a method for automatic identification of video clips matching stored search terms. The method includes (a) continuously receiving and locally storing video data corresponding to at least one video channel in real time; (b) extracting and globally storing the closed captioned text from the video data; (c) indexing the closed captioned text by channel and time; (d) comparing the stored closed captioned text to the stored search terms; and, (e) providing match results of the closed captioned text matching the search terms, each match result having an optionally viewable video clip.
  • [0020]
    According to an embodiment of the present aspect, the method can further include the steps of displaying the match results on a user access device, requesting the video clip corresponding to a selected match result, and displaying the video clip on the user access device. The step of requesting includes viewing the closed captioned text corresponding to the selected match result with time indices, setting a video start time and a video end time, and, providing a request having the video start time and the video end time, and channel information corresponding to the selected match result. The step of displaying the video clip includes receiving the request, and parsing the video data to provide the video clip having the video start time and the video end time.
  • [0021]
According to another embodiment of the present aspect, the video data is compressed prior to being stored, the extracted closed captioned text is transmitted over a first network to an indexing server for indexing the closed captioned text by channel and time, the closed captioned text is transmitted over a second network for storage on a web server, and the step of comparing is executed on the web server. The match results can be transmitted over a third network to a user access device, and the step of displaying the video clip includes transmitting the video clip over a fourth network to the user access device.
  • [0022]
    According to yet another embodiment of the present aspect, the step of comparing includes (i) providing a segment of the closed captioned text, (ii) iteratively obtaining search terms from the stored search terms for comparing to the segment of the closed captioned text until all of the stored search terms have been compared to the segment of the closed captioned text, and (iii) storing details of all matches to the stored search terms as the match results. The step of storing includes selectively generating an alert when the segment of the closed captioned text matches the stored search terms, the step of selectively generating includes generating the alert only when the matching search term has an associated alerting status, and the alert can include one of in-system alerts, mobile device activation, pager activation and automatic email generation.
  • [0023]
    According to a further embodiment of the present aspect, the step of extracting includes detecting an absence of the closed captioned text from the video data, and generating an alert message when no closed captioned text is detected.
  • [0024]
Other aspects and features of the present invention will become apparent to those ordinarily skilled in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying figures.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0025]
    Embodiments of the present invention will now be described, by way of example only, with reference to the attached Figures, wherein:
  • [0026]
    FIG. 1 is a schematic of the media monitoring system according to an embodiment of the present invention;
  • [0027]
    FIG. 2 is a schematic of the media monitoring system according to another embodiment of the present invention;
  • [0028]
    FIG. 3 is a block diagram of the functional components of the media monitoring system shown in FIG. 1;
  • [0029]
    FIG. 4 is a flow chart illustrating a manual operation mode of the media monitoring system of the present invention;
  • [0030]
    FIG. 5 is a computer screen user interface for prompting search parameters from a user;
  • [0031]
    FIG. 6 is a computer screen user interface showing compact example results from a search;
  • [0032]
    FIG. 7 is a computer screen user interface showing detailed example results from a search;
  • [0033]
    FIG. 8 is a computer screen user interface showing matching captioning and timing information; and,
  • [0034]
    FIG. 9 is a flow chart illustrating an automatic scanning mode of the media monitoring system of the present invention.
  • DETAILED DESCRIPTION
  • [0035]
Generally, the present invention provides a method and system for continually storing and cataloguing streams of broadcast content, allowing real-time searching and real-time results display of all catalogued video. A bank of video recording devices stores and indexes all video content on any number of broadcast sources. This video is stored along with the associated program information such as program name, description, airdate and channel. A parallel process obtains the text of the program, either from the closed captioning data stream, or by using a speech-to-text system. Once the text is decoded, stored, and indexed, users can then perform searches against the text, and view matching video immediately along with its associated text and broadcast information.
  • [0036]
Users can also retrieve program information by other methods, such as by airdate, originating station, program name and program description. Additionally, an alerting mechanism scans all content in real-time and can be configured to notify users by various means upon the occurrence of specified search criteria in the video stream. The system is preferably designed to be used on publicly available broadcast video content, but can also be used to catalog private video, such as conference speeches, or audio-only content such as radio broadcasts.
  • [0037]
The system according to the embodiments of the present invention non-selectively records all video/audio applied to it, and allows user searches to review all video on the system. A presentation clip is prepared and retrieved only under user control. Furthermore, searches can be performed at any time to examine archived video, rather than searches being the basis by which video is saved. Only video that the user specifically requests is shown, and any editing can be done under user control.
  • [0038]
    A general block diagram of a media monitoring system according to an embodiment of the present invention is shown in FIG. 1. Media monitoring system 100 comprises two major component groups. The first is the media management system 102, and the second is a user access system 104.
  • [0039]
The media management system 102 is responsible for receiving and archiving video and its corresponding audio. Preferably, the video and audio data is continuously received and stored. Video streams are tuned using media sources 106 such as satellite receivers, or other signal receiving devices such as cable boxes, antennas, VCRs and DVD players. Alternately, media sources 106 can include non-video media sources, such as digital radio sources, for example. Preferably, the media sources 106 receive digital signals. The video signal, corresponding audio, and any corresponding closed-captioned text are captured by video/audio capture hardware and software running on media servers 108. The video/audio data can be stored in any sized segment, such as in one-hour segments, with software code to later extract any desired segment of any size by channel, start time, and end time. Those of skill in the art will understand that the video/audio data can be stored in any suitable format. As shown in FIG. 1, media management system 102 can include any number of media servers 108, and each media server can be in communication with any number of media sources 106. If storage space is limited, the media servers can compress the digital data from the media sources 106 into smaller file sizes. The data from media sources 106 is stored in consecutive segments onto a mass storage device using uniquely generated filenames that encode the channel and airdate of the video segment. If the data is a video stream, closed-captioned text is extracted from the video stream and stored in web servers 114 as searchable text, as will be discussed later. The extracted closed-captioned text is indexed to its corresponding video/audio clips stored in the media servers 108.
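    By way of illustration, the following minimal Python sketch shows one way such a filename scheme might work; the filename format and helper names are hypothetical, not part of the described system.

        from datetime import datetime

        def segment_filename(channel: str, block_start: datetime) -> str:
            # Encode channel and airdate into a unique one-hour segment name.
            return f"{channel}_{block_start:%Y%m%d_%H0000}.mpg"

        def containing_segment(channel: str, moment: datetime) -> str:
            # Deduce which one-hour file holds a requested moment of video.
            block_start = moment.replace(minute=0, second=0, microsecond=0)
            return segment_filename(channel, block_start)

        # e.g. containing_segment("CBC-Ottawa", datetime(2005, 2, 24, 17, 55))
        # returns "CBC-Ottawa_20050224_170000.mpg"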
  • [0040]
The media management system 102 stores all of the text associated with the video stream. In most cases, this text is obtained from the closed-captioning signal encoded into the video. The media management system 102 can further include a closed-captioned text detector for detecting the absence of closed-captioned text in the data stream, in order to alert the system administrator that closed-captioned text has not been detected for a predetermined amount of time. Such an alert allows the system operator to take appropriate action to resolve the problem. In some cases, the stream may not be a digital stream, and the system can include a speech-to-text system to convert the audio signals into text. Accordingly, these sub-systems can be executed within each media server 108. The extracted text is broken into small sections, preferably into one-minute segments. Each text segment is then stored in a database along with the program name, channel, and airdate of the clip. The text is also pushed into an indexing engine of index servers 110, which allows it to be searched. In a preferred embodiment, the closed captioned text spanning a preset time received by index servers 110 is converted to XML format, bundled, and sent to web servers 114 for global storage via network 116. Web servers 114 can execute the searches for matches between user specified terms and the stored closed captioned text, via a web-based search interface. Alternately, the closed captioned text can be stored in index servers 110. The channel and airdate fields of the text segment allow it to be matched to a video clip stored by the media management system 102 as needed. Further details of media management system 102 will be described later.
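    A minimal sketch of such a text segment record and its XML bundling might look as follows; the element and field names are assumptions for illustration only.

        from dataclasses import dataclass
        from xml.etree import ElementTree as ET

        @dataclass
        class CaptionSegment:
            channel: str
            program: str
            airdate: str   # start of the one-minute unit, e.g. an ISO timestamp
            text: str      # roughly one minute of decoded caption text

        def bundle_to_xml(segments: list) -> bytes:
            # Bundle caption segments for transfer to the web servers.
            root = ET.Element("captions")
            for seg in segments:
                node = ET.SubElement(root, "segment", channel=seg.channel,
                                     program=seg.program, airdate=seg.airdate)
                node.text = seg.text
            return ET.tostring(root, encoding="utf-8")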
  • [0041]
Although not shown in FIG. 1, the media management system 102 includes an alerting system. This system watches each closed captioned segment as it is indexed and cross-references it against the stored list of user defined alerts. Any match triggers a user alert to notify the user that a match has occurred. Alerts can include in-system alerts, mobile device activation, pager activation, and automatic email generation, all of which can be generated from web servers 114.
  • [0042]
The user access system 104 can include access devices such as a computer workstation 118, or mobile computing devices such as a laptop computer 120 and a PDA 122. Of course, other wireless devices such as mobile phones can also be used. These web enabled access devices can communicate with the web servers 114 via the Internet 126, either wirelessly through Bluetooth or WiFi network systems, or through traditional wired systems. Optionally, users can dial up directly to the network 116 with a non-web search interface enabled computer 124. As will be shown later in FIG. 3, the user access system 104 further includes an alternate data transfer path for transferring video data to the access devices to reduce congestion within media management system 102. As previously discussed, each web server 114 can store identical copies of the closed captioned text bundle received from index servers 110. This configuration facilitates searches conducted by users since the text data is quickly accessible, and search results consisting of closed captioned text can be quickly forwarded to the user's access device.
  • [0043]
    From user access system 104, the user can search for occurrences of keywords, retrieve video by date and time, store alert parameters, etc. The user interface software can take the form of a web interface, locally run software, mobile device interface, or any other interactive form.
  • [0044]
    The back end portion of the web interface maintains a connection to the text database, as well as to the index of video streams. Depending on configuration, the user interface software can be used to stream video to the user, or alternatively, to direct the user to an alternate server where video will be presented.
  • [0045]
The previously described embodiment of the invention can be deployed locally, at a single site for example, to monitor all the media channels of interest. Therefore, networks 112 and 116 can be implemented as a local area network (LAN), such as in an office building for example. Local area networks typically provide high bandwidth operation. Alternately, media monitoring system 100 can be deployed across a wide network, meaning that the components of the system can be geographically dispersed, making networks 112 and 116 wide area networks (WAN). The bandwidth of a WAN is generally lower than that of a LAN. Of course, those of skill in the art will understand that the presently described system can be implemented with a combination of WAN and LAN.
  • [0046]
In a wide deployment embodiment of the invention, media servers 108 and their corresponding media sources 106 can be geographically distributed to collect and store local video, which is then shared within the system. For example, “pods” of media servers 108 and their corresponding media sources 106 can be located in different cities, and in different countries. As such, it is advantageous to store the relatively large video/audio data locally within respective media servers 108. In such an embodiment, the server the user is connected to may not physically be at the location where the video streams are being recorded. In the present context, the distributed media server pods are considered remotely connected to index servers 110, since they are connected via a WAN. However, an advantage of the present invention is that the monitoring and notification speed remains fast regardless of the network configuration of the media monitoring system 100, because the small closed captioned text can be rapidly transferred within the system, and more particularly, between the media servers 108 and the user access devices.
  • [0047]
Once the user chooses to view the corresponding video, the larger video data is accessed and sent to the user. Due to the size of the video, it is preferable to avoid congesting the networks 112, 116 and 126 and thereby limiting performance for all users. However, video may be transferred to the user in an all-LAN environment with satisfactory speed. In a system implementation with relatively high bandwidth, the user access device connects to index servers 110, which function as the conductor of traffic between media servers 108 and the user access device. Therefore, according to another embodiment of the invention, requested video can be sent directly from the appropriate media server 108 to the video enabled user access device.
  • [0048]
FIG. 2 illustrates the configuration of the media monitoring system 100 when video data is to be transferred to a user access device in a geographically distributed system. In the present example, one media server 108 and its corresponding media sources 106 represent a single video processing unit of a pod of video processing units 130, which may be deployed in a particular city and geographically distant from index servers 110 and network 112. The pod 130 remains in communication with remote access devices 118, 120 and 122 via LAN/WAN network 132, which may be geographically distant from pod 130. Hence, once a user requests a particular video clip, the request is sent directly to the appropriate media server 108, which then transfers the requested video clip, parsed as requested by the user, to their access device via WAN/LAN network 132. Media server 108 can include a parser for providing the requested video clip that corresponds with the time-indexed closed captioned text. Since the video clips are received through a path outside of the media management system 102 and user access system 104, the potential for congestion of data traffic within the system is greatly reduced. At the same time, multiple users can receive their respective requested video clips rapidly.
  • [0049]
In general operation, when a user specifies key search terms through their computer or wireless device, the index servers will search the archived closed captioned text, and notify the user if any matches have occurred. Matches are displayed with the relevant bibliographic information such as air date and channel. The user then has the option of viewing and hearing a time segment of the videos containing the matched terms, the time segment being selectable by the user. The search of key terms can extend to future broadcasts, such that the search is conducted dynamically in real-time. Thus, the user can be notified shortly after a search term has been matched in a current broadcast. Since the video broadcast is recorded, the user can selectively view the entire broadcast, or any portion thereof.
  • [0050]
    FIG. 3 illustrates a block diagram of the general functional components of media monitoring system 100 shown in FIG. 1.
  • [0051]
The media monitoring system 100 converts a video signal to an indexed series of digital files on a mass storage system, which can then be retrieved by specifying the desired channel, start time, and end time. This capability is then used to supply the actual video that matches the search result from the user interface component. Video is archived at a specified quality, depending on operator configuration. Higher quality settings allow for larger video frames, higher frame rates, and greater image detail, but with a penalty of greater file storage requirements. All parameters are configurable by the operator at the system level. As previously mentioned, the video/audio signal to be archived is made available from an external source. In practice, this usually consists of an antenna, or a satellite receiver or cable feed supplied by a signal provider. Any standard video signal may be used, although the originating device preferably supports encoding of closed-captions in the Vertical Blanking Interval (VBI), the dead time during which the display's scanning beam, having finished at the bottom of the screen, returns to the top. The system can also be configured to store audio-only content should the signal not have a video component.
  • [0052]
The video/audio signal is applied to the input of a video capture device 200, which, either through a hardware or a software compression system 202, converts the video signal to a digital stream. Video capture device 200 and compression system 202 can be implemented in the media servers 108 of FIG. 1. The exact format of this stream can be specified by the operator, but is typically chosen to be a compressed stream in a standard format such as MPEG or AVI. The video capture process outputs a continuous stream of video, which is then divided into manageable files. According to an embodiment of the present invention, the files are preferably limited to one-hour blocks of video. These files are then stored on a mass storage system 204 within their respective media servers 108, indexed by the channel they represent and the block of time during which the video was recorded. Accordingly, mass storage system 204 locally stores the video/audio data for its corresponding media sources 106.
  • [0053]
Video clips can be retrieved from mass storage system 204 in response to retrieval requests from permitted machines. These requests would be generated from servers that are serving users who have requested a video clip. From the user's standpoint, this video clip is chosen by its content, but the system knows it as belonging to a specified channel for a given period of time. Most user clip requests are for small segments of video, an example being “CBC-Ottawa, 5:55 pm-5:58 pm”. The archive system, using the channel and the date required, first deduces which large file the video segment is located in. It then parses the video file to locate and extract the stream data representing the segment selected. The stream data is then re-encapsulated to convert it to a stand-alone video file, and the result is returned to the calling machine, ultimately to be delivered to the user.
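    A minimal sketch of this retrieval step is shown below, reusing the hypothetical containing_segment helper from the earlier sketch and assuming the ffmpeg command-line tool is available for the re-encapsulation; clips spanning an hour boundary would need additional concatenation logic, omitted here.

        import subprocess
        from datetime import datetime

        def extract_clip(channel: str, start: datetime, end: datetime,
                         out_path: str) -> str:
            # Deduce the one-hour archive file, then stream-copy the
            # requested span into a stand-alone clip (no re-encoding).
            source = containing_segment(channel, start)
            hour_start = start.replace(minute=0, second=0, microsecond=0)
            offset = (start - hour_start).total_seconds()
            duration = (end - start).total_seconds()
            subprocess.run(["ffmpeg", "-ss", str(offset), "-i", source,
                            "-t", str(duration), "-c", "copy", out_path],
                           check=True)
            return out_path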
  • [0054]
    Since storage space is finite, the system can continuously replace the oldest video streams in its archive with the newest. This ensures that as much video is stored as possible. Additional storage can be added or removed as needed.
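    One plausible implementation of this rotation, sketched in Python under the assumption that each archived segment is a file in a single directory, is:

        import os
        import shutil

        def rotate_archive(archive_dir: str, min_free_fraction: float = 0.05):
            # Remove the oldest segment files until enough disk is free,
            # so the newest video continuously replaces the oldest.
            usage = shutil.disk_usage(archive_dir)
            free = usage.free
            segments = sorted((e for e in os.scandir(archive_dir) if e.is_file()),
                              key=lambda e: e.stat().st_mtime)
            for entry in segments:
                if free / usage.total >= min_free_fraction:
                    break
                free += entry.stat().st_size
                os.remove(entry.path)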
  • [0055]
    Media monitoring system 100 can include self monitoring functions to ensure robust operation, and to minimize potential errors. For example, the video digitizing process has the ability to detect the lack of video present at its input. This condition will raise an operator alert to allow the operator to locate the cause of the outage. In the field, this can be attributed to cabling problems, weather phenomena, hardware failure, upstream problems, etc. In certain cases the system can be configured to attempt an automatic repair, by restarting or re-initializing a process or external device.
  • [0056]
The closed captioned text associated with the video is preferably extracted from the closed captioning stream in the video signal, or obtained from an associated speech-to-text device. If closed captioning data is available in the video signal, the signal is applied to a decoder 206, typically located in each media server 108, that can read the VBI stream. The decoder 206 extracts the closed captions that are encoded into the video signal. In practice, this can be the same device performing the video compression, and the extraction can be done in software. If closed captioning data is not available, the audio stream is fed into a speech-to-text device instead of decoder 206, and the resulting text is fed into the system. This option can also be used if the content is not a video signal, such as a commercial radio stream or recorded speech. The decoder 206 includes a buffer, into which text accumulates at “human reading” speed. After a short increment of time, preferably one minute, the text buffer is stored into text database 208 along with the channel and time information associated with the clip. This database 208 then contains a complete record of all text that has flowed through the system, sorted by channel and airdate. As previously mentioned, database 208 can be located within either index servers 110 or web servers 114. In either case, database 208 functions as global storage of the decoded closed captioned text.
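    A minimal sketch of such a buffer, with hypothetical names and a pluggable store callback standing in for the database write, might be:

        import time

        class CaptionBuffer:
            # Accumulates decoded caption text and flushes it in roughly
            # one-minute units, keyed by channel and start time.
            def __init__(self, channel, store, flush_seconds=60):
                self.channel = channel
                self.store = store            # callable(channel, start, text)
                self.flush_seconds = flush_seconds
                self.start = time.time()
                self.parts = []

            def feed(self, text):
                self.parts.append(text)
                if time.time() - self.start >= self.flush_seconds:
                    self.store(self.channel, self.start, " ".join(self.parts))
                    self.parts, self.start = [], time.time()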
  • [0057]
To facilitate and accelerate searching, the program text is provided to an indexing engine 210. Indexing engine 210, implemented in index servers 110, receives a block of text, which in this case represents a small unit of video transcript (typically one minute), and stores it in a format that is optimized for full text searches. For practical implementation purposes, standard off-the-shelf products can be employed for the indexing function. According to the presently described embodiments, the video captions are indexed by channel and time, for example. The formatted text is stored in index database 212, which can be located in index servers 110 or web servers 114. Database 212 can also function as global storage of all the formatted text.
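    As an illustration of such an off-the-shelf indexing function, the following sketch uses SQLite's FTS5 full-text extension; this is one of many possible products, and the schema shown is an assumption.

        import sqlite3

        db = sqlite3.connect("captions.db")
        # One row per one-minute unit of transcript; channel and airtime
        # are carried along so matches can be tied back to archived video.
        db.execute("""CREATE VIRTUAL TABLE IF NOT EXISTS caption_index
                      USING fts5(text, channel UNINDEXED, airtime UNINDEXED)""")

        def index_unit(text, channel, airtime):
            db.execute("INSERT INTO caption_index VALUES (?, ?, ?)",
                       (text, channel, airtime))
            db.commit()

        def search(query):
            # Results carry the channel and air time of each matching unit.
            return db.execute("SELECT channel, airtime FROM caption_index "
                              "WHERE caption_index MATCH ?", (query,)).fetchall()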
  • [0058]
For searching the stored text, the user's search string is submitted to a full text search engine that searches database 212. Any results returned from this engine also contain indexes to the corresponding channel and time of the airing. Furthermore, since the entire text is stored in database 208, it can be retrieved using standard techniques to search on the channel and air time. It is noted that database 212 is used for full text searching, while database 208 is ordered by time and channel to facilitate lookup by time and channel.
  • [0059]
Due to the small size of text streams, all extracted text can be retained for as long as required, even after its corresponding video clip has been deleted. The cleanup thread of the text system removes the captions from the database and the search index as they expire from the archival service. Alternatively, they may be retained as long as desired, but are flagged to indicate that the associated video is no longer available. Additional search options allow searches to include this “archived” text if desired.
  • [0060]
    Once video data has been received, processed and archived in media management system 102 as previously described, user-defined searches can be executed through user access system 104. Operating upon each access device is a user search interface that provides the functionality of the system. The interface is designed to allow users with minimal training to be able to perform text searches, examine the program text that matches, and selectively view or archive the video streams where the captioning appeared. While the reference application is a web-based system, the system can be searched through other means, such as mobile WiFi devices, Bluetooth-enabled devices, and locally running software, for example.
  • [0061]
    Following is an example of a common interactive mode of operation between a user and the media monitoring system 100 shown in FIG. 1. FIG. 4 shows a flow chart of the process executed by the media monitoring system 100, while FIGS. 5-8 are examples of user interface screens that prompt the user for information and display results to the user.
  • [0062]
    The process begins at step 300, where the user logs into the interface with the goal of researching a topic's appearance in the recent media. The user is presented with a screen that allows them to enter the search terms that would match their desired content. Common search parameters are provided, such as specifying phrases that must appear as typed, words that should appear within a certain distance of each other, boolean queries, etc. As well, the query can be limited to only return results from specific broadcast channels. FIG. 5 is an example user interface for prompting the search parameters from the user.
  • [0063]
Upon submitting the form, the search parameters provided by the user are first groomed at step 302. Grooming is an optional step, which refers to optimization of the search parameters, especially if the user's search parameters are malformed. For example, the user may enter “red blue” in the MUST CONTAIN THESE WORDS search field, and “GREEN” in the MAY CONTAIN search field. The grooming process then optimizes the search parameters to “GREEN RED AND BLUE”. The groomed search parameters are compared to database 208 that stores all the closed captioned text. The user is presented with a match results page at step 304, itemizing the results obtained, the programs they appeared in, and a score that represents how strong the match was. The results can be sorted in numerous ways, such as by date, by program name, or by score. A compact example results page is shown in FIG. 6, and a more detailed version is shown in FIG. 7. In both the compact and detailed results pages, the user can select any row to view further details of that program segment. The results pages shown in FIGS. 6 and 7 may list consecutive segments belonging to the same broadcast, since the search term appears in each segment. For example, the results may return “Channel Y, 6:00 pm to 6:01 pm”, “Channel Y, 6:01 pm to 6:02 pm” and “Channel Y, 6:02 pm to 6:03 pm” as separate program segment items. The system can optimize the results by recognizing that the three segments are chronological segments of Channel Y, and collapse the results into a simplified description, such as “Channel Y, 6:00 pm to 6:03 pm”.
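    The collapsing of chronologically adjacent results can be sketched as follows; this is a minimal illustration, and the tuple layout and names are assumptions.

        def collapse(results):
            # Merge adjacent one-minute matches on the same channel into
            # a single (channel, start, end) range.
            merged = []
            for channel, start, end in sorted(results):
                if merged and merged[-1][0] == channel and merged[-1][2] == start:
                    merged[-1][2] = end          # extend the previous range
                else:
                    merged.append([channel, start, end])
            return merged

        # collapse([("Y", "18:00", "18:01"), ("Y", "18:01", "18:02"),
        #           ("Y", "18:02", "18:03")]) -> [["Y", "18:00", "18:03"]]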
  • [0064]
    Upon selecting a program segment at step 306, the user is presented with a caption viewing screen showing the matching captioning and timing information, as shown in FIG. 8. The present screen gives the user the option of viewing the clip associated with the shown extracted closed captioned text. From the caption viewing screen, the user is also presented with a navigation system that allows the user to move forward or backward in the video stream beyond the matched segment, to peruse the context that the clip was presented in. The caption viewing screen also features controls to compose a video clip that consists of several consecutive units of video. More specifically, the user has the ability to define the start and end points of a video clip, and then view or save that clip. This is suitable for preparing a salient clip that is to be saved for future reference.
  • [0065]
If the user chooses not to view the corresponding video clip at step 306, the process can return to step 300 to restart the search. Optionally, the process can return to step 304 to permit the user to view the results page and select a different program segment. If the user chooses to view the corresponding video clip, then the system determines if the video clip is stored locally at step 308. It is important to note that a locally stored video clip refers to one that is accessible via a high bandwidth network, which is typically available in a local area network, such as in an office environment. In contrast, remotely stored video clips are generally available only through a low bandwidth network, or one whose bandwidth is too low to carry a copy of all video all the time. As previously discussed, the user can access the video remotely over a low bandwidth connection. Therefore, the process provides a video access method optimized according to whether or not the user is accessing the system remotely. If the video clip is stored locally, i.e., on a high bandwidth connection suitable for streaming video, then the system proceeds to step 310. At step 310, the video clip is retrieved and assembled with the appropriate video segments, and then displayed for the user at step 312. The video clip can be played with the user's preferred video playing software. Alternately, at step 308, if the video clip is not stored locally, the system proceeds to step 314, where a query is sent to the specific remote server that will return the video that the user is asking for. The video clip is retrieved from the remote system at step 316, and finally displayed for the user at step 312. Once the clip has ended, the user has the option of returning to step 304 to view another program segment. Alternately, the user may return to step 300 to initiate a new search.
  • [0066]
Some installations and user devices (such as WiFi or Bluetooth wireless devices) do not have the ability to view video clips. In this scenario, the video clip can be ordered through the user interface, and it will be delivered to the user via email, via a link to a web site, or on a physical medium such as a DVD, CD or video cassette, for example. This service is suitable for clients requiring a permanent copy of especially important video segments.
  • [0067]
    The previously described manual interactive operation method of FIG. 4 is effective for searching and viewing archived video. According to an embodiment of the present invention, the media monitoring system 100 can concurrently operate in an automatic scanning mode to match user defined terms with real time extracted closed captioned text. The user can selectively activate the alerting system to provide notification for specific terms.
  • [0068]
As previously described, searches can be stored by users so that they are executed on all incoming text corresponding to real-time recorded video. Any match will selectively generate an immediate alert, which can be communicated to the user by various means. Selective generation of an alert refers to the fact that the user can set specific search terms to trigger an alert when matched. The stored search terms are archived in a search term database, preferably located on web servers 114, including parameters reflecting the desired level of alerting the user has requested. Examples of such alerting levels include “never alert me”, “alert me by putting a message in the product”, and “alert me urgently by sending an email to my mobile device”.
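    A minimal sketch of such a stored search record, with hypothetical level names mirroring the examples above, might be:

        from dataclasses import dataclass
        from enum import Enum

        class AlertLevel(Enum):
            NEVER = "never alert me"
            IN_SYSTEM = "message in the product"
            URGENT_EMAIL = "urgent email to mobile device"

        @dataclass
        class StoredSearch:
            user: str
            terms: str
            alert_level: AlertLevel = AlertLevel.NEVER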
  • [0069]
The automatic scanning mode method of operation of the media monitoring system 100 is described with reference to FIG. 9. It is assumed that the following process operates upon each stored unit of program text after the text is stored and indexed; the index is then searched again with the terms to detect if anything new appears. It is further assumed that the user has previously defined his/her search terms and stored them in a search term database 404, which can be physically located on web server 114. The process begins at step 400, where the text from index database 212 for the unit is retrieved. At step 402, a search term from the user's search term database 404 is retrieved and compared to the stored unit of program text at step 406. If no match is found, the system proceeds to step 408, where the system checks if there are any further search terms to check against the stored unit of program text. If there are no more search terms, the process ends at step 410. Otherwise, the system loops back to step 402 to fetch the next search term.
  • [0070]
    If a match was found at step 406, the system proceeds to step 412 to store the match information in a results database 414. This results database is preferably located on web server 114, and is local to the user's portal. The results summarize matches between the search terms and the video clips for the user when they log in to their portal. At step 416, the system checks if the user has activated an alert for the present search term. If an alert has been activated for the present search term, the system generates a notification message for the user at step 418, in accordance with their desired alert level. Depending on settings and system configuration, this alert/notification can be delivered using a number of methods, including but not limited to alerts in the interface, via email, or via mobile and wireless devices, for example. Once the user has been alerted at step 418, or if no alert has been activated for the present search term at step 416, the system proceeds to step 408 to determine whether there are any further search terms. This aforementioned process is executed for each unit of program text stored in the index.
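    Steps 412 through 418 can be sketched as the match branch of the same loop, reusing the StoredSearch and AlertLevel sketch above. The results_db list and send_alert() function are assumed stand-ins for the results database 414 and the notification mechanisms; none of these names come from the patent.

```python
def process_unit(unit_id, unit_text, watchlist, results_db):
    """Sketch of FIG. 9, steps 406-418, for one unit of program text."""
    for search in watchlist:                            # steps 402/408
        if search.terms.lower() in unit_text.lower():   # step 406: match?
            results_db.append((search.user_id, unit_id))     # step 412
            if search.alert_level is not AlertLevel.NEVER:   # step 416
                send_alert(search, unit_id)                  # step 418


def send_alert(search, unit_id):
    # Placeholder notifier: a deployment would choose an in-product
    # message, email, or mobile push according to search.alert_level.
    print(f"ALERT[{search.alert_level.name}] user={search.user_id} "
          f"matched {search.terms!r} in unit {unit_id}")
```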
  • [0071]
    Therefore, should the user add a new search term to his/her search term database at a later time, the media monitoring system of the present invention can immediately search the archives to identify any prior program segments that match the new search term, and then monitor new program segments for occurrences of that term.
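    In other words, adding a term triggers a one-time scan of the indexed backlog, after which the live loop of FIG. 9 covers everything recorded later. A minimal sketch follows, reusing the names above and assuming index_db is a dict mapping unit identifiers to decoded text.

```python
def add_search_term(search, watchlist, index_db, results_db):
    """Register a new StoredSearch and backfill matches from the archive."""
    watchlist.append(search)          # future units: handled by the live loop
    for unit_id, unit_text in index_db.items():      # archived units
        if search.terms.lower() in unit_text.lower():
            results_db.append((search.user_id, unit_id))
```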
  • [0072]
    The system described in this application stores all video from all channels, allowing searches to be refined or changed at will with instant results. As well, insights gained from the results of one query can be incorporated into a new search, again with instant results.
  • [0073]
    This invention improves the user experience by storing and indexing all recent video and captions. This permits not only unlimited queries with real-time results, but also new searches inspired by those results to be performed immediately.
  • [0074]
    The aforementioned embodiments of the present invention record and store video/audio clips that are broadcast across any number of channels. There are instances where the same video clip is broadcast by affiliated channels; an example is the set of channels affiliated with CTV. Hence, there is a great likelihood that a user's search parameters will return duplicate video clips. In an enhancement to the embodiments of the present invention, web server 114 can include a duplicate video clip detector to mark matching video clip results that are essentially the same. This function can be executed in web servers 114 as search results are returned to the user. For example, the text of the returned search results can be scanned and duplicates marked as such. This feature allows the user to view one video clip and dismiss those marked as duplicates very quickly, without opening and viewing each one. Preferably, the duplicate video clip detector is implemented on web server 114, but it can alternatively be executed in index servers 110. Generally, a first matching result is added to the database and then fuzzy matching is executed to determine whether further matches are essentially the same as the first stored instance. If so, the duplicates are marked as such for the user's convenience. Those of skill in the art will understand that an essential match between two clips is one where a substantial percentage of the content is the same. Naturally, this percentage can be preset by the system administrator.
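    One way to picture the fuzzy-matching pass is as a caption-text similarity ratio, as in the sketch below. It uses difflib from Python's standard library purely as an illustrative stand-in; the patent does not name a specific matching algorithm, and the 0.9 threshold stands in for the administrator-preset percentage.

```python
from difflib import SequenceMatcher


def mark_duplicates(results, threshold=0.9):
    """Flag results whose caption text is essentially the same as an
    earlier result; the first occurrence stays unmarked."""
    kept = []
    for result in results:            # each result: dict with a "text" key
        result["duplicate"] = any(
            SequenceMatcher(None, result["text"], prior["text"]).ratio()
            >= threshold
            for prior in kept
        )
        if not result["duplicate"]:
            kept.append(result)       # first instance of this clip
    return results
```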
  • [0075]
    The above-described embodiments of the present invention are intended to be examples only. Alterations, modifications and variations may be effected to the particular embodiments by those of skill in the art without departing from the scope of the invention, which is defined solely by the claims appended hereto.
Classifications
U.S. Classification: 1/1, 725/38, 707/999.002
International Classification: G06F17/30, H04N7/173, G06F13/00
Cooperative Classification: G06F17/30796, G06F17/30817
Legal Events
Date: Feb 24, 2005; Code: AS (Assignment)
Owner name: DNA13 INC., CANADA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: BOICEY, TREVOR NELSON; JOHNSON, CHRISTOPHER JAMES; REEL/FRAME: 016331/0106
Effective date: 20050223