Publication number: US 20070101266 A1
Publication type: Application
Application number: US 11/614,406
Publication date: May 3, 2007
Filing date: Dec 21, 2006
Priority date: Oct 11, 1999
Also published as: US 7181757
Inventors: Jae Gon Kim, Hyun Sung Chang, Munchurl Kim, Jin Woong Kim
Original Assignee: Electronics and Telecommunications Research Institute
Video summary description scheme and method and system of video summary description data generation for efficient overview and browsing
US 20070101266 A1
Abstract
A video summary description scheme describes video summary intervals with metadata providing an overview functionality, which makes it feasible to understand the overall contents of the original video within a short time, as well as navigation and browsing functionalities, which make it feasible to search the desired video contents efficiently. A Video Summary DS includes at least one HighlightSegment DS that describes information on a highlight segment corresponding to one of the video summary intervals, wherein the HighlightSegment DS comprises a VideoSegmentLocator DS describing the highlight segment and an ImageLocator DS describing a representative frame of the highlight segment.
Images (5)
Claims(23)
1. A computer-readable recording medium having a Video Summary Description Scheme (DS) for describing a video summary interval stored therein, the Video Summary DS comprising: at least one HighlightSegment DS for describing information about a highlight segment corresponding to the video summary interval, wherein the HighlightSegment DS comprises a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.
2. The computer-readable recording medium of claim 1 wherein the VideoSegmentLocator DS comprises one of time information and video itself of the highlight segment.
3. A method for generating video summary description data corresponding to original video according to a video summary description scheme, comprising the steps of:
(a) analyzing the original video and producing a video analysis result;
(b) defining a summary rule for selecting a video summary interval;
(c) selecting the video summary interval capable of summarizing video contents from the original video based on the original video analysis result and the summary rule, which constitute video summary interval information;
(d) extracting a representative frame based on the video summary interval information; and
(e) generating video summary description data according to a Video Summary Description Scheme (DS) for enabling execution of browsing based on the video summary interval information and the representative frame,
wherein the Video Summary DS comprises at least one HighlightSegment DS for describing information on a highlight segment corresponding to the video summary interval, wherein the HighlightSegment DS comprises a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.
4. The method of claim 3 wherein the VideoSegmentLocator DS comprises one of time information and video itself of the highlight segment.
5. The method of claim 4 wherein step (a) comprises the steps of:
extracting features from the original video and outputting the types of features and video time interval at which those features are detected;
detecting key events included in the original video based on the types of features and video time interval at which those features are detected; and
detecting an episode by dividing the original video according to story flow on the basis of the detected key events.
6. The method of claim 4 wherein step (d) comprises the step of extracting a representative sound from the video summary interval information.
7. The method of claim 4 wherein the HighlightSegment DS further comprises a SoundLocator DS describing representative sound information of the highlight segment.
8. The method of claim 4 wherein the HighlightSegment DS further comprises an AudioSegmentLocator DS describing audio segment information constituting an audio summary of the highlight segment.
9. A system for generating video summary description data of original video according to a video summary description scheme, comprising:
video analyzing means for analyzing the original video and producing a video analysis result;
summary rule defining means for defining a summary rule for selecting a video summary interval;
video summary interval selecting means for selecting a video interval capable of summarizing the video contents of the original video and outputting video summary interval information based on the video analysis result from the video analyzing means and the summary rule from the summary rule defining means;
representative frame extracting means for outputting a representative frame representing the video summary interval based on the video summary interval information from the video summary interval selecting means; and
video summary describing means for generating video summary description data with a Video Summary Description Scheme (DS) by inputting the video summary interval information from the video summary interval selecting means and the representative frame information from the representative frame extracting means,
wherein the Video Summary DS comprises at least one HighlightSegment DS for describing information on a highlight segment corresponding to the video summary interval, wherein the HighlightSegment DS comprises a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.
10. The system of claim 9 wherein the VideoSegmentLocator DS comprises one of time information and video itself of the highlight segment.
11. The system of claim 10 wherein the video analyzing means comprises:
feature extracting means for extracting features from the original video and producing types of features and a video time interval at which the types of features are detected;
event detecting means for detecting key events included in the original video by inputting the types of features and the video time interval at which the types of features are detected; and
episode detecting means for detecting an episode by dividing the original video according to story flow on the basis of the detected event.
12. The system of claim 10, further comprising representative sound extracting means for extracting a representative sound by inputting the video summary interval information and providing the extracted representative sound to a video summary describing means.
13. The system of claim 10 wherein the HighlightSegment DS further comprises a SoundLocator DS for describing representative sound information of the highlight segment.
14. The system of claim 10 wherein the HighlightSegment DS further comprises an AudioSegmentLocator DS for describing audio segment information constituting an audio summary of the highlight segment.
15. An apparatus for browsing video summary description data, the video summary description data having a Video Summary Description Scheme (DS) for describing video summary intervals, the Video Summary DS comprising at least one HighlightSegment DS for describing information on a highlight segment corresponding to one of the video summary intervals, the HighlightSegment DS comprising a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.
16. The apparatus of claim 15 wherein the apparatus is arranged to display a representative frame of the highlight segment on a display device and to play the highlight segment.
17. The apparatus of claim 16 wherein the VideoSegmentLocator DS describes one of time information and video itself of the highlight segment.
18. The apparatus of claim 17 wherein the HighlightSegment DS further comprises:
a SoundLocator DS for describing representative sound information of the highlight segment; and
an AudioSegmentLocator DS for describing audio segment information constituting an audio summary of the highlight segment.
19. An apparatus for browsing video summary description data corresponding to an original video, the video summary description data having a HierarchicalSummary Description Scheme (DS) for describing a video summary, the apparatus comprising:
a video player for playing an original video or the video summary;
an original video representative frame player for playing a representative frame of the original video; and
a video summary representative frame player for playing a summary level of video interval.
20. The apparatus of claim 19 wherein the HierarchicalSummary DS comprises a HighlightLevel DS that comprises at least one HighlightSegment DS describing information on a highlight segment corresponding to the video summary interval,
the HighlightSegment DS comprising a VideoSegmentLocator DS for describing the highlight segment, and an ImageLocator DS for describing a representative frame of the highlight segment.
21. The apparatus of claim 20 wherein the VideoSegmentLocator DS describes one of time information and video itself of the highlight segment.
22. The apparatus of claim 21 wherein the HighlightSegment DS further comprises:
a SoundLocator DS for describing representative sound information of the highlight segment; and
an AudioSegmentLocator DS for describing audio segment information constituting an audio summary of the highlight segment.
23. A Video Summary Description Scheme (DS) for describing video summary intervals of an original video, wherein the Video Summary DS comprises at least one HighlightSegment DS for describing information on a highlight segment corresponding to one of video summary intervals, wherein the HighlightSegment DS comprises a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a video summary description scheme for efficient video overview and browsing, and also relates to a method and system of video summary description generation to describe video summary according to the video summary description scheme.

The technical fields to which the present invention pertains are content-based video indexing, browsing and searching, and summarizing a video according to its content and then describing the summary.

2. Description of the Related Art

Video summaries largely fall into two formats: the dynamic summary and the static summary. The video description scheme according to the embodiments of the present invention efficiently describes both the dynamic summary and the static summary within a unified description scheme.

Generally, because existing video summary description schemes simply provide the information on the video intervals included in the video summary, they are limited to conveying the overall video contents through playback of the video summary.

In many cases, however, what is needed is not merely an overview of the overall contents through the video summary, but browsing for identifying and revisiting parts of interest on the basis of that overview.

Also, the existing video summary provides only the video intervals considered important according to criteria determined by the video summary provider. Accordingly, if the criteria of the users and the video provider differ, or if users have special criteria of their own, the users cannot obtain the video summary they desire.

That is, although the existing video summary permits users to select a video summary at a desired level by providing several levels of summary, it limits the extent of user selection, so that users cannot select according to the contents of the video summary.

U.S. Pat. No. 5,821,945, entitled “Method and apparatus for video browsing based on content and structure,” represents video in a compact form and provides browsing functionality for accessing video with desired content through that representation.

However, that patent pertains to a static summary based on representative frames: the video is summarized using a representative frame for each shot, and the representative frame provides only visual information representing the shot. The patent is therefore limited in the information its summary scheme can convey.

In contrast to that patent, the video description scheme and browsing method of the embodiments described herein utilize a dynamic summary based on video segments.

A video summary description scheme was proposed in the MPEG-7 Description Scheme (V0.5), announced in ISO/IEC JTC1/SC29/WG11 MPEG-7 Output Document No. N2844 in July 1999. Because that scheme describes only the interval information of each video segment of the dynamic video summary, it provides basic functionalities for describing a dynamic summary but has problems in the following aspects.

First, it cannot provide access to the original video from the summary segments constituting the video summary. That is, when users want to access the original video to obtain more detailed information on the basis of the summary contents and the overview gained through the video summary, the existing scheme cannot meet that need.

Second, the existing scheme cannot provide sufficient audio summary description functionality.

Finally, when representing an event-based summary, duplicate descriptions and search complexity are unavoidable.

BRIEF SUMMARY OF THE INVENTION

The disclosed embodiments of the present invention provide a hierarchical video summary description scheme that comprises representative frame information and representative sound information for each video interval included in the video summary, makes feasible a user-customized, event-based summary in which users select the contents of the video summary, and enables efficient browsing. Also provided are a video summary description data generation method and system using the description scheme.

According to one aspect of the present invention, a Video Summary DS is provided that includes at least one HighlightSegment DS describing information on a highlight segment corresponding to one or more video summary intervals, wherein the HighlightSegment DS includes a VideoSegmentLocator DS describing the highlight segment and an ImageLocator DS describing a representative frame of the highlight segment.

In a computer-readable recording medium according to the present invention, a Video Summary Description Scheme (DS) is provided for describing a video summary stored in the computer readable recording medium. The Video Summary DS includes at least one HighlightSegment DS for describing information on a highlight segment corresponding to one or more video summary intervals, and the HighlightSegment DS includes a VideoSegmentLocator DS describing the highlight segment and an ImageLocator DS describing a representative frame of the highlight segment.

A method for generating video summary description data according to the present invention is provided, the method including the steps of:

(a) analyzing the input original video and producing a video analysis result;

(b) defining a summary rule for selecting video summary intervals;

(c) selecting the video summary interval capable of summarizing video contents from the original video based on the original video analysis result and the summary rule, which together constitute video summary interval information;

(d) extracting a representative frame based on the video summary interval information; and

(e) generating video summary description data according to a Video Summary Description Scheme (DS) that enables execution of browsing based on the video summary interval information and the representative frame, the Video Summary DS including at least one HighlightSegment DS for describing information on a highlight segment corresponding to one of the video summary intervals, wherein the HighlightSegment DS includes a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.

The present invention also includes a system for generating video summary description data according to a video summary description scheme corresponding to an original video, the system, including:

a video analyzer for analyzing the input original video and producing a video analysis result;

a summary rule definer for defining the summary rule for selecting the video summary interval;

a video summary interval selector for selecting one of the video summary intervals capable of summarizing the video contents of the original video and outputting video summary interval information based on the video analysis result from the video analyzer and the summary rule from the summary rule definer;

a representative frame extractor for outputting a representative frame representing the video summary interval based on the video summary interval information from the video summary interval selector; and

a video summary describer for generating video summary description data with a Video Summary Description Scheme (DS) by inputting the video summary interval information from the video summary interval selector and the representative frame information from the representative frame extractor,

wherein the Video Summary DS includes at least one HighlightSegment DS for describing information on a highlight segment corresponding to one of the video summary intervals, wherein the HighlightSegment DS includes a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.

An apparatus for browsing video summary description data according to the present invention is also provided, the video summary description data having a Video Summary Description Scheme (DS) for describing a video summary interval, wherein the Video Summary DS includes at least one HighlightSegment DS for describing information on a highlight segment corresponding to one of the video summary intervals, wherein the HighlightSegment DS includes a VideoSegmentLocator DS for describing the highlight segment and an ImageLocator DS for describing a representative frame of the highlight segment.

The browsing apparatus includes:

a video player for playing an original video or the video summary interval;

an original video representative frame player for playing a representative frame of the original video;

a first video summary representative frame player for playing a first summary level of the video summary interval,

a second video summary representative frame player for playing a second summary level of a video summary interval, wherein the second summary level is summarized more finely than the first summary level;

a level selector for selecting the first summary level or the second summary level thereby enabling the video player to play the selected summary level; and

an event selector for enumerating the events or subjects so that a user can browse a desired event.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

The embodiments of the present invention will be explained with reference to the accompanying drawings, in which:

FIG. 1 is a block diagram illustrating a system for generating video summary description data according to the description scheme of the present invention.

FIG. 2 is a drawing that illustrates the data structure of the HierarchicalSummary DS describing the video summary description scheme according to the present invention in UML (Unified Modeling Language).

FIG. 3 is a compositional drawing of a user interface of a tool for playing and browsing the video summary, which takes as input the video summary description data described by the description scheme of FIG. 2.

FIG. 4 is a compositional drawing for the flow of the data and control for hierarchical browsing using the video summary of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

The present invention will be described in detail by way of a preferred embodiment with reference to accompanying drawings, in which like reference numerals are used to identify the same or similar parts.

FIG. 1 is a block diagram illustrating a system for generating video summary description data according to the description scheme of the present invention.

As illustrated in FIG. 1, the apparatus for generating video description data according to an embodiment of the present invention is composed of a feature extracting part 101, an event detecting part 102, an episode detecting part 103, a video summary interval selecting part 104, a summary rule defining part 105, a representative frame extracting part 106, a representative sound extracting part 107 and a video summary describing part 108.

The feature extracting part 101 receives the original video as input and extracts the features necessary to generate the video summary. Typical features include shot boundaries, camera motion, caption regions, face regions, and so on.

In the feature extracting step, those features are extracted, and the types of features and the video time intervals at which they are detected are output to the event detecting step in the format of (feature type, feature serial number, time interval).

For example, in the case of camera motion, (camera zoom, 1, 100˜150) represents that the first camera zoom was detected in frames 100˜150.
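As an illustration of this record format, the following hypothetical Python sketch (the patent provides no code; all names here are invented for illustration) parses the document's (feature type, serial number, time interval) notation:

```python
from typing import NamedTuple

class FeatureRecord(NamedTuple):
    """One output of the feature extracting part: (feature type, serial, interval)."""
    feature_type: str   # e.g. "camera zoom", "shot boundary", "caption region"
    serial: int         # ordinal of this feature occurrence
    start_frame: int
    end_frame: int

def parse_feature(text: str) -> FeatureRecord:
    """Parse a record written in the '(camera zoom, 1, 100~150)' notation."""
    ftype, serial, interval = [p.strip() for p in text.strip("() ").split(",")]
    start, end = interval.split("~")
    return FeatureRecord(ftype, int(serial), int(start), int(end))

record = parse_feature("(camera zoom, 1, 100~150)")
# record describes the first camera zoom, spanning frames 100 to 150.
```

This merely names the fields the text describes; the actual exchange format between the extracting and detecting parts is not specified by the patent.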

The event detecting part 102 detects the key events included in the original video. These events must represent the contents of the original video well, because they are the references for generating the video summary. The events are generally defined differently according to the genre of the original video.

These events may either represent a higher meaning level or be visual features from which a higher meaning can be directly inferred. For example, in the case of soccer video, goal, shoot, caption, replay, and so on can be defined as events.

The event detecting part 102 outputs the types of detected events and their time intervals in the format of (event type, event serial number, time interval). For example, the event information indicating that the first goal occurred between frames 200 and 300 is output in the format of (goal, 1, 200˜300).

The episode detecting part 103, on the basis of the detected events, divides the video into episodes, a larger unit than the event, based on the story flow. After a key event is detected, an episode is detected by including the accompanying events that follow the key event. For example, in the case of soccer video, the goal and the shoot can be key events, while the bench scene, audience scene, goal ceremony scene, replay of the goal scene, and so on compose the accompanying events of those key events.

That is, the episode is detected on the basis of the goal and shoot.

The episode detection information is output in the format of (episode number, time interval, priority, feature shot, associated event information). Here, the episode number is the serial number of the episode, and the time interval represents the time interval of the episode in shot units. The priority represents the degree of importance of the episode. The feature shot is the number of the shot containing the most important information among the shots comprising the episode, and the associated event information lists the event numbers of the events related to the episode. For example, the episode detection information (episode 1, 4˜6, 1, 5, goal 1, caption 3) means that the first episode includes the 4th through 6th shots, its priority is the highest (1), its feature shot is the fifth shot, and its associated events are the first goal and the third caption.
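The episode record above can be sketched as a small data structure. This is a hypothetical illustration of the five fields the text enumerates, not code from the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Episode:
    """Hypothetical carrier for (episode number, time interval, priority,
    feature shot, associated event information)."""
    number: int                    # serial number of the episode
    shot_interval: Tuple[int, int] # (first shot, last shot), in shot units
    priority: int                  # degree of importance; 1 = highest
    feature_shot: int              # shot with the most important information
    associated_events: List[str]   # events related to the episode

# The worked example from the text: (episode 1, 4~6, 1, 5, goal 1, caption 3)
ep = Episode(number=1, shot_interval=(4, 6), priority=1,
             feature_shot=5, associated_events=["goal 1", "caption 3"])
```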

The video summary interval selecting part 104 selects, on the basis of the detected episodes, the video intervals that can summarize the contents of the original video well. Interval selection is governed by the predefined summary rule of the summary rule defining part 105.

The summary rule defining part 105 defines the rule for selecting the summary intervals and outputs a control signal for selecting them. It also outputs the types of summary events, which are the bases for selecting the video summary intervals, to the video summary describing part 108.

The video summary interval selecting part 104 outputs the time information of the selected video summary intervals in frame units, together with the types of events corresponding to those intervals. That is, the format of (100˜200, goal), (500˜700, shoot), and so on represents that the video segments selected as the video summary intervals are frames 100˜200, frames 500˜700, and so on, and that the event of each segment is goal and shoot, respectively. In addition, information such as a file name can be output to facilitate access to an additional video composed of only the video summary intervals.
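The interval/event output format lends itself to a short sketch. The helper below is hypothetical (the patent defines only the notation, not an API) and, for simplicity, treats each range as contributing end − start frames:

```python
# Each selected summary interval is a (frame range, event type) pair,
# mirroring the text's "(100~200, goal), (500~700, shoot)" notation.
summary_intervals = [((100, 200), "goal"), ((500, 700), "shoot")]

def total_summary_frames(intervals):
    """Approximate length of the dynamic summary, in frames."""
    return sum(end - start for (start, end), _event in intervals)

def events_in_summary(intervals):
    """Event types covered by the summary, in order of appearance."""
    return [event for _range, event in intervals]

print(total_summary_frames(summary_intervals))  # 100 + 200 = 300
```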

When the video summary interval selection is completed, the representative frame and the representative sound are extracted by the representative frame extracting part 106 and the representative sound extracting part 107, respectively, using the video summary interval information.

The representative frame extracting part 106 outputs the image frame number representing the video summary interval or outputs the image data.

The representative sound extracting part 107 outputs the sound data representing the video summary interval or outputs the sound time interval.

The video summary describing part 108 describes the related information so as to make efficient summary and browsing functionalities feasible, according to the Hierarchical Summary Description Scheme of the present invention shown in FIG. 2.

The main information of the Hierarchical Summary Description Scheme comprises the types of summary events of the video summary, the time information describing each video summary interval, the representative frame, the representative sound, and the event types in each interval.

The video summary describing part 108 outputs the video summary description data according to the description scheme illustrated in FIG. 2.

FIG. 2 is a drawing that illustrates the data structure of the HierarchicalSummary DS describing the video summary description scheme according to the present invention in UML (Unified Modeling Language).

The HierarchicalSummary DS 201 describing the video summary is composed of one or more HighlightLevel DSs 202 and zero or one SummaryThemeList DS 203.

The SummaryThemeList DS provides event-based summary and browsing functionality by enumeratively describing the subjects or events constituting the summary. The HighlightLevel DS 202 is composed of as many HighlightSegment DSs 204 as there are video intervals constituting the video summary at that level, together with zero or more child HighlightLevel DSs.

The HighlightSegment DS describes the information corresponding to each video summary interval. The HighlightSegment DS is composed of one VideoSegmentLocator DS 205, zero or more ImageLocator DSs 206, zero or more SoundLocator DSs 207, and zero or more AudioSegmentLocator DSs 208.
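The containment rules just described can be modeled as plain data classes. This is a hypothetical in-memory sketch, not normative MPEG-7 syntax; the locator payloads are reduced to strings for illustration:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class HighlightSegment:
    # Exactly one VideoSegmentLocator (the summary interval itself);
    # the remaining locators are optional and may repeat.
    video_segment_locator: str
    image_locators: List[str] = field(default_factory=list)   # representative frames
    sound_locators: List[str] = field(default_factory=list)   # representative sounds
    audio_segment_locators: List[str] = field(default_factory=list)

@dataclass
class HighlightLevel:
    # One HighlightSegment per video interval at this level, plus child levels.
    segments: List[HighlightSegment] = field(default_factory=list)
    children: List["HighlightLevel"] = field(default_factory=list)

@dataclass
class HierarchicalSummary:
    levels: List[HighlightLevel]                    # one or more
    summary_theme_list: Optional[List[str]] = None  # zero or one

summary = HierarchicalSummary(levels=[HighlightLevel(segments=[
    HighlightSegment(video_segment_locator="frames 100~200",
                     image_locators=["frame 150"])])])
```

The point of the sketch is the cardinalities: a required video locator per segment, optional image/sound/audio locators, and a recursive level structure.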

The following gives a more detailed description of the HierarchicalSummary DS.

The HierarchicalSummary DS has a SummaryComponentList attribute, which explicitly represents the summary types comprised in the HierarchicalSummary DS.

The SummaryComponentList is derived from the SummaryComponentType and is described by enumerating all the SummaryComponentTypes comprised in the summary.

The SummaryComponentList has five types: keyFrames, keyVideoClips, keyAudioClips, keyEvents, and unconstraint.

The keyFrames type represents a key frame summary composed of representative frames. The keyVideoClips type represents a key video clip summary composed of sets of key video intervals. The keyEvents type represents a summary composed of the video intervals corresponding to an event or subject. The keyAudioClips type represents a key audio clip summary composed of sets of representative audio intervals. Finally, the unconstraint type represents user-defined summary types other than the above.
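The five summary types can be captured as an enumeration. A hypothetical sketch (the real scheme expresses this in MPEG-7 description definition language, not Python):

```python
from enum import Enum

class SummaryComponentType(Enum):
    """The five types enumerated by the SummaryComponentList."""
    KEY_FRAMES = "keyFrames"           # static summary of representative frames
    KEY_VIDEO_CLIPS = "keyVideoClips"  # dynamic summary of key video intervals
    KEY_AUDIO_CLIPS = "keyAudioClips"  # sets of representative audio intervals
    KEY_EVENTS = "keyEvents"           # intervals tied to events or subjects
    UNCONSTRAINT = "unconstraint"      # user-defined summary type

# A SummaryComponentList simply enumerates the types a summary comprises,
# e.g. a dynamic, event-based summary:
component_list = [SummaryComponentType.KEY_VIDEO_CLIPS,
                  SummaryComponentType.KEY_EVENTS]
```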

Also, in order to describe an event-based summary, the HierarchicalSummary DS may comprise a SummaryThemeList DS, which enumerates the events (or subjects) comprised in the summary and describes their ids.

The SummaryThemeList has an arbitrary number of SummaryTheme elements. Each SummaryTheme has an id attribute of ID type and optionally has a parentId attribute.

The SummaryThemeList DS permits users to browse the video summary from the viewpoint of each event or subject described in the SummaryThemeList. That is, an application tool that takes the description data as input lets the user select the desired subject by parsing the SummaryThemeList DS and presenting the information to the user.

However, if these subjects are enumerated in a simple flat format and the number of subjects is large, it may not be easy for users to find the desired subject.

Accordingly, by representing the subjects as a tree structure similar to a ToC (Table of Contents), users can efficiently browse each subject after finding the desired one.

To this end, the embodiments of the present invention permit the parentId attribute to be used selectively in the SummaryTheme. The parentId denotes the parent element (upper subject) in the tree structure.
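The ToC-like structure can be reconstructed from the flat SummaryTheme list by grouping on parentId. A hypothetical sketch with invented theme ids (the patent specifies only the id/parentId attributes, not this code):

```python
# Flat SummaryTheme entries: each has an id and an optional parentId.
themes = [
    {"id": "goal-events", "label": "Goal", "parentId": None},
    {"id": "goal-1", "label": "First goal", "parentId": "goal-events"},
    {"id": "goal-2", "label": "Second goal", "parentId": "goal-events"},
    {"id": "captions", "label": "Captions", "parentId": None},
]

def build_tree(themes):
    """Return {parentId: [child ids]}; the key None collects the root subjects."""
    tree = {}
    for t in themes:
        tree.setdefault(t["parentId"], []).append(t["id"])
    return tree

tree = build_tree(themes)
# tree[None] lists the top-level subjects; tree["goal-events"] lists its children,
# so a browser can present subjects hierarchically rather than as one long list.
```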

The HierarchicalSummary DS of the present invention comprises HighlightLevel DSs, and each HighlightLevel DS comprises one or more HighlightSegment DS, which corresponds to a video segment (or interval) constituting the video summary.

The HighlightLevel DS has a themeIds attribute of IDREFS type.

The themeIds attribute describes the subject and event ids common to the child HighlightLevel DSs of the corresponding HighlightLevel DS, or to all the HighlightSegment DSs comprised in the HighlightLevel; these ids are described in the SummaryThemeList DS.

The themeIds attribute can denote several events. When performing event-based summarization, having a themeIds attribute that represents the common subject type of the HighlightSegments constituting a level solves the problem of the same id being unnecessarily repeated in every segment of that level.

The HighlightSegment DS comprises one VideoSegmentLocator DS, one or more ImageLocator DSs, zero or one SoundLocator DS, and zero or one AudioSegmentLocator DS.

Herein, the VideoSegmentLocator DS describes the time information or video itself of the video segment constituting the video summary. The ImageLocator DS describes the image data information of the representative frame of the video segment. The SoundLocator DS describes the sound information representing the corresponding video segment interval. The AudioSegmentLocator DS describes the interval time information of the audio segment constituting the audio summary or the audio information itself.

The HighlightSegment DS also has a themeIds attribute. Using the ids defined in the SummaryThemeList DS, it describes which of the subjects or events described in the SummaryThemeList DS the corresponding highlight segment relates to.

The themeIds attribute can denote more than one event. By allowing one highlight segment to have several subjects, the present invention efficiently avoids the duplication of descriptions that is unavoidable in existing event-based summary methods, which must describe the same video segment once for each event (or subject).
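The duplication argument above can be sketched as follows. Each segment is described once and carries a list of theme ids; an event-based summary is then obtained by filtering on a theme id, so a segment relating to two events appears in both summaries without being described twice. The locator strings and theme ids here are hypothetical placeholders, not the actual locator syntax of the description scheme.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class HighlightSegment:
    """One highlight segment: locators plus the themeIds it relates to."""
    video_segment_locator: str            # mandatory VideoSegmentLocator (placeholder string)
    image_locators: List[str]             # one or more ImageLocator (representative frames)
    sound_locator: Optional[str] = None   # zero or one SoundLocator (representative sound)
    theme_ids: List[str] = field(default_factory=list)  # IDREFS into the SummaryThemeList

def select_by_theme(segments, theme_id):
    """Event-based summary: pick the segments whose themeIds reference theme_id."""
    return [s for s in segments if theme_id in s.theme_ids]

# The first segment carries two subjects, so it is described only once
# yet appears in both event-based summaries (ids are hypothetical).
segments = [
    HighlightSegment("T00:01:00/T00:01:20", ["frame1.jpg"], theme_ids=["goal", "shoot"]),
    HighlightSegment("T00:05:00/T00:05:30", ["frame2.jpg"], theme_ids=["goal"]),
]
```

Selecting the "goal" summary returns both segments, while the "shoot" summary returns only the first, with no segment description repeated per event.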

Unlike the existing hierarchical summary description scheme, which describes only the time information of the highlight video interval, the present invention introduces the HighlightSegment DS to describe each highlight segment constituting the video summary. By placing the VideoSegmentLocator DS, the ImageLocator DS, and the SoundLocator DS together, it describes the video interval information, the representative frame information, and the representative sound information of each highlight segment. This makes feasible both overview through the highlight segment video and efficient navigation and browsing utilizing the representative frame and the representative sound of the segment.

The SoundLocator DS describes a characteristic sound capable of representing the video interval, such as a gunshot, an outcry, an announcer's comment in soccer (for example, "goal" or "shoot"), an actor's name in a drama, or a specific word. By listening to this representative sound, the user can, within a short time and without playing the video interval, roughly determine whether the interval is an important one containing the desired contents and what contents it contains, which enables efficient browsing.

FIG. 3 is a compositional drawing of a user interface of a tool for playing and browsing a video summary, which takes as input video summary description data described by the description scheme of FIG. 2.

The video playing part 301 plays the original video or the video summary according to the user's control. The original video representative frame part 305 shows the representative frames of the original video shots; that is, it is composed of a series of size-reduced images.

The representative frames of the original video shots are described not by the HierarchicalSummary DS of the present invention but by an additional description scheme, and can be utilized when that description data is provided along with the summary description data described by the HierarchicalSummary DS of the present invention.

The user accesses the original video shot corresponding to a representative frame by clicking that frame.

The video summary level 0 representative frame and representative sound part 306 and the video summary level 1 representative frame and representative sound part 307 show the frame and sound information representing each video interval of video summary level 0 and video summary level 1, respectively. That is, each is composed of a series of size-reduced images together with iconic images representing the sounds.

When the user clicks a representative frame in a video summary representative frame and representative sound part, the user accesses the original video interval corresponding to that frame. When the user clicks the representative sound icon corresponding to a representative frame of the video summary, the representative sound of that video interval is played.

The video summary controlling part 302 receives the user's control input for playing the video summary. When a multi-level video summary is provided, the user performs overview and browsing by selecting the summary of the desired level through the level selecting part 303. The event selecting part 304 enumerates the events and subjects provided by the SummaryThemeList, and the user performs overview and browsing by selecting the desired event. In effect, this realizes user-customized summaries.

FIG. 4 is a compositional drawing of the flow of data and control for hierarchical browsing using the video summary of the present invention.

Browsing is performed by accessing the data for browsing in the manner of FIG. 4 through the user interface of FIG. 3. The data for browsing are the video summary, the representative frames of the video summary, the original video 406, and the original video representative frames 405.

The video summary is assumed here to have two levels, although it may of course have more. Video summary level 0 401 is summarized into a shorter time than video summary level 1 403; that is, video summary level 1 contains more contents than video summary level 0. The video summary level 0 representative frame 402 is the representative frame of video summary level 0, and the video summary level 1 representative frame 404 is the representative frame of video summary level 1.

The video summary and the original video are played through the video playing part 301 shown in FIG. 3. The video summary level 0 representative frames are displayed in the video summary level 0 representative frame and representative sound part 306, the video summary level 1 representative frames are displayed in the video summary level 1 representative frame and representative sound part 307, and the original video representative frames are displayed in the original video representative frame part 305.

The hierarchical browsing method illustrated in FIG. 4 can follow various hierarchical paths, as in the following examples.

Case 1: (1)-(2)

Case 2: (1)-(3)-(5)

Case 3: (1)-(3)-(4)-(6)

Case 4: (7)-(5)

Case 5: (7)-(4)-(6)

The overall browsing scheme is as follows.

First, the user grasps the overall contents of the original video by watching its video summary; either video summary level 0 or video summary level 1 may be played. When more detailed browsing is wanted after watching the summary, the interesting video interval is identified through the video summary representative frames. If the exact desired scene is identified among the video summary representative frames, it is played by directly accessing the interval of the original video to which that frame is linked. If more detailed information is needed, the user may reach the desired original video either by examining the representative frames of the next level or by hierarchically examining the representative frames of the original video.

Although browsing to the desired contents might take a long time if the original video had to be played, these hierarchical browsing techniques substantially reduce the browsing time by directly accessing the contents of the original video through the hierarchy of representative frames.

Existing general video indexing and browsing techniques divide the original video into shot units, construct a representative frame for each shot, and access a shot by recognizing the desired shot from its representative frame.

In this case, because the number of shots in the original video is large, substantial time and effort are needed to browse for the desired contents among the many representative frames.

In the present invention, it is feasible to quickly access the desired video by constructing a hierarchy of representative frames from the representative frames of the video summary.

Case 1 plays video summary level 0 and accesses the original video directly from a video summary level 0 representative frame.

In case 2, the user plays video summary level 0 and selects the most interesting representative frame among the video summary level 0 representative frames. To obtain more detailed information before accessing the original video, the user identifies the desired scene among the video summary level 1 representative frames neighboring that frame, and then accesses the original video.

Case 3 applies when, as in case 2, access from a video summary level 1 representative frame to the original video is difficult. The user selects the most interesting representative frame to obtain more detailed information, identifies the desired scene among the neighboring representative frames of the original video, and then accesses the original video through that representative frame.

Cases 4 and 5 start from playing video summary level 1, and their paths are similar to the above cases.

When applied to a server/client environment, the present invention can provide a system in which multiple clients access one server to perform video overview and browsing. The server receives the original video, produces the video summary description data on the basis of the hierarchical summary description scheme, and is equipped with a video summary description data generation system that links the original video to the video summary description data. A client accesses the server through a communication network, performs an overview of the video using the video summary description data, and performs browsing and navigation of the video by accessing the original video.

Although the present invention has been described on the basis of preferred embodiments, these embodiments exemplify rather than limit the present invention. It will be appreciated by those skilled in the art that changes and variations can be made to the embodiments herein without departing from the spirit and scope of the present invention as defined by the following claims and their equivalents.
