Publication number: US 20100322596 A9
Publication type: Application
Application number: US 11/344,292
Publication date: Dec 23, 2010
Filing date: Jan 31, 2006
Priority date: Dec 15, 2004
Also published as: CA2640004A1, CA2640004C, DE602007011394D1, EP1979907A2, EP1979907A4, EP1979907B1, US7895617, US20070189710, WO2007089752A2, WO2007089752A3
Inventor: Leo Pedlow
Original Assignee: Pedlow, Leo M.
Content substitution editor
US 20100322596 A9
Abstract
In accordance with certain embodiments consistent with the present invention, a method providing alternate digital audio and video content in a segment of content containing compressed primary audio and encoded primary video involves inserting blank audio in an alternate audio track between segments of alternate audio; inserting black video in an alternate video track between segments of alternate video; synchronizing the alternate audio track to a master timeline; synchronizing the alternate video track to the master timeline; compressing the alternate audio track; compressing the alternate video track; trimming the blank audio from the compressed alternate audio track; trimming the black video from the compressed alternate video track; synchronizing the trimmed compressed alternate audio to locate the trimmed compressed alternate audio temporally with the primary compressed audio; synchronizing the trimmed compressed alternate video to locate the trimmed compressed alternate video temporally with the primary encoded video; and multiplexing the trimmed compressed alternate audio and the trimmed compressed alternate video with the primary compressed audio and the primary encoded video. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
Claims (31)
1. A method providing alternate digital audio and video content in a segment of content containing compressed primary audio and encoded primary video, comprising:
inserting blank audio in an alternate audio track between segments of alternate audio;
inserting black video in an alternate video track between segments of alternate video;
synchronizing the alternate audio track to a master timeline;
synchronizing the alternate video track to the master timeline;
compressing the alternate audio track;
encoding the alternate video track;
trimming the blank audio from the compressed alternate audio track;
trimming the black video from the encoded alternate video track;
synchronizing the trimmed compressed alternate audio to locate the trimmed compressed alternate audio temporally with the primary compressed audio;
synchronizing the trimmed encoded alternate video to locate the trimmed encoded alternate video temporally with the primary encoded video; and
multiplexing the trimmed compressed alternate audio and the trimmed encoded alternate video with the primary compressed audio and the primary encoded video.
2. The method according to claim 1, wherein the primary audio and the alternate audio are compressed using a single audio compressor.
3. The method according to claim 2, wherein the audio compressor is compliant with one of AC-3, AAC, DTS or MPEG-1.
4. The method according to claim 1, wherein the primary audio and the alternate audio are compressed using primary and secondary audio compressors.
5. The method according to claim 4, wherein the audio compressors are compliant with one of AC-3, AAC, DTS or MPEG-1.
6. The method according to claim 1, wherein the primary video and the alternate video are encoded using primary and alternate video encoders.
7. The method according to claim 6, wherein the video encoders are compliant with one of MPEG-2, AVC, VC-1 or MPEG-4.
8. The method according to claim 1, wherein the primary video and the alternate video are encoded using a single video encoder.
9. The method according to claim 8, wherein the video encoder is compliant with one of MPEG-2, AVC, VC-1 or MPEG-4.
10. The method according to claim 1, wherein a Packet Identifier (PID) remapper maps the primary audio, the alternate audio, the primary video and the alternate video each to separate PID values.
11. A computer readable storage medium storing instructions which, when executed on a programmed processor, carry out a process according to claim 1.
12. A video editor that provides alternate digital audio and video content in a segment of content containing compressed primary audio and encoded primary video, comprising:
an audio sequencer that inserts blank audio in an alternate audio track between segments of alternate audio, wherein the alternate audio track is synchronized to a master timeline;
a video sequencer that inserts black video in an alternate video track between segments of alternate video, wherein the alternate video track is synchronized to the master timeline;
a compressor that compresses the alternate audio track;
an encoder that encodes and compresses the alternate video track;
means for trimming the blank audio from the compressed alternate audio track;
means for trimming the black video from the encoded alternate video track;
means for synchronizing the trimmed compressed alternate audio to locate the trimmed compressed alternate audio temporally with the compressed primary audio;
means for synchronizing the trimmed encoded alternate video to locate the trimmed encoded alternate video temporally with the encoded primary video; and
a multiplexer that multiplexes the trimmed compressed alternate audio and the trimmed compressed alternate video with the primary audio and the primary video.
13. The video editor according to claim 12, wherein the means for trimming the video and means for trimming the audio are implemented in an audio/video processor.
14. The video editor according to claim 13, wherein the means for synchronizing the video and the means for synchronizing the audio are implemented in the audio/video processor.
15. The video editor according to claim 13, wherein the multiplexer is implemented in the audio/video processor.
16. The video editor according to claim 12, wherein the primary and secondary audio compressors are compliant with one of AC-3, AAC, DTS or MPEG-1.
17. The video editor according to claim 12, wherein the primary and secondary video encoders are compliant with one of MPEG-2, AVC, VC-1 or MPEG-4.
18. The video editor according to claim 12, further comprising a Packet Identifier (PID) remapper that maps the primary audio, the alternate audio, the primary video and the alternate video each to separate PID values.
19. A video editor that provides alternate digital audio and video content in a segment of content containing primary audio and primary video, comprising:
an audio sequencer that inserts blank audio in an alternate audio track between segments of alternate audio, wherein the alternate audio track is synchronized to a master timeline;
a video sequencer that inserts black video in an alternate video track between segments of alternate video, wherein the alternate video track is synchronized to the master timeline;
compressor means for compressing the primary audio and alternate audio track;
encoder means for encoding and compressing the primary video and the alternate video track;
means for trimming the blank audio from the compressed alternate audio track;
means for trimming the black video from the encoded and compressed alternate video track;
means for synchronizing the trimmed compressed alternate audio to locate the trimmed compressed alternate audio temporally with the primary audio;
means for synchronizing the trimmed encoded compressed alternate video to locate the trimmed encoded compressed alternate video temporally with the primary video; and
a multiplexer that multiplexes the trimmed compressed alternate audio and the trimmed encoded compressed alternate video with the compressed primary audio and the encoded and compressed primary video.
20. The video editor according to claim 19, wherein the means for trimming the video and means for trimming the audio are implemented in an audio/video processor.
21. The video editor according to claim 19, wherein the means for synchronizing the video and the means for synchronizing the audio are implemented in the audio/video processor.
22. The video editor according to claim 19, wherein the multiplexer is implemented in an audio/video processor.
23. The video editor according to claim 19, wherein the primary and secondary audio are compliant with one of AC-3, AAC, DTS or MPEG-1.
24. The video editor according to claim 19, wherein the primary and secondary video are compliant with one of MPEG-2, AVC, VC-1 or MPEG-4.
25. The video editor according to claim 19, further comprising a Packet Identifier (PID) remapper that maps the primary audio, the alternate audio, the primary video and the alternate video each to separate PID values.
26. The video editor according to claim 19, wherein the compressor means comprises a single audio compressor that sequentially encodes the primary and alternate audio.
27. The video editor according to claim 19, wherein the compressor means comprises a primary compressor that encodes the primary audio and an alternate compressor that compresses the alternate audio.
28. The video editor according to claim 19, wherein the encoding means comprises a single video encoder that sequentially encodes the primary video and the alternate video.
29. The video editor according to claim 19, wherein the encoding means comprises a primary video encoder and an alternate video encoder.
30. The video editor according to claim 19, wherein the encoding means comprises a single video encoder that sequentially encodes the primary and alternate video.
31. A video editor that provides alternate digital audio and video content in a segment of content containing primary audio and primary video, comprising:
an audio sequencer that inserts blank audio in an alternate audio track between segments of alternate audio, wherein the alternate audio track is synchronized to a master timeline;
a video sequencer that inserts black video in an alternate video track between segments of alternate video, wherein the alternate video track is synchronized to the master timeline;
compressor means comprising a primary audio compressor for compressing the primary audio, and an alternate audio compressor for compressing the alternate audio track;
encoder means for encoding and compressing the primary video and the alternate video track, wherein the encoder means comprises a primary video encoder and an alternate video encoder;
means for trimming the blank audio from the compressed alternate audio track;
means for trimming the black video from the encoded and compressed alternate video track;
means for synchronizing the trimmed compressed alternate audio to locate the trimmed compressed alternate audio temporally with the primary audio;
means for synchronizing the trimmed encoded and compressed alternate video to locate the trimmed encoded and compressed alternate video temporally with the encoded and compressed primary video;
a multiplexer that multiplexes the trimmed compressed alternate audio and the trimmed compressed alternate video with the compressed primary audio and the encoded and compressed primary video;
wherein the means for trimming the video and means for trimming the audio are implemented in an audio/video processor, and wherein the means for synchronizing the video and the means for synchronizing the audio are implemented in the audio/video processor, and wherein the multiplexer is implemented in an audio/video processor; and
a Packet Identifier (PID) remapper that maps the primary audio, the alternate audio, the primary video and the alternate video each to separate PID values.
Description
  • [0001]
    This application is related to U.S. patent application Ser. Nos. 10/319,066; 10/667,614; and 10/822,891, which relate to mechanisms for content replacement and which are hereby incorporated herein by reference.
  • BACKGROUND
  • [0002]
    Audio-visual content, such as television programming, movies, digital versatile discs (DVD), and the like, sometimes contains material which certain people may find objectionable. It may be objectionable to them personally, or they may consider it objectionable for children or others to view. The above-referenced patent applications are related to a mechanism that can be used for replacement of objectionable content (or content replacement for any other reason).
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0003]
    Certain illustrative embodiments illustrating organization and method of operation, together with objects and advantages, may best be understood by reference to the detailed description that follows, taken in conjunction with the accompanying drawings, in which:
  • [0004]
    FIG. 1 depicts an example of content and their temporal relationships in a nonlinear editing system.
  • [0005]
    FIG. 2 shows the process flow of content once the editing process has been completed.
  • [0006]
    FIG. 3 shows a nonlinear editing system modified to support synchronization and delivery of alternative video and audio content in a manner consistent with certain embodiments of the present invention.
  • [0007]
    FIG. 4 shows post-edit content flow supporting dynamic content substitution consistent with certain embodiments of the present invention.
  • [0008]
    FIG. 5 is a diagram illustrating A/V processor operation in a manner consistent with certain embodiments of the present invention.
  • DETAILED DESCRIPTION
  • [0009]
    While this invention is susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail specific embodiments, with the understanding that the present disclosure of such embodiments is to be considered as an example of the principles and not intended to limit the invention to the specific embodiments shown and described. In the description below, like reference numerals are used to describe the same, similar or corresponding parts in the several views of the drawings.
  • [0010]
    The terms “a” or “an”, as used herein, are defined as one or more than one. The term “plurality”, as used herein, is defined as two or more than two. The term “another”, as used herein, is defined as at least a second or more. The terms “including” and/or “having”, as used herein, are defined as comprising (i.e., open language). The term “coupled”, as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically. The term “program” or “computer program” or similar terms, as used herein, is defined as a sequence of instructions designed for execution on a computer system. A “program”, or “computer program”, may include a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, source code, object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.
  • [0011]
    The term “program”, as used herein, may also be used in a second context (the above definition being for the first context). In the second context, the term is used in the sense of a “television program”. In this context, the term is used to mean any coherent sequence of audio video content which would be interpreted as and reported in an electronic program guide (EPG) as a single television program, without regard for whether the content is a movie, sporting event, segment of a multi-part series, news broadcast, etc.
  • [0012]
    Reference throughout this document to “one embodiment”, “certain embodiments”, “an embodiment” or similar terms means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of such phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments without limitation.
  • [0013]
    The term “or” as used herein is to be interpreted as an inclusive “or”, meaning any one or any combination. Therefore, “A, B or C” means “any of the following: A; B; C; A and B; A and C; B and C; A, B and C”. An exception to this definition will occur only when a combination of elements, functions, steps or acts is in some way inherently mutually exclusive.
  • [0014]
    In order to provide content which can be manipulated to provide alternatives, e.g., in the case of providing alternative content to modify the rating of a movie or television program, an authoring tool is needed. Current linear and non-linear editing tools do not provide this capability.
  • [0015]
    The management of alternate content for use in dynamic substitution applications, such as the removal/restoration of potentially objectionable content, can be implemented during content authoring/editing using a nonlinear editing system consistent with certain embodiments of the present invention. Turning to FIG. 1, an example is presented of the content relationships in a nonlinear editing system. In such a system, video scenes 10, dialog tracks 12 and 14, along with music tracks such as 16 and other audio tracks 18, are associated with a master timeline 20. This information is stored in a “non-linear” fashion. The term “non-linear” storage is used in the art to differentiate digital storage, e.g., using disc drive technology, from “linear” storage that uses tape and film as the storage medium. With non-linear storage, any element of the content can be randomly accessed without the need to traverse a length of “linear” medium such as film or tape to reach that element of content. Elements 10-18 may be stored as discrete elements anywhere on the disc drive or other non-linear storage medium and manipulated, rearranged, substituted, etc. in the non-linear editing process.
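As a minimal sketch of the track-and-timeline relationships just described (not from the patent; all class and field names are hypothetical), a nonlinear editor can be thought of as modeling discrete, randomly accessible clips referenced against a single master timeline:

```python
# A minimal sketch (names are hypothetical) of clips held in random-access
# ("non-linear") storage and referenced against a single master timeline.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Clip:
    media_id: str          # reference into non-linear (random-access) storage
    timeline_start: float  # position on the master timeline, in seconds
    duration: float

@dataclass
class Track:
    name: str              # e.g. "video", "dialog 1", "music", "effects"
    clips: List[Clip] = field(default_factory=list)

@dataclass
class Project:
    tracks: List[Track] = field(default_factory=list)

    def events_in_order(self):
        """All clips across all tracks, ordered by master-timeline position."""
        return sorted((c.timeline_start, t.name, c.media_id)
                      for t in self.tracks for c in t.clips)

# Example: video scenes and a dialog track keyed to one master timeline.
project = Project(tracks=[
    Track("video", [Clip("scene1", 0.0, 30.0), Clip("scene2", 30.0, 45.0)]),
    Track("dialog 1", [Clip("dlg_a", 2.0, 25.0), Clip("dlg_b", 31.0, 40.0)]),
])
print(project.events_in_order())
```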
  • [0016]
    Nonlinear editing systems have become the prevalent method of content authoring for television and increasingly so for film. A nonlinear editing system can be used to select the desired portions of audio and video sequences (scenes) taken from a library containing all the raw footage/video and audio recordings under consideration for the project (e.g., video, movie or television program) and then establish their temporal relationships, both with the adjacent sequences of the same type (video, dialog, music score, etc.) as well as to establish the synchronization of the video with one or more corresponding audio tracks. Even though the end product appears as one continuous video sequence with a single synchronized audio track (containing a composite of multiple audio elements), all components that make up the content remain distinct and separate while being manipulated in the editing system.
  • [0017]
    FIG. 1 shows an example depiction of content in a nonlinear editing system and the content's temporal relationships. In a conventional editing system, no provision is made for assuring synchronization of multiple sets of content in which certain “scenes” can be substituted for others in a transparent manner at playback. Selective multiple encryption systems, consistent with Sony's Passage™ system, utilize mapping of Packet Identifiers (PIDs) to achieve multiple carriage of content destined for differing encryption systems. The above-referenced patent applications utilize a similar system of PID mapping to achieve content replacement functions. However, to date, the issue of how to author content for such systems has not been addressed.
  • [0018]
    FIG. 2 shows the process flow of content once the non-linear editing process has been completed for conventional non-linear editing systems. The content stored in most professional nonlinear editing systems is uncompressed digitized video and pulse code modulated (PCM) audio samples. This content is depicted in FIG. 2 as content track storage 26 and content scene storage 30. It is generally considered much easier to edit video sequences and edit/combine (mix) audio samples in this raw form and maintain high picture and sound quality. The sequencing of the audio and video content is depicted at 34 and 38 respectively.
  • [0019]
    When the final edited version of the content is completed, it can then be assembled into its final video and audio sequences and the audio mixed to its final monophonic, stereophonic or surround sound image at the output of digital mixdown 42. The various audio tracks (dialog, music, sound effects, etc.) are mixed down at a mixdown process depicted as a digital mixdown 42. The finished “cut” is then compressed using, for example without intent of any limitation using MPEG (e.g., MPEG-2) compression for the video at 46 and AC-3 audio compression at 50 for the video and audio content, respectively, to reduce the size of the file containing the final product. Any other suitable compression and encoding technique could be used including, but not limited to for example AAC, DTS, MPEG-1, etc. for audio, and AVC, VC-1, MPEG-4, etc. for video. Embodiments consistent with the present invention also contemplate use with other encoding and compression mechanisms, existing or not yet developed. Commonly, compression by a factor of 80 or greater is achieved. This reduction in storage makes the transmission, broadcast and/or storage of digital video content more practical, allowing it to fit on inexpensive media like DVDs or to be carried in a standard 6 MHz RF spectral channel concurrent with eight or more other A/V services with minimal degradation of quality. The final content can be stored at 54, and from there, may be used for any number of purposes including DVD mastering, satellite, cable, or terrestrial broadcasting.
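The conventional post-edit flow just described (mixdown, then video encoding and audio compression, then storage) can be summarized as a short pipeline. The sketch below is illustrative only: the encode and compress functions are placeholders, not real MPEG-2 or AC-3 implementations, and the size reductions merely echo the rough ratios mentioned in the text.

```python
# Illustrative placeholder pipeline for the post-edit flow of FIG. 2.
def digital_mixdown(audio_tracks):
    """Mix dialog, music and effects into one composite PCM track (placeholder)."""
    return b"".join(audio_tracks)

def encode_video(raw_video, codec="MPEG-2"):
    # Real video encoders commonly achieve roughly 80:1 compression or better.
    return {"codec": codec, "payload": raw_video[: max(1, len(raw_video) // 80)]}

def compress_audio(pcm, codec="AC-3"):
    return {"codec": codec, "payload": pcm[: max(1, len(pcm) // 10)]}

def finish_cut(raw_video, audio_tracks):
    mixed = digital_mixdown(audio_tracks)
    return {"video": encode_video(raw_video), "audio": compress_audio(mixed)}

final = finish_cut(b"\x00" * 800, [b"\x01" * 100, b"\x02" * 100])
print(final["video"]["codec"], len(final["video"]["payload"]))  # MPEG-2 10
```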
  • [0020]
    A similar process can be followed in order to create an alternate audio track in a second language. In this case, the same audio tracks containing the musical score, sound effects, etc. are used but an alternate dialog track, edited to match the duration and context of the common video content, is substituted for the primary language dialog track. An alternate composite audio track can be created by a separate mixdown and encoding process, paralleling that used to create the primary audio track. The second audio track may then be either carried concurrently with the video and primary audio track for multilingual support or it can be substituted in its entirety for the primary audio for content intended exclusively for an alternate language.
  • [0021]
    It should be noted that in all cases, the final “cut” contains a single, continuous video/visual track running at a constant rate (e.g., 24 or 60 frames per second) that depends upon the media type. This track is always present, even if the actual content of the visual track is a black screen. All audio content is synchronized to the visual track to maintain proper lip-to-voice synchronization and appropriate timing of sound effects and the musical score. Unlike the visual track, audio may or may not be present, depending upon the context of the scene. Once the final cut is produced and compressed, there is, like the video track, a continuous audio track. During periods of silence, compressed audio data is still present, but the data values indicate a silent period. Hence, synchronization of the second audio track with the video is routine.
  • [0022]
    Now consider a content authoring process that supports dynamic content substitution. In order to support dynamic content substitution on a scene-by-scene basis, the authoring process described earlier must be substantially modified to allow concurrent editing of a second or alternate video track and additional audio tracks corresponding to scene substitutions (in contrast to a simple alternative audio track that runs the full length of the content). An example of such content with alternative audio and video is shown in FIG. 3, with the original track and master timeline relationships as shown in FIG. 1. In FIG. 3, the alternate video track 62 and alternate dialog track 64 are subordinate to the primary video track 10 and dialog track 12 and are temporally synchronized with the master timeline. However, since they represent alternate scenes and/or dialog, the alternative content does not have the benefit of continuously following the original timeline.
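One way to picture this subordination, as a hedged sketch rather than the patent's data model, is to author each alternate segment as a short clip that carries its in-point on the master timeline, so its timing is always derived from the primary tracks. The names and numbers below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AltSegment:
    """A hypothetical alternate scene or dialog segment tied to the master timeline."""
    media_id: str
    master_in: float   # in-point on the master timeline, in seconds
    duration: float

    def to_master_time(self, local_t):
        """Map a time inside this segment onto the master timeline."""
        if not 0 <= local_t <= self.duration:
            raise ValueError("time outside segment")
        return self.master_in + local_t

alt_scene2 = AltSegment("scene2_alt", master_in=60.0, duration=90.0)
print(alt_scene2.to_master_time(10.0))  # 70.0 seconds on the master timeline
```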
  • [0023]
    After post-processing, both video tracks 10 and 62 are carried in the final content using the techniques to be described later. The nonlinear editor can be extended in accordance with the present teachings to accommodate the additional tracks for alternate video and audio, and this extension is complementary to the editing paradigm established for conventional editing tools.
  • [0024]
    One departure from the conventional process is the handling of the content comprising the final product or “cut”. As described earlier, the final cut is assembled, mixed (audio) and streamed to compression equipment (encoders). A conventional video encoder can only accept a single, continuous video stream. The primary video stream meets that criterion. The alternate video stream, however, can be characterized as a non-continuous (staccato) sequence of video to be transmitted or played concurrently with the primary video, so that receiving devices may elect whether or not to substitute the alternate versions for the primary. In the example content shown in FIG. 3, alternate scenes are provided for Scene 2 and Scene 4, but not for Scene 1 and Scene 3. Similarly, alternate dialog is provided for only portions of the dialog, as can be seen by comparing the example dialog tracks 12 and 64.
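On the receiving side, carrying both versions lets a suitably equipped device choose scene by scene. The sketch below is a hypothetical illustration of that election, not the patent's playback mechanism; all identifiers are made up.

```python
# Hypothetical playback-side election: a receiver carrying both versions can
# substitute the alternate scene for the primary on a scene-by-scene basis.
def build_playlist(primary, alternates, use_alternates=True):
    """primary: list of (scene_name, media_id); alternates: dict scene_name -> media_id."""
    playlist = []
    for scene, media_id in primary:
        if use_alternates and scene in alternates:
            playlist.append(alternates[scene])   # play the alternate version
        else:
            playlist.append(media_id)            # play the primary version
    return playlist

primary = [("Scene 1", "v1"), ("Scene 2", "v2"), ("Scene 3", "v3"), ("Scene 4", "v4")]
alternates = {"Scene 2": "v2_alt", "Scene 4": "v4_alt"}
print(build_playlist(primary, alternates))         # ['v1', 'v2_alt', 'v3', 'v4_alt']
print(build_playlist(primary, alternates, False))  # ['v1', 'v2', 'v3', 'v4']
```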
  • [0025]
    FIG. 4 shows an illustrative example of the post-edit content flow supporting dynamic content substitution consistent with certain embodiments of the present invention. In order to remain compatible with conventional video encoders, the nonlinear editing system fills the periods between alternate video sequences (alternate scenes) with a synthesized black screen in order to create a continuous video stream, which the encoder will accept, for purposes of assembling the final content. The encoding of primary and alternate video can occur using the same encoder, so that the two processes occur serially, or can use multiple video encoders as shown to encode the two video streams in parallel. When processed serially, the editing system communicates with the encoder so that synchronization information can be inserted by the encoder, using any suitable protocol, in both resultant compressed data streams for post-encoding reprocessing to combine the two video streams with proper synchronization.
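The gap-filling step can be illustrated as follows: place the alternate scenes at their frame positions and pad everything in between with synthesized black so the encoder sees one continuous stream. This is a toy sketch with invented frame identifiers, not the editing system's actual implementation.

```python
# Toy sketch of gap filling: place alternate scenes at their frame positions and
# pad everything else with synthesized black so the encoder sees one continuous stream.
BLACK = "BLACK"

def fill_with_black(alternate_scenes, total_frames):
    """alternate_scenes: list of (start_frame, frames) pairs; frames is a list of frame ids."""
    stream = [BLACK] * total_frames
    for start, frames in alternate_scenes:
        stream[start:start + len(frames)] = frames
    return stream

# Two short alternate scenes on a 10-frame timeline (tiny numbers for clarity).
scenes = [(2, ["a1", "a2"]), (7, ["b1", "b2"])]
print(fill_with_black(scenes, 10))
# ['BLACK', 'BLACK', 'a1', 'a2', 'BLACK', 'BLACK', 'BLACK', 'b1', 'b2', 'BLACK']
```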
  • [0026]
    As shown in FIG. 4, the process depicted in FIG. 2 is supplemented with an alternate video path 72 and an alternate audio path 74. The alternate video path 72 incorporates additional scene sequencing, in which the black screen is inserted at 78, and either a second video encoder 80 or a second, sequential use of video encoder 46 (both of which are represented by video encoder 80 in this depiction). During this encoding process for the alternate video, PIDs are utilized in a conventional manner to identify related video packets. In a similar manner, the alternate audio path 74 includes sequencing at 82, with the alternate dialog being mixed as appropriate with other audio tracks before digital mixdown at 84 and audio compression at 86. As with the video, the audio can either be processed in parallel at each stage using separate hardware, or in series using the same hardware as the primary audio processing. During this encoding process for the alternate audio, PIDs are utilized in a conventional manner to identify related audio packets. Synchronization information is derived from the two video streams at 88.
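As a simple illustration of the conventional PID usage mentioned here, each elementary stream's packets can be tagged with that stream's PID at encode time so related packets remain identifiable downstream. The PID values and packet layout below are hypothetical, not values specified by the patent.

```python
# Hypothetical PID tagging of elementary-stream packets at encode time.
PIDS = {"primary_video": 0x100, "alternate_video": 0x101,
        "primary_audio": 0x110, "alternate_audio": 0x111}

def packetize(stream_name, payloads):
    """Wrap each payload with the PID assigned to its elementary stream."""
    pid = PIDS[stream_name]
    return [{"pid": pid, "payload": p} for p in payloads]

alt_audio = packetize("alternate_audio", [b"frame0", b"frame1"])
print([hex(p["pid"]) for p in alt_audio])  # ['0x111', '0x111']
```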
  • [0027]
    The two compressed audio outputs and the two compressed video outputs and the synchronization information are processed using a device referred to herein as an A/V processor 90, whose operation is depicted in connection with FIG. 5. The processed audio and video are stored as finished content at storage 54 as described previously.
  • [0028]
    The two compressed content multiplexes, the original (primary) version and the second stream containing only the portions available for substitution, both with added synchronization marks, are inserted into an A/V processor. The operation of this processor 90 is shown in FIG. 5. The A/V processor 90 performs four major functions: alternate stream “trimming” at 92, content synchronization at 94, PID mapping at 96 and content remultiplexing at 98. These functions can be carried out using a programmed processor (or multiple programmed processors operating in concert) in certain embodiments.
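The four stages named above can be sketched as a small pipeline. This is a schematic outline under assumed packet and map structures (dicts with 'pid', 'sync' and 'filler' fields), not the actual A/V processor 90.

```python
# Schematic sketch of A/V processor 90: trim (92), synchronize (94),
# remap PIDs (96), remultiplex (98). Packet field names are assumptions.
def trim_filler(alternate_packets):
    """Drop the black-screen/muted-audio filler packets from the alternate stream."""
    return [p for p in alternate_packets if not p.get("filler")]

def synchronize(primary_packets, alternate_packets):
    """Order all packets by their synchronization marks so related content is adjacent."""
    return sorted(primary_packets + alternate_packets, key=lambda p: p["sync"])

def remap_pids(packets, pid_map):
    """Give each stream a unique PID in the combined multiplex."""
    return [{**p, "pid": pid_map.get(p["pid"], p["pid"])} for p in packets]

def remultiplex(packets):
    """Placeholder: a real remultiplexer would also correct PCRs and re-stamp headers."""
    return packets

def av_processor(primary, alternate, pid_map):
    return remultiplex(remap_pids(synchronize(primary, trim_filler(alternate)), pid_map))
```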
  • [0029]
    The alternate content contains blank video (black screen) and muted audio between segments of alternate content. This is a byproduct of preparing the content for compression. The A/V processor 90 trims all black screen content and muted audio at 92 to allow the alternative content to be multiplexed into a primary transport stream, in a manner similar to that described for selectively multiple-encrypted content in the applications referenced above.
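One hedged way to picture the trimming at 92 is to use the edit list's segment boundaries: only packets whose timestamps fall inside an alternate segment are kept. The patent does not specify how filler is identified, so the segment table and packet fields below are assumptions.

```python
# Hypothetical trimming by edit-list boundaries: keep only packets whose timestamps
# fall inside an alternate segment; everything else is black-screen/muted filler.
ALT_SEGMENTS = [(60.0, 150.0), (200.0, 260.0)]  # (start, end) in master-timeline seconds

def is_filler(ts):
    return not any(start <= ts < end for start, end in ALT_SEGMENTS)

def trim(packets):
    """packets: dicts carrying a 'ts' field in master-timeline seconds."""
    return [p for p in packets if not is_filler(p["ts"])]

packets = [{"ts": 10.0}, {"ts": 75.0}, {"ts": 175.0}, {"ts": 230.0}]
print([p["ts"] for p in trim(packets)])  # [75.0, 230.0]
```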
  • [0030]
    Next, at 94, the processor uses synchronization marks inserted by the encoders to allow the alternate content to be correctly located temporally within the primary transport stream so that primary audio and/or video content having alternate audio and/or video content can be contextually located in adjacent positions. That is to say, if the data are stored in packets, the primary audio or video and alternate audio or video are preferably situated in adjacent packets or nearby packets for ease of retrieval. This information is obtained from the synchronization information derived at 88 for the two video streams.
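A minimal sketch of this synchronization step: merge the trimmed alternate packets into the primary stream in order of their synchronization marks, so packets covering the same moment land adjacent or nearby. The 'sync' field is an assumed stand-in for the encoder-inserted marks.

```python
# Assumed packet shape: {'sync': <encoder-inserted mark>, 'pid': <stream PID>, ...}.
import heapq

def interleave(primary, alternate):
    """Merge two streams already ordered by 'sync' so matching content lands adjacent."""
    return list(heapq.merge(primary, alternate, key=lambda p: p["sync"]))

primary = [{"sync": 0, "pid": 0x100}, {"sync": 1, "pid": 0x100}, {"sync": 2, "pid": 0x100}]
alternate = [{"sync": 1, "pid": 0x101}]
print([(p["sync"], hex(p["pid"])) for p in interleave(primary, alternate)])
# [(0, '0x100'), (1, '0x100'), (1, '0x101'), (2, '0x100')]
```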
  • [0031]
    At 96, the PIDs for the audio and video streams may be remapped to provide PIDs which uniquely identify the primary and secondary audio and the primary and secondary video. This provides individually identifiable packets of content that can be multiplexed together. At 98, the A/V processor 90 then merges the alternate content into the primary transport or program stream and provides signaling and formatting that enables suitably equipped playback devices to dynamically select any combination of primary/alternate content during broadcast or playback of the resultant composite content. As part of the merging process, the remultiplexer corrects Program Clock References (PCRs) and performs the other tasks normally associated with digital remultiplexing processes.
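The PID remapping at 96 can be sketched as a lookup that moves the alternate streams onto PIDs distinct from the primary's before remultiplexing. The specific PID numbers below are invented for illustration and are not from the patent.

```python
# Invented PID values: move the alternate streams off the primary's PIDs so every
# packet in the combined multiplex is uniquely identifiable.
PID_MAP = {
    ("primary", 0x100): 0x100,    # primary video keeps its PID
    ("alternate", 0x100): 0x200,  # alternate video gets its own PID
    ("primary", 0x110): 0x110,    # primary audio
    ("alternate", 0x110): 0x210,  # alternate audio
}

def remap(packets, origin):
    """origin is 'primary' or 'alternate'; packets are dicts with a 'pid' field."""
    return [{**p, "pid": PID_MAP[(origin, p["pid"])]} for p in packets]

alt = [{"pid": 0x100, "payload": b"..."}, {"pid": 0x110, "payload": b"..."}]
print([hex(p["pid"]) for p in remap(alt, "alternate")])  # ['0x200', '0x210']
```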
  • [0032]
    The composite, homogeneous output of the processor is then returned to the normal content process flow, where it is stored or forwarded to the distribution phase, either for mastering of packaged media such as DVD, or to a broadcast source such as a video spooler for video on demand (VOD), terrestrial or cable broadcast, or uplink to satellite for Direct Broadcast Satellite (DBS) service.
  • [0033]
    While the illustrative embodiment shown herein depicts providing a single set of alternate content, the process is readily extended to provide several sets of alternate content using the same principles described.
  • [0034]
    Thus, in accordance with certain embodiments consistent with the present invention, a method providing alternate digital audio and video content in a segment of content containing compressed primary audio and encoded primary video involves inserting blank audio in an alternate audio track between segments of alternate audio; inserting black video in an alternate video track between segments of alternate video; synchronizing the alternate audio track to a master timeline; synchronizing the alternate video track to the master timeline; compressing the alternate audio track; compressing the alternate video track; trimming the blank audio from the compressed alternate audio track; trimming the black video from the compressed alternate video track; synchronizing the trimmed compressed alternate audio to locate the trimmed compressed alternate audio temporally with the primary compressed audio; synchronizing the trimmed compressed alternate video to locate the trimmed compressed alternate video temporally with the primary encoded video; and multiplexing the trimmed compressed alternate audio and the trimmed compressed alternate video with the primary compressed audio and the primary encoded video.
  • [0035]
    In certain embodiments, the primary audio and the alternate audio are compressed sequentially using a single audio compressor such as an AC-3, MPEG-1, AAC or DTS (by way of example). In other embodiments, the primary audio and the alternate audio are compressed using a primary and secondary audio compressor. In certain embodiments, the primary video and the alternate video are compressed using primary and alternate video encoders such as MPEG-2, AVC, VC-1 or MPEG-4, compliant video encoders (by way of example). In other embodiments, the primary video and the alternate video are encoded sequentially using a single video encoder. According to certain embodiments, a PID remapper maps the primary audio, the alternate audio, the primary video and the alternate video each to separate PID values. A computer readable storage medium can be used for storing instructions which, when executed on a programmed processor, carry out these processes.
  • [0036]
    In another embodiment, a video editor that provides alternate digital audio and video content in a segment of content containing compressed primary audio and encoded primary video has an audio sequencer that inserts blank audio in an alternate audio track between segments of alternate audio, wherein the alternate audio track is synchronized to a master timeline. A video sequencer inserts black video in an alternate video track between segments of alternate video, wherein the alternate video track is synchronized to the master timeline. A compressor compresses the alternate audio track and an encoder encodes and compresses the alternate video track. The blank audio is trimmed from the compressed alternate audio track and the black video is trimmed from the compressed alternate video track. A synchronizer is used to synchronize the trimmed compressed alternate audio to locate the trimmed compressed alternate audio temporally with the compressed primary audio. A synchronizer is also used to synchronize the trimmed compressed alternate video to locate the trimmed compressed alternate video temporally with the encoded and compressed primary video. A multiplexer multiplexes the trimmed compressed alternate audio and the trimmed compressed alternate video with the primary audio and the primary video.
  • [0037]
    Another video editor consistent with certain embodiments provides alternate digital audio and video content in a segment of content containing primary audio and primary video. It has an audio sequencer that inserts blank audio in an alternate audio track between segments of alternate audio, wherein the alternate audio track is synchronized to a master timeline. A video sequencer inserts black video in an alternate video track between segments of alternate video, wherein the alternate video track is synchronized to the master timeline. A compressor mechanism compresses the primary audio and the alternate audio track. An encoder encodes and compresses the primary video and the alternate video track. The blank audio is trimmed from the compressed alternate audio track and the black video is trimmed from the compressed alternate video track. The trimmed compressed alternate audio is synchronized so that it can be temporally situated with the primary audio. The trimmed compressed alternate video is synchronized to locate it temporally with the primary video. A multiplexer multiplexes the trimmed compressed alternate audio and the trimmed compressed alternate video with the compressed primary audio and the encoded and compressed primary video.
  • [0038]
    Other embodiments will occur to those skilled in the art in view of the above teachings.
  • [0039]
    Those skilled in the art will recognize, upon consideration of the above teachings, that certain of the above exemplary embodiments are or can be based upon use of a programmed processor. However, the invention is not limited to such exemplary embodiments, since other embodiments could be implemented using hardware component equivalents such as special purpose hardware and/or dedicated processors. Similarly, general purpose computers, microprocessor based computers, micro-controllers, optical computers, analog computers, dedicated processors, application specific circuits and/or dedicated hard wired logic may be used to construct alternative equivalent embodiments.
  • [0040]
    Similarly, certain embodiments herein were described in conjunction with specific circuitry that carries out the functions described, but other embodiments are contemplated in which the circuit functions are carried out using equivalent software or firmware embodiments executed on one or more programmed processors. General purpose computers, microprocessor based computers, micro-controllers, optical computers, analog computers, dedicated processors, application specific circuits and/or dedicated hard wired logic and analog circuitry may be used to construct alternative equivalent embodiments. Other embodiments could be implemented using hardware component equivalents such as special purpose hardware and/or dedicated processors.
  • [0041]
    Certain embodiments described herein are or may be implemented using a programmed processor executing programming instructions that are broadly described above in process flow diagrams and that can be stored on any suitable electronic or computer readable storage medium and/or can be transmitted over any suitable electronic communication medium. However, those skilled in the art will appreciate, upon consideration of the present teaching, that the processes described above can be implemented in any number of variations and in many suitable programming languages without departing from embodiments of the present invention. For example, the order of certain operations carried out can often be varied, additional operations can be added, or operations can be deleted without departing from certain embodiments of the invention. Error trapping can be added and/or enhanced and variations can be made in user interface and information presentation without departing from certain embodiments of the present invention. Such variations are contemplated and considered equivalent.
  • [0042]
    While certain illustrative embodiments have been described, it is evident that many alternatives, modifications, permutations and variations will become apparent to those skilled in the art in light of the foregoing description.
Classifications
U.S. Classification: 386/285
International Classification: G11B27/00
Cooperative Classification: G11B27/036, G11B27/034, G11B27/10
European Classification: G11B27/036, G11B27/034, G11B27/10
Legal Events
Feb 13, 2006 (AS): Assignment
Owner name: SONY ELECTRONICS INC., NEW JERSEY
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PEDLOW, JR., LEO M.;REEL/FRAME:017160/0567
Effective date: 20060127
Owner name: SONY CORPORATION, JAPAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PEDLOW, JR., LEO M.;REEL/FRAME:017160/0567
Effective date: 20060127
Aug 22, 2014 (FPAY): Fee payment
Year of fee payment: 4