Publication number: US 20080253369 A1
Publication type: Application
Application number: US 11/736,463
Publication date: Oct 16, 2008
Filing date: Apr 17, 2007
Priority date: Apr 16, 2007
Also published as: US 8711854, US 2012/0189007
Inventors: David R. Oran, William VerSteeg
Original Assignee: Cisco Technology, Inc.
Monitoring and correcting upstream packet loss
US 20080253369 A1
Abstract
An upstream error controller monitors a media stream at a location upstream from an associated set of receivers receiving the media stream. The upstream error controller sends out suppression notices for any media packets lost upstream, causing the receivers to suppress sending lost packet notices. In another embodiment, a repair point joins a primary multicast group with multiple receivers for receiving a native media stream. The repair point also joins a second multicast group, which does not include the associated set of receivers, for receiving multicast backup data used to retransmit or repair the native media stream. In yet another embodiment, the upstream error controller is used in combination with a hybrid packet repair scheme for adaptively switching among unicast retransmission, multicast retransmission, and Forward Error Correction (FEC).
Images (10)
Claims(20)
1. An apparatus, comprising:
one or more processors; and
a memory coupled to the one or more processors comprising instructions executable by the processors, the processors operable when executing the instructions to:
monitor a media stream at a location upstream from an associated set of receivers receiving the media stream;
identify media packets in the media stream lost upstream of the associated set of receivers; and
send out suppression notices to the associated set of receivers that keep the associated set of receivers from sending back lost packet notices.
2. The apparatus according to claim 1 wherein the one or more processors monitoring the media stream are not located in a direct media stream path between a media stream server and the receivers and wherein the one or more processors use a repair path separate from the media stream for sending the suppression notices.
3. The apparatus according to claim 1 wherein the one or more processors:
identify either a block outage for a group of lost media packets or identify individual outages for individual lost media packets;
send a block suppression notice for the identified block outage that causes the receivers to suppress all lost packet notices for some period of time; and
send individual suppression notices for the identified individual outages that cause the receivers to only suppress lost packet notices associated with the individual lost media packets.
4. The apparatus according to claim 1 wherein the one or more processors:
receive a backup data stream separate from the media stream that is not sent to the receivers; and
send out packets from the backup data stream to the receivers for repairing or replacing the lost packets from the media stream.
5. The apparatus according to claim 4 wherein the backup data stream includes either a second redundant media stream or Forward Error Correction (FEC) packets for correcting the media stream.
6. The apparatus according to claim 1 wherein the one or more processors track a number of upstream packets lost upstream of a network monitoring location above a particular associated group of receivers and also track a number of downstream packets lost downstream of the network monitoring location, the one or more processors then sending different retransmission packets or Forward Error Correction (FEC) packets to the receivers according to a pattern or number of lost upstream packets and a pattern or number of lost downstream packets.
7. The apparatus according to claim 1 wherein the one or more processors:
receive lost packet notices that identify downstream packets lost in a downstream portion of a network;
update a lost packet table to reflect the number of lost packet notices; and
dynamically select different types of correction or retransmission schemes for correcting or replacing the lost downstream packets according to the lost packet table.
8. The apparatus according to claim 7 wherein the one or more processors:
dynamically identify a general number of receivers actively receiving the media stream;
identify when upstream packets in an upstream portion of the network are lost;
send out suppression notices that keep the receivers from sending back lost packet notices for the identified upstream packets;
update the lost packet table to reflect the number of lost packet notices that would have normally been received from the identified general number of receivers if not for the suppression notices; and
dynamically select different types of correction or retransmission schemes for correcting or replacing the lost upstream packets and lost downstream packets according to the lost packet table.
9. The apparatus according to claim 1 wherein:
the media stream is transmitted in a Real Time Protocol (RTP) session; and
the one or more processors are associated with a Real Time Control Protocol (RTCP) feedback address for the RTP session and multicast the suppression notices as Real Time Control Protocol (RTCP) messages.
10. An apparatus, comprising:
computer processing logic configured to receive a media stream, detect lost packets in the media stream, and send lost packet notifications to a repair point identifying the detected lost packets; and
the computer processing logic further configured to suppress the lost packet notifications that would normally have been sent when the lost packets are detected, responsive to suppression messages received on a separate repair channel.
11. The apparatus according to claim 10 wherein the computer processing logic stops sending the lost packet notifications for any detected lost packets when the suppression messages identify a complete outage of the media stream.
12. The apparatus according to claim 10 wherein the computer processing logic stops sending the lost packet notifications only for specific lost packets identified by the suppression messages.
13. The apparatus according to claim 10 wherein the computer processing logic uses error correction packets or retransmission packets received from the repair point to recreate or replace the lost packets.
14. The apparatus according to claim 10 wherein the computer processing logic establishes a Real Time Protocol (RTP) session with a media server for receiving the media stream, and establishes a separate Real Time Control Protocol (RTCP) session with the repair point for both sending the lost packet notifications and receiving back the suppression messages.
15. A method, comprising:
joining a primary multicast group with multiple receivers for receiving a native media stream;
joining a second multicast group that does not include a significant fraction of an associated set of receivers for receiving backup data for the native media stream;
identifying lost packets in the native media stream;
identifying the backup data associated with the identified lost packets; and
multicasting the identified backup data to the receivers in the primary multicast group for repairing the lost packets in the native media stream.
16. The method according to claim 15 including:
identifying a general number of receivers actively receiving the media stream;
receiving lost packet notices from the receivers that identify downstream packets lost in a downstream portion of a network;
updating a lost packet table to reflect the number of lost packet notices received from the receivers;
identifying upstream packets lost in an upstream portion of the network;
sending out suppression notices for the identified lost upstream packets that keep the receivers from sending back lost packet notices;
updating the lost packet table to reflect the number of lost packet notices that would have normally been received from the receivers if the suppression notices were not sent; and
dynamically selecting different types of correction or retransmission schemes for correcting or replacing the lost upstream packets and the lost downstream packets according to the lost packet table.
17. The method according to claim 15 including sending out suppression messages that cause the receivers in the primary multicast group to suppress sending lost packet notices for packets lost in the native media stream.
18. The method according to claim 17 including multicasting the suppression messages to all of the receivers that are members of the primary multicast group over a repair channel separate from the native media stream.
19. The method according to claim 16 including:
sending complete outage suppression messages when the entire native media stream is disrupted that cause the receivers to suppress sending any lost packet messages; and
sending specific packet suppression messages that cause the receivers to only suppress sending lost packet messages for identified lost media packets.
20. The method according to claim 15 including:
tracking a number or pattern of packets lost in the native media stream;
sending out Forward Error Correction (FEC) packets to the receivers when the number or pattern of lost packets is more efficiently corrected by the receivers using FEC; and
sending out retransmissions of the lost packets when the number or pattern of lost packets is more efficiently corrected by retransmitting the lost packets.
Description
  • [0001]
The following application is a continuation-in-part of U.S. patent application Ser. No. 11/735,930, filed Apr. 16, 2007, entitled HYBRID CORRECTION SCHEME FOR DROPPED PACKETS, which is incorporated by reference in its entirety.
  • TECHNICAL FIELD
  • [0002]
    The present disclosure relates generally to the field of networking.
  • BACKGROUND
  • [0003]
    Packet switch networks are now being used to transport streaming media, such as video or audio from a media server to multiple receivers, such as computer terminals and Set Top Boxes (STBs). However, packet switched networks typically use a best effort transport that may significantly delay or drop some packets. Retransmission schemes have been designed to retransmit the dropped or delayed media packets to receivers but may be inadequate to resolve or deal with certain packet switched network outages.
  • [0004]
For example, media packets may be multicast by a media server to multiple different receivers. The packet switched network then automatically branches the multicast packets along different network paths to the different receivers in an associated multicast group. Problems arise when the multicast packets are lost upstream of branch points near the leaves of the delivery tree where the receivers are. For example, the upstream packet loss may cause a significant fraction of the receivers to send Negative ACKnowledgments (NACKs) back to the media stream repair point. These numerous returned NACKs, all reporting the same loss, use up network bandwidth and can overwhelm the media stream repair point.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0005]
    FIG. 1 is a block diagram of a network that uses an upstream error controller to handle upstream packet loss.
  • [0006]
    FIG. 2 shows how the upstream error controller in FIG. 1 suppresses NACKs for upstream packet loss.
  • [0007]
    FIG. 3A shows different repair points used to detect upstream packet loss at different locations in a network.
  • [0008]
    FIG. 3B shows how the upstream error controller in FIG. 1 both suppresses NACKs and sends out repair or retransmission packets.
  • [0009]
    FIG. 4 is a flow diagram showing in more detail how the upstream error controller operates.
  • [0010]
    FIG. 5 is a flow diagram showing how the receivers respond to the upstream error controller.
  • [0011]
    FIG. 6 is a block diagram showing how the upstream error controller can be used in conjunction with a hybrid packet repair scheme.
  • [0012]
    FIG. 7 shows NACK tables that are used by the upstream error controller and the hybrid packet repair scheme.
  • [0013]
    FIG. 8 is a flow diagram showing in more detail the combined operation of the upstream controller and hybrid packet repair scheme.
  • DESCRIPTION OF EXAMPLE EMBODIMENTS
  • Overview
  • [0014]
    An upstream error controller monitors a media stream at a location upstream from an associated set of receivers receiving the media stream. The upstream error controller sends out suppression notices for any media packets lost upstream causing the receivers to suppress sending lost packet notices.
  • [0015]
In another embodiment, a repair point joins a primary multicast group with multiple receivers for receiving a native media stream. The repair point also joins a second multicast group, which does not include the associated set of receivers, for receiving multicast backup data used to retransmit or repair the native media stream.
  • [0016]
    In yet another embodiment, the upstream error controller is used in combination with a hybrid packet repair scheme that adaptively switches among unicast retransmission, multicast retransmission, and Forward Error Correction (FEC) depending on the receiver population and the nature of the error prompting the repair operation.
  • Description
  • [0017]
    FIG. 1 shows an Internet network 12 that includes a packet switched network 40 having multiple different nodes 42A-42C. The nodes 42 may be routers, switches, gateways, or any other network processing device that directs packets 24 from media server 14 to different receivers 50A-50H. The media server 14 (media source) could store media locally or receive media from another server or media source via another network, satellite, cable, or any other communication media.
  • [0018]
    The receivers 50 can be any device that receives media packets 24. For example, the receivers 50 could be Personal Computers (PCs), Set-Top Boxes (STBs), Personal Digital Assistants (PDAs), Voice Over Internet Protocol (VoIP) phones, Internet connected televisions, Digital Video Recorders (DVRs), cellular telephones, etc.
  • [0019]
    A repair server, alternatively referred to as a repair point 16, receives and caches the media packets 24 from media stream 22 sent by media server 14 to the receivers 50. The packet switched network 40 includes an upstream portion 40A and a downstream portion 40B. The upstream portion 40A of the network is upstream of a significant fraction of the receivers 50 located in the downstream network portion 40B.
  • Upstream Packet Loss
  • [0020]
    Any combination of media packets 24 may be dropped, lost, and/or delayed for any number of different reasons and at any number of different locations along the network paths from media server 14 to the different receivers 50. Any of the receivers 50 that do not successfully receive any of the media packets 24 may send associated Negative ACKnowledgment (NACK) messages 26 back to the repair point 16.
  • [0021]
    A multicast media packet 24 lost in the upstream portion 40A of the network would likely not be received by any of the receivers 50 that are members of the same multicast group in the associated downstream network portion 40B. Accordingly, every one of the receivers in the participating multicast group would normally send back NACKs 26 to repair point 16. This implosion of NACK messages 26 would use up substantial network bandwidth and possibly overwhelm the repair point 16.
  • [0022]
    To stop NACK implosions, an upstream error controller 20B is operated by one or more processors 18 in the repair point 16. In one embodiment, the upstream error controller 20B is implemented as computer instructions in a memory that are executed by the processor 18. However, the operations of the controller 20B could be implemented in any type of logic device or circuitry.
  • [0023]
    One characteristic worth noting is that the repair point 16 is located in the upstream portion 40A of the network along with the media server 14 and typically receives the media packets 24 prior to the packets being forwarded through the downstream network portion 40B to the receivers 50. Accordingly, the repair point 16 will typically be able to detect an upstream loss or outage in media stream 22 prior to that loss being detected by the receivers 50. This is shown in FIG. 1, where a packet 24 lost on network branch 15A will be identified as a packet loss by repair point 16 on branch 15B and also identified as a packet loss by all of the receivers 50 stemming off of branch 15C.
  • [0024]
    The identified upstream packet loss indicates either the media packet 24 was indeed lost on a common branch 15A upstream of both the repair point 16 and the receivers 50, or the loss was due to a failure of an upstream interface of the repair point 16. The failure of the repair point 16 would be rare and usually detectable by some means that could take the repair point 16 offline for the affected media stream 22.
  • [0025]
    Accordingly, any packet detected as lost by the repair point 16 may be identified as an upstream loss by the upstream error controller 20B. There is no reason for the receivers 50 to issue NACKs 26 for such upstream losses since the repair point 16 is already aware of the packet loss.
  • [0026]
    Referring to FIG. 2, the upstream error controller 20B monitors the media stream 22 at network portion 40A upstream from a significant fraction of the receivers 50 receiving the media stream 22. The upstream error controller 20B can accordingly identify media packets 24 lost upstream of a significant fraction of the receivers 50. In this example, it is apparent that any media packets 24 not received by repair point 16 will also not be received by any of the receivers 50.
  • [0027]
    The upstream error controller 20B will accordingly send out a Suppression NACK (SN) message 25 to prevent the NACK implosion shown in FIG. 1. In one embodiment, the SN message 25 is multicast on a separate repair channel, connection, or session 28 to all of the receivers 50 that are members of the multicast group for media stream 22. In response to receiving the SN message 25, all of the receivers 50 suppress sending back NACKs after detecting the same lost media packet 24.
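To make the repair-channel mechanics concrete, the sketch below packs lost sequence numbers into a minimal suppression-NACK datagram and multicasts it to the receivers. The message layout, type byte, group address, and port are illustrative assumptions; the patent only requires that some suppression notice reach the receivers on a channel separate from the media stream.

```python
# Hypothetical SN message sketch; wire format and addresses are assumptions.
import socket
import struct

REPAIR_GROUP = "239.1.1.2"   # assumed repair-channel multicast group
REPAIR_PORT = 5005           # assumed repair-channel port

def build_suppression_nack(seq_numbers):
    """Pack a minimal SN message 25: a type byte (0x01 = individual
    suppression), a count, then the 16-bit RTP sequence numbers the
    receivers should not NACK."""
    header = struct.pack("!BB", 0x01, len(seq_numbers))
    body = b"".join(struct.pack("!H", s) for s in seq_numbers)
    return header + body

def multicast_suppression_nack(seq_numbers, sock=None):
    """Multicast the SN message on the separate repair channel 28."""
    msg = build_suppression_nack(seq_numbers)
    sock = sock or socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
    sock.sendto(msg, (REPAIR_GROUP, REPAIR_PORT))
    return msg
```

A single multicast SN replaces the per-receiver NACK flood of FIG. 1: one small datagram reaches every member of the media stream's multicast group.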
  • [0028]
    It should be understood that the upstream network portion 40A and downstream network portion 40B may be part of a larger network. Referring to FIG. 3A, there may be multiple downstream sub-networks 40B_1-40B_N that are each serviced by a different associated repair point 16A-16N, respectively. Each of these downstream sub-networks 40B_1-40B_N may still be served by the same media source 14. Thus, any particular repair point 16 would be upstream of what is referred to as a significant fraction of associated receivers 50. Alternatively, each repair point 16 may be described as having an associated set of downstream receivers 50.
  • [0029]
    Any combination of receivers 50 connected to the different downstream sub-networks 40B_1-40B_N could be members of the same multicast group. An upstream loss may be on a link leading only to a subset of the repair points 16 and correspondingly to a subset, but not all, of the downstream sub-networks 40B in network 12. In this case the NACK implosion may only apply to the one or more repair points 16 that are down-tree from the point of loss.
  • [0030]
    For example, a lost packet detected by repair point 16B may only cause a NACK implosion on downstream sub-network 40B_2. Similarly, a lost packet detected by repair point 16N may only cause a NACK implosion on downstream sub-network 40B_N.
  • [0031]
    The repair points 16 can be located anywhere in the overall network 12. It is also possible that some repair points 16 may be upstream of other repair points. For example, repair point 16C is upstream of repair points 16A and 16B. The downstream sub-networks 40B serviced by a particular repair point 16 may cover any combination or arrangement of nodes 42, sub-network branches, and receivers 50.
  • Packet Repair
  • [0032]
    Referring to FIG. 3B, there are typically three cases for upstream loss. A total outage is where the repair point 16 stops receiving packets for media stream 22 altogether. An un-repairable loss is not a complete outage, but the repair point 16 cannot repair the lost packets. For example, there may be too many lost packets to repair, or the repair point 16 may not have the data required to repair the media stream. A repairable loss is where the repair point 16 has the ability to repair the one or more lost packets.
  • [0033]
    FIG. 3B shows the repair point 16 receiving a separate backup data stream 44 that may be a redundant media stream 45 and/or an FEC stream 46. A scheme for providing a redundant media stream is described in copending U.S. patent application entitled: UNIFIED TRANSMISSION SCHEME FOR MEDIA STREAM REDUNDANCY, Filed: Mar. 14, 2007, Ser. No. 11/686,321 which is herein incorporated by reference.
  • [0034]
    The backup data stream 44 might only be sent to one or more repair points 16 and not transmitted all the way through network 40 to the receivers 50. For example, the one or more repair points 16 can join a second multicast group associated with the backup data stream 44. Accordingly, the media source 14 will only multicast the backup data stream 44 to the repair points 16, thus reducing bandwidth utilization in network 40.
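As a concrete illustration of how a repair point might join the backup-data group described above, the sketch below uses the standard `ip_mreq` membership option; the group address and port are illustrative assumptions, not values from this disclosure.

```python
# Sketch, assuming IPv4 and a hypothetical backup-data group address.
import socket
import struct

def membership_request(group_ip, iface_ip="0.0.0.0"):
    """Pack the ip_mreq structure consumed by IP_ADD_MEMBERSHIP."""
    return struct.pack("!4s4s", socket.inet_aton(group_ip),
                       socket.inet_aton(iface_ip))

def join_backup_group(group_ip, port):
    """Open a UDP socket and join the (assumed) backup-data multicast group,
    as a repair point would for backup data stream 44."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    membership_request(group_ip))
    return sock
```

Because only repair points issue this join, routers never forward the backup stream into the downstream sub-networks, which is what saves the bandwidth noted in paragraph [0034].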
  • [0035]
    Referring to both FIG. 3B and FIG. 4, the upstream error controller 20B in operation 54A identifies lost upstream media packets 24 and identifies the type of loss in operation 54B. For example, as described above, the upstream error controller may distinguish between a total media stream outage, an un-repairable loss, and a repairable loss. When a total outage is identified in operation 54C, the upstream error controller may multicast a total outage suppression NACK 25 to the receivers 50 in operation 54D.
  • [0036]
    For example, all or a large portion of the media stream 22 may not be successfully received by repair point 16. The total outage suppression NACK 25 accordingly directs the receivers 50 to suppress all NACKs for all lost packets for some period of time. In one embodiment, the total outage NACK 25 is sent using a Real Time Control Protocol (RTCP) report. But any other type of messaging protocol could alternatively be used for sending the suppression NACK 25. The error controller 20B would then continue to periodically multicast the total outage NACK 25 until the native media stream 22 returns to repair point 16.
  • [0037]
    In operation 54E, the error controller 20B identifies one or more individual lost packets. Accordingly, the upstream error controller in operation 54F multicasts individual NACK suppression packets 25 associated with the specific missing media packets 24. These suppression packets 25 direct the receivers 50 to not send NACKs for the particular media packets identified in the suppression NACK 25, since those identified media packets will not be forthcoming. Multiple individual suppression NACKs can also be sent in the same suppression message 25.
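The branch between operations 54C (total outage) and 54E (individual losses) can be sketched as a simple classifier over expected versus received sequence numbers. The fractional threshold used to declare an outage is an assumption for illustration; the disclosure does not fix one.

```python
# Sketch of the loss-type decision in operations 54B/54C/54E.
# The 0.8 outage threshold is an illustrative assumption.
def classify_upstream_loss(expected_seqs, received_seqs, outage_threshold=0.8):
    """Return ("total_outage", []) when most of the stream is missing,
    otherwise ("individual", missing_seqs) for per-packet suppression."""
    missing = [s for s in expected_seqs if s not in received_seqs]
    if expected_seqs and len(missing) >= outage_threshold * len(expected_seqs):
        return ("total_outage", [])
    return ("individual", missing)
```

A "total_outage" result would trigger a periodically repeated block suppression NACK, while "individual" results map each missing sequence number into a packet-specific suppression NACK.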
  • [0038]
    There are two sub-cases where the identified lost upstream packets 24 are repairable, depending on whether the repair point 16 recovers the lost data via FEC or via a redundant media stream. When the backup data stream 44 comprises FEC packets 46, the lost upstream packets 24 are identified as recoverable via FEC in operation 54G. The FEC packets 36 used by repair point 16 to reconstruct the lost packets are then multicast to the receivers 50 in operation 54I. The receivers 50 then perform the corresponding reconstruction using the minimal number of FEC repair packets.
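Paragraph [0058] notes that no particular FEC form is required, so the sketch below uses the simplest packet-level code, a single XOR parity packet, purely to illustrate how one repair packet can rebuild one lost media packet; real deployments would use a stronger scheme.

```python
# Illustrative single-parity FEC; not a scheme mandated by the disclosure.
def xor_parity(packets):
    """Compute one XOR parity packet over equal-length media packets."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

def recover_lost(received, parity):
    """Rebuild the single missing packet from the survivors plus the parity:
    XORing everything that arrived cancels out to the lost payload."""
    return xor_parity(received + [parity])
```

With this code, one multicast FEC packet per protection block can repair a different lost packet at each receiver, which is why FEC wins when losses are uncorrelated across the receiver population.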
  • [0039]
    The lost media packets 24 may alternatively be recovered by the repair point 16 via redundant media stream 45. In this case, the upstream error controller 20B in operation 54H constructs RTP retransmission packets 34 from the redundant media stream 45. The retransmission packets 34 are then multicast over the multicast repair session 27 to the receivers 50 in operation 54J.
  • [0040]
    The upstream error controller 20B may also multicast a NACK, RTCP message, or suppression NACK 25 to the receivers 50 that identifies the particular type of packets 34 or 36 sent to repair the lost media packets.
  • [0041]
    FIG. 5 explains in more detail the operations performed by the receivers. Referring to FIGS. 3 and 5, computer processing logic in the receivers 50, such as a processor executing instructions, is configured to detect lost packets in the media stream 22 and send lost packet notifications, such as NACKs 26 (FIG. 1), to the repair point 16. The computer logic in the receivers 50 is further configured in operation 56A to receive and detect suppression NACKs 25 from the repair point 16 that suppress the NACKs 26 that would normally have been sent out when a lost packet is detected.
  • [0042]
    The receiver in operation 56B determines when the suppression NACK 25 is associated with a total media stream outage. For example, the NACK message 25 may include a tag identifying a total media stream outage. In this case, the receiver 50 stops sending any NACKs back to the repair point 16 for some predetermined period of time. The receiver 50 could alternatively identify the suppression NACK 25 as a packet specific suppression in operation 56D. Accordingly, the receiver in operation 56E will not send NACKs for the specific media packets identified in the suppression NACK 25.
  • [0043]
    For repairable packet losses, the receiver may receive some notification in operation 56F that the lost packet is repairable via FEC. This notification may be provided along with the FEC packets 36, the suppression NACK 25, or with some other message. The receiver in operation 56G then suppresses any NACKs that would have normally been sent and then uses the received FEC packets for repairing the lost packets.
  • [0044]
    Alternatively, the receiver may receive a notification in operation 56H that the lost packet is repairable via retransmission. This notification again may be provided along with the actual retransmission packets 34, in the suppression NACK 25, or in some other message. The receiver in operation 56I suppresses any NACKs that would normally have been sent and then uses the retransmission packets 34 received over the repair session 27 to repair the lost packets.
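The receiver-side decision logic of operations 56A-56E can be sketched as a small state holder; the dictionary-style message fields are illustrative assumptions standing in for whatever tagged RTCP report format carries the suppression NACK.

```python
# Sketch of receiver suppression state (operations 56A-56E).
# Message field names ("total_outage", "seqs") are illustrative assumptions.
class ReceiverNackLogic:
    def __init__(self):
        self.suppressed = set()   # sequence numbers named in packet-specific SNs
        self.total_outage = False # set while a total-outage SN is in effect

    def on_suppression_nack(self, msg):
        """Operation 56A: record a received suppression NACK 25."""
        if msg.get("total_outage"):
            self.total_outage = True   # operation 56C: suppress everything
        else:
            self.suppressed.update(msg.get("seqs", []))  # operation 56E

    def should_send_nack(self, seq):
        """Consulted whenever a lost packet is detected in the media stream."""
        if self.total_outage:
            return False
        return seq not in self.suppressed
```

A production receiver would additionally age out the total-outage flag after the predetermined period mentioned in paragraph [0042], rather than holding it indefinitely.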
  • Hybrid Packet Repair for Downstream Packet Loss
  • [0045]
    Upstream packet repair can be combined with a hybrid packet repair scheme described in co-pending patent application Ser. No. 11/735,930, entitled: HYBRID CORRECTIVE SCHEME FOR DROPPED PACKETS, filed Apr. 16, 2007 which is herein incorporated by reference. The hybrid packet repair scheme adaptively switches among unicast retransmission, multicast retransmission, and FEC depending on the receiver population and the nature of the error prompting the repair operation.
  • [0046]
    When there is a packet loss in the downstream network portion 40B in FIG. 1, NACKs 26 are still sent by receivers 50. The hybrid packet repair scheme then determines the most efficient unicast, multicast, or FEC scheme for repairing the lost downstream packets according to the received NACK pattern.
  • [0047]
    FIG. 6 shows the retransmission server (repair point) 16 in more detail. The processor 18 operates both a hybrid packet repair scheme 20A and the upstream error controller 20B that in one embodiment are computer executable software instructions. For each media channel 22, the repair point 16 caches the packet data 24A necessary for repairing any of the lost packets in media channel 22. The hybrid packet repair scheme 20A operates in conjunction with a retransmission cache 60 that caches the media packets 24A transmitted by the media server 14. Retransmission cache 60 is used in conjunction with a NACK table 62 that counts the number of NACKs 26 received for each cached media packet 24A. For example, the NACKs 26 identify the sequence numbers for lost media packets. Each time a NACK 26 is received by repair point 16, a NACK count 66 for the associated lost packets 64 are incremented in NACK table 62.
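The counting behavior of NACK table 62 can be sketched directly: each arriving NACK 26 names the sequence numbers it failed to receive, and the table increments one counter per named packet. This is an illustrative sketch, not the patent's implementation.

```python
# Sketch of NACK table 62: one counter per lost RTP sequence number.
from collections import Counter

class NackTable:
    def __init__(self):
        self.counts = Counter()

    def on_nack(self, lost_seqs):
        """Process one NACK 26, which lists the sequence numbers of the
        media packets the sending receiver did not get."""
        for seq in lost_seqs:
            self.counts[seq] += 1

    def count(self, seq):
        """NACK count 66 for the cached media packet with this sequence."""
        return self.counts[seq]
```

The resulting per-packet counts are exactly the table states 62A-62C discussed below, from which the hybrid repair scheme reads its NACK patterns.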
  • [0048]
    Based on the NACK pattern in NACK table 62, the hybrid packet repair scheme 20A sends different combinations of unicast retransmission packets 32, multicast retransmission packets 34 and/or FEC packets 36 to the receivers 50. The repair packets are used to replace the lost packets identified in the NACK messages 26.
  • [0049]
    The repair point 16 can also gauge the intensity of a NACK implosion even when NACKs might be lost due to congestion or the inability of the repair point to receive and process all the NACKs 26. The three loss cases of individual loss, correlated loss, and outage on the downstream primary multicast stream 22 can also be analyzed. In the case of correlated loss, the repair point 16 can also determine enough about the loss pattern to choose among unicast packet retransmission, multicast packet retransmission, and FEC repair.
  • Combining Hybrid Packet Repair with Upstream Packet Loss
  • [0050]
    Upstream packet loss detection provided by the upstream error controller 20B can be combined with the hybrid packet repair provided by the hybrid packet repair scheme 20A. Whenever a media packet 24 is identified as lost in the upstream portion 40A of the packet switched network (FIG. 1), the upstream error controller 20B increases the lost packet count for each identified lost media packet 24 in NACK table 62. The packet count 66 is increased by approximately the number of receivers 50 in the multicast group associated with the media stream 22.
  • [0051]
    For example, two different media packets 24 may be identified by the upstream error controller 20B as being lost in the upstream network portion 40A. The upstream controller 20B may have also previously determined an approximate number of receivers 50 in the multicast group receiving the media stream 22. For example, the receivers 50 may periodically send RTCP reports 58 to repair point 16 identifying their associated media streams. The repair point uses these RTCP reports 58 to identify the number of receivers 50 actively receiving different media streams. The upstream error controller 20B then increases the NACK count 66 for the two lost media packets in media stream 22 by the number of identified active receivers. Identifying receiver density is further explained in co-pending application Ser. No. 11/735,930, entitled: HYBRID CORRECTIVE SCHEME FOR DROPPED PACKETS, filed Apr. 16, 2007 which has already been incorporated by reference.
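The proxy step in paragraphs [0050]-[0051] reduces to a bulk increment: for each upstream-lost packet, add the estimated active-receiver count to the table instead of waiting for NACKs that the suppression notice prevented. A minimal sketch, assuming a plain per-sequence count mapping:

```python
# Sketch of the upstream error controller acting as a NACK proxy.
def proxy_upstream_loss(nack_counts, lost_seqs, active_receivers):
    """After multicasting a suppression NACK for lost_seqs, credit the
    NACK table with the notices the active_receivers (estimated from
    periodic RTCP reports 58) would otherwise have sent."""
    for seq in lost_seqs:
        nack_counts[seq] = nack_counts.get(seq, 0) + active_receivers
    return nack_counts
```

This keeps the table statistically equivalent to the un-suppressed case, so the hybrid repair scheme can choose among unicast, multicast, and FEC without ever seeing the implosion.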
  • Selecting Repair Schemes
  • [0052]
    FIG. 7 shows different NACK patterns 70 that may determine the type of repair scheme 32, 34, or 36 used to repair lost packets. It should be understood that the example NACK patterns shown in FIG. 7 are only for illustrative purposes. The actual number of NACKs and the number of associated lost packets considered by the hybrid packet repair scheme 20A may vary according to the type of network, network bandwidth, type of transmitted media, number of receivers 50, etc.
  • [0053]
    Referring to FIGS. 2, 6 and 7, a first example NACK pattern 70A in NACK table state 62A shows one NACK received for a first media packet and one NACK received for a seventh media packet. In this example, the hybrid repair scheme 20A may determine that sending two unicast retransmission packets 32 (FIG. 6) is the most efficient scheme for repairing the two lost packets. For example, sending two unicast retransmission packets 32 would use less network bandwidth than sending two multicast retransmission packets.
  • [0054]
    A second example NACK pattern 70B in NACK table state 62B shows several hundred NACKs received only for the third media packet. In this state, the hybrid packet repair scheme 20A may determine that sending one multicast retransmission packet 34 for the third lost packet is most efficient. For example, sending one multicast retransmission packet 34 uses less bandwidth than sending 200 separate unicast packets 32 to each one of the individual receivers sending one of the 200 NACKs 26.
  • [0055]
    As described above, if the third packet was lost in the upstream network portion 40A (FIG. 2), then the upstream error controller 20B may have previously sent out a suppression NACK 25 (FIG. 2) and then assumed, based on the RTCP reports 58 (FIG. 6), that 200 receivers would have eventually sent NACKs back to the repair point 16.
  • [0056]
    Accordingly, the upstream error controller 20B operates as a proxy for the receivers 50 and artificially adds 200 NACKs to the third packet in table state 62B. The hybrid packet repair scheme 20A then operates in the manner described above by selecting a particular repair scheme based on the number and pattern of NACKs in table state 62B.
  • [0057]
    A third example NACK pattern 70C in NACK table state 62C indicates three different packets have been lost by multiple different receivers 50. In this condition, the hybrid packet repair scheme 20A may determine that sending two FEC packets is the most efficient way to repair the lost packets. For example, two FEC packets may be able to repair all three lost packets 1, 2, and 6. Thus, multicasting two FEC packets 36 (FIG. 2) would be more efficient than sending 110 individual unicast retransmission packets 32 or sending three separate multicast retransmission packets 34.
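The cost comparison running through these examples can be sketched numerically. This is a minimal model invented for illustration (the function name, the pure packet-count cost metric, and the tie-breaking rule are assumptions); a real implementation would also weigh the factors the text mentions, such as network type, media type, and bandwidth limits:

```python
# Hypothetical bandwidth-cost model for choosing a repair scheme.
# Cost is measured in packets sent toward the receivers.

def repair_cost(nacks_per_pkt, fec_pkts_needed=None):
    """nacks_per_pkt maps a lost packet's sequence number to its NACK count."""
    # Unicast: one retransmission per NACKing receiver per lost packet.
    unicast = sum(nacks_per_pkt.values())
    # Multicast: one retransmission per lost packet, sent to the whole group.
    multicast = len(nacks_per_pkt)
    # Listing unicast first makes it win ties, since unicast copies travel
    # only toward the NACKing receivers rather than the entire group.
    costs = {"unicast": unicast, "multicast": multicast}
    if fec_pkts_needed is not None:
        # FEC: a few multicast FEC packets can cover several distinct losses.
        costs["fec"] = fec_pkts_needed
    return min(costs, key=costs.get)

print(repair_cost({1: 1, 7: 1}))                              # pattern 70A -> unicast
print(repair_cost({3: 200}))                                  # pattern 70B -> multicast
print(repair_cost({1: 40, 2: 40, 6: 30}, fec_pkts_needed=2))  # pattern 70C -> fec
```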
  • [0058]
    The FEC packets 36 can work with any number of packet-level FEC schemes, and do not require any particular form of FEC. FEC mapping onto IP protocols is described in a large number of Internet Engineering Task Force (IETF) Request For Comments (RFCs) and drafts, such as RFC3009, RFC3452, RFC3695, and therefore is not described in any further detail.
  • [0059]
    A fourth example NACK pattern 70D in NACK table state 62D indicates five different packets 1, 2, 4, 5, and 7 have been lost. In this case a combination of unicast retransmission packets 32 and multicast retransmission packets 34 may be the most efficient repair scheme. For example, unicast retransmission packets 32 may be sent to the relatively small number of individual receivers that lost packets 1, 2, 5, and 7 and a single multicast retransmission packet 34 may be sent to all of the associated receivers 50 for lost packet 4.
  • [0060]
    The NACK pattern 70D could be a result of a combination of both upstream and downstream packet losses. For example, packet 4 could have been lost in the upstream network portion 40A and packets 1, 2, 5, and 7 could have been lost somewhere in the downstream network portion 40B.
  • [0061]
    In this example, both the upstream error controller 20B and the hybrid packet repair scheme 20A work in combination to record the NACK pattern 70D in NACK table state 62D. The upstream error controller 20B detects lost packet 4, sends suppression NACK 25, and then adds the 130 NACK count to table 62D for lost packet 4 on behalf of the associated receivers 50. In conjunction, the hybrid packet repair scheme 20A increments the NACK count for the lost packets 1, 2, 5, and 7 according to the number of NACKs 26 that are actually received by the repair point 16 from particular receivers 50.
  • [0062]
    A fifth example NACK pattern 70E in NACK table state 62E indicates every one of the packets 1-7 has been lost by different combinations of receivers. In this condition, the hybrid packet repair scheme 20A or the upstream error controller 20B may determine that there is insufficient bandwidth to repair any of the lost packets and may abort any attempt to repair lost packets. In addition, the upstream error controller 20B may also send out a total outage suppression NACK 25 or specific packet suppression NACKs 25 to prevent a NACK implosion.
  • [0063]
    In the case of upstream loss, the bandwidth computation can be more aggressive about using bandwidth for multicast or FEC repair. The reason is that a packet lost upstream never consumed any bandwidth on the downstream links. Therefore, sending as many retransmission or FEC packets as the number of lost upstream packets may require essentially no extra bandwidth.
  • [0064]
    The upstream packet loss may be separately identified and separate criteria used by the hybrid packet repair scheme 20A for determining whether to use a retransmission scheme, FEC repair, or abort packet repair. Referring still to FIG. 7, a NACK table 62F may include a first column 63A associated with the number of lost upstream packets and a second column 63B associated with the number of lost downstream packets.
  • [0065]
    The count value inserted by upstream error controller 20B in column 63A may be the number of projected NACKs that would have normally been returned by the downstream receivers 50 if no suppression NACK was sent. Alternatively, the count value in column 63A may simply be the number of detected lost upstream packets. The count value inserted by hybrid packet repair scheme 20A in column 63B is the number of NACKs returned by the receivers 50 due to downstream packet loss.
  • [0066]
    In one comparison, the total number of NACKs is the same in table 62E and table 62F. Recall that the hybrid packet repair scheme 20A may have decided not to provide any repair based on the NACK pattern in table 62E. However, isolating the number of lost upstream packets in column 63A of table 62F may change that no-repair decision. For example, as explained above, the repair scheme 20A may determine that little or no additional bandwidth is required to repair the lost upstream packets identified in column 63A. Accordingly, the repair scheme may apply a heavier weight or bias toward correcting the upstream packets identified in column 63A. Separate criteria or weighting similar to those described above for table states 62A-62E are then used when deciding how to correct the lost downstream packets identified in column 63B.
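One way to picture the two-column accounting of table 62F and the upstream bias is the sketch below; the dictionary layout, budget threshold, and function name are invented for illustration:

```python
# Invented sketch of the two-column accounting in table 62F. Packets lost
# upstream never consumed downstream bandwidth, so the planner is biased
# toward always repairing them; downstream losses are judged against an
# assumed per-packet NACK budget.

def plan_repairs(table, downstream_nack_budget):
    """table maps sequence number -> (upstream_count, downstream_count)."""
    repair, skip = [], []
    for seq, (upstream, downstream) in table.items():
        if upstream > 0:
            # Column 63A: retransmission reuses the bandwidth the lost
            # packet would have consumed, so repair is nearly free.
            repair.append(seq)
        elif downstream <= downstream_nack_budget:
            # Column 63B: repair downstream losses only within the budget.
            repair.append(seq)
        else:
            skip.append(seq)
    return repair, skip

# Packet 4 lost upstream (130 projected NACKs); packets 1 and 2 lost downstream.
table_62f = {4: (130, 0), 1: (0, 5), 2: (0, 500)}
repair, skip = plan_repairs(table_62f, downstream_nack_budget=100)
print(repair, skip)   # [4, 1] [2]
```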
  • [0067]
    FIG. 8 shows another example of operations performed by the hybrid packet repair scheme 20A in combination with the upstream error controller 20B. In order to repair the media stream, the repair point 16 needs to determine which packets to retransmit using unicast packets 32, which packets to retransmit using multicast packets 34, whether to switch to FEC-based repair 36 rather than retransmitting, or whether to give up when there is insufficient aggregate bandwidth to sufficiently repair the media stream 22 and satisfy the receivers 50.
  • [0068]
    Operation 76 monitors for packets lost in the upstream network portion 40A. In operation 78, suppression NACKs may be multicast to associated receivers to suppress the impending NACK implosion. Operation 78 will also account for the suppressed NACKs by increasing the NACK count in the NACK table 62 by the number of associated media stream receivers 50. In operation 80, NACK packets 26 are monitored for any other downstream packet loss.
  • [0069]
    As described above, operations 78 and 80 may add to one common upstream/downstream NACK count value as shown in tables 62A-62E in FIG. 7. Alternatively, operation 78 may separately count and track the lost upstream packets and operation 80 may separately count and track the lost downstream packets. Any subsequent decisions regarding which type of repair, if any, to provide may then be based either on a combined NACK count as shown in tables 62A-62E in FIG. 7 or on the separate upstream and downstream NACK or lost packet counts as shown in table 62F in FIG. 7.
  • [0070]
    The number and/or pattern of monitored NACKs, in combination with identified upstream packet loss and limits on network bandwidth, may indicate in operation 82 that no repair should be performed. Accordingly, the identified lost media packets 24 are not repaired in operation 84.
  • [0071]
    Otherwise, operation 86 determines if error correction is available for repairing the lost packets. For example, when a limited number of different packets are indicated as lost, error correction packets 36 may be sent to the receivers 50. The receivers then locally recreate the data from the lost packets using the FEC packets 36.
  • [0072]
    In operation 88, the NACK pattern in table 62 (FIG. 2) may indicate that unicast repair is the most efficient scheme for repairing lost packets. Accordingly, the identified lost packets are sent using unicast retransmissions in operation 90 to the specific receivers identifying the lost packets.
  • [0073]
    In operation 92, the NACK pattern in table 62 may indicate that multicast retransmission is the most efficient scheme for repairing lost packets. Accordingly, multicast retransmissions of the identified lost packets are sent in operation 94 to all of the receivers in the multicast group. In operation 96, the NACK pattern in table 62 may indicate that both unicast retransmission and multicast retransmission should be used. Accordingly in operation 98 unicast retransmissions of certain lost packets are sent to specific receivers 50 and multicast retransmissions of other lost packets are sent to all of the receivers 50 in the multicast group. In operation 99, forward error correction may be used whenever applicable to improve repair efficiency.
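The decision flow of operations 76-99 might be rendered as the sketch below; the helper name, the majority threshold for choosing multicast over unicast, and the boolean inputs are all assumptions made for this sketch:

```python
# Invented rendering of the FIG. 8 decision flow.

def select_repair(nack_counts, group_size, bandwidth_ok, fec_can_cover):
    """Pick a repair action for one snapshot of the NACK table."""
    if not bandwidth_ok:
        return "abort"                       # operations 82/84: no repair
    if fec_can_cover:
        return "fec"                         # operation 86: FEC repair available
    # Assumed heuristic: a packet NACKed by more than half the group is
    # cheaper to repair by multicast; otherwise unicast to the few NACKers.
    heavy = [s for s, n in nack_counts.items() if n > group_size // 2]
    light = [s for s, n in nack_counts.items() if n <= group_size // 2]
    if heavy and light:
        return "unicast+multicast"           # operations 96/98: mixed repair
    if heavy:
        return "multicast"                   # operations 92/94
    return "unicast"                         # operations 88/90
```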
  • Establishing Media Channels
  • [0074]
    Referring to the figures above, a given media channel 22 has a primary multicast Real-time Transport Protocol (RTP) session along with a corresponding Real-time Transport Control Protocol (RTCP) control channel. The media channel 22 may also have a unicast repair RTP/RTCP session, which can be established on demand according to the scheme described in the U.S. patent application entitled: RETRANSMISSION-BASED STREAM REPAIR AND STREAM JOIN, filed Nov. 17, 2006, Ser. No. 11/561,237, which is herein incorporated by reference. This RTP/RTCP session may be used for unicast retransmission repair when the hybrid packet repair scheme 20A determines that unicast retransmission is the most effective way to repair a particular error.
  • [0075]
    A second RTP/RTCP multicast session is added for multicast repair. The multicast retransmissions 34 can be sourced by the same retransmission server 16 at the same source address as the feedback target address for the main multicast RTP session. However, a different destination group address is used. Receivers 50 participating in the repair scheme can join and leave this SSM group at the same time they join and leave the main SSM RTP session. This multicast repair session is used both for sending the multicast retransmission packets 34 using the RTP retransmission payload format and for sending FEC repair packets 36 using the native payload type for the FEC scheme in use. The two forms of repair packets, multicast retransmission and FEC, are distinguished by the receivers 50 using standard RTP conventions for payload type multiplexing in a single session.
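Receiver-side demultiplexing of the two repair-packet forms on the shared session could look like the following sketch; the payload type values and function name are hypothetical (RTP dynamic payload types are negotiated per session, not fixed):

```python
# Sketch of receiver-side demultiplexing on the multicast repair session,
# where retransmitted media and FEC packets share one RTP session and are
# told apart by their RTP payload type. The PT values below are example
# dynamic-range numbers chosen for this sketch only.

RTX_PT = 99     # assumed payload type for the RTP retransmission format
FEC_PT = 100    # assumed payload type for the native FEC payload format

def demux_repair_packet(payload_type, packet, media_buffer, fec_decoder):
    """Route one repair-session packet by its RTP payload type."""
    if payload_type == RTX_PT:
        media_buffer.append(packet)        # reinsert retransmitted media
        return "retransmission"
    if payload_type == FEC_PT:
        fec_decoder.append(packet)         # hand to the FEC scheme in use
        return "fec"
    return "ignored"                       # unknown payload type
```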
  • [0076]
    Other unicast receiver feedback 58 (FIG. 6) is sent to the feedback address for the primary media session 22, and therefore is available to the retransmission server 16. In one embodiment, as described above, this feedback information may be RTCP packets containing RTCP receiver reports. The retransmission server 16 uses the RTCP reports to estimate the population of receivers 50 that are “homed” on retransmission server 16 for repairs. This receiver population is dynamic and approximate since receivers come and go, RTCP receiver report packets may be lost, and the mapping of receivers 50 to repair points can change.
  • [0077]
    Based on the identified population of receivers 50 and the pattern of NACKs 26, RTP unicast repair packets 32 are sent via unicast retransmission, RTP multicast repair packets 34 are sent via SSM multicast retransmission, or RTP FEC repair packets 36 are sent via SSM multicast retransmission.
  • [0078]
    Several preferred examples of the present application have been described with reference to the accompanying drawings. Various other examples of the invention are also possible and practical. This application may be exemplified in many different forms and should not be construed as being limited to the examples set forth herein.
  • [0079]
    The figures listed above illustrate preferred examples of the application and the operation of such examples. In the figures, the size of the boxes is not intended to represent the size of the various physical components. Where the same element appears in multiple figures, the same reference numeral is used to denote the element in all of the figures where it appears. When two elements operate differently, different reference numerals are used regardless of whether the two elements are the same class of network device.
  • [0080]
    Only those parts of the various units are shown and described which are necessary to convey an understanding of the examples to those skilled in the art. Those parts and elements not shown are conventional and known in the art.
  • [0081]
    The system described above can use dedicated processor systems, micro controllers, programmable logic devices, or microprocessors that perform some or all of the operations. Some of the operations described above may be implemented in software and other operations may be implemented in hardware.
  • [0082]
    For the sake of convenience, the operations are described as various interconnected functional blocks or distinct software modules. This is not necessary, however, and there may be cases where these functional blocks or modules are equivalently aggregated into a single logic device, program or operation with unclear boundaries. In any event, the functional blocks and software modules or features of the flexible interface can be implemented by themselves, or in combination with other operations in either hardware or software.
  • [0083]
    Having described and illustrated the principles of the invention in a preferred embodiment thereof, it should be apparent that the invention may be modified in arrangement and detail without departing from such principles. We claim all modifications and variations coming within the spirit and scope of the following claims.