|Publication number||US8015311 B2|
|Application number||US 12/235,310|
|Publication date||Sep 6, 2011|
|Priority date||Sep 21, 2007|
|Also published as||US8005975, US20090083433, US20110022660|
|Original Assignee||Polytechnic Institute Of New York University|
|Export Citation||BiBTeX, EndNote, RefMan|
|Patent Citations (18), Classifications (4), Legal Events (5)|
|External Links: USPTO, USPTO Assignment, Espacenet|
This application claims benefit to U.S. Provisional Application Ser. No. 60/994,857 (referred to as “the '857 provisional” and incorporated herein by reference) titled “PEER TO PEER VIDEO STREAMING TECHNIQUE FOR REDUCING OR MINIMIZING DELAYS,” filed on Sep. 21, 2007, and listing Yong LIU as the inventor. The scope of the present invention is not limited to any requirements of the specific embodiments described in the '857 provisional application.
This invention was made with Government support and the Government may have certain rights in the invention as provided for by grant number CNS-0519998 by the National Science Foundation.
§1.1 Field of the Invention
The present invention concerns peer-to-peer (“P2P”) communications. In particular, the present invention concerns reducing or minimizing delays in P2P communications, such as P2P video streaming.
§1.2 Background Information
IP-level multicast has not been widely deployed in the Internet. Recently, multicast functionality has been implemented at the application layer. (See, e.g., the paper Chu, Y., Rao, S., and Zhang, H., “A Case for End System Multicast,” Proceedings of ACM SIGMETRICS (2000); and the paper Francis, P., “Yoid: Extending the Internet Multicast Architecture,” Tech. rep., Cornell University, April 2000, available at http://www.cs.cornell.edu/people/francis/yoidArch.pdf.) For example, users interested in the same video program may form an application layer overlay network for P2P video streaming.
P2P video streaming systems are generally categorized as tree-based and mesh-based. In a tree-based P2P video streaming system, peers form an application layer multicast tree. Video data flows from a source server to a peer through multiple hops in the tree. The video delay perceived by a peer includes video transmission delays and propagation delays on all hops. The fan out degree of a peer in the tree is determined by the number of simultaneous video streams that can be supported by a peer's uploading capacity. An example of a tree-based P2P video streaming system is Overcast. (See, e.g., the article Jannotti, J., Gifford, D. K., Johnson, K. L., Kaashoek, M. F., AND O'Toole, JR., J. W., “Overcast: Reliable Multicasting with an Overlay Network,” Proceedings of Operating Systems Design and Implementation, pp. 197-212 (2000).)
Unfortunately, present tree-based P2P video streaming systems have some problems. More specifically, since a typical peer can only upload a small number of concurrent video streams, the multicast tree formed by peers tends to have a narrow width, and consequently, a large depth. Unfortunately, with such a large depth multicast tree, peers at low layers of the multicast tree can experience excessive delays between the time of video request to the receipt of the video stream. For cases in which each peer has only enough capacity to upload one stream to one other peer, the multicast tree topology formed by N peers becomes a chain with N hops. In this worst case scenario, the multicast tree is a chain (has a width of one and a depth of N), and the delay for the peer at the end of the chain is the aggregate video chunk transmission and propagation delays along the N hops.
To address the foregoing problem, multi-tree based P2P video streaming approaches have been proposed. (See, e.g., the papers: Castro, M., Druschel, P., Kermarrec, A.-M., Nandi, A., Rowstron, A., and Singh, A., “SplitStream: High-bandwidth Multicast in Cooperative Environments,” Proceedings of ACM Symposium on Operating Systems Principles (SOSP) (2003); and Kostic, D., Rodriguez, A., Albrecht, J., and Vahdat, A., “Bullet: High Bandwidth Data Dissemination using an Overlay Mesh,” Proceedings of ACM Symposium on Operating Systems Principles (SOSP) (2003).) In multi-tree streaming, the server divides the stream into m sub-streams. Instead of one streaming tree, m sub-trees are formed, one for each sub-stream. When a fully balanced multi-tree is used for streaming, the node degree of each sub-tree is m. Each peer joins all sub-trees to retrieve sub-streams. Any peer is positioned on an internal node in only one sub-tree and only uploads one sub-stream to its m children peers in that sub-tree. In each of the remaining (m−1) sub-trees, the peer is positioned on a leaf node and downloads a sub-stream from its parent peer. It has been shown that if all peers have the same uploading capacity and the propagation delays among peers are dominated by video chunk transmission delays, the average and worst-case delays for peers in an m-degree multi-tree streaming system are m logm N times the video chunk transmission delay from one peer to another peer. The shortest streaming delay is achieved when the degree m=3, and the shortest achievable delay is 1.89 log2 N times the chunk transmission delay.
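The claimed optimum at m=3 can be checked numerically: since m logm N = (m/log2 m)·log2 N, minimizing the coefficient m/log2 m over integer degrees recovers the 1.89 figure. The sketch below is illustrative Python, not part of the patent; the function name is hypothetical.

```python
import math

def delay_coefficient(m: int) -> float:
    """Coefficient c(m) such that the multi-tree streaming delay is
    c(m) * log2(N) chunk-transmission times, using
    m * log_m(N) = (m / log2(m)) * log2(N)."""
    return m / math.log2(m)

# Compare candidate sub-tree degrees; the minimum occurs at m = 3.
coeffs = {m: delay_coefficient(m) for m in range(2, 7)}
best = min(coeffs, key=coeffs.get)
print(best, round(coeffs[best], 2))  # m = 3 gives a coefficient of ~1.89
```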
Many recent P2P streaming systems adopt a mesh-based streaming approach. (See, e.g., the articles: Zhang, X., Liu, J., Li, B., and Yum, T-S. P., “DONet/CoolStreaming: A Data-Driven Overlay Network for Peer-to-Peer Live Media Streaming,” Proceedings of IEEE INFOCOM (Mar. 2005); Pai, V., Kumar, K., Tamilmani, K., Sambamurthy, V., and Mohr, A., “Chainsaw: Eliminating Trees from Overlay Multicast,” The Fourth International Workshop on Peer-to-Peer Systems (2005); Zhang, M., Zhao, L., Tang, J. L. Y., and Yang, S., “A Peer-to-Peer Network for Streaming Multicast through the Internet,” Proceedings of ACM Multimedia (2005); and Magharei, N., and Rejaie, R., “Prime: Peer-to-Peer Receiver-Driven Mesh-Based Streaming,” Proceedings of IEEE INFOCOM (2007).) In some of these mesh-based systems, there is no static streaming topology. Rather, peers establish and terminate peering relationships dynamically. Further, a peer may download video from and/or upload video to multiple peers simultaneously. Unfortunately, the video data flows among peers are largely uncoordinated. Consequently, the delay performance of existing mesh-based streaming systems is unsatisfactory.
In view of the foregoing limitations of existing P2P video streaming techniques, it would be useful to have improved P2P video streaming methods and systems. For example, it would be useful to have P2P video streaming methods and systems in which peers experience lower streaming delays.
At least some embodiments consistent with the present invention provide P2P streaming which disseminates video chunks to all peers with the minimum (or at least reduced) delay. After obtaining a new video chunk, a peer keeps transmitting (uploading) that video chunk to other peers until all peers receive it. The aggregate bandwidth that can be utilized to transmit a video chunk increases quickly. For example, the aggregate peer bandwidth used to transmit a video chunk can double every time slot. For a homogeneous P2P streaming system with N peers, a time slot is defined as a unit of the single chunk transmission delay between two peers. Using the P2P streaming method, a video chunk can be disseminated to all peers within 1+log2 N time slots.
An exemplary environment in which embodiments consistent with the present invention may be used is introduced in §4.1. Then, exemplary methods and schedulers for performing operations consistent with the present invention are described in §4.2. Next, exemplary apparatus for performing various operations and generating and/or storing various information in a manner consistent with the present invention are described in §4.3. Refinements, alternatives and extensions are described in §4.4. Finally, some conclusions about such exemplary embodiments are provided in §4.5.
§4.2.1 Snowball P2P Video Streaming
A new P2P streaming method for disseminating video chunks to all peers with the minimum delay is now described. After obtaining a new video chunk, a peer keeps transmitting (uploading) that video chunk to other peers until all peers receive it. The accumulation of the aggregate upload bandwidth for the chunk mimics the formation of a snowball. This snowball approach quickly increases the aggregate bandwidth that can be utilized to transmit a video chunk. For example, the aggregate peer bandwidth used to transmit a video chunk can double every time slot. For a homogeneous P2P streaming system with N peers, a time slot is defined as a unit of the single chunk transmission delay between two peers. Using the P2P streaming method, a video chunk can be disseminated to all peers within 1+log2 N time slots. The '857 provisional demonstrated that this video chunk dissemination approach is indeed the fastest P2P chunk dissemination approach.
In continuous video streaming, there are multiple video chunks in transit at any given time. They compete for the upload bandwidth available on all peers. The allocation of peer upload bandwidth to active video chunks determines their delays, which in turn determines user-perceived streaming delay performance. To illustrate, consider the case when N=2^k. The minimum video chunk delay in this case is k+1. At the beginning of some time slot j, there are 2^(j−i−1) peers having video chunk i, for j−k≦i<j. Let Ψ(j) denote the set of 2^(k−1)=N/2 peers with video chunk (j−k). Within time slot j, peers in Ψ(j) will upload video chunk j−k to the N/2 peers that don't currently have that chunk and finish the dissemination of video chunk j−k. Peers in set Ψ(j) can then be utilized to upload other, newer video chunks in time slot j+1. To make this happen, in time slot j, a peer who has a video chunk with ID l, where j−k<l<j, will upload its video chunk to some peer in set Ψ(j). In addition, different peers should upload to different peers in Ψ(j). This makes the number of peers with video chunk l, j−k<l<j, double to 2^(j−l) at the beginning of time slot j+1.
At the same time, a new video chunk with ID j has been generated by the video source server at the beginning of time slot j. The server will upload video chunk j to a peer in Ψ(j) who doesn't have any video chunk with ID l, j−k<l<j. The scheduling method repeats in the subsequent time slots until the complete stream is disseminated to all N peers. In this way, the upload capacity for each video chunk doubles every time slot and all video chunks can be disseminated to all peers within the minimum delay 1+log2 N.
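The doubling argument above can be sanity-checked with a small single-chunk simulation: the server seeds one peer in the first slot, and thereafter every holder uploads to one new peer per slot. This is an illustrative sketch (not from the patent; the function name is hypothetical).

```python
def snowball_slots(n_peers: int) -> int:
    """Time slots needed to disseminate one chunk to n_peers homogeneous
    peers under snowball streaming: the server seeds one peer in slot 1,
    then every holder uploads the chunk to one new peer each slot, so the
    holder set doubles until all peers have it."""
    holders, slots = 1, 1          # after slot 1: server -> first peer
    while holders < n_peers:
        holders = min(n_peers, holders * 2)
        slots += 1
    return slots

for n in (2, 8, 1024):
    print(n, snowball_slots(n))    # matches 1 + log2(N) for N a power of two
```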
§4.2.2 Hierarchical Snowball P2P Video Streaming
At least some embodiments consistent with the present invention modify the snowball P2P video streaming method to account for a P2P network with peers having different upload capacities (or upload capacities made available or offered). In such embodiments, peers are classified into a hierarchy according to their uploading capacities. Peers at the same layer of the hierarchy have roughly the same uploading capacity, with peers at higher layers having higher capacities. Under the modified hierarchical scheduling method, video chunks are sent to peers with a higher capacity (that is, those classified at a higher level of the hierarchy) first. Video chunks are then disseminated, from the top of the hierarchy to the bottom of the hierarchy, via peers at any intermediate levels of the hierarchy, to all peers.
More specifically, the server only uploads video chunks to peers at layer 1 (the highest layer) of the hierarchy. Peers at layer 1 collaboratively execute the snowball P2P video streaming method to quickly disseminate the video chunks among themselves. In addition, each peer at layer 1 also acts as a “proxy” video server for a set of peers at layer 2 of the hierarchy, and uploads chunks to them whenever it has spare upload capacity. Peers at layer 2 of the hierarchy (e.g., sharing the same video server proxy from layer 1 of the hierarchy) again execute the snowball P2P video streaming among themselves and act as video proxies for peers at layer 3 of the hierarchy, and so on. The streaming process continues until all peers at the bottom layer receive the chunks.
Table 1 illustrates a video chunk transmission (uploading) schedule for a system with eight super-peers (that is, peers classified at a higher (in this case, top) level of the hierarchy) and eight free-riders (that is, peers classified at the next (in this case, bottom) level of the hierarchy). The sixteen peers are classified into one of two levels in a two-level hierarchy. Super-peers, each having an available uploading capacity of 2, form the top level of the two-level hierarchy. The super-peers are indexed from 0 to 7. Free-riders form the bottom level of the two-level hierarchy. The free-riders are labeled from a to h.
In the table, a tuple (x,y) at row i, column j means super peer i will upload video chunk x to peer y in time slot j. A video chunk is uploaded to all super peers first. Then it will be uploaded to all free-riders within one additional round (e.g., one additional half time slot). The overall chunk dissemination delay is 3.5 time slots.
§4.2.3 Dynamic Snowball P2P Video Streaming—Centralized Scheduling
The previously described P2P video streaming methods perform best when employed in a “static” network environment—that is, a network environment in which peers are stable, their upload bandwidth is fixed and the delay between peers is negligible relative to video chunk transmission delays. In a more general network environment, peers may join and leave, the bandwidth on peering connections may fluctuate, and propagation delays between peers may be random and can become comparable with chunk transmission delays. Thus, for many practical network environments, video chunk uploading schedules should be calculated dynamically to account for, and adapt to, network bandwidth and delay variations.
After at least a round of scheduling, the video chunks are transmitted in accordance with the schedule (Block 460) before the method 400 is left (Node 470). The method 400 may be repeated until each of the peers is scheduled to receive each of the video chunks of the video stream.
Referring back to block 430, in at least some embodiments consistent with the present invention, the demand factor dk is a ratio of the number of peers without video chunk k to the number of peers with video chunk k.
Referring back to block 440, in at least some embodiments consistent with the present invention, the total expected workload for peer i is

Wi = Σk∈Bi dk

where dk is the demand factor and Bi is the set of video chunks buffered by peer i that have not yet been sent to all of the N peers.
Referring back to block 460, in at least some embodiments consistent with the present invention, (appropriate parts of) the scheduling information may be signaled to (appropriate ones of) the peers.
Thus, in at least some embodiments consistent with the present invention, scheduling works in rounds. At each round, let A be the set of video chunks that have been generated by the video source server, but have not been uploaded to all peers. For any video chunk k∈A, let Rk be the number of peers with video chunk k, and Nk be the number of peers without video chunk k. The demand factor for video chunk k is defined as dk=Nk/Rk, which is the expected workload for each peer with video chunk k to upload it to some peers without it. For any peer i, let Bi⊂A be the set of video chunks in its buffer. The total expected workload for peer i can be calculated as Wi = Σk∈Bi dk.
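The demand-factor and workload bookkeeping described above can be sketched as follows. This is illustrative Python, not from the patent; the function names and the chunk-availability representation are hypothetical.

```python
def demand_factors(availability, n_peers):
    """availability: chunk id -> set of peer ids holding that chunk.
    Returns d_k = N_k / R_k for each active chunk k (held by some,
    but not yet all, of the n_peers peers)."""
    return {k: (n_peers - len(holders)) / len(holders)
            for k, holders in availability.items()
            if 0 < len(holders) < n_peers}

def workloads(availability, n_peers):
    """W_i = sum of d_k over active chunks k in peer i's buffer."""
    d = demand_factors(availability, n_peers)
    w = {}
    for k, holders in availability.items():
        for i in holders:
            w[i] = w.get(i, 0.0) + d.get(k, 0.0)
    return w

# 4 peers; chunk 0 held by peers 1-3, chunk 1 held only by peer 1.
avail = {0: {1, 2, 3}, 1: {1}}
print(demand_factors(avail, 4))  # chunk 0 -> 1/3, chunk 1 -> 3.0
print(workloads(avail, 4))       # peer 1 carries the largest expected workload
```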
The P2P uploading schedule in a round may be determined as follows:
As discussed in the '857 provisional, simulations have shown that when there are no bandwidth variations and the propagation delays are negligible, the dynamic snowball streaming method achieves the minimum single-chunk delay bound. When peer upload bandwidth is random and the average equals the streaming rate, the delay is much longer than the corresponding average single-chunk delay bound. However, if the average peer upload bandwidth is increased to 1.25 times the streaming rate, the delay performance is very close to the minimum delay bound. When the propagation delays are non-negligible and are randomly distributed according to a normal distribution with mean equal to the single-chunk transmission time, and when each peer's upload bandwidth equals the streaming rate (resource index=1), the delay performance is worse than the single-chunk delay bound. When the resource index is increased to 1.25 and 1.5 respectively, the delay performance is improved greatly and converges to the minimum delay bound. When both peer upload bandwidth and the propagation delays are random, and when the resource index equals 1, the chunk delivery time is much longer than the single-chunk delay bound. Increasing the system resource index to 1.25 and 1.5 can greatly improve the delay performance and achieve the minimum delay bound. Thus, with a little extra peer uploading bandwidth, the dynamic snowball streaming method can approach the minimum delay bounds in the face of random variations in peer uploading bandwidth and propagation delays on peering connections.
§4.2.4 Dynamic Snowball P2P Video Streaming—Distributed Scheduling
The centralized dynamic snowball streaming method requires global knowledge of the chunk availabilities and the expected workloads on peers. The transmissions (uploads) for video chunks from all peers are coordinated by the centralized scheduling algorithm. On the other hand, in a distributed P2P streaming system, each peer only communicates with (e.g., for purposes of exchanging state information and for purposes of video chunk transmissions) a subset of peers (e.g., neighboring peers, or peers within N hops of the peer).
Referring back to block 630, in at least some embodiments consistent with the present invention, the act of determining, for any other peers of the subset, a per-time slot uploading schedule includes determining a number (C) of video chunks that the peer can transmit in one time slot, determining, from among the video chunks for which it has received an unfilled request, the oldest video chunk belonging to the active chunk set, and determining a number (X) of peers that have sent the peer a request for the determined oldest video chunk. If the determined number (X) of peers that have sent the peer a request for the determined oldest video chunk is greater than the determined number (C) of video chunks that the peer can transmit in one time slot, then, a number (C) of peers with a lowest expected workload, from among peers that have sent the peer a request for the determined oldest video chunk, is determined and the determined oldest video chunk is sent to the determined number (C) of peers with the lowest expected workload.
Otherwise, the determined oldest video chunk is sent to all of the peers that have sent the peer a request for the determined oldest video chunk. If X<C, the peer may send at least one other video chunk to at least one other peer, selected using the age of the video chunk and the workloads of the peers of the subset. Otherwise, no further video chunks are sent in the time slot.
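The per-slot upload decision described above can be sketched as follows: pick the oldest requested chunk in the active set; if the requesters (X) outnumber the peer's upload slots (C), serve the C lowest-workload requesters. This is an illustrative sketch, not the patent's implementation; all names are hypothetical.

```python
def schedule_slot(capacity, requests, peer_workloads, active_chunks):
    """One peer's uploads for a single time slot.
    requests: chunk id -> list of requesting peer ids (lower id = older chunk).
    Returns a list of (chunk id, target peer) transmissions."""
    candidates = [k for k in requests if k in active_chunks]
    if not candidates:
        return []
    oldest = min(candidates)                 # oldest requested active chunk
    requesters = requests[oldest]
    if len(requesters) > capacity:           # X > C: favor lowest workloads
        requesters = sorted(requesters, key=lambda p: peer_workloads[p])[:capacity]
    return [(oldest, p) for p in requesters]

reqs = {5: ['a', 'b', 'c'], 7: ['d']}
wl = {'a': 2.0, 'b': 0.5, 'c': 1.0, 'd': 0.0}
print(schedule_slot(2, reqs, wl, {5, 6, 7}))  # chunk 5 to the two lowest-workload requesters
```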
Referring back to block 610, in at least some embodiments consistent with the present invention, each peer exchanges, periodically, the video chunk availability information with any other peers with which it communicates.
Referring back to block 620, in at least some embodiments consistent with the present invention, the estimated active chunk set is a system-wide estimate. In at least some other embodiments consistent with the present invention, the estimated active chunk set is an estimate across the peer, and the peers with which it communicates. In yet some other embodiments consistent with the present invention, the estimated active chunk set is an estimate across the peer, the peers with which it communicates, and peers with which those peers communicate thereby defining a subset of peers including the peer and peers within two hops of the peer.
Referring back to block 720, in at least some embodiments consistent with the present invention, the demand factor for a kth video chunk, dk is the ratio of the number of peers of the subset without chunk k to the number of peers of the subset with chunk k.
Referring back to block 730, in at least some embodiments consistent with the present invention, the total expected workload for peer i is

Wi = Σk∈Bi dk

where dk is the demand factor for the kth video chunk and Bi is the set of video chunks buffered by the peer that have not yet been sent to all peers of the subset.
As can be appreciated from the foregoing, in a distributed dynamic scheduling method consistent with the present invention, peers may periodically exchange video chunk availability information with their neighbors, and send requests to neighbors to download missing video chunks. Each peer uses the video chunk availability information from a subset of peers to estimate the system-wide active video chunk set A and the demand factor for each chunk. Each peer locally estimates the expected workloads for peers of the subset. Then, each peer may determine its uploading schedules round by round as follows:
A peer may generate requests for the oldest (earliest in the streamed sequence) video chunk needed and may send a request to the peer, of the subset of peers, with the lowest workload. More generally, a peer may send requests for video chunks as a function of at least one of (i) the age of the video chunk, and (ii) peer workload.
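The request-side rule above, oldest missing chunk first, fetched from the lowest-workload holder, might be sketched as follows (illustrative Python; the names and data layout are hypothetical, not from the patent):

```python
def next_request(missing_chunks, availability, peer_workloads):
    """Pick the oldest missing chunk that some neighbor holds, and target
    the holder with the lowest expected workload.
    Returns (chunk id, target peer) or None if nothing is requestable."""
    for k in sorted(missing_chunks):             # oldest chunk id first
        holders = availability.get(k, set())
        if holders:
            return k, min(holders, key=lambda p: peer_workloads[p])
    return None

avail = {3: {'x', 'y'}, 4: {'y'}}
wl = {'x': 1.5, 'y': 0.2}
print(next_request({3, 4}, avail, wl))           # (3, 'y')
```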
The one or more processors 910 may execute machine-executable instructions (e.g., C or C++ running on the Solaris operating system available from Sun Microsystems Inc. of Palo Alto, Calif., or the Linux operating system widely available from a number of vendors such as Red Hat, Inc. of Durham, N.C.) to perform one or more aspects of the present invention. For example, one or more software modules, when executed by a processor, may be used to perform one or more of the operations and/or methods described above.
In one embodiment, the machine 900 may be one or more conventional personal computers or servers. In this case, the processing units 910 may be one or more microprocessors. The bus 940 may include a system bus. The storage devices 920 may include system memory, such as read only memory (ROM) and/or random access memory (RAM). The storage devices 920 may also include a hard disk drive for reading from and writing to a hard disk, a magnetic disk drive for reading from or writing to a (e.g., removable) magnetic disk, and an optical disk drive for reading from or writing to a removable (magneto-) optical disk such as a compact disk or other (magneto-) optical media.
A user may enter commands and information into the personal computer through input devices 932, such as a keyboard and pointing device (e.g., a mouse) for example. Other input devices such as a microphone, a joystick, a game pad, a satellite dish, a scanner, or the like, may also (or alternatively) be included. These and other input devices are often connected to the processing unit(s) 910 through an appropriate interface 930 coupled to the system bus 940. The output devices 934 may include a monitor or other type of display device, which may also be connected to the system bus 940 via an appropriate interface. In addition to (or instead of) the monitor, the personal computer may include other (peripheral) output devices (not shown), such as speakers and printers for example.
The operations of schedulers, servers, and/or peers, such as those described above, may be performed on one or more computers. Such computers may communicate with each other via one or more networks, such as the Internet for example.
Alternatively, or in addition, the various operations and acts described above may be implemented in hardware (e.g., integrated circuits, application specific integrated circuits (ASICs), field programmable gate or logic arrays (FPGAs), etc.).
Although embodiments described above were discussed with respect to streamed video chunks, other embodiments consistent with the present invention can be used with other streamed information, such as streamed audio for example.
Although some of the exemplary distributed scheduling methods used workloads of peers, simply using workloads is most effective in scenarios where peers have the same (or similar) upload capacity (or offered upload capacity). In at least some embodiments consistent with the present invention, offered upload capacity of a peer can be considered along with its workload (e.g., as a ratio of workload/offered upload capacity), instead of workload alone.
Although some of the embodiments described above involve operations occurring in time slots, these time slots need not correspond to time slots for transmitting a video chunk between peers (although they may). Thus, time slots may simply be considered rounds of operations in at least some embodiments consistent with the present invention.
As can be appreciated from the foregoing, embodiments consistent with the present invention can provide P2P video streaming in which peers experience lower streaming delays.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US7627678 *||Nov 3, 2003||Dec 1, 2009||Sony Computer Entertainment America Inc.||Connecting a peer in a peer-to-peer relay network|
|US20050286546 *||Jun 21, 2005||Dec 29, 2005||Arianna Bassoli||Synchronized media streaming between distributed peers|
|US20060069800 *||Sep 28, 2004||Mar 30, 2006||Microsoft Corporation||System and method for erasure coding of streaming media|
|US20060190615 *||Jan 23, 2006||Aug 24, 2006||Panwar Shivendra S||On demand peer-to-peer video streaming with multiple description coding|
|US20070130361 *||Feb 5, 2007||Jun 7, 2007||Microsoft Corporation||Receiver driven streaming in a peer-to-peer network|
|US20080133767 *||Nov 22, 2007||Jun 5, 2008||Metis Enterprise Technologies Llc||Real-time multicast peer-to-peer video streaming platform|
|US20080140853 *||Oct 5, 2007||Jun 12, 2008||David Harrison||Peer-to-Peer Streaming Of Non-Live Content|
|US20080155120 *||Dec 10, 2007||Jun 26, 2008||Deutsche Telekom Ag||Method and system for peer-to-peer content dissemination|
|US20090019174 *||Jul 13, 2007||Jan 15, 2009||Spotify Technology Holding Ltd||Peer-to-Peer Streaming of Media Content|
|US20090106393 *||Mar 15, 2005||Apr 23, 2009||Siemens Business Services Ltd.||Data distribution system and method|
|US20090202221 *||Jun 27, 2006||Aug 13, 2009||Thomson Licensing||Support for Interactive Playback Devices for Performance Aware Peer-to-Peer Content-on Demand Service|
|US20090240833 *||Apr 21, 2006||Sep 24, 2009||Yongmin Zhang||Method and Apparatus for Realizing Positioning Play of Content Stream in Peer-to-Peer Network|
|US20090300673 *||Jun 11, 2007||Dec 3, 2009||Nds Limited||Peer- to- peer set-top box system|
|US20100030909 *||Nov 29, 2006||Feb 4, 2010||Thomson Licensing||Contribution aware peer-to-peer live streaming service|
|US20100042668 *||Dec 14, 2007||Feb 18, 2010||Thomson Licensing||Hierarchically clustered p2p streaming system|
|US20100138511 *||Jun 28, 2007||Jun 3, 2010||Yang Guo||Queue-based adaptive chunk scheduling for peer-to peer live streaming|
|US20100146138 *||Dec 9, 2008||Jun 10, 2010||Hong Kong Applied Science And Technology Research Institute Co., Ltd.||Method of data request scheduling in peer-to-peer sharing networks|
|US20100185753 *||Aug 30, 2007||Jul 22, 2010||Hang Liu||Unified peer-to-peer and cache system for content services in wireless mesh networks|
|Sep 22, 2008||AS||Assignment|
Owner name: POLYTECHNIC UNIVERSITY, NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:LIU, YONG;REEL/FRAME:021571/0560
Effective date: 20080922
|Jul 19, 2011||AS||Assignment|
Owner name: POLYTECHNIC INSTITUTE OF NEW YORK UNIVERSITY, NEW YORK
Free format text: CHANGE OF NAME;ASSIGNOR:POLYTECHNIC UNIVERSITY;REEL/FRAME:026618/0967
Effective date: 20080624
|Apr 17, 2015||REMI||Maintenance fee reminder mailed|
|May 20, 2015||FPAY||Fee payment|
Year of fee payment: 4
|May 20, 2015||SULP||Surcharge for late payment|