US20150012662A1 - Smart pre-fetching for peer assisted on-demand media

Smart pre-fetching for peer assisted on-demand media

Info

Publication number
US20150012662A1
Authority
US
United States
Prior art keywords: peers, peer, media, upload bandwidth, media file
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US14/460,660
Other versions
US10218758B2
Inventor
Jin Li
Cheng Huang
Keith W. Ross
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Technology Licensing LLC
Application filed by Microsoft Technology Licensing LLC
Priority to US14/460,660
Assigned to MICROSOFT CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LI, JIN; HUANG, CHENG; ROSS, KEITH W.
Publication of US20150012662A1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MICROSOFT CORPORATION
Application granted
Publication of US10218758B2
Legal status: Expired - Fee Related (adjusted expiration)

Classifications

    • All classifications fall under H (Electricity), H04 (Electric communication technique), H04L (Transmission of digital information, e.g. telegraphic communication):
    • H04L47/722 - Admission control; Resource allocation using reservation actions during connection setup at the destination endpoint, e.g. reservation of terminal resources or buffer space
    • H04L65/60 - Network streaming of media packets
    • H04L47/70 - Traffic control in data switching networks: Admission control; Resource allocation
    • H04L47/805 - Admission control; Resource allocation actions related to the user profile or the type of traffic: QOS or priority aware
    • H04L65/612 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio, for unicast
    • H04L67/104 - Protocols in which an application is distributed across nodes in the network: Peer-to-peer [P2P] networks
    • H04L67/56 - Network services: Provisioning of proxy services
    • H04L67/5681 - Storing data temporarily at an intermediate stage, e.g. caching: Pre-fetching or pre-delivering data based on network characteristics
    • H04L67/108 - Peer-to-peer [P2P] networks for supporting data block transmission mechanisms: Resource delivery mechanisms characterised by resources being split in blocks or fragments
    • H04L67/1091 - Peer-to-peer [P2P] networks using cross-functional networking aspects: Interfacing with client-server systems or between P2P systems

Definitions

  • the invention is related to peer-to-peer (P2P) file sharing, and in particular, to a system and method for P2P file sharing that enables on-demand multimedia streaming while minimizing server bandwidth requirements.
  • VoD: Video-on-demand
  • for example, the YouTube.com web site states that it currently serves approximately 100 million videos per day to visiting client computers, with nearly 20 million unique visitors per month.
  • Other examples of major Internet VoD publishers include MSN® Video, Google® Video, Yahoo® Video, CNN®, and a plethora of other VoD sites.
  • ISPs: Internet Service Providers
  • CDNs: Content Delivery Networks
  • the basic idea of peer-to-peer (P2P) networks is to allow each peer in the network to directly share individual files, and/or to assist a server in distributing either individual files or streaming media content.
  • Most peer-assisted VoD belongs to the category of the single video approach where cooperating peers help to deliver parts of a single video at any given time, rather than parts of multiple videos, and where each peer may be at a different point in the playback of the video.
  • live streaming of videos, where all peers are at the same point in the video playback, is often based on application-level multicast (ALM) protocols for media streaming.
  • the peer nodes are self-organized into an overlay tree over an existing IP network. The streaming data is then distributed along the overlay tree. The cost of providing bandwidth is then shared amongst the peer nodes, thereby reducing the bandwidth burden (and thus dollar cost) of running the media server.
  • a somewhat related conventional P2P media streaming solution uses a “cache-and-relay” approach such that peer nodes can serve clients with previously distributed media from their caches.
  • Yet another P2P-based scheme combines multiple description coding (MDC) of video and data partitioning, and provides a VoD system with a graceful quality degradation as peers fail (or leave the network) with a resulting loss of video sub-streams.
  • Still another P2P-based VoD scheme has applied network coding theory to provide a VoD solution.
  • some conventional P2P-based schemes do support a multi-video VoD approach.
  • one such scheme uses erasure resilient coding (ERC) to partially cache portions of a plurality of previously streamed videos, with the proportion of the media cached by each peer being proportional to the upload bandwidth of the peer. The peers then serve portions of their cached content to other peers to assist in on-demand streaming of multiple videos.
  • a “Media Sharer,” as described herein, operates within a peer-to-peer (P2P) network to provide a unique peer-driven system for streaming high quality multimedia content, such as a video-on-demand (VoD) service, to participating peers while minimizing server bandwidth requirements.
  • the Media Sharer provides a peer-assisted framework wherein participating peers assist the server in delivering on-demand media content to later arriving peers. Consequently, in various embodiments, peers serve content to other “downstream” peers, with information flowing from older peers to newer peers.
  • Peers cooperate to provide at least the same quality media delivery service as a pure server-client media distribution, with the server making up any bandwidth shortfall that the peers cannot provide to each other.
  • Peer upload bandwidth for redistribution of content to other peers is determined as a function of both surplus peer upload capacity and content need of neighboring peers.
  • the Media Sharer enables peer-assisted on-demand media streaming. Consequently, the Media Sharer described herein does not actually replace the server (or server farm) which stores the media to be shared to requesting peers. Instead, the server is assisted by the peers so that the responsibilities of the server are generally reduced to: 1) guaranteeing that peers can play the video (or other media) at the highest possible playback rate, while the bulk of that media is actually sent to the peers by other peers; and 2) introducing the peers to each other so that each peer knows which other peers it should serve content to. In other words, as peers come online, and as other peers either go offline or pause the media playback, the server will periodically inform each peer what “neighborhood” each peer belongs to, and its position within that neighborhood. As a result, each peer will always know what “neighboring peers” it should serve content to.
  • peers only download just enough data to meet their media streaming needs.
  • Peers cooperate to assist the server by uploading media content to other downstream peers to meet this minimum demand level.
  • since peers may have additional upload capacity left after satisfying the minimum upload demands of their neighbors, in further embodiments peers pre-fetch data from other peers to fill a “pre-fetching buffer.”
  • peers will have an additional content buffer (i.e., the pre-fetching buffer) from which they will draw content before making demands on the server.
  • peers are less likely to need to contact the server for content during the streaming session, thereby reducing server load.
  • peers act independently to send data to multiple other peers having the smallest pre-fetching buffer levels so that, as a group, the peers try to maintain similar pre-fetching buffer levels in each of their neighboring peers. Since each peer is concerned about the pre-fetching buffer level of its neighboring peers, this embodiment is referred to as a “water-leveling” buffer filling embodiment.
  • the pre-fetching buffer is also referred to herein as a “reservoir.” Note that although two or more peers may have identical pre-fetching buffer levels, if those peers joined the P2P network at different times (e.g., they began playback of the requested media at different times), then those peers will be at different playback points in the video stream.
  • each peer acts to fill the reservoir (i.e., the pre-fetching buffer) of its nearest downstream temporal neighbor to a level equal to its own level.
  • peer 1 will first act to fill the pre-fetching buffer of peer 2, where peer 2 is the closest temporal neighbor to have joined the P2P network after peer 1.
  • peer 1 will reduce (but not eliminate) the upload bandwidth to peer 2, and act to fill the buffer level of peer 3, where peer 3 is the closest temporal neighbor to have joined the P2P network after peer 2.
  • peer 2 will also be acting to fill the buffer of peer 3, and so on. Since each peer in this embodiment receives as much content as possible from its immediate upstream neighbor without caring what the upstream peer gives to any further downstream neighbors, this embodiment is referred to as a “greedy-neighbor” buffer filling embodiment.
  • the Media Sharer described herein provides a unique system and method for enabling peer assisted media streaming that significantly reduces server bandwidth requirements in a P2P network.
  • other advantages of the Media Sharer will become apparent from the detailed description that follows hereinafter when taken in conjunction with the accompanying drawing figures.
  • FIG. 1 is a general system diagram depicting a general-purpose computing device constituting an exemplary system implementing a Media Sharer, as described herein.
  • FIG. 2 is a general system diagram depicting a general device having simplified computing and I/O capabilities for use in a P2P network enabled by the Media Sharer, as described herein.
  • FIG. 3 illustrates one example of a peer-to-peer (P2P) network for use in implementing the Media Sharer, as described herein.
  • FIG. 4 provides an exemplary architectural flow diagram that illustrates program modules for implementing the Media Sharer, as described herein.
  • FIG. 1 and FIG. 2 illustrate two examples of suitable computing environments on which various embodiments and elements of a Media Sharer, as described herein, may be implemented.
  • FIG. 3 illustrates a simple example of a P2P network environment within which the Media Sharer operates, as described herein. Note that a generic P2P network is described only for purposes of explanation, and that the Media Sharer is capable of operating within any type of P2P network where peers can be given a list of neighboring peers by the server or servers to assist with content delivery, as described in further detail below beginning in Section 2.
  • FIG. 1 illustrates an example of a suitable computing system environment 100 on which the invention may be implemented.
  • the computing system environment 100 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 100 .
  • the invention is operational with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held, laptop or mobile computers or communications devices such as cell phones and PDAs, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • the invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer in combination with hardware modules, including components of a microphone array 198 .
  • program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
  • the invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computer storage media including memory storage devices.
  • an exemplary system for implementing the invention includes a general-purpose computing device in the form of a computer 110 .
  • Components of computer 110 may include, but are not limited to, a processing unit 120 , a system memory 130 , and a system bus 121 that couples various system components including the system memory to the processing unit 120 .
  • the system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • bus architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
  • Computer 110 typically includes a variety of computer readable media.
  • Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and non-removable media.
  • Computer readable media may comprise computer storage media such as volatile and nonvolatile removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data.
  • computer storage media includes, but is not limited to, storage devices such as RAM, ROM, PROM, EPROM, EEPROM, flash memory, or other memory technology; CD-ROM, digital versatile disks (DVD), or other optical disk storage; magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices; or any other medium which can be used to store the desired information and which can be accessed by computer 110 .
  • the system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132 .
  • RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120 .
  • FIG. 1 illustrates operating system 134 , application programs 135 , other program modules 136 , and program data 137 .
  • the computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media.
  • FIG. 1 illustrates a hard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 151 that reads from or writes to a removable, nonvolatile magnetic disk 152 , and an optical disk drive 155 that reads from or writes to a removable, nonvolatile optical disk 156 such as a CD ROM or other optical media.
  • removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like.
  • the hard disk drive 141 is typically connected to the system bus 121 through a non-removable memory interface such as interface 140
  • magnetic disk drive 151 and optical disk drive 155 are typically connected to the system bus 121 by a removable memory interface, such as interface 150 .
  • hard disk drive 141 is illustrated as storing operating system 144 , application programs 145 , other program modules 146 , and program data 147 . Note that these components can either be the same as or different from operating system 134 , application programs 135 , other program modules 136 , and program data 137 . Operating system 144 , application programs 145 , other program modules 146 , and program data 147 are given different numbers here to illustrate that, at a minimum, they are different copies.
  • a user may enter commands and information into the computer 110 through input devices such as a keyboard 162 and pointing device 161 , commonly referred to as a mouse, trackball, or touch pad.
  • Other input devices may include a joystick, game pad, satellite dish, scanner, radio receiver, and a television or broadcast video receiver, or the like. These and other input devices are often connected to the processing unit 120 through a wired or wireless user input interface 160 that is coupled to the system bus 121 , but may be connected by other conventional interface and bus structures, such as, for example, a parallel port, a game port, a universal serial bus (USB), an IEEE 1394 interface, a Bluetooth™ wireless interface, an IEEE 802.11 wireless interface, etc.
  • the computer 110 may also include a speech or audio input device, such as a microphone or a microphone array 198 , as well as a loudspeaker 197 or other sound output device connected via an audio interface 199 , again including conventional wired or wireless interfaces, such as, for example, parallel, serial, USB, IEEE 1394, Bluetooth™, etc.
  • a monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190 .
  • computers may also include other peripheral output devices such as a printer 196 , which may be connected through an output peripheral interface 195 .
  • the computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180 .
  • the remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device, or other common network node, and typically includes many or all of the elements described above relative to the computer 110 , although only a memory storage device 181 has been illustrated in FIG. 1 .
  • the logical connections depicted in FIG. 1 include a local area network (LAN) 171 and a wide area network (WAN) 173 , but may also include other networks.
  • Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet.
  • the computer 110 When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170 .
  • the computer 110 When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173 , such as the Internet.
  • the modem 172 which may be internal or external, may be connected to the system bus 121 via the user input interface 160 , or other appropriate mechanism.
  • program modules depicted relative to the computer 110 may be stored in the remote memory storage device.
  • FIG. 1 illustrates remote application programs 185 as residing on memory device 181 . It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • FIG. 2 shows a general system diagram of a simplified computing device.
  • Such computing devices can typically be found in devices having at least some minimum computational capability in combination with a communications interface, including, for example, cell phones, PDAs, dedicated media players (audio and/or video), etc.
  • any boxes that are represented by broken or dashed lines in FIG. 2 represent alternate embodiments of the simplified computing device, and that any or all of these alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document.
  • the device must have some minimum computational capability, some storage capability, and a network communications interface.
  • the computational capability is generally illustrated by processing unit(s) 210 (roughly analogous to processing units 120 described above with respect to FIG. 1 ).
  • the processing unit(s) 210 illustrated in FIG. 2 may be specialized (and inexpensive) microprocessors, such as a DSP, a VLIW, or other micro-controller rather than the general-purpose processor unit of a PC-type computer or the like, as described above.
  • the simplified computing device of FIG. 2 may also include other components, such as, for example one or more input devices 240 (analogous to the input devices described with respect to FIG. 1 ).
  • the simplified computing device of FIG. 2 may also include other optional components, such as, for example one or more output devices 250 (analogous to the output devices described with respect to FIG. 1 ).
  • the simplified computing device of FIG. 2 also includes storage 260 that is either removable 270 and/or non-removable 280 (analogous to the storage devices described above with respect to FIG. 1 ).
  • in a conventional client-server approach, the server rate (i.e., the upload bandwidth requirements of the server) is directly proportional to the number of peers requesting streaming media (up to the bandwidth limit of the server).
  • the server rate can be significantly reduced, depending upon the number of peers and a surplus upload capacity of those peers.
  • the ability to reduce the server rate by very large margins allows the server to provide content encoded at higher bitrates (higher quality) than would otherwise be possible for a given server rate allowance.
  • however, with content encoded at higher bitrates, the Media Sharer is more likely to run into a temporary deficit state that requires active server participation (and increased bandwidth usage) to serve those peers.
  • the Media Sharer described herein provides a framework in which peers assist one or more servers in delivering on-demand media content to other peers across a loosely coupled P2P network.
  • peers only download just enough data to meet their media streaming needs.
  • Peers cooperate to assist the server by uploading media content to other neighboring downstream peers (i.e., later arriving peers) to meet this minimum demand level.
  • since peers may have additional upload capacity left after satisfying the minimum upload demands of their neighbors, in further embodiments peers pre-fetch data from other peers to fill a “pre-fetching buffer” (also referred to herein as a “reservoir”).
  • peers will have an additional content buffer (i.e., the pre-fetching buffer or “reservoir”) from which they will draw content before making demands on the server.
  • the Media Sharer operates within the framework of a generic P2P network. For example, one very simple P2P network is illustrated by FIG. 3.
  • the server(s) 300 are dedicated computers set up for the Media Sharer operation. Basically, the job of the server is to serve media content when needed (to meet any demand not satisfied by other peers), and to introduce peers to each other as neighbors to be served by their fellow neighbors.
  • the peers 310 , 320 and 330 are all end-user nodes (such as PC-type computers, PDAs, cell phones, or any other network-enabled computing device) variously connected over the Internet.
  • the server 300 also performs various administrative functions such as maintaining a list of available peers, an arrival order (or current playback point) of those peers (for identifying neighboring peers), peer upload capabilities, performing digital rights management (DRM) functionality, etc. Further, some elements of the server 300 operation will vary somewhat depending upon the architecture of the P2P network type used for implementing the Media Sharer. However, as the various types of conventional P2P networks are well known to those skilled in the art, those minor differences will not be described herein as they do not significantly affect the sharing between peers once peers are directed to their neighbors as described in the following sections.
  • the following discussion will generally refer to communication between two or more peers, which are generically labeled as peer 1, peer 2, etc., for purposes of explanation. However, it should be understood that any given peer in the P2P network enabled by the Media Sharer may be in concurrent contact with a large number of peers that are in turn also in contact with any number of additional peers. Furthermore, the following discussion will also generally refer to video streaming in the context of a VoD service. However, it should be clear that the Media Sharer is capable of operating with any type of on-demand media or other on-demand data that is being shared between peers. In this context, the use of video and VoD is intended to be only one example of the type of content that can be shared.
  • the Media Sharer provides a peer-assisted framework wherein participating peers assist the server in delivering on-demand media content (movies, music, audio, streaming data, etc.) to other, later arriving, peers.
  • in various embodiments, peers act to serve content to other “downstream” peers, where information “flows” from the “older” peers to the “newer” peers.
  • Peers cooperate to provide at least the same quality media delivery service as a pure server-client media distribution, with the server making up any bandwidth shortfall that the peers cannot provide to each other.
  • a very large number of peers can generally be served with relatively little increase in server bandwidth requirements.
  • each peer limits its assistance to redistributing only portions of the media content that it is currently receiving.
  • each peer maintains a cache of recently viewed media content that may include any number of media files, depending upon the storage space allocated or available to each peer.
  • each peer redistributes portions of various cached files, depending upon the demands of other peers.
  • peer upload bandwidth for redistribution is determined as a function of both surplus peer upload capacity and content need of neighboring peers.
  • the following discussion will focus on the single video case where each peer limits its assistance to redistributing only portions of the media content that it is currently receiving.
  • if the server treats each set of peers that are requesting a particular video as a unique set of peers, then the distribution of N videos essentially becomes N separate sub-distribution problems, one for each video. Consequently, while the on-demand distribution of a single video (or other content) will be described herein, it should be understood that the discussion applies equally to distribution of many files as a collection of separate distribution problems.
  • the Media Sharer enables peer-assisted on-demand media streaming. Consequently, the Media Sharer described herein does not actually replace the server (or server farm) which stores the media to be shared to requesting peers. Instead, the server is assisted by the peers so that the responsibility of the server is reduced to guaranteeing that peers can play the video (or other media) at the highest available playback rate without any quality degradation, while the bulk of that media is actually sent to the peers by other peers.
  • the server also “introduces” the peers to each other so that each peer knows which other peers it should serve content to.
  • the server will periodically inform each peer what “neighborhood” each peer belongs to.
  • each peer will always know what “neighboring peers” it should serve content to.
  • the peer-assisted VoD (or other media streaming) enabled by the Media Sharer operates by having the peers that are viewing a particular video also assist in redistributing that media to other peers. Since peer-assisted VoD can move a significant fraction of the uploading from the server to the peers, it dramatically reduces the server bandwidth costs required to serve multiple peers.
  • when the aggregate upload bandwidth of the peers cannot fully satisfy the demand, the server makes up the difference, so that each peer receives the video at the encoded rate.
  • the server is only active when the peers alone cannot satisfy the demand.
  • when the peers alone can satisfy the demand, not only is the server inactive, but the peers also act to pre-fetch video from each other using any available surplus bandwidth of the other peers. This pre-fetching capability allows the peers to fill a pre-fetching buffer or “reservoir” of video content, which can then be tapped when the aggregate upload bandwidth of peers becomes less than the demand across all peers.
  • peers act independently to send data to one or more neighboring peers having the smallest reservoir levels so that, as a group, the peers try to maintain similar reservoir levels in each of their neighboring peers. Since each peer is concerned about the reservoir level of its neighboring peers, this embodiment is referred to as a “water-leveling” embodiment for filling each peer's pre-fetching buffer.
  • note that although two or more peers may have identical pre-fetching buffer levels, if those peers joined the P2P network at different times (i.e., they requested and began playback of the media at different times), or if one or more of the peers has paused the playback, then those peers will be at different playback points in the video stream, and hence each buffer will have a different buffer point. The difference between buffer levels and buffer points is discussed in further detail in Section 3.
  • each peer uses any surplus upload bandwidth capacity to send additional media content to fill the reservoir of its nearest downstream temporal neighbor to a level equal to its own level.
  • peer 1 will first act to fill the buffer of peer 2, where peer 2 is the closest temporal neighbor to have joined the P2P network after peer 1. Once the buffer level of peer 2 is equal to the buffer level of peer 1, then peer 1 will reduce the upload bandwidth to peer 2, and act to fill the buffer level of peer 3, where peer 3 is the closest temporal neighbor to have joined the P2P network after peer 2. In the meantime, peer 2 will also be acting to fill the buffer of peer 3, and so on. Since each peer in this embodiment receives as much content as possible from its immediate upstream neighbor without caring what the upstream peer gives to any further downstream neighbors, this embodiment is referred to as a “greedy-neighbor” buffer filling embodiment.
  • Both the water-leveling and the greedy-neighbor embodiments act to fill the pre-fetching buffers (or “reservoirs”) of neighboring peers using available surplus upload capacity existing after the minimum real-time media demands of peers have been satisfied.
  • the primary difference between the two embodiments is how each peer chooses what neighbor it will assist with supplied content.
  • the job of the server in each case is basically the same—to introduce each peer to its neighbors and let the peers serve each other so that the server can limit its bandwidth requirements. Note however, that at a minimum, the server will generally need to serve the entire media content to at least the first arriving peer.
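  • To make this difference concrete, the following minimal Python sketch (illustrative only; the names and structures are hypothetical and do not appear in the patent) contrasts how each embodiment chooses which downstream neighbor receives surplus upload bandwidth:

    from dataclasses import dataclass

    @dataclass
    class Peer:
        ident: int      # arrival order: a lower ident means an earlier arrival
        level: float    # current pre-fetching buffer ("reservoir") level
        surplus: float  # upload capacity left after real-time demands are met

    def water_leveling_target(me, downstream):
        """Water-leveling: direct surplus at the downstream peer whose
        reservoir level is currently lowest, so that the group keeps
        reservoir levels roughly even."""
        return min(downstream, key=lambda p: p.level, default=None)

    def greedy_neighbor_target(me, downstream):
        """Greedy-neighbor: fill the nearest later-arriving neighbor up to
        my own reservoir level before assisting the next one."""
        for p in sorted(downstream, key=lambda p: p.ident):
            if p.level < me.level:
                return p
        return None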
  • FIG. 4 illustrates the interrelationships between program modules for implementing the Media Sharer, as described herein. It should be noted that any boxes and interconnections between boxes that are represented by broken or dashed lines in FIG. 4 represent alternate embodiments of the Media Sharer described herein, and that any or all of these alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document.
  • as illustrated by FIG. 4, the peers include peer 1 ( 400 ), peer 2 ( 410 ), peer 3 ( 415 ), and peer N ( 420 ).
  • any given peer in the P2P network enabled by the Media Sharer may be in concurrent contact with a large number of other peers that are in turn also in contact with any number of additional peers.
  • the Media Sharer is enabled by variously connecting a plurality of peers ( 400 , 410 , 415 and 420 ) across a P2P network, such as the network described with respect to FIG. 3 .
  • Each of the peers ( 400 , 410 , 415 , and 420 ) generally includes the same basic program modules for enabling the Media Sharer. Consequently, for purposes of explanation, rather than reiterating each of those program modules for every peer, FIG. 4 illustrates those modules only for peer 1 400 .
  • Peers 2 through N ( 410 , 415 , and 420 ) are understood to include the same program modules as shown for peer 1.
  • when each peer ( 400 , 410 , 415 , or 420 ) contacts the server 405 to request the media content 430 , it also reports its upload capabilities. The server 405 will then use a peer evaluation module 435 to evaluate the real-time download rate requirements of each peer for the requested media content 430 in combination with the upload capabilities reported by each peer.
  • the server 405 then uses a neighborhood assignment module 440 to assign each peer to a neighborhood of fellow peers via a server network communication module 445 .
  • the minimum real-time rate requirements represent the minimum download bandwidth that each peer needs to successfully stream the media content 430 at its encoded resolution.
  • a first peer 400 contacting the server 405 will be served media content 430 across the P2P network to its network communication module 425 via the server's network communication module 445 .
  • as additional peers (e.g., peers 410 , 415 and 420 ) come online, the server will use the neighborhood assignment module 440 to assign those peers to a set of one or more neighboring peers, and will then periodically send an updated neighboring peers list 450 to each peer.
  • the server 405 periodically updates the neighboring peers list 450 to address the issue of new peers coming online, and existing peers dropping offline or having changed upload capabilities for some reason. Further, each peer periodically reports its capabilities and various status parameters (such as playback points, buffer levels, etc.) to the server, for use in creating updated neighboring peer lists 450 , as described herein.
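  • To make this bookkeeping concrete, the following hypothetical message shapes (a Python sketch; the field names are illustrative assumptions, not structures defined by the patent) capture the periodic peer status reports and server-issued neighbor list updates just described:

    from dataclasses import dataclass

    @dataclass
    class PeerStatusReport:          # peer -> server, sent periodically
        peer_id: int
        playback_point: float        # current position in the media (seconds)
        buffer_level: float          # current pre-fetching buffer level
        upload_capacity: float       # reported upload bandwidth (bps)

    @dataclass
    class NeighborAssignment:        # one entry of a neighboring peers list
        neighbor_id: int
        upload_share: float          # bandwidth to devote to this neighbor (bps)

    @dataclass
    class NeighborListUpdate:        # server -> peer, sent periodically
        peer_id: int
        neighbors: list              # list of NeighborAssignment entries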
  • the Media Sharer operates to ensure that the media content 430 is served to peers ( 400 , 410 , 415 , or 420 ) first from the upload capacity of other peers, and then from the server 405 only if necessary to ensure that the minimum real-time demand of every peer is fully satisfied.
  • this real-time demand may include a small transmission buffer or the like to account for packet delays and/or transmission losses across the P2P network. This type of buffer is well known to those skilled in the art, and will not be described in detail herein.
  • peers act to assist the server by streaming content to their neighboring peers (in accordance with directions included in the neighboring peer list 460 sent by the server 405 ) by using a content sharing module 465 to pull received data packets of the incoming media content 430 from a streaming media buffer 470 .
  • These pulled data packets are transmitted to other downstream (later arriving) peers via the network communication module 425 using an upload bandwidth level that is set via an upload bandwidth allocation module 475 (in accordance with directions included in the neighboring peer list 450 periodically sent by the server 405 ).
  • each peer ( 400 , 410 , 415 , or 420 ) will also use a streaming playback module 480 to pull those same data packets from the streaming media buffer 470 for real-time playback of the media content 430 on a local playback device 485 .
  • the streaming media buffer 470 holds content to meet the minimum real-time demands of the peers ( 400 , 410 , 415 , or 420 ).
  • each peer may still have some additional or “surplus” upload capacity that is not being used.
  • the server 405 is fully aware of the surplus upload capacity of the peers ( 400 , 410 , 415 , or 420 ) since each peer periodically reports its capabilities and various status parameters (such as playback points, buffer levels, etc.) to the server, for use in creating updated neighboring peer lists 450 , as described above.
  • the Media Sharer acts to use this surplus upload capacity of each peer ( 400 , 410 , 415 , or 420 ) to send additional data packets to other peers as a way to allow each of those peers to save up for a possible time in the future when the other peers may not be able to meet the minimum real-time demands of one or more neighboring peers.
  • These additional data packets are transmitted across the P2P network in the same manner as any other data packet.
  • the additional data packets are stored in a “pre-fetching buffer” 490 , also referred to as a reservoir.
  • the decision as to how much bandwidth each peer ( 400 , 410 , 415 , or 420 ) will allocate to the additional data packets, and to which peers that bandwidth will be allocated, is not the same as for meeting the real-time demands of each peer.
  • the server 405 includes a water-leveling module 492 that includes additional instructions in the neighboring peer list 450 that is periodically sent to each peer ( 400 , 410 , 415 , or 420 ). These additional instructions inform each peer ( 400 , 410 , 415 , or 420 ) as to how much of its surplus bandwidth it is to allocate, and to which peers it will be allocated, for sending the additional data packets for filling the pre-fetching buffer 490 of one or more neighboring peers.
  • the water-leveling module 492 determines allocation levels for surplus bandwidth by performing a three-step process that includes: 1) evaluating all peers ( 400 , 410 , 415 , or 420 ) in the order of their arrival in the P2P network to determine the required server rate to support real-time playback; 2) evaluating all the peers in reverse order of arrival to assign available peer upload bandwidth to downstream peers as a “growth rate” for each peer's pre-fetching buffer 490 ; and 3) periodically evaluating all the peers again in order of arrival and adjusting the growth rates as needed to ensure that downstream peers whose pre-fetching buffer points catch up to upstream peers do not continue to receive excess bandwidth allocations.
  • the pre-fetching buffer point of a peer represents the total amount of content downloaded by a peer up to time t, and not the level or amount of content that is actually in the pre-fetching buffer at time t. As such, the pre-fetching buffer point corresponds to a future point in the playback stream of the media file. Note that when the server 405 periodically sends an updated neighboring peer list 450 to the peers, that list includes the periodic updates regarding bandwidth allocation for maintaining the desired growth rate for each peer's pre-fetching buffer.
  • the server 405 includes a greedy-neighbor module 495 that includes additional instructions in the neighboring peer list 450 that is periodically sent to each peer. These additional instructions inform each peer ( 400 , 410 , 415 , or 420 ) as to how much of its surplus bandwidth it is to allocate, and to which peers it will be allocated, for sending the additional data packets for filling the pre-fetching buffer 490 of one or more neighboring peers.
  • the greedy-neighbor module 495 determines allocation levels for surplus bandwidth by performing a two-step process that includes: 1) evaluating all peers ( 400 , 410 , 415 , or 420 ) in the order of their arrival in the P2P network to determine the required server rate to support real-time playback; and 2) once the first pass through the peers has been completed, in a periodic second step, the greedy-neighbor module passes through all peers in order again, and then allocates as much bandwidth as possible from each peer to the next arriving neighboring peer.
  • the greedy-neighbor module 495 also acts to ensure that the pre-fetching buffer point of a downstream peer does not exceed that of the peer supplying it with additional data packets. Note that when the server 405 periodically sends an updated neighboring peer list 450 to the peers, that list includes the periodic updates regarding bandwidth allocations for filling each peer's pre-fetching buffer 490 .
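  • As one plausible simplification (a Python sketch with hypothetical structures; the patent does not supply code, and the per-peer bookkeeping shown here is an assumption), the two-pass greedy-neighbor allocation could be modeled as follows, with pass 1 meeting each peer's real-time demand r from upstream upload bandwidth (the server covering any shortfall) and pass 2 granting each peer's leftover upload to its nearest downstream neighbor, withheld when that neighbor's buffer point has already caught up:

    def greedy_neighbor_allocate(peers, r):
        """peers: oldest-first list of dicts with keys 'upload' (bps) and
        'buffer_point'. Returns (server_rate, grants), where grants[i] is
        the surplus bandwidth peer i sends to peer i+1 for pre-fetching."""
        n = len(peers)
        left = [p['upload'] for p in peers]   # unallocated upload per peer
        server_rate = 0.0

        # Pass 1 (arrival order): each peer's real-time demand r is met
        # first from earlier-arriving peers; the server tops up the rest.
        # The first peer has no upstream, so the server serves it in full,
        # as noted above.
        for i in range(n):
            need = r
            for j in range(i):
                take = min(need, left[j])
                left[j] -= take
                need -= take
                if need <= 0:
                    break
            server_rate += need

        # Pass 2 (arrival order): give all remaining upload to the next
        # arriving neighbor, unless that neighbor's buffer point would
        # overtake its supplier's.
        grants = [0.0] * n
        for i in range(n - 1):
            if peers[i + 1]['buffer_point'] < peers[i]['buffer_point']:
                grants[i] = left[i]
                left[i] = 0.0
        return server_rate, grants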
  • the above-described program modules are employed for implementing the Media Sharer.
  • the Media Sharer provides a peer-assisted framework wherein participating peers assist the server in delivering on-demand media content.
  • the following sections provide a detailed discussion of the operation of the Media Sharer, and of exemplary methods for implementing the program modules described in Section 2 with respect to FIG. 4 .
  • pre-fetching can reduce the average server rate by a factor of five or more.
  • in some cases, the server rate actually goes to 0 when peers are allowed to pre-fetch content from upstream peers.
  • the pre-fetching buffer built up during the surplus state allows the Media Sharer system to sustain streaming without using the server bandwidth at all.
  • the server rate is much closer to the bound D − S. This is true in both the water-leveling embodiment and the greedy-neighbor embodiment.
  • the greedy-neighbor embodiment appears to achieve slightly lower server rates than the water-leveling embodiment under all the examined conditions.
  • peer assistance in delivering streaming media content to other peers operates in one of three possible modes: 1) a mode in which there is a surplus supply of peer upload bandwidth capacity relative to the current content demands of the peers; 2) a mode in which there is a deficit supply of peer upload bandwidth capacity relative to the current content demands of the peers; and 3) a balanced mode in which the upload bandwidth capacity of the peers is approximately the same as the content demands of the peers.
  • the length of the media being served (in seconds) can be denoted by T and the encoding rate of the media can be denoted by r (in bps).
  • peers arrive (i.e., contact the server with a request for the media) in some probabilistic distribution, such as, for example, a generally Poisson peer arrival process denoted by λ.
  • the total number of peer “types” will be denoted by M, with peer type m corresponding to an upload bandwidth u_m. Further, each such peer type m is assumed to appear with probability p_m.
  • the Media Sharer is considered to be operating in a “surplus mode” if S > D, and in a “deficit mode” if S < D.
  • equivalently, the Media Sharer is in the surplus mode if the average peer upload bandwidth ū = Σ_m p_m·u_m exceeds the media rate (i.e., if ū > r), and in the deficit mode otherwise, since with n concurrent peers the aggregate supply is S = n·ū while the aggregate demand is D = n·r.
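  • A small worked check of this condition (a Python sketch; the peer-type numbers are hypothetical, chosen so that S/D = 1.04 to match the example discussed in Section 3.3):

    def operating_mode(n, r, types):
        """types: list of (p_m, u_m) pairs for each peer type."""
        ubar = sum(p * u for p, u in types)  # average peer upload bandwidth
        S, D = n * ubar, n * r               # aggregate supply and demand
        if S > D:
            return "surplus"
        return "deficit" if S < D else "balanced"

    # Two peer types: ubar = 0.6*1.2r + 0.4*0.8r = 1.04r, so S/D = 1.04.
    print(operating_mode(30000, 1.0, [(0.6, 1.2), (0.4, 0.8)]))  # "surplus"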
  • the server may still need to be active in supplying media to one or more of the peers for at least two reasons.
  • even when the Media Sharer is in the surplus mode, due to inherent system fluctuations the supply may, at any given instant of time, become less than the demand.
  • peers only download content in real-time (e.g., the download rate equals r) and do not pre-fetch for future needs.
  • a download buffer of some size can also be used by the peers to ensure that packet losses or delays do not result in an unduly corrupted or unrecoverable media signal.
  • the use of such buffers with respect to media streaming is well known to those skilled in the art, and will not be described in further detail herein.
  • the pre-fetching techniques described herein are not equivalent to the use or operation of conventional download buffers. This point will be better understood by reviewing the discussion of the various pre-fetching techniques provided below in Sections 3.4 and 3.5.
  • assume that there are n peers in the overall Media Sharer system. These n peers are then ordered so that peer n is the most recent to arrive, peer n−1 is the next most recent, and so on. Thus, peer 1 has been in the system the longest.
  • let u_j, j = 1, . . . , n, be the upload bandwidth of the j-th peer, and let p(u_j) be its probability.
  • let the state of the Media Sharer system be (u_1, u_2, . . . , u_n).
  • the upload bandwidth of the most recent peer (peer n) is not utilized. Furthermore, if u_{n−1} > r, the upload bandwidth portion u_{n−1} − r of the next most recent peer, peer n−1, is also wasted, since that peer can only upload to peer n. Alternatively, if each peer adopts a sharing window and can tolerate a slight delay, then peers arriving very close in time (e.g., peers n, n−1, . . . , n−k, for some k) can potentially upload different content blocks in their windows to each other. Then, every peer's upload bandwidth could be fully utilized.
  • for a Poisson peer arrival process, it can be shown that the average additional server rate needed is given by Equation (2).
  • two parameters of the Media Sharer can be determined: 1) the server rate with respect to the supply-demand ratio; and 2) the server rate with respect to the system scale (i.e., the number of peers).
  • the following example will assume that there are only two types of peers operating within the P2P network enabled by the Media Sharer.
  • a “type 1” peer has an upload bandwidth of u_1, and a “type 2” peer has an upload bandwidth of u_2, with the media rate r and the length of the media T.
  • the no pre-fetching approach still significantly reduces the server rate.
  • for example, when S/D is on the order of around 1.04 and the number of concurrent peers is around 30,000, the server rate is on the order of about 3.8r. The bandwidth saving is clearly very significant: a pure client-server model would need roughly 30,000r of server bandwidth for the same peers, so 3.8r amounts to an almost 8,000 times reduction in server rate.
  • the server rate will increase significantly as S/D approaches 1 (i.e., a balanced system). Consequently, while the simple no pre-fetching embodiment can significantly reduce server bandwidth requirements for a “surplus” operating mode, it provides less benefit when the Media Sharer operates closer to a balanced system.
  • a “deficit” operational state exists when the supply of peer upload bandwidth capacity is less than the current content demands of the peers, i.e., when S < D.
  • when the demand of the peers greatly exceeds the supply (i.e., when D/S is large), the server rate almost always equals D − S. This means that when the Media Sharer is in this high-deficit mode (in the non pre-fetching case), the server rate is again very low relative to the number of peers being served (compared to the traditional client-server model).
  • the server rate deviates from D − S as D/S approaches 1 (i.e., a balanced system).
  • the gap between the server rate and the bound D − S shrinks as the number of peers in the system increases.
  • however, if the absolute difference between the server rate and D − S is considered as the number of peers increases, the server rate is not negligible.
  • the no pre-fetching operational case performs very well in both high-surplus and high-deficit modes.
  • the no pre-fetching operational case does not perform well in the balanced mode, where the average supply is approximately equal to the average demand.
  • additional embodiments of the Media Sharer have been adapted to reduce server upload bandwidth near the balanced operational mode. These additional embodiments, as described in Sections 3.4 and 3.5, make use of various pre-fetching techniques to reduce server rate requirements.
  • the performance deviation from the bound in the balanced mode reveals a fundamental limitation of the no pre-fetching operational case. Further, due to the arrival/departure dynamics of many peers operating within the P2P network at any particular point in time, at any given time, even a balanced system (on average) might be instantaneously in a surplus or deficit state.
  • the no pre-fetching mode does not use surplus peer upload bandwidth that might be available.
  • the server needs to supplement the uploading efforts of the peers in order to satisfy the real-time demands of the peers as a group. Consequently, if peers pre-fetch media content before it is needed, the server rate contribution can be reduced since temporary operational states that would otherwise force the server to increase its upload rate are reduced or eliminated by drawing from pre-fetched content rather than calling the server for that content.
  • peers pre-fetch content and buffer data for their future needs, and act in cooperation to keep the pre-fetching buffer level of all neighboring peers as equal as possible.
  • this embodiment, as described in further detail below, is referred to as a “water-leveling” buffer filling embodiment.
  • one caveat of this water-leveling embodiment is that, in order to keep the server rate low, peers are not permitted to pre-fetch content from the server. In fact, each peer only pre-fetches content from other neighboring peers that arrived before it and that have sufficient upload bandwidth for distribution.
  • all the server does here is inform each peer of its neighboring peers. Further, since peers may drop offline or pause their playback, in one embodiment the server periodically refreshes the list of neighboring peers that is provided to each of the peers in the P2P network.
  • this pre-fetching buffer (also referred to herein as a “reservoir”) is not the same as a conventional download buffer used to ensure that network packet losses or delays do not result in an unduly corrupted or unrecoverable media signal. Further, in view of the following description, it should be clear that the manner in which the pre-fetching buffer is filled by neighboring peers differs significantly from conventional download buffers.
  • p_i(t), d_i(t) and b_i(t) are defined as the current playback point of peer i, the current demand of peer i (relative to demands on the server to provide content), and the current pre-fetching buffer point of peer i at time t, respectively.
  • the pre-fetching buffer point b_i(t) represents the total amount of content downloaded by peer i up to time t, and not the level or amount of content that is actually in the pre-fetching buffer at time t.
  • the pre-fetching buffer point corresponds to a future point in the playback stream of the media file.
  • the Media Sharer ensures that the pre-fetching buffer points of all peers follow the arrival order of the peers, such that b_i(t) ≥ b_j(t) for all i < j (an earlier arriving peer's buffer point never falls behind that of a later arriving peer).
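  • The distinction between the buffer point and the buffer level can be illustrated with hypothetical numbers (this sketch assumes, as the definitions above imply, that played-out content leaves the buffer, so the level is the gap between the buffer point and the playback point):

    playback_point = 120.0  # p_i(t): position currently being played (seconds)
    buffer_point = 200.0    # b_i(t): total content downloaded, as a stream position

    # The buffer *level* is how far downloads run ahead of playback; the
    # buffer *point* refers to a future position in the playback stream.
    buffer_level = buffer_point - playback_point
    print(buffer_level)     # 80.0 seconds of pre-fetched content in the reservoir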
  • whenever the current demands of all peers can be met from their pre-fetching buffers, the server rate would be 0 at time t, as the demands of all peers on the server are 0 at that moment. This clearly suggests that the server rate can be significantly reduced if all peers act to accumulate a high pre-fetching buffer level whenever the system is operating in a “surplus” mode, as described above. Consequently, by treating the pre-fetching buffer of every peer as a “water tank” or “reservoir,” the water-leveling embodiment described herein operates to fill the lowest tank first.
  • the water-leveling embodiment of the Media Sharer ensures that each peer directs its upload resources to the neighboring peer having the lowest pre-fetching buffer level.
  • each peer only assists those peers that are downstream, i.e., peers only assist later arriving peers (or peers that are effectively later by virtue of pausing the playback of the media). Peers do not assist earlier arriving peers.
  • the water-leveling embodiment is implemented by a series of steps that includes: first satisfying real-time demands of peers; then allocating pre-fetching buffer growth rates of various peers based on which peers currently have the lowest pre-fetching buffer levels; and then adjusting the pre-fetching buffer growth rates of various peers as those pre-fetching buffers begin to fill from the assistance of other peers.
  • the server of the Media Sharer evaluates all peers in a first pass based on their arrival order to determine the required maximum server rate needed to satisfy each individual peer's demand level. These real-time demands are then satisfied either by the server, by the server with partial assistance from the other peers, or entirely by other peers, depending upon whether the P2P network is operating in a deficit mode, a surplus mode, or a balanced mode. This ensures all real-time demands are satisfied.
  • the server is also recording how much upload bandwidth remains at each peer (as reported to the server by each peer). This remaining upload bandwidth capacity of each peer is denoted by l_i.
  • the Media Sharer allocates the remaining upload bandwidth first to the peers with the smallest pre-fetching buffer levels (as reported by each peer to the server).
  • l_{n−1}, i.e., the remaining bandwidth capacity at peer n−1
  • l_1, i.e., the remaining bandwidth capacity at peer 1
  • the allocation of the remaining upload bandwidth of each peer is performed in a backwards sweep, from node n−1 to node 1. Again, there is no peer later than peer n (i.e., peer n has no downstream peers), so its remaining upload bandwidth is not utilized.
  • the Media Sharer moves on to allocate the bandwidth of peer n−3 between its downstream peers (peer n−2, peer n−1, and peer n) in a similar way as described with respect to peer n−2.
  • the Media Sharer then continues this reverse order allocation with peer n−4, and so on, up through peer 1. Note that the entire backward allocation of bandwidth from peer n−1 through peer 1 can be completed in O(n) time, as long as the Media Sharer maintains an auxiliary data structure to keep track of groups containing neighboring peers with the same buffer level.
  • the growth rate allocation described in Section 3.4.2 is based on each peer's pre-fetching buffer level and the spare upload capacity of the upstream peers. However, since the growth rates of the pre-fetching buffers of the various peers differ, the buffer point, b_i(t), may catch up with the buffer point b_{i−1}(t) of the immediately preceding peer. Again, it should be noted that the buffer point is not the buffer level (i.e., the amount of content in each pre-fetching buffer), but instead represents the total amount of content that has been received by the peer and corresponds to some future point in the playback stream of the media file.
  • the third step is to pass through all peers again in order, and reduce the growth rates of those peers who have already caught up with earlier peers. Any excess bandwidth of the neighboring peers is then reassigned to other downstream peers as described in Section 3.4.2. Again, as long as the Media Sharer updates the auxiliary data structure (as described in Section 3.4.2), this growth rate adjustment process can be completed in O(n) time.
  • peer bandwidth allocation in the “water-leveling” embodiment includes three steps: first satisfying the real-time demands of all peers; then allocating remaining upload bandwidth, in a backwards sweep, to the downstream peers with the lowest pre-fetching buffer levels; and finally adjusting the growth rates of peers whose buffer points catch up with those of earlier peers.
  • the complexity of the entire three-step bandwidth allocation process is O(n), with the end result of the process being a group of neighboring peers having approximately equal pre-fetching buffer levels, regardless of where each peer is in the playback process.
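  • Purely as an illustration of this three-step allocation, the following Python sketch shows one way the server-side computation might be organized; it is a minimal sketch of the technique described above, not the actual implementation. Everything in it is an assumption: the Peer fields, the water_fill helper, and the allocate_water_leveling function are hypothetical names; rates and buffer levels are treated in common units over a single allocation epoch; step 3 simply caps growth rather than re-assigning the freed bandwidth; and the sketch runs in O(n²) instead of using the O(n) group bookkeeping described above.

def water_fill(levels, budget):
    # Classic water-filling: raise the lowest levels first, leveling groups
    # of equally-low peers upward until the budget is exhausted. Returns the
    # per-peer increments in the same order as `levels`.
    order = sorted(range(len(levels)), key=lambda i: levels[i])
    inc = [0.0] * len(levels)
    k = 0  # order[0..k] is the group of lowest (already equalized) tanks
    while budget > 1e-12 and k < len(order):
        cur = levels[order[k]] + inc[order[k]]
        nxt = levels[order[k + 1]] if k + 1 < len(order) else float("inf")
        step = (nxt - cur) * (k + 1)  # cost of raising the group up to nxt
        if step >= budget:
            for i in order[:k + 1]:
                inc[i] += budget / (k + 1)
            budget = 0.0
        else:
            for i in order[:k + 1]:
                inc[i] += nxt - cur
            budget -= step
        k += 1
    return inc

class Peer:
    # Hypothetical per-peer state tracked by the server, per allocation epoch.
    def __init__(self, upload, demand, level, point):
        self.upload = upload    # reported upload capacity
        self.demand = demand    # real-time demand (typically the media rate r)
        self.level = level      # pre-fetching buffer level
        self.point = point      # pre-fetching buffer point (total downloaded)
        self.spare = 0.0        # l_i: upload left after real-time demands
        self.growth = 0.0       # assigned pre-fetching buffer growth rate

def allocate_water_leveling(peers):
    # `peers` is ordered by arrival: peers[0] arrived first (peer 1).
    # Step 1: forward pass. Each peer's real-time demand is served from the
    # spare upload of earlier-arriving peers; the server covers any shortfall.
    server_rate = 0.0
    for p in peers:
        p.spare, p.growth = p.upload, 0.0
    for i, p in enumerate(peers):
        need = p.demand
        for q in peers[:i]:               # only earlier peers may assist
            take = min(q.spare, need)
            q.spare -= take
            need -= take
            if need <= 0.0:
                break
        server_rate += max(need, 0.0)     # server makes up the difference

    # Step 2: backward sweep from peer n-1 down to peer 1 (peer n has no
    # downstream peers, so its spare bandwidth is not used). Each peer's
    # spare upload is poured into the lowest downstream "water tanks".
    n = len(peers)
    for i in range(n - 2, -1, -1):
        down = peers[i + 1:]
        tanks = [d.level + d.growth for d in down]
        for q, extra in zip(down, water_fill(tanks, peers[i].spare)):
            q.growth += extra

    # Step 3: forward pass. Cap the growth of any peer whose buffer point
    # would otherwise overtake that of the peer immediately before it.
    for i in range(1, n):
        room = max(peers[i - 1].point - peers[i].point, 0.0)
        peers[i].growth = min(peers[i].growth, room)
    return server_rate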
  • overall server rate requirements are significantly reduced in the event that the overall P2P network goes into a temporary balanced or deficit operational mode since each of the peers will draw on its pre-fetching buffer rather than call on the server to make up any demand shortfall that cannot be supplied by other peers.
  • the pre-fetching buffer level, B_i(t), of the earliest peers is usually 0 at any given time t. This implies that data demands imposed on the server are usually generated by the earliest peers, with those earliest peers relieving some of the data demands on the server by assisting later peers. Due to the asymmetry of the Media Sharer on-demand delivery mechanism, wherein earlier peers only upload content to later peers, the earlier peers are more likely to be assigned lower growth rates than later peers.
  • the actual behavior of the “water-leveling” embodiment of the Media Sharer system is that later peers tend to have higher buffer levels than earlier peers. Consequently, while the “water-leveling” embodiment produces very good results overall, earlier peers still have a higher risk of running out of buffer, thereby increasing demands on the server. Therefore, in a related embodiment, as described in Section 3.5, the Media Sharer addresses this issue with a “greedy-neighbor” embodiment wherein each peer simply dedicates its remaining upload bandwidth to the neighboring peer right after itself.
  • the “greedy-neighbor” pre-fetching buffer filling embodiment generally includes two primary steps:
  • the second step can be further explained by the following pseudo code block, which shows that the growth rate of the pre-fetching buffer point (demand + growth rate) is compared between peer k and peer k+1. Consequently, the actual budget for allocating each peer's bandwidth to its neighbor does not need to evaluate the actual buffer level of the neighboring peer, just the buffer point of that neighbor.
  • this periodic allocation is illustrated by the pseudo code block provided in Table 1:
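  • Table 1 is not reproduced here. Purely as a hedged illustration of the comparison just described, the following Python sketch (reusing the hypothetical Peer fields from the water-leveling sketch above, and assuming the real-time pass has already set each peer's spare upload l_i) shows one plausible form of the periodic greedy-neighbor allocation; it is an interpretive sketch, not the patent's actual pseudo code.

def allocate_greedy_neighbor(peers):
    # `peers` is ordered by arrival. Each peer dedicates its remaining
    # upload bandwidth to the neighboring peer immediately after itself.
    for k in range(len(peers) - 1):
        donor, nbr = peers[k], peers[k + 1]
        give = donor.spare
        if nbr.point >= donor.point:
            # Peer k+1's buffer point has caught up with peer k's: compare
            # the growth rates of the buffer points (demand + growth rate)
            # so that peer k+1 matches, but does not pass, peer k.
            cap = (donor.demand + donor.growth) - (nbr.demand + nbr.growth)
            give = min(give, max(cap, 0.0))
        nbr.growth += give
        donor.spare -= give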

Abstract

A “Media Sharer” operates within peer-to-peer (P2P) networks to provide a dynamic peer-driven system for streaming high quality multimedia content, such as a video-on-demand (VoD) service, to participating peers while minimizing server bandwidth requirements. In general, the Media Sharer provides a peer-assisted framework wherein participating peers assist the server in delivering on-demand media content to other peers. Participating peers cooperate to provide at least the same quality media delivery service as a pure server-client media distribution. However, given this peer cooperation, many more peers can be served with relatively little increase in server bandwidth requirements. Further, each peer limits its assistance to redistributing only portions of the media content that it is also receiving. Peer upload bandwidth for redistribution is determined as a function of surplus peer upload capacity and content need of neighboring peers, with earlier arriving peers uploading content to later arriving peers.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a Continuation of U.S. patent application Ser. No. 11/678,268, filed on Feb. 23, 2007, and entitled “SMART PRE-FETCHING FOR PEER ASSISTED ON-DEMAND MEDIA.”
  • BACKGROUND
  • 1. Technical Field
  • The invention is related to peer-to-peer (P2P) file sharing, and in particular, to a system and method for P2P file sharing that enables on-demand multimedia streaming while minimizing server bandwidth requirements.
  • 2. Related Art
  • Video-on-demand (VoD), also referred to as on-demand video streaming, has become an extremely popular service on the Internet. For example, the well known “YouTube.com” web site states that it currently serves approximately 100 million videos per day to visiting client computers, with nearly 20 million unique visitors per month. Other examples of major Internet VoD publishers include MSN® Video, Google® Video, Yahoo® Video, CNN®, and a plethora of other VoD sites.
  • Much of the VoD being streamed over the Internet today is encoded in the 200-400 Kbps range. At these rates, Internet Service Providers (ISPs) or Content Delivery Networks (CDNs) typically charge video publishers on the order of about 0.1 to 1.0 cent per video minute. Consequently, serving millions of videos to millions of viewers can result in server bandwidth costs reaching millions of dollars per month. Unfortunately, these costs are expected to increase as demand increases and as higher-quality videos (e.g., videos with rates up to 3 Mbps or more) are made available for download.
  • Consequently, several peer-to-peer (P2P) based schemes have been suggested or implemented in an attempt to limit server bandwidth requirements in order to control the escalating bandwidth costs. In general, a peer-to-peer (P2P) network is a network that relies on the computing power and bandwidth of participant peers rather than a few large servers. The basic idea of peer-to-peer (P2P) networks is to allow each peer in the network to directly share individual files, and/or to assist a server in distributing either individual files or streaming media content. As is well known to those skilled in the art, there are a large number of conventional approaches to implementing P2P networks.
  • Most peer-assisted VoD belongs to the category of the single video approach, where cooperating peers help to deliver parts of a single video at any given time, rather than parts of multiple videos, and where each peer may be at a different point in the playback of the video. In contrast, live streaming of videos, where all peers are at the same point in the video playback, is often based on application-level multicast (ALM) protocols for media streaming. In particular, in these ALM-based schemes, the peer nodes are self-organized into an overlay tree over an existing IP network. The streaming data is then distributed along the overlay tree. The cost of providing bandwidth is then shared amongst the peer nodes, thereby reducing the bandwidth burden (and thus dollar cost) of running the media server. However, one problem with such schemes is that the leaf nodes of the distribution tree only receive the streaming media and do not contribute to content distribution. Several related conventional schemes address some of the aforementioned content distribution limitations of generic ALM-based schemes by using multiple distribution trees that span the source and the peer nodes. Each “tree” can then transmit a separate piece of streaming media. As a result, all peer nodes can be involved in content distribution.
  • A somewhat related conventional P2P media streaming solution uses a “cache-and-relay” approach such that peer nodes can serve clients with previously distributed media from their caches. Yet another P2P-based scheme combines multiple description coding (MDC) of video and data partitioning, and provides a VoD system with a graceful quality degradation as peers fail (or leave the network) with a resulting loss of video sub-streams. Still another P2P-based VoD scheme has applied network coding theory to provide a VoD solution.
  • In contrast to the single video approach, some conventional P2P-based schemes do support a multi-video VoD approach. For example, one such scheme uses erasure resilient coding (ERC) to partially cache portions of a plurality of previously streamed videos, with the proportion of the media cached by each peer being proportional to the upload bandwidth of the peer. The peers then serve portions of their cached content to other peers to assist in on-demand streaming of multiple videos.
  • SUMMARY
  • This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
  • A “Media Sharer,” as described herein, operates within a peer-to-peer (P2P) network to provide a unique peer-driven system for streaming high quality multimedia content, such as a video-on-demand (VoD) service, to participating peers while minimizing server bandwidth requirements. In general, the Media Sharer provides a peer-assisted framework wherein participating peers assist the server in delivering on-demand media content to later arriving peers. Consequently, peers share content to other “downstream” peers with information flowing from older peers to newer peers in various embodiments. Peers cooperate to provide at least the same quality media delivery service as a pure server-client media distribution, with the server making up any bandwidth shortfall that the peers cannot provide to each other. Peer upload bandwidth for redistribution of content to other peers is determined as a function of both surplus peer upload capacity and content need of neighboring peers.
  • As noted above, the Media Sharer enables peer-assisted on-demand media streaming. Consequently, the Media Sharer described herein does not actually replace the server (or server farm) which stores the media to be shared to requesting peers. Instead, the server is assisted by the peers so that the responsibilities of the server are generally reduced to: 1) guaranteeing that peers can play the video (or other media) at the highest possible playback rate, while the bulk of that media is actually sent to the peers by other peers; and 2) introducing the peers to each other so that each peer knows which other peers it should serve content to. In other words, as peers come online, and as other peers either go offline or pause the media playback, the server will periodically inform each peer what “neighborhood” each peer belongs to, and its position within that neighborhood. As a result, each peer will always know what “neighboring peers” it should serve content to.
  • In the simplest case, peers only download just enough data to meet their media streaming needs. Peers cooperate to assist the server by uploading media content to other downstream peers to meet this minimum demand level. However, since peers may have additional upload capacity left after satisfying the minimum upload demands of their neighbors, in further embodiments, peers pre-fetch data from other peers to fill a “pre-fetching buffer.” As a result, peers will have an additional content buffer (i.e., the pre-fetching buffer) from which they will draw content before making demands on the server. As a result, peers are less likely to need to contact the server for content during the streaming session, thereby reducing server load.
  • In one embodiment, peers act independently to send data to multiple other peers having the smallest pre-fetching buffer levels so that, as a group, the peers try to maintain similar pre-fetching buffer levels in each of their neighboring peers. Since each peer is concerned about the pre-fetching buffer level of its neighboring peers, this embodiment is referred to as a “water-leveling” buffer filling embodiment. Further, in the context of the idea of “water-leveling,” the pre-fetching buffer is also referred to herein as a “reservoir.” Note that although two or more peers may have identical pre-fetching buffer levels, if those peers joined the P2P network at different times (e.g., they began playback of the requested media at different times), then those peers will be at different playback points in the video stream.
  • In a related embodiment, each peer acts to fill the reservoir (i.e., the pre-fetching buffer) of its nearest downstream temporal neighbor to a level equal to its own level. In other words, peer 1 will first act to fill the pre-fetching buffer of peer 2, where peer 2 is the closest temporal neighbor to have joined the P2P network after peer 1. Once the pre-fetching buffer level of peer 2 is equal to the level of peer 1, then peer 1 will reduce (but not eliminate) the upload bandwidth to peer 2, and act to fill the buffer level of peer 3, where peer 3 is the closest temporal neighbor to have joined the P2P network after peer 2. In the meantime, peer 2 will also be acting to fill the buffer of peer 3, and so on. Since each peer in this embodiment receives as much content as possible from its immediate upstream neighbor without caring what the upstream neighbor gives to any further downstream neighbors, this embodiment is referred to as a “greedy-neighbor” buffer filling embodiment.
  • In view of the above summary, it is clear that the Media Sharer described herein provides a unique system and method for enabling peer assisted media streaming that significantly reduces server bandwidth requirements in a P2P network. In addition to the just described benefits, other advantages of the Media Sharer will become apparent from the detailed description that follows hereinafter when taken in conjunction with the accompanying drawing figures.
  • DESCRIPTION OF THE DRAWINGS
  • The specific features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
  • FIG. 1 is a general system diagram depicting a general-purpose computing device constituting an exemplary system implementing a Media Sharer, as described herein.
  • FIG. 2 is a general system diagram depicting a general device having simplified computing and I/O capabilities for use in a P2P network enabled by the Media Sharer, as described herein.
  • FIG. 3 illustrates one example of a peer-to-peer (P2P) network for use in implementing the Media Sharer, as described herein.
  • FIG. 4 provides an exemplary architectural flow diagram that illustrates program modules for implementing the Media Sharer, as described herein.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • In the following description of the preferred embodiments of the present invention, reference is made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific embodiments in which the invention may be practiced. It is understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.
  • 1.0 Exemplary Operating Environment:
  • FIG. 1 and FIG. 2 illustrate two examples of suitable computing environments on which various embodiments and elements of a Media Sharer, as described herein, may be implemented. In addition, FIG. 3 illustrates a simple example of a P2P network environment within which the Media Sharer operates, as described herein. Note that the use of a generic P2P network is described only for purposes of explanation, and that the Media Sharer is capable of operating within any type of P2P network where peers can be given a list of neighboring peers by the server or servers to assist with content delivery, as described in further detail below beginning in Section 2.
  • For example, FIG. 1 illustrates an example of a suitable computing system environment 100 on which the invention may be implemented. The computing system environment 100 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the invention. Neither should the computing environment 100 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment 100.
  • The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held, laptop or mobile computer or communications devices such as cell phones and PDA's, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer in combination with hardware modules, including components of a microphone array 198. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices. With reference to FIG. 1, an exemplary system for implementing the invention includes a general-purpose computing device in the form of a computer 110.
  • Components of computer 110 may include, but are not limited to, a processing unit 120, a system memory 130, and a system bus 121 that couples various system components including the system memory to the processing unit 120. The system bus 121 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus also known as Mezzanine bus.
  • Computer 110 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media such as volatile and nonvolatile removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data.
  • For example, computer storage media includes, but is not limited to, storage devices such as RAM, ROM, PROM, EPROM, EEPROM, flash memory, or other memory technology; CD-ROM, digital versatile disks (DVD), or other optical disk storage; magnetic cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices; or any other medium which can be used to store the desired information and which can be accessed by computer 110.
  • The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120. By way of example, and not limitation, FIG. 1 illustrates operating system 134, application programs 135, other program modules 136, and program data 137.
  • The computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, FIG. 1 illustrates a hard disk drive 141 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 151 that reads from or writes to a removable, nonvolatile magnetic disk 152, and an optical disk drive 155 that reads from or writes to a removable, nonvolatile optical disk 156 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 141 is typically connected to the system bus 121 through a non-removable memory interface such as interface 140, and magnetic disk drive 151 and optical disk drive 155 are typically connected to the system bus 121 by a removable memory interface, such as interface 150.
  • The drives and their associated computer storage media discussed above and illustrated in FIG. 1, provide storage of computer readable instructions, data structures, program modules and other data for the computer 110. In FIG. 1, for example, hard disk drive 141 is illustrated as storing operating system 144, application programs 145, other program modules 146, and program data 147. Note that these components can either be the same as or different from operating system 134, application programs 135, other program modules 136, and program data 137. Operating system 144, application programs 145, other program modules 146, and program data 147 are given different numbers here to illustrate that, at a minimum, they are different copies. A user may enter commands and information into the computer 110 through input devices such as a keyboard 162 and pointing device 161, commonly referred to as a mouse, trackball, or touch pad.
  • Other input devices (not shown) may include a joystick, game pad, satellite dish, scanner, radio receiver, and a television or broadcast video receiver, or the like. These and other input devices are often connected to the processing unit 120 through a wired or wireless user input interface 160 that is coupled to the system bus 121, but may be connected by other conventional interface and bus structures, such as, for example, a parallel port, a game port, a universal serial bus (USB), an IEEE 1394 interface, a Bluetooth™ wireless interface, an IEEE 802.11 wireless interface, etc. Further, the computer 110 may also include a speech or audio input device, such as a microphone or a microphone array 198, as well as a loudspeaker 197 or other sound output device connected via an audio interface 199, again including conventional wired or wireless interfaces, such as, for example, parallel, serial, USB, IEEE 1394, Bluetooth™, etc.
  • A monitor 191 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 190. In addition to the monitor, computers may also include other peripheral output devices such as a printer 196, which may be connected through an output peripheral interface 195.
  • The computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device, or other common network node, and typically includes many or all of the elements described above relative to the computer 110, although only a memory storage device 181 has been illustrated in FIG. 1. The logical connections depicted in FIG. 1 include a local area network (LAN) 171 and a wide area network (WAN) 173, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet.
  • When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 1 illustrates remote application programs 185 as residing on memory device 181. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • With respect to FIG. 2, this figure shows a general system diagram showing a simplified computing device. Such computing devices can typically be found in devices having at least some minimum computational capability in combination with a communications interface, including, for example, cell phones, PDA's, dedicated media players (audio and/or video), etc. It should be noted that any boxes that are represented by broken or dashed lines in FIG. 2 represent alternate embodiments of the simplified computing device, and that any or all of these alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document.
  • At a minimum, to allow a device to join the overall P2P network environment to participate in content sharing operations, the device must have some minimum computational capability, some storage capability, and a network communications interface. In particular, as illustrated by FIG. 2, the computational capability is generally illustrated by processing unit(s) 210 (roughly analogous to processing units 120 described above with respect to FIG. 1). Note that in contrast to the processing unit(s) 120 of the general computing device of FIG. 1, the processing unit(s) 210 illustrated in FIG. 2 may be specialized (and inexpensive) microprocessors, such as a DSP, a VLIW, or other micro-controller rather than the general-purpose processor unit of a PC-type computer or the like, as described above.
  • In addition, the simplified computing device of FIG. 2 may also include other components, such as, for example one or more input devices 240 (analogous to the input devices described with respect to FIG. 1). The simplified computing device of FIG. 2 may also include other optional components, such as, for example one or more output devices 250 (analogous to the output devices described with respect to FIG. 1). Finally, the simplified computing device of FIG. 2 also includes storage 260 that is either removable 270 and/or non-removable 280 (analogous to the storage devices described above with respect to FIG. 1).
  • The exemplary operating environment having now been discussed, the remaining part of this description will be devoted to a discussion of the program modules and processes embodying a “Media Sharer” which provides on-demand video, or other media or data file, access across a P2P network.
  • 2.0 Introduction:
  • In a straight server system (i.e., no assistance from peers), the server rate (i.e., the upload bandwidth requirements of the server) is directly proportional to the number of peers requesting streaming media (up to the bandwidth limit of the server). However, given the assistance of peers in serving other peers in the manner enabled by the Media Sharer, the server rate can be significantly reduced, depending upon the number of peers and the surplus upload capacity of those peers. In fact, the ability to reduce the server rate by very large margins allows the server to provide content encoded at higher bitrates (higher quality) than would otherwise be possible for a given server rate allowance. On the other hand, as described in further detail in Section 3, when the number of concurrent peers is small, the Media Sharer is more likely to run into a temporary deficit state that requires active server participation (and increased bandwidth usage) to serve those peers.
  • In general, the Media Sharer described herein provides a framework in which peers assist one or more servers in delivering on-demand media content to other peers across a loosely coupled P2P network. In the simplest case, peers only download just enough data to meet their media streaming needs. Peers cooperate to assist the server by uploading media content to other neighboring downstream peers (i.e., later arriving peers) to meet this minimum demand level. However, since peers may have additional upload capacity left after satisfying the minimum upload demands of their neighbors, in further embodiments, peers pre-fetch data from other peers to fill a “pre-fetching buffer” (also referred to herein as a “reservoir”). As a result, peers will have an additional content buffer (i.e., the pre-fetching buffer or “reservoir”) from which they will draw content before making demands on the server. Consequently, peers are less likely to need to contact the server for content during the streaming session, thereby reducing server load.
  • Note that while the Media Sharer described herein is applicable for use in large P2P networks with multiple peers, the following description will generally refer to individual peers (or groups of two or more communicating peers) for purposes of clarity of explanation. Those skilled in the art will understand that the described system and method offered by the Media Sharer is applicable to multiple peers, and that it can be scaled to any desired P2P network size or type.
  • As noted above, the Media Sharer operates within the framework of a generic P2P network. For example, one very simple P2P network is illustrated by FIG. 3. In this type of P2P network, the server(s) 300 are dedicated computers set up for the Media Sharer operation. Basically, the job of the server is to serve media content when needed (to meet any demand not satisfied by other peers), and to introduce peers to each other as neighbors to be served by their fellow neighbors. The peers 310, 320 and 330 are all end-user nodes (such as PC-type computers, PDA's, cell phones, or any other network-enabled computing device) variously connected over the Internet.
  • In addition, the server 300 also performs various administrative functions such as maintaining a list of available peers, an arrival order (or current playback point) of those peers (for identifying neighboring peers), peer upload capabilities, performing digital rights management (DRM) functionality, etc. Further, some elements of the server 300 operation will vary somewhat depending upon the architecture of the P2P network type used for implementing the Media Sharer. However, as the various types of conventional P2P networks are well known to those skilled in the art, those minor differences will not be described herein as they do not significantly affect the sharing between peers once peers are directed to their neighbors as described in the following sections.
  • Note that the following discussion will generally refer to communication between two or more peers, which are generically labeled as peer 1, peer 2, etc., for purposes of explanation. However, it should be understood that any given peer in the P2P network enabled by the Media Sharer may be in concurrent contact with a large number of peers that are in turn also in contact with any number of additional peers. Furthermore, the following discussion will also generally refer to video streaming in the context of a VoD service. However, it should be clear that the Media Sharer is capable of operating with any type of on-demand media or other on-demand data that is being shared between peers. In this context, the use of video and VoD is intended to be only one example of the type of content that can be shared.
  • 2.1 System Overview:
  • In general, the Media Sharer provides a peer-assisted framework wherein participating peers assist the server in delivering on-demand media content (movies, music, audio, streaming data, etc.) to other, later arriving, peers. In this sense, peers act to share content to other “downstream” peers where information “flows” from the “older” peers to the “newer” peers in various embodiments. Peers cooperate to provide at least the same quality media delivery service as a pure server-client media distribution, with the server making up any bandwidth shortfall that the peers cannot provide to each other. However, given this peer cooperation, a very large number of peers can generally be served with relatively little increase in server bandwidth requirements.
  • In a single media file embodiment, each peer limits its assistance to redistributing only portions of the media content that it is currently receiving. In a related embodiment, each peer maintains a cache of recently viewed media content that may include any number of media files, depending upon the storage space allocated or available to each peer. In this embodiment, each peer redistributes portions of various cached files, depending upon the demands of other peers.
  • In either case, peer upload bandwidth for redistribution is determined as a function of both surplus peer upload capacity and content need of neighboring peers. However, for purposes of explanation, the following discussion will focus on the single video case where each peer limits its assistance to redistributing only portions of the media content that it is currently receiving. In fact, even where the server is serving multiple videos to multiple peers, if the server treats each set of peers that are requesting a particular video as a unique set of peers, then the distribution of N videos essentially becomes N separate sub-distribution problems, one for each video. Consequently, while the on-demand distribution of a single video (or other content) will be described herein, it should be understood that the discussion applies equally to distribution of many files as a collection of separate distribution problems.
  • As noted above, the Media Sharer enables peer-assisted on-demand media streaming. Consequently, the Media Sharer described herein does not actually replace the server (or server farm) which stores the media to be shared to requesting peers. Instead, the server is assisted by the peers so that the responsibility of the server is reduced to guaranteeing that peers can play the video (or other media) at the highest available playback rate without any quality degradation, while the bulk of that media is actually sent to the peers by other peers.
  • In addition, the server (or servers) also “introduces” the peers to each other so that each peer knows which other peers it should serve content to. In other words, as peers come online, and as other peers either go offline or pause their playback, the server will periodically inform each peer what “neighborhood” each peer belongs to. As a result, each peer will always know what “neighboring peers” it should serve content to. However, it should be noted that even though two peers may be neighbors, there is no guarantee that their own neighbor lists will include the same set of peers.
  • As noted above, the peer-assisted VoD (or other media streaming) enabled by the Media Sharer operates by having the peers that are viewing a particular video also assist in redistributing that media to other peers. Since peer-assisted VoD can move a significant fraction of the uploading from the server to the peers, it dramatically reduces the server bandwidth costs required to serve multiple peers. Whenever the peers alone cannot fully meet the real-time demands of other peers, the server makes up the difference, so that each peer receives the video at the encoded rate. The server is only active when the peers alone cannot satisfy the demand. When the peers alone can satisfy the demand, not only is the server inactive, but the peers also act to pre-fetch video from each other using any available surplus bandwidth of the other peers. This pre-fetching capability allows the peers to fill a pre-fetching buffer or “reservoir” of video content, which can then be tapped when the aggregate upload bandwidth of peers becomes less than the demand across all peers.
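  • Loosely formalizing the policy just described (this compact expression is an editorial gloss under stated assumptions, not an equation from the patent text): if D(t) denotes the aggregate real-time demand of all peers at time t, and S_u(t) denotes the portion of aggregate peer upload bandwidth that can actually be delivered at time t, then the instantaneous server rate is approximately server_rate(t) ≈ max(0, D(t) − S_u(t)), with any positive gap S_u(t) − D(t) available for pre-fetching into the peers' reservoirs.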
  • In one embodiment, using surplus upload bandwidth capacity available after the minimum peer demand has been met, peers act independently to send data to one or more neighboring peers having the smallest reservoir levels so that, as a group, the peers try to maintain similar reservoir levels in each of their neighboring peers. Since each peer is concerned about the reservoir level of its neighboring peers, this embodiment is referred to as a “water-leveling” embodiment for filling each peer's pre-fetching buffer. Note that although two or more peers may have identical pre-fetching buffer levels, if those peers joined the P2P network at different times (i.e., they requested and began playback of the media at different times) or if one or more of the peers has paused the playback, then those peers will be at different playback points in the video stream, and hence each buffer will have a different buffer point. Note that the difference between buffer levels and buffer points is discussed in further detail in Section 3.
  • In a related embodiment, each peer uses any surplus upload bandwidth capacity to send additional media content to fill the reservoir of its nearest downstream temporal neighbor to a level equal to its own level. In other words, peer 1 will first act to fill the buffer of peer 2, where peer 2 is the closest temporal neighbor to have joined the P2P network after peer 1. Once the buffer level of peer 2 is equal to the buffer level of peer 1, then peer 1 will reduce the upload bandwidth to peer 2, and act to fill the buffer level of peer 3, where peer 3 is the closest temporal neighbor to have joined the P2P network after peer 2. In the meantime, peer 2 will also be acting to fill the buffer of peer 3, and so on. Since each peer in this embodiment receives as much content as possible from its immediate upstream neighbor without caring what the upstream neighbor gives to any further downstream neighbors, this embodiment is referred to as a “greedy-neighbor” buffer filling embodiment.
  • Both the water-leveling and the greedy-neighbor embodiments act to fill the pre-fetching buffers (or “reservoirs”) of neighboring peers using available surplus upload capacity existing after the minimum real-time media demands of peers have been satisfied. The primary difference between the two embodiments is how each peer chooses what neighbor it will assist with supplied content. The job of the server in each case is basically the same—to introduce each peer to its neighbors and let the peers serve each other so that the server can limit its bandwidth requirements. Note however, that at a minimum, the server will generally need to serve the entire media content to at least the first arriving peer. Exceptions to this rule involve the case wherein each peer buffers multiple different media files that have been previously viewed by that peer, in which case the server may not have to provide the entire content to a first arriving peer for a given media streaming session. Further, in this case, the server must also track which peer contains which buffered content, and then match each of those peers as neighbors accordingly. Again, as noted above, for purposes of explanation, the following discussion will focus on the single video case where each peer limits its assistance to redistributing only portions of the media content that it is currently receiving.
  • 2.2 System Architectural Overview:
  • The processes summarized above are illustrated by the general system diagram of FIG. 4. In particular, the system diagram of FIG. 4 illustrates the interrelationships between program modules for implementing the Media Sharer, as described herein. It should be noted that any boxes and interconnections between boxes that are represented by broken or dashed lines in FIG. 4 represent alternate embodiments of the Media Sharer described herein, and that any or all of these alternate embodiments, as described below, may be used in combination with other alternate embodiments that are described throughout this document.
  • Note that for purposes of explanation, the following discussion will generally refer to communication between several peers, which are generically labeled as peer 1 400, peer 2 410, peer 3 415, and peer N 420. However, it should be understood that any given peer in the P2P network enabled by the Media Sharer may be in concurrent contact with a large number of other peers that are in turn also in contact with any number of additional peers.
  • In general, as illustrated by FIG. 4, the Media Sharer is enabled by variously connecting a plurality of peers (400, 410, 415 and 420) across a P2P network, such as the network described with respect to FIG. 3. Each of the peers (400, 410, 415, and 420) generally includes the same basic program modules for enabling the Media Sharer. Consequently, for purposes of explanation, rather than reiterating each of those program modules for every peer, FIG. 4 illustrates those modules only for peer 1 400. Peers 2 through N (410, 415, and 420) are understood to include the same program modules as shown for peer 1.
  • Each time a peer (400, 410, 415, or 420) comes online in the P2P network, it will use a network communication module 425 to connect to server(s) 405 to request particular media content 430 to be served to that peer. Requests for particular content are generated by a media request module 432, such as, for example, a user interface that allows a user to select or otherwise specify the media content 430 to be streamed. At the same time, each peer (400, 410, 415, or 420) will also report its upload bandwidth capabilities to the server 405 so that the server can track the upload capabilities of each peer and assign each peer to a group of one or more neighboring peers. Note that communication across a network using a network communication module or the like is a concept that is well understood to those skilled in the art, and will not be described in detail herein.
  • The server 405 will then use a peer evaluation module 435 to evaluate the real-time download rate requirements of each peer (400, 410, 415, or 420) for the requested media content 430 in combination with the upload capabilities reported by each peer. The server 405 then uses a neighborhood assignment module 440 to assign each peer to a neighborhood of fellow peers via a server network communication module 445. Note that, as described in further detail in Section 3, the minimum real-time rate requirements represent the minimum download bandwidth that each peer needs to successfully stream the media content 430 at its encoded resolution.
  • In general, a first peer 400 contacting the server 405 will be served media content 430 across the P2P network to its network communication module 425 via the server's network communication module 445. As additional peers (e.g., peers 410, 415 and 420) later join the P2P network and contact the server 405 with requests for the media content 430, the server will use the neighborhood assignment module 440 to assign those peers to a set of one or more neighboring peers, and will then periodically send an updated neighboring peers list 450 to each peer. Note that the server 405 periodically updates the neighboring peers list 450 to address the issue of new peers coming online, and existing peers dropping offline or having changed upload capabilities for some reason. Further, each peer periodically reports its capabilities and various status parameters (such as playback points, buffer levels, etc.) to the server, for use in creating updated neighboring peer lists 450, as described herein.
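  • To make this periodic exchange concrete, the following Python sketch shows one plausible shape for the peer status report and the neighboring peers list 450. All of it is illustrative assumption: the PeerStatus and NeighborEntry fields, the three-neighbor window, and the refresh_neighbor_lists function are hypothetical names and choices, not structures defined by this document.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PeerStatus:
    # Hypothetical status report each peer periodically sends the server.
    peer_id: str
    upload_capacity: float  # reported upload bandwidth
    playback_point: float   # current position in the media stream
    buffer_level: float     # current pre-fetching buffer level

@dataclass
class NeighborEntry:
    # One entry of the neighboring peers list pushed back to a peer: which
    # downstream peer to serve, and at what upload rate.
    neighbor_id: str
    upload_rate: float

def refresh_neighbor_lists(reports: List[PeerStatus]) -> Dict[str, List[NeighborEntry]]:
    # Order peers by playback point as a proxy for arrival order (a paused
    # peer drifts "later"), then hand each peer its nearest downstream
    # neighbors. Actual upload_rate values would come from the allocation
    # schemes of Sections 3.4 and 3.5; 0.0 is a placeholder here.
    ordered = sorted(reports, key=lambda s: s.playback_point, reverse=True)
    lists = {}
    for i, s in enumerate(ordered):
        downstream = ordered[i + 1:i + 4]  # e.g., three nearest downstream peers
        lists[s.peer_id] = [NeighborEntry(d.peer_id, 0.0) for d in downstream]
    return lists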
  • As described in further detail in Section 3, the Media Sharer operates to ensure that the media content 430 is served to peers (400, 410, 415, or 420) first from the upload capacity of other peers, and then from the server 405 only if necessary to ensure that the minimum real-time demand of every peer is fully satisfied. Note that as discussed in Section 3, this real-time demand may include a small transmission buffer or the like to account for packet delays and/or transmission losses across the P2P network. This type of buffer is well known to those skilled in the art, and will not be described in detail herein.
  • In general, peers (400, 410, 415, or 420) act to assist the server by streaming content to their neighboring peers (in accordance with directions included in the neighboring peer list 450 sent by the server 405) by using a content sharing module 465 to pull received data packets of the incoming media content 430 from a streaming media buffer 470. These pulled data packets are transmitted to other downstream (later arriving) peers via the network communication module 425 using an upload bandwidth level that is set via an upload bandwidth allocation module 475 (in accordance with directions included in the neighboring peer list 450 periodically sent by the server 405). At the same time, each peer (400, 410, 415, or 420) will also use a streaming playback module 480 to pull those same data packets from the streaming media buffer 470 for real-time playback of the media content 430 on a local playback device 485. As discussed in further detail in Section 3, the streaming media buffer 470 holds content to meet the minimum real-time demands of the peers (400, 410, 415, or 420).
  • At any point in time, once the real-time demands of each peer (400, 410, 415, or 420) have been satisfied (for ensuring uninterrupted media playback), one or more of the peers may still have some additional or “surplus” upload capacity that is not being used. Further, the server 405 is fully aware of the surplus upload capacity of the peers (400, 410, 415, or 420) since each peer periodically reports its capabilities and various status parameters (such as playback points, buffer levels, etc.) to the server, for use in creating updated neighboring peer lists 450, as described above.
  • Therefore, in various embodiments, the Media Sharer acts to use this surplus upload capacity of each peer (400, 410, 415, or 420) to send additional data packets to other peers as a way to allow each of those peers to save up for a possible time in the future when the other peers may not be able to meet the minimum real-time demands of one or more neighboring peers. These additional data packets are transmitted across the P2P network in the same manner as any other data packet. However, since they are not currently needed for real-time playback of the media content 430, the additional data packets are stored in a “pre-fetching buffer” 490, also referred to as a reservoir.
  • While the additional data packets are transmitted in the same manner as packets needed for real-time demand, the decision as to how much bandwidth each peer (400, 410, 415, or 420) will allocate to the additional data packets, and to which peers that bandwidth will be allocated, is not the same as for meeting the real-time demands of each peer.
  • For example, in one embodiment, described herein as a “water-leveling” embodiment, the server 405 includes a water-leveling module 492 that includes additional instructions in the neighboring peer list 450 that is periodically sent to each peer (400, 410, 415, or 420). These additional instructions inform each peer (400, 410, 415, or 420) as to how much of its surplus bandwidth it is to allocate, and to which peers it will be allocated, for sending the additional data packets for filling the pre-fetching buffer 490 of one or more neighboring peers.
  • As described in Section 3.4, the water-leveling module 492 determines allocation levels for surplus bandwidth by performing a three-step process that includes: 1) evaluating all peers (400, 410, 415, or 420) in the order of their arrival in the P2P network to determine the required server rate to support real-time playback; 2) evaluating all the peers in reverse order of arrival to assign available peer upload bandwidth to downstream peers as a “growth rate” for each peer's pre-fetching buffer 490; and 3) periodically evaluating all the peers again in order of arrival and adjusting the growth rates as needed to insure that downstream peers whose pre-fetching buffer points catch up to upstream peers do not continue to receive excess bandwidth allocations. Note that the pre-fetching buffer point of a peer represents the total amount of content downloaded by that peer up to time t, and not the level or amount of content that is actually in the pre-fetching buffer at time t. As such, the pre-fetching buffer point corresponds to a future point in the playback stream of the media file. Note that when the server 405 periodically sends an updated neighboring peer list 450 to the peers, that list includes the periodic updates regarding bandwidth allocation for maintaining the desired growth rate for each peer's pre-fetching buffer.
  • In an alternate embodiment for using the surplus peer (400, 410, 415, or 420) bandwidth, described herein as a “greedy-neighbor” embodiment, the server 405 includes a greedy-neighbor module 495 that includes additional instructions in the neighboring peer list 450 that is periodically sent to each peer. These additional instructions inform each peer (400, 410, 415, or 420) as to how much of its surplus bandwidth it is to allocate, and to which peers it will be allocated, for sending the additional data packets for filling the pre-fetching buffer 490 of one or more neighboring peers.
  • As described in Section 3.5, the greedy-neighbor module 495 determines allocation levels for surplus bandwidth by performing a two-step process that includes: 1) evaluating all peers (400, 410, 415, or 420) in the order of their arrival in the P2P network to determine the required server rate to support real-time playback; and 2) once the first pass through the peers has been completed, in a periodic second step, the greedy-neighbor module passes through all peers in order again, and then allocates as much bandwidth as possible from each peer to the next arriving neighboring peer. However, as with the water-leveling module 492, the greedy-neighbor module 495 also acts to insure that the pre-fetching buffer point of a downstream peer does not exceed that of the peer supplying it with additional data packets. Note that when the server 405 periodically sends an updated neighboring peer list 450 to the peers, that list includes the periodic updates regarding bandwidth allocations for filling each peer's pre-fetching buffer 490.
  • 3.0 Operation Overview:
  • The above-described program modules are employed for implementing the Media Sharer. As summarized above, the Media Sharer provides a peer-assisted framework wherein participating peers assist the server in delivering on-demand media content. The following sections provide a detailed discussion of the operation of the Media Sharer, and of exemplary methods for implementing the program modules described in Section 2 with respect to FIG. 4.
  • 3.1 Operational Details of the Media Sharer:
  • The following paragraphs detail specific operational and alternate embodiments of the Media Sharer described herein. In particular, the following paragraphs describe details of the Media Sharer operation, including: generic surplus and deficit bandwidth operational modes; peer-assisted content delivery with no pre-fetching; peer-assisted content delivery with water-leveling based pre-fetching; and peer-assisted content delivery with greedy-neighbor based pre-fetching.
  • In general, as discussed in the following paragraphs, embodiments that do not make use of surplus peer bandwidth for pre-fetching content have been observed to provide significant server load reductions in both high-surplus and high-deficit operational modes. However, in cases where the average peer bandwidth supply approximately equals the average demand (i.e., a balanced operational mode), pre-fetching becomes necessary for reducing server load. In particular, assuming a relatively low peer arrival rate, the P2P network will tend to fluctuate between instantaneous surplus and instantaneous deficit states in an approximately balanced system.
  • In tested embodiments, it has been observed that the two pre-fetching embodiments described below can result in dramatically lower average server rates. For example, in the perfectly balanced mode, pre-fetching can reduce the average server rate by a factor of five or more. In the surplus modes, the server rate actually goes to 0 when peers are allowed to pre-fetch content from upstream peers. The pre-fetching buffer built up during the surplus state allows the Media Sharer system to sustain streaming without using the server bandwidth at all. In a deficit system, the server rate is much closer to the bound D−S. This is true in both the water-leveling embodiment and the greedy-neighbor embodiment. Moreover, the greedy-neighbor embodiment appears to achieve slightly lower server rates than the water-leveling embodiment under all the examined conditions. These concepts are discussed in detail in Sections 3.2 through 3.5.
  • 3.2 Surplus and Deficit Bandwidth Operational Modes:
  • In general, peer assistance in delivering streaming media content to other peers operates in one of three possible modes: 1) a mode in which there is a surplus supply of peer upload bandwidth capacity relative to the current content demands of the peers; 2) a mode in which there is a deficit supply of peer upload bandwidth capacity relative to the current content demands of the peers; and 3) a balanced mode in which the upload bandwidth capacity of the peers is approximately the same as the content demands of the peers.
  • In general, in providing on-demand media, the length of the media being served (in seconds) can be denoted by T and the encoding rate of the media can be denoted by r (in bps). Further, it is assumed that peers arrive (i.e., contact the server with a request for the media) in some probabilistic distribution, such as, for example, a generally Poisson peer arrival process denoted by λ. Further, since various peers will have different upload bandwidth capabilities, typically as a function of their Internet service provider (ISP), the total number of peer “types” will be denoted by M, with each peer of type m having an upload bandwidth um. Further, each such peer type m is assumed to appear with probability pm.
  • Consequently, using the property of the compound Poisson process, the peer arrival model described above is the same as if each type m peer arrives in a Poisson process with independent parameter pmλ. Therefore, the average upload bandwidth μ of all peers is given by μ=Σpmum.
  • It follows from the well known “Little's Law” that in steady state the expected number of type m peers in the system is given by ρm=pmλT, since in conventional queuing theory, Little teaches that the average number of elements in a stable system is equal to the average arrival rate of those elements (in this case peers), λ, multiplied by the average time each element spends in the system (in this case the length of the requested media), T. Therefore, assuming a steady state process, the average demand is D=rΣρm=rλT, and the average supply is S=Σumρm=μλT.
  • Given this model, the Media Sharer is considered to be operating in a “surplus mode” if S>D; and in a “deficit mode” if S<D. In other words, the Media Sharer is in the surplus mode if μ>r, and in the deficit mode otherwise. It is important to note that even if the Media Sharer is operating in surplus mode, at any given instant of time, the server may still need to be active in supplying media to one or more of the peers for at least two reasons. First, although, on average, the Media Sharer may be in the surplus mode, due to inherent system fluctuations, at any given instant of time the supply may become less than the demand. Second, it may not be possible to use all of the supply bandwidth of the peers at any given instant of time. This second point is discussed in further detail with respect to the different pre-fetching strategies discussed in Sections 3.4 and 3.5.
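  • The steady-state bookkeeping above maps directly onto a few lines of code. The following Python sketch (the function name and all parameter values are illustrative assumptions, not part of the patent) computes the average supply S and demand D for a given peer-type mix and reports the resulting operational mode:

    # Sketch of the steady-state supply/demand model of Section 3.2.
    # The arrival rate, media parameters, and peer-type mix below are
    # illustrative assumptions, not values from the tested embodiments.

    def operating_mode(arrival_rate, media_length, media_rate, peer_types):
        """peer_types: list of (p_m, u_m) pairs, with probabilities summing to 1."""
        mu = sum(p * u for p, u in peer_types)             # average upload bandwidth
        demand = media_rate * arrival_rate * media_length  # D = r * lambda * T
        supply = mu * arrival_rate * media_length          # S = mu * lambda * T
        if supply > demand:
            mode = "surplus"
        elif supply < demand:
            mode = "deficit"
        else:
            mode = "balanced"
        return supply, demand, mode

    # Example: a 1-hour file at 500 kbps, two peer types (e.g., DSL and cable).
    S, D, mode = operating_mode(arrival_rate=0.5, media_length=3600,
                                media_rate=500e3,
                                peer_types=[(0.6, 400e3), (0.4, 700e3)])
    print(f"S = {S:.3e}, D = {D:.3e} -> {mode} mode")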
  • 3.3 Peer-Assisted Content Delivery with No Pre-Fetching:
  • For purposes of explanation, the most basic peer-assistance scenario will be described for the case where there is no pre-fetching of content by peers. In this case, peers only download content in real-time (e.g., the download rate equals r) and do not pre-fetch for future needs. Note that whether or not pre-fetching is used, a download buffer of some size can also be used by the peers to insure that packet losses or delays do not result in an unduly corrupted or unrecoverable media signal. The use of such buffers with respect to media streaming is well known to those skilled in the art, and will not be described in further detail herein. However, it is important to note that the pre-fetching techniques described herein are not equivalent to the use or operation of conventional download buffers. This point will be better understood by reviewing the discussion of the various pre-fetching techniques provided below in Sections 3.4 and 3.5.
  • Therefore, assuming no pre-fetching, at any particular instant of time there will be n peers in the overall Media Sharer system. These n peers are then ordered so that peer n is the most recent to arrive, peer n−1 is the next most recent, and so on. Thus, peer 1 has been in the system the longest. Let uj, j=1, . . . , n, be the upload bandwidth of the jth peer and its probability be p(uj). As noted above, peer j is of type m with probability pm, so p(uj=um)=pm. Further let the state of the Media Sharer system be (u1, u2, . . . , un) and the rate required from the server be s(u1, u2, . . . , un). Since there is no pre-fetching in this example, the demand of peer 1 can only be satisfied by the server, at the media rate r. Then, the demand of peer 2 is satisfied first by peer 1, and by the server only if u1 is not sufficient. Similarly, the demand of peer 3 is satisfied first by peer 1, then peer 2, and then the server, and so on. In other words, for n=1, s(u1)=r, and for n=2, s(u1, u2)=r+max(0, r−u1). Therefore, for a given state, the upload rate required from the server is given by Equation (1), where:
  • $s(u_1, u_2, \ldots, u_n) = \max_{1 \le j \le n}\left(r + \max\!\left(0,\ (j-1)\,r - \sum_{i=1}^{j-1} u_i\right)\right)$  (1)
  • Note that in accordance with Equation (1), the upload bandwidth of the most recent peer (peer n) is not utilized. Furthermore, if un−1>r, the upload bandwidth portion un−1−r of the next most recent peer n−1 is also wasted, since that peer can only upload to peer n. Alternatively, if each peer adopts a sharing window and can tolerate a slight delay, then peers arriving very close in time (e.g., peers n, n−1, . . . , n−k, for some k) can potentially upload different content blocks in their windows to each other. Then, every peer's upload bandwidth could be fully utilized.
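  • Equation (1) can be transcribed directly into code. A minimal Python sketch (the function name and test values are mine, chosen to match the n=1 and n=2 cases worked out above):

    def server_rate(uploads, r):
        """Equation (1): instantaneous server rate with no pre-fetching.

        uploads: upload bandwidths (u_1, ..., u_n) in arrival order.
        r: media encoding rate.
        """
        s, upstream = 0.0, 0.0  # upstream = sum of u_1 .. u_(j-1)
        for j in range(1, len(uploads) + 1):
            # Peer j is served first by peers 1..j-1, then by the server.
            s = max(s, r + max(0.0, (j - 1) * r - upstream))
            upstream += uploads[j - 1]
        return s

    assert server_rate([0.0], r=1.0) == 1.0        # n = 1: s = r
    assert server_rate([0.5, 0.0], r=1.0) == 1.5   # n = 2: s = r + (r - u1)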
  • For Poisson-distributed peer arrivals, it can be shown that the average additional server rate needed is given by Equation (2), where
  • $\bar{s} = \sum_{n} \frac{(\lambda T)^n}{n!}\, e^{-\lambda T} \sum_{u_1, \ldots, u_n} p(u_1, u_2, \ldots, u_n)\, s(u_1, u_2, \ldots, u_n)$, and where $p(u_1, u_2, \ldots, u_n) = \prod_{1 \le j \le n} p(u_j)$  (2)
  • Although this result is not in closed form, s can be readily calculated using a conventional Monte Carlo summation.
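  • One reasonable reading of that Monte Carlo summation is to sample the Poisson peer count and the i.i.d. peer types directly. A sketch, assuming NumPy is available and reusing the server_rate() function from the previous sketch:

    import numpy as np

    def average_server_rate(arrival_rate, media_length, media_rate,
                            peer_types, num_samples=2000, seed=0):
        """Monte Carlo estimate of Equation (2), reusing server_rate() above."""
        rng = np.random.default_rng(seed)
        probs, bandwidths = zip(*peer_types)
        samples = []
        for _ in range(num_samples):
            # In steady state, the number of peers is Poisson with mean lambda*T.
            n = rng.poisson(arrival_rate * media_length)
            # Peer types are drawn i.i.d. with probabilities p_m.
            uploads = rng.choice(bandwidths, size=n, p=probs)
            samples.append(server_rate(uploads, media_rate))
        return float(np.mean(samples))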
  • 3.3.1 Surplus Peer Upload Capacity:
  • Given the framework described in Section 3.3, two parameters of the Media Sharer can be determined: 1) the server rate with respect to the supply-demand ratio; and 2) the server rate with respect to the system scale (i.e., the number of peers). For purposes of explanation, the following example will assume that there are only two types of peers operating within the P2P network enabled by the Media Sharer. In particular, it will be assumed that a “type 1” peer has an upload bandwidth of u1, and a “type 2” peer has an upload bandwidth of u2, with the media rate r and the length of the media T.
  • In this case, it has been observed that when the total peer upload capacity, or supply S, is greater than the demand D by a substantial margin (e.g., S/D is on the order of about 1.4 or more), the server upload rate is very close to the media bit encoding rate r and does not increase as the system scales (i.e., as the number of peers grows). In other words, when there is sufficient average surplus in the system, an approach as simple as the no pre-fetching case can be adopted and the server rate will remain very low.
  • Further, even with a relatively small average surplus S, the no pre-fetching approach still significantly reduces the server rate. For example, when S/D is on the order of around 1.04, and the number of concurrent peers is around 30,000, it has been observed that the server rate is on the order of about 3.8r. Compared to traditional client-server models, where the server streams all data and thus its rate would be about 30,000r, the bandwidth saving is clearly very significant at an almost 8,000 times reduction in server rate. However, the server rate will increase significantly as S/D approaches 1 (i.e., a balanced system). Consequently, while the simple no pre-fetching embodiment can significantly reduce server bandwidth requirements for a “surplus” operating mode, it provides less benefit when the Media Sharer operates closer to a balanced system.
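  • This near-balanced regime can be explored with the Monte Carlo sketch from Section 3.3. The parameters below are chosen only so that S/D is approximately 1.04 with roughly 30,000 expected concurrent peers; the 3.8r figure itself is an observation from the tested embodiments, not a guaranteed output of this sketch:

    # Illustrative (not the patent's test configuration): a near-balanced
    # system with S/D = 1.04 and roughly 30,000 concurrent peers on average.
    rate = average_server_rate(arrival_rate=8.0, media_length=3600,
                               media_rate=1.0,
                               peer_types=[(0.5, 0.78), (0.5, 1.3)],
                               num_samples=200)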
  • 3.3.2 Deficit Peer Upload Capacity:
  • As noted above, a “deficit” operational state exists when the supply of peer upload bandwidth capacity is less than the current content demands of the peers, i.e., when S<D. In the no pre-fetching case, it has been observed that when the supply S is less than the demand D by a substantial margin (e.g., D/S is on the order of about 1.4 or more), the server rate almost always equals D−S. This means that when the Media Sharer is in this high-deficit mode (in the no pre-fetching case), the server rate is again very low relative to the number of peers being served (compared to the traditional client-server model). Further, it has been observed that the server rate deviates from D−S as D/S approaches 1 (i.e., a balanced system). In addition, it has also been observed that the relative gap between the server rate and the bound D−S shrinks as the number of peers in the system increases. However, in absolute terms, the difference between the server rate and D−S is not negligible as the number of peers increases.
  • In summary, the no pre-fetching operational case performs very well in both high-surplus and high-deficit modes. Unfortunately, the no pre-fetching operational case does not perform well in the balanced mode, where the average supply is approximately equal to the average demand. Since media streaming in a P2P network is likely to frequently operate near a balanced mode, additional embodiments of the Media Sharer have been adapted to reduce server upload bandwidth near the balanced operational mode. These additional embodiments, as described in Sections 3.4 and 3.5, make use of various pre-fetching techniques to reduce server rate requirements.
  • 3.4 Peer-Assisted Delivery with “Water-Leveling” Based Pre-Fetching:
  • As discussed above, the performance deviation from the bound in the balanced mode reveals a fundamental limitation of the no pre-fetching operational case. Further, due to the arrival/departure dynamics of the many peers operating within the P2P network, even a system that is balanced on average might be instantaneously in a surplus or deficit state at any given time.
  • Further, as discussed above, when the Media Sharer operates in a surplus operational state, the no pre-fetching mode does not use surplus peer upload bandwidth that might be available. In addition, when the Media Sharer enters a deficit operational state, the server needs to supplement the uploading efforts of the peers in order to satisfy the real-time demands of the peers as a group. Consequently, if peers pre-fetch media content before it is needed, the server rate contribution can be reduced since temporary operational states that would otherwise force the server to increase its upload rate are reduced or eliminated by drawing from pre-fetched content rather than calling the server for that content.
  • In the first pre-fetching embodiment, peers pre-fetch content and buffer data for their future needs, and act in cooperation to keep the pre-fetching buffer levels of all neighboring peers as equal as possible. This embodiment, as described in further detail below, is referred to as a “water-leveling” buffer filling embodiment. One caveat of this water-leveling embodiment is that in order to keep the server rate low, peers are not permitted to pre-fetch content from the server. In fact, each peer only pre-fetches content from other neighboring peers that arrived before it and that have sufficient upload bandwidth for distribution. As noted above, the server does, however, inform each peer of its neighboring peers. Further, since peers may drop offline, or pause their playback, in one embodiment, the server periodically refreshes the list of neighboring peers that is provided to each of the peers in the P2P network.
  • Whenever a peer has pre-fetched content in its pre-fetching buffer, it can drain that buffer before it requests any new data. Consequently, the current demand of each peer can vary depending on its pre-fetching buffer level, as opposed to the constant demand that exists in the case where no pre-fetching is used. Again, it must be noted that this pre-fetching buffer (also referred to herein as a “reservoir”) is not the same as a conventional download buffer used to insure that network packet losses or delays do not result in an unduly corrupted or unrecoverable media signal. Further, in view of the following description, it should be clear that the manner in which the pre-fetching buffer is filled by neighboring peers differs significantly from conventional download buffers.
  • In particular, in the water-leveling embodiment, pi(t), di(t) and bi(t) are defined as the current playback point of peer i, the current demand of peer i (relative to demands on the server to provide content), and the current pre-fetching buffer point of peer i at time t, respectively. Note that the pre-fetching buffer point bi(t) represents the total amount of content downloaded by peer i up to time t, and not the level or amount of content that is actually in the pre-fetching buffer at time t. As such, the pre-fetching buffer point corresponds to a future point in the playback stream of the media file. In this embodiment, the Media Sharer ensures that the pre-fetching buffer points of all peers follow the arrival order of each peer, such that bi≧bj for all i<j. Note that the pre-fetching buffer level, Bi(t), is defined as Bi(t)=bi(t)−pi(t). Each peer must maintain Bi(t)>0 in order to ensure continuous real-time playback of the streaming media from content provided by other peers. Consequently, the demand di(t) of peer i is 0 (no server requests) if its pre-fetching buffer level Bi(t)>0, and r if its pre-fetching buffer level Bi(t)=0.
  • Therefore, if all peers maintain their pre-fetching buffer level Bi(t) above 0, the server rate would be 0 at time t, as the demands of all peers are 0 at the moment. This clearly suggests that the server rate can be significantly reduced if all peers act to accumulate a high pre-fetching buffer level whenever the system is operating in a “surplus” mode, as described above. Consequently, by treating the pre-fetching buffer of every peer as a “water tank” or “reservoir,” the water-leveling embodiment described herein operates to fill the lowest tank first.
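  • The bookkeeping of the preceding paragraphs maps naturally onto a small per-peer data structure. A minimal sketch (field and method names are mine, not the patent's):

    from dataclasses import dataclass

    @dataclass
    class PeerState:
        playback_point: float    # p_i(t): amount of content already played
        buffer_point: float      # b_i(t): total content downloaded so far
        upload_capacity: float   # u_i: total upload bandwidth of this peer

        def buffer_level(self):
            """B_i(t) = b_i(t) - p_i(t): pre-fetched content not yet played."""
            return self.buffer_point - self.playback_point

        def demand(self, r):
            """d_i(t): r when the reservoir is empty, otherwise 0."""
            return r if self.buffer_level() <= 0 else 0.0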
  • In particular, the water-leveling embodiment of the Media Sharer ensures that each peer directs its upload resources to the neighboring peer having the lowest pre-fetching buffer level. However, as noted above, each peer only assists those peers that are downstream, i.e., peers only assist later arriving peers (or peers that are effectively later by virtue of pausing the playback of the media). Peers do not assist earlier arriving peers. Given these constraints, the water-leveling embodiment is implemented by a series of steps that includes: first satisfying real-time demands of peers; then allocating pre-fetching buffer growth rates of various peers based on which peers currently have the lowest pre-fetching buffer levels; and then adjusting the pre-fetching buffer growth rates of various peers as those pre-fetching buffers begin to fill from the assistance of other peers. These basic steps are described in the following sections.
  • 3.4.1 Satisfying Real-Time Demands:
  • At some time t, assume there are n neighboring peers in the P2P network. The demand of each peer is either 0 or r, depending upon whether or not each peer has a non-zero pre-fetching buffer level, as described in Section 3.4. As with the no pre-fetching embodiment described above, the server of the Media Sharer evaluates all peers in a first pass based on their arrival order to determine the required maximum server rate needed to satisfy each individual peer's demand level. These real-time demands are then satisfied either by the server, by the server with partial assistance from the other peers, or entirely by other peers, depending upon whether the P2P network is operating in a deficit mode, a surplus mode, or a balanced mode. This ensures all real-time demands are satisfied. However, at the same time that the demand requirements of each peer are determined at the server, the server is also recording how much upload bandwidth remains at each peer (as reported to the server by each peer). This remaining upload bandwidth capacity of each peer is denoted by li.
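  • A sketch of this first pass, assuming the PeerState structure above: peers are walked in arrival order, each peer's real-time demand is drained from upstream leftover capacity first and falls to the server otherwise, and the leftover l_i of every peer is recorded:

    def first_pass(peers, r):
        """First pass (Section 3.4.1): satisfy real-time demands in arrival
        order and record each peer's leftover upload capacity l_i."""
        required_server_rate = 0.0
        leftovers = [p.upload_capacity for p in peers]
        for i, peer in enumerate(peers):
            need = peer.demand(r)
            # Drain the leftover capacity of upstream peers first, oldest first.
            for j in range(i):
                if need <= 0:
                    break
                used = min(need, leftovers[j])
                leftovers[j] -= used
                need -= used
            required_server_rate += need  # any shortfall falls to the server
        return required_server_rate, leftovers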
  • 3.4.2 Allocating Growth Rates:
  • After the first pass through all peers in the water-leveling embodiment, the Media Sharer allocates the remaining upload bandwidth first to peers with the smallest pre-fetching buffer levels (as reported by each peer to the server). Clearly, since peers only assist downstream peers, ln−1 (i.e., the remaining bandwidth capacity at peer n−1) can only be allocated to peer n, while l1 (i.e., the remaining bandwidth capacity at peer 1) can be allocated to any of peers 2 through n. Therefore, in order to address this asymmetry, in one embodiment, the allocation of the remaining upload bandwidth of each peer is performed in a backward sweep, from peer n−1 to peer 1. Again, there is no peer later than peer n (i.e., it has no downstream peers), so its remaining upload bandwidth is not utilized.
  • The growth rate gi for each peer's pre-fetching buffer represents the additional upload bandwidth assigned to peer i (from other neighboring peers) beyond satisfying the real-time demand of peer i. For example, starting with peer n−1, the remaining upload bandwidth of peer n−1 (i.e., ln−1) is assigned to the growth rate gn of peer n, i.e., gn=ln−1, as long as the buffer point of peer n−1 is ahead of that of peer n, i.e., as long as bn−1>bn.
  • Then, peer n−2 is examined. The allocation of bandwidth for peer n−2 is calculated as illustrated by Equation (3), where:
  • $\begin{cases} g_{n-1} = l_{n-2}, & B_{n-1} < B_n \\ g_n = g_n + l_{n-2}, & B_{n-1} > B_n \\ g_{n-1} = \min\!\left(\frac{l_{n-2} + g_n}{2},\ l_{n-2}\right),\ \ g_n = g_n + l_{n-2} - g_{n-1}, & B_{n-1} = B_n \end{cases}$  (3)
  • In other words, if Bn−1≠Bn, then the extra upload capacity ln−2 of peer n−2 is assigned to whichever of the two peers has the smaller buffer level. Otherwise, for Bn−1=Bn, the bandwidth assignment is split to ensure that the growth rates of peer n−1 and peer n are equal after the allocation.
  • Next, after the remaining upload bandwidth of peer n−2 is completely assigned to downstream peers, the Media Sharer moves on to allocate the bandwidth of peer n−3 between downstream peers, peer n−2, peer n−1, and peer n in a similar way as described with respect to peer n−2. The Media Sharer then continues this reverse order allocation with peer n−4, and so on, up through peer 1. Note that the entire backward allocation of bandwidth from peer n−1 through peer 1 can be completed in O(n) time, as long as the Media Sharer maintains an auxiliary data structure to keep track of groups containing neighboring peers with the same buffer level.
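  • A simplified sketch of this backward sweep, again assuming the PeerState structure above. Rather than the patent's O(n) grouped allocation, it uses a straightforward water-filling helper over projected buffer levels (the current level plus the growth already assigned over one allocation interval), which captures the fill-the-lowest-tank behavior of Equation (3) at O(n²) cost:

    def pour(levels, budget):
        """Water-filling: raise the lowest levels first, keeping raised levels
        equal, until the budget is exhausted. Returns the per-entry increase."""
        order = sorted(range(len(levels)), key=lambda k: levels[k])
        inc = [0.0] * len(levels)
        i = 0
        while budget > 0 and i < len(order):
            cur = levels[order[i]] + inc[order[i]]
            nxt = levels[order[i + 1]] if i + 1 < len(order) else float("inf")
            lift = min(nxt - cur, budget / (i + 1))
            for k in order[: i + 1]:
                inc[k] += lift
            budget -= lift * (i + 1)
            i += 1
        return inc

    def allocate_growth_rates(peers, leftovers):
        """Backward sweep (Section 3.4.2): pour each upstream peer's leftover
        capacity over the reservoirs of its eligible downstream peers."""
        n = len(peers)
        growth = [0.0] * n
        for i in range(n - 2, -1, -1):  # peer n-1 down to peer 1
            # Peer i may only feed later arrivals whose buffer point trails its own.
            targets = [j for j in range(i + 1, n)
                       if peers[j].buffer_point < peers[i].buffer_point]
            if not targets:
                continue
            # Projected level one allocation interval ahead: B_j + g_j.
            levels = [peers[j].buffer_level() + growth[j] for j in targets]
            for j, extra in zip(targets, pour(levels, leftovers[i])):
                growth[j] += extra
        return growth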
  • 3.4.3 Adjusting Growth Rates:
  • The growth rate allocation described in Section 3.4.2 is based on each peer's pre-fetching buffer level and the spare upload capacity of the upstream peers. However, since the growth rates of the pre-fetching buffers of the various peers differ, the buffer point bi(t) of a peer may catch up with the buffer point bi−1(t) of the immediately preceding peer. Again, it should be noted that the buffer point is not the buffer level (i.e., the amount of content in each pre-fetching buffer), but instead represents the total amount of content that has been received by the peer and corresponds to some future point in the playback stream of the media file. In other words, peer k+1 can catch up with peer k, such that bk+1(t)=bk(t), if peer k+1 has a higher growth rate than peer k, i.e., gk+1>gk at time t. If the growth rates were left unchanged, the buffer point of peer k+1 would then surpass that of peer k. In this case, the Media Sharer acts to decrease the growth rate gk+1 to the same level as gk.
  • In other words, the third step is to pass through all peers again in order, and reduce the growth rates of those peers who have already caught up with earlier peers. Any excess bandwidth of the neighboring peers is then reassigned to other downstream peers as described in Section 3.4.2. Again, as long as the Media Sharer updates the auxiliary data structure (as described in Section 3.4.2), this growth rate adjustment process can be completed in O(n) time.
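  • A sketch of this third pass, under the same assumptions as the earlier sketches; the reassignment of the freed bandwidth back through the Section 3.4.2 sweep is noted but omitted here for brevity:

    def adjust_growth_rates(peers, growth):
        """Third pass (Section 3.4.3): a peer whose buffer point has caught up
        with its predecessor's must not grow faster than that predecessor.
        The freed bandwidth is then reassigned as described in Section 3.4.2
        (reassignment omitted in this sketch)."""
        freed = 0.0
        for k in range(1, len(peers)):
            caught_up = peers[k].buffer_point >= peers[k - 1].buffer_point
            if caught_up and growth[k] > growth[k - 1]:
                freed += growth[k] - growth[k - 1]
                growth[k] = growth[k - 1]
        return growth, freed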
  • In summary, peer bandwidth allocation in the “water-leveling” embodiment includes three steps:
      • 1) Pass through all the peers in order of arrival to determine the required server rate to support real-time playback;
      • 2) Process all the peers in reverse order of arrival to assign available peer upload bandwidth to downstream peers; and
      • 3) Process all the peers again in order of arrival and adjust the growth rates as needed to insure that downstream peers whose buffer point catches up to upstream peers do not continue to receive excess bandwidth allocations. Note that this third step then repeats periodically through the media streaming process.
  • The complexity of the entire three step bandwidth allocation process is O(n), with the end result of the process being a group of neighboring peers having approximately equal pre-fetching buffer levels, regardless of where each peer is in the playback process. As a result, overall server rate requirements are significantly reduced in the event that the overall P2P network goes into a temporary balanced or deficit operational mode since each of the peers will draw on its pre-fetching buffer rather than call on the server to make up any demand shortfall that cannot be supplied by other peers.
  • In a tested example of the above-described “water-leveling” pre-fetching buffer filling embodiment, it has been observed that when the server rate is positive, the pre-fetching buffer level, Bi(t), of the earliest peers (i.e., peers 1, 2, 3, etc.) is usually 0 at any given time t. This implies that data demands imposed on the server are usually generated by the earliest peers, with those earliest peers relieving some of the data demands on the server by assisting later peers. Due to the asymmetry of the Media Sharer on-demand delivery mechanism, wherein earlier peers only upload content to later peers, the earlier peers are more likely to be assigned lower growth rates than later peers. Therefore, as noted above, the actual behavior of the “water-leveling” embodiment of the Media Sharer system is that later peers tend to have higher buffer levels than earlier peers. Consequently, while the “water-leveling” embodiment produces very good results overall, earlier peers still have a higher risk of running out of buffer, thereby increasing demands on the server. Therefore, in a related embodiment, as described in Section 3.5, the Media Sharer addresses this issue with a “greedy-neighbor” embodiment wherein each peer simply dedicates its remaining upload bandwidth to the neighboring peer right after itself.
  • 3.5 Peer-Assisted Delivery with “Greedy-Neighbor” Based Pre-Fetching:
  • The “greedy-neighbor” pre-fetching buffer filling embodiment generally includes two primary steps:
      • 1) The first step is similar to a combination of the no-pre-fetching embodiment and the first step of the “water-leveling” embodiment. In particular, in the “greedy-neighbor” embodiment, the Media Sharer first passes through all peers in order of arrival and determines the server rate needed to satisfy the real-time demands of those peers. As with the “water-leveling” embodiment, the Media Sharer also records the remaining bandwidth at each peer at the same time;
      • 2) Once the first pass through the peers has been completed, in a periodic second step, the Media Sharer passes through all peers in order again, and then allocates as much bandwidth as possible from each peer to the next arriving neighboring peer. However, this second step is repeated periodically so that the growth rate of peer k+1 does not cause the buffer point of peer k+1 to surpass that of peer k, as discussed above with respect to the water-leveling embodiment. In this case, as with the water-leveling embodiment, the Media Sharer will act to decrease the growth rate gk+1 to the same level as gk.
  • The second step can be further explained by the following pseudo code block, which shows that the buffer-point growth rate (demand+growth rate) of peer k is compared against that of peer k+1. Consequently, the actual budget for allocating each peer's bandwidth to its neighbor does not need to account for the actual buffer level of the neighboring peer, just the buffer point of that neighbor. In particular, this periodic allocation is illustrated by the pseudo code block provided in Table 1:
  • TABLE 1
    Second (Periodic) Step of the “Greedy Neighbor” Embodiment
    bandwidth_allocation := 0
    for k = 1 to n − 1 do
        bandwidth_allocation := bandwidth_allocation + l_k
        if b_k = b_{k+1} and d_k + g_k < d_{k+1} + bandwidth_allocation then
            g_{k+1} := d_k + g_k − d_{k+1}
        else
            g_{k+1} := bandwidth_allocation
        bandwidth_allocation := bandwidth_allocation − g_{k+1}
    return
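  • For reference, the loop in Table 1 transcribes directly into runnable Python (variable names follow the table; the inputs are assumed to come from the first step described above):

    def greedy_neighbor_second_step(b, d, l, g):
        """Runnable transcription of Table 1 (variable names match the table).

        b: buffer points, d: real-time demands, l: leftover upload capacities,
        g: growth rates, all indexed by arrival order. Returns updated g."""
        bandwidth_allocation = 0.0
        for k in range(len(b) - 1):
            bandwidth_allocation += l[k]
            # If peer k+1 has caught up with peer k, cap its buffer-point
            # growth (demand + growth) at peer k's; otherwise hand it the
            # entire accumulated budget.
            if b[k] == b[k + 1] and d[k] + g[k] < d[k + 1] + bandwidth_allocation:
                g[k + 1] = d[k] + g[k] - d[k + 1]
            else:
                g[k + 1] = bandwidth_allocation
            bandwidth_allocation -= g[k + 1]
        return g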
  • The foregoing description of the Media Sharer has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Further, it should be noted that any or all of the aforementioned alternate embodiments may be used in any combination desired to form additional hybrid embodiments of the Media Sharer. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.

Claims (20)

1. A computer-implemented process, comprising:
identifying relative times when each of two or more peers in a peer-to-peer (P2P) network requested streaming of a common media file;
directing streaming of media packets of the requested media file to each peer to enable streaming playback of the media file for each peer at different times corresponding to the relative request times from a combination of:
server transmission of streaming packets to one or more of the peers, and
allocation of a first portion of an available upload bandwidth capacity of each peer to transmit one or more received streaming media packets to one or more other peers that requested streaming of the common media file at a later relative request time; and
periodically directing one or more of the peers to allocate a second portion of its available upload bandwidth to transmit additional packets of the common media file to one or more of the peers that requested the common media file at later relative request times.
2. The computer-implemented process of claim 1 further comprising an on-demand service for peer selection of the requested media file from a plurality of available media files.
3. The computer-implemented process of claim 1 further comprising evaluating each of the two or more peers in reverse order of their relative request times and allocating the second portion of each peer's available upload bandwidth among a plurality of peers having a later relative request time for filling a pre-fetching buffer of those peers with excess media packets of the requested media file.
4. The computer-implemented process of claim 3 further comprising reducing allocations of the second portions of the upload bandwidth of the peers having earlier relative request times to particular peers having later relative request times when a level of the pre-fetching buffer of those peers reaches that of the peers which are allocating the upload bandwidth.
5. The computer-implemented process of claim 1 wherein periodically directing one or more of the peers to allocate a second portion of its available upload bandwidth comprises:
for each of the two or more peers, identifying a nearest downstream neighbor peer as a particular one of the peers having a closest later relative request time; and
periodically directing each peer to allocate as much upload bandwidth as possible to its nearest downstream neighbor peer for filling a pre-fetching buffer of the nearest downstream neighbor peer with excess media packets of the requested media file.
6. The computer-implemented process of claim 5 further comprising reallocating a portion of the second portion of the upload bandwidth of each peer to a next nearest downstream neighbor peer having a next closest later relative request time when the level of the pre-fetching buffer of the nearest downstream neighbor peer reaches that of the peer which is allocating the upload bandwidth.
7. The computer-implemented process of claim 1 wherein each peer self-reports its available upload bandwidth capacity.
8. The computer-implemented process of claim 1 further comprising periodically adjusting the first portion of the upload bandwidth capacity of each peer relative to the second portion of that peer's upload bandwidth capacity to adapt to changes in a connection status of one or more other peers in the P2P network.
9. The computer-implemented process of claim 1 wherein an effective relative request time of any peer that pauses playback of the common media file is changed as a function of that peer's current playback point relative to playback points of the other peers.
10. A computing device, comprising:
a memory configured to store at least one program module; and
a processing unit configured to execute the at least one program module to:
determine an upload bandwidth capacity of each of a plurality of peer computing devices joined to a peer-to-peer (P2P) network;
identify a request time when each peer computing device requested on-demand streaming of a common media file;
for each peer, identify a set of one or more downstream neighbor peers as those peers that requested the on-demand streaming of the common media file at a later time;
direct streaming of media packets of the common media file to each peer for real-time playback of the media file at different times corresponding to the request times from a combination of:
server transmission of streaming packets to one or more of the peers, and
directing each peer to allocate a first portion of its upload bandwidth capacity to retransmit one or more received streaming media packets to one or more downstream neighbor peers; and
direct each peer to allocate a second portion of its upload bandwidth capacity to one or more downstream neighbor peers, with that second portion being used to retransmit additional packets of the common media file in excess of those used to provide real-time playback.
11. The computing device of claim 10 further comprising on-demand peer selection of the requested media file from a plurality of available media files.
12. The computing device of claim 10 further comprising periodically adjusting the first portion of the upload bandwidth capacity of each peer relative to the second portion of that peer's upload bandwidth capacity to adapt to changes in a connection status of one or more other peers in the P2P network.
13. The computing device of claim 10 wherein an effective relative request time of any peer that pauses playback of the common media file is changed as a function of that peer's current playback point relative to playback points of the other peers.
14. The computing device of claim 10 further comprising evaluating each of the peer computing devices in reverse order of their request times and allocating the second portion of each peer's available upload bandwidth among a plurality of its downstream neighbor peers for filling a pre-fetching buffer of those peers with excess media packets of the common media file.
15. The computing device of claim 14 further comprising reducing allocations of the second portions of the upload bandwidth of the peers having earlier request times to particular downstream neighbor peers when a level of the pre-fetching buffer of those downstream neighbor peers reaches that of the peers which are allocating the upload bandwidth.
16. The computing device of claim 10 wherein directing each peer to allocate a second portion of its upload bandwidth capacity comprises:
periodically directing each peer to allocate as much upload bandwidth as possible to its temporally closest downstream neighbor peer for filling a pre-fetching buffer of the temporally closest downstream neighbor peer with excess media packets of the common media file.
17. The computing device of claim 16 further comprising reallocating a portion of the second portion of the upload bandwidth of each peer to a next nearest downstream neighbor peer having a next closest request time when the level of the pre-fetching buffer of the nearest downstream neighbor peer reaches that of the peer which is allocating the upload bandwidth.
18. A computer-readable medium having computer executable instructions stored therein, said instructions causing a computing device to execute a method comprising:
identifying relative request times when each of two or more peers in a peer-to-peer (P2P) network requested on-demand streaming of a common media file;
directing streaming of media packets of the requested media file to each peer to enable streaming playback of the media file for each peer at different times corresponding to the relative request times from a combination of:
server transmission of streaming packets to one or more of the peers, and
using a first portion of an available upload bandwidth capacity of each peer to transmit one or more received streaming media packets to one or more other peers that requested on-demand streaming of the common media file at a later relative request time; and
periodically directing one or more of the peers to use a second portion of its available upload bandwidth to transmit additional packets of the common media file to one or more of the peers that requested the common media file at later relative request times.
19. The computer-readable medium of claim 18 wherein the first portion of the upload bandwidth capacity of each peer is periodically adjusted relative to the second portion of that peer's upload bandwidth capacity to adapt to changes in a connection status of one or more other peers in the P2P network.
20. The computer-readable medium of claim 18 wherein an effective relative request time of any peer that pauses playback of the common media file is changed as a function of that peer's current playback point relative to playback points of the other peers.