US20100085979A1 - Models for routing tree selection in peer-to-peer communications

Models for routing tree selection in peer-to-peer communications

Info

Publication number
US20100085979A1
Authority
US
United States
Prior art keywords
routing
routing tree
data
tree
receivers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/247,431
Other versions
US7738406B2
Inventor
Shao Liu
Sudipta Sengupta
Mung Chiang
Jin Li
Philip Andrew Chou
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp
Priority to US12/247,431
Assigned to MICROSOFT CORPORATION. Assignors: LIU, SHAO; CHOU, PHILIP ANDREW; LI, JIN; CHIANG, MUNG; SENGUPTA, SUDIPTA
Publication of US20100085979A1
Application granted
Publication of US7738406B2
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC. Assignor: MICROSOFT CORPORATION
Legal status: Expired - Fee Related (adjusted expiration)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00: Routing or path finding of packets in data switching networks
    • H04L 45/48: Routing tree calculation

Definitions

  • the peer-to-peer communications model involves a set of nodes (such as computers or devices) connected over a network that cooperate in the distribution of one or more data streams.
  • This model differs from more conventional communications models by permitting a node that is receiving a portion of the data stream to redistribute it to other nodes, thereby reducing the communications strain on the source of the data stream and amplifying the availability and potential throughput of the source stream to other nodes.
  • a peer-to-peer communications session may be established over networks and among clients that have many properties relevant to the throughput of the network, such as the uploading and downloading capacity of the nodes; an uploading communications cap that limits the number of outbound connections that may be established by a node; and the full or partial interconnectedness of the nodes (i.e., whether any node is able to reach the full set of nodes or only a neighboring subset thereof.)
  • the number of data streams generated and shared among the nodes may also be relevant; e.g., a single data stream may be shared by a single source and redistributed among the nodes (such as in an IP television scenario), or a plurality of data streams may be shared by multiple sources to be redistributed to the nodes (such as in a conferencing scenario.)
  • some nodes may cooperate as helpers by optionally participating in the session, such that the helper does not consume the data stream (and may be excluded from a portion of the data stream) but may be utilized for redistributing a portion of the data stream to the receivers.
  • a theoretically achievable throughput may exist.
  • a video stream shared by a source in an IPTV scenario may be bitrate-adjustable, such that a higher bitrate results in better-quality video but also a greater size of the data stream.
  • the achievable throughput to the nodes of the network may vary.
  • a high theoretical throughput may be achieved by endeavoring to consume the entire network capacity of all nodes in the redistribution of the one or more data streams.
  • the consumption of upload capacity is a function of the selection of routing trees through the node.
  • the theoretically achievable throughput of the data streams is an aggregate function of the selection of routing trees, and the selection of a routing tree set in furtherance of consuming a greater share of the capacities of the nodes and of the connections may promote an increase in the theoretical throughput of the session.
  • Techniques may be devised for modeling a network in a manner that facilitates the selection of a high-throughput routing pattern for distributing one or more data streams to the nodes of the peer-to-peer communications session.
  • Such techniques may involve modeling the set of potential routing trees for the communications stream according to a linear programming model, wherein the theoretically achievable throughput of the network may be calculated as a sum of the uploading throughputs of the nodes, which may in turn be adjusted through the apportionment of the data stream among the set of available routing trees.
  • a primal model may be devised that permits the calculation of a theoretical throughput that may be desirably increased; alternatively, a linear programming dual model may be devised that permits the calculation of a theoretical networking cost that may be desirably reduced to expand the utilization of networking resources.
  • FIG. 1 is an illustration of a routing tree set comprising a set of routing trees whereby a data stream may be transmitted from a source to a set of receivers.
  • FIG. 2 is an illustration of routing trees involving a helper that may be optionally included in the routing of the routing tree.
  • FIG. 3 is a flow chart illustrating an exemplary method of transmitting a data stream among a node set comprising a source of the data stream and a set of receivers over a routing tree set.
  • FIG. 4 is a pseudocode block illustrating an exemplary iterative process for applying a linear programming dual model to select low-cost routing trees for a single-data-source communications session.
  • FIG. 5 is a pseudocode block illustrating a technique for selecting low-cost routing trees from a routing tree set in a peer-to-peer communications network.
  • FIG. 6 is a pseudocode block illustrating another technique for selecting low-cost routing trees from a routing tree set in a peer-to-peer communications network.
  • FIG. 7 is a pseudocode block illustrating yet another technique for selecting low-cost routing trees from a routing tree set in a peer-to-peer communications network.
  • FIG. 8 is a pseudocode block illustrating yet another technique for selecting low-cost routing trees from a routing tree set in a peer-to-peer communications network.
  • FIG. 9 is a flow chart illustrating an exemplary method of transmitting a plurality of data streams among a node set comprising a source of a respective data stream and a set of receivers over a routing tree set.
  • FIG. 10 is a pseudocode block illustrating an exemplary iterative process for applying a linear programming dual model to select low-cost routing trees for a multiple-data-source communications session.
  • FIG. 11 illustrates an exemplary computing environment wherein one or more of the provisions set forth herein may be implemented.
  • Peer-to-peer communication sessions involve a transmission of a data stream among a set of nodes (e.g., computers or devices) over a communications network.
  • a peer-to-peer communications session enables receivers to retransmit a portion of the data stream to other receivers.
  • the peer-to-peer communications model improves both the availability and the delivery of the resource to other receivers while alleviating some of the communications burden on the source.
  • a data stream may be reliably deliverable over the network to the receivers at a particular data rate, such as a bitrate of a video stream. It may be desirable to configure the peer-to-peer session to increase the data rate of the stream; e.g., a network of a higher sustainable capacity may be able to deliver a higher-quality video stream than a network of a lower sustainable capacity. Consequently, it may be desirable to organize the peer-to-peer communication to approach, if not meet, a theoretically achievable throughput limit of the network.
  • One such technique involves dividing the data stream into data substreams having a lower data rate, each of which may be sent over a different routing tree. By choosing a variety of routing trees covering the node set, each handling a particular data substream of a comparatively small data rate, the delivery of the data substreams may be adjusted to enhance the achievable throughput of the aggregated data stream.
  • the communications session may involve the transmission of one data stream by one source (such as in an IP television scenario, where a video stream is to be broadly delivered) or a plurality of data streams shared by a plurality of sources (such as in a media conferencing scenario, where several sources wish to share a media stream of a participant with the other participants in the conference.)
  • the capabilities of the nodes and the network may vary among scenarios. In many cases, the rate-limiting factor of a node is the upload capacity, which is typically much smaller than the download capacity of the node.
  • download capacity may also factor into the allocation and selection of routing paths (e.g., a particular node may be unable to receive a large number of streams at a high bitrate.)
  • Connection capacity may also be a factor; e.g., a sending node and a receiving node may be ready to exchange information, but the capacity of the network connection between the sending node and the receiving node may be a limiting factor.
  • Other relevant parameters may also vary, such as the size of the network (e.g., a small number of sources and receivers or a large number), the interconnectedness of the nodes (e.g., each node may be able to reach all other nodes in the node set, or may be limited to a subset of neighboring nodes), and the outbound connections cap of the nodes (e.g., a node may or may not be limited in the number of receivers to which the node may concurrently send data.)
  • the network may or may not also be supported by one or more helpers, which are not necessarily included in the full data stream but may offer upload capacity that may be utilized to retransmit a portion of the data stream to receivers. Due to the range of these variables, a peer-to-peer organization that enables a high data rate in one scenario may be limited to a lower achievable data rate in another scenario.
  • FIG. 1 illustrates an exemplary scenario 10 involving a delivery of a data stream 12 to a node set 14 comprising a source 16 and four receivers 18 that may redistribute the data stream 12 to the other receivers 18 .
  • a routing tree set 20 may be devised of routing trees 22 specifying an ordering of the delivery of the data stream 12 (or a portion thereof) among the nodes.
  • a source 16 participating in the peer-to-peer session may therefore consume a nontrivial amount of computing resources simply while evaluating this range of options and selecting routing trees 22 of the routing tree set 20 over which the data substreams comprising the data stream 12 may be delivered.
  • FIG. 2 illustrates two valid routing trees and one invalid routing tree pertaining to a node set 14 featuring a helper 32 .
  • the source 16 transmits the data stream 12 to two receivers 18 and to the helper 32 , which retransmits the data stream 12 to the remaining two receivers 18 .
  • the source 16 delivers the data stream 12 to all receivers 18 but not to the helper 32 ; this is an acceptable routing tree 22 because the helper 32 does not consume the data stream 12 . Moreover, transmitting the data stream 12 to the helper 32 in this second routing tree 22 would be inefficient, because there is no receiver 18 to which the helper 32 may retransmit the data stream 12 .
  • the third routing tree illustrates this invalid routing, wherein a helper 32 is delivered a portion of the data stream 12 but does not retransmit the data stream to any receiver 18 .
  • the helper represents a “leaf helper” 34 that exists as a child node in the routing tree 22 having no receivers 18 as child nodes, creating an inefficiency.
  • helper 32 in the first routing tree is a “non-leaf” helper, as it transmits to two receivers 18 .
  • peer-to-peer networks may optionally include one or more helpers 32 to facilitate the redistribution of the data stream 12 , but the routing of the data stream 12 desirably avoids the inclusion of leaf helpers 34 .
  • techniques may be developed to facilitate the organization of the peer-to-peer communications session in a manner that enables a data rate throughput that approaches, and in certain cases equals, a theoretically achievable data rate limit.
  • Some of these techniques involve a modeling of the peer-to-peer network in a particular manner that promotes an evaluation of the options and a calculation of the achievable throughput of a particular configuration. If the network may be represented in this manner, an automated calculation and comparison of various configurations may be performed to consider the alternatives and to choose a well-performing configuration, and to allocate data substreams of the data stream for transmission to the receiver nodes in a reliable and sustainable manner.
  • the complexities and range of variables may hamper the development of a model that applies well in many scenarios.
  • Many of these techniques rely on a linear programming model, wherein the data rates of data substreams allocated to particular routing trees may be adjusted to achieve an advantageous theoretical throughput.
  • the linear programming model may be devised as a primal model, yielding a calculable throughput rate that may be desirably increased by adjusting the parameters of the primal model representing the allocation of data rates to various routing trees.
  • the linear programming model may be devised as a linear programming dual model, yielding a calculable network cost representing the inefficient allocation of resources (such as upload capacity) that may be advantageously reduced to improve the full utilization of network resources.
  • a routing tree that includes a node that is already retransmitting other data substreams may be represented as having a higher network cost than a routing tree that includes only nodes having greater available upload capacity, which may be able to handle a greater data rate of the data substream.
  • the reduction of the network cost may therefore represent a more efficient organization of network resources that enables a higher throughput of the data streams in the peer-to-peer communications session.
  • Those familiar with linear programming models may appreciate the use of such models in the holistic evaluation and automated configuration of the peer-to-peer communications session.
  • FIG. 3 presents a first technique for achieving the configuration of the peer-to-peer communications session having a single data stream 12 shared by a single source 16 .
  • This scenario may pertain, e.g., to an IP television arrangement, wherein a source of a video stream transmits the stream to a (potentially large) set of receivers 18 that share the network burden by redistributing portions of the video stream.
  • the first technique is illustrated in FIG. 3 as an exemplary method 40 of transmitting a data stream 12 among a node set 14 comprising a source 16 of the data stream 12 and a set of receivers 18 over a routing tree set 20 , where respective routing trees 22 specify a route of the data stream 12 among the source 16 and the receivers 18 .
  • the exemplary method 40 begins at 42 and involves representing 44 the routing tree set 20 as a primal model allocating a routing tree data rate of the data stream 12 for respective routing trees 22 .
  • the exemplary method 40 also endeavors to restrict the routing tree data rate within an upload capacity of respective senders in the route of the routing tree 22 (a “sender” comprising any node that delivers a data substream to another node.)
  • the exemplary method 40 also involves selecting 46 routing tree data rates for the respective routing trees 22 that increase an aggregated data rate according to the primal model.
  • the exemplary method 40 also involves apportioning 48 data substreams of the data stream 12 for the respective routing trees 22 according to the routing tree data rate of the routing tree 22 , and transmitting 50 the data substreams over the respective routing trees 22 at the respective routing tree data rates. Having determined the routing of the data stream 12 according to the upload capacities of the nodes of the node set 14 with the assistance of the primal model, the exemplary method 40 thereby achieves the delivery of the data stream 12 to the nodes of the node set 14 at a comparatively high throughput, and so ends at 52 .
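  • from the notation below, the primal model may be reconstructed as a linear program of the following form (a reconstruction from the surrounding definitions, not the patent's verbatim formula):

$$\begin{aligned}
\text{maximize} \quad & r = \sum_{t \in T} y_t \\
\text{subject to} \quad & \sum_{t \in T} m_{v,t}\, y_t \le C(v) \quad \forall\, v \in V, \\
& y_t \ge 0 \quad \forall\, t \in T
\end{aligned}$$

wherein: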
  • V represents the set of receivers and the source 16 ;
  • v represents a sender in a routing tree 22 ;
  • C( ⁇ ) represents the upload capacity of sender ⁇
  • T represents the routing tree set 20 ;
  • t represents a routing tree 22 ;
  • y t represents the routing tree data rate of routing tree t
  • m ⁇ ,t represents the number of receivers to which sender ⁇ transmits the data substream in routing tree t
  • r represents the aggregated data rate of the data stream 12 transmitted over the routing trees 22 .
  • This primal model mathematical formula therefore suggests calculating the throughput of the data stream 12 as the sum of the data rates of the data substreams transmitted over the routing trees 22 of the routing tree set 20 , subject to the upload capacity constraints of the senders. It may be appreciated that this model considers the allocation of data substreams across the entire routing tree set 20 , and that an advantageously high result may involve the selection of a very large number of routing trees 22 , resulting in the segmentation of the data stream 12 into a large number of data substreams, some or all of which may potentially have a small routing tree data rate. An evaluation of the network using this primal model may therefore strive to expand the consumption of upload capacity of the senders in the node set 14 in furtherance of enabling an advantageously high sustainable bit rate of the data stream 12 .
  • the primal model is represented as a linear programming dual model that associates a routing price with respective senders in the route of a routing tree 22 .
  • the routing price in turn represents a per-unit flow cost of the data substream through a particular sender. For example, if the sender has plentiful upload capacity, the per-unit flow cost through the sender may be indicated as a low cost, but if the sender is already uploading a substantial aggregate data rate for other data substreams, the routing tree 22 may have a high cost.
  • the selecting 46 may comprise selecting routing capacities that reduce the routing prices of the senders of the routing trees 22 in the routing tree set 20 .
  • the linear programming dual model may therefore facilitate an evaluation of the efficiency of a particular peer-to-peer organization, rather than simply calculating the aggregate throughput without reference to a theoretical throughput limit.
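  • reconstructed from the notation below (again a reconstruction, not the patent's verbatim formula), the corresponding linear programming dual model may take the form:

$$\begin{aligned}
\text{minimize} \quad & \sum_{v \in V} C(v)\, p_v \\
\text{subject to} \quad & \sum_{v \in V} m_{v,t}\, p_v \ge 1 \quad \forall\, t \in T, \\
& p_v \ge 0 \quad \forall\, v \in V
\end{aligned}$$

wherein: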
  • V represents the set of receivers and the source 16 ;
  • v represents a sender in a routing tree 22 ;
  • C( ⁇ ) represents the upload capacity of sender ⁇
  • T represents the routing tree set 20 ;
  • t represents a routing tree 22 ;
  • m ⁇ ,t represents the number of receivers to which sender ⁇ transmits the data substream in routing tree t
  • p ⁇ represents the per-unit flow cost of sender ⁇ .
  • This linear programming dual model mathematical formula therefore suggests calculating the achievable throughput of the data stream 12 indirectly, as the minimized sum of the costs at which the uploading capacities of the senders in the node set 14 are consumed (by strong duality, the dual optimum coincides with the primal throughput.)
  • as with the primal model, this model considers the allocation of data substreams across the entire routing tree set 20 , and an advantageously high result may involve the selection of a very large number of routing trees 22 , resulting in the segmentation of the data stream 12 into a large number of data substreams, some or all of which may potentially have a small routing tree data rate.
  • the linear programming models presented herein for the single-data-stream scenario therefore involve an evaluation of the relative capacities of the routing trees 22 and an efficient allocation of the data rate of the data stream 12 among various data substreams to permit a comparatively high sustainable data rate of the data stream 12 .
  • it may be difficult to evaluate all of the routing trees 22 in the routing tree set 20 due to the number of routing trees 22 that are available for the network (as illustrated in FIG. 1 .)
  • a brute-force concurrent or consecutive evaluation of all routing trees 22 may be prohibitively resource-intensive, especially in the case of a large network where the potential routing trees 22 may number billions or more.
  • the selecting 46 may be performed by iteratively selecting the routing capacities of particular routing trees 22 for transmitting particular data substreams.
  • the selecting may involve iteratively selecting the routing capacities of the routing trees by selecting a low-cost routing tree 22 for an iteration and allocating the routing tree data rate for the low-cost routing tree 22 based on the residual upload capacities of the senders in the routing tree 22 .
  • the iteration may also involve calculating the residual upload capacities of the senders in the routing tree 22 (for use in assessing the costs of various routing trees 22 in future iterations.)
  • the iterative selecting of low-cost routing trees 22 may continue until the per-unit flow cost of the routing trees 22 is at least one. In this and other calculations, the per-unit flow cost of sender ⁇ may be calculated according to the mathematical formula:
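  • one plausible form for this calculation, consistent with the definitions below, is the standard multiplicative-weights update (a reconstruction, not the patent's verbatim formula; t_i denotes the routing tree selected at iteration i):

$$p_i(v) = p_{i-1}(v)\left(1 + \varepsilon \cdot \frac{m_{v,t_i}\, y_{t_i}}{C(v)}\right)$$

wherein: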
  • i represents an iteration of the selecting;
  • p i ( ⁇ ) represents the per-unit flow cost of sender ⁇ during iteration i;
  • ε represents an optimality constraint.
  • the optimality constraint may represent a proximity of the selected routing tree set 20 and data substreams allocated thereacross to a theoretical limit.
  • a higher optimality constraint may result in an improved selection of routing trees 22 and a higher data rate for the data stream 12 , but at a cost of additional computing time to achieve the selection.
  • the data stream 12 may be iteratively apportioned to a low-cost routing tree 22 , and the upload capacities of the senders of the selected routing tree 22 may be recalculated to indicate a lower upload capacity (i.e., a higher network cost) for future iterations, until the data stream 12 has been apportioned to low-cost routing trees 22 .
  • FIG. 4 illustrates an exemplary pseudocode block 60 that embodies such iterative selecting. It may be appreciated that the pseudocode block 60 is presented in a pseudocode language that may not conform to the syntactic and logical constraints of any particular programming or mathematical language. Rather, this pseudocode block 60 (and those presented and discussed elsewhere herein) is presented to illustrate a sequence of logical concepts that cooperatively achieve the data stream routing techniques discussed herein.
  • some parameters are first initialized (such as the calculated flow through a particular node v and the total allocated data rate of the data stream, Y).
  • the exemplary pseudocode block 60 then iteratively selects a routing tree 22 from the routing tree set 20 that has an advantageously low per-unit flow cost (as determined by the upload capacities of the nodes in the respective routing tree 22 .)
  • the selected routing tree 22 is assigned a flow rate based on the achievable data rate of the nodes comprising the routing tree 22 (i.e., the data rate that fully utilizes the upload capacity of the lowest-capacity sending node in the routing tree 22 .)
  • the available capacities of the sending nodes in the routing tree 22 are then reduced by the data rate of the substream, and a next iteration is performed, etc., until further apportionment of the data stream 12 among routing trees 22 may not be advantageous (e.g., when the cost of including a routing tree 22 does not offset the infrastructure costs of using the routing tree 22 .)
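  • as a concrete illustration of this iterative process, the following Python sketch implements a selection loop of the kind the pseudocode block 60 describes; the oracle find_low_cost_tree, the parameter names, and the initialization are illustrative assumptions, not the patented pseudocode itself:

```python
# A minimal sketch of the iterative low-cost routing tree selection:
# repeatedly pick a low-cost tree, saturate its bottleneck sender, and
# raise the prices of the senders it uses.

def select_routing_trees(capacity, find_low_cost_tree, epsilon=0.1):
    """capacity: dict mapping each node v to its upload capacity C(v).
    find_low_cost_tree(price): oracle returning m, a dict mapping each
    sender v of a low-cost routing tree to m_{v,t} (its receiver count).
    """
    n = len(capacity)
    delta = (1 + epsilon) / ((1 + epsilon) * n) ** (1.0 / epsilon)
    price = {v: delta / capacity[v] for v in capacity}   # p_0(v)
    selections = []                                      # (m, y_t) pairs

    while True:
        m = find_low_cost_tree(price)
        if sum(m[v] * price[v] for v in m) >= 1:         # tree cost >= 1:
            break                                        # stop iterating
        rate = min(capacity[v] / m[v] for v in m)        # bottleneck rate
        selections.append((m, rate))
        for v in m:                                      # price update
            price[v] *= 1 + epsilon * m[v] * rate / capacity[v]

    # Scale by L = min_v C(v)/U(v) so no sender exceeds its capacity.
    usage = {v: 0.0 for v in capacity}
    for m, rate in selections:
        for v in m:
            usage[v] += m[v] * rate
    L = min((capacity[v] / u for v, u in usage.items() if u > 0), default=1.0)
    return [(m, rate * min(L, 1.0)) for m, rate in selections]
```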
  • a linear programming dual model for selection may rely too heavily on an allocation of data substreams through a particular sender that may exceed the upload capacity of the sender, and may result in a potentially unsustainable allocation.
  • the upload capacity of a sender may be reduced, thereby impairing the previously sustainable data rate of the data substreams routed through the sender.
  • to promote a sustainable allocation, the computed routing tree data rates may be scaled down by L, a scaling factor. Although the scaling reduces the overall data rate, the reduction promotes the sustainability of the data rate of the data stream 12 through the node set 14 and the preservation of the quality of the data stream 12 .
  • L may be computed according to the mathematical formula:
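  • a natural form for this factor, consistent with the definitions below and with its purpose of restoring feasibility, is (a reconstruction, taking the minimum over senders with a nonzero aggregate upload rate):

$$L = \min_{v \in V} \frac{C(v)}{U(v)}$$

wherein: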
  • C( ⁇ ) represents the upload capacity of sender ⁇
  • U( ⁇ ) represents an aggregate upload rate of sender ⁇ among all routing trees.
  • This scaling factor may be applied to the calculated routing tree data rates of the selected routing trees 22 .
  • This scaling may also be factored in during the iterative processing by including it as an element of the networking cost.
  • the per-unit flow cost of sender ⁇ during a first iteration of the selecting of routing trees 22 may be calculated according to the mathematical formula:
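  • in the standard primal-dual initialization on which such iterative processes are commonly based, this first-iteration cost takes the following form (a reconstruction, not the patent's verbatim formula):

$$p_1(v) = \frac{\delta}{C(v)}, \qquad \delta = (1+\varepsilon)\,\bigl((1+\varepsilon)\,|V|\bigr)^{-1/\varepsilon}$$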
  • This value may be utilized for the initial constant ( ⁇ ) comprising the initial value of the per-unit flow cost.
  • This factor is then applied to the computed routing tree data rate of the first iteration, and is propagated through subsequent iterations as a progressively adjusted per-unit flow cost.
  • low-cost routing tree identification techniques may be devised and utilized within the dual models. Such low-cost routing tree identification may be attuned to the details of the network, such as the full or partial interconnectedness of the network, the capping or uncapping of the upload connections of each sender, and the presence or absence of helpers 32 in the node set 14 . The selection of an appropriate low-cost routing tree identification technique may therefore yield suitably low-cost routing trees for any particular session or iteration.
  • a first low-cost routing tree identification technique may be applied to dual models where the node set 14 comprises a source 16 , receivers 18 , and zero or more helpers 32 , all of which are capable of sending to the respective receivers 18 and respective helpers 32 (such as a fully interconnected network) without an upload connections cap.
  • where helpers 32 are included, the networking cost for helpers 32 may be slightly increased to represent the additional complexity of routing through a helper 32 that is not a consumer of the data stream 12 .
  • a helper 32 will only be selected in a routing tree if the routing cost is otherwise lower than for a receiver 18 . This difference may be represented by calculating the routing price as an effective routing price according to the mathematical formula:
$$\hat{p}(v) \leftarrow \begin{cases} p(v) & \text{if } v \in \{s\} \cup R \\ p(v) \cdot \dfrac{|R|}{|R| - 1} & \text{if } v \in H \end{cases}$$
  • p̂(v) represents the effective routing price of node v ;
  • s represents the source 16 ;
  • R represents the set of receivers 18 ; and
  • H represents the set of helpers 32 .
  • the selecting of a low-cost routing tree 22 may therefore be computed as a one- or two-connection routing tree: either the data substream of the routing tree 22 may be delivered directly from the source 16 to the receivers 18 , or may be delivered from the source 16 to a receiver 18 or helper 32 that retransmits the data substream to the remaining receivers 18 .
  • the routing tree data rate for the generated routing tree 22 is calculated as the maximum data rate permitted by the upload capacities of the selected node and the source, or a scaled portion thereof. This manner of selecting 46 low-cost routing trees 22 may be performed during each iteration, and the subsequent iteration may account for the depletion of upload capacity due to the selected low-cost routing trees 22 identified in preceding iterations.
  • FIG. 5 illustrates a pseudocode block 70 embodying this manner of selecting 46 low-cost routing trees 22 .
  • the selecting 46 of a low-cost routing tree 22 may involve selecting a sender v having a low effective routing price among the senders of the node set according to this logic: if v is the source 16 , generate a routing tree 22 routing a data substream from the source 16 to the receivers 18 ; if v is a receiver 18 , generate a routing tree 22 routing a data substream from the source 16 to v and from v to the receivers 18 except v ; and if v is a helper 32 , generate a routing tree 22 routing a data substream from the source 16 to v and from v to the receivers 18 .
  • the generated routing tree 22 may then be added to a set of selected routing trees 22 over which respective data substreams are to be delivered at designated routing tree data rates.
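  • as an illustration of this selection logic, the following Python sketch chooses among the one- and two-connection routing trees described above; the function names and data structures are illustrative assumptions rather than the pseudocode of FIG. 5 :

```python
# Sketch of the first low-cost routing tree identification technique:
# fully interconnected network, no upload connections cap.

def effective_price(v, price, receivers, helpers):
    """Effective routing price p_hat(v); helpers are slightly penalized
    because a helper uploads to all |R| receivers rather than |R| - 1."""
    if v in helpers:
        return price[v] * len(receivers) / (len(receivers) - 1)
    return price[v]  # v is the source or a receiver

def find_low_cost_tree(price, source, receivers, helpers):
    """Return m, mapping each sender of the chosen routing tree to
    m_{v,t}, the number of receivers to which it uploads."""
    candidates = [source] + list(receivers) + list(helpers)
    v = min(candidates,
            key=lambda u: effective_price(u, price, receivers, helpers))
    if v == source:
        return {source: len(receivers)}            # source -> all receivers
    if v in receivers:
        return {source: 1, v: len(receivers) - 1}  # source -> v -> the rest
    return {source: 1, v: len(receivers)}          # source -> helper -> all
```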
  • Another low-cost routing tree identification technique may be applied to a network comprising a node set 14 including a source 16 of a data stream 12 and receivers 18 (but having no helpers 32 ), where all senders are capable of sending to the respective receivers 18 (i.e., in a fully interconnected network), but with an upload connections cap limiting the number of concurrent upload connections.
  • the senders may be capped (either internally or by the network) to establish no more than a designated number of concurrent upload connections.
  • the exemplary pseudocode block 70 of FIG. 5 may not be applicable, because the generated routing trees may specify too many upload connections from a particular sender.
  • a second low-cost routing tree identification technique may be devised.
  • a routing tree 22 may be generated by selecting a sender ⁇ having a low routing price among the senders of the node set 14 and generating a routing tree 22 that routes a data substream from the source 16 to ⁇ .
  • the routing tree 22 may then be recursively extended through the receivers 18 of the node set 14 according to a low-cost node identification. For example, after generating the first connection in the routing tree 22 from the source 16 to a first sender, the recursive extending may involve selecting a sender ⁇ having a low routing price among the receivers included in the routing tree, and having at least one fewer upload connection as compared with the upload connections cap.
  • the generating may also involve selecting a receiver v* having a low routing price among the receivers not yet included in the routing tree (i.e., with which the sender v may efficiently communicate over the network.)
  • the receiver v* may then be added to the routing by extending the routing tree 22 to route the data substream from sender v to receiver v* . This recursive selecting may continue until the routing tree 22 includes the set of receivers 18 .
  • the routing tree 22 may then be added to the set of routing trees 22 over which data substreams are to be transmitted, and the next iteration may involve selecting a new low-cost routing tree 22 taking into account the upload capacity allocations and upload connections allocated for respective nodes in previous iterations.
  • FIG. 6 illustrates a pseudocode block 80 embodying this second low-cost routing tree identification technique, wherein:
  • A represents the set of senders (both the source 16 and receivers 18 ) that are already in a routing tree 22 being recursively generated;
  • B represents the set of receivers 18 that are not yet included in the routing tree 22 ;
  • a third set represents the subset of receivers in set A that have at least one fewer allocated upload connection as compared with the upload connection cap.
  • pseudocode block 80 of FIG. 6 is not the only embodiment of this second low-cost routing tree identifying technique, and that those of ordinary skill in the art may devise other embodiments of this technique.
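  • for instance, one such embodiment may be sketched in Python as follows; the dictionaries and the treatment of the source as an ordinary capped sender are illustrative assumptions rather than the pseudocode of FIG. 6 :

```python
# Sketch of the second technique: recursively grow a routing tree over
# a fully interconnected, helper-free node set under an upload
# connections cap. Assumes the capped senders can always cover the
# remaining receivers.

def build_capped_tree(price, source, receivers, cap):
    """Return parent, mapping each receiver to its sender in the tree."""
    first = min(receivers, key=lambda u: price[u])  # cheapest first hop
    parent = {first: source}
    uploads = {source: 1, first: 0}        # connection counts (set A)
    remaining = set(receivers) - {first}   # receivers not yet added (set B)
    while remaining:
        # Cheapest sender already in the tree with a spare connection.
        v = min((u for u in uploads if uploads[u] < cap),
                key=lambda u: price[u])
        # Cheapest receiver not yet included in the tree.
        w = min(remaining, key=lambda u: price[u])
        parent[w] = v                      # extend the tree: v -> w
        uploads[v] += 1
        uploads[w] = 0
        remaining.remove(w)
    return parent
```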
  • a type of network to which these techniques may be applied involves a fully-interconnected network comprising the source 16 , the receivers 18 , and zero or more helpers 32 , and also includes an upload connections cap applied to at least some of the nodes of the node set 14 . It may be appreciated that this type of network may not be adequately serviced by either the first low-cost routing tree identifying technique (as it does not account for the upload connections cap) or the second low-cost routing tree identifying technique (as it does not anticipate the inclusion of helpers 32 , and therefore may inefficiently route portions of the data stream 12 to leaf helpers 34 .)
  • a third low-cost routing tree identifying technique may be devised for networks involving a node set 14 comprising zero or more helpers 32 alongside the source 16 and the receivers 18 , and where the source 16 , receivers 18 , and helpers 32 are capable of sending to the respective receivers 18 and respective helpers 32 , subject to an upload connections cap.
  • the selecting 46 may again involve selecting a sender ⁇ having a low routing price among the senders and helpers of the node set 14 , generating a routing tree 22 routing a data substream from the source 16 to ⁇ , and recursively extending the routing tree 22 to the other receivers 18 .
  • the recursive extending may involve selecting a sender v* having a low routing price among the senders included in the routing tree and having at least one fewer upload connection as compared with the upload connections cap, and extending the routing tree 22 to route the data substream from v* to respective low-priced nodes not yet included in the routing tree 22 until the upload connections of v* equal the upload connections cap.
  • This recursive extending may continue until the routing tree 22 includes the set of receivers 18 , and the generated routing tree 22 may then be added to the set of routing trees over which respective data substreams are to be transmitted.
  • while this third low-cost routing tree identifying technique may generate adequately low-cost routing trees, it may also involve selecting routing trees 22 that include leaf helpers 34 .
  • An improvement of this third technique involves removing leaf helpers 34 in a manner that further reduces the network cost of the selected routing tree 22 .
  • At least one leaf helper 34 may be removed by first identifying a leaf helper 34 in the routing tree (e.g., by iterating over the set of helpers 32 and identifying whether any helper 32 is included in the routing tree 22 but has no children.) Upon such identifying, the improved third technique may remove the leaf helper 34 by selecting a high-cost node in the routing tree having at least one upload connection (i.e., having at least one child node), removing the leaf helper 34 from the routing tree 22 , and transferring an upload connection from the high-cost node to the sender of the leaf helper 34 in the routing tree 22 .
  • the now-available outbound connection may be utilized to reduce the burden of sending to at least one receiver through the high-cost node.
  • if the high-cost node is also a helper 32 , the high-cost node might also be removed by attempting to transfer the receivers of the high-cost node to other nodes. This may be achieved by identifying at least one low-cost node having at least one fewer upload connection as compared with the upload connections cap, transferring the receivers 18 of the high-cost node to the at least one low-cost node, and removing the high-cost node from the routing tree 22 .
  • this improved third technique thereby improves the routing tree 22 generated by the basic third technique by removing leaf helpers 34 while concurrently transferring connections from high-cost senders to low-cost senders.
  • FIG. 7 provides a pseudocode block 90 embodying the improved third low-cost routing tree identifying technique, utilizing the same notation used for the other mathematical formulae and pseudocode blocks discussed herein.
  • This pseudocode block 90 first recursively generates the routing tree 22 according to the basic third technique (e.g., by recursively selecting low-cost senders in the routing tree 22 , and adding them to the routing tree 22 with as many not-yet-included receivers 18 and helpers 32 as permitted by the upload connections cap), and then removes leaf helpers 34 by transferring such connections to the sender of the leaf helper 34 and to other low-cost senders.
  • the pseudocode block 90 of FIG. 7 represents only one embodiment of such techniques, and those of ordinary skill in the art may devise other embodiments of the basic and improved third low-cost routing tree identifying techniques.
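  • one such embodiment of the leaf-helper removal step may be sketched as follows; the parent/children representation and all names are illustrative assumptions rather than the pseudocode of FIG. 7 :

```python
# Sketch of the leaf-helper removal step of the improved third
# technique. `parent` maps each non-source node to its sender.

def remove_leaf_helpers(parent, price, helpers):
    """Drop leaf helpers and reuse each freed connection by moving one
    receiver away from the highest-priced uploading node."""
    children = {}
    for node, snd in parent.items():
        children.setdefault(snd, []).append(node)
    for h in helpers:
        if h in parent and not children.get(h):      # h is a leaf helper
            sender = parent.pop(h)                   # remove h from tree
            children[sender].remove(h)
            # Highest-priced node still uploading to at least one child.
            hi = max((u for u in children if children[u] and u != sender),
                     key=lambda u: price[u], default=None)
            if hi is not None:
                # Move one of hi's receivers onto the freed connection.
                moved = children[hi].pop()
                parent[moved] = sender
                children[sender].append(moved)
    return parent
```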
  • Still another type of network to which these techniques may be applied involves a partially interconnected network, wherein at least one sender in the node set 14 may be unable to connect to at least one receiver 18 in the node set 14 .
  • the Internet may represent one such network, wherein the interconnectedness of nodes in a peer-to-peer communication session may be limited (e.g.) by firewalls, geographic distances, and intermittent links that provide unacceptably high latency between two nodes.
  • the peer-to-peer communication session may still be achieved by routing the data stream from such a sender to such a recipient through a third node that is accessible to both the sender and the receiver.
  • the first three low-cost routing tree identifying techniques may be inapplicable to such networks due to the basic presumption of such techniques that the node set 14 is fully interconnected.
  • a fourth low-cost routing tree identifying technique may be devised that takes into account the connectedness of any particular node with a subset of nodes (“neighboring nodes”) in the node set 14 .
  • This technique may be applied where the network comprises zero or more helpers 32 alongside the source 16 and the receivers 18 , and where the source 16 , receivers 18 , and helpers 32 are capable of sending to a neighbor set of respective receivers 18 and respective helpers 32 .
  • the routing trees 22 identified thereby respect the neighboring node limitations of the nodes and only include routings of a node to and from its neighboring nodes.
  • selecting the low-cost routing tree comprises selecting a sender v having a low routing price among the senders and helpers 32 of the node set 14 ; generating a routing tree 22 routing a data substream from the source 16 to v ; and recursively extending the routing tree 22 .
  • the extending involves selecting a sender v* having a low routing price among the receivers 18 included in the routing tree 22 , and extending the routing tree 22 to route the data substream from v* to nodes in the neighbor set of v* that are not yet included in the routing tree 22 . This extending may continue until the routing tree includes the set of receivers 18 .
  • the fourth low-cost routing tree identifying technique may be improved by endeavoring to remove particular helpers 32 while further reducing the cost of the selected routing tree 22 .
  • This improved technique may include removing leaf helpers 34 , which may be performed in a similar manner as in the improved third technique (while also respecting the limitation that an upload connection may be transferred from a first sender to a second sender only if the receiver 18 of this upload connection is in the neighbor node set of the second sender.)
  • An additional improvement may involve a reevaluation of the included non-leaf helpers 32 to determine whether a more efficient routing may be achieved by excluding the helper 32 .
  • the improved fourth technique determines whether the upload connections (i.e., the receivers) of the non-leaf helper 32 have at least one neighbor node other than the non-leaf helper 32 . If all of the receivers have a neighbor node other than the non-leaf helper 32 , the non-leaf helper is a candidate for removal. The improved fourth technique may therefore iteratively transfer the upload connections of the non-leaf helper 32 to respective neighbor nodes that have a lower routing cost than the non-leaf helper 32 , thereby reducing the upload connections of the non-leaf helper 32 .
  • the non-leaf helper 32 may then be removed from the routing tree 22 . Conversely, if at least one upload connection may not be removed from the non-leaf helper 32 , then the non-leaf helper 32 may be retained in the routing tree 22 , since it serves to reduce the routing cost to the non-removable upload connection (as compared with alternative routing trees 22 .) This removal of non-leaf helpers 32 may be iteratively performed until no more non-leaf helpers 32 may be removed from the routing tree 22 , and the routing tree 22 may then be added to the set of low-cost routing trees 22 over which data substreams are to be transmitted.
  • FIG. 8 presents an exemplary pseudocode block 100 embodying the improved fourth low-cost routing tree identifying technique, which relies on the following notation (along with the notation previously discussed for other mathematical formulae):
  • B_u represents the neighbor node set of a node u in the routing tree 22 ;
  • u* represents a neighbor node of a node u .
  • a routing tree 22 is first generated by recursively extending the routing tree 22 from a node in the routing tree 22 to neighbor nodes, as discussed in the basic fourth technique.
  • the routing tree 22 so generated is then improved first by removing leaf helpers 34 , and then by attempting to remove non-leaf helpers 32 by first identifying whether a non-leaf helper 32 is a candidate for removal (i.e., whether all of its upload connections may be transferred to neighbor nodes), and then attempting to transfer away the upload connections to neighbor nodes having a lower routing cost. If all of the upload connections are removed, the non-leaf helper 32 is now a leaf helper 34 that is removed during a subsequent iteration of the routing tree improvement.
  • the improved routing tree 22 may then be added to the set of routing trees over which data substreams are to be transmitted.
  • the pseudocode block 100 of FIG. 8 represents only one embodiment of these fourth low-cost routing tree identifying techniques, and those of ordinary skill in the art may devise other embodiments of the basic and improved fourth low-cost routing tree identifying techniques.
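  • one such embodiment of the non-leaf helper removal step may be sketched as follows; the neighbor, parent, and upload-count structures are illustrative assumptions rather than the pseudocode of FIG. 8 :

```python
# Sketch of the non-leaf helper removal step of the improved fourth
# technique (partially interconnected network). `neighbors[u]` is the
# neighbor node set of u; `parent`/`children` describe the tree.

def try_remove_nonleaf_helper(h, parent, children, price, neighbors,
                              cap, uploads):
    """Transfer each receiver of helper h to a cheaper neighbor with a
    spare upload connection; h becomes a removable leaf if emptied."""
    for child in list(children.get(h, [])):
        options = [u for u in neighbors[child]
                   if u != h and u in uploads       # u is in the tree
                   and uploads[u] < cap             # u has a spare slot
                   and price[u] < price[h]]         # u is cheaper than h
        if not options:
            continue  # this connection cannot be transferred; h is kept
        new_sender = min(options, key=lambda u: price[u])
        children[h].remove(child)
        children.setdefault(new_sender, []).append(child)
        parent[child] = new_sender
        uploads[h] -= 1
        uploads[new_sender] += 1
    return not children.get(h)  # True if h is now a leaf helper
```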
  • The techniques discussed heretofore and illustrated in FIGS. 3 through 8 are useful for many types of networks. However, some of these techniques are effective for peer-to-peer communications sessions involving a transmission of a single data stream 12 from a single source 16 , such as in internet TV distribution. Other scenarios may involve a transmission of multiple data streams 12 from multiple sources 16 to the receivers of the node set 14 (and some or all of the sources 16 may also be receivers of other data streams 12 sent by other sources 16 .) Some modest modifications of these techniques may be more helpful for such multi-data-stream scenarios.
  • FIG. 9 presents one embodiment of these techniques for a multiple-data-stream and multiple-source communications network, illustrated as an exemplary method 110 of transmitting at least two data streams 12 among a node set 14 comprising at least two sources 16 of the respective data streams 12 and a set of receivers 18 over a routing tree set 20 , where respective routing trees 22 specify a route of the data stream 12 among a respective source 16 and the receivers 18 (potentially including the other sources 16 .)
  • the exemplary method 110 begins at 112 and involves representing 114 the routing tree set 20 as a primal model allocating a routing tree data rate of respective data streams 12 for respective routing trees 22 , where the routing tree data rate of the respective routing trees 22 is within an upload capacity of respective senders in the route of the routing tree 22 .
  • the exemplary method 110 also involves selecting 116 routing tree data rates for respective routing trees 22 that increase an aggregated data rate according to the primal model.
  • the exemplary method 110 also involves apportioning 118 data substreams of the respective data streams 12 for respective routing trees 22 according to the routing tree data rate of the routing tree 22 , and transmitting the data substreams of the respective data streams 12 over the respective routing trees 22 at the respective routing tree data rates.
  • having determined the routing of the data streams 12 according to the upload capacities of the nodes of the node set 14 with the assistance of the primal model, the exemplary method 110 thereby achieves the delivery of the data streams 12 to the nodes of the node set 14 at a comparatively high throughput, and so ends at 122 .
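  • reconstructed from the notation below (a reconstruction, not the patent's verbatim formula), the primal model for this multiple-data-stream session may take the form:

$$\begin{aligned}
\text{maximize} \quad & \lambda \\
\text{subject to} \quad & \sum_{t \in T_k} y_t \ge \lambda\, r_k \quad \forall\, k \in K, \\
& \sum_{k \in K} \sum_{t \in T_k} m_{v,t}\, y_t \le C(v) \quad \forall\, v \in V, \\
& y_t \ge 0
\end{aligned}$$

wherein: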
  • V represents the set of receivers 18 and a respective source 16 ;
  • v represents a sender in a routing tree 22 ;
  • k represents a data stream 12 ;
  • K represents the set of data streams 12 ;
  • T k represents the routing tree set 20 for data stream k
  • t represents a routing tree 22 ;
  • y t represents the routing tree data rate of routing tree t
  • λ represents a data stream rate multiplier;
  • r k represents the data rate of data stream k
  • m ⁇ ,t represents the number of receivers to which sender ⁇ transmits the data substream in routing tree t
  • C( ⁇ ) represents the upload capacity of sender ⁇ .
  • This primal model resembles the primal model applicable to a single-data-stream peer-to-peer communications session, but takes into account the delivery of multiple data streams 12 through respective routing tree sets 20 , such that the upload capacity of a sender of a data substream of a first data stream 12 is evaluated with respect to the upload capacity of the sender consumed by sending a data substream of a second data stream 12 (as per the constraint Σ_k Σ_{t∈T_k} m_{v,t} y_t ≤ C(v).) Moreover, this primal model is oriented such that if respective data streams 12 have a relative data rate, the primal model may be used to increase a data stream rate multiplier λ that applies proportionally to the relative data rates of all data streams 12 .
  • for example, if a first data stream 12 has a relative data rate of 1,024 Mbps and a second data stream 12 has a relative data rate of 2,048 Mbps, the primal model may be organized to permit the linear programming model to adjust the details of the routing so as to increase the data stream rate multiplier λ to a factor of 2.0, such that the first data stream 12 may be transmitted at 2,048 Mbps and the second data stream 12 may be transmitted at 4,096 Mbps.
  • the primal model is represented as a linear programming dual model that associates a routing price with respective senders in the route of a routing tree 22 .
  • a linear programming dual model for a multiple-data-stream and multiple-source peer-to-peer communications session may associate a routing price with respective senders in the route of a routing tree 22 of a data stream 12 , where the routing price represents a per-unit flow cost of the data substream through the sender.
  • the selecting 116 may comprise selecting routing capacities that reduce the routing prices of the senders of the routing trees 22 of respective data streams 12 in the routing tree set 20 .
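  • reconstructed from the notation below (a reconstruction, not the patent's verbatim formula), the corresponding linear programming dual model may take the form:

$$\begin{aligned}
\text{minimize} \quad & \sum_{v \in V} C(v)\, p_v \\
\text{subject to} \quad & \sum_{v \in V} m_{v,t}\, p_v \ge z_k \quad \forall\, k \in K,\ \forall\, t \in T_k, \\
& \sum_{k \in K} r_k\, z_k \ge 1, \qquad p_v \ge 0, \quad z_k \ge 0
\end{aligned}$$

wherein: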
  • V represents the set of receivers 18 and a source 16 of a respective data stream 12 ;
  • v represents a sender in a routing tree 22 ;
  • C( ⁇ ) represents the upload capacity of sender ⁇
  • k represents a data stream 12 from a source 16 ;
  • r k represents the data rate of data stream k
  • z k represents a data stream constraint of data stream k
  • T k represents the routing tree set for data stream k
  • m ⁇ ,t represents the number of receivers 18 to which sender ⁇ transmits the data substream in routing tree t.
  • this mathematical formula for a linear programming dual model of a multiple-data-stream, multiple-source communications network seeks to reduce the network cost of using respective nodes to send data substreams for the set of data streams 12 .
  • An evaluation of this model may result in the selection of a set of low-cost routing trees 22 for sending data substreams of the data streams 12 that, together, reduce the cost to respective nodes of the node set 14 .
  • it may be difficult to evaluate the primal model and the linear programming dual model for the multiple-data-stream communications session for all data streams 12 across all available routing trees 22 in the routing tree set 20 . Iterative processes may therefore be devised for performing the selecting 116 in an improved manner.
  • the per-unit flow cost of a sender ⁇ may be calculated according to the mathematical formula:
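  • as in the single-data-stream case, one plausible form for this calculation is the standard multiplicative-weights update (a reconstruction, not the patent's verbatim formula; t_i denotes the routing tree selected at iteration i):

$$p_i(v) = p_{i-1}(v)\left(1 + \varepsilon \cdot \frac{m_{v,t_i}\, y_{t_i}}{C(v)}\right)$$

wherein: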
  • i represents an iteration of the selecting;
  • p i ( ⁇ ) represents the per-unit flow cost of sender ⁇ during iteration i;
  • ε represents an optimality constraint.
  • the flow cost for a node is based on the flow cost of the node in a prior iteration, the uploading data rate of the node for previously allocated data substreams of data streams, and the upload capacity of the node, in addition to the optimality constraint.
  • An iterative process may utilize this per-unit flow cost calculation by selecting a low-cost routing tree 22 for an iteration and allocating the routing tree data rate of respective data streams 12 for the low-cost routing tree 22 based on residual upload capacities of the senders in the routing tree 22 . After selecting a routing tree 22 for a particular data substream, the iterative process may calculate the residual upload capacities of the senders in the routing tree 22 . The iteration may then be performed until the per-unit flow cost of the routing trees 22 is at least one, and no further data substreams of the data streams may be apportioned without exceeding the upload capacity of a node.
  • FIG. 10 presents an exemplary iterative process that applies the linear programming dual model for a multiple-data-stream, multiple-source communications session, illustrated as a pseudocode block 130 embodying this iterative process.
  • This pseudocode block 130 may resemble the pseudocode block 60 of FIG. 4 , but also takes into account the selection of multiple routing tree sets 20 for routing the data substreams of multiple data streams 12 by multiple sources 16 .
  • the data rates of the routing trees 22 may be scaled by a scaling factor, L, to promote the selection of data rates that do not exceed the upload capacity of various senders.
  • the scaling factor L may also be adjusted by a counting of phases of selection over the set of data streams 12 .
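  • a Python sketch of such a phase-based selection loop follows; the per-stream tree oracle and the capacity-based final scaling are illustrative assumptions rather than the pseudocode of FIG. 10 :

```python
# Sketch of the multiple-data-stream selection loop: each phase routes
# r_k units of every stream k over low-cost trees, raising prices as it
# goes, until the dual cost of the cheapest tree reaches one.

def select_multi_stream_trees(capacity, rates, find_low_cost_tree,
                              epsilon=0.1):
    """capacity: v -> C(v); rates: k -> relative rate r_k.
    find_low_cost_tree(k, price) -> m, mapping each sender v of a
    low-cost tree for stream k to m_{v,t}."""
    n = len(capacity)
    delta = (1 + epsilon) / ((1 + epsilon) * n) ** (1.0 / epsilon)
    price = {v: delta / capacity[v] for v in capacity}
    selections, done = [], False
    while not done:                       # one phase per outer pass
        for k, r_k in rates.items():
            remaining = r_k
            while remaining > 0:
                m = find_low_cost_tree(k, price)
                if sum(m[v] * price[v] for v in m) >= 1:
                    done = True           # dual cost reached one: stop
                    break
                rate = min(remaining, min(capacity[v] / m[v] for v in m))
                selections.append((k, m, rate))
                for v in m:               # multiplicative price update
                    price[v] *= 1 + epsilon * m[v] * rate / capacity[v]
                remaining -= rate
            if done:
                break
    # Scale by L = min_v C(v)/U(v) so no sender exceeds its capacity.
    usage = {v: 0.0 for v in capacity}
    for _, m, rate in selections:
        for v in m:
            usage[v] += m[v] * rate
    L = min((capacity[v] / u for v, u in usage.items() if u > 0), default=1.0)
    return [(k, m, rate * min(L, 1.0)) for k, m, rate in selections]
```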
  • the variations of the techniques discussed with respect to a single-data-stream communications session may be similarly applied, alone or in combination, to multiple-data-stream communications sessions.
  • the several techniques for identifying low-cost routing trees 22 among the routing tree sets 20 may be similarly selected for multiple-data-stream communication sessions in view of the other network parameters (full or partial interconnectedness, upload connection caps, and the presence or absence of helpers), and may be similarly applied during various iterations.
  • the improvements of such techniques may also be utilized, e.g., to remove leaf helpers 34 and to reallocate connections among the nodes of selected routing trees 22 to remove non-leaf helpers 32 and/or to reduce further the costs of the routing trees.
  • Those of ordinary skill in the art may devise many embodiments and improvements of the application of such primal and linear programming dual models to multiple-data-stream communications sessions while implementing the techniques discussed herein.
  • a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a controller and the controller can be a component.
  • One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
  • the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter.
  • article of manufacture as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media.
  • FIG. 11 and the following discussion provide a brief, general description of a suitable computing environment to implement embodiments of one or more of the provisions set forth herein.
  • the operating environment of FIG. 11 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment.
  • Example computing devices include, but are not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, mini computers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • Computer readable instructions may be distributed via computer readable media (discussed below).
  • Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types.
  • the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
  • FIG. 11 illustrates an example of a system 140 comprising a computing device 142 configured to implement one or more embodiments provided herein.
  • computing device 142 includes at least one processing unit 146 and memory 148 .
  • memory 148 may be volatile (such as RAM, for example), non-volatile (such as ROM, flash memory, etc., for example) or some combination of the two. This configuration is illustrated in FIG. 11 by dashed line 144 .
  • device 142 may include additional features and/or functionality.
  • device 142 may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, magnetic storage, optical storage, and the like.
  • additional storage is illustrated in FIG. 11 by storage 150 .
  • computer readable instructions to implement one or more embodiments provided herein may be in storage 150 .
  • Storage 150 may also store other computer readable instructions to implement an operating system, an application program, and the like.
  • Computer readable instructions may be loaded in memory 148 for execution by processing unit 146 , for example.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions or other data.
  • Memory 148 and storage 150 are examples of computer storage media.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by device 142 . Any such computer storage media may be part of device 142 .
  • Device 142 may also include communication connection(s) 156 that allows device 142 to communicate with other devices.
  • Communication connection(s) 156 may include, but is not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting computing device 142 to other computing devices.
  • Communication connection(s) 156 may include a wired connection or a wireless connection. Communication connection(s) 156 may transmit and/or receive communication media.
  • Computer readable media may include communication media.
  • Communication media typically embodies computer readable instructions or other data in a “modulated data signal” such as a carrier wave or other transport mechanism and includes any information delivery media.
  • modulated data signal may include a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • Device 142 may include input device(s) 154 such as keyboard, mouse, pen, voice input device, touch input device, infrared cameras, video input devices, and/or any other input device.
  • Output device(s) 152 such as one or more displays, speakers, printers, and/or any other output device may also be included in device 142 .
  • Input device(s) 154 and output device(s) 152 may be connected to device 142 via a wired connection, wireless connection, or any combination thereof.
  • an input device or an output device from another computing device may be used as input device(s) 154 or output device(s) 152 for computing device 142 .
  • Components of computing device 142 may be connected by various interconnects, such as a bus.
  • Such interconnects may include a Peripheral Component Interconnect (PCI), such as PCI Express, a Universal Serial Bus (USB), firewire (IEEE 1394), an optical bus structure, and the like.
  • components of computing device 142 may be interconnected by a network.
  • memory 148 may be comprised of multiple physical memory units located in different physical locations interconnected by a network.
  • a computing device 160 accessible via network 158 may store computer readable instructions to implement one or more embodiments provided herein.
  • Computing device 142 may access computing device 160 and download a part or all of the computer readable instructions for execution.
  • computing device 142 may download pieces of the computer readable instructions, as needed, or some instructions may be executed at computing device 142 and some at computing device 160 .
  • one or more of the operations described may constitute computer readable instructions stored on one or more computer readable media, which if executed by a computing device, will cause the computing device to perform the operations described.
  • the order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated by one skilled in the art having the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein.
  • the word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as advantageous over other aspects or designs. Rather, use of the word exemplary is intended to present concepts in a concrete fashion.
  • the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances.
  • the articles “a” and “an” as used in this application and the appended claims may generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form.

Abstract

Peer-to-peer communications sessions involve the transmission of one or more data streams from a source to a set of receivers that may redistribute portions of the data stream via a set of routing trees. Achieving a comparatively high, sustainable data rate throughput of the data stream(s) may be difficult due to the large number of available routing trees, as well as pertinent variations in the nature of the communications session (e.g., upload communications caps, network link caps, the presence or absence of helpers, and the full or partial interconnectedness of the network.) The selection of routing trees may be facilitated through the representation of the node set according to a linear programming model, such as a primal model or a linear programming dual model, and iterative processes for applying such models and identifying low-cost routing trees during an iteration.

Description

    BACKGROUND
  • The peer-to-peer communications model involves a set of nodes (such as computers or devices) connected over a network that cooperate in the distribution of one or more data streams. This model differs from more conventional communications models by permitting a node that is receiving a portion of the data stream to redistribute it to other nodes, thereby reducing the communications strain on the source of the data stream and amplifying the availability and potential throughput of the source stream to other nodes.
  • A peer-to-peer communications session may be established over networks and among clients that have many properties relevant to the throughput of the network, such as the uploading and downloading capacity of the nodes; an uploading communications cap that limits the number of outbound connections that may be established by a node; and the full or partial interconnectedness of the nodes (i.e., whether any node is able to reach the full set of nodes or only a neighboring subset thereof.) The number of data streams generated and shared among the nodes may also be relevant; e.g., a single data stream may be shared by a single source and redistributed among the nodes (such as in an IP television scenario), or a plurality of data streams may be shared by multiple sources to be redistributed to the nodes (such as in a conferencing scenario.) Moreover, some nodes may cooperate as helpers by optionally participating in the session, such that the helper does not consume the data stream (and may be excluded from a portion of the data stream) but may be utilized for redistributing a portion of the data stream to other nodes. These and other properties of the network topology and capacity, the roles and capabilities of the nodes, and the nature of the data streams to be shared may affect the achievable throughput of the data streams in the peer-to-peer communications session.
  • SUMMARY
  • This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key factors or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
  • In view of the properties of a given peer-to-peer communications session, including the topology and capacity of the network, the roles and capabilities of the nodes, and the types of data streams shared thereamong, a theoretically achievable throughput may exist. For example, a video stream shared by a source in an IPTV scenario may be bitrate-adjustable, such that higher bitrate results in better quality video but also a greater size of the data stream. Based on the selection of routing patterns, the achievable throughput to the nodes of the network may vary. However, it may be difficult to select a routing pattern that enables a comparatively high theoretical throughput, especially in view of the range and complexity of network characteristics that may affect the theoretical throughput.
  • It may be appreciated that a high theoretical throughput may be achieved by endeavoring to consume the entire network capacity of all nodes in the redistribution of the one or more data streams. For a particular node, the consumption of upload capacity is a function of the selection of routing trees through the node. Thus, the theoretically achievable throughput of the data streams is an aggregate function of the selection of routing trees, and the selection of a routing tree set in furtherance of consuming a greater share of the capacities of the nodes and of the connections may promote an increase in the theoretical throughput of the session.
  • Techniques may be devised for modeling a network in a manner that facilitates the selection of a high-throughput routing pattern for distributing one or more data streams to the nodes of the peer-to-peer communications session. Such techniques may involve modeling the set of potential routing trees for the communications stream according to a linear programming model, wherein the theoretically achievable throughput of the network may be calculated as a sum of the uploading throughputs of the nodes, which may in turn be adjusted through the apportionment of the data stream among the set of available routing trees. For example, a primal model may be devised that permits the calculation of a theoretical throughput that may be desirably increased; alternatively, a linear programming dual model may be devised that permits the calculation of a theoretical networking cost that may be desirably reduced to expand the utilization of networking resources. By structuring the model such that the parameters of the model may be adjusted to increase this calculation, these techniques facilitate the automated selection of routing trees in the peer-to-peer communications session in a manner that promotes the achievable throughput.
  • To the accomplishment of the foregoing and related ends, the following description and annexed drawings set forth certain illustrative aspects and implementations. These are indicative of but a few of the various ways in which one or more aspects may be employed. Other aspects, advantages, and novel features of the disclosure will become apparent from the following detailed description when considered in conjunction with the annexed drawings.
  • DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is an illustration of a routing tree set comprising a set of routing trees whereby a data stream may be transmitted from a source to a set of receivers.
  • FIG. 2 is an illustration of routing trees involving a helper that may be optionally included in the routing of the routing tree.
  • FIG. 3 is a flow chart illustrating an exemplary method of transmitting a data stream among a node set comprising a source of the data stream and a set of receivers over a routing tree set.
  • FIG. 4 is a pseudocode block illustrating an exemplary iterative process for applying a linear programming dual model to select low-cost routing trees in a single-data-source communications session.
  • FIG. 5 is a pseudocode block illustrating a technique for selecting low-cost routing trees from a routing tree set in a peer-to-peer communications network.
  • FIG. 6 is a pseudocode block illustrating another technique for selecting low-cost routing trees from a routing tree set in a peer-to-peer communications network.
  • FIG. 7 is a pseudocode block illustrating yet another technique for selecting low-cost routing trees from a routing tree set in a peer-to-peer communications network.
  • FIG. 8 is a pseudocode block illustrating yet another technique for selecting low-cost routing trees from a routing tree set in a peer-to-peer communications network.
  • FIG. 9 is a flow chart illustrating an exemplary method of transmitting a plurality of data streams among a node set comprising a source of a respective data stream and a set of receivers over a routing tree set.
  • FIG. 10 is a pseudocode block illustrating an exemplary iterative process for applying a linear programming dual model to select low-cost routing trees in a multiple-data-source communications session.
  • FIG. 11 illustrates an exemplary computing environment wherein one or more of the provisions set forth herein may be implemented.
  • DETAILED DESCRIPTION
  • The claimed subject matter is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the claimed subject matter. It may be evident, however, that the claimed subject matter may be practiced without these specific details. In other instances, structures and devices are shown in block diagram form in order to facilitate describing the claimed subject matter.
  • Peer-to-peer communication sessions involve a transmission of a data stream among a set of nodes (e.g., computers or devices) over a communications network. In contrast with more conventional data transfer sessions wherein a source sends the data stream to all receivers, a peer-to-peer communications session enables receivers to retransmit a portion of the data stream to other receivers. By utilizing the upload capacity of the nodes, the peer-to-peer communications model improves both the availability and the delivery of the resource to other receivers while alleviating some of the communications burden on the source.
  • Based on the capacity of the network, a data stream may be reliably deliverable over the network to the receivers at a particular data rate, such as a bitrate of a video stream. It may be desirable to configure the peer-to-peer session to increase the data rate of the stream; e.g., a network of a higher sustainable capacity may be able to deliver a higher-quality video stream than a network of a lower sustainable capacity. Consequently, it may be desirable to organize the peer-to-peer communication to approach, if not meet, a theoretically achievable throughput limit of the network. One such technique involves dividing the data stream into data substreams having a lower data rate, each of which may be sent over a different routing tree. By choosing a variety of routing trees covering the node set, each handling a particular data substream of a comparatively small data rate, the delivery of the data substreams may be adjusted to enhance the achievable throughput of the aggregated data stream.
  • However, selecting an advantageous organization of the peer-to-peer session may be complicated by the range of variables describing the capabilities of the network and clients and the nature of the peer-to-peer session. As a first example, the communications session may involve the transmission of one data stream by one source (such as in an IP television scenario, where a video stream is to be broadly delivered) or a plurality of data streams shared by a plurality of sources (such as in a media conferencing scenario, where several sources wish to share a media stream of a participant with the other participants in the conference.) As a second example, the capabilities of the nodes and the network may vary among scenarios. In many cases, the rate-limiting factor of a node is the upload capacity, which is typically much smaller than the download capacity of the node. However, download capacity may also factor into the allocation and selection of routing paths (e.g., a particular node may be unable to receive a large number of streams at a high bitrate.) Connection capacity may also be a factor; e.g., a sending node and a receiving node may be ready to exchange information, but the capacity of the network connection between the sending node and the receiving node may be a limiting factor. Other relevant parameters may also vary, such as the size of the network (e.g., a small number of sources and receivers or a large number), the interconnectedness of the nodes (e.g., each node may be able to reach all other nodes in the node set, or may be limited to a subset of neighboring nodes), and the outbound connections cap of the nodes (e.g., a node may or may not be limited in the number of receivers to which the node may concurrently send data.) The network may or may not also be supported by one or more helpers, which are not necessarily included in the full data stream but may offer upload capacity that may be utilized to retransmit a portion of the data stream to receivers. Due to the range of these variables, a peer-to-peer organization that enables a high data rate in one scenario may be limited to a lower achievable data rate in another scenario.
  • An additional complication arises from the range of available routing trees, which grows exponentially with the number of nodes in the network. FIG. 1 illustrates an exemplary scenario 10 involving a delivery of a data stream 12 to a node set 14 comprising a source 16 and four receivers 18 that may redistribute the data stream 12 to the other receivers 18. Among these nodes, a routing tree set 20 may be devised of routing trees 22 specifying an ordering of the delivery of the data stream 12 (or a portion thereof) among the nodes. As FIG. 1 illustrates, even a comparatively small node set 14 with only one source 16 and four receivers 18 may permit dozens of routing trees of the data stream 12 among the nodes (only some of which are illustrated in this exemplary scenario 10.) A source 16 participating in the peer-to-peer session may therefore consume a nontrivial amount of computing resources simply while evaluating this range of options and selecting routing trees 22 of the routing tree set 20 over which the data substreams comprising the data stream 12 may be delivered.
  • Yet another additional complexity may arise depending on whether or not the node set 14 includes or excludes helpers that may redistribute the data stream 12, but are not consumers of the data stream 12, and may be omitted from some or all of the selected routing trees 22 without compromising the desired delivery of the data stream 12 to all receivers 18. FIG. 2 illustrates two valid routing trees and one invalid routing tree pertaining to a node set 14 featuring a helper 32. In the first routing tree, the source 16 transmits the data stream 12 to two receivers 18 and to the helper 32, which retransmits the data stream 12 to the remaining two receivers 18. In the second routing tree, the source 16 delivers the data stream 12 to all receivers 18 but not to the helper 32; this is an acceptable routing tree 22 because the helper 32 does not consume the data stream 12. Moreover, transmitting the data stream 12 to the helper 32 in this second routing tree 22 would be inefficient, because there remains no receiver 18 to which the helper 32 may retransmit the data stream 12. The third routing tree illustrates this invalid routing, wherein a helper 32 is delivered a portion of the data stream 12 but does not retransmit the data stream to any receiver 18. In this routing tree, the helper represents a “leaf helper” 34 that exists as a child node in the routing tree 22 having no receivers 18 as child nodes, creating an inefficiency. By contrast, the helper 32 in the first routing tree is a “non-leaf” helper, as it transmits to two receivers 18. Thus, peer-to-peer networks may optionally include one or more helpers 32 to facilitate the redistribution of the data stream 12, but the routing of the data stream 12 desirably excludes leaf helpers 34.
  • In view of these complexities, techniques may be developed to facilitate the organization of the peer-to-peer communications session in a manner that enables a data rate throughput that approaches, and in certain cases equals, a theoretically achievable data rate limit. Some of these techniques involve a modeling of the peer-to-peer network in a particular manner that promotes an evaluation of the options and a calculation of the achievable throughput of a particular configuration. If the network may be represented in this manner, an automated calculation and comparison of various configurations may be performed to consider the alternatives and to choose a well-performing configuration, and to allocate data substreams of the data stream for transmission to the receiver nodes in a reliable and sustainable manner. However, the complexities and range of variables may hamper the development of a model that applies well in many scenarios.
  • Presented herein are several techniques for organizing the peer-to-peer communications session according to various models, for evaluating the options presented by the respective models applied to a particular peer-to-peer communications session, for choosing an acceptable routing tree set and data substreams transmitted thereacross that enable a high theoretical data rate throughput, and for allocating and transmitting the respective data substreams over the respective routing trees to the receivers. Many of these techniques rely on a linear programming model, wherein the data rates of data substreams allocated to particular routing trees may be adjusted to achieve an advantageous theoretical throughput. The linear programming model may be devised as a primal model, yielding a calculable throughput rate that may be desirably increased by adjusting the parameters of the primal model representing the allocation of data rates to various routing trees. Alternatively, the linear programming model may be devised as a linear programming dual model, yielding a calculable network cost representing the inefficient allocation of resources (such as upload capacity) that may be advantageously reduced to improve the full utilization of network resources. For example, a routing tree that includes a node that is already retransmitting other data substreams may be represented as having a higher network cost than a routing tree that includes only nodes having greater available upload capacity, which may be able to handle a greater data rate of the data substream. The reduction of the network cost may therefore represent a more efficient organization of network resources that enables a higher throughput of the data streams in the peer-to-peer communications session. Those familiar with linear programming models may appreciate the use of such models in the holistic evaluation and automated configuration of the peer-to-peer communications session.
  • FIG. 3 presents a first technique for achieving the configuration of the peer-to-peer communications session having a single data stream 12 shared by a single source 16. This scenario may pertain, e.g., to an IP television arrangement, wherein a source of a video stream transmits the stream to a (potentially large) set of receivers 18 that share the network burden by redistributing portions of the video stream. The first technique is illustrated in FIG. 3 as an exemplary method 40 of transmitting a data stream 12 among a node set 14 comprising a source 16 of the data stream 12 and a set of receivers 18 over a routing tree set 20, where respective routing trees 22 specify a route of the data stream 12 among the source 16 and the receivers 18. The exemplary method 40 of FIG. 3 begins at 42 and involves representing 44 the routing tree set 20 as a primal model allocating a routing tree data rate of the data stream 12 for respective routing trees 22. The exemplary method 40 also endeavors to restrict the routing tree data rate within an upload capacity of respective senders in the route of the routing tree 22 (a “sender” comprising any node that delivers a data substream to another node.) The exemplary method 40 also involves selecting 46 routing tree data rates for the respective routing trees 22 that increase an aggregated data rate according to the primal model. When the data rates for various routing trees 22 have been selected, the exemplary method 40 also involves apportioning 48 data substreams of the data stream 12 for the respective routing trees 22 according to the routing tree data rate of the routing tree 22, and transmitting 50 the data substreams over the respective routing trees 22 at the respective routing tree data rates. Having determined the routing of the data stream 12 according to the upload capacities of the nodes of the node set 14 with the assistance of the primal model, the exemplary method 40 thereby achieves the delivery of the data stream 12 to the nodes of the node set 14 at a comparatively high throughput, and so ends at 52.
  • One such primal model that may be used in this manner is represented by the mathematical formula:

  • increase $r = \sum_{t \in T} y_t$

  • subject to $\sum_{t \in T} m_{\nu,t}\, y_t \le C(\nu) \quad \forall \nu \in V$,

  • $y_t \ge 0 \quad \forall t \in T$.
  • This mathematical formula relies on the following notation:
  • V represents the set of receivers and the source 16;
  • ν represents a sender in a routing tree 22;
  • C(ν) represents the upload capacity of sender ν;
  • T represents the routing tree set 20;
  • t represents a routing tree 22;
  • yt represents the routing tree data rate of routing tree t;
  • mν,t represents the number of receivers to which sender ν transmits the data substream in routing tree t; and
  • r represents the aggregated data rate of the data stream 12 transmitted over the routing trees 22.
  • This primal model mathematical formula therefore suggests calculating the throughput of the data stream 12 as the sum of the data rates of the data substreams transmitted over the routing trees 22 of the routing tree set 20, subject to the upload capacity constraints of the senders. It may be appreciated that this model considers the allocation of data substreams across the entire routing tree set 20, and that an advantageously high result may involve the selection of a very large number of routing trees 22, resulting in the segmentation of the data stream 12 into a large number of data substreams, some or all of which may potentially have a small routing tree data rate. An evaluation of the network using this primal model may therefore strive to expand the consumption of upload capacity of the senders in the node set 14 in furtherance of enabling an advantageously high sustainable bit rate of the data stream 12.
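  • By way of illustration only, this primal model may be handed directly to an off-the-shelf linear programming solver. The following sketch applies the model to a small session comprising one source and two receivers, for which the routing tree set may be enumerated by hand; the node names, capacities, and the use of Python's scipy.optimize.linprog are illustrative assumptions rather than elements of the disclosed techniques.

    # Illustrative (assumed) instance: source s with two receivers a and b.
    from scipy.optimize import linprog

    C = {"s": 4.0, "a": 2.0, "b": 1.0}    # hypothetical upload capacities

    # Routing trees over the node set:
    #   t0: s -> a, s -> b   (the source sends two copies)
    #   t1: s -> a -> b      (the source and a each send one copy)
    #   t2: s -> b -> a      (the source and b each send one copy)
    # Each row below lists m_{v,t} for one sender across the three trees.
    m = {"s": [2, 1, 1], "a": [0, 1, 0], "b": [0, 0, 1]}

    # linprog minimizes, so negate the objective r = y0 + y1 + y2.
    res = linprog(c=[-1.0, -1.0, -1.0],
                  A_ub=[m["s"], m["a"], m["b"]],
                  b_ub=[C["s"], C["a"], C["b"]],
                  bounds=[(0, None)] * 3)
    print("routing tree data rates y_t:", res.x)
    print("aggregated data rate r:", -res.fun)

  • For these assumed capacities, the solver should report y = (0.5, 2, 1): the relay capacities of both receivers are saturated and the remainder is sent directly from the source, for an aggregated data rate r = 3.5.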
  • While this primal model may be useful, it may be difficult to determine the proximity of the achieved throughput (i.e., the value of r) to the theoretical throughput limit. An alternative model may therefore be devised, wherein the primal model is represented as a linear programming dual model that associates a routing price with respective senders in the route of a routing tree 22. The routing price in turn represents a per-unit flow cost of the data substream through a particular sender. For example, if the sender has plentiful upload capacity, the per-unit flow cost through the sender may be indicated as a low cost, but if the sender is already uploading a substantial aggregate data rate for other data substreams, the routing tree 22 may have a high cost. Accordingly, the selecting 46 may comprise selecting routing capacities that reduce the routing prices of the senders of the routing trees 22 in the routing tree set 20. By modeling the utilization of network resources according to flow costs instead of throughput, the linear programming dual model may therefore facilitate an evaluation of the efficiency of a particular peer-to-peer organization, rather than simply calculating the aggregate throughput without reference to a theoretical throughput limit.
  • One such linear programming dual model that may be used in this manner is represented by the mathematical formula:

  • reduce $\sum_{\nu \in V} C(\nu)\, p_\nu$

  • subject to $\sum_{\nu \in V} m_{\nu,t}\, p_\nu \ge 1 \quad \forall t \in T$,

  • $p_\nu \ge 0 \quad \forall \nu \in V$.
  • This mathematical formula relies on the following notation:
  • V represents the set of receivers and the source 16;
  • ν represents a sender in a routing tree 22;
  • C(ν) represents the upload capacity of sender ν;
  • T represents the routing tree set 20;
  • t represents a routing tree 22;
  • mν,t represents the number of receivers to which sender ν transmits the data substream in routing tree t; and
  • pν represents the per-unit flow cost of sender ν.
  • This linear programming dual model mathematical formula therefore suggests evaluating the routing of the data stream 12 as an aggregate network cost, reflecting the efficiency whereby the uploading capacities of the senders in the node set 14 are consumed. Again, it may be appreciated that this model considers the allocation of data substreams across the entire routing tree set 20, and that an advantageous result may involve the selection of a very large number of routing trees 22, resulting in the segmentation of the data stream 12 into a large number of data substreams, some or all of which may potentially have a small routing tree data rate.
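  • Continuing the illustrative instance sketched above (again an assumption made for exposition, not the disclosed pseudocode), the linear programming dual model may be solved in the same manner; by linear programming duality, the minimized network cost should coincide with the aggregated data rate computed under the primal model.

    # Dual of the assumed instance above: minimize the capacity-weighted sum
    # of per-unit flow costs, subject to every tree costing at least one.
    from scipy.optimize import linprog

    C = [4.0, 2.0, 1.0]             # capacities of s, a, b (assumed)
    # One row per tree, columns (p_s, p_a, p_b); the constraint
    # sum_v m_{v,t} p_v >= 1 becomes -m . p <= -1 in linprog's convention.
    A_ub = [[-2.0,  0.0,  0.0],     # t0: the source sends two copies
            [-1.0, -1.0,  0.0],     # t1: the source and a send one copy each
            [-1.0,  0.0, -1.0]]     # t2: the source and b send one copy each
    res = linprog(c=C, A_ub=A_ub, b_ub=[-1.0] * 3, bounds=[(0, None)] * 3)
    print("per-unit flow costs p_v:", res.x)
    print("network cost:", res.fun)  # 3.5, matching the primal rate r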
  • The linear programming models presented herein for the single-data-stream scenario therefore involve an evaluation of the relative capacities of the routing trees 22 and an efficient allocation of the data rate of the data stream 12 among various data substreams to permit a comparatively high sustainable data rate of the data stream 12. However, it may be difficult to evaluate all of the routing trees 22 in the routing tree set 20, due to the number of routing trees 22 that are available for the network (as illustrated in FIG. 1.) A brute-force concurrent or consecutive evaluation of all routing trees 22 may be prohibitively resource-intensive, especially in the case of a large network where the potential routing trees 22 may number billions or more. Instead, the selecting 46 may be performed by iteratively selecting the routing capacities of particular routing trees 22 for transmitting particular data substreams.
  • In one such iterative approach utilizing the linear programming dual model, the selecting may involve iteratively selecting the routing capacities of the routing trees by selecting a low-cost routing tree 22 for an iteration and allocating the routing tree data rate for the low-cost routing tree 22 based on the residual upload capacities of the senders in the routing tree 22. The iteration may also involve calculating the residual upload capacities of the senders in the routing tree 22 (for use in assessing the costs of various routing trees 22 in future iterations.) The iterative selecting of low-cost routing trees 22 may continue until the per-unit flow cost of the routing trees 22 is at least one. In this and other calculations, the per-unit flow cost of sender ν may be calculated according to the mathematical formula:
  • $p_i(\nu) = p_{i-1}(\nu) \left( 1 + \varepsilon\, \frac{m_{\nu,t_i}\, y_i}{C(\nu)} \right)$
  • This mathematical formula relies on the following notation (along with the notation previously discussed for other mathematical formulae):
  • i represents an iteration of the selecting;
  • pi(ν) represents the per-unit flow cost of sender ν during iteration i; and
  • ε represents an optimality constraint.
  • The optimality constraint may represent a proximity of the selected routing tree set 20 and data substreams allocated thereacross to a theoretical limit. A higher optimality constraint may result in an improved selection of routing trees 22 and a higher data rate for the data stream 12, but at a cost of additional computing time to achieve the selection. In accordance with this iterative selecting, the data stream 12 may be iteratively apportioned to a low-cost routing tree 22, and the upload capacities of the senders of the selected routing tree 22 may be recalculated to indicate a lower upload capacity (i.e., a higher network cost) for future iterations, until the data stream 12 has been apportioned to low-cost routing trees 22.
  • FIG. 4 illustrates an exemplary pseudocode block 60 that embodies such iterative selecting. It may be appreciated that the pseudocode block 60 is presented in a pseudocode language that may not conform to the syntactic and logical constraints of any particular programming or mathematical language. Rather, this pseudocode block 60 (and those presented and discussed elsewhere herein) is presented to illustrate a sequence of logical concepts that cooperatively achieve the data stream routing techniques discussed herein.
  • In the exemplary pseudocode block 60 of FIG. 4, some parameters are first initialized (such as the calculated flow through a particular node ν and the total allocated data rate Y of the data stream.) The exemplary pseudocode block 60 then iteratively selects a routing tree 22 from the routing tree set 20 that has an advantageously low per-unit flow cost (as determined by the upload capacities of the nodes in the respective routing tree 22.) The selected routing tree 22 is assigned a flow rate based on the achievable data rate of the nodes comprising the routing tree 22 (i.e., the data rate that fully utilizes the upload capacity of the lowest-capacity sending node in the routing tree 22.) The available capacities of the sending nodes in the routing tree 22 are then reduced by the data rate of the substream, and a next iteration is performed, etc., until further apportionment of the data stream 12 among routing trees 22 may not be advantageous (e.g., when the cost of including a routing tree 22 does not offset the infrastructure costs of using the routing tree 22.) The resulting allocation of the data stream 12 may therefore provide a selection of routing trees 22 and routing tree data rates that produces a desirably high sustainable data rate for the data stream 12.
  • However, in some scenarios, a linear programming dual model for selection may rely too heavily on an allocation of data substreams through a particular sender that may exceed the upload capacity of the sender, and may result in a potentially unsustainable allocation. In other scenarios, the upload capacity of a sender may be reduced, thereby impairing the previously sustainable data rate of the data substreams routed through the sender. In these and other scenarios, it may be advantageous to include a scalable parameter in the allocation techniques (such as the mathematical formulae) that calculate the bandwidth allocation of the data substreams.
  • One technique that enables scalability is the inclusion of a scaling factor, L, that may be applied to the allocated routing tree data rates after the calculation. Although the scaling reduces the overall data rate, the reduction promotes the sustainability of the data rate of the data stream 12 through the node set 14 and the preservation of the quality of the data stream 12. For example, L may be computed according to the mathematical formula:
  • $L = \max_{\nu \in V} \frac{U(\nu)}{C(\nu)}$
  • This mathematical formula relies on the following notation (along with the notation previously discussed for other mathematical formulae):
  • C(ν) represents the upload capacity of sender ν, and
  • U(ν) represents an aggregate upload rate of sender ν among all routing trees.
  • This scaling factor may be applied to the calculated routing tree data rates of the selected routing trees 22. This scaling may also be factored in during the iterative processing by including it as an element of the networking cost. For example, the per-unit flow cost of sender ν during a first iteration of the selecting of routing trees 22 may be calculated according to the mathematical formula:
  • $p_0(\nu) = (1 + \varepsilon)\left((1 + \varepsilon) L\right)^{-1/\varepsilon}$
  • (using the notation previously discussed for other mathematical formulae.) This value may be utilized for the initial constant (δ) comprising the initial value of the per-unit flow cost. This factor is then applied to the computed routing tree data rate of the first iteration, and is propagated through subsequent iterations as a progressively adjusted per-unit flow cost.
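  • The iterative selection and the scaling factor L may be sketched together as follows. This is a simplified rendering in the spirit of FIG. 4, not a reproduction of it: the initialization constant, the data structures, and the function names are assumptions made for illustration, and strictly positive upload capacities are presumed.

    # Sketch (in the spirit of FIG. 4) of the iterative dual-model selection
    # together with the scaling factor L; the initialization constant and the
    # data structures are assumptions, and capacities are presumed positive.
    def select_routing_trees(trees, C, eps=0.1):
        """trees: list of dicts mapping sender -> copies sent (m_{v,t});
        C: dict mapping sender -> upload capacity.
        Returns a dict mapping tree index -> scaled routing tree data rate."""
        # Assumed initial per-unit flow cost; the patent derives a comparable
        # constant delta from (1 + eps) and the scaling factor L.
        delta = (1.0 + eps) * ((1.0 + eps) * len(C)) ** (-1.0 / eps)
        p = {v: delta for v in C}     # per-unit flow cost of each sender
        y = [0.0] * len(trees)        # unscaled rate allocated to each tree
        U = {v: 0.0 for v in C}       # aggregate unscaled upload of each sender
        while True:
            # Identify the low-cost routing tree for this iteration.
            costs = [sum(m * p[v] for v, m in t.items()) for t in trees]
            i = min(range(len(trees)), key=costs.__getitem__)
            if costs[i] >= 1.0:       # stop once every tree costs at least one
                break
            # Allocate the rate supported by the tree's bottleneck sender.
            rate = min(C[v] / m for v, m in trees[i].items())
            y[i] += rate
            for v, m in trees[i].items():
                U[v] += m * rate
                # p_i(v) = p_{i-1}(v) * (1 + eps * m_{v,t_i} * y_i / C(v))
                p[v] *= 1.0 + eps * (m * rate) / C[v]
        # Scaling factor L = max_v U(v) / C(v) restores a sustainable allocation.
        L = max(U[v] / C[v] for v in C)
        return {i: rate / L for i, rate in enumerate(y) if rate > 0}

  • As ε shrinks, the scaled allocation approaches the optimum at the expense of additional iterations, mirroring the role of the optimality constraint discussed above.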
  • These dual models and iterative selections involve an identification of a low-cost routing tree to be assigned a data substream. Again, this identification may be difficult due to the number of available routing trees 22, and a brute-force search may be prohibitively lengthy even for small node sets 14. The search may be improved by using various search techniques (heuristically guided state searches, learning processes such as artificial neural networks and genetic programming, etc.), but these general-purpose techniques may achieve only modest improvement in the search process, and may not identify a sufficiently low-cost tree within the available time window of a particular iteration.
  • Alternatively, low-cost routing tree identification techniques may be devised and utilized within the dual models. Such low-cost routing tree identification may be attuned to the details of the network, such as the full or partial interconnectedness of the network, the capping or uncapping of the upload connections of each sender, and the presence or absence of helpers 32 in the node set 14. The selection of an appropriate low-cost routing tree identification technique may therefore yield suitably low-cost routing trees for any particular session or iteration.
  • A first low-cost routing tree identification technique may be applied to dual models where the node set 14 comprises a source 16, receivers 18, and zero or more helpers 32, all of which are capable of sending to the respective receivers 18 and respective helpers 32 (such as a fully interconnected network) without an upload connections cap. If helpers 32 are included, the networking cost for helpers 32 may be slightly increased to represent the additional complexity of routing through a helper 32 that is not a consumer of the data stream 12. Thus, a helper 32 will only be selected in a routing tree if the routing cost is otherwise lower than for a receiver 18. This difference may be represented by computing, for each node, an effective routing price according to the mathematical formula:
  • $\hat{p}(\nu) = \begin{cases} p(\nu) & \text{if } \nu \in \{s\} \cup R \\ p(\nu) \cdot \frac{|R|}{|R| - 1} & \text{if } \nu \in H \end{cases}$
  • This mathematical formula relies on the following notation (along with the notation previously discussed for other mathematical formulae):
  • $\hat{p}(\nu)$ represents the effective routing price,
  • s represents the source 16,
  • R represents the set of receivers 18, and
  • H represents the set of helpers 32.
  • The selecting of a low-cost routing tree 22 may therefore be computed as a one- or two-connection routing tree: either the data substream of the routing tree 22 may be delivered directly from the source 16 to the receivers 18, or may be delivered from the source 16 to a receiver 18 or helper 32 that retransmits the data substream to the remaining receivers 18. The routing tree data rate for the generated routing tree 22 is calculated as the maximum rate sustainable by both the upload capacity of the selected node and that of the source, or a scaled portion thereof. This manner of selecting 46 low-cost routing trees 22 may be performed during each iteration, and the subsequent iteration may account for the depletion of upload capacity due to the selected low-cost routing trees 22 identified in preceding iterations.
  • FIG. 5 illustrates a pseudocode block 70 embodying this manner of selecting 46 low-cost routing trees 22. In particular, the selecting 46 of a low-cost routing tree 22 may involve selecting a sender ν having a low effective routing price among the senders of the node set according to this logic: if ν is the source 16, generate a routing tree 22 routing a data substream from the source 16 to the receivers 18; if ν is a receiver 18, generate a routing tree 22 routing a data substream from the source 16 to ν and from ν to the receivers 18 except ν; and if ν is a helper 32, generate a routing tree 22 routing a data substream from the source 16 to ν and from ν to the receivers 18. The generated routing tree 22 may then be added to a set of selected routing trees 22 over which respective data substreams are to be delivered at designated routing tree data rates.
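  • This first identification technique may be sketched as follows, with the routing tree represented as a mapping from each sender to its child nodes; the function and parameter names are hypothetical, and at least two receivers are presumed so that the effective-price ratio is well defined.

    # Sketch of the first technique (full interconnection, no connections
    # cap); the tree encoding and naming are illustrative assumptions.
    def identify_low_cost_tree(p, s, R, H):
        """p: routing price per node; s: source; R: receivers; H: helpers.
        Returns a tree as a dict mapping each sender to its child nodes."""
        n = len(R)
        def effective_price(v):
            # Helpers carry the inflated price p(v) * |R| / (|R| - 1), so a
            # helper is chosen only when genuinely cheaper than a receiver.
            return p[v] * n / (n - 1) if v in H else p[v]

        v = min([s] + list(R) + list(H), key=effective_price)
        if v == s:
            return {s: sorted(R)}            # one hop: source to all receivers
        if v in H:
            return {s: [v], v: sorted(R)}    # two hops via a helper
        return {s: [v], v: sorted(r for r in R if r != v)}  # via a receiver

  • For example, with routing prices p = {'s': 0.4, 'a': 0.2, 'b': 0.5, 'h': 0.09}, receivers {'a', 'b'}, and helper {'h'}, the helper is chosen: its effective price of 0.18 still undercuts every receiver.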
  • Another low-cost routing tree identification technique may be applied to a network comprising a node set 14 including a source 16 of a data stream 12 and receivers 18 (but having no helpers 32), where all senders are capable of sending to the respective receivers 18 (i.e., in a fully interconnected network), but with a cap on the number of upload connections. For example, the senders may be capped (either internally or by the network) to establish no more than a designated number of concurrent upload connections. In this type of network, the exemplary pseudocode block 70 of FIG. 5 may not be applicable, because the generated routing trees may specify too many upload connections from a particular sender.
  • For this type of network, a second low-cost routing tree identification technique may be devised. In this technique, a routing tree 22 may be generated by selecting a sender ν having a low routing price among the senders of the node set 14 and generating a routing tree 22 that routes a data substream from the source 16 to ν. The routing tree 22 may then be recursively extended through the receivers 18 of the node set 14 according to a low-cost node identification. For example, after generating the first connection in the routing tree 22 from the source 16 to a first sender, the recursive extending may involve selecting a sender ν having a low routing price among the receivers included in the routing tree, and having at least one fewer upload connection as compared with the upload connections cap. The generating may also involve selecting a receiver να having a low routing price among the receivers not yet included in the routing tree (i.e., with which the sender ν may efficiently communicate over the network.) The receiver να may then be added to the routing by extending the routing tree 22 to route the data substream from sender ν to receiver να. This recursive selecting may continue until the routing tree 22 includes the set of receivers 18. The routing tree 22 may then be added to the set of routing trees 22 over which data substreams are to be transmitted, and the next iteration may involve selecting a new low-cost routing tree 22 taking into account the upload capacity allocations and upload connections allocated for respective nodes in previous iterations.
  • FIG. 6 illustrates a pseudocode block 80 embodying this second low-cost routing tree identification technique, wherein:
  • A represents the set of senders (both the source 16 and receivers 18) that are already in a routing tree 22 being recursively generated;
  • B represents the set of receivers 18 that are not yet included in the routing tree 22; and
  • Ã represents the subset of receivers in set A that have at least one fewer allocated upload connection as compared with the upload connection cap.
  • However, it may be appreciated that the pseudocode block 80 of FIG. 6 is not the only embodiment of this second low-cost routing tree identifying technique, and that those of ordinary skill in the art may devise other embodiments of this technique.
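  • For illustration, one such other embodiment of this second technique might resemble the following sketch (the data structures are assumptions, and the first node of the tree is chosen among the receivers for simplicity):

    # Sketch of the second technique: fully interconnected, no helpers, with
    # an upload connections cap; data structures and naming are illustrative.
    def identify_capped_tree(p, s, R, cap):
        """Grow a routing tree rooted at source s over receivers R, never
        giving any sender more than `cap` upload connections."""
        first = min(R, key=lambda v: p[v])   # cheapest receiver starts the tree
        children = {s: [first]}
        in_tree = {s, first}
        remaining = set(R) - {first}
        while remaining:
            # Cheapest in-tree sender that still has a spare upload connection.
            available = [v for v in in_tree if len(children.get(v, [])) < cap]
            sender = min(available, key=lambda v: p[v])
            # Cheapest receiver not yet reached by the tree.
            receiver = min(remaining, key=lambda v: p[v])
            children.setdefault(sender, []).append(receiver)
            in_tree.add(receiver)
            remaining.remove(receiver)
        return children

  • Because every node added to the tree contributes cap ≥ 1 fresh upload connections while consuming exactly one, the list of available senders can never be empty, so the recursion always terminates with all receivers included.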
  • A type of network to which these techniques may be applied involves a fully-interconnected network comprising the source 16, the receivers 18, and zero or more helpers 32, and also includes an upload connections cap applied to at least some of the nodes of the node set 14. It may be appreciated that this type of network may not be adequately serviced by either the first low-cost routing tree identifying technique (as it does not account for the upload connections cap) or the second low-cost routing tree identifying technique (as it does not anticipate the inclusion of helpers 32, and therefore may inefficiently route portions of the data stream 12 to leaf helpers 34.)
  • Therefore, a third low-cost routing tree identifying technique may be devised for networks involving a node set 14 comprising at least zero helpers 32 alongside the source 16 and the receivers 18, and where the source 16, receivers 18, and helpers 32 are capable of sending to the respective receivers 18 and respective helpers 32, subject to a cap on the number of upload connections. In this scenario, the selecting 46 may again involve selecting a sender ν having a low routing price among the senders and helpers of the node set 14, generating a routing tree 22 routing a data substream from the source 16 to ν, and recursively extending the routing tree 22 to the other receivers 18. However, in this scenario, the recursive extending may involve selecting a sender να having a low routing price among the senders included in the routing tree and having at least one fewer upload connection as compared with the upload connections cap, and extending the routing tree 22 to route the data substream from να to respective low-priced nodes not yet included in the routing tree 22 until the upload connections of να equal the upload connections cap. This recursive extending may continue until the routing tree 22 includes the set of receivers 18, and the generated routing tree 22 may then be added to the set of routing trees over which respective data substreams are to be transmitted.
  • While this third low-cost routing tree identifying technique may generate adequately low-cost routing trees, it may also involve selecting routing trees 22 that include leaf helpers 34. An improvement of this third technique involves removing leaf helpers 34 in a manner that further reduces the network cost of the selected routing tree 22. According to this improvement, after selecting the low-cost routing tree, at least one leaf helper 34 may be removed by first identifying a leaf helper 34 in the routing tree 22 (e.g., by iterating over the set of helpers 32 and identifying whether any helper 32 is included in the routing tree 22 but has no children.) Upon such identifying, the improved third technique may remove the leaf helper 34 by selecting a high-cost node in the routing tree having at least one upload connection (i.e., having at least one child node), removing the leaf helper 34 from the routing tree 22, and transferring an upload connection from the high-cost node to the sender of the leaf helper 34 in the routing tree 22. Because removing the leaf helper 34 provides at least one available outbound connection to the sender (i.e., the parent) of the leaf helper 34, the now-available outbound connection may be utilized to reduce the burden of sending to at least one receiver through the high-cost node. Moreover, if the high-cost node is also a helper 32, the high-cost node might also be removed by attempting to transfer the receivers of the high-cost node to other nodes. This may be achieved by identifying at least one low-cost node having at least one fewer upload connection as compared with the upload connections cap, transferring the receivers 18 of the high-cost node to the at least one low-cost node, and removing the high-cost node from the routing tree 22. In this manner, this improved third technique thereby improves the routing tree 22 generated by the basic third technique by removing leaf helpers 34 while concurrently transferring connections from high-cost senders to low-cost senders.
  • FIG. 7 provides a pseudocode block 90 embodying the improved third low-cost routing tree identifying technique, utilizing the same notation used for the other mathematical formulae and pseudocode blocks discussed herein. This pseudocode block 90 first recursively generates the routing tree 22 according to the basic third technique (e.g., by recursively selecting low-cost senders in the routing tree 22, and adding them to the routing tree 22 with as many not-yet-included receivers 18 and helpers 32 as permitted by the upload connections cap), and then removes leaf helpers 34 by transferring such connections to the sender of the leaf helper 34 and to other low-cost senders. However, the pseudocode block 90 of FIG. 7 represents only one embodiment of such techniques, and those of ordinary skill in the art may devise other embodiments of the basic and improved third low-cost routing tree identifying techniques.
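  • The leaf-helper removal step of this improved third technique may be sketched as follows. The data structures and naming are assumptions, and the transfer step is simplified to move only childless nodes so that the re-parenting cannot create a cycle:

    # Sketch of the leaf-helper removal step of the improved third technique;
    # the tree encoding and naming are illustrative assumptions.
    def prune_leaf_helpers(children, H, p):
        """children: routing tree as a sender -> list-of-children mapping;
        H: the set of helpers; p: routing price per node. Removes leaf
        helpers, shifting one connection from a high-cost sender to each
        freed parent."""
        parent = {c: v for v, kids in children.items() for c in kids}
        changed = True
        while changed:
            changed = False
            for h in [h for h in H if h in parent and not children.get(h)]:
                par = parent.pop(h)
                children[par].remove(h)       # detach the leaf helper
                # Transfer one connection from the costliest sender to the
                # freed parent; only childless nodes are moved, so the
                # re-parenting cannot create a cycle.
                movable = [(v, c) for v, kids in children.items() if v != par
                           for c in kids if not children.get(c)]
                if movable:
                    worst, moved = max(movable, key=lambda vc: p[vc[0]])
                    children[worst].remove(moved)
                    children[par].append(moved)
                    parent[moved] = par
                changed = True
        return children

  • If a transfer leaves a high-cost helper childless, the next pass removes it as a newly created leaf helper, matching the cascading removal described above.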
  • Still another type of network to which these techniques may be applied involves a partially interconnected network, wherein at least one sender in the node set 14 may be unable to connect to at least one receiver 18 in the node set 14. The Internet may represent one such network, wherein the interconnectedness of nodes in a peer-to-peer communication session may be limited (e.g.) by firewalls, geographic distances, and intermittent links that provide unacceptably high latency between two nodes. The peer-to-peer communication session may still be achieved by routing the data stream from such a sender to such a recipient through a third node that is accessible to both the sender and the receiver. However, the first three low-cost routing tree identifying techniques may be inapplicable to such networks due to the basic presumption of such techniques that the node set 14 is fully interconnected.
  • For such networks, a fourth low-cost routing tree identifying technique may be devised that takes into account the connectivity of any particular node with a subset of nodes (“neighboring nodes”) in the node set 14. This technique may be applied where the network comprises at least zero helpers 32 alongside the source 16 and the receivers 18, and where the source 16, receivers 18, and helpers 32 are capable of sending to a neighbor set of respective receivers 18 and respective helpers 32. The routing trees 22 identified thereby respect the neighboring node limitations of the nodes and only include routings of a node to and from its neighboring nodes. According to this fourth technique, selecting the low-cost routing tree comprises selecting a sender ν having a low routing price among the senders and helpers 32 of the node set 14; generating a routing tree 22 routing a data substream from the source 16 to ν; and recursively extending the routing tree 22. However, in this fourth technique, the extending involves selecting a sender να having a low routing price among the receivers 18 included in the routing tree 22, and extending the routing tree 22 to route the data substream from να to nodes in the neighbor set of να that are not yet included in the routing tree 22. This extending may continue until the routing tree includes the set of receivers 18.
  • In similar fashion as with the third technique, the fourth low-cost routing tree identifying technique may be improved by endeavoring to remove particular helpers 32 while further reducing the cost of the selected routing tree 22. This improved technique may include removing leaf helpers 34, which may be performed in a similar manner as in the improved third technique (while also respecting the limitation that an upload connection may be transferred from a first sender to a second sender only if the receiver 18 of this upload connection is in the neighbor node set of the second sender.) An additional improvement may involve a reevaluation of the included non-leaf helpers 32 to determine whether a more efficient routing may be achieved by excluding the helper 32. For respective helpers 32, the improved fourth technique determines whether the upload connections (i.e., the receivers) of the non-leaf helper 32 have at least one neighbor node other than the non-leaf helper 32. If all of the receivers have a neighbor node other than the non-leaf helper 32, the non-leaf helper is a candidate for removal. The improved fourth technique may therefore iteratively transfer the upload connections of the non-leaf helper 32 to respective neighbor nodes that have a lower routing cost than the non-leaf helper 32, thereby reducing the upload connections of the non-leaf helper 32. If all of the upload connections of the non-leaf helper 32 may be removed, the non-leaf helper 32 may then be removed from the routing tree 22. Conversely, if at least one upload connection may not be removed from the non-leaf helper 32, then the non-leaf helper 32 may be retained in the routing tree 22, since it serves to reduce the routing cost to the non-removable upload connection (as compared with alternative routing trees 22.) This removal of non-leaf helpers 32 may be iteratively performed until no more non-leaf helpers 32 may be removed from the routing tree 22, and the routing tree 22 may then be added to the set of low-cost routing trees 22 over which data substreams are to be transmitted.
  • FIG. 8 presents an exemplary pseudocode block 100 embodying the improved fourth low-cost routing tree identifying technique, which relies on the following notation (along with the notation previously discussed for other mathematical formulae):
  • Bu* represents a neighbor node set of a node ν in the routing tree 22, and
  • u* represents a neighbor node of a node ν.
  • In this exemplary pseudocode block 100, a routing tree 22 is first generated by recursively extending the routing tree 22 from a node in the routing tree 22 to neighbor nodes, as discussed in the basic fourth technique. The routing tree 22 so generated is then improved first by removing leaf helpers 34, and then by attempting to remove non-leaf helpers 32 by first identifying whether a non-leaf helper 32 is a candidate for removal (i.e., whether all of its upload connections may be transferred to neighbor nodes), and then attempting to transfer away the upload connections to neighbor nodes having a lower routing cost. If all of the upload connections are removed, the non-leaf helper 32 is now a leaf helper 34 that is removed during a subsequent iteration of the routing tree improvement. This iterating continues until no further leaf helpers 34 may be removed and until no more upload connections of non-leaf helpers 32 may be transferred away, and the improved routing tree 22 may then be added to the set of routing trees over which data substreams are to be transmitted. However, the pseudocode block 100 of FIG. 8 represents only one embodiment of these fourth low-cost routing tree identifying techniques, and those of ordinary skill in the art may devise other embodiments of the basic and improved fourth low-cost routing tree identifying techniques.
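  • The non-leaf helper removal of this improved fourth technique may be sketched as follows; the tree encoding, the neighbors map, and the naming are illustrative assumptions, and an explicit ancestry check is included so that transferring a connection cannot create a cycle.

    # Sketch of the non-leaf helper removal of the improved fourth technique;
    # the tree encoding, neighbors map, and naming are assumptions.
    def try_remove_helper(children, parent, helper, p, neighbors, cap, root):
        """Attempt to transfer every upload connection of a non-leaf helper
        to a cheaper in-tree neighbor with a spare connection; drop the
        helper from the tree if all of its connections are transferred."""
        def is_ancestor(anc, node):
            # Walk up the tree from node; True if anc is encountered.
            while node in parent:
                node = parent[node]
                if node == anc:
                    return True
            return False

        for child in list(children.get(helper, [])):
            options = [v for v in neighbors[child]
                       if v != helper
                       and (v == root or v in parent)      # already in the tree
                       and len(children.get(v, [])) < cap  # spare connection
                       and p[v] < p[helper]                # strictly cheaper
                       and not is_ancestor(child, v)]      # avoid a cycle
            if not options:
                continue              # this connection cannot be transferred
            new_parent = min(options, key=lambda v: p[v])
            children[helper].remove(child)
            children.setdefault(new_parent, []).append(child)
            parent[child] = new_parent
        if not children.get(helper):  # emptied: remove the helper itself
            children[parent[helper]].remove(helper)
            children.pop(helper, None)
            del parent[helper]

  • If the helper retains at least one connection, it remains in the routing tree 22, consistent with the observation above that a non-removable upload connection justifies retaining the non-leaf helper 32.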
  • The techniques discussed heretofore and illustrated in FIGS. 3 through 8 are useful for many types of networks. However, some of these techniques are effective for peer-to-peer communications sessions involving a transmission of a single data stream 12 from a single source 16, such as in internet TV distribution. Other scenarios may involve a transmission of multiple data streams 12 from multiple sources 16 to the receivers of the node set 14 (and some or all of the sources 16 may also be receivers of other data streams 12 sent by other sources 16.) Some modest modifications of these techniques may be more helpful for such multi-data-stream scenarios.
  • FIG. 9 presents one embodiment of these techniques for a multiple-data-stream and multiple-source communications network, illustrated as an exemplary method 110 of transmitting at least two data streams 12 among a node set 14 comprising at least two sources 16 of the respective data streams 12 and a set of receivers 18 over a routing tree set 20, where respective routing trees 22 specify a route of the data stream 12 among a respective source 16 and the receivers 18 (potentially including the other sources 16.) The exemplary method 110 begins at 112 and involves representing 114 the node set 14 as a primal model allocating a routing tree data rate of respective data streams 12 for respective routing trees, where the routing tree data rate of the respective routing trees 22 is within an upload capacity of respective senders in the route of the routing tree 22. The exemplary method 110 also involves selecting 116 routing tree data rates for respective routing trees 22 that increase an aggregated data rate according to the primal model. The exemplary method 110 also involves apportioning 118 data substreams of the respective data streams 12 for respective routing trees 22 according to the routing tree data rate of the routing tree 22, and transmitting 120 the data substreams of the respective data streams 12 over the respective routing trees 22 at the respective routing tree data rates. Having determined the routing of the data streams 12 according to the upload capacities of the nodes of the node set 14 with the assistance of the primal model, the exemplary method 110 thereby achieves the delivery of the data streams 12 to the nodes of the node set 14 at a comparatively high throughput, and so ends at 122.
  • One such primal model that may be used in this manner is represented by the mathematical formula:

$$\begin{aligned}
\text{increase} \quad & \lambda \\
\text{subject to} \quad & \sum_{t \in T_k} y_t = \lambda r_k, \quad \forall k = 1, 2, \ldots, K, \\
& \sum_{k} \sum_{t \in T_k} m_{\nu,t}\, y_t \le C(\nu), \quad \forall \nu \in V, \\
& y_t \ge 0, \quad \forall t \in T_k,\ \forall k = 1, 2, \ldots, K.
\end{aligned}$$
  • This mathematical formula relies on the following notation:
  • V represents the set of receivers 18 and a respective source 16;
  • ν represents a sender in a routing tree 22;
  • k represents a data stream 12;
  • K represents the set of data streams 12;
  • Tk represents the routing tree set 20 for data stream k;
  • t represents a routing tree 22;
  • yt represents the routing tree data rate of routing tree t;
  • λ represents a data stream rate multiplier;
  • rk represents the data rate of data stream k;
  • mν,t represents the number of receivers to which sender ν transmits the data substream in routing tree t; and
  • C(ν) represents the upload capacity of sender ν.
  • This primal model resembles the primal model applicable to a single-data-stream peer-to-peer communications session, but takes into account the delivery of multiple data streams 12 through respective routing tree sets 20, such that the upload capacity that a sender consumes in sending a data substream of a first data stream 12 is evaluated together with the upload capacity that the same sender consumes in sending a data substream of a second data stream 12 (as per the constraint $\sum_k \sum_{t \in T_k} m_{\nu,t}\, y_t \le C(\nu)$.) Moreover, this primal model is oriented such that, if respective data streams 12 have a relative data rate, the primal model may be used to increase a data stream rate multiplier λ that applies proportionally to the relative data rates of all data streams 12. For example, if a first data stream 12 transmits at 1,024 Mbps and a second data stream 12 transmits at 2,048 Mbps, the primal model may be organized to permit the linear programming model to adjust the details of the routing so as to increase the data stream rate multiplier λ to a factor of 2.0, such that the first data stream 12 may be transmitted at 2,048 Mbps and the second data stream 12 may be transmitted at 4,096 Mbps.
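  • As a concrete, non-authoritative illustration of this primal model, the following sketch solves it with an off-the-shelf linear programming solver for a toy node set. The topology, the upload capacities, the relative data rates, and the small fixed sets of candidate routing trees are all assumptions chosen for brevity (a full system would generate trees as described herein rather than enumerate them by hand).

```python
from scipy.optimize import linprog

nodes = ["s1", "s2", "a", "b"]
C = {"s1": 2.0, "s2": 2.0, "a": 1.0, "b": 1.0}   # upload capacities C(v)
r = [1.0, 2.0]                                    # relative data rates r_k

# Candidate routing trees per stream; each tree is its m_{v,t} map,
# i.e., how many receivers each sender v feeds in that tree.
trees = [
    [{"s1": 1, "a": 1}, {"s1": 1, "b": 1}, {"s1": 2}],   # stream 0, source s1
    [{"s2": 1, "a": 1}, {"s2": 1, "b": 1}, {"s2": 2}],   # stream 1, source s2
]
flat = [(k, m) for k, ts in enumerate(trees) for m in ts]
n = len(flat)                        # one variable y_t per tree, plus lambda

c = [0.0] * n + [-1.0]               # linprog minimizes, so maximize lambda
A_eq = []                            # sum_{t in T_k} y_t - r_k * lambda = 0
for k in range(len(trees)):
    A_eq.append([1.0 if fk == k else 0.0 for fk, _ in flat] + [-r[k]])
b_eq = [0.0] * len(trees)

A_ub = []                            # sum_k sum_t m_{v,t} y_t <= C(v)
for v in nodes:
    A_ub.append([float(m.get(v, 0)) for _, m in flat] + [0.0])
b_ub = [C[v] for v in nodes]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1))
print("data stream rate multiplier lambda =", round(res.x[-1], 4))
for (k, m), y in zip(flat, res.x):
    if y > 1e-9:
        print(f"stream {k}: tree {m} carries rate {y:.4f}")
```

  • For this toy instance the optimum comes out to λ = 1.0, with s1 feeding both receivers directly and s2 relaying through each receiver at rate 1 apiece.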
  • While this primal model may be useful, it may be difficult to determine the proximity of the achieved throughput (i.e., the value of λ) to the theoretical throughput limit. An alternative model may therefore be devised, wherein the primal model is represented as a linear programming dual model that associates a routing price with respective senders in the route of a routing tree 22. A linear programming dual model for a multiple-data-stream and multiple-source peer-to-peer communications session may associate a routing price with respective senders in the route of a routing tree 22 of a data stream 12, where the routing price represents a per-unit flow cost of the data substream through the sender. Accordingly, the selecting 116 may comprise selecting routing capacities that reduce the routing prices of the senders of the routing trees 22 of respective data streams 12 in the routing tree set 20.
  • One such linear programming dual model may be represented according to the mathematical formula:

$$\begin{aligned}
\text{reduce} \quad & \sum_{\nu \in V} C(\nu)\, p_\nu \\
\text{subject to} \quad & \sum_{\nu \in V} m_{\nu,t}\, p_\nu \ge z_k, \quad \forall t \in T_k,\ \forall k = 1, 2, \ldots, K, \\
& \sum_{k=1}^{K} r_k z_k \ge 1, \\
& p_\nu \ge 0, \quad \forall \nu \in V.
\end{aligned}$$
  • This mathematical formula relies on the following notation:
  • V represents the set of receivers 18 and a source 16 of a respective data stream 12;
  • ν represents a sender in a routing tree 22;
  • C(ν) represents the upload capacity of sender ν;
  • pν represents the per-unit flow cost of sender ν;
  • k represents a data stream 12 from a source 16;
  • rk represents the data rate of data stream k;
  • zk represents a data stream constraint of data stream k;
  • Tk represents the routing tree set for data stream k; and
  • mν,t represents the number of receivers 18 to which sender ν transmits the data substream in routing tree t.
  • Thus, this mathematical formula for a linear programming dual model of a multiple-data-stream, multiple-source communications network seeks to reduce the network cost of using respective nodes to send data substreams for the set of data streams 12. An evaluation of this model may result in the selection of a set of low-cost routing trees 22 for sending data substreams of the data streams 12 that, together, reduce the cost to respective nodes of the node set 14.
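  • To make the dual concrete, the following short companion to the primal sketch above evaluates the dual objective and constraints for a hand-picked price vector on the same toy node set; the prices are illustrative assumptions, not an optimum.

```python
# Same toy instance as the primal sketch above.
C = {"s1": 2.0, "s2": 2.0, "a": 1.0, "b": 1.0}
r = [1.0, 2.0]
trees = [
    [{"s1": 1, "a": 1}, {"s1": 1, "b": 1}, {"s1": 2}],
    [{"s2": 1, "a": 1}, {"s2": 1, "b": 1}, {"s2": 2}],
]
p = {"s1": 0.10, "s2": 0.10, "a": 0.05, "b": 0.05}  # trial per-unit flow costs

# The constraint sum_v m_{v,t} p_v >= z_k must hold for every tree of
# stream k, so the largest admissible z_k is the cheapest tree price.
z = [min(sum(m[v] * p[v] for v in m) for m in ts) for ts in trees]

# Feasibility also requires sum_k r_k z_k >= 1; scale the prices up just
# enough to satisfy it, which scales the objective by the same factor.
scale = 1.0 / sum(rk * zk for rk, zk in zip(r, z))
dual_objective = scale * sum(C[v] * p[v] for v in C)
print("feasible dual objective:", round(dual_objective, 4))
# By weak duality this value (about 1.11) upper-bounds the multiplier
# lambda found by the primal sketch (1.0 for this instance).
```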
  • As with the single-data-stream models, the primal model and the linear programming dual model for the multiple-data-stream communications session may be difficult to evaluate for all data streams 12 across all available routing trees 22 in the routing tree set 20. Iterative processes may therefore be devised for performing the selecting 116 in an improved manner. In one such iterative process, the per-unit flow cost of a sender ν may be calculated according to the mathematical formula:
  • $$p_i(\nu) = p_{i-1}(\nu)\left(1 + \varepsilon\,\frac{m_{\nu,t_i}\, y_i}{C(\nu)}\right)$$
  • This mathematical formula relies on the following notation:
  • i represents an iteration of the selecting;
  • pi(ν) represents the per-unit flow cost of sender ν during iteration i; and
  • ε represents an optimality constraint.
  • According to this per-unit flow cost calculation, the flow cost for a node is based on the flow cost of the node in a prior iteration, the uploading data rate of the node for previously allocated data substreams of data streams, and the upload capacity of the node, in addition to the optimality constraint. An iterative process may utilize this per-unit flow cost calculation by selecting a low-cost routing tree 22 for an iteration and allocating the routing tree data rate of respective data streams 12 for the low-cost routing tree 22 based on residual upload capacities of the senders in the routing tree 22. After selecting a routing tree 22 for a particular data substream, the iterative process may calculate the residual upload capacities of the senders in the routing tree 22. The iteration may then be performed until the per-unit flow cost of the routing trees 22 is at least one, and no further data substreams of the data streams may be apportioned without exceeding the upload capacity of a node.
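  • A sketch of one such iterative process appears below. It is an assumption-laden reading of the approach, not the pseudocode block 130 of FIG. 10: the initialization borrows the first-iteration formula discussed for the single-data-stream model with L approximated by the number of senders, the phase structure that keeps per-stream rates proportional to rk is omitted, and the final rescaling of the accumulated rates is left out for brevity.

```python
def iterative_selection(trees, C, eps=0.1):
    """Repeatedly pick the cheapest candidate tree under the current
    prices, route along it, and apply the multiplicative price update
    until every tree's per-unit flow cost is at least one."""
    # Initialization loosely follows p_0(v) = (1+eps)((1+eps)L)^(-1/eps);
    # L is taken here as the number of senders for simplicity (an
    # assumption -- the patent derives L from aggregate upload rates).
    L = float(len(C))
    delta = (1 + eps) * ((1 + eps) * L) ** (-1.0 / eps)
    p = {v: delta for v in C}
    y = {}                                   # accumulated (scaled) tree rates
    while True:
        # cheapest candidate tree across all streams under current prices
        t = min((m for ts in trees for m in ts),
                key=lambda m: sum(m[v] * p[v] for v in m))
        if sum(m_v * p[v] for v, m_v in t.items()) >= 1.0:
            break                            # per-unit flow cost reached one
        # route as much as the tightest sender in the tree allows
        rate = min(C[v] / m_v for v, m_v in t.items())
        key = tuple(sorted(t.items()))
        y[key] = y.get(key, 0.0) + rate
        for v, m_v in t.items():             # multiplicative price update
            p[v] *= 1 + eps * m_v * rate / C[v]
    # The accumulated rates overshoot the true capacities and would be
    # rescaled (by a factor involving L and eps) before use.
    return y, p
```

  • Each iteration multiplies the price of the binding sender by (1 + ε), so the loop terminates once every candidate tree prices above one.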
  • FIG. 10 presents an exemplary iterative process that applies the linear programming dual model for a multiple-data-stream, multiple-source communications session, illustrated as a pseudocode block 130 embodying this iterative process. This pseudocode block 130 may resemble the pseudocode block 60 of FIG. 4, but also takes into account the selection of multiple routing tree sets 20 for routing the data substreams of multiple data streams 12 by multiple sources 16. Moreover, the data rates of the routing trees 22 may be scaled by a scaling factor, L, to promote the selection of data rates that do not exceed the upload capacity of various senders. In this iterative process, the scaling factor L may also be adjusted by counting phases of selection over the set of data streams 12.
  • It may be appreciated that the variations of the techniques discussed with respect to a single-data-stream communications session may be similarly applied, alone or in combination, to multiple-data-stream communications sessions. For example, the several techniques for identifying low-cost routing trees 22 among the routing tree sets 20 may be similarly selected for multiple-data-stream communication sessions in view of the other network parameters (full or partial interconnectedness, upload connection caps, and the presence or absence of helpers), and may be similarly applied during various iterations. The improvements of such techniques may also be utilized, e.g., to remove leaf helpers 34 and to reallocate connections among the nodes of selected routing trees 22 to remove non-leaf helpers 32 and/or to reduce further the costs of the routing trees. Those of ordinary skill in the art may devise many embodiments and improvements of the application of such primal and linear programming dual models to multiple-data-stream communications sessions while implementing the techniques discussed herein.
  • Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
  • As used in this application, the terms “component,” “module,” “system”, “interface”, and the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
  • Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.
  • FIG. 11 and the following discussion provide a brief, general description of a suitable computing environment to implement embodiments of one or more of the provisions set forth herein. The operating environment of FIG. 11 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment. Example computing devices include, but are not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, mini computers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • Although not required, embodiments are described in the general context of “computer readable instructions” being executed by one or more computing devices. Computer readable instructions may be distributed via computer readable media (discussed below). Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types. Typically, the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
  • FIG. 11 illustrates an example of a system 140 comprising a computing device 142 configured to implement one or more embodiments provided herein. In one configuration, computing device 142 includes at least one processing unit 146 and memory 148. Depending on the exact configuration and type of computing device, memory 148 may be volatile (such as RAM, for example), non-volatile (such as ROM, flash memory, etc., for example) or some combination of the two. This configuration is illustrated in FIG. 11 by dashed line 144.
  • In other embodiments, device 142 may include additional features and/or functionality. For example, device 142 may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, magnetic storage, optical storage, and the like. Such additional storage is illustrated in FIG. 11 by storage 150. In one embodiment, computer readable instructions to implement one or more embodiments provided herein may be in storage 150. Storage 150 may also store other computer readable instructions to implement an operating system, an application program, and the like. Computer readable instructions may be loaded in memory 148 for execution by processing unit 146, for example.
  • The term “computer readable media” as used herein includes computer storage media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions or other data. Memory 148 and storage 150 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by device 142. Any such computer storage media may be part of device 142.
  • Device 142 may also include communication connection(s) 156 that allows device 142 to communicate with other devices. Communication connection(s) 156 may include, but is not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting computing device 142 to other computing devices. Communication connection(s) 156 may include a wired connection or a wireless connection. Communication connection(s) 156 may transmit and/or receive communication media.
  • The term “computer readable media” may include communication media. Communication media typically embodies computer readable instructions or other data in a “modulated data signal” such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” may include a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • Device 142 may include input device(s) 154 such as keyboard, mouse, pen, voice input device, touch input device, infrared cameras, video input devices, and/or any other input device. Output device(s) 152 such as one or more displays, speakers, printers, and/or any other output device may also be included in device 142. Input device(s) 154 and output device(s) 152 may be connected to device 142 via a wired connection, wireless connection, or any combination thereof. In one embodiment, an input device or an output device from another computing device may be used as input device(s) 154 or output device(s) 152 for computing device 142.
  • Components of computing device 142 may be connected by various interconnects, such as a bus. Such interconnects may include a Peripheral Component Interconnect (PCI), such as PCI Express, a Universal Serial Bus (USB), firewire (IEEE 1394), an optical bus structure, and the like. In another embodiment, components of computing device 142 may be interconnected by a network. For example, memory 148 may be comprised of multiple physical memory units located in different physical locations interconnected by a network.
  • Those skilled in the art will realize that storage devices utilized to store computer readable instructions may be distributed across a network. For example, a computing device 160 accessible via network 158 may store computer readable instructions to implement one or more embodiments provided herein. Computing device 142 may access computing device 160 and download a part or all of the computer readable instructions for execution. Alternatively, computing device 142 may download pieces of the computer readable instructions, as needed, or some instructions may be executed at computing device 142 and some at computing device 160.
  • Various operations of embodiments are provided herein. In one embodiment, one or more of the operations described may constitute computer readable instructions stored on one or more computer readable media, which if executed by a computing device, will cause the computing device to perform the operations described. The order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated by one skilled in the art having the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein.
  • Moreover, the word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as advantageous over other aspects or designs. Rather, use of the word exemplary is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims may generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form.
  • Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The disclosure includes all such modifications and alterations and is limited only by the scope of the following claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure which performs the function in the herein illustrated exemplary implementations of the disclosure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms “includes”, “having”, “has”, “with”, or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising.”

Claims (20)

1. A method of transmitting a data stream among a node set comprising a source of the data stream and a set of receivers over a routing tree set, respective routing trees specifying a route of the data stream among the source and the receivers, the method comprising:
representing the node set as a primal model allocating a routing tree data rate of the data stream for respective routing trees, the routing tree data rate within a capacity of respective nodes and within a capacity of respective connections between nodes in the route of the routing tree;
selecting routing tree data rates for respective routing trees that increase an aggregated data rate according to the primal model;
apportioning data substreams of the data stream for respective routing trees according to the routing tree data rate of the routing tree; and
transmitting the data substreams over the respective routing trees at the respective routing tree data rates.
2. The method of claim 1, the primal model represented according to the mathematical formula:

$$\begin{aligned}
\text{increase} \quad & r = \sum_{t \in T} y_t \\
\text{subject to} \quad & \sum_{t \in T} m_{\nu,t}\, y_t \le C(\nu), \quad \forall \nu \in V, \\
& y_t \ge 0, \quad \forall t \in T,
\end{aligned}$$
wherein:
V represents the set of receivers and the source;
ν represents a sender in a routing tree;
C(ν) represents the upload capacity of sender ν;
T represents the routing tree set;
t represents a routing tree;
yt represents the routing tree data rate of routing tree t;
mν,t represents the number of receivers to which sender ν transmits the data substream in routing tree t; and
r represents the aggregated data rate of the data stream transmitted over the routing trees.
3. The method of claim 1:
the primal model represented as a linear programming dual model associating a routing price with respective senders in the route of a routing tree, the routing price representing a per-unit flow cost of the data substream through the sender; and
the selecting comprising: selecting routing capacities that reduce the routing prices of the senders of the routing trees in the routing tree set.
4. The method of claim 3, the linear programming dual model represented according to the mathematical formula:

$$\begin{aligned}
\text{reduce} \quad & \sum_{\nu \in V} C(\nu)\, p_\nu \\
\text{subject to} \quad & \sum_{\nu \in V} m_{\nu,t}\, p_\nu \ge 1, \quad \forall t \in T, \\
& p_\nu \ge 0, \quad \forall \nu \in V,
\end{aligned}$$
wherein:
V represents the set of receivers and the source;
ν represents a sender in a routing tree;
C(ν) represents the upload capacity of sender ν;
T represents the routing tree set;
t represents a routing tree;
mν,t represents the number of receivers to which sender ν transmits the data substream in routing tree t; and
pν represents the per-unit flow cost of sender ν.
5. The method of claim 3:
the selecting comprising: iteratively selecting the routing capacities of the routing trees by:
selecting a low-cost routing tree for an iteration;
allocating the routing tree data rate for the low-cost routing tree based on residual capacities of the nodes and within a capacity of respective connections between nodes in the routing tree; and
calculating the residual capacities of the nodes and connections between nodes in the routing tree,
until the per-unit flow cost of the routing trees is at least one; and
the per-unit flow cost of sender ν calculated according to the mathematical formula:
$$p_i(\nu) = p_{i-1}(\nu)\left(1 + \varepsilon\,\frac{m_{\nu,t_i}\, y_i}{C(\nu)}\right)$$
wherein:
i represents an iteration of the selecting;
pi(ν) represents the per-unit flow cost of sender ν during iteration i; and
ε represents an optimality constraint.
6. The method of claim 5, the per-unit flow cost of sender ν during a first iteration calculated according to the mathematical formula:
$$p_0(\nu) = (1+\varepsilon)\bigl((1+\varepsilon)L\bigr)^{-\frac{1}{\varepsilon}}$$
wherein L represents a scaling factor computed according to the mathematical formula:
$$L = \max_{\nu \in V} \frac{U(\nu)}{C(\nu)}$$
wherein U(ν) represents an aggregate upload rate of sender ν among all routing trees.
7. The method of claim 5:
the node set comprising at least zero helpers;
the source, receivers, and helpers capable of sending to the respective receivers and respective helpers without an upload connections cap;
the routing price calculated as an effective routing price according to the mathematical formula:
$$\hat{p}(\nu) = \begin{cases} p(\nu) & \text{if } \nu \in \{s\} \cup R \\ p(\nu)\,\dfrac{|R|}{|R|-1} & \text{if } \nu \in H \end{cases}$$
wherein:
{circumflex over (p)}(ν) represents the effective routing price,
s represents the source,
R represents the set of receivers, and
H represents the set of helpers; and
selecting the low-cost routing tree comprising:
selecting a sender ν having a low effective routing price among the senders of the node set;
if ν is the source, generating a routing tree routing a data substream from the source to the receivers;
if ν is a receiver, generating a routing tree routing a data substream from the source to ν and from ν to the receivers except ν; and
if ν is a helper, generating a routing tree routing a data substream from the source to ν and from ν to the receivers.
8. The method of claim 5:
the source and receivers capable of sending to the respective receivers and respective helpers with an upload connections cap of upload connections; and
selecting the low-cost routing tree comprising:
selecting a sender ν having a low routing price among the senders of the node set;
generating a routing tree routing a data substream from the source to ν; and
recursively extending the routing tree from senders to receivers by:
selecting a sender ν having a low routing price among the receivers included in the routing tree and having at least one fewer upload connection as compared with the upload connections cap;
selecting a receiver να having a low routing price among the receivers not yet included in the routing tree; and
extending the routing tree to route the data substream from ν to receiver να,
until the routing tree includes the set of receivers.
9. The method of claim 5:
the node set comprising at least zero helpers;
the source, receivers, and helpers capable of sending to the respective receivers and respective helpers with an upload connections cap of upload connections; and
selecting the low-cost routing tree comprising:
selecting a sender ν having a low routing price among the senders and helpers of the node set;
generating a routing tree routing a data substream from the source to ν; and
recursively extending the routing tree by:
selecting a sender να having a low routing price among the senders included in the routing tree and having at least one fewer upload connection as compared with the upload connections cap, and
extending the routing tree to route the data substream from να to respective low-priced nodes not yet included in the routing tree until the upload connections of να equal the upload connections cap,
until the routing tree includes the set of receivers.
10. The method of claim 9, the selecting comprising: after selecting the low-cost routing tree, removing at least one leaf helper by:
identifying a leaf helper in the routing tree;
selecting a high-cost node having at least one upload connection;
removing the leaf helper from the routing tree;
transferring an upload connection from the high-cost node to the sender of the leaf helper in the routing tree; and
if the high-cost node is a helper:
identifying at least one low-cost node having at least one fewer upload connection as compared with the upload connections cap;
transferring the receivers of the high-cost node to the at least one low-cost node; and
removing the high-cost node from the routing tree.
11. The method of claim 5:
the node set comprising at least zero helpers;
the source, receivers, and helpers capable of sending to a neighbor set of respective receivers and respective helpers; and
selecting the low-cost routing tree comprising:
selecting a sender ν having a low routing price among the senders and helpers of the node set;
generating a routing tree routing a data substream from the source to ν; and
recursively extending the routing tree by:
selecting a sender να having a low routing price among the receivers included in the routing tree, and
extending the routing tree to route the data substream from να to nodes in the neighbor set of να that are not yet included in the routing tree,
until the routing tree includes the set of receivers.
12. The method of claim 11, the selecting comprising:
after selecting the low-cost routing tree, removing leaf helpers; and
after selecting the low-cost routing tree:
for respective non-leaf helpers:
upon identifying that the upload connections of the non-leaf helper have at least one neighbor node other than the non-leaf helper:
upon identifying that a respective upload connection of the non-leaf helper node has a neighbor node with a lower routing cost than the non-leaf helper, transferring the upload connection to an upload connection of the neighbor node; and
upon removing the upload connections of the non-leaf helper, removing the non-leaf helper from the routing tree.
13. A method of transmitting at least two data streams among a node set comprising at least two sources of the respective data streams and a set of receivers over a routing tree set, respective routing trees specifying a route of the data stream among a respective source and the receivers, the method comprising:
representing the node set as a primal model allocating a routing tree data rate of respective data streams for respective routing trees, the routing tree data rate within a capacity of respective nodes and connections between nodes in the route of the routing tree;
selecting routing tree data rates for respective routing trees that increase an aggregated data rate according to the primal model;
apportioning data substreams of the respective data streams for respective routing trees according to the routing tree data rate of the routing tree; and
transmitting the data substreams of the respective data streams over the respective routing trees at the respective routing tree data rates.
14. The method of claim 13, the primal model represented according to the mathematical formula:

$$\begin{aligned}
\text{increase} \quad & \lambda \\
\text{subject to} \quad & \sum_{t \in T_k} y_t = \lambda r_k, \quad \forall k = 1, 2, \ldots, K, \\
& \sum_{k} \sum_{t \in T_k} m_{\nu,t}\, y_t \le C(\nu), \quad \forall \nu \in V, \\
& y_t \ge 0, \quad \forall t \in T_k,\ \forall k = 1, 2, \ldots, K,
\end{aligned}$$
wherein:
V represents the set of receivers and the source;
ν represents a sender in a routing tree;
k represents a data stream;
K represents the set of data streams;
Tk represents the routing tree set for data stream k;
t represents a routing tree;
yt represents the routing tree data rate of routing tree t;
λ represents a data stream rate multiplier;
rk represents the data rate of data stream k;
mν,t represents the number of receivers to which sender ν transmits the data substream in routing tree t; and
C(ν) represents the upload capacity of sender ν.
15. The method of claim 13:
the primal model represented as a linear programming dual model associating a routing price with respective senders in the route of a routing tree of a data stream, the routing price representing a per-unit flow cost of the data substream through the sender; and
the selecting comprising: selecting routing capacities that reduce the routing prices of the senders of the routing trees of respective data streams in the routing tree set.
16. The method of claim 15, the linear programming dual model represented according to the mathematical formula:

$$\begin{aligned}
\text{reduce} \quad & \sum_{\nu \in V} C(\nu)\, p_\nu \\
\text{subject to} \quad & \sum_{\nu \in V} m_{\nu,t}\, p_\nu \ge z_k, \quad \forall t \in T_k,\ \forall k = 1, 2, \ldots, K, \\
& \sum_{k=1}^{K} r_k z_k \ge 1, \\
& p_\nu \ge 0, \quad \forall \nu \in V,
\end{aligned}$$
wherein:
V represents a set of receivers and a source of a respective data stream;
ν represents a sender in a routing tree;
C(ν) represents the upload capacity of sender ν;
pν represents the per-unit flow cost of sender ν; and
k represents a data stream from a source;
rk represents the data rate of data stream k;
zk represents a data stream constraint of data stream k;
Tk represents the routing tree set for data stream k; and
mν,t represents the number of receivers to which sender ν transmits the data substream in routing tree t.
17. The method of claim 15, the selecting comprising: iteratively selecting the routing capacities of the routing trees by:
selecting a low-cost routing tree for an iteration;
allocating the routing tree data rate of respective data streams for the low-cost routing tree based on residual capacities of the nodes and residual capacities of connections between nodes in the routing tree; and
calculating the residual capacities of the nodes and residual capacities of connections between nodes in the routing tree,
until the per-unit flow cost of the routing trees is at least one; and
the per-unit flow cost of sender ν calculated according to the mathematical formula:
$$p_i(\nu) = p_{i-1}(\nu)\left(1 + \varepsilon\,\frac{m_{\nu,t_i}\, y_i}{C(\nu)}\right)$$
wherein:
i represents an iteration of the selecting;
pi(ν) represents the per-unit flow cost of sender ν during iteration i; and
ε represents an optimality constraint.
18. The method of claim 16:
the node set comprising at least zero helpers;
the source, receivers, and helpers capable of sending to the respective receivers and respective helpers without an upload connections cap;
the routing price calculated as an effective routing price according to the mathematical formula:
$$\hat{p}(\nu) = \begin{cases} p(\nu) & \text{if } \nu \in \{s\} \cup R \\ p(\nu)\,\dfrac{|R|}{|R|-1} & \text{if } \nu \in H \end{cases}$$
wherein:
{circumflex over (p)}(ν) represents the effective routing price,
s represents the source,
R represents the set of receivers, and
H represents the set of helpers; and
selecting the low-cost routing tree comprising:
selecting a sender ν having a low effective routing price among the senders of the node set;
if ν is the source, generating a routing tree routing a data substream from the source to the receivers;
if ν is a receiver, generating a routing tree routing a data substream from the source to ν and from ν to the receivers except ν; and
if ν is a helper, generating a routing tree routing a data substream from the source to ν and from ν to the receivers.
19. The method of claim 16:
the source and receivers capable of sending to the respective receivers and respective helpers with an upload connections cap of upload connections; and
selecting the low-cost routing tree comprising:
selecting a sender ν having a low routing price among the senders of the node set;
generating a routing tree routing a data substream from the source to ν; and
recursively extending the routing tree from predecessor sender ν to the receivers by:
selecting a receiver να having a low routing price among the receivers not yet included in the routing tree and having at least one fewer upload connection as compared with the upload connections cap;
extending the routing tree to route the data substream from ν to receiver να; and
selecting να as predecessor sender ν,
until the routing tree includes the set of receivers.
20. A method of transmitting a data stream among a node set comprising a source of the data stream and a set of receivers over a routing tree set, respective routing trees specifying a route of the data stream among the source and the receivers, the method comprising:
representing the routing tree set as a primal model allocating a routing tree data rate of the data stream for respective routing trees, the routing tree data rate within a capacity of respective nodes and within a capacity of respective connections between nodes in the route of the routing tree, and the primal model represented as a linear programming dual model associating a routing price with respective senders in the route of a routing tree, the routing price representing a per-unit flow cost of the data substream through the sender, according to the mathematical formula:

$$\begin{aligned}
\text{reduce} \quad & \sum_{\nu \in V} C(\nu)\, p_\nu \\
\text{subject to} \quad & \sum_{\nu \in V} m_{\nu,t}\, p_\nu \ge 1, \quad \forall t \in T, \\
& p_\nu \ge 0, \quad \forall \nu \in V
\end{aligned}$$
wherein:
V represents the set of receivers and the source;
ν represents a sender in a routing tree;
C(ν) represents the upload capacity of sender ν;
T represents the routing tree set;
t represents a routing tree;
mν,t represents the number of receivers to which sender ν transmits the data substream in routing tree t; and
pν represents the per-unit flow cost of sender ν;
iteratively selecting routing tree data rates for respective routing trees that reduce the routing prices of the senders of the routing trees in the routing tree set by:
selecting a low-cost routing tree for an iteration;
allocating the routing tree data rate for the low-cost routing tree based on residual capacities of nodes and on residual capacities of connections between nodes in the routing tree; and
calculating the residual capacities of the nodes and the residual capacities of connections between nodes in the routing tree,
until the per-unit flow cost of the routing trees is at least one; and
the per-unit flow cost of sender ν calculated according to the mathematical formula:
$$p_i(\nu) = p_{i-1}(\nu)\left(1 + \varepsilon\,\frac{m_{\nu,t_i}\, y_i}{C(\nu)}\right)$$
wherein:
i represents an iteration of the selecting;
pi(ν) represents the per-unit flow cost of sender ν during iteration i;
ε represents an optimality constraint;
the per-unit flow cost of sender ν during a first iteration calculated according to the mathematical formula:
$$p_0(\nu) = (1+\varepsilon)\bigl((1+\varepsilon)L\bigr)^{-\frac{1}{\varepsilon}}$$
wherein L represents a scaling factor computed according to the mathematical formula:
$$L = \max_{\nu \in V} \frac{U(\nu)}{C(\nu)}$$
wherein U(ν) represents an aggregate upload rate of sender ν among all routing trees;
apportioning data substreams of the data stream for respective routing trees according to the routing tree data rate of the routing tree; and
transmitting the data substreams over the respective routing trees at the respective routing tree data rates.
US12/247,431 2008-10-08 2008-10-08 Models for routing tree selection in peer-to-peer communications Expired - Fee Related US7738406B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/247,431 US7738406B2 (en) 2008-10-08 2008-10-08 Models for routing tree selection in peer-to-peer communications


Publications (2)

Publication Number Publication Date
US20100085979A1 (en) 2010-04-08
US7738406B2 (en) 2010-06-15

Family

ID=42075766

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/247,431 Expired - Fee Related US7738406B2 (en) 2008-10-08 2008-10-08 Models for routing tree selection in peer-to-peer communications

Country Status (1)

Country Link
US (1) US7738406B2 (en)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8437281B2 (en) * 2007-03-27 2013-05-07 Cisco Technology, Inc. Distributed real-time data mixing for conferencing
JP5104489B2 (en) * 2008-04-03 2012-12-19 日本電気株式会社 Distributed event detection system, distributed event detection method, and distributed event detection program
KR101295875B1 (en) * 2009-12-07 2013-08-12 한국전자통신연구원 Operating method of sensor network and network node providing routing mechanism supporting real time transmission of prior information
KR101212366B1 (en) * 2010-11-25 2012-12-13 엔에이치엔비즈니스플랫폼 주식회사 System and method for controlling server usage in streaming service based on peer to peer
FR3011704A1 (en) * 2013-10-07 2015-04-10 Orange METHOD FOR IMPLEMENTING A COMMUNICATION SESSION BETWEEN A PLURALITY OF TERMINALS


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9210085B2 (en) 2006-10-05 2015-12-08 Bittorrent, Inc. Peer-to-peer streaming of non-live content

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030185166A1 (en) * 2000-11-08 2003-10-02 Belcea John M. Time division protocol for an AD-HOC, peer-to-peer radio network having coordinating channel access to shared parallel data channels with separate reservation channel
US20020161898A1 (en) * 2000-12-20 2002-10-31 Scott Hartop Streaming of data
US20040236863A1 (en) * 2003-05-23 2004-11-25 Microsoft Corporation Systems and methods for peer-to-peer collaboration to enhance multimedia streaming
US20060007947A1 (en) * 2004-07-07 2006-01-12 Jin Li Efficient one-to-many content distribution in a peer-to-peer computer network
US20070064405A1 (en) * 2004-08-02 2007-03-22 Asustek Computer Inc. Shock absorber assembly and portable computer utilizing the same
US20060187860A1 (en) * 2005-02-23 2006-08-24 Microsoft Corporation Serverless peer-to-peer multi-party real-time audio communication system and method
US20070094405A1 (en) * 2005-10-21 2007-04-26 Zhang Xinyan System and method for presenting streaming media content
US20070280255A1 (en) * 2006-04-25 2007-12-06 The Hong Kong University Of Science And Technology Intelligent Peer-to-Peer Media Streaming
US7636789B2 (en) * 2007-11-27 2009-12-22 Microsoft Corporation Rate-controllable peer-to-peer data stream routing

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110078200A1 (en) * 2009-09-30 2011-03-31 Eric Williamson Systems and methods for conditioning the distribution of data in a hierarchical database
US8984013B2 (en) * 2009-09-30 2015-03-17 Red Hat, Inc. Conditioning the distribution of data in a hierarchical database
US8996453B2 (en) 2009-09-30 2015-03-31 Red Hat, Inc. Distribution of data in a lattice-based database via placeholder nodes
US9031987B2 (en) 2009-09-30 2015-05-12 Red Hat, Inc. Propagation of data changes in distribution operations in hierarchical database
US8358765B1 (en) * 2010-03-31 2013-01-22 Cox Communications, Inc. System for simultaneous delivery of communication session invitation messages to communication devices
US9444887B2 (en) * 2011-05-26 2016-09-13 Qualcomm Incorporated Multipath overlay network and its multipath management protocol
US20120303822A1 (en) * 2011-05-26 2012-11-29 Qualcomm Incorporated Multipath overlay network and its multipath management protocol
US8995338B2 (en) 2011-05-26 2015-03-31 Qualcomm Incorporated Multipath overlay network and its multipath management protocol
US8885502B2 (en) 2011-09-09 2014-11-11 Qualcomm Incorporated Feedback protocol for end-to-end multiple path network systems
US20130297731A1 (en) * 2012-05-04 2013-11-07 The Hong Kong University Of Science And Technology Content distribution over a network
US9654329B2 (en) * 2012-05-04 2017-05-16 The Hong Kong University Of Science And Technology Content distribution over a network
US20150195189A1 (en) * 2014-01-07 2015-07-09 Alcatel Lucent Usa, Inc. Multiple tree routed selective randomized load balancing
CN107396204A (en) * 2017-06-12 2017-11-24 江苏大学 A kind of P2P video request program node selecting methods based on linear programming and intensified learning
US11062722B2 (en) * 2018-01-05 2021-07-13 Summit Wireless Technologies, Inc. Stream adaptation for latency

Also Published As

Publication number Publication date
US7738406B2 (en) 2010-06-15


Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION,WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LIU, SHAO;SENGUPTA, SUDIPTA;CHIANG, MUNG;AND OTHERS;SIGNING DATES FROM 20081203 TO 20081210;REEL/FRAME:022262/0948

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034564/0001

Effective date: 20141014

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552)

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20220615