CA2050692C - Fair access of multi-priority traffic to distributed-queue dual-bus networks - Google Patents

Fair access of multi-priority traffic to distributed-queue dual-bus networks

Info

Publication number
CA2050692C
CA2050692C CA002050692A CA2050692A
Authority
CA
Canada
Prior art keywords
node
data
request
priority
bus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA002050692A
Other languages
French (fr)
Other versions
CA2050692A1 (en)
Inventor
Ellen Louise Hahne
Nicholas Frank Maxemchuk
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Corp
Original Assignee
American Telephone and Telegraph Co Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by American Telephone and Telegraph Co Inc filed Critical American Telephone and Telegraph Co Inc
Publication of CA2050692A1 publication Critical patent/CA2050692A1/en
Application granted granted Critical
Publication of CA2050692C publication Critical patent/CA2050692C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2852 Metropolitan area networks

Abstract

Bandwidth balancing is accomplished in Distributed-Queue Dual-Bus (DQDB) networks that handle multi-priority traffic by causing each node to throttle its own rate of transmission in accordance with the priority of the data that the node transmits. In accordance with one approach, each node limits its throughput to the product of a bandwidth balancing factor (which is a fraction that varies according to the priority level of data) and the unused bus capacity. When parcels of different priorities are received within each node, the parcels are processed in priority order. In accordance with another approach, all active parcels within a node are handled concurrently and receive some bandwidth. The throughput of each parcel in a node is limited to the product of the bandwidth balancing factor and the unused bus capacity. In accordance with still another approach, each traffic parcel limits its throughput to the product of the bandwidth balancing factor and the bus capacity unused by parcels of equal or higher priority. This scheme allocates bandwidth first to the higher-priority parcels, then allocates the leftovers to the lower-priority parcels. Lower-priority parcels have no effect on the steady-state throughputs of higher-priority parcels.

Description

FAIR ACCESS OF MULTI-PRIORITY TRAFFIC
TO DISTRIBUTED-QUEUE DUAL-BUS NETWORKS

Background of the Invention

This relates to communication systems and, more particularly, to protocols for fair allocation of transmission resources in a communications system.
The Distributed-Queue Dual-Bus (DQDB) is a communication network with a slotted access protocol that is currently being standardized by the IEEE 802.6 Working Group. As the transmission rates and distances spanned by networks increase, slotted networks can be much more efficient than token-passing networks. However, in slotted networks, the grade of service provided to nodes can depend on their relative position. The combination of the network span, transmission rate, and slot size of DQDB allows many slots to be in transit between the nodes. On a long network, if the access protocol is too efficient and tries never to waste a slot, then users can receive very unfair service, especially during large file transfers. Moreover, if the file transfers are of different priority, then the priority mechanism can be completely ineffective.
In an invention disclosed in Canadian Patent Application No. 2,020,236, filed June 28, 1990, we describe a number of techniques for explicit bandwidth balancing. Our disclosed bandwidth balancing intentionally wastes a small amount of bus bandwidth in order to facilitate coordination among the nodes currently using that bus, but it divides the remaining bandwidth equally among those nodes. The key idea is that the maximum permissible nodal throughput rate is proportional to the unused bus capacity, and that each node can determine this unused capacity by locally observing the volume of busy slots and reservations. The system achieves fairness gradually, over an interval several times longer than the propagation delay between competing nodes.
This bandwidth balancing, which was incorporated as a DQDB protocol option into the IEEE 802.6 draft standard, guarantees equal allocations of bus bandwidth to nodes with heavy demand at the lowest priority level. It turns out that a node with higher-priority traffic is guaranteed at least as much bandwidth as a lowest-priority node, but no further guarantees are possible. As long as the higher-priority applications do not require more bandwidth than the lowest-priority applications, this performance guarantee is sufficient. However, if priorities are to be assigned to applications with significant volume, such as packet video and bridging of high-speed local area networks, then improvements may be desirable.
It is difficult to make priorities work effectively over long DQDB networks with the current control information. The reason for this lies in the fact that while the priority level of a reservation is known, the priority level of the data in a busy slot is not known. (The priority level is typically designated with a priority-designating field associated with each reservation.) It is an object of this invention to make the network fairer in an environment which supports traffic of multiple priority levels, and at least account for the different priorities of packets in the allocation of bus bandwidth.
Summary of the Invention
In accordance with the principles of this invention, bandwidth balancing is accomplished in DQDB networks that handle multi-priority traffic by causing each node to throttle its own rate of transmission in accordance with the priority of the data that the node transmits. All of the disclosed approaches rely on wasting a fraction of the available bandwidth in order to ensure fairness.
In accordance with one approach, each node limits its throughput to the product of some fraction and the unused bus capacity. The bandwidth balancing factor varies according to the priority level at which the node is currently transmitting. Within each node, when parcels of different priorities are received, the node processes its parcels in strict priority order; lower-priority parcels are only served when no higher-priority parcels are waiting. Thus, this scheme allocates bandwidth to all nodes in proportion to the bandwidth balancing factors of their current (highest) priority levels. This first approach is called "local per-node": "local" because the node only needs to know the priority of its own locally generated data, and "per-node" because individual nodes are the entities that are treated fairly.
In accordance with another approach, all active parcels within a node receive some bandwidth. The throughput of each parcel is limited to the product of the bandwidth balancing factor (which depends on the priority level of the parcel) and the unused bus capacity. Thus, this scheme allocates bandwidth to all parcels, regardless of which nodes they belong to, in proportion to their bandwidth balancing factors. This is called the "local per-parcel" approach.
In accordance with still another approach, each traffic parcel limits its throughput to the product of the bandwidth balancing factor and the bus capacity that is unused by parcels of equal or higher priority. Thus, this scheme allocates bandwidth first to the higher-priority parcels, then allocates the leftovers to the lower-priority parcels. Lower-priority parcels have no effect on the steady-state throughputs of higher-priority parcels. This is called the "global per-parcel" approach: "global" because each node needs to know the priority of the data generated by all the other nodes.
Brief Description of the Drawings
FIG. 1 presents a section of a DQDB network and illustrates one of the time slots appearing in the two buses of the FIG. 1 network;

FIG. 2 depicts one embodiment of a bandwidth balancing node that may be used for uni-priority operation;

FIG. 3 illustrates one embodiment for a multi-priority node where parcels are injected into the network in priority order;

FIG. 4 is a block diagram of a multi-priority node that handles all parcels concurrently;

FIG. 5 is a coalesced version of the FIG. 4 embodiment; and

FIG. 6 presents a block diagram of a multi-priority node where all parcels are handled simultaneously, with priority information on the data bus as well as on the reservation bus.
Detailed Description

The significant difference between the DQDB protocol and previous slotted access protocols is the use of each bus to reserve slots on the other bus in order to make the access fairer. FIG. 1 depicts a section of a DQDB network, with nodes 11, 12, 13 and 14, and buses 10 and 20 passing through the nodes. Each node is capable of sending traffic to "downstream" nodes and receiving traffic from "upstream" nodes. The traffic is in the form of slots 15, and each slot contains a header field and a data field. The current proposal for the header field includes a single busy bit, two bits that can be used to designate the priority of the data in the busy slot, and a request bit field containing one request bit for each priority level.

The request bits on one bus are used to notify nodes with prior access to the data slots on the other bus that a node is waiting. When a node wants to transmit a data segment on a bus, it sets a request bit on the opposite bus and waits for an empty slot on the desired bus. The busy bit indicates whether another node (upstream) has already inserted a segment of data into the slot.
The operation for data transmission in both directions is identical. Therefore, for the remainder of this disclosure, operation in only one direction is described. More specifically, henceforth, bus 10 is the data bus and bus 20 is the reservation bus.
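The slot format just described can be sketched as a small data structure. This is a minimal illustration only; the field names and the number of priority levels are our assumptions, not the IEEE 802.6 field names.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the slot header described above: one busy bit,
# two bits designating the priority of the data in a busy slot, and one
# request bit per priority level.  Names are hypothetical.
NUM_PRIORITY_LEVELS = 3

@dataclass
class SlotHeader:
    busy: bool = False          # set once an upstream node fills the slot
    data_priority: int = 0      # priority of the data in a busy slot (2 bits)
    request_bits: list = field(
        default_factory=lambda: [False] * NUM_PRIORITY_LEVELS)

@dataclass
class Slot:
    header: SlotHeader = field(default_factory=SlotHeader)
    payload: bytes = b""

# A node wanting to send at priority p sets request_bits[p] on the
# opposite bus, then waits for a non-busy slot on the desired bus.
slot = Slot()
slot.header.request_bits[2] = True   # reserve at the highest priority
```

The busy bit and the request field are the only control information the first two approaches below rely on; the two data-priority bits matter only for the "global per-parcel" approach at the end.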

A brief review of the bandwidth balancing approach which was disclosed in the aforementioned application and adopted by the IEEE 802.6 Working Group may be appropriate.
It may be noted at this point that in the previously disclosed bandwidth balancing approach it is the nodes that receive "fair treatment". However, when nodes have traffic of different priority levels, perhaps a different fundamental "fair treatment" entity should be considered -- such as parcels, where a parcel is traffic originating at a node at a particular priority level, for transmission over the data bus.
In what follows, it is assumed that the traffic demand of each node n has some fixed rate p(n). This offered load can be stochastic, as long as it has a well-defined average rate. The offered load of the traffic parcel of priority level p at node n is p_p(n). The actual long-term average throughput of node n is r(n), and that of its parcel p is r_p(n). The unused bus capacity is U.
Employing the above nomenclature in the previously disclosed bandwidth balancing approach, the unused bus capacity on bus 10 is

    U = 1 - \sum_{m=1}^{N} r(m) ,     (1)

where N is the total number of nodes in the network. Of course, any individual node n, at any instant t, does not have direct knowledge of the long-term average rates defined above. All the node can see is the rate of busy slots coming into node n at time t from nodes upstream (designated B(n,t)), the rate of requests coming into node n at time t from nodes downstream (designated R(n,t)), and the rate at which node n serves its own data segments at time t (designated S(n,t)). These observations can be used to determine the bus capacity that is unallocated by node n at time t, i.e., U(n,t):
    U(n,t) = 1 - B(n,t) - R(n,t) - S(n,t) .     (2)

The "unallocated" capacity is the capacity that is neither used by nodes upstream of n, nor requested by nodes downstream of n, nor taken by node n itself.
For traffic with no priority designations (or uni-priority), bandwidth balancing achieves fairness by guaranteeing that there is some unused bus capacity, and asking each node to limit its throughput to some multiple F (the bandwidth balancing factor) of that unused capacity. Nodes with less demand than this can have all the bandwidth they desire. Thus, the average throughput of a node n, with bandwidth balancing, is

    r(n) = \min [ p(n), F \cdot U ] = \min [ p(n), F \cdot (1 - \sum_m r(m)) ] .     (3)

The approach reflected by equation 3 is fair in the sense that all rate-controlled nodes get the same bandwidth. Equivalently, equation 3 may be written as

    r(n) / F = \min [ p(n) / F, U ] .     (4)

Equation 3 represents a set of almost linear equations (one for each node) in the unknowns r(n). In the special case where all N nodes have heavy demand (p(n) is large), the solution of these equations has a simple form:

    r(n) = F / (1 + F \cdot N) .     (5)

That is, all N nodes have the same throughput, with a value specified by equation 5. That means that if F = 4 and there are three nodes with heavy demand, then each node gets 4/13 of the bus capacity, and 1/13 of the bus capacity goes unused. The total bus utilization (which in the case of equation 5 is F \cdot N / (1 + F \cdot N)) increases as N and/or F increase.
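The heavy-demand solution of equation 5 is easy to check numerically. The following sketch uses the document's own symbols (F, N) and reproduces the 4/13 example:

```python
from fractions import Fraction

def heavy_demand_throughput(F, N):
    """Per-node throughput under uni-priority bandwidth balancing,
    equation 5: r(n) = F / (1 + F*N) when all N nodes have heavy demand."""
    return Fraction(F, 1 + F * N)

F, N = 4, 3
r = heavy_demand_throughput(F, N)
unused = 1 - N * r
print(r, unused)   # prints: 4/13 1/13
```

Exact rational arithmetic is used so the unused-capacity fraction comes out as exactly 1/13, matching the text.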
In order to implement the above-described uni-priority bandwidth balancing, the slot header need only contain the busy bit and a single request bit. A node can determine the bus utilization by summing the rate of busy bits on bus 10, the rate of requests on bus 20, and the node's own transmission rate. In the long run, this sum should be the same at every node (though the individual components will differ from node to node). In other words, each node has enough information available to implement equation 3.

Fortunately, it is not necessary for the node to explicitly measure the bus utilization rate over some lengthy interval. Rather, it is sufficient for node n to respond to arriving busy bits and request bits in such a way that the rate at which it serves its own data is less than or equal to F times the unallocated capacity. That is,

    S(n,t) \leq F \cdot U(n,t)     (6)

or, equivalently,

    U(n,t) \geq S(n,t) / F .     (7)

At steady state, S(n,t) of equations 6 and 7 will approach r(n) of equation 3.
There are several ways to implement expressions 6 and 7. FIG. 2, for example, depicts one implementation for carrying out bandwidth balancing in accordance with expression 6. It includes a local FIFO memory 21, data inserter 22, a request inserter 23, OR gate 24 and segment counter 25. Data inserter 22 straddles bus 10. It receives signals from nodes upstream from itself and sends signals to nodes downstream from itself (when referring to signals on buses 10 and 20, the terms "upstream" and "downstream" always refer to nodes that are "upstream" and "downstream" vis-a-vis the bus under discussion). Data inserter 22 is also responsive to local FIFO 21 and is connected to bus 20 via request inserter 23 and OR gate 24. Request inserter 23 straddles bus 20 or, more specifically, request inserter 23 receives one input from bus 20 upstream of itself and delivers an output on bus 20 to downstream nodes. Another input to request inserter 23 comes from data inserter 22. OR gate 24 is also responsive to bus 20 and it delivers its output to data inserter 22. Counter 25 supplies signals to OR gate 24 in response to data derived from FIFO 21.

The function of FIFO 21 is to store data segments generated by local users while these segments wait for data inserter 22 to find appropriate empty slots on data bus 10. Data inserter 22 operates on one local data segment at a time; once FIFO 21 forwards a segment to data inserter 22, it may not forward another segment until the inserter has written the current segment onto data bus 10. When data inserter 22 takes a segment from FIFO 21, it orders request inserter 23 to send a request on the request bus (20), and proceeds to determine the appropriate empty slot for the local segment. This determination is accomplished by inserting the segment into the data inserter's transmit queue 26. All the other elements of this queue are requests from downstream nodes that were received by data inserter 22 from bus 20 via OR gate 24. Some of the requests in the transmit queue arrived before the local data segment and are queued ahead of it, but others arrived later and are queued behind it. Data inserter 22 serves its transmit queue 26 whenever an empty slot comes by on data bus 10. When the entry at the head of transmit queue 26 is a request from a downstream node, data inserter 22 lets the empty slot pass. When the head entry is the local data segment, then the busy bit is set and the segment is applied to, and transmitted in, that slot.

Transmit queue 26 can be implemented with a one-bit register and two counters. The one-bit register indicates whether there is a local data segment in the queue, and the two counters indicate the number of requests that arrived on bus 20 before and after the arrival of the local data segment at the data inserter.

The circuitry that actually accounts for the bandwidth balancing factor, F, is implemented in the FIG. 2 embodiment through segment counter 25. Counter 25 counts data segments as they are taken from FIFO 21 by data inserter 22. After F data segments have been counted, counter 25 resets itself to zero and generates a signal for OR gate 24. The data inserter receives that signal, via OR gate 24, as if it were a request from an upstream (vis-a-vis bus 20) node. This artificial, local request causes data inserter 22 to let a slot go unallocated, and that carries out the bandwidth balancing function.
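The effect of segment counter 25 can be illustrated with a deliberately simplified model: a lone active node draining its FIFO, with other nodes' busy bits and requests omitted. Every F transmitted segments, the artificial request forces one slot to pass unallocated.

```python
def serve_segments(num_segments, F):
    """Slots consumed by a lone node draining `num_segments` queued
    segments under bandwidth balancing factor F.
    Returns (busy_slots, unallocated_slots)."""
    counter = 0
    busy = idle = 0
    while busy < num_segments:
        if counter == F:      # counter full: simulate a local "request",
            counter = 0       # forcing one slot to pass unallocated
            idle += 1
        else:
            busy += 1         # transmit one local segment
            counter += 1
    return busy, idle

print(serve_segments(400, 4))   # prints: (400, 99)
```

Over a long run the busy fraction approaches F/(1+F) -- here 400/499, close to 4/5 -- which is exactly equation 5 with N = 1, as expected for a single active node.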
With multi-priority traffic the situation is different, of course. However, in accordance with the principles of our invention, F can still be used effectively to balance the transmission resource of the network. Specifically, in situations where different nodes in a DQDB network offer traffic at different priorities, bandwidth balancing is accomplished by allowing the bandwidth balancing factor, F, to assume a different value at each of the nodes, in accordance with the priority level of the node's traffic. That is, a number of different bandwidth balancing factors, F_1, ... F_p, ... F_P, are used, where F_1 is the bandwidth balancing factor for priority-1 traffic (lowest priority), F_P is a different bandwidth balancing factor for priority-P traffic (highest priority), and F_p is an "in between" bandwidth balancing factor for "in between" priority-p traffic. When each node is allowed to transmit data of traffic at its selected priority level, the unused capacity is

    U = 1 - \sum_{m=1}^{N} \sum_{q=1}^{P} r_q(m) ,     (8)

and the throughput that a node which carries traffic of priority p can reach is limited to F_p times the unallocated capacity. Thus,

    r_p(n) = \min [ p_p(n), F_p \cdot U ] = \min [ p_p(n), F_p \cdot (1 - \sum_m \sum_q r_q(m)) ] .     (9)

Equivalently, equation 9 can be written as

    r_p(n) / F_p = \min [ p_p(n) / F_p, U ] .     (10)

This scheme is fair in the sense that all nodes active at the same priority level get the same bandwidth. Nodes transmitting at different priorities get bandwidth in proportion to the bandwidth balancing factors of their priority levels. This takes care of providing a greater capacity to nodes that carry higher priority traffic. As an aside, one may assume for convenience that the bandwidth balancing factors F_i (1 <= i <= P) are integers, but actually rational numbers can also be used. It may be highlighted at this point that each node is totally oblivious to the priority levels of the busy slots and to the priority levels of the reservation fields (even though information about the latter can be easily obtained with the slot structure of FIG. 1). Each node is only aware of the priority level of its own data, and it throttles itself only in accordance with that information.


The above addresses the situation where every node in the network has only one parcel that it wishes to transmit at any one time. A question immediately arises, of course, as to what happens when a node has traffic of multiple priority levels. That is, the situation may be that each node has, or at least is allowed to have, a number of parcels. Two approaches can be applied to ensure a fair allocation of bandwidth. One is somewhat akin to time division multiplexing, while the other is more akin to space division multiplexing.

In the "time division" approach, each node orders its parcels according to their priority levels, and it offers the parcels to the network in order. So, with this approach, at any one time each node offers only one parcel, and within each node the highest-priority parcel P (from among the parcels in the node) has preferred access. As far as the network is concerned, the operation follows equation 10. If the parcel of the highest priority presents a sufficient demand (i.e., a lot of data needs to be transmitted), then the node's throughput will match its entitlement for the highest priority parcel. In this case, however, the lower-priority parcels at the node will get nothing. On the other hand, when the traffic load of the highest priority parcel is less than the node's allocation, then the highest priority parcel gets all the bandwidth it desires, and the parcel(s) with the lower priorities in node n will use some of the leftover capacity. With two parcels (priority P and P-1), for example, their use will be bounded by the appropriately weighted average of their two throughput limits:
    r_{P-1}(n) / F_{P-1} + r_P(n) / F_P = \min [ p_{P-1}(n) / F_{P-1} + p_P(n) / F_P , U ] .     (11)

In this manner, the throughputs of the various traffic parcels are determined sequentially, from priority P through 1. The formula for r_p(n) is:

    \sum_{q \geq p} r_q(n) / F_q = \min [ \sum_{q \geq p} p_q(n) / F_q , U ] .     (12)

If U is expressed in terms of the parcel throughputs r_p(n), then equation 12 represents a set of almost linear equations (one for each node and priority level) in the unknowns r_p(n). In the special case where each node has heavy demand in its highest priority parcel (p_q(n) drops out of equation 12) and there are N_p nodes active at priority level p, the solution of these equations has a simple form:

    r_p(n) = F_p / (1 + \sum_{q=1}^{P} F_q N_q) .     (13)
That means that, for example, if there are three priority levels, if F_3 = 8, F_2 = 4 and F_1 = 2, and if there is one node with heavy demand at each priority level, then the nodal throughput rates are 8/15, 4/15, and 2/15, and the unused bandwidth is 1/15 of the bus capacity.
The above-described "time division" approach of bandwidth balancing, which is basically a "local, per node" approach, requires the slot header to only contain the busy bit and a single request bit. As already mentioned earlier, the priority levels of the reservation or busy indications on buses 10 and 20 are not important. In order to implement equation 13, the node should respond to arriving busy bits and request bits in such a way that

    \sum_q S_q(n,t) / F_q \leq U(n,t) .     (14)

The embodiment of FIG. 3 illustrates one approach for carrying out this bandwidth balancing approach. As shown in FIG. 3, the depicted node is very similar to the embodiment of FIG. 2. It differs from the FIG. 2 embodiment in that instead of FIFO 21, FIG. 3 employs a priority scheduler 27. Scheduler 27 serves the same function as FIFO 21 except that it also orders the parcels arriving from the user, or users, according to priority, so that parcels of high priority have absolute preference over lower priority parcels. Counter 37 must account for these priority variations as well as effect the bandwidth balancing factor's function. One approach is to merely have different thresholds for the counter. Alternatively, a fixed threshold can be employed in counter 37 but the counting can be done in steps of varying sizes. For example, if F is 16 for the highest priority, 8 for the next priority level, and 4 for the lowest priority level, one can employ a counter 37 with a threshold set at 16 and count in steps of 1 for the high priority traffic, steps of 2 for the middle priority traffic, and steps of 4 for the low priority traffic. As before, when counter 37 reaches its threshold, a signal is sent to data inserter 22 (via OR gate 24) to simulate a "request".
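The fixed-threshold, varying-step operation of counter 37 can be sketched as follows. The mapping step = threshold / F_p is our reading of the example in the text (threshold 16 with steps 1, 2, 4 for F = 16, 8, 4); the level names are illustrative.

```python
# Sketch of counter 37: a fixed threshold with per-priority step sizes.
# With F = {high: 16, mid: 8, low: 4} and threshold 16, the steps are
# 16 // F_p = {high: 1, mid: 2, low: 4}, matching the example above.
THRESHOLD = 16
F = {"high": 16, "mid": 8, "low": 4}

class PriorityCounter:
    def __init__(self):
        self.count = 0

    def on_segment(self, priority):
        """Count one served segment; return True when the node must let
        one slot pass unallocated (the simulated 'request')."""
        self.count += THRESHOLD // F[priority]
        if self.count >= THRESHOLD:
            self.count = 0
            return True
        return False

c = PriorityCounter()
trips_high = sum(c.on_segment("high") for _ in range(16))  # 16 segments -> 1 trip
trips_low = sum(c.on_segment("low") for _ in range(4))     # 4 segments -> 1 trip
print(trips_high, trips_low)   # prints: 1 1
```

So a low-priority parcel wastes a slot four times as often per transmitted segment as a high-priority one, which is how the single counter realizes the different balancing factors.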
When treating the various traffic parcels at a node as though they came from separate nodes (the "space division" approach), a different mode of operation results. In such an arrangement, the parcel of priority p is asked to limit its throughput to a multiple F_p of the spare bus capacity; parcels with less demand than this can have all the bandwidth they desire. In such an arrangement, the resulting throughput of a parcel is

    r_p(n) = \min [ p_p(n), F_p \cdot U ] = \min [ p_p(n), F_p \cdot (1 - \sum_m \sum_q r_q(m)) ] ,     (15)

or equivalently,

    r_p(n) / F_p = \min [ p_p(n) / F_p , U ] .     (16)

This scheme is fair in the sense that all rate-controlled parcels of the same priority level get the same bandwidth. Parcels of different priority levels are offered bandwidth in proportion to their bandwidth balancing factors F_p.

As in connection with previous expressions, it may be noted that expression 15 represents a set of almost linear equations (one for each node and priority level) in the unknowns r_p(n). In the special case where all N_p parcels of priority level p have heavy demand, the solution of these equations has the simple form of equation 13. Thus, for example, when there are three priority levels and F_3 = 8, F_2 = 4, and F_1 = 2, if there are one high-priority parcel, two medium-priority parcels, and one low-priority parcel, then the parcels' throughput rates are 8/19, 4/19, 4/19, and 2/19, and the unused bandwidth is 1/19 of the bus capacity.
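Applying equation 13 per parcel reproduces the 8/19, 4/19, 4/19, 2/19 example above. This is a numerical check in the document's symbols (F_q, N_q now counting parcels per level):

```python
from fractions import Fraction

def parcel_rate(Fp, F_by_level, N_by_level):
    """Equation 13 applied per parcel: r_p = F_p / (1 + sum_q F_q * N_q),
    the heavy-demand throughput of a priority-p parcel under local
    per-parcel bandwidth balancing."""
    denom = 1 + sum(F_by_level[q] * N_by_level[q] for q in F_by_level)
    return Fraction(Fp, denom)

F = {3: 8, 2: 4, 1: 2}   # bandwidth balancing factors per priority level
N = {3: 1, 2: 2, 1: 1}   # one high, two medium, one low parcel
rates = {p: parcel_rate(F[p], F, N) for p in F}
unused = 1 - sum(rates[p] * N[p] for p in F)
# rates: 8/19 (high), 4/19 (each medium), 2/19 (low); unused: 1/19
```

The denominator is 1 + 8·1 + 4·2 + 2·1 = 19, so the two medium-priority parcels each get 4/19 and 1/19 of the bus goes unused, as the text states.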
Note that the concept of "nodes" is essentially lost with this approach.

To implement this "space division" version of bandwidth balancing, which is a "local, per parcel" approach, the slot header need only contain the busy bit and a single request bit (i.e., again, there is no need for priority level information). In order to implement equation 15, the section of each node that handles priority-p traffic should respond to arriving busy bits and request bits in such a way that

    S_p(n,t) \leq F_p \cdot U(n,t) .     (17)

In the straightforward implementation of equation 17 shown in FIG. 4, the node has a separate section to manage data for each priority level. Depicted are sections 11-1, 11-p and 11-P, which take care of priority-1, priority-p and priority-P parcels. P is the highest priority level and p is greater than 1 and less than P. Each section has its own data inserter 22, request inserter 23, local FIFO 21, gate 28, and permit counter 29. Inequality 17 is implemented by the node section of priority p as follows. Whenever data inserter 22 observes an unallocated slot (a slot that is neither busy nor reserved for a downstream node nor used to transmit a local data segment of priority p), it creates F_p permits by incrementing its permit counter 29 by the bandwidth balancing factor F_p. Gate 28 prevents FIFO 21 segments from reaching data inserter 22 unless counter 29 is positive. While counter 29 is positive, data inserter 22 takes a segment from FIFO 21 and inserts it into its transmit queue

26 (not shown in FIG. 4 but shown in FIG. 2). Within the transmit queue, the requests received from bus 20 are ordered according to their time of arrival, as described before. With each appearance of a non-busy slot, data inserter 22 allows the non-busy slot to pass unpopulated as long as transmit queue 26 is serving requests from downstream nodes. When the segment taken from FIFO 21 reaches the top of the transmit queue, data inserter 22 inserts the segment into the non-busy slot, marks the slot busy, and decrements counter 29 by 1. Counter 29 is reset to zero whenever there are no more segments in FIFO 21. When counter 29 is at zero, either because there are no more segments in FIFO 21 or because F_p segments were inserted, the action of gate 28 prevents data inserter 22 from taking another data segment from the local FIFO 21 until a slot goes unallocated, at which time the process repeats.

A more compact implementation is shown in FIG. 5. Here the node has only one section, with one data inserter 32 and one request inserter to manage data of all priority levels. However, a separate local FIFO queue (31-1, 31-p, 31-P) for each priority is required, as well as separate permit counters 33 and gates 34. In addition, FIG. 5 includes a local priority queue 35 which is responsive to gates 34 and which applies its signal to data inserter 32. All local data segments with permits are stored in one queue 35 rather than in the separate local FIFOs 31. Data inserter 32 processes local data segments one at a time from queue 35.

More specifically, whenever data inserter 32 observes an unallocated slot which is passed to downstream nodes, it creates F_p permits for each priority level p. That is, it instructs each local FIFO 31-i to transfer to queue 35 up to F_i segments, or packets of data. This is accomplished in FIG. 5 with counters 33, which are incremented by the corresponding bandwidth balancing factor, and decremented with each transfer of a segment from the corresponding local FIFO 31 to local queue 35.

When local queue 35 contains data and transmit queue 36 within data inserter 32 does not contain a local data segment taken from queue 35, then such a segment is inserted in the transmit queue behind the requests that arrived previously from bus 20. With each appearance of a non-busy slot on bus 10, local transmit queue 36 is served. When the local data segment (from local queue 35) reaches the top of the transmit queue 36, the local data segment is inserted into the non-busy slot of bus 10 and the busy bit of that slot is set. The process then repeats, with the local queue 35 entering a local data segment in the transmit queue. When local queue 35 becomes empty, data inserter 32 allows an unallocated slot to pass on bus 10 at the appropriate time, and the entire process repeats with queue 35 being filled again from FIFOs 31. There are two approaches for determining the "appropriate time" when the unallocated slot is sent on bus 10. One is when local queue 35 is empty and transmit queue 36 is empty. Another is when local queue 35 is empty, and the requests within transmit queue 36 that accumulated since queue 35 became empty are satisfied.
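The permit mechanism of FIG. 4 can be sketched for a single priority-p section. This is a simplified model that omits the transmit queue and downstream requests; the class and method names are ours.

```python
from collections import deque

# Sketch of the FIG. 4 permit mechanism for one priority-p section:
# each unallocated slot observed on the data bus creates F_p permits;
# each locally transmitted segment consumes one permit; the counter
# resets to zero when the local FIFO empties.
class PrioritySection:
    def __init__(self, Fp):
        self.Fp = Fp
        self.permits = 0
        self.fifo = deque()

    def on_unallocated_slot(self):
        self.permits += self.Fp    # unallocated slot seen: grant F_p permits

    def try_transmit(self):
        """Transmit one queued segment if a permit is available."""
        if self.permits > 0 and self.fifo:
            self.permits -= 1
            return self.fifo.popleft()
        if not self.fifo:
            self.permits = 0       # reset when the local FIFO empties
        return None

sec = PrioritySection(Fp=4)
sec.fifo.extend(range(6))          # six segments queued
sec.on_unallocated_slot()          # one unallocated slot -> 4 permits
sent = [sec.try_transmit() for _ in range(6)]
print(sent)   # prints: [0, 1, 2, 3, None, None]
```

After F_p = 4 transmissions the section is starved until another slot goes unallocated, which is how gate 28 and counter 29 enforce inequality 17 per priority level.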
When every node can determine the utili7~tion of bus 10 due to t~ffic from other nodes of each prioriy~ level, then still another bandwidth balancing a~ uach can be impl~ nt~l in a DQDB ncl~ull.. In other words, priority of other 10 nodes' traffic can be taken into a~ou~-t when a node throttles itself, but most advantageously this should be done ~ , ir~lly. I.e., taking account of the priority of the traffic in the reserved slots and traffic in the busy (occupied) slots. To that end, an ~d-lition~l ~ea~u~e of bus capacity can be considered, to wit, the capacity left over by parcels of priority p and greater, that capacity is~5 Up+ = 1 ~ rq(m) . (18) m q2p If node n can obcerve priority levels of the reservation bits and of the data in busy slots, then it can measure Up+ (n,t), which is the bus c~iL~ not allocated by node n at time t to parcels of priority p or greater. Then, Up+(n,t) = 1 - ~ Bq(n,t) - ~ Rq(n,t) - ~ Sq(n,t) . (19) q2p q2p q2p 20 Bandwidth b~l~n~ing is accomrlichY1 in this approach by asking the parcel of priority p to limit its throughput to some multiple Fp of the spare bus ca~ not used by parcels of equal or grea~er prioriry than p. Parcels with lesc de-m~nd than this limit, of cource, do not need to limit th~msçlves~
In accordance with this approach, the throughput of parcel p at node n is

r_p(n) = min[ ρ_p(n), F_p · U_p+ ] = min[ ρ_p(n), F_p · (1 - Σ_m Σ_{q≥p} r_q(m)) ] ,   (20)

or equivalently,

r_p(n) = min[ ρ_p(n), F_p · U_p+(n,t) ] .   (21)

This scheme is fair in the sense that all rate-controlled parcels within the network of the same priority level get the same bandwidth.
Allocation of bandwidth across the various priority levels is as follows:
First, the entire bus capacity is bandwidth-balanced over the highest-priority parcels, as though the lower-priority parcels did not exist. Bandwidth balancing ensures that some bus capacity will be left unused by the highest-priority parcels. This unused bandwidth is then bandwidth-balanced over the second-highest-priority parcels. The bandwidth left over after the two highest priorities have been processed is then bandwidth-balanced over the third-highest-priority parcels, etc. It should be emphasized that with this approach, in contrast to the previously described approaches, the throughput attained by a parcel of a given priority is independent of the presence of lower-priority parcels anywhere in the network.
As with the earlier described approaches, equation 21 represents a set of almost linear equations (one for each node and priority level) in the unknowns r_p(n). In the special case where all N_p parcels of priority level p have heavy demand, the solution of these equations has a simple form:

r_p(n) = F_p / Π_{q≥p} (1 + F_q · N_q) .   (22)

That means that if F_1 = F_2 = F_3 = 4 and there are one high-priority parcel, two medium-priority parcels, and one low-priority parcel, then the parcels' throughput rates are 4/5, 4/45, 4/45, and 4/225, and the unused bandwidth is 1/225 of the bus capacity.
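For illustration only, the closed-form heavy-demand solution above can be checked numerically against the example in the text (the function and variable names are illustrative):

```python
from math import prod

# Illustrative check of the heavy-demand solution: with N_q heavy-demand
# parcels at each priority q, all using bandwidth balancing factors F_q,
# each priority-p parcel's throughput is F_p divided by the product, over
# all q >= p, of (1 + F_q * N_q).
def throughput(p, F, N):
    return F[p] / prod(1 + F[q] * N[q] for q in F if q >= p)

F = {1: 4, 2: 4, 3: 4}   # bandwidth balancing factors (text's example)
N = {3: 1, 2: 2, 1: 1}   # one high-, two medium-, one low-priority parcel

rates = {p: throughput(p, F, N) for p in F}
unused = 1 - sum(N[p] * rates[p] for p in N)
# rates == {1: 4/225, 2: 4/45, 3: 4/5}; unused == 1/225
```

The computed rates reproduce the 4/5, 4/45, 4/45, and 4/225 allocation and the 1/225 unused capacity stated above.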
For this "global, per-parcel" version of bandwidth balancing, the slot header must contain the busy bit, an indication of the priority level of the data segment in a busy slot, and an indication of the priority level of the requests. As for the priority of requests, the present slot assignment provides a multi-bit request field with one request bit for each priority level (see FIG. 1), which allows one to specify the priority level of requests by selecting a particular bit in the request field that is set. As for the priority information of the busy slots, we propose to use the two spare bits in the header of FIG. 1. By reading the busy priority bits and the request bits, each node can determine the priority level of all traffic on the bus.
In order to implement equation 21, node n should respond to arriving busy and request information in such a way that

S_p(n,t) ≤ F_p · U_p+(n,t) .   (23)

FIG. 6 presents one embodiment for carrying out the bandwidth balancing of equation 21. It comprises a plurality of sections in each node, where each section manages data of a different priority level. Thus, FIG. 6 depicts sections 11-1, 11-p, and 11-P, with section 11-1 being upstream from section 11-p and managing data of priority 1 (the lowest priority), and section 11-P being downstream of section 11-p and managing data of priority P (the highest priority).
While the layout of FIGs. 4 and 6 need not correspond to the actual physical implementation of a node 11, functionally it would be the same; and therefore, for the expository purposes of this disclosure it makes sense to describe the embodiment of FIG. 6. It is expected, however, that in an actual DQDB node, the sections will be compressed into a more compact embodiment where signalling between the sections is done explicitly (rather than over the buses) and where all sections read a bus at the same place, before any of them has a chance to write. In FIG. 6, each section has its own data inserter 42, request inserter 43, local FIFO 41, and permit counter 45 with gate 44. As in FIG. 4, data inserter 42 straddles bus 10. FIFO 41 receives data from users, and interposed between FIFO 41 and data inserter 42 is gate 44. Counter 45 controls gate 44 in consequence of signals provided to counter 45 by data inserter 42. Data inserter 42 also controls request inserter 43, which straddles bus 20.
Data inserter 42 contains a transmit queue 46 that can hold at most one local data segment of priority p. All requests of priority p or greater coming into node section p on bus 20 also become entries of transmit queue 46; requests of priority less than p do not join queue 46. When an empty slot appears on bus 10, transmit queue 46 is served. If the head entry is the local data segment, then the slot header is modified to indicate that the slot is now busy with data of priority p, and the local data segment is transmitted in that slot.
Within transmit queue 46, the requests are sorted by priority first and then by time of arrival. It still requires only two counters to implement this discipline. One counter counts requests of priority p that arrived at transmit queue 46 before the local data segment, plus all requests of priority greater than p regardless of arrival time. The other counter counts requests of priority p that arrived at queue 46 after the local data segment. (Recall that queue 46 holds no requests of priority less than p.) Inequality 23 is implemented within the node's section that handles priority p as follows. Whenever data inserter 42 observes a slot that is not allocated to traffic of priority p or higher, it creates F_p permits (i.e., it increments counter 45 by the bandwidth balancing factor). Unlike with the previously described approaches, there are two circumstances under which the data inserter 42 observes such a slot: (a) the slot arrives empty and finds transmit queue 46 inside data inserter 42 also empty, holding neither a local data segment nor requests from downstream nodes, and (b) the slot is already busy with a segment of priority less than p when the slot arrives at data inserter 42.
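For expository purposes only, the two-counter queuing discipline just described may be modeled as follows (the class and method names are illustrative; the disclosure describes counters in hardware, not software):

```python
# Illustrative model of the transmit queue of data inserter 42 for the node
# section handling priority p.  Requests of priority greater than p, and
# priority-p requests that arrived before the local data segment, are served
# first; priority-p requests arriving after the local segment wait behind it.
class TransmitQueue:
    def __init__(self, p):
        self.p = p
        self.ahead = 0          # counter 1: requests served before local data
        self.behind = 0         # counter 2: priority-p requests arriving after
        self.has_local = False  # at most one local data segment of priority p

    def request(self, q):
        if q < self.p:
            return              # lower-priority requests never join this queue
        if q > self.p or not self.has_local:
            self.ahead += 1     # higher priority, or arrived before local data
        else:
            self.behind += 1

    def add_local_segment(self):
        self.has_local = True

    def empty_slot(self):
        """Serve the queue with one empty bus-10 slot; return True if the
        slot is filled with the local data segment."""
        if self.ahead > 0:
            self.ahead -= 1     # let the empty slot pass to a downstream node
            return False
        if self.has_local:
            self.has_local = False
            # requests that waited behind the local segment now move ahead
            self.ahead, self.behind = self.behind, 0
            return True
        return False
```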
While counter 45 is positive, whenever a local data segment moves through gate 44 from FIFO 41 to transmit queue 46 in data inserter 42, counter 45 is decremented by one. Counter 45 is reset to zero whenever the node has no data of priority p to send.
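For expository purposes only, the permit mechanism of counter 45 and gate 44 may be modeled as follows (class and method names are illustrative, not part of the disclosed embodiment):

```python
# Illustrative sketch of permit counter 45 and gate 44 for the section of
# priority p.  A slot "not allocated to priority >= p" earns F_p permits;
# each local segment passing through gate 44 consumes one permit.
class PermitGate:
    def __init__(self, p, F_p):
        self.p, self.F_p = p, F_p
        self.permits = 0        # state of counter 45

    def observe_slot(self, busy, slot_priority=None, queue_empty=True):
        """Called by data inserter 42 for each slot passing on bus 10.
        A slot is not allocated to priority >= p if (a) it arrives empty
        while transmit queue 46 is empty, or (b) it is busy with data of
        priority lower than p."""
        if (not busy and queue_empty) or (busy and slot_priority < self.p):
            self.permits += self.F_p

    def try_pass_segment(self):
        """Move one local segment from FIFO 41 through gate 44, if permitted."""
        if self.permits > 0:
            self.permits -= 1
            return True
        return False

    def idle(self):
        self.permits = 0        # reset when no priority-p data is waiting
```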
A number of approaches (local, per node; local, per parcel; and global, per parcel) have been disclosed above that provide bandwidth balancing for multi-priority traffic. All of these methods converge gradually to a steady state over multiple round-trip-delay times. The three methods waste some bus bandwidth even in steady state. Advantageously, all of these methods are predictable: the steady-state throughput rates of the nodes are determined only by their offered loads and not by their relative bus locations, nor by their relative start-up times. There is a trade-off between the convergence rate and the bus utilization that can be exercised through the bandwidth balancing factor. Moreover, these methods are fair in the sense that if each node has traffic of only one priority level and its parcel's demand is heavy, then nodes that are active at the same priority level get the same bandwidth in steady state.
The methods differ in the ways they allocate bus bandwidth across the various priority levels. In the "local, per-node" scheme, each node limits its throughput to the product of the bandwidth balancing factor and the unused bus capacity. In this approach, the bandwidth balancing factor varies according to the priority level at which the node is currently transmitting. Each node processes its own data segments in strict priority order; lower-priority segments are only served when no higher-priority segments are waiting. In other words, this scheme allocates bandwidth to all nodes in proportion to the bandwidth balancing factors of their current (highest) priority levels. The steady-state throughput of a higher-priority traffic parcel is not affected by the presence of lower-priority parcels within the same node, but it can be affected by the presence of lower-priority parcels at other nodes.
In the two "per-parcel" approaches, all active parcels within a node receive some bandwidth. In the "local, per-parcel" approach, each traffic parcel limits its throughput to the product of the bandwidth balancing factor and the unused bus capacity. Of course, the bandwidth balancing factor depends on the priority level of the parcel. In other words, this approach allocates bandwidth to all parcels (regardless of which nodes they belong to) in proportion to their bandwidth balancing factors. Now the steady-state throughput of a traffic parcel is affected by the presence of every other parcel for that bus, including lower-priority parcels within the same node. In the "global, per-parcel" approach, each traffic parcel limits its throughput to the product of the bandwidth balancing factor and the bus capacity unused by parcels of equal or higher priority. In rough terms, this approach allocates bandwidth first to the higher-priority parcels, then allocates the leftovers to the lower-priority parcels. Lower-priority parcels have no effect on the steady-state throughputs of higher-priority parcels.
In all three methods, there is symmetry between the priority information needed concerning requests and that needed concerning busy slots, but this information needs to be different in the various schemes. The two "local" schemes are elegant: no priority information is transmitted on the buses, and consequently fewer counters are needed in the nodes. The "global" scheme has more communication and computation overhead. All three techniques can be implemented through modest changes in the DQDB slot format and protocol.
It should be mentioned that the two "local" approaches can be implemented with a streamlined "data inserter" (similar to the "global, per-parcel" approach) that serves its "transmit queue" by satisfying all received requests before the local data segment is served, regardless of the arrival times of the requests. (Since all the nodes in the network are following the bandwidth balancing discipline, the local data segment will eventually be served.) The advantage of this queuing discipline, as described above, is that it can be implemented with one counter rather than two.
The two-counter description was presented to emphasize the DQDB-based, two-counter implementation of the data inserter for the following reasons: (1) While both implementations have the same throughput performance under sustained overload, the delay performance under moderate load is frequently better with the two-counter implementation. (2) Many DQDB networks will have no significant fairness or priority problems (e.g., if the buses are short, or if the transmission rate is low, or if the application is point-to-point rather than multi-access). In these cases, one would want to disable bandwidth balancing (because it wastes some bandwidth) and use the pure DQDB protocol, which requires a two-counter data inserter. It is convenient to build one data inserter that can be used with or without bandwidth balancing, and this would have to be the two-counter version.
This brings up one final advantage of the "global" bandwidth balancing scheme over the "local" ones. If the "local" schemes are physically implemented by reducing the number of request bits to one per slot (shared by all priority levels), then whenever bandwidth balancing is disabled, the network has no priority mechanism. With the "global" scheme, each priority level has one request bit per slot, so the priority mechanism of pure DQDB can be retained when bandwidth balancing is disabled.

Claims (25)

1. A method for allocating transmission capacity in a network having a data bus that passes through network nodes 1 to N of said network in ascending order and a request bus that passes from network nodes N to 1 in descending order, where transmission capacity on said data bus is divided into slots and where each slot on said data bus contains a data field and a header field that includes a busy subfield and a request subfield, and where each node j, where j is an integer between 1 and N, accepts data with an associated priority level from a user port of said node, for transmission to a node k on said network where j < k ≤ N, and controls the injection of said data into said data bus in accordance with a procedure comprising the steps of:
each node determining an effective spare capacity available to itself on said data bus based on the presence of request bits in the request subfield of slots on the request bus, and the busy bits in the busy subfield of slots on the data bus; and each node throttling its rate of injecting of data into said data bus to a fraction of the effective spare capacity, where said fraction is related to the priority of the data presented to the node.
2. The method of claim 1 where the effective spare capacity is related neither to priority level of slots requested on said request bus nor to priority level of information in the busy slots on said data bus.
3. The method of claim 1 where the effective spare capacity is related to priority level of slots requested on said request bus and priority level of information in the busy slots on said data bus.
4. The method of claim 1 where the effective spare capacity is related to priority level of request bits on said request bus and priority level of busy bits in the busy slots on said data bus.
5. The method of claim 4 where the step of determining the effective spare capacity comprises the steps of:
determining the priority level of a node's parcel, p, determining the rate of requests set, at priority level p or greater, in the request subfield of slots appearing at the request bus, and determining the rate of busy indications, at priority p or greater, in the busy subfield of slots appearing at the data bus.
6. The method of claim 1 where the step of a node throttling its transmission rate comprises selecting for transmission, from among parcels received by said node for transmission on said data bus, the parcel with the highest priority level, and throttling the transmission rate of said highest priority parcel in accordance with a bandwidth balancing factor for said priority level.
7. The method of claim 1, further comprising a step of:
each node ordering by priority parcels received from said user port; and applying said parcels to said step of throttling, one at a time, in order, in accordance with said ordering.
8. The method of claim 1, further comprising the steps of:
each node ascertaining the appearance of data accepted from said user port that comprises a plurality of parcels, and each node separating said parcels for individual throttling of rate of injecting of data of said parcels into said data bus, WHERE
said step of each node throttling its rate of injecting of data comprises a plurality of throttling processes, with each process throttling the rate of injecting of data of a different one of the separated parcels to a fraction of the effective spare capacity, where the fraction employed in each of the plurality of throttling processes is related to the priority level of the parcel throttled.
9. The method of claim 8 where the effective spare capacity within each throttling process is related neither to priority level of slots requested on said request bus nor to priority level of information in the busy slots on said data bus.
10. The method of claim 8 where the effective spare capacity within each throttling process is related to priority level of slots requested on said request bus and to priority level of information in the busy slots on said data bus.
11. The method of claim 8 where the effective spare capacity within each throttling process is related to priority level of slots requested on said request bus that are greater than p, and to priority level of information in the busy slots on said data bus that are greater than p, where p is the priority level immediately below the priority level of the parcel throttled by said throttling process.
12. The method of claim 8 where said step of each node determining an effective spare capacity comprises a plurality of effective spare capacity determining steps, each developing an effective spare capacity for a parcel, in accordance with the parcel's priority level, with the effective spare capacity for a priority level p being employed in the throttling of the rate of injecting of data of the parcel having priority p.
13. A method for allocating transmission capacity in a slotted network having a data bus that passes through network nodes 1 to N of said network in ascending order and a request bus that passes from network nodes N to 1 in descending order, where transmission capacity on said data bus is divided into slots and where each slot on said data bus contains a data field and a header field that includes a busy subfield and a request subfield, and where each node j, where j is an integer between 1 and N, accepts data with an associated priority level from a user port for transmission to a node k on said network where j < k ≤ N, which data comprises packets of information, and controls the injection of said data into said data bus in accordance with a procedure comprising the steps of:
accumulating requests from nodes m, where m>j, in a queue, when there is no local request from node j and data is available at node j, accumulating a local request in said queue, and returning to said step of accumulating requests, in parallel with said accumulating of requests, when the top request in said queue is a request from a node m and a non-busy slot appears at said node j on said data bus, satisfying that top request by passing said slot to succeeding nodes and removing the satisfied request from the queue, in parallel with said accumulating of requests, when the top request in said queue is said local request from node j and a non-busy slot appears at said data bus entering said node j, when a bandwidth balancing indication is set, satisfying said local request by populating said slot with a data packet, and when a bandwidth balancing indication is unset, passing said slot to succeeding nodes and setting said bandwidth balancing indication;
wherein said bandwidth balancing indication is unset in accordance with the priority level of said data.
14. The method of claim 13 where said bandwidth balancing indication is unset with every Mp instances of satisfying said local request, where Mp is related to the priority level p of said data.
15. The method of claim 14 where Mp is greater than Mj, when priority p is higher than priority j.
16. The method of claim 13 where said step of accumulating requests places all requests arriving from nodes m, where m>j, ahead in said queue of said local request.
17. The method of claim 16 where said accumulating is accomplished by incrementing a counter, and said removing from queue is accomplished by decrementing a counter.
18. The method of claim 13 where said step of accumulating requests places in said queue all requests arriving from nodes m, where m>j, in order of arrival.
19. The method of claim 16 where, while there is a local request from node j, said accumulating of requests is accomplished by incrementing a first counter, and said removing from queue is accomplished by decrementing a second counter.
20. The method of claim 13 where the accumulating of said requests in order of arrival ignores requests of priority lower than the priority of the local request, and the requests that are not ignored are ordered by priority of the requests, with the ordering of the requests by priority predominating the ordering of requests by time of arrival.
21. A method for allocating transmission capacity in a slotted network having a data bus that passes through network nodes 1 to N of said network in ascending order and a request bus that passes from network nodes N to 1 in descending order, where transmission capacity on said data bus is divided into slots and where each slot on said data bus contains a data field and a header field that includes a busy subfield and a request subfield, and where each node j, where j is an integer between 1 and N, accepts data with an associated priority level from a user port for transmission to a node k on said network where j < k ≤ N, which data comprises packets of information, and controls the injection of said data into said data bus in accordance with a procedure comprising the steps of:
determining the number of unsatisfied requests received from the request bus;
satisfying the unsatisfied requests by allowing empty slots to pass unpopulated; and populating less than all of the remaining empty slots with said data by passing an empty slot unpopulated for every selected number of empty slots that are populated, where the selected number is related to said associated priority level of said data.
22. The method of claim 21 where the number of unsatisfied requests, Q, corresponds to the number of request bits received by node j on said request bus, from downstream nodes j+1 to N, in excess of the number of empty slots passed through node j on said data bus to node j+1 that were left unpopulated by node j.
23. A method for allocating transmission capacity in a network having a data bus that passes through network nodes 1 to N of said network in ascending order and a request bus that passes from network nodes N to 1 in descending order, where transmission capacity on said data bus is divided into slots and where each slot on said data bus contains a data field and a header field that includes a busy subfield and a request subfield, and where each node j, where j is an integer between 1 and N, receives a parcel with an associated priority level from a user port of said node, for transmission to a node k on said network downstream from node j, where j < k ≤ N, where said parcel is composed of data packets that are small enough to fit within the data fields of said slots, and where said node j controls the injection of said data packets into said data bus in accordance with a procedure COMPRISING THE STEPS OF:
satisfying accumulated requests in a queue, injecting a data packet when an empty slot appears on the data bus and the accumulated requests are satisfied, for every given number of injected data packets, allowing an empty slot to pass unpopulated, and returning to the step of satisfying accumulated requests;
WHERE
a measure of said accumulated requests is the number, Q, which equals the number of request bits received by node j on said request bus, from downstream nodes j+1 to N, minus the number of empty slots passed unpopulated through node j on said data bus to node j+1, in response to said request bits, from the most recent execution of said step of injecting a data packet, said step of satisfying accumulated requests allows Q empty slots to pass unpopulated; and said given number is related to said priority level associated with said parcel received from said user port.
24. The method of claim 23 wherein said measure of accumulated requests is developed by incrementing a count whenever a request appears at the request bus, and by decrementing said count whenever an empty slot is passed unpopulated.
25. The method of claim 24 wherein said step of allowing an empty slot to pass unpopulated every given number of injected data packets is effected by establishing a slot request with every given number of injected data packets, with said created request appearing to said node j as a request from nodes downstream from said node j.
CA002050692A 1990-09-24 1991-09-05 Fair access of multi-priority traffic to distributed-queue dual-bus networks Expired - Fee Related CA2050692C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/586,661 US5115430A (en) 1990-09-24 1990-09-24 Fair access of multi-priority traffic to distributed-queue dual-bus networks
US586,661 1990-09-24

Publications (2)

Publication Number Publication Date
CA2050692A1 CA2050692A1 (en) 1992-03-25
CA2050692C true CA2050692C (en) 1998-12-08

Country Status (8)

Country Link
US (1) US5115430A (en)
EP (1) EP0478190B1 (en)
JP (1) JP3359926B2 (en)
KR (1) KR100212104B1 (en)
AU (1) AU632709B2 (en)
CA (1) CA2050692C (en)
DE (1) DE69131794T2 (en)
ES (1) ES2143460T3 (en)

US7583597B2 (en) * 2003-07-21 2009-09-01 Qlogic Corporation Method and system for improving bandwidth and reducing idles in fibre channel switches
US7420982B2 (en) * 2003-07-21 2008-09-02 Qlogic, Corporation Method and system for keeping a fibre channel arbitrated loop open during frame gaps
US7522522B2 (en) * 2003-07-21 2009-04-21 Qlogic, Corporation Method and system for reducing latency and congestion in fibre channel switches
US7894348B2 (en) 2003-07-21 2011-02-22 Qlogic, Corporation Method and system for congestion control in a fibre channel switch
US7573909B2 (en) * 2003-07-21 2009-08-11 Qlogic, Corporation Method and system for programmable data dependant network routing
US7558281B2 (en) * 2003-07-21 2009-07-07 Qlogic, Corporation Method and system for configuring fibre channel ports
US7525983B2 (en) 2003-07-21 2009-04-28 Qlogic, Corporation Method and system for selecting virtual lanes in fibre channel switches
US7352701B1 (en) 2003-09-19 2008-04-01 Qlogic, Corporation Buffer to buffer credit recovery for in-line fibre channel credit extension devices
US20050092611A1 (en) * 2003-11-03 2005-05-05 Semitool, Inc. Bath and method for high rate copper deposition
US7480293B2 (en) * 2004-02-05 2009-01-20 Qlogic, Corporation Method and system for preventing deadlock in fibre channel fabrics using frame priorities
US7564789B2 (en) * 2004-02-05 2009-07-21 Qlogic, Corporation Method and system for reducing deadlock in fibre channel fabrics using virtual lanes
US7340167B2 (en) * 2004-04-23 2008-03-04 Qlogic, Corporation Fibre channel transparent switch for mixed switch fabrics
US7930377B2 (en) 2004-04-23 2011-04-19 Qlogic, Corporation Method and system for using boot servers in networks
US7404020B2 (en) * 2004-07-20 2008-07-22 Qlogic, Corporation Integrated fibre channel fabric controller
US7411958B2 (en) * 2004-10-01 2008-08-12 Qlogic, Corporation Method and system for transferring data directly between storage devices in a storage area network
US8295299B2 (en) 2004-10-01 2012-10-23 Qlogic, Corporation High speed fibre channel switch element
US7593997B2 (en) * 2004-10-01 2009-09-22 Qlogic, Corporation Method and system for LUN remapping in fibre channel networks
JP2006109258A (en) * 2004-10-07 2006-04-20 Hitachi Ltd Communication method and communication apparatus
US7519058B2 (en) 2005-01-18 2009-04-14 Qlogic, Corporation Address translation in fibre channel switches
US7548560B1 (en) 2006-02-27 2009-06-16 Qlogic, Corporation Method and system for checking frame-length in fibre channel frames
US20080264774A1 (en) * 2007-04-25 2008-10-30 Semitool, Inc. Method for electrochemically depositing metal onto a microelectronic workpiece
US8438301B2 (en) * 2007-09-24 2013-05-07 Microsoft Corporation Automatic bit rate detection and throttling
US8239564B2 (en) * 2008-06-20 2012-08-07 Microsoft Corporation Dynamic throttling based on network conditions
US8000237B1 (en) * 2010-01-28 2011-08-16 Telefonaktiebolaget L M Ericsson (Publ) Method and apparatus to provide minimum resource sharing without buffering requests
US11314558B2 (en) * 2019-07-23 2022-04-26 Netapp, Inc. Methods for dynamic throttling to satisfy minimum throughput service level objectives and devices thereof

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US497757A (en) * 1893-05-23 Vaginal syringe
US5001707A (en) * 1989-11-02 1991-03-19 Northern Telecom Limited Method of providing reserved bandwidth in a dual bus system
US4977557A (en) * 1989-12-14 1990-12-11 Northern Telecom Limited Method of providing effective priority access in a dual bus system

Also Published As

Publication number Publication date
EP0478190A2 (en) 1992-04-01
AU632709B2 (en) 1993-01-07
EP0478190A3 (en) 1994-07-13
DE69131794T2 (en) 2000-06-29
ES2143460T3 (en) 2000-05-16
KR920007393A (en) 1992-04-28
JP3359926B2 (en) 2002-12-24
EP0478190B1 (en) 1999-11-24
KR100212104B1 (en) 1999-08-02
DE69131794D1 (en) 1999-12-30
AU8456591A (en) 1992-03-26
US5115430A (en) 1992-05-19
CA2050692A1 (en) 1992-03-25
JPH04227146A (en) 1992-08-17

Similar Documents

Publication Publication Date Title
CA2050692C (en) Fair access of multi-priority traffic to distributed-queue dual-bus networks
US5867663A (en) Method and system for controlling network service parameters in a cell based communications network
US5675573A (en) Delay-minimizing system with guaranteed bandwidth delivery for real-time traffic
US5274644A (en) Efficient, rate-base multiclass access control
US6377546B1 (en) Rate guarantees through buffer management
CA2029054C (en) Method and apparatus for congestion control in a data network
JP3305500B2 (en) Apparatus and method for traffic control of user interface in asynchronous transmission mode
US6430191B1 (en) Multi-stage queuing discipline
Hahne et al. DQDB networks with and without bandwidth balancing
US6717912B1 (en) Fair discard system
EP0717532A1 (en) Dynamic fair queuing to support best effort traffic in an ATM network
EP1705851A1 (en) Communication traffic policing apparatus and methods
US6229813B1 (en) Pointer system for queue size control in a multi-task processing application
JPH09507738A (en) Method and apparatus for prioritizing traffic in an ATM network
JP4652494B2 (en) Flow control method in ATM switch of distributed configuration
KR20030001412A (en) Method and apparatus for distribution of bandwidth in a switch
US6246687B1 (en) Network switching system supporting guaranteed data rates
Hahne et al. Fair access of multi-priority traffic to distributed-queue dual-bus networks
EP0838970B1 (en) Method for shared memory management in network nodes
EP1031253B1 (en) Buffer management method
US6341134B1 (en) Process for the arrangement of equitable-loss packets
DE69631744T2 (en) Attachment and method for bandwidth management in multi-service networks
DE69737249T2 (en) Packet-switched communication system
US7330475B2 (en) Method for sharing the bandwidth available for unicast and multicast flows in an asynchronous switching node
Cerdà et al. A study of the fairness of the fast reservation protocol

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed