WO2003081874A1 - Scheduling using quantum and deficit values - Google Patents

Scheduling using quantum and deficit values

Info

Publication number
WO2003081874A1
WO2003081874A1 (PCT/US2003/007872)
Authority
WO
WIPO (PCT)
Prior art keywords
deficit
determining
item
processing
items
Prior art date
Application number
PCT/US2003/007872
Other languages
French (fr)
Inventor
Doron Shoham
Amir Michaeli
David Tsiang
Chandrashekhar Appanna
Original Assignee
Cisco Technology, Inc.
Priority date
Filing date
Publication date
Application filed by Cisco Technology, Inc. filed Critical Cisco Technology, Inc.
Priority to CA2484427A priority Critical patent/CA2484427C/en
Priority to DE60303444T priority patent/DE60303444T2/en
Priority to AU2003221996A priority patent/AU2003221996B2/en
Priority to EP03717979A priority patent/EP1488600B1/en
Publication of WO2003081874A1 publication Critical patent/WO2003081874A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L47/62 Queue scheduling characterised by scheduling criteria
    • H04L47/622 Queue service order
    • H04L47/6225 Fixed service order, e.g. Round Robin
    • H04L47/52 Queue scheduling by attributing bandwidth to queues
    • H04L47/527 Quantum based scheduling, e.g. credit or deficit based scheduling or token bank

Definitions

  • network and “communications mechanism” are used generically herein to describe one or more networks, communications mediums or communications systems, including, but not limited to the Internet, private or public telephone, cellular, wireless, satellite, cable, local area, metropolitan area and/or wide area networks, a cable, electrical connection, bus, etc., and internal communications mechanisms such as message passing, interprocess communications, shared memory, etc.
  • storage mechanism includes any type of memory, storage device or other mechanism for maintaining instructions or data in any format.
  • Computer-readable medium is an extensible term including any memory, storage device, storage mechanism, and other storage and signaling mechanisms including interfaces and devices such as network interface cards and buffers therein, as well as any communications devices and signals received and transmitted, and other current and evolving technologies that a computerized system can interpret, receive, and/or transmit.
  • memory includes any random access memory (RAM), read only memory (ROM), flash memory, integrated circuits, and/or other memory components or elements.
  • storage device includes any solid state storage media, disk drives, diskettes, networked services, tape drives, and other storage devices. Memories and storage devices may store computer-executable instructions to be executed by a processor and/or control logic, and data which is manipulated by a processor and/or control logic.
  • the deficit does not allow the item to be further processed (e.g., the deficit is less than a predetermined constant value such as zero or determined via another comparison)
  • the item is removed from the current set and placed in another of the sets of items typically based on the value of the deficit for the item, its quantum, and the current set, such that the item will not be manipulated further until it actually would have developed enough quantum to process one of its entities.
  • the deficit for the item is increased by an appropriate additional quantum corresponding to the next time the item is serviced or processed. Processing then returns to process block 604. Otherwise, processing returns to process block 608 to select and process a next item in the set (or the same item if it is the only one in the set.)

Abstract

Methods and apparatus are disclosed for scheduling items (e.g., packets, processes, etc.) in a system, such as, but not limited to a computer or communications system. In one implementation, multiple sets of items requiring processing to be performed are maintained. A current set of the multiple sets of items is identified. Each of the items in the current set is processed. In one implementation, this processing includes identifying a particular item in the current set, adding a quantum to a deficit of the particular item, processing an entity corresponding to the particular item, decreasing the deficit of the particular item by a value, determining if the deficit allows the particular item to be further processed, and moving the particular item from the current set to another of the multiple sets of items after said determining if the deficit allows the particular item to be further processed.

Description

SCHEDULING USING QUANTUM AND DEFICIT VALUES
FIELD OF THE INVENTION
This invention especially relates to scheduling of information and/or processes, such as those used in communications and computer systems; and more particularly, the invention relates to scheduling entities and items using quantum and deficit values, including, but not limited to the scheduling of the sending of packets in a packet switching system.
BACKGROUND OF THE INVENTION
The communications industry is rapidly changing to adjust to emerging technologies and ever increasing customer demand. This customer demand for new applications and increased performance of existing applications is driving communications network and system providers to employ networks and systems having greater speed and capacity (e.g., greater bandwidth). In trying to achieve these goals, a common approach taken by many communications providers is to use packet switching technology. Increasingly, public and private communications networks are being built and expanded using various packet technologies, such as Internet Protocol (IP).
A network device, such as a switch or router, typically receives, processes, and forwards or discards a packet. For example, an enqueuing component of such a device receives a stream of various sized packets which are accumulated in an input buffer. Each packet is analyzed, and an appropriate amount of memory space is allocated to store the packet. The packet is stored in memory, while certain attributes (e.g., destination information and other information typically derived from a packet header or other source) are maintained in separate memory. Once the entire packet is written into memory, the packet becomes eligible for processing, and an indicator of the packet is typically placed in an appropriate destination queue for being serviced according to some scheduling methodology.
When there is a contention for resources, such as on output links of a packet switching system or interface or even for compute cycles in a computing device, it is important for resources to be allocated or scheduled according to some fairness policy. Moreover, the amount of work required to schedule and to enqueue and dequeue a packet or other scheduled item is important, especially as the operating rate of systems increases. Ordinary round-robin (RR) is an algorithm commonly used for sharing a common resource between several clients. All clients are served in a cyclic order. In each round, every client is served if it is eligible. When served, each client is permitted to send one packet. Servicing of queues is simple to implement and can be done in constant time, but, due to the varying size of packets, does not allocate bandwidth fairly.
In some scenarios, high priority (e.g., low latency), guaranteed bandwidth, best effort traffic (e.g., spare bandwidth) and other classifications of traffic compete for a common resource. Various known scheduling algorithms are designed to provide isolation, prioritization, and fair bandwidth allocation to traffic competing for a common resource. These are known as fair queuing algorithms. Some examples are Weighted Fair Queuing (WFQ), Self-Clocked Fair Queuing (SCFQ), and Deficit Round Robin/Surplus Round Robin (referred to as DRR). WFQ and SCFQ depend upon arrival times as well as previous link utilization in order to calculate the next best packet to send. The accepted "ideal" behavior is bit-by-bit round robin, which states that the next packet to send should be the packet that would complete first if all packets currently in the system sent one bit per round-robin pass (which is typically not practical in a packet-based system). The inclusion of arrival times in the scheduling algorithm, in order to calculate theoretical ideal departure times, typically requires insertion into a sorted list, which is known to be an O(log N) problem in the worst case, where N is the number of packets already enqueued.
DRR is an algorithm used for sharing a common resource between several clients with different ratios between clients (i.e., some clients are allowed to consume more of the resources than others). The ratio between clients is typically defined by a parameter called a quantum.
DRR services queues using round-robin servicing with a quantum assigned to each queue. Unlike traditional round-robin, multiple packets up to the specified quantum can be sent resulting in each queue sending at least a quantum's worth of bytes. If the quantum for each queue is equal, then each queue will send an equal amount of bandwidth.
This DRR approach works in rounds, where a round is one round-robin iteration over the queues that have items to be sent. Typically, when the queue is scheduled, it is allowed to transmit until its deficit becomes negative, and then the next queue is served. Packets coming in on different flows are stored in different queues. Each round, each queue is allocated a quantum worth of bits, which are added to the deficit of each queue. Each queue is allowed to send out one or more packets in a DRR round, subject to the restriction that it cannot send more bits than indicated by the deficit corresponding to the queue. If there are no more packets in a queue after the queue has been serviced, the deficit corresponding to the queue is set to zero. Otherwise, the remaining amount (i.e., the deficit minus the number of bits sent) is maintained for the next round.
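For orientation, the round-by-round bookkeeping described above can be sketched in a few lines of Python. This is only an illustration of conventional DRR as summarized here, not the method claimed later; queues are modeled as deques of packet lengths in bytes, and the send callback is a placeholder.

    from collections import deque

    def drr_round(queues, quanta, deficits, send):
        """One conventional DRR round (illustrative sketch).

        queues:   list of deques of packet lengths (bytes stand in for packets)
        quanta:   per-queue quantum added each round
        deficits: per-queue deficit carried between rounds
        send:     callback invoked for each packet "sent"
        """
        for q, queue in enumerate(queues):
            if not queue:
                continue                      # only queues holding packets take part
            deficits[q] += quanta[q]          # credit this round's quantum
            while queue and deficits[q] >= 0: # transmit until the deficit goes negative
                pkt_len = queue.popleft()
                deficits[q] -= pkt_len
                send(q, pkt_len)
            if not queue:
                deficits[q] = 0               # an emptied queue forfeits its remainder

    # Illustrative use: a flow of 1500-byte packets and a flow of 64-byte packets
    # are each given a 1500-byte quantum per round.
    queues = [deque([1500] * 4), deque([64] * 50)]
    deficits = [0, 0]
    drr_round(queues, quanta=[1500, 1500], deficits=deficits,
              send=lambda q, n: print(f"queue {q}: sent {n} bytes"))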
DRR is an algorithm that does not depend upon actual arrival times and has a complexity of O(1); that is, the amount of work required is constant and independent of the number of packets enqueued. In order to be work conserving, a packet should be sent every time a queue is scheduled. Thus, the quantum used in DRR should be at least one maximum packet size (MTU) to guarantee that when the quantum is added to any deficit, the resulting value is at least zero. DRR provides fair bandwidth allocation and is easy to implement. It is work conserving and, because of its O(1) properties, it scales well with higher link speeds and larger numbers of queues. However, its scheduling behavior deviates quite a bit from the bit-by-bit round robin "ideal". In particular, latency for a system with N queues is Q*N, where Q is the average quantum, which must be at least one maximum transport unit (MTU).
In one implementation, called modified deficit round robin (MDRR), there is optionally one high priority (HP) client mapped to each resource, in addition to the other non-high priority (NHP) clients mapped to the resource. In this implementation there are two possible modes of operation: low latency mode and fair mode. In low latency mode, the HP client is constantly served until it is not eligible (i.e., either it has an empty queue or it has exceeded its allocated bandwidth). The NHP clients are only served whenever the HP client can't be served. Low latency mode reduces the latency of the HP client (good for voice/video applications), but can cause starvation of NHP clients. In fair mode, one packet is sent from the HP client and one from other NHP clients in alternating order as long as the HP client hasn't reached its maximum allocated bandwidth. Fair mode is one compromise point between the requirement not to starve NHP clients and the low latency requirement of HP clients.
Needed are new methods and apparatus for scheduling of packets in a packet switching system, especially those which may provide good latency performance while maintaining O(1) work properties, and/or have other desirable characteristics.
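The two MDRR modes of operation just described can be summarized by a small selection routine. This is only a sketch of the mode logic; the eligibility test (non-empty queue, bandwidth not exceeded) and the client representation are assumptions made for illustration.

    def pick_next_client(hp_eligible, nhp_clients, mode, hp_served_last):
        """Return which client an MDRR scheduler would serve next (sketch).

        hp_eligible:    HP client has packets and has not exceeded its bandwidth
        nhp_clients:    round-robin list of eligible non-high-priority clients
        mode:           "low_latency" or "fair"
        hp_served_last: whether the previous packet came from the HP client
        """
        if mode == "low_latency":
            # Serve HP whenever it is eligible; NHP clients only fill the gaps,
            # which minimizes HP latency but can starve NHP clients.
            if hp_eligible:
                return "HP"
            return nhp_clients[0] if nhp_clients else None
        # Fair mode: alternate one HP packet with one NHP packet as long as the
        # HP client has not reached its maximum allocated bandwidth.
        if hp_eligible and not hp_served_last:
            return "HP"
        if nhp_clients:
            return nhp_clients[0]
        return "HP" if hp_eligible else None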
SUMMARY OF THE INVENTION
Methods and apparatus are disclosed for scheduling items in a system, such as, but not limited to a computer or communications system. In one embodiment, multiple sets of items requiring processing to be performed are maintained. A current set of the multiple sets of items is identified. Each of the items in the current set is processed. In one embodiment, this processing includes identifying a particular item in the current set, adding a quantum to a deficit of the particular item, processing an entity corresponding to the particular item, decreasing the deficit of the particular item by a value, determining if the deficit allows the particular item to be further processed, and moving the particular item from the current set to another of the multiple sets of items after said determining if the deficit allows the particular item to be further processed.
BRIEF DESCRIPTION OF THE DRAWINGS
The appended claims set forth the features of the invention with particularity. The invention, together with its advantages, may be best understood from the following detailed description taken in conjunction with the accompanying drawings, of which:
FIG. 1 is a block diagram of one embodiment for scheduling entities and items using quantum and deficit values;
FIG. 2 is a block diagram of one embodiment for scheduling entities and items using quantum and deficit values;
FIGs. 3A-D are block diagrams of data structures used in one embodiment;
FIG. 4 is a block diagram of an enqueue process used in one embodiment to ensure a scheduler is aware of an item with an entity to be scheduled;
FIG. 5 is a flow diagram of a process used in one embodiment for scheduling the processing of entities and items within one or more sets of items;
FIG. 6 is a flow diagram of a process used in one embodiment for scheduling the processing of entities and items within one or more sets of items; and
FIG. 7 is a flow diagram of a process used in one embodiment for scheduling the processing of entities and items within one or more sets of items.
DETAILED DESCRIPTION
Methods and apparatus are disclosed for scheduling entities and items using quantum and deficit values, especially in communications and computer systems. Embodiments described herein include various elements and limitations, with no one element or limitation contemplated as being a critical element or limitation. Each of the claims individually recites an aspect of the invention in its entirety. Moreover, some embodiments described may include, but are not limited to, inter alia, systems, networks, integrated circuit chips, embedded processors, ASICs, methods, and computer-readable media containing instructions. The embodiments described hereinafter embody various aspects and configurations within the scope and spirit of the invention, with the figures illustrating exemplary and non-limiting configurations.
As used herein, the term "packet" refers to packets of all types or any other units of information or data, including, but not limited to, fixed length cells and variable length packets, each of which may or may not be divisible into smaller packets or cells. The term "packet" as used herein also refers to both the packet itself or a packet indication, such as, but not limited to all or part of a packet or packet header, a data structure value, pointer or index, or any other part or identification of a packet. Moreover, these packets may contain one or more types of information, including, but not limited to, voice, data, video, and audio information. The term "item" is used herein to refer to a packet or any other unit or piece of information or data.
The term "system" is used generically herein to describe any number of components, elements, sub-systems, devices, packet switch elements, packet switches, routers, networks, computer and/or communication devices or mechanisms, or combinations of components thereof. The term "computer" is used generically herein to describe any number of computers, including, but not limited to personal computers, embedded processors and systems, control logic, ASICs, chips, workstations, mainframes, etc. The term "device" is used generically herein to describe any type of mechanism, including a computer or system or component thereof. The terms "task" and "process" are used generically herein to describe any type of running program, including, but not limited to a computer process, task, thread, executing application, operating system, user process, device driver, native code, machine or other language, etc., and can be interactive and/or non-interactive, executing locally and/or remotely, executing in foreground and/or background, executing in the user and/or operating system address spaces, a routine of a library and/or standalone application, and is not limited to any particular memory partitioning technique. The steps, connections, and processing of signals and information illustrated in the figures, including, but not limited to the block and flow diagrams, are typically performed in a different serial or parallel ordering and/or by different components and/or over different connections in various embodiments in keeping within the scope and spirit of the invention.
Moreover, the terms "network" and "communications mechanism" are used generically herein to describe one or more networks, communications mediums or communications systems, including, but not limited to the Internet, private or public telephone, cellular, wireless, satellite, cable, local area, metropolitan area and/or wide area networks, a cable, electrical connection, bus, etc., and internal communications mechanisms such as message passing, interprocess communications, shared memory, etc. The tenn "storage mechanism" includes any type of memory, storage device or other mechanism for maintaining instructions or data in any format. "Computer-readable medium" is an extensible term including any memory, storage device, storage mechanism, and other storage and signaling mechanisms including interfaces and devices such as network interface cards and buffers therein, as well as any communications devices and signals received and transmitted, and other current and evolving technologies that a computerized system can interpret, receive, and/or transmit. The term "memory" includes any random access memory (RAM), read only memory (ROM), flash memory, integrated circuits, and/or other memory components or elements. The term "storage device" includes any solid state storage media, disk drives, diskettes, networked services, tape drives, and other storage devices. Memories and storage devices may store computer-executable instructions to be executed a processor and/or control logic, and data which is manipulated a processor and/or control logic. The term "data structure" is an extensible term referring to any data element, variable, data structure, data base, and/or one or more or an organizational schemes that can be applied to data to facilitate interpreting the data or performing operations on it, such as, but not limited to memory locations or devices, sets, queues, trees, heaps, lists, linked lists, arrays, tables, pointers, etc. A data structure is typically maintained in a storage mechanism. The terms "first," "second," etc. are typically used herein to denote different units (e.g., a first element, a second element). The use of these terms herein does not necessarily connote an ordering such as one unit or event occurring or coming before the another, but rather provides a mechanism to distinguish between particular units. Moreover, the phrase "based on x" is used to indicate a minimum set of items x from which something is derived, wherein "x" is extensible and does not necessarily describe a complete list of items on which the operation is based. Additionally, the phrase "coupled to" is used to indicate some level of direct or indirect connection between two elements or devices, with the coupling device or devices modify or not modifying the coupled signal or communicated information. The term "subset" is used to indicate a group of all, less than all, or none of the elements of a set. Moreover, the term "or" is used herein to identify an alternative selection of one or more, including all, of the conjunctive items.
Methods and apparatus are disclosed for scheduling items (e.g., packets, processes, etc.) in a system, such as, but not limited to a computer or communications system (e.g., router, switch, etc.). In one implementation, multiple sets of items requiring processing to be performed are maintained. A current set of the multiple sets of items is identified. Each of the items in the current set is processed. In one implementation, this processing includes identifying a particular item in the current set, adding a quantum to a deficit of the particular item, processing an entity corresponding to the particular item, decreasing the deficit of the particular item by a value, determining if the deficit allows the particular item to be further processed, and moving the particular item from the current set to another of the multiple sets of items after said determining if the deficit allows the particular item to be further processed.
In one embodiment, multiple sets of items requiring processing to be performed are maintained, wherein these sets of items are processed in some deterministic sequence. A current set is selected and processed until empty. An item is selected from the current set and a quantum is added to the deficit of the particular item. While the deficit of the item is not negative (or non-positive), the entities of the item are processed (e.g., some corresponding action is performed, a packet is sent from a queue, a process is allowed some number of CPU cycles, etc.), and the item's deficit is decreased by some corresponding value. After the deficit of the item becomes negative, the item is moved from the current list to another of the multiple lists, typically based on the item's current deficit, quantum, current set, and the deterministic sequence, such that by the next time the item is referenced, it will have accumulated enough quantum to perform at least one action (e.g., send one packet). In one embodiment, the size of the quantum is approximately the maximum packet size divided by the number of sets of items. If an item becomes empty, it is not further processed nor placed in another active set until it has entities requiring processing, and then it is placed in the appropriate set.
In one embodiment, multiple sets of items requiring processing to be performed are maintained, wherein these sets of items are processed in some deterministic sequence. A current set is selected and processed until empty. The current set is sequenced through, and during the first such sequencing through, the quantum is added to the deficit of each item. During this sequencing, one entity of an item is allowed to be processed (e.g., some corresponding action is performed, a packet is sent from a queue, a process is allowed some number of CPU cycles, etc.). In one embodiment, this processing corresponds to one entity (e.g., a packet, a number of CPU cycles, etc.). In one embodiment, a mini-quantum worth of processing is performed on the item, which may allow multiple entities to be processed. The item's deficit is decreased by some corresponding value, and a next item is processed. However, if the item becomes empty, it is removed from the set, or if the item's deficit becomes negative (or non-positive), the item is placed in another set. In one embodiment, two sets are used: a currently active set and a currently inactive set, and thus the item would be placed in the currently inactive set. When the currently active set becomes empty, it becomes the currently inactive set, the other set becomes the currently active set, and processing repeats. In one embodiment, more than two sets are used, in which case the item is moved from the current list to another of the multiple lists, typically based on the item's current deficit, quantum, current set, and the deterministic sequence, such that by the next time the item is referenced, it will have accumulated enough quantum to perform at least one action (e.g., send one packet). In one embodiment, allowing a mini-quantum's worth of processing of a unit at a time increases packet interleaving and may reduce jitter, especially when the processing required by individual entities varies, such as when the lengths of packets vary greatly.
In one embodiment, every item has a deficit value. Each scheduling time a packet is sent from the client, the packet length is subtracted from the deficit.
The client is allowed to transmit only a single packet in every scheduling round, and the client is allowed to participate in the next scheduling round if its deficit is still not negative. Once the deficit becomes negative, the client is not allowed to participate in the scheduling round. In one embodiment, this is done by removing the client from the active list (e.g., the list currently being serviced) and putting it into an inactive list (e.g., a list not currently being serviced, but containing clients to be scheduled). Once all the clients' demands have been served, as identified by the active list being empty, a next list becomes the active list and the former active list becomes an inactive list. In one embodiment, only two lists are used, while in one embodiment more than two lists are used. In one embodiment, the quantum used is at least one MTU to ensure a packet is sent each time a client is processed.
In one embodiment, a source is serviced (e.g., a packet is sent) when its turn comes, as long as its deficit, mini-deficit, and leaky bucket are not negative. A source is given a new mini-quantum every round (when its turn comes) and a new quantum when it is passed from the active list to the inactive list. The active list is the list of sources that have not completed their bandwidth share (i.e., their deficit is not negative). The inactive list is the list of sources that have completed their bandwidth share (i.e., their deficit became negative). As sources from the active list transmit packets, the active list becomes shorter and shorter until it becomes empty. When the active list becomes empty, the active list and the inactive list are switched.
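A minimal sketch of this two-list mechanism follows, assuming each client is simply a queue of packet lengths and ignoring the mini-deficit and leaky-bucket refinements mentioned above; the point at which the quantum is credited and the data layout are illustrative assumptions.

    from collections import deque

    def two_list_scheduler(queues, quantum, send):
        """Two-list sketch: one packet per turn, active/inactive lists swapped.

        queues:  dict mapping client id -> deque of packet lengths
        quantum: credit given when a client moves to the inactive list
        send:    callback invoked for each packet "sent"
        """
        deficits = {c: quantum for c in queues}      # initial allowance per client
        active = deque(c for c in queues if queues[c])
        inactive = deque()
        while active or inactive:
            if not active:                           # round complete: swap the lists
                active, inactive = inactive, active
            client = active.popleft()
            pkt = queues[client].popleft()           # one packet per scheduling turn
            send(client, pkt)
            deficits[client] -= pkt
            if not queues[client]:
                continue                             # drained clients leave the scheduler
            if deficits[client] < 0:                 # bandwidth share used for this round
                deficits[client] += quantum          # credit for the next round
                inactive.append(client)
            else:
                active.append(client)                # still eligible this round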
In one embodiment, two linked lists are maintained: an active linked list consisting of clients with a positive deficit value (i.e., the clients are eligible to send packets in this DRR round); and an inactive linked list consisting of clients with a non-positive deficit value (i.e., the clients are not eligible to send packets in this DRR round). A programmable mini-quantum parameter per client is used along with the mini-deficit to determine the number of bytes that will be sent by each client in each round. In one embodiment, there is only one mini-deficit variable used, while in one embodiment, a mini-deficit variable is used for each item, which allows, in one embodiment, the amount of negative mini-deficit for each item to be maintained within and/or across rounds.
The active linked list is scanned, and each client is permitted to send mini-quantum bytes (or a little more, because whole packets are sent). When the corresponding mini-deficit becomes non-positive (or negative, depending on the embodiment), the next client on the active linked list is served. The appropriate number of bytes is subtracted from the mini-deficit after each packet is sent from a client. Usually a client is not dequeued from the active linked list when its mini-deficit becomes non-positive. Only when the deficit becomes non-positive is a client dequeued from the active linked list, and it is then either enqueued to the inactive clients list or returned to the free linked list (when the client becomes ineligible to send packets due to a maximum bandwidth allocation, or when a client doesn't have packets to send). The active linked list becomes shorter and shorter as clients complete sending their quantum. When a client completes sending its quantum, assuming it has packets to send and is still eligible to send packets, it will be dequeued from the active linked list and enqueued to the inactive linked list. In one embodiment, a status bit is set stating that this client didn't get its mini-quantum for this round. This status bit may be a global bit for all clients on the same list or particular to a client. When the active linked list becomes empty, it becomes inactive and the previous inactive client linked list becomes active. In the first round, all clients get their mini-quantum. In one embodiment, two linked lists are used, as clients that finished their quantum are distinguished from clients that finished their mini-quantum but not their quantum.
FIG. 1 illustrates one embodiment of a system such as, but not limited to, a computer or communications system, which schedules entities and/or items according to the invention. Packets or other entities, or indications thereof, 101-109 to be scheduled are received from one or more interfaces, processes, threads, information streams, or other sources 100. These packets or other entities 101-109 are typically stored in items / queues 111-119 for scheduling by scheduler 120. In one embodiment, such as one processing packets or communication streams, scheduler 120 processes one or more streams of packets or other entities 101-109 to produce one or more streams of outputs 121-129. For example, in one embodiment, streams of packets 101-109 are stored in queues 111-119. Scheduler 120 schedules packets from queues 111-119 to produce one or more streams of packets 121-129.
FIG. 2 illustrates one embodiment of a system 200 such as, but not limited to, a computer or communications system, scheduling entities and/or items according to the invention. In one embodiment, system 200 schedules internally generated entities, such as, but not limited to processes, threads, information streams, packets or other entities. In one embodiment, system 200 receives externally generated entities and/or internally generates entities, and schedules the processing and/or sending of these entities by system 200 or by another system. For example, in one embodiment, system 200 schedules internal processes, threads, etc. In one embodiment, system 200 receives streams of packets (or other information units), schedules packets belonging to these streams, and generates one or more external streams of scheduled packets.
In one embodiment, system 200 includes a processor 201, memory 202, storage devices 203, and optionally one or more incoming and/or outgoing entity interfaces 204, which are typically coupled via one or more communications mechanisms 209 (shown as a bus for illustrative purposes). Various embodiments of system 200 may include more or fewer elements. The operation of system 200 is typically controlled by processor 201 using memory 202 and storage devices 203 to perform one or more scheduling tasks or processes. Memory 202 is one type of computer-readable medium, and typically comprises random access memory (RAM), read only memory (ROM), flash memory, integrated circuits, and/or other memory components. Memory 202 typically stores computer-executable instructions to be executed by processor 201 and/or data which is manipulated by processor 201 for implementing functionality in accordance with the invention. Storage devices 203 are another type of computer-readable medium, and typically comprise solid state storage media, disk drives, diskettes, networked services, tape drives, and other storage devices. Storage devices 203 typically store computer-executable instructions to be executed by processor 201 and/or data which is manipulated by processor 201 for implementing functionality in accordance with the invention.
FIGs. 3A-3D illustrate exemplary data structures used in one or more of an unlimited number of embodiments of the invention. FIG. 3A illustrates a data structure used to identify an item and to maintain a deficit for the item. One embodiment uses all fields 301-305 of data structure 300. One embodiment uses less than all of fields 301-305 of data structure 300.
As shown, data structure 300 includes an item identifier field 301 used to indicate the particular item (e.g., a set of queues or other items), an item deficit field 302 used to maintain a deficit value for the item, an item quantum field 303 used to indicate a quantum for the item (quantum values in one embodiment may vary between items to provide varying amounts of bandwidth, processing time, etc. for the items; if they are the same, then item quantum field 303 is typically not used or included), and an item pointer field 304 used in a linked list of items, such as that illustrated in FIG. 3B. In one embodiment, an additional field for storing a mini-quantum value is included in data structure 300, which may be especially useful if items use different mini-quantum values.
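For concreteness, a record along the lines of data structure 300 could be laid out as follows. The field names mirror FIG. 3A, the linked-list pointer is shown as an object reference, and the mini-quantum field is the optional addition noted above; all of this is illustrative rather than a prescribed layout.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ItemRecord:
        item_id: int                              # field 301: identifies the item (e.g., a queue)
        deficit: int = 0                          # field 302: deficit value maintained for the item
        quantum: int = 0                          # field 303: per-item quantum (may differ per item)
        next_item: Optional["ItemRecord"] = None  # field 304: link to the next item in its set
        mini_quantum: Optional[int] = None        # optional field: per-item mini-quantum value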
FIG. 3B illustrates multiple sets 360, 370 and 380 having respective items 362-364, 372-374, and 382-384 which require processing to be performed. In one embodiment, items 362-364, 372-374, and 382-384 correspond to queues containing packets or other entities to be processed. In one embodiment, items 362-364, 372-374, and 382-384 correspond to processes or threads requiring CPU time. As shown, each set 360, 370 and 380 is in the form of a linked list (numerous other data structures are used in other embodiments). A linked list is convenient for maintaining an ordered set (e.g., a list) of items. In the embodiment illustrated, each list or set 360, 370 and 380 has a head pointer 361, 371, and 381, and a tail pointer 365, 375, and 385. In one embodiment, a particular item pointer 350 is used to indicate one of the items 362-364, 372-374, and 382-384 currently being processed.
FIG. 3C further illustrates a mini-deficit data structure or counting mechanism 310 used in one embodiment to maintain a mini-deficit value for an item that is currently being processed, such as that described in the flow diagram of FIG. 7 further described hereinafter.
FIG. 3D illustrates a data structure comprising an item 390 used in one embodiment. As shown, item 390 comprises multiple entities 391-399. In one embodiment, item 390 corresponds to a queue, and entities 391-399 correspond to packets to be scheduled and sent. In one embodiment, an entity corresponds to a unit of work performed on an item.
FIG. 4 illustrates a process used in one embodiment to ensure that an item requiring processing is scheduled. Processing begins with process block 400, and proceeds to process block 402, wherein an entity (or indication thereof) requiring processing is received, and the entity is placed in the appropriate item in process block 404. Next, as determined in process block 406, if the item is already in the scheduling system, then processing returns to process block 402. Otherwise, in process block 408, the deficit of the item is set to its quantum value, and in process block 410, an indication of the item is provided to the scheduler or possibly the item is added to one of the scheduling sets in the scheduling data structure. In one embodiment, the quantum is the same for all items. In one embodiment, the quantum is not the same for all items, as, for example, one item (e.g., queue) might be allocated more bandwidth for sending packets than another item. In one embodiment, the initialization of the deficit of an item is performed in another process (e.g., that depicted in FIGs. 4, 5 or 6), but the initialization in process block 408 may save one or more memory accesses, which may be important in some embodiments. Processing then returns to process block 402.
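In code, the FIG. 4 enqueue flow might look roughly like the following. The item attributes (entities, in_scheduler) and the scheduler interface are assumptions made for illustration.

    def enqueue(entity, item, scheduler):
        """Sketch of the FIG. 4 enqueue flow for a newly received entity."""
        item.entities.append(entity)          # block 404: place the entity in its item
        if item.in_scheduler:                 # block 406: item already known to the scheduler
            return
        item.deficit = item.quantum           # block 408: prime the deficit with the item's quantum
        item.in_scheduler = True
        scheduler.add_item(item)              # block 410: make the scheduler aware of the item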
FIG. 5 illustrates a scheduling process used in one embodiment. Processing begins with process block 500, and proceeds to process block 502, wherein a first set is selected. Next, as determined in process block 504, if there are no items in the selected set, then in process block 506, a next set is selected and processing returns to process block 504. Otherwise, in process block 508, an item, such as the first one if an ordered set is used or another item, is selected from the set. In process block 510, the information corresponding to the item is retrieved. Next, in process block 512, an entity, typically the first entity, is processed. In one embodiment, a packet is sent. Next, in process block 514, the deficit of the item is decreased by some constant or variable value, such as the size of the sent packet or other value commensurate with the processing performed. Note that, in one embodiment, the quantum used may be less than one maximum transport unit (MTU). In one embodiment, the quantum used is at least (1 MTU/(S-1)), where S is the number of sets and S>1.
Next, as determined in process block 516, if there are no more entities in the item, then in process block 518, the item is removed from the set as it has no more processing to be performed, and processing returns to process block 504. Otherwise, as determined in process block 520, if the deficit does not allow the item to be further processed (e.g., the deficit is less than a predetermined constant value such as zero or determined via another comparison), then, in process block 522, the item is placed in another of the sets of items, typically based on the value of the deficit for the item, its quantum, and the current set, such that the item will not be manipulated further until it actually would have developed enough quantum to process one of its entities. Additionally, the deficit for the item is increased by an appropriate additional quantum corresponding to the next time the item is serviced or processed. In one embodiment, an item's deficit is increased by n quantum, where n corresponds to the position of the set (in which the item is placed) in relative order of servicing of sets, where the value of n is typically an integer greater than or equal to one. For example, if the item is placed in the next set to be serviced, its deficit will be increased by one quantum, and if the item is placed in the second next set to be serviced, its deficit will be increased by two quanta, and so forth. Processing then returns to process block 504. Otherwise, processing returns to process block 512 to process another entity.
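Putting the FIG. 5 steps together, a compact sketch over S sets might read as follows. Items are plain dictionaries holding a deficit and a list of entity (packet) lengths, deficits are assumed to have been primed as in FIG. 4, and the rule used here for choosing the destination set (the ceiling of the deficit shortfall divided by the quantum) is an assumption consistent with the n-quantum credit described above.

    import math

    def fig5_scheduler(sets, quantum, send):
        """Sketch of the FIG. 5 scheduling flow (illustrative, not the claimed method).

        sets:    list of S lists of items; each item is a dict with "id", "deficit",
                 and "entities" (a list of packet lengths)
        quantum: quantum credited per set that the item is advanced past
        send:    callback invoked for each processed entity
        """
        S = len(sets)
        current = 0                                 # block 502: start with a first set
        while any(sets):
            if not sets[current]:                   # blocks 504/506: empty set, take the next one
                current = (current + 1) % S
                continue
            item = sets[current].pop(0)             # blocks 508/510: select an item
            while True:
                pkt = item["entities"].pop(0)       # block 512: process one entity (send a packet)
                send(item["id"], pkt)
                item["deficit"] -= pkt              # block 514: charge the deficit
                if not item["entities"]:            # blocks 516/518: nothing left, drop the item
                    break
                if item["deficit"] < 0:             # block 520: deficit exhausted
                    # Block 522 (assumed placement rule): advance the item n sets and
                    # credit it n quanta; with a quantum of at least MTU/(S-1), n stays below S.
                    n = math.ceil(-item["deficit"] / quantum)
                    item["deficit"] += n * quantum
                    sets[(current + n) % S].append(item)
                    break
                # deficit still allows processing: loop back to block 512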
FIG. 6 illustrates a scheduling process used in one embodiment. Processing begins with process block 600, and proceeds to process block 602, wherein a first set is selected. Next, as determined in process block 604, if there are no items in the selected set, then in process block 606, a next set is selected and processing returns to process block 604. In one embodiment, two sets of items are used.
Otherwise, in process block 608, an item, such as the first one if an ordered set is used or another item, is selected from the set. In process block 610, the information corresponding to the item is retrieved. In process block 612, an entity, typically the first entity, is processed. In one embodiment, a packet is sent. Next, in process block 614, the deficit of the item is decreased by some constant or variable value, such as the size of the sent packet or other value commensurate with the processing performed. Next, as determined in process block 616, if there are no more entities in the item, then in process block 618, the item is removed from the set as it has no more processing to be performed, and processing returns to process block 604. Otherwise, as determined in process block 620, if the deficit does not allow the item to be further processed (e.g., the deficit is less than a predetermined constant value such as zero or determined via another comparison), then, in process block 622, the item is removed from the current set and placed in another of the sets of items, typically based on the value of the deficit for the item, its quantum, and the current set, such that the item will not be manipulated further until it actually would have developed enough quantum to process one of its entities. Additionally, the deficit for the item is increased by an appropriate additional quantum corresponding to the next time the item is serviced or processed. Processing then returns to process block 604. Otherwise, processing returns to process block 608 to select and process a next item in the set (or the same item if it is the only one in the set).
FIG. 7 illustrates a scheduling process used in one embodiment. Processing begins with process block 700, and proceeds to process block 702, wherein a first set is selected. Next, as determined in process block 704, if there are no items in the selected set, then in process block 706, a next set is selected and processing returns to process block 704. Otherwise, in process block 708, an item, such as the first one if an ordered set is used or another item, is selected from the set. In process block 710, the information corresponding to the item is retrieved and a mini-deficit value is initialized to a mini-quantum value. In one embodiment, the same mini-quantum is used for all items, while in one embodiment, the mini-quantum is not the same for all items. Note that if a mini-quantum value is set to less than a minimum-sized packet, then the process illustrated in FIG. 6 is achieved (with some extraneous steps) in one embodiment.
Next, in process block 712, an entity, typically the first entity, is processed. In one embodiment, a packet is sent. Next, in process block 714, the deficit and mini-deficit of the item are each decreased by some constant or variable value, such as the size of the sent packet or other value commensurate with the processing performed. Next, as determined in process block 720, if there are no more entities in the item, then in process block 722, the item is removed from the set as it has no more processing to be performed, and processing returns to process block 704. Otherwise, as determined in process block 724, if the deficit does not allow the item to be further processed (e.g., the deficit is less than a predetermined constant value such as zero or determined via another comparison), then, in process block 728, the item is removed from the current set and placed in another of the sets of items, typically based on the value of the deficit for the item, its quantum, and the current set, such that the item will not be manipulated further until it actually would have developed enough quantum to process one of its entities. Additionally, the deficit for the item is increased by an appropriate additional quantum corresponding to the next time the item is serviced or processed. Processing then returns to process block 704.
Otherwise, if, as determined in process block 726, the mini-deficit does not allow the item to be further processed (e.g., the mini-deficit is less than a predetermined constant value such as zero or determined via another comparison), processing returns to process block 704. Otherwise, processing returns to process block 716 to process another entity, as the allocated deficit and the mini-deficit were not previously completely used.
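A sketch of the FIG. 7 flow, extending the same structure with a per-visit mini-deficit, might read as follows. As before, items are plain dictionaries; crediting a single quantum and moving the item to the next set when its deficit is exhausted, and re-appending it to the current set when only its mini-deficit is exhausted, are simplifying assumptions rather than the exact placement rules described above.

    def fig7_scheduler(sets, quantum, mini_quantum, send):
        """Sketch of the FIG. 7 flow: a mini-deficit caps the work done per visit,
        interleaving items within a set (illustrative, not the claimed method)."""
        S = len(sets)
        current = 0
        while any(sets):
            if not sets[current]:                    # blocks 704/706: advance to a next set
                current = (current + 1) % S
                continue
            item = sets[current].pop(0)              # block 708: select an item
            mini_deficit = mini_quantum              # block 710: initialize the mini-deficit
            while True:
                pkt = item["entities"].pop(0)        # block 712: process one entity (send a packet)
                send(item["id"], pkt)
                item["deficit"] -= pkt               # block 714: charge deficit and mini-deficit
                mini_deficit -= pkt
                if not item["entities"]:             # blocks 720/722: item has no more work
                    break
                if item["deficit"] < 0:              # blocks 724/728: quantum used up
                    item["deficit"] += quantum       # credit against its next service time
                    sets[(current + 1) % S].append(item)
                    break
                if mini_deficit < 0:                 # block 726: mini-quantum used up this visit
                    sets[current].append(item)       # remain in the current set for a later turn
                    break
                # both deficits still allow processing: loop back to block 712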
In view of the many possible embodiments to which the principles of our invention may be applied, it will be appreciated that the embodiments and aspects thereof described herein with respect to the drawings/figures are only illustrative and should not be taken as limiting the scope of the invention. For example and as would be apparent to one skilled in the art, many of the process block operations can be re-ordered to be performed before, after, or substantially concurrent with other operations. Also, many different forms of data structures could be used in various embodiments. The invention as described herein contemplates all such embodiments as may come within the scope of the following claims and equivalents thereof.

Claims

CLAIMS
What is claimed is:
1. A method comprising: maintaining a plurality of sets of items requiring processing to be performed; identifying a current set of the plurality of sets of items; and processing items in the current set, said processing the items including: identifying a particular item in the current set; adding a quantum to a deficit of the particular item; processing an entity corresponding to the particular item; decreasing the deficit of the particular item by a value; determining if the deficit allows the particular item to be further processed; and moving the particular item from the current set to another of the plurality of sets of items after said determining if the deficit allows the particular item to be further processed.
2. The method of claim 1, wherein each of the items in the plurality of sets of items represents a queue.
3. The method of claim 2, wherein the entity represents a packet.
4. The method of claim 3, wherein the value corresponds to a size of the packet.
5. The method of claim 3, wherein the value corresponds to a constant value.
6. The method of claim 1, wherein the entity represents a process, thread, or information stream.
7. The method of claim 1, wherein the value corresponds to a variable amount related to said processing of the entity or a constant value.
8. The method of claim 1, wherein said determining if the deficit allows the particular item to be further processed includes comparing a value of the deficit to a constant value or zero.
9. The method of claim 1, wherein said processing items further comprises selecting said another of the plurality of sets of items based on the deficit and the quantum of the particular item.
10. The method of claim 1, wherein said processing items further comprises removing the particular item from all of the plurality of sets when the particular item no longer has any corresponding entities remaining.
11. The method of claim 1, wherein said processing items further comprises leaving the particular item in the current set when said determining determines that the deficit allows the particular item to be further processed.
12. The method of claim 11, wherein said moving the particular item from the current set to said another of the plurality of sets of items is performed only when said determining determines that the deficit does not allow the particular item to be further processed.
13. The method of claim 11, wherein a group including said processing the entity, said decreasing the deficit, and said determining is repeatedly performed on the particular item until said determining determines that the deficit does not allow the particular item to be further processed.
14. The method of claim 1, wherein said processing items further comprises setting the particular item to be a different item in the current set.
15. The method of claim 14, wherein a group including said processing the entity, said decreasing the deficit, said determining and said setting is repeatedly performed until the current set becomes empty.
16. The method of claim 1, wherein said processing items further comprises: initializing a mini-deficit associated with the particular item; decreasing the mini-deficit of the particular item by a second value; determining if the mini-deficit allows the particular item to be further processed; and setting the particular item to be a different item in the current set after said determining if the mini-deficit allows the particular item to be further processed.
17. The method of claim 16, wherein a group including said processing the entity, said decreasing the deficit, said decreasing the mini-deficit, said determining if the mini-deficit allows and said determining if the deficit allows is repeatedly performed on the particular item until said determining if the deficit allows determines that the deficit does not allow the particular item to be further processed or until said determining if the mini-deficit does not allow the particular item to be further processed.
18. The method of claim 16, wherein the value and the second value are equal.
19. The method of claim 1, further comprising selecting and processing a particular next set from the plurality of sets of items.
20. The method of claim 19, wherein said selecting and processing the particular next set is performed in response to determining the current set to be empty.
21. A computer-readable medium containing computer-executable instructions for performing the method of claim 1.
22. An apparatus comprising: a plurality of queues for buffering a plurality of packets; and a scheduler to perform scheduling of the sending of packets of the plurality of packets, the scheduler including: control logic; and one or more memory devices configured to maintain a plurality of sets of indications of queues of the plurality of queues requiring processing to be performed; wherein said scheduling includes: identifying a current set of the plurality of sets of indications; and processing queues corresponding to the one or more indications of queues in the current set, said processing including: identifying a particular indication of a particular queue of the plurality of queues in the current set; adding a quantum to a deficit of the particular indication; sending a packet from the particular queue corresponding to the particular indication; decreasing the deficit by a value; determining if the deficit allows the particular queue to be further processed; and moving the particular indication from the current set to another of the plurality of sets of indications of queues after said determining if the deficit allows the particular queue to be further processed.
23. The apparatus of claim 22, wherein the value corresponds to a size of the packet.
24. The apparatus of claim 22, wherein said processing queues further comprises selecting said another of the plurality of sets of indications based on the deficit and the quantum.
25. The apparatus of claim 22, wherein said processing queues further comprises leaving the particular indication in the current set when said determining determines that the deficit allows the particular queue to be further processed.
26. The apparatus of claim 25, wherein a group including said sending the packet, said decreasing the deficit, and said determining is repeatedly performed based on the particular indication until said determining determines that the deficit does not allow the particular queue to be further processed.
27. The apparatus of claim 22, wherein said moving the particular indication from the current set to another of the plurality of sets of indications is performed only when said determining determines that the deficit does not allow the particular queue to be further processed.
28. The apparatus of claim 22, wherein said processing queues further comprises setting the particular indication to be a different particular indication in the current set of the plurality of indications.
29. The apparatus of claim 28, wherein a group including said sending the packet, said decreasing the deficit, said determining and said setting is repeatedly performed until the current set becomes empty.
30. The apparatus of claim 22, wherein said processing queues further comprises: initializing a mini-deficit associated with the particular indication; decreasing the mini-deficit by a second value; determining if the mini-deficit allows the particular queue to be further processed; and setting the particular indication to be a different indication in the current set after said determining if the mini-deficit allows the particular queue to be further processed.
31. The apparatus of claim 30, wherein a group including said sending the packet, said decreasing the deficit, said decreasing the mini-deficit, said determining if the mini-deficit allows, and said determining if the deficit allows is repeatedly performed based on the particular indication until said determining if the deficit allows determines that the deficit does not allow the particular queue to be further processed or until said determining if the mini-deficit does not allow the particular queue to be further processed.
32. The apparatus of claim 22, further comprising selecting and processing a particular next set from the plurality of sets of indications.
33. The apparatus of claim 32, wherein said selecting and processing the particular next set is performed in response to determining the current set to be empty.
34. An apparatus comprising: means for maintaining a plurality of sets of items requiring processing to be performed; means for identifying a current set of the plurality of sets of items; and means for processing each of the items in the current set, said means for processing each of the items including: means for identifying a particular item in the current set; means for adding a quantum to a deficit of the particular item; means for processing an entity corresponding to the particular item; means for decreasing the deficit of the particular item by a value; means for determining if the deficit allows the particular item to be further processed; and means for moving the particular item from the current set to another of the plurality of sets of items after said determining if the deficit allows the particular item to be further processed.
35. The apparatus of claim 34, wherein said processing items further comprises setting the particular item to be a different item in the current set.
36. The apparatus of claim 34, further comprising: means for initializing a mini-deficit associated with the particular item; means for decreasing the mini-deficit of the particular item by a second value; means for determining if the mini-deficit allows the particular item to be further processed; and means for setting the particular item to be a different item in the current set after determining if the mini-deficit allows the particular item to be further processed.
37. The apparatus of claim 34, further comprising means for selecting a particular next set from the plurality of sets of items.
PCT/US2003/007872 2002-03-21 2003-03-14 Scheduling using quantum and deficit values WO2003081874A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CA2484427A CA2484427C (en) 2002-03-21 2003-03-14 Scheduling using quantum and deficit values
DE60303444T DE60303444T2 (en) 2002-03-21 2003-03-14 PROCESS CONTROL USING QUANTUM VALUES AND DEFICIT VALUES
AU2003221996A AU2003221996B2 (en) 2002-03-21 2003-03-14 Scheduling using quantum and deficit values
EP03717979A EP1488600B1 (en) 2002-03-21 2003-03-14 Scheduling using quantum and deficit values

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/104,759 2002-03-21
US10/104,759 US7065091B2 (en) 2002-03-21 2002-03-21 Method and apparatus for scheduling and interleaving items using quantum and deficit values including but not limited to systems using multiple active sets of items or mini-quantum values

Publications (1)

Publication Number Publication Date
WO2003081874A1 true WO2003081874A1 (en) 2003-10-02

Family

ID=28452387

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2003/007872 WO2003081874A1 (en) 2002-03-21 2003-03-14 Scheduling using quantum and deficit values

Country Status (8)

Country Link
US (2) US7065091B2 (en)
EP (2) EP1638273B1 (en)
CN (1) CN100593312C (en)
AT (2) ATE508563T1 (en)
AU (1) AU2003221996B2 (en)
CA (1) CA2484427C (en)
DE (1) DE60303444T2 (en)
WO (1) WO2003081874A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7523218B1 (en) * 2002-04-30 2009-04-21 University Of Florida Research Foundation, Inc. O(log n) dynamic router tables for prefixes and ranges
US7342936B2 (en) * 2002-06-17 2008-03-11 Integrated Device Technology, Inc. Method of performing deficit round-robin scheduling and structure for implementing same
US7350208B1 (en) 2002-12-31 2008-03-25 Cisco Technology, Inc. Method and apparatus for scheduling using a resource variable decreased by amounts corresponding to the efficiency of the resource
US7580355B2 (en) 2003-08-25 2009-08-25 Integrated Device Technology, Inc. Method of performing weighted round-robin queue scheduling using a dynamic link list and structure for implementing same
GB2411492B (en) * 2004-02-25 2006-06-07 Patsystems Electronic trading system
US7599381B2 (en) * 2004-12-23 2009-10-06 Cisco Technology, Inc. Scheduling eligible entries using an approximated finish delay identified for an entry based on an associated speed group
US7843940B2 (en) * 2005-06-01 2010-11-30 Cisco Technology, Inc. Filling token buckets of schedule entries
US7809009B2 (en) * 2006-02-21 2010-10-05 Cisco Technology, Inc. Pipelined packet switching and queuing architecture
US8339949B2 (en) * 2007-10-24 2012-12-25 Cortina Systems Inc. Priority-aware hierarchical communication traffic scheduling
CN101621478A (en) * 2009-08-07 2010-01-06 中兴通讯股份有限公司 Method and device for array dispatch
US8335160B2 (en) * 2010-03-30 2012-12-18 Telefonaktiebolaget L M Ericsson (Publ) Flow sampling with top talkers
EP2672461A1 (en) 2012-06-05 2013-12-11 a.tron3d GmbH Method for continuing recordings to detect three-dimensional geometries of objects
CN105429897A (en) * 2014-09-19 2016-03-23 深圳市中兴微电子技术有限公司 Service quality scheduling method and device
CN106254272B (en) * 2016-09-05 2019-08-02 盛科网络(苏州)有限公司 Weighted-fair scheduling method and scheduling system based on hard-wired elasticity

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6247061B1 (en) * 1998-06-09 2001-06-12 Microsoft Corporation Method and computer program product for scheduling network communication packets originating from different flows having unique service requirements
US6377579B1 (en) * 1998-06-11 2002-04-23 Synchrodyne Networks, Inc. Interconnecting a synchronous switching network that utilizes a common time reference with an asynchronous switching network
US6480511B1 (en) * 1998-09-15 2002-11-12 Avaya Technology Corp. Arrangement for synchronization of a stream of synchronous traffic delivered by an asynchronous medium
US6172579B1 (en) * 1999-02-02 2001-01-09 Cleveland Medical Devices Inc. Three point modulated phase locked loop frequency synthesis system and method
US7075934B2 (en) * 2001-01-10 2006-07-11 Lucent Technologies Inc. Method and apparatus for hierarchical bandwidth distribution in a packet network
KR100431191B1 (en) * 2001-12-03 2004-05-12 주식회사 케이티 An apparatus and method for scheduling packets by using a round robin based on credit

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GIROUX N ET AL: "Queuing and Scheduling, SCHEDULING MECHANISMS", QUALITY OF SERVICE IN ATM NETWORKS: STATE-OF-THE-ART TRAFFIC MANAGEMENT, XX, XX, 1999, pages 96 - 109, XP002245024 *
SHREEDHAR M ET AL: "EFFICIENT FAIR QUEUING USING DEFICIT ROUND-ROBIN", IEEE / ACM TRANSACTIONS ON NETWORKING, IEEE INC. NEW YORK, US, vol. 4, no. 3, 1 June 1996 (1996-06-01), pages 375 - 385, XP000591077, ISSN: 1063-6692 *

Also Published As

Publication number Publication date
EP1488600A1 (en) 2004-12-22
DE60303444T2 (en) 2006-09-21
CN100593312C (en) 2010-03-03
US20060176883A1 (en) 2006-08-10
CA2484427A1 (en) 2003-10-02
EP1488600B1 (en) 2006-02-01
US7065091B2 (en) 2006-06-20
CN1643874A (en) 2005-07-20
EP1638273B1 (en) 2011-05-04
CA2484427C (en) 2012-03-13
ATE317196T1 (en) 2006-02-15
US20030214964A1 (en) 2003-11-20
EP1638273A3 (en) 2009-04-01
AU2003221996A1 (en) 2003-10-08
US7212535B2 (en) 2007-05-01
EP1638273A2 (en) 2006-03-22
DE60303444D1 (en) 2006-04-13
ATE508563T1 (en) 2011-05-15
AU2003221996B2 (en) 2008-10-09

Similar Documents

Publication Publication Date Title
US7212535B2 (en) Scheduling items using mini-quantum values
US7876763B2 (en) Pipeline scheduler including a hierarchy of schedulers and multiple scheduling lanes
US8325736B2 (en) Propagation of minimum guaranteed scheduling rates among scheduling layers in a hierarchical schedule
US7606250B2 (en) Assigning resources to items such as processing contexts for processing packets
EP1264430B1 (en) Non-consecutive data readout scheduler
US7453898B1 (en) Methods and apparatus for simultaneously scheduling multiple priorities of packets
US7843940B2 (en) Filling token buckets of schedule entries
US7248594B2 (en) Efficient multi-threaded multi-processor scheduling implementation
EP1741229B1 (en) Weighted random scheduling
WO2002062013A2 (en) Methods and systems providing fair queuing and priority scheduling to enhance quality of service in a network
US7580355B2 (en) Method of performing weighted round-robin queue scheduling using a dynamic link list and structure for implementing same
US7289443B1 (en) Slow-start packet scheduling particularly applicable to systems including a non-blocking switching fabric and homogeneous or heterogeneous line card interfaces
US7565496B2 (en) Sharing memory among multiple information channels
US7350208B1 (en) Method and apparatus for scheduling using a resource variable decreased by amounts corresponding to the efficiency of the resource
US7599381B2 (en) Scheduling eligible entries using an approximated finish delay identified for an entry based on an associated speed group
US7583678B1 (en) Methods and apparatus for scheduling entities using a primary scheduling mechanism such as calendar scheduling filled in with entities from a secondary scheduling mechanism
EP1774721B1 (en) Propagation of minimum guaranteed scheduling rates

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 20038066181

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 2003717979

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2003221996

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: 2484427

Country of ref document: CA

WWP Wipo information: published in national office

Ref document number: 2003717979

Country of ref document: EP

WWG Wipo information: grant in national office

Ref document number: 2003717979

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP