CA2252106A1 - Method and apparatus for processor sharing


Info

Publication number
CA2252106A1
Authority
CA
Canada
Prior art keywords
tickets
processes
thread
processor
threaded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002252106A
Other languages
French (fr)
Inventor
Kelvin K. Yue
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Microsystems Inc
Original Assignee
Sun Microsystems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Microsystems Inc filed Critical Sun Microsystems Inc
Publication of CA2252106A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Abstract

A method and apparatus for implementing proportional sharing in a single processor system and/or in a multi-processor system. The invention can also implement proportional sharing in a system that executes multi-threaded computer programs. The invention uses the metaphor of "tickets." Each process in the system has a number of tickets, and each ticket entitles the process to use a processor for a period of a time quantum. The operating system allocates the processor(s) first to the process with the highest number of tickets. As each process (or thread) finishes being executed for a predetermined amount of time, the tickets of that process/thread are adjusted accordingly and a new process (or thread) is chosen for execution. Tickets can be allocated to each process in the system. Alternatively, tickets can be allocated to each process and shared by all threads of the process. Alternatively, tickets can be allocated to a group of processes and shared by all processes within the group.

Description

CA 02252106 1998-10-28

METHOD AND APPARATUS FOR PROCESSOR SHARING

FIELD OF THE INVENTION
This application relates to operating systems and, specifically, to a method and apparatus that allows processes to share one or more processors in a proportional manner.

BACKGROUND OF THE INVENTION
Most modern computer systems are multi-tasking systems. That is, they allow more than one "job" or "process" to be active at a given time. Since there is often only one processor in the system (or a number of processors less than the number of jobs/processes), it becomes necessary for the jobs/processes to share the processor resources. In a shared processor system, a processor spends some time executing part of a first job/process before switching to execute part of another job/process. Thus, to a human user, it appears that more than one job/process is being executed at a given time.
When multiple applications, processes, or jobs are sharing one or more processors in a computer system, an equitable method of processor sharing needs to be determined. For instance, Job A might be twice as urgent as Job B. As another example, if User X and User Y share a computer system and User X pays a fee three times as large as User Y's, then User X wants to be allocated three times more processor resources than User Y.
Moreover, some computer systems execute "multi-threaded" computer programs in which multiple "threads" of a process can be executing at the same time. Multi-threading adds an extra layer of complexity to the operating system and to processor sharing.
As an example, many conventional operating systems use a priority-based approach to allocate jobs, threads, users, or processes to one or more processors. Thus, jobs having the highest priority execute before jobs having a lower priority. In some conventional operating systems, a high priority job will run until it is done before a lower priority job gets to run.

In at least one implementation of the Solaris operating system (available from Sun Microsystems, Inc.), a highest priority job will run for a period of time and then its priority is redetermined. There are currently four different scheduling classes that define the priorities of the applications in a conventional Solaris system: Real Time (RT), System (SYS), Interactive (IA), and Timesharing (TS).
If, after its execution, a job still has the highest priority in the system, it is allowed to run for another period of time (e.g., between 20 and 200 milliseconds), after which the priority is redetermined again. If it is no longer the highest priority job in the system after the redetermination, then a job that has a higher priority gets to run. Unfortunately, if a job maintains a highest priority, other applications do not always get a chance to execute.

SUMMARY OF THE INVENTION
A first preferred embodiment of the present invention provides a method and apparatus for implementing proportional sharing among single-threaded applications in a single processor system and/or in a multi-processor system. A second preferred embodiment of the present invention provides a method and apparatus that implements proportional sharing among multi-threaded applications in a single processor system and/or a multiple processor system. A third preferred embodiment of the present invention provides a method and apparatus that implements hierarchical sharing among the jobs of users in a single processor system and/or a multiple processor system. Each of these preferred embodiments is described below in detail.
The present invention uses the metaphor of "tickets." In a preferred embodiment, each process in the system has a number of tickets, and each ticket entitles a thread of the process to use a processor for a period of time (a "time quantum"). The operating system allocates processor time first to a thread of the process having the highest number of tickets. As each thread (or process) finishes being executed for a predetermined amount of time, the tickets of the process to which the thread belongs are adjusted accordingly and a new thread is chosen for execution.
As an example, if the ratio of processor allocation for process A and process B is x:y, then process A is given x tickets and process B is given y tickets. The priority of a process (or of threads of the process) is calculated based on the number of tickets that the process currently has. The more tickets a process currently has, the higher the priority of its threads.
The operating system picks for execution the waiting thread that has the highest priority. Each time a thread is picked for execution, the number of tickets that its process currently has is reduced. If the number of tickets held by a process falls to zero, the number of tickets is reset to its initial number of tickets. Tickets can be allocated to each process in the system. Alternatively, tickets can be allocated to each process and shared by all threads of the process. Alternatively, tickets can be allocated to a group of processes and shared by all processes within the group.
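The decrement-and-reset cycle just described (priority derived from the current ticket count, one ticket charged per quantum, reset at zero) can be sketched in a few lines of Python. The class, method names, and the base priority constant are illustrative assumptions, not anything taken from an actual implementation:

```python
BASE_PRIORITY = 50  # arbitrary base value, as in the patent's later example

class Process:
    def __init__(self, name, tickets):
        self.name = name
        self.initial_tickets = tickets   # "#tickets": the initial allocation
        self.current_tickets = tickets   # "#current tickets": charged per quantum

    def priority(self):
        # More current tickets -> higher priority for the process's threads.
        return BASE_PRIORITY + self.current_tickets

    def charge_quantum(self):
        # After a thread of this process runs for one time quantum,
        # the process loses one ticket; when it reaches zero, the
        # count is reset to the initial allocation.
        self.current_tickets -= 1
        if self.current_tickets == 0:
            self.current_tickets = self.initial_tickets

a = Process("A", tickets=2)
b = Process("B", tickets=1)
assert a.priority() == 52 and b.priority() == 51
a.charge_quantum()            # A drops from 2 tickets to 1
assert a.priority() == 51     # A now ties with B
```

The reset-at-zero step is what makes the sharing proportional over time rather than a one-shot priority ordering.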
Alternatively, tickets can be assigned at the user level and shared by all jobs of a particular user.
In accordance with the purpose of the invention, as embodied and broadly described herein, the invention relates to a method of sharing at least one processor in a data processing system between a plurality of processes, including the steps, performed by the data processing system, of: initially assigning a number of tickets to each of the plurality of processes; assigning an initial priority to a thread of each of the plurality of processes in accordance with the number of tickets assigned to the process associated with the thread; and executing the respective threads of the plurality of processes in an order indicated by the tickets assigned to the plurality of processes, so that the proportion of execution time between any two of the threads is the same as the proportion between the numbers of tickets of the two processes associated with the threads.
In further accordance with the purpose of the invention, as embodied and broadly described herein, the invention relates to a method of sharing a processor between a plurality of threads of a plurality of multi-threaded applications in a data processing system, including the steps, performed by the data processing system, of: initially assigning respective numbers of tickets to each of the plurality of multi-threaded applications; assigning a priority to each of the plurality of multi-threaded applications in accordance with a number of tickets assigned to each multi-threaded application; adding a thread of at least one of the plurality of multi-threaded applications to a ticket queue when there is not room for another process on a dispatch queue; and adding a thread of at least one of the plurality of multi-threaded applications to a dispatch queue when room becomes available for another thread on the dispatch queue.
In further accordance with the purpose of the invention, as embodied and broadly described herein, the invention relates to a method of sharing at least one processor between a plurality of threads of a first multi-threaded process and a plurality of threads of a second multi-threaded process in a data processing system, including the steps, performed by the data processing system, of: initially assigning respective numbers of tickets to the first and second multi-threaded processes; assigning a priority to at least one thread of each of the first and second multi-threaded processes in accordance with the number of tickets assigned to each multi-threaded process; and executing the threads of the first and second multi-threaded processes in an order indicated by the tickets assigned to their respective multi-threaded process, so that the proportion of execution time between any two of the processes is the same as the proportion between the numbers of tickets of the two multi-threaded processes.
In further accordance with the purpose of the invention, as embodied and broadly described herein, the invention relates to a data structure stored in a memory storage area of a data processing system, comprising: a data structure representing a first thread of a multi-threaded process; a data structure representing a second thread of a multi-threaded process; and a ticket queue data structure, that is pointed to by the data structures representing the first and second threads, where the ticket queue data structure stores a value indicating an initial number of tickets for the multi-threaded process to which the first and second threads belong and further stores a value indicating a current number of tickets for the multi-threaded process.

A fuller understanding of the invention will become apparent and appreciated by referring to the following description and claims taken in conjunction with the accompanying drawings.

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several embodiments of the invention and, together with the description, serve to explain the principles of the invention.
Fig. 1 shows an example of a plurality of single-threaded applications executing on a data processing system having a single processor, in accordance with a first preferred embodiment of the present invention.
Fig. 2 shows an example of a plurality of single-threaded applications executing on a data processing system having multiple processors, in accordance with a second preferred embodiment of the present invention.
Fig. 3 shows an example of a plurality of multi-threaded applications executing on a data processing system having a single processor, in accordance with a third preferred embodiment of the present invention.
Fig. 4 shows an example of a plurality of multi-threaded applications executing on a data processing system having multiple processors, in accordance with a fourth preferred embodiment of the present invention.
Fig. 5(a) shows an example of a data structure used to implement a preferred embodiment of the present invention in a system where single-threaded applications share a single processor (such as the system of Fig. 1).
Fig. 5(b) shows an example of a data structure used to implement a preferred embodiment of the present invention in a system where single-threaded applications share multiple processors (such as the system of Fig. 2).
Figs. 6(a) and 6(b) are flow charts showing steps using the data structure of Figs. 5(a) or 5(b).
Fig. 7(a) shows an example of a data structure used to implement a preferred embodiment of the present invention in a system where multi-threaded applications share a single processor (such as the system of Fig. 3).

Fig. 7(b) shows an example of a data structure used to implement a preferred embodiment of the present invention in a system where multi-threaded applications share multiple processors (such as the system of Fig. 4).
Figs. 8(a) and 8(b) are flow charts showing steps using the data structure of Figs. 7(a) or 7(b).
Fig. 9 shows an example of a data structure used to implement a preferred embodiment of the present invention in a system where multiple processors are shared among the jobs of a plurality of users.
Figs. 10(a) and 10(b) are flow charts showing steps performed in the data processing system of Fig. 9.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Reference will now be made in detail to preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings.
Wherever convenient, the same reference numbers will be used throughout the drawings to refer to the same or like parts.

I. Overview
The present invention can be implemented on a wide variety of data processing systems, including systems that execute single-threaded applications and/or multi-threaded applications and that include a single processor or multiple processors. Figs. 1-4 show respective examples of each of these systems.
The present invention uses the metaphor of "tickets." In a preferred embodiment of the present invention, each process in the system has a number of tickets and each ticket entitles a thread of the process to use a processor for a period of time (a "time quantum"). The operating system allocates processor time first to a thread of the process having the highest number of tickets. As each thread (or process) finishes being executed for a predetermined amount of time, the tickets of the process to which the thread belongs are adjusted accordingly and a new thread is chosen for execution. In one embodiment, a ticket owner can only have numbers of tickets that are positive integer values. In an alternate embodiment, a ticket owner can have fractional or negative numbers of tickets.

Fig. 1 is a block diagram of a data processing system 100 having a single processor 102; a memory 104; input/output lines 105; an input device 150, such as a keyboard, a mouse, or a voice input device; and a display device 160, such as a display terminal. Data processing system 100 also includes an input device 161, such as a floppy disk drive, CD-ROM reader, or DVD reader, that reads computer instructions stored on computer readable medium 162, such as a floppy disk, a CD-ROM, or a DVD. These computer instructions are the instructions of, e.g., process scheduler software 109.
Memory 104 includes one or more single-threaded applications (also called "processes") 106, each of which contains a thread 107. Memory 104 also includes an operating system (OS) 108, which includes process scheduler software 109. The steps of the described embodiment of the present invention are performed when instructions in process scheduler software 109 are executed by processor 102. Data processing system 100 (and the other data processing systems discussed herein) can be, for example, a SPARC chip based system, an Ultra server, or an Enterprise server (all of which are available from Sun Microsystems, Inc.). Data processing system 100 can also be any other appropriate data processing system.
Operating system 108 can be, for example, a variation of the Unix operating system that has been modified to incorporate the present invention.
UNIX is a registered trademark in the United States and other countries and is exclusively licensed by X/Open Company Ltd. Operating system 108 can be, for example, a variation of the Solaris operating system, available from Sun Microsystems, Inc., that incorporates the functionality of the present invention. It will be understood that the present invention can also be performed in a distributed data processing system, where the processor(s) and memory are located in different machines. It will also be understood that the present invention can also be implemented in a distributed system, where the processes, threads, and/or processors are not all in the same computer. The present invention allows processes 106 to proportionally share processor 102, as will be described below.
A person of ordinary skill in the art will understand that memory 104 also contains additional information, such as application programs, operating systems, data, etc., which are not shown in the figure for the sake of clarity. It also will be understood that data processing system 100 (or any other data processing system described herein) can also include numerous elements not shown, such as disk drives, keyboards, display devices, network connections, additional memory, additional CPUs, LANs, input/output lines, etc.
In the following discussion, it will be understood that the invention is not limited to any particular implementation or programming technique and that the invention may be implemented using any appropriate techniques for implementing the functionality described herein. The invention may be implemented in any appropriate operating system using any appropriate programming language or programming techniques.
Fig. 2 is a block diagram of a data processing system 200 having multiple processors 102' and a memory 104. Memory 104 includes one or more single-threaded applications (also called "processes") 106, each of which contains a thread 107. Memory 104 also includes an operating system (OS) 108, which includes process scheduler software 109. The steps of the described embodiment of the present invention are performed when instructions in process scheduler software 109 are executed by one or more of processors 102'. The present invention allows applications 106 to proportionally share processors 102', as will be described below.
Fig. 3 is a block diagram of a data processing system 300 having a single processor 102 and a memory 104. Memory 104 includes one or more multi-threaded applications (also called "processes") 110, at least one of which contains a plurality of threads 112. Memory 104 also includes an operating system (OS) 108, which includes process scheduler software 109. The steps of the described embodiment of the present invention are performed when instructions in process scheduler software 109 are executed by processor 102.
The present invention allows the threads 112 of each multi-threaded application/process 110 to proportionally share processor 102, as described below.
Fig. 4 is a block diagram of a data processing system 400 having multiple processors 102' and a memory 104. Memory 104 includes one or more multi-threaded applications 110 (also called a "process"), at least one of which contains a plurality of threads 112. Memory 104 also includes an operating system (OS) 108, which includes process scheduler software 109. The steps of the described embodiment of the present invention are performed when instructions in process scheduler software 109 are executed by one or more of processors 102'. The present invention allows the threads 112 of each multi-threaded application/process 110 to proportionally share processors 102', as described below.

II. Single-threaded Applications
The following paragraphs discuss several preferred embodiments of the present invention that are applicable when the system is executing single-threaded applications (also called "processes"). Figs. 5 and 6 relate to single-threaded applications. Figs. 7 and 8 relate to multi-threaded applications.
With regard to single-threaded applications, Fig. 5(a) shows an example of a data structure used to implement a preferred embodiment of the present invention in a system where single-threaded applications share a single processor (such as the system of Fig. 1). Fig. 5(b) shows a data structure used in a system where single-threaded applications share multiple processors 102'.
Both figures show data structures for two single-threaded applications (Process A and Process B), although it should be understood that any appropriate number of processes/applications can be executing in the system.
Fig. 5(a) shows a proc_t data structure 501 for each process 106 being executed by the system and a thread_t data structure 502 for each thread being executed by the system. Each proc_t data structure 501 points to the thread_t data structure 502 of its thread via a thread list pointer 511. Each single-threaded application will have only one thread. Each thread_t data structure 502 points to a ps_proc_t data structure 504 via a thread data pointer in field 510.

CA 022~2106 1998-10-28 Each ps_proc_t data structure 504 preferably includes a time quantum field 520, a time left field 522, a thread pointer field 524, a priority field 526 (containing a priority for the process/thread), a #tickets field 528 (containing an initial number of tickets assigned to the process/thread), and a #current tickets 5 field 530 (containing a number of tickets currently held by the process/thread).
Processes/threads waiting to be executed by processor 102 are temporarily placed in dispatch queue 103.
Processes/threads are executed by processor 102 for a preferred time quantum 520. During execution of a process/thread, the amount of time remaining of the current time quantum is stored in timeleft field 522. When a thread or process is picked to run on a processor, the timeleft field 522 of the thread/process is set to equal its time quantum. For example, if a time quantum is 100 milliseconds (10 clock ticks), timeleft field 522 is initially set to 10 ticks.
Timeleft field 522 is decremented by OS 108 for every clock tick that a processor executes the thread/process. Timeleft field 522 is needed because every so often the processor is interrupted during execution of the thread/process to perform some other function. The timeleft field 522 is used to keep track of how much of a time quantum is used up for a thread/process. When timeleft field 522 is zero, it is time for the thread/process to give up the processor and recalculate its priority.
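The timeleft bookkeeping might be sketched as follows; the quantum length and the function name are assumptions chosen for illustration:

```python
# Sketch of the timeleft accounting: timeleft starts at the time quantum,
# is decremented per clock tick of execution, and a zero value signals that
# the thread must yield and recompute its priority.
TIME_QUANTUM_TICKS = 10  # e.g. a 100 ms quantum at a 10 ms clock tick

def run_for_ticks(timeleft, ticks):
    """Consume `ticks` clock ticks; return (new_timeleft, must_yield)."""
    timeleft = max(0, timeleft - ticks)
    return timeleft, timeleft == 0

timeleft = TIME_QUANTUM_TICKS
timeleft, must_yield = run_for_ticks(timeleft, 4)   # interrupted after 4 ticks
assert (timeleft, must_yield) == (6, False)         # quantum partially used
timeleft, must_yield = run_for_ticks(timeleft, 6)   # resumes, finishes quantum
assert (timeleft, must_yield) == (0, True)          # time to yield the CPU
```

Because the counter survives interrupts, a thread's quantum is charged only for the ticks it actually executed.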
Fig. 5(b) shows an example of a data structure used to implement a preferred embodiment of the present invention in a system where single-threaded applications share multiple processors 102' (such as the system of Fig. 2). The figure shows data structures for two single-threaded applications (Process A and Process B), although it should be understood that any appropriate number of processes can be executing in the system.
Figs. 6(a) and 6(b) are flow charts showing steps performed in the data processing system of Figs. 5(a) or 5(b). It will be understood that, in the example, the steps of Figs. 6(a) and 6(b) are performed by process scheduling software 109 being executed by processor 102, 102', or some other appropriate processor. In a preferred embodiment of the present invention, operating system 108 is based on the Solaris operating system, which is available from Sun Microsystems, Inc. The steps of Figs. 6(a) and 6(b) preferably are implemented by modifying the RT scheduling class of Solaris. In such an implementation, two new fields (#tickets and #current tickets) are preferably added to the rt_proc structure (here called ps_proc_t). The Solaris function rt_tick is also preferably modified, as described below, to calculate the new priority in accordance with a number of tickets held, to reduce the number of tickets after a process/thread has executed for a predetermined time quantum, and to decide whether the current thread should continue to use the processor(s).
The following table shows an example of two processes, Process A and Process B, executing concurrently and sharing a processor in a ratio of 2:1. The table will be discussed in connection with the steps of Figs. 6(a) and 6(b).

              Process A                  Process B
Time    #Tkts  State  Priority     #Tkts  State  Priority
  1       1    Run      52           1             51
  2       1             51           1    Run      51
  3       2    Run      51           1             51
  4       1    Run      52           1             51
  5       1             51           1    Run      51
  6       2    Run      52           1             51

Initially, in step 652 of Fig. 6(a), Process A is assigned 2 tickets and Process B is assigned 1 ticket. These values are stored in #tickets field 528 and #current tickets field 530 for each process. Thus, the proportion of execution time between any threads of the two processes is to be 2:1 because that is the ratio between the numbers of tickets assigned to the two processes. The thread of Process A is assigned an initial priority of 52 (50 + 2) and the thread of Process B is assigned an initial priority of 51 (50 + 1). The initial priority of "50" is arbitrary in the example, and can be any appropriate value in an implementation of the invention. These priorities are stored in field 526 for each process.
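The worked example can be reproduced with a small simulation. The FIFO tie-break among equal priorities is inferred from the narrative below (when priorities tie, the thread that has been waiting runs); everything else in the sketch, including the names, is an illustrative assumption rather than the patent's actual implementation:

```python
# Simulation of the 2:1 ticket-scheduling example: Process A holds 2
# tickets, Process B holds 1. Highest priority (base 50 + current tickets)
# runs for one quantum; ties go to the longer-waiting entry (FIFO).
import itertools

BASE = 50

def simulate(ticket_alloc, quanta):
    counter = itertools.count()            # arrival order for the FIFO tie-break
    current = dict(ticket_alloc)           # current tickets per process
    queue = [(name, next(counter)) for name in ticket_alloc]
    history = []
    for _ in range(quanta):
        # Highest priority first; among equals, the older queue entry wins.
        queue.sort(key=lambda e: (-(BASE + current[e[0]]), e[1]))
        name, _ = queue.pop(0)
        history.append(name)
        current[name] -= 1                 # charge one ticket per quantum
        if current[name] == 0:
            current[name] = ticket_alloc[name]   # reset at zero
        queue.append((name, next(counter)))
    return history

history = simulate({"A": 2, "B": 1}, quanta=6)
assert history == ["A", "B", "A", "A", "B", "A"]      # matches the table
assert history.count("A") == 2 * history.count("B")   # the 2:1 sharing ratio
```

Over the six quanta of the table, A runs four times and B twice, the 2:1 proportion promised by the ticket allocation.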
In step 654, it is determined that a thread of Process A is ready to run. A priority for the thread of Process A is calculated in step 658 based on a number of tickets of Process A. Thus, the thread of Process A is placed in dispatch queue 103 (or 103') in step 660.
As shown in Fig. 6(b), in step 674, the thread of Process A is eventually removed from the dispatch queue for execution by a processor when it has a highest priority in the dispatch queue. The thread of Process A is executed by one of processors 102, 102' for a predetermined time quantum (for example, 100 milliseconds, although any appropriate time quantum can be used). During execution, timeleft field 522 is decremented while the thread of the process is executed during its time quantum. At the end of execution, timeleft field 522 is zero.
When the time quantum of its executing thread expires, the number of tickets held by Process A is decremented by a predetermined number (such as "1") in step 676. If the number of tickets held by the process is "0" in step 678, the number of tickets for the process is reset to the initial value for the process in step 680 and control passes to step 690. Otherwise, control passes to step 690.
In the example, the number of tickets is decremented from 2 to 1. This value is stored in #current tickets field 530 for Process A.
In step 690, the priority of the thread of Process A is recalculated. In the example, the priority of the thread of Process A is 51, since the number of current tickets held by Process A is 1. In step 692, the thread is placed back on the dispatch queue (since, of course, it has not completed execution).
In the example, control then passes to step 674 of Fig. 6(b), where it is determined that, since the thread of Process B has the same priority as the thread of Process A, the thread of Process A will wait and the thread of Process B will execute. (The thread of Process B was previously placed on the dispatch queue.) After the thread of Process B executes for the predetermined amount of time, the number of tickets that Process B has is reduced by "1," giving Process B zero tickets. In step 680, the #current tickets 530 held by Process B is set back to its initial value "1". The priority for the thread of Process B is then recalculated, still yielding a priority of "51". Since the thread of Process A has the same priority, the thread of Process A executes next. After the thread of Process A executes, the number of current tickets of Process A is reduced to "0" and reset back to "2".
The steps of Fig. 6(b) are repeated as shown in the table until both Process A and Process B have executed completely. Note that, in the example, the thread of Process A executes for four quantum time units, while the thread of Process B executes for two quantum time units. This yields an execution ratio of 2:1 while the processes are competing for the processor(s).
As an additional example, if Process A is a process or application having a short execution time and Process B has a long execution time (and assuming that there are only two executing processes), then Process B will have all the execution time after Process A is done executing. The described embodiment of the present invention preserves proportionality whenever both applications are competing for the processor.

As an additional example, if Processes A and B are both long-running applications, but Process A stops in the middle to wait for some inputs from the user, Processes A and B will have a ratio of 2:1 until Process A stops. Then Process B will have all the processor time while Process A is waiting for inputs.
After the user enters his input, Processes A and B will have a ratio of 2:1 again. However, the ratio between the total processor time spent on each job is skewed by how long the wait time is for Process A. Again, the described embodiment of the present invention preserves proportionality whenever both applications are competing for the processor.
III. Multi-threaded Applications
Fig. 7(a) shows an example of a data structure used to implement a preferred embodiment of the present invention in a single processor system capable of executing multi-threaded applications (such as the system of Fig. 3). Fig. 7(b) shows an example of a data structure used to implement a preferred embodiment of the present invention in a multi-processor system capable of executing multi-threaded applications (such as the system of Fig. 4). Figs. 8(a) and 8(b) are flow charts showing steps using the data structure of Figs. 7(a) or 7(b).
The data structures of Figs. 7(a) and 7(b) are similar to the data structures of Figs. 5(a) and 5(b), except that, in a system capable of executing multi-threaded applications, multiple threads in each application/process "share" the tickets of their process between them via a ticket data structure 705. It should be understood that the systems of Figs. 7(a) and 7(b) can also include single-threaded applications (not shown), in addition to the multi-threaded applications.
While each thread has its own priority 726, the tickets are held at the process level in fields 740 and 742 of ticket structure 705. Ticket data structure 705 also includes a ticket queue pointer 748 and a #slots field 746. The #slots field 746 indicates a number of processor slots assigned to each process. In the case of a single-processor system, the number of slots is always "1". A multiple-processor system can have any number of slots less than or equal to the number of processors 102'. Ticket queue pointer 748 points to a ticket queue 750, whose function is to hold threads of a process that are waiting for a processor slot to become available. Threads to be executed are taken off ticket queue 750 and placed on dispatch queue 103 (or 103') for execution by processor 102 (or 102').

Fig. 8(a) shows steps performed in connection with the multi-threaded applications of Figs. 7(a) and 7(b) sharing single processor 102 or multiple processors 102'. In step 852 of Fig. 8(a), a number of tickets and a number of slots are assigned to each process. The priority of a thread is assigned based on the number of current tickets in the ticket structure 705 of its process when the priority is assigned. Thus, threads of a same process can have different priorities.
When a thread wants to use a processor 102,102', it first checks to determine whether there is an available processor slot (step 856). If there is an available slot (i.e., no other thread is waiting in ticket queue 750), a priority is determined for the thread in step 858. This priority is placed in field 726 of the 15 thread's ps_proc_t data structure 504' and the thread is placed in dispatch queue 103 for execution in step 860. Otherwise (if there are no available processor slots), the thread is placed in ticket queue 750 for this process (step 862). When a slot becomes available in step 864, a thread is taken off ticket queue 750 andits priority 726 is calculated based on the current number of tickets held by its 20 process. The thread is then placed in dispatch queue 103 for execution.
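The slot check of steps 856-862 can be sketched as follows. The class and function names, and the simple in-memory layout, are assumptions for illustration; the patent holds the tickets in ticket structure 705 and the waiting threads in FIFO ticket queue 750.

```python
from collections import deque

class TicketStruct:
    """Per-process ticket structure (cf. ticket structure 705)."""
    def __init__(self, initial_tickets, nslots):
        self.initial = initial_tickets   # cf. #initial tickets field 740
        self.current = initial_tickets   # cf. #current tickets field 742
        self.nslots = nslots             # cf. #slots field 746
        self.ticket_queue = deque()      # cf. FIFO ticket queue 750
        self.slots_used = 0              # slots currently occupied

dispatch_queue = []                      # stand-in for dispatch queue 103

def thread_wants_processor(ts, thread):
    """Steps 856-862: dispatch if a slot is free, else wait on the ticket queue."""
    if ts.slots_used < ts.nslots and not ts.ticket_queue:
        ts.slots_used += 1
        priority = ts.current            # priority derived from current tickets
        dispatch_queue.append((priority, thread))
    else:
        ts.ticket_queue.append(thread)

# A process on a single-processor system (one slot), ten tickets, three threads:
ts = TicketStruct(initial_tickets=10, nslots=1)
for name in ("t1", "t2", "t3"):
    thread_wants_processor(ts, name)

print(dispatch_queue)          # only the first thread gets the slot
print(list(ts.ticket_queue))   # the other two wait
```

With one slot, only the first thread reaches the dispatch queue; the rest wait in FIFO order, matching the single-processor case where #slots is always "1".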
Fig. 8(b) shows the steps for executing a thread after the thread is put on the system dispatch queue for execution by processor 102. After the thread has been executed for the predetermined number of time quanta in step 874, the #current tickets 742 for its process is reduced by "1" in step 876. If the number of tickets held is "0" in step 878, the number of tickets is reset to the initial value for the process in step 880 and control passes to step 882. Otherwise, control passes directly to step 882.

In step 882, if other threads are on the ticket queue 750 for this process, the current thread gives up its slot in step 884 and is put back on ticket queue 750. Its slot is given to the thread at the head of ticket queue 750 and a new priority is calculated for this new thread based on the number of current tickets 742 for the process in step 886. This new thread (with its newly assigned priority) is placed on dispatch queue 103 for execution in step 888.
If, on the other hand, in step 882 there are no threads waiting in the ticket queue 750 when the current thread finishes execution, the priority of the current thread is recalculated based on the number of current tickets for the process in step 890 and the current thread is placed back on dispatch queue 103 in step 892 for execution. Note that steps 886 and 890 recalculate a priority of a thread based on the number of tickets currently held by the process to which the thread belongs.

Because the system of Fig. 7(b) includes multiple processors 102', each process is assigned a number 746 of "slots" equal, for example, to the number of processors in the system. Other implementations of the present invention may use other numbers of slots. For example, the number of slots could be smaller than the number of processors in the system in order to reduce the concurrency of execution of processes. In the described embodiment, if there are two processors in the system, the number of slots would be "2". Thus, for example, if there are only two processors 102' in the system and a process has ten threads, only two threads of the process at a time can be input to the system dispatch queue. The rest of the threads will wait on a ticket queue 750.
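The ticket decrement, reset, and slot hand-off of steps 874-892 can be sketched as follows. This is a simplified single-slot model with illustrative names, not the patented implementation.

```python
from collections import deque

class Proc:
    """Minimal per-process ticket state for illustration."""
    def __init__(self, initial_tickets):
        self.initial = initial_tickets       # cf. #initial tickets field 740
        self.current = initial_tickets       # cf. #current tickets field 742
        self.ticket_queue = deque()          # cf. FIFO ticket queue 750

dispatch_queue = []                          # stand-in for dispatch queue 103

def quantum_expired(proc, thread):
    """Steps 874-892: charge a ticket, reset at zero, hand the slot over."""
    proc.current -= 1                        # step 876: reduce #current tickets
    if proc.current == 0:                    # step 878: out of tickets?
        proc.current = proc.initial          # step 880: reset to initial value
    if proc.ticket_queue:                    # step 882: other threads waiting
        proc.ticket_queue.append(thread)     # step 884: give up the slot
        nxt = proc.ticket_queue.popleft()    # head of the queue gets the slot
        dispatch_queue.append((proc.current, nxt))   # steps 886-888
    else:                                    # steps 890-892: keep the slot
        dispatch_queue.append((proc.current, thread))

p = Proc(initial_tickets=3)
p.ticket_queue.append("t2")                  # another thread of p is waiting
quantum_expired(p, "t1")                     # t1's quantum ends

print(dispatch_queue)
print(list(p.ticket_queue))
```

After the quantum, the process has been charged one ticket, the waiting thread t2 is dispatched with a priority equal to the current ticket count, and t1 goes to the back of the FIFO ticket queue.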
Thus, in a preferred embodiment of the present invention, tickets are assigned to processes, which are either single-threaded applications or multi-threaded applications. The processes can be executed by either a single-processor or a multi-processor system. The system determines which thread to execute next based on a priority of the thread, which is determined in accordance with a current number of tickets held by the process to which the thread belongs. In certain multi-processor systems, the system keeps track of the number of open processor slots and queues the waiting threads in a ticket queue if all processor slots are being used. In the described implementation, the ticket queue is a FIFO.

IV. Hierarchical Proportional Sharing Between Users

Fig. 9 shows another variation of the present invention, called "hierarchical proportional sharing." In this variation, various users are preassigned proportional shares of system resources. Assume, for example, that users A and B share the system in an x:y ratio and that User A has three jobs, while User B has one job. User A can spread his share of the system among user A's three jobs in, for example, a 1:2:4 ratio. User B has only one job, so user B's job gets all of user B's share of system resources. Although Fig. 9 shows multiple processors, this embodiment can also be implemented in a single-processor system.
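For concreteness, the effective system share of each job is the user's share multiplied by the job's fraction of that user's tickets. The sketch below assumes x:y = 2:1 and the 1:2:4 job split mentioned above; the concrete ratio 2:1 is an assumption of this illustration.

```python
from fractions import Fraction

# Users A and B split the system 2:1 (the x:y = 2:1 ratio is assumed here).
user_share = {"A": Fraction(2, 3), "B": Fraction(1, 3)}

# User A spreads its share over three jobs in a 1:2:4 ratio.
ratios_a = {"a1": 1, "a2": 2, "a3": 4}
total = sum(ratios_a.values())
job_share = {j: user_share["A"] * Fraction(r, total) for j, r in ratios_a.items()}

# User B has only one job, which receives user B's entire share.
job_share["b1"] = user_share["B"]

print(job_share)   # every job's fraction of total system resources
```

The job shares work out to 2/21, 4/21, and 8/21 for user A's jobs and 1/3 (= 7/21) for user B's single job, summing to the whole system.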
The data structure of Fig. 9 is similar to the data structures of Figs. 5-8, except that tickets are assigned to each user and are then shared by the jobs of the user. Although Fig. 9(a) shows a multiple-processor system, hierarchical proportional sharing can also be implemented for a single-processor system.
While each job has its own priority 928, the tickets are held at the user level in fields 940 and 942 of ticket structure 905. Ticket structure 905 also includes a ticket queue pointer 948 and a #slots field 946. The #slots field indicates a number of slots assigned to each user. In the case of a single-processor system, the number of slots is always "1". A multi-processor system can have a number of slots less than or equal to the number of processors. Ticket queue pointer 948 points to a ticket queue 950, whose function is to hold jobs of a user that are waiting for a processor slot to become available. Jobs to be executed are taken off ticket queue 950 and placed on dispatch queue 103' for execution by one of processors 102'.
Fig. 10(a) shows steps performed in connection with hierarchical proportional sharing between users sharing multiple processors 102'. In step 1052 of Fig. 10(a), a number of tickets is assigned to each user. The priority of a job is based on the number of current tickets in the ticket structure 905 of its user when the priority is assigned. Thus, jobs of a same user can have different priorities.
When a job wants to use a processor 102', it first checks to determine whether there is an available processor slot (step 1056). If there is an available slot (i.e., no other job is waiting in ticket queue 950 of the job's user), a priority is determined for the job in step 1058 and the job is placed in dispatch queue 103' in step 1060 for execution. Otherwise (if there are no available processor slots), the job is placed in ticket queue 950 for this user (step 1062). When a slot becomes available in step 1064, a job is taken off ticket queue 950 and placed in dispatch queue 103' for execution.
Fig. 10(b) shows the steps for executing a job after the job is put on the system dispatch queue for execution by one of processors 102'. After the job has been executed for the predetermined number of time quanta in step 1074, the #current tickets 942 is reduced by "1" in step 1076. If the number of tickets held is "0" in step 1078, the number of tickets is reset to the initial value for the user in step 1080 and control passes to step 1082. Otherwise, control passes directly to step 1082.

In step 1082, if other jobs are on the ticket queue of this user, the current job gives up its slot in step 1084 and is put back on ticket queue 950. Its slot is given to the job at the head of ticket queue 950 and a new priority is calculated for this new job based on the number of current tickets 942 for the user in step 1086. This new job (with its newly assigned priority) is placed on dispatch queue 103' in step 1088 for execution.
If, on the other hand, in step 1082 there are no jobs waiting in the ticket queue 950 when the current job finishes execution, the priority of the current job is recalculated based on the number of current tickets for the user in step 1090 and the current job is placed back on dispatch queue 103' in step 1092 for execution. Note that steps 1086 and 1090 recalculate a priority of a job based on the number of tickets currently held by the user to which the job belongs.

While the invention has been described in conjunction with a specific embodiment, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art in light of the foregoing description. For example, multiple processes can share a single pool of tickets (e.g., they may have a common ticket structure). Similarly, the threads in multiple processes can all share a single pool of tickets. In other implementations, a process can transfer some of its tickets to another process.
Accordingly, it is intended to embrace all such alternatives, modifications and variations as fall within the spirit and scope of the appended claims and equivalents.

Claims (17)

1. A method of sharing at least one processor in a data processing system between a plurality of processes, including the steps, performed by the data processing system, of:
initially assigning a number of tickets to each of the plurality of processes;
assigning an initial priority to a thread of each of the plurality of processes in accordance with the number of tickets assigned to the process associated with the thread; and executing the respective threads of the plurality of processes in an order indicated by the tickets assigned to the plurality of processes, so that the proportion of execution time between any two of the threads is the same as the proportion between the number of tickets of the two processes associated with the threads.
2. The method of claim 1, wherein the data processing system includes a plurality of processors, and the data processing system has a plurality of processor slots;
wherein the executing step includes the step of determining whether there is an available processor slot; and further including the step of placing a thread associated with a one of the plurality of processes on a ticket queue of the process, when there is no available processor slot, so that the thread waits on the ticket queue for an available processor slot.
3. The method of claim 1, wherein the data processing system includes a plurality of processors, and the data processing system has a plurality of processor slots;
wherein the executing step includes the step of determining whether there is an available processor slot; and further including the step of placing a thread associated with a one of the plurality of processes onto a processor dispatch queue when there is an available processor slot.
4. The method of claim 3, further including the step of calculating a priority of the thread in accordance with the number of tickets currently held by the thread's process before the thread is placed on the dispatch queue.
5. The method of claim 4, wherein the executing step includes the steps of:
decrementing a number of tickets of a one of the plurality of processes whose thread has just finished executing for a predetermined time period; and if the number of tickets for the one process is equal to zero after the decrementing step, setting the number of tickets for the one process to the initially determined number of tickets for the one process.
6. The method of claim 4, wherein the executing step includes the steps of recalculating the number of tickets held by the process after the thread of the process has executed for the predetermined period of time.
7. The method of claim 4, wherein the executing step includes the steps of checking whether there is another thread with a higher priority after the thread has finished executing for the predetermined period of time.
8. The method of claim 1, wherein at least one of the plurality of processes is a multi-threaded application.
9. A method of sharing a processor between a plurality of threads of a plurality of multi-threaded applications in a data processing system, including the steps, performed by the data processing system, of:
initially assigning respective numbers of tickets to each of the plurality of multi-threaded applications;

assigning a priority to each of the plurality of multi-threaded applications in accordance with a number of tickets assigned to each multi-threaded application;
adding a thread of at least one of the plurality of multi-threaded applications to a dispatch queue when room becomes available for another thread on a dispatch queue; and adding a thread of at least one of the plurality of multi-threaded applications to a ticket queue, when there is not room for another process on the dispatch queue.
10. A method of sharing at least one processor between a plurality of threads of a first multi-threaded process and a plurality of threads of a secondmulti-threaded process in a data processing system, including the steps, performed by the data processing system, of:
initially assigning respective numbers of tickets to the first and second multi-threaded processes;
assigning a priority to at least one thread of each of the first and second multi-threaded processes in accordance with the number of tickets assigned to each multi-threaded process; and executing the threads of the first and second multi-threaded processes in an order indicated by the tickets assigned to their respective multi-threaded process, so that the proportion of execution time between any two of the processes is the same as the proportion between the number of tickets of the twomulti-threaded processes.
11. A data structure stored in a memory storage area of a data processing system, comprising:
a data structure representing a first thread of a multi-threaded process;
a data structure representing a second thread of a multi-threaded process;
and a ticket queue data structure that is pointed to by the data structures representing the first and second threads, where the ticket queue data structure stores a value indicating an initial number of tickets for the multi-threaded process to which the first and second threads belong and further stores a value indicating a current number of tickets for the multi-threaded process.
12. A method of sharing at least one processor between a plurality of users in a data processing system, where the users each have a plurality of jobs to execute, including the steps, performed by the data processing system, of:
initially assigning a number of tickets to each of the plurality of users;
assigning a priority to each of the plurality of jobs in accordance with the number of tickets assigned to each user; and executing jobs belonging to the plurality of users in an order indicated by the tickets assigned to the plurality of users, so that the proportion of execution time between the jobs of any two of the users is the same as the proportion between the number of tickets of the two users.
13. A method of sharing at least one processor between a plurality of threads in multiple processes, comprising:
initially assigning a number of tickets to the plurality of processes;
where the tickets are assigned to the plurality of processes as a pool;
assigning a priority to two threads associated with two of the plurality of processes, in accordance with the number of tickets assigned to the plurality of processes; and executing the two threads in an order indicated by the tickets assigned to the processes with which the threads are associated, so that the proportion of execution time between the two threads is the same as the proportion between the number of tickets of the two processes.
14. A computer program product, including:
a computer usable medium having computer readable code embodied therein for causing proportional execution times between two threads of two processes, the computer program product comprising:

computer readable program code devices configured to cause a computer to effect initially assigning a number of tickets to each of the plurality of processes;
computer readable program code devices configured to cause a computer to effect assigning a priority to each of the plurality of processes in accordance with the number of tickets assigned to each process; and computer readable program code devices configured to cause a computer to effect executing the plurality of processes in an order indicated by the tickets assigned to the plurality of processes, so that the proportion of execution time between any two of the processes is the same as the proportion between the number of tickets of the two processes.
15. A computer data signal embodied in a carrier wave and representing sequences of instructions which, when executed by a processor, cause the processor to share at least one processor in a data processing system between a plurality of processes, by performing the steps of:
initially assigning a number of tickets to each of the plurality of processes;
assigning a priority to each of the plurality of processes in accordance with the number of tickets assigned to each process; and executing the plurality of processes in an order indicated by the tickets assigned to the plurality of processes, so that the proportion of execution time between any two of the processes is the same as the proportion between the number of tickets of the two processes.
16. An apparatus that proportionally shares at least one processor in a data processing system between a plurality of processes, the apparatus comprising:
a portion configured to initially assign a number of tickets to each of the plurality of processes;
a portion configured to assign a priority to each of the plurality of processes in accordance with the number of tickets assigned to each process; and a portion configured to execute the plurality of processes in an order indicated by the tickets assigned to the plurality of processes, so that the proportion of execution time between any two of the processes is the same as the proportion between the number of tickets of the two processes.
17. An apparatus that proportionally shares at least one processor between a plurality of threads of a plurality of multi-threaded applications, the apparatus comprising:
a portion configured to initially assign respective numbers of tickets to each of the plurality of multi-threaded applications;
a portion configured to assign a priority to each of the plurality of multi-threaded applications in accordance with a number of tickets assigned to each multi-threaded application;
a portion configured to add a thread of at least one of the plurality of multi-threaded applications to a dispatch queue when room becomes available for another thread on a dispatch queue; and a portion configured to add a thread of at least one of the plurality of multi-threaded applications to a ticket queue, when there is not room for another process on the dispatch queue.
CA002252106A 1997-10-31 1998-10-28 Method and apparatus for processor sharing Abandoned CA2252106A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/962,140 US5987492A (en) 1997-10-31 1997-10-31 Method and apparatus for processor sharing
US08/962,140 1997-10-31

Publications (1)

Publication Number Publication Date
CA2252106A1 true CA2252106A1 (en) 1999-04-30

Family

ID=25505477

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002252106A Abandoned CA2252106A1 (en) 1997-10-31 1998-10-28 Method and apparatus for processor sharing

Country Status (4)

Country Link
US (2) US5987492A (en)
EP (1) EP0915418A3 (en)
JP (1) JPH11212809A (en)
CA (1) CA2252106A1 (en)



Also Published As

Publication number Publication date
US6272517B1 (en) 2001-08-07
US5987492A (en) 1999-11-16
EP0915418A3 (en) 2003-10-29
EP0915418A2 (en) 1999-05-12
JPH11212809A (en) 1999-08-06

Similar Documents

Publication Publication Date Title
US5987492A (en) Method and apparatus for processor sharing
US6269391B1 (en) Multi-processor scheduling kernel
US5586318A (en) Method and system for managing ownership of a released synchronization mechanism
Jones et al. An overview of the Rialto real-time architecture
US6223201B1 (en) Data processing system and method of task management within a self-managing application
US5515538A (en) Apparatus and method for interrupt handling in a multi-threaded operating system kernel
JP2866241B2 (en) Computer system and scheduling method
US5339415A (en) Dual level scheduling of processes to multiple parallel regions of a multi-threaded program on a tightly coupled multiprocessor computer system
US20060130062A1 (en) Scheduling threads in a multi-threaded computer
US20050125793A1 (en) Operating system kernel-assisted, self-balanced, access-protected library framework in a run-to-completion multi-processor environment
US5666523A (en) Method and system for distributing asynchronous input from a system input queue to reduce context switches
US20020198925A1 (en) System and method for robust time partitioning of tasks in a real-time computing environment
US6587865B1 (en) Locally made, globally coordinated resource allocation decisions based on information provided by the second-price auction model
JPH1063519A (en) Method and system for reserving resource
JP2005536791A (en) Dynamic multilevel task management method and apparatus
EP0913770A2 (en) Method and apparatus for sharing a time quantum
Bunt Scheduling techniques for operating systems
EP0544822B1 (en) Dual level scheduling of processes
WO2004019205A2 (en) System and method for robust time partitioning of tasks in a real-time computing environment
Lew Optimal resource allocation and scheduling among parallel processes
Gharu Operating system
JPS63301332A (en) Job executing system
Samiho, Muhammad Mosatafa. STUDIES ON AUGMENTED FAIRNESS SCHEDULING IN GENERAL PURPOSE ENVIRONMENTS
Joon et al. OPTIMIZATION OF SCHEDULING ALGORITHMS FOR LOAD BALANCING USING HYBRID APPROACH
CN115543554A (en) Method and device for scheduling calculation jobs and computer readable storage medium

Legal Events

Date Code Title Description
FZDE Dead