
Publication number: US 20070136725 A1
Publication type: Application
Application number: US 11/301,104
Publication date: Jun 14, 2007
Filing date: Dec 12, 2005
Priority date: Dec 12, 2005
Also published as: CN1983193A, CN100481014C, US8261279, US20080163217
Inventors: Jos Accapadi, Matthew Accapadi, Andrew Dunshea, Dirk Michel
Original Assignee: International Business Machines Corporation
System and method for optimized preemption and reservation of software locks
US 20070136725 A1
Abstract
A system and method are provided that reserve a software lock for a waiting thread. When a software lock is released by a first thread, a second thread that is waiting for the same resource controlled by the software lock is woken up. In addition, a reservation to the software lock is established for the second thread. After the reservation is established, if the lock is available and requested by a thread other than the second thread, the requesting thread is denied, added to the wait queue, and put to sleep. In addition, the reservation is cleared. After the reservation has been cleared, the lock is granted to the next thread that requests it.
Claims(20)
1. A computer-implemented method comprising:
releasing a software lock by a first thread;
identifying a second thread waiting on the software lock at the time of the releasing;
establishing a reservation to the software lock by the second thread in response to the identifying;
waking the second thread in response to the releasing;
after the establishment of the reservation and before the software lock has been taken by the second thread, receiving a first request for the software lock by a first requesting thread that is not the second thread; and
denying the first request and putting the first requesting thread to sleep in response to the second thread having a priority equal to or better than a priority of the first requesting thread.
2. The method of claim 1 further comprising:
clearing the reservation in response to receiving the first request.
3. The method of claim 2 further comprising:
after the clearing and before the software lock has been requested by the second thread, receiving a second request for the software lock by a second requesting thread that is not the second thread; and
allowing the second requesting thread to acquire the software lock in response to the second request.
4. The method of claim 1 further comprising:
after the establishment of the reservation, receiving a request for the software lock by the second thread;
allowing the second thread to acquire the software lock in response to the request; and
clearing the reservation in response to receiving the request.
5. The method of claim 1 further comprising:
after the establishment of the reservation, receiving a request for the software lock by a requesting thread;
comparing a thread identifier of the requesting thread to a thread identifier of the second thread;
in response to the thread identifier of the requesting thread equaling the thread identifier of the second thread:
allowing the requesting thread to acquire the software lock; and
clearing the reservation; and
in response to the thread identifier of the requesting thread not equaling the thread identifier of the second thread:
denying the request;
putting the requesting thread to sleep; and
clearing the reservation.
6. The method of claim 1 further comprising:
selecting a thread identifier corresponding to the second thread from a wait queue, wherein the wait queue includes one or more thread identifiers corresponding to threads waiting for the software lock; and
removing the selected thread identifier from the wait queue.
7. The method of claim 1 further comprising:
releasing the software lock by the second thread;
receiving a request for the software lock by a requesting thread after the software lock has been released by the second thread;
in response to the software lock not being reserved:
allowing the requesting thread to acquire the software lock in response to the software lock being available; and
putting the requesting thread to sleep and adding the requesting thread's identifier to a wait queue in response to the software lock not being available; and
in response to the software lock being reserved:
clearing the reservation and allowing the requesting thread to acquire the software lock in response to the reservation being for the requesting thread; and
clearing the reservation, putting the requesting thread to sleep, and adding the requesting thread's identifier to the wait queue in response to the reservation not being for the requesting thread.
8. An information handling system comprising:
one or more processors;
a memory accessible by the processors;
a plurality of threads, including a first thread and a second thread, stored in the memory and executed by the processors;
a software lock that controls access to a resource; and
a set of instructions executed by the processors to perform actions of:
releasing the software lock by the first thread;
identifying a second thread waiting on the software lock at the time of the releasing;
establishing a reservation to the software lock by the second thread in response to the identifying;
waking the second thread in response to the releasing;
after the establishment of the reservation and before the software lock has been taken by the second thread, receiving a first request for the software lock by a first requesting thread that is not the second thread; and
denying the first request and putting the first requesting thread to sleep in response to the second thread having a priority equal to or better than a priority of the first requesting thread.
9. The information handling system of claim 8 further comprising:
clearing the reservation in response to receiving the first request.
10. The information handling system of claim 9 further comprising:
after the clearing and before the software lock has been requested by the second thread, receiving a second request for the software lock by a second requesting thread that is not the second thread; and
allowing the second requesting thread to acquire the software lock in response to the second request.
11. The information handling system of claim 8 further comprising:
after the establishment of the reservation, receiving a request for the software lock by the second thread;
allowing the second thread to acquire the software lock in response to the request; and
clearing the reservation in response to receiving the request.
12. The information handling system of claim 8 further comprising:
after the establishment of the reservation, receiving a request for the software lock by a requesting thread;
comparing a thread identifier of the requesting thread to a thread identifier of the second thread;
in response to the thread identifier of the requesting thread equaling the thread identifier of the second thread:
allowing the requesting thread to acquire the software lock; and
clearing the reservation; and
in response to the thread identifier of the requesting thread not equaling the thread identifier of the second thread:
denying the request;
putting the requesting thread to sleep; and
clearing the reservation.
13. The information handling system of claim 8 further comprising:
selecting a thread identifier corresponding to the second thread from a wait queue, wherein the wait queue includes one or more thread identifiers corresponding to threads waiting for the software lock; and
removing the selected thread identifier from the wait queue.
14. A computer program product in a computer-readable medium comprising functional descriptive material that, when executed by a computer, directs the computer to perform actions of:
releasing a software lock by a first thread;
identifying a second thread waiting on the software lock at the time of the releasing;
establishing a reservation to the software lock by the second thread in response to the identifying;
waking the second thread in response to the releasing;
after the establishment of the reservation and before the software lock has been taken by the second thread, receiving a first request for the software lock by a first requesting thread that is not the second thread; and
denying the first request and putting the first requesting thread to sleep in response to the second thread having a priority equal to or better than a priority of the first requesting thread.
15. The computer program product of claim 14 further comprising:
clearing the reservation in response to receiving the first request.
16. The computer program product of claim 15 further comprising:
after the clearing and before the software lock has been requested by the second thread, receiving a second request for the software lock by a second requesting thread that is not the second thread; and
allowing the second requesting thread to acquire the software lock in response to the second request.
17. The computer program product of claim 14 further comprising:
after the establishment of the reservation, receiving a request for the software lock by the second thread;
allowing the second thread to acquire the software lock in response to the request; and
clearing the reservation in response to receiving the request.
18. The computer program product of claim 14 further comprising:
after the establishment of the reservation, receiving a request for the software lock by a requesting thread;
comparing a thread identifier of the requesting thread to a thread identifier of the second thread;
in response to the thread identifier of the requesting thread equaling the thread identifier of the second thread:
allowing the requesting thread to acquire the software lock; and
clearing the reservation; and
in response to the thread identifier of the requesting thread not equaling the thread identifier of the second thread:
denying the request;
putting the requesting thread to sleep; and
clearing the reservation.
19. The computer program product of claim 14 further comprising:
selecting a thread identifier corresponding to the second thread from a wait queue, wherein the wait queue includes one or more thread identifiers corresponding to threads waiting for the software lock; and
removing the selected thread identifier from the wait queue.
20. The computer program product of claim 14 further comprising:
releasing the software lock by the second thread;
receiving a request for the software lock by a requesting thread after the software lock has been released by the second thread;
in response to the software lock not being reserved:
allowing the requesting thread to acquire the software lock in response to the software lock being available; and
putting the requesting thread to sleep and adding the requesting thread's identifier to a wait queue in response to the software lock not being available; and
in response to the software lock being reserved:
clearing the reservation and allowing the requesting thread to acquire the software lock in response to the reservation being for the requesting thread; and
clearing the reservation, putting the requesting thread to sleep, and adding the requesting thread's identifier to the wait queue in response to the reservation not being for the requesting thread.
Description
    BACKGROUND OF THE INVENTION
  • [0001]
    1. Technical Field
  • [0002]
    The present invention relates in general to a system and method for improving software locks. More particularly, the present application relates to a system and method that reserves a software lock for threads that are waiting for the lock.
  • [0003]
    2. Description of the Related Art
  • [0004]
    In a multiprocessing environment, software locks are used to serialize access to resources. As used herein, a “thread” refers to a part of a program that can execute independently of other parts of the program. Multiple programs, including the operating system, can be running at the same time, and each of these programs can have multiple threads. When a thread requires access to a serialized resource, a software lock is used. The software lock provides a mechanism so that only one thread at a time is able to use the resource (e.g., write to a shared memory location, etc.). While traditional locks provide a controlled means for accessing a resource, one challenge encountered with traditional locks is that a thread may be unintentionally starved for access to a particular resource. While this situation occurs in a multiprocessor environment, it may also occur in a uniprocessor environment under certain conditions.
  • [0005]
    FIG. 1 is a prior art depiction of a typical locking algorithm that potentially starves a process for a critical resource. In the example shown, the second thread is being starved for a particular resource. Depiction of a portion of the processing of the first thread commences at 100 and the depiction of a portion of the processing of the second thread commences at 101. At step 105, the first thread acquires a particular lock and then commences step 110 by performing work that utilizes the resource being controlled by the lock.
  • [0006]
    Sometime after the first thread has acquired the lock, but before the first thread has released the lock, the second thread requests the same software lock (step 115). At step 120, the lock manager notices that the lock is already taken (by the first thread), so, at step 125, the second thread is put to sleep and added to wait queue 130. Sometime after the second thread is put to sleep and added to the wait queue, the first thread releases the lock (step 135). The release of the lock by the first thread causes the second thread to wake up (step 140) and the second thread is removed from wait queue 130. While the second thread is waking up, at step 145, the first thread performs work that does not require the shared resource that is being controlled by the software lock. However, before the second thread fully wakes up and requests the lock (step 160), the first thread once again needs to access the shared resource and requests the lock again at step 150. Because the lock was available and the first thread requested it before the second thread, at step 155 the first thread reacquires the lock.
  • [0007]
    Once again, the lock manager denies the lock to the second thread (step 165) because the lock has already been taken by another process (the first thread). The second thread is then put back to sleep and added to the wait queue (step 170). Unfortunately, the sequence may be repeated over and over again (steps 175 and 180), thus starving the second thread so that it must wait an inordinate amount of time to acquire the shared resource being controlled by the lock. As mentioned before, while the situation described in FIG. 1 occurs in a multiprocessor environment, it may also occur in a uniprocessor environment under certain conditions. One reason that the second thread may become starved for the lock, as described above, is that the first thread is a running thread while the second thread is put to sleep. A running thread has an advantage over a sleeping thread in that it can often finish its processing and request the lock before the sleeping thread can wake up and request the lock. Using an affinity dispatcher, the second thread would preferably be dispatched to the same CPU (e.g., CPU 2) when it wakes up. However, if that CPU is busy, the second thread may be reassigned to a different CPU that is idle at the time that the second thread wakes up. Again, reassigning the second thread to an idle CPU requires additional time, giving the first thread an additional advantage in finishing its work and re-requesting the lock.
  • [0008]
    What is needed, therefore, is a system and method that prevents one process from being starved for a shared resource. What is further needed is a system and method that reserves the lock for a waiting process so that it has a much better chance of acquiring the lock in a timely manner.
  • SUMMARY
  • [0009]
    It has been discovered that the aforementioned challenges are resolved by a system and method that reserves a software lock for a waiting process. When a software lock is released by a first process (thread), a second thread that is waiting for the same resource controlled by the software lock is woken up. In addition, a reservation to the software lock is established for the second thread.
  • [0010]
    After the reservation is established, if the lock is available and requested by a thread other than the second thread, the requesting thread is denied if its priority is not better than the second thread, and the requesting thread is added to the wait queue and put to sleep. In addition, the reservation is cleared. In this manner, if the second thread takes an inordinate amount of time to request the lock, the reservation does not permanently stand in the way of other processes (threads). After the reservation has been cleared, the lock will be granted to the next thread to request the lock. While the second thread is not guaranteed immediate acquisition of the lock, it does greatly improve the second thread's chances of acquiring the lock in a reasonable amount of time.
  • [0011]
    Of course, if the second thread is the next thread to request the lock after the reservation has been set (or if the second thread requests the lock before the reservation has been set), the second thread acquires the software lock. If a reservation had been set for the second thread, then the reservation is cleared.
  • [0012]
    The foregoing is a summary and thus contains, by necessity, simplifications, generalizations, and omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and is not intended to be in any way limiting. Other aspects, inventive features, and advantages of the present invention, as defined solely by the claims, will become apparent in the non-limiting detailed description set forth below.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0013]
    The present invention may be better understood, and its numerous objects, features, and advantages made apparent to those skilled in the art by referencing the accompanying drawings.
  • [0014]
    FIG. 1 is a prior art depiction of a typical locking algorithm that potentially starves a process for a critical resource;
  • [0015]
    FIG. 2 is a high level diagram showing processes being executed by various processors and sharing a common resource through a software lock;
  • [0016]
    FIG. 3 is a diagram showing how the use of a reservation prevents starvation of the processes attempting use of the lock;
  • [0017]
    FIG. 4 is a flowchart showing the steps taken when a request for a lock is received;
  • [0018]
    FIG. 5 is a flowchart showing the steps taken when a process holding the lock releases the lock; and
  • [0019]
    FIG. 6 illustrates information handling system 601 which is a simplified example of a computer system capable of performing the computing operations described herein.
  • DETAILED DESCRIPTION
  • [0020]
    The following is intended to provide a detailed description of an example of the invention and should not be taken to be limiting of the invention itself. Rather, any number of variations may fall within the scope of the invention, which is defined in the claims following the description.
  • [0021]
    FIG. 1 is a prior art depiction of a typical locking algorithm that potentially starves a process for a critical resource. The description for FIG. 1 can be found in the Background section, above.
  • [0022]
    FIGS. 2-5 show a locking algorithm that avoids starving a process for a critical resource. When a lock is released and there are waiters, the wake-up call sets a reservation bit in the lock structure. A thread woken from the waitlist (wait queue) will have a bit set in its thread control block indicating that it was on the waitlist for the lock. The woken thread attempts to acquire the lock and clears the waitlist bit. During lock acquisition, if the lock is free and the lock structure has the reservation bit set, a check is made to see if the thread has the waitlist bit set. If the thread has the waitlist bit set, it is allowed to attempt to take the lock and its waitlist bit is cleared (irrespective of whether the lock is taken). If the thread does not have the waitlist bit set in its thread control structure, it clears the reservation bit in the lock structure, goes to sleep, and is added to the waitlist (the wait queue).
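The bit handshake described above can be sketched as a minimal single-threaded model. The structures and field names below (a `reserved` bit on the lock, an `on_waitlist` bit in the thread control block) are illustrative assumptions rather than the patent's actual layout, and the sketch omits the atomic operations a real lock path would require:

```python
class Lock:
    """Illustrative lock structure with a reservation bit (assumed layout)."""
    def __init__(self):
        self.holder = None       # thread id of the current holder, or None
        self.reserved = False    # reservation bit set by the wake-up call
        self.wait_queue = []     # thread ids sleeping on this lock

class TCB:
    """Minimal thread control block carrying the waitlist bit."""
    def __init__(self, tid):
        self.tid = tid
        self.on_waitlist = False  # set when woken from the waitlist

def try_acquire(lock, tcb):
    """One acquisition attempt following the handshake above.

    Returns True if the lock was taken; False means the thread was put
    on the wait queue (sleeping is modeled by appending its id)."""
    if lock.holder is not None:
        # Lock busy: the requester sleeps on the wait queue.
        lock.wait_queue.append(tcb.tid)
        return False
    if lock.reserved:
        if not tcb.on_waitlist:
            # Free but reserved for a woken waiter: clear the
            # reservation bit, then sleep on the wait queue.
            lock.reserved = False
            lock.wait_queue.append(tcb.tid)
            return False
        # The woken waiter: clear its waitlist bit regardless of outcome.
        tcb.on_waitlist = False
        lock.reserved = False
    lock.holder = tcb.tid
    return True
```

Note that, as the paragraph above specifies, an intervening requester clears the reservation bit before sleeping, so the reservation never blocks the lock indefinitely.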
  • [0023]
    A registration table can be used to determine which locks use the algorithm described above and in FIGS. 2-5, or the algorithm can be applied to all locks. Applying the algorithm to all locks is practical because the overhead of checking and setting the reservation bit is negligible, especially since the waitlist (wait queue) is being checked anyway. The extra dispatch overhead is also negligible, and any small savings from a more complex implementation would likely be outweighed by the additional overhead that such an implementation requires. In addition, efficiency can be further improved by storing the reservation bit in the same cache line as the lock itself. Furthermore, as explained in further detail herein, the lock reservation is only performed when a thread is waiting on the waitlist and is implemented as part of the wake-up routine.
  • [0024]
    FIG. 2 is a high level diagram showing processes being executed by various processors and sharing a common resource through a software lock. The challenge described in the Background section (see FIG. 1 and description thereof, above) is more likely to occur in a multiprocessing system where different processors are used to execute different threads, each of which needs access to a shared resource. FIG. 2 depicts this multiprocessing environment.
  • [0025]
    Hardware 200 includes, among many components, multiple processors (CPU 1 (210), CPU 2 (220), and CPU 3 (230)). Each of the processors may be executing one or more threads. In the example shown, CPU 1 (210) is executing a first thread (Thread A), and CPU 2 (220) is executing a second thread (Thread B). These threads share various shared resources, access to which is controlled through one or more software locks.
  • [0026]
    Operating system or software application 250 manages access to shared resource 270 by using software lock 260. When the lock is acquired by one of the threads, that thread can access and use shared resource 270. If the software lock has already been acquired by a first thread, then the requesting thread is added to wait queue 275. In one embodiment, wait queue 275 is a priority-ordered FIFO queue: higher priority threads are placed at the front of the queue, lower priority threads are placed at the end, and threads with the same priority are sorted in the order in which they arrived (FIFO). When the first thread releases the lock, the first thread listed in wait queue 275 is woken up and a reservation is established for the woken thread using reservation data structure 280. If another thread attempts to acquire the lock while the reservation is in place, the requesting thread is put to sleep and added to the wait queue. In one embodiment, after the other thread attempts to acquire the lock, the reservation is cleared.
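The queue ordering just described can be sketched as a small insertion routine. The convention that a smaller number means a better (higher) priority is an assumption for illustration; the patent does not fix a numeric convention:

```python
def enqueue(wait_queue, tid, priority):
    """Insert a waiter so that better (numerically smaller) priorities
    sit toward the front, with FIFO order among equal priorities.
    Entries are (priority, tid) pairs; the layout is illustrative."""
    i = 0
    # Walk past every entry with equal or better priority, so a
    # same-priority arrival lands behind earlier same-priority waiters.
    while i < len(wait_queue) and wait_queue[i][0] <= priority:
        i += 1
    wait_queue.insert(i, (priority, tid))
```

With this ordering, the "first thread listed" that the release path wakes is always the best-priority, longest-waiting thread.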
  • [0027]
    FIG. 3 is a diagram showing how the use of a reservation prevents starvation of the processes attempting use of the lock. A comparison between the processing shown in FIG. 3 and that shown in the prior art depiction shown in FIG. 1 reveals that the present invention reduces, or eliminates, starvation of a thread for a shared resource.
  • [0028]
    Depiction of a portion of the processing of a first thread commences at 300 and the depiction of a portion of the processing of a second thread commences at 301. At step 305, the first thread acquires a particular lock and then commences step 310 by performing work that utilizes the resource being controlled by the lock. Sometime after the first thread has acquired the lock but before the first thread has released the lock, the second thread requests the same software lock (step 315). At step 320, the lock manager notices that the lock is already taken (by the first thread), so, at step 325, the second thread is put to sleep and added to wait queue 275.
  • [0029]
    Sometime after the second thread is put to sleep and added to the wait queue, the first thread releases the lock (step 340). The release of the lock by the first thread causes the second thread to wakeup (step 345) and the second thread is removed from wait queue 275. In addition, the lock manager sets a reservation to the lock for the second thread. While the second thread is waking up, at step 350, the first thread performs work that does not require the shared resource being controlled by the software lock. However, before the second thread fully wakes up and requests the lock (step 365), the first thread once again needs to access the shared resource and requests the lock again at step 355. However, even though the lock is available, the lock manager notices that a reservation has been set for the lock for another thread. The lock manager compares the thread identifier of the first thread (the thread making the request) with the thread identifier stored in reservation 280 (set for the second thread). Since the two thread identifiers do not match, the first thread is put to sleep, added to the wait queue, and the reservation is cleared (step 360). Now, when the second thread wakes up and requests the lock at step 365, the lock is still available and the second thread is able to acquire the lock at step 370. Likewise, when the second thread releases the lock, a reservation will be established for the first thread as it is sleeping and waiting in the wait queue, giving the first thread a better opportunity to acquire the lock.
  • [0030]
    FIG. 4 is a flowchart showing the steps taken when a request for a lock is received. Processing commences at 400 whereupon, at step 405, a request for a lock is received. A determination is made as to whether the requested lock is reserved (decision 410). If the requested lock is reserved, decision 410 branches to “yes” branch 415 whereupon another determination is made as to whether the lock is reserved by the requesting thread (decision 420). This determination is made by comparing the requesting thread's identifier to the thread identifier corresponding to reservation 280. If the lock is reserved by the requesting thread, then decision 420 branches to “yes” branch 425 whereupon, at step 430, reservation 280 is cleared and the lock is acquired by the requestor at step 445 by writing the requestor's thread identifier to lock data structure 260 before the depicted processing ends at 495.
  • [0031]
    Returning to decision 420, if the lock is reserved but not by the requesting thread, then decision 420 branches to “no” branch 455 whereupon, at step 460, reservation 280 is cleared, and the requesting thread is put to sleep and added to wait queue 275 before the depicted processing ends at 499. Returning to decision 410, if the lock is not reserved, decision 410 branches to “no” branch 470 whereupon another determination is made as to whether the lock is currently available (decision 475). If the lock is currently available, decision 475 branches to “yes” branch 480 whereupon the lock is acquired by the requestor at step 445 by writing the requestor's thread identifier to lock data structure 260 before the depicted processing ends at 495. On the other hand, if the lock is not currently available, decision 475 branches to “no” branch 485 whereupon, at step 490, the requestor is put to sleep and added to wait queue 275 before the depicted processing ends at 499.
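The FIG. 4 request path condenses into a short routine. Here the lock is modeled as a plain dict and the reservation as the reserved thread's identifier stored in the lock; both representations, and modeling sleep by returning 'sleeping', are illustrative choices, not the patent's data structures:

```python
def request_lock(lock, tid):
    """Handle one lock request (FIG. 4); decision/step numbers in comments."""
    if lock['reservation'] is not None:        # decision 410: lock reserved?
        if lock['reservation'] == tid:         # decision 420: reserved by us?
            lock['reservation'] = None         # step 430: clear reservation
            lock['holder'] = tid               # step 445: acquire the lock
            return 'acquired'
        lock['reservation'] = None             # step 460: clear reservation,
        lock['wait_queue'].append(tid)         # put requester on wait queue
        return 'sleeping'
    if lock['holder'] is None:                 # decision 475: lock available?
        lock['holder'] = tid                   # step 445: acquire the lock
        return 'acquired'
    lock['wait_queue'].append(tid)             # step 490: sleep on the queue
    return 'sleeping'
```

As in the flowchart, the reserved branch does not recheck availability, since a reservation is only set at release time when the lock is free.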
  • [0032]
    FIG. 5 is a flowchart showing the steps taken when a thread holding a lock releases the lock. Processing commences at 500 whereupon, at step 510, a first thread that was holding lock 260 releases the lock. At step 520, lock 260 is cleared so that it is no longer assigned to the thread.
  • [0033]
    A determination is made as to whether there are one or more threads in wait queue 275 that are waiting for the lock (decision 530). If there are no threads waiting on the lock, decision 530 branches to “no” branch 535 bypassing the remaining steps and processing ends at 595. On the other hand, if one or more threads are waiting on the lock, decision 530 branches to “yes” branch 540 whereupon, at step 550, the next thread listed in wait queue 275 is selected. At step 560, a reservation is set for the selected thread by writing the selected thread's identifier to reservation data structure 280. At step 570, the selected thread is woken up, and at step 580 the selected thread is removed from wait queue 275. Processing thereafter ends at 595.
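The FIG. 5 release path pairs with the request path above. The same illustrative dict layout is assumed, and waking the selected thread (step 570), which would be a scheduler call in a real system, is modeled by returning its identifier:

```python
def release_lock(lock):
    """Release the lock (FIG. 5) and reserve it for the next waiter, if any."""
    lock['holder'] = None                  # step 520: clear the lock
    if not lock['wait_queue']:             # decision 530: any waiters?
        return None                        # "no" branch 535: nothing to do
    next_tid = lock['wait_queue'].pop(0)   # steps 550/580: select the next
                                           # waiter and remove it from the queue
    lock['reservation'] = next_tid         # step 560: set the reservation
    return next_tid                        # step 570: wake this thread
```

Because the reservation is set only on this path, a lock with no waiters carries no reservation overhead at all.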
  • [0034]
    FIG. 6 illustrates information handling system 601 which is a simplified example of a computer system capable of performing the computing operations described herein. Computer system 601 includes processor 600 which is coupled to host bus 602. A level two (L2) cache memory 604 is also coupled to host bus 602. Host-to-PCI bridge 606 is coupled to main memory 608, includes cache memory and main memory control functions, and provides bus control to handle transfers among PCI bus 610, processor 600, L2 cache 604, main memory 608, and host bus 602. Main memory 608 is coupled to Host-to-PCI bridge 606 as well as host bus 602. Devices used solely by host processor(s) 600, such as LAN card 630, are coupled to PCI bus 610. Service Processor Interface and ISA Access Pass-through 612 provides an interface between PCI bus 610 and PCI bus 614. In this manner, PCI bus 614 is insulated from PCI bus 610. Devices, such as flash memory 618, are coupled to PCI bus 614. In one implementation, flash memory 618 includes BIOS code that incorporates the necessary processor executable code for a variety of low-level system functions and system boot functions.
  • [0035]
    PCI bus 614 provides an interface for a variety of devices that are shared by host processor(s) 600 and Service Processor 616 including, for example, flash memory 618. PCI-to-ISA bridge 635 provides bus control to handle transfers between PCI bus 614 and ISA bus 640, universal serial bus (USB) functionality 645, power management functionality 655, and can include other functional elements not shown, such as a real-time clock (RTC), DMA control, interrupt support, and system management bus support. Nonvolatile RAM 620 is attached to ISA Bus 640. Service Processor 616 includes JTAG and I2C busses 622 for communication with processor(s) 600 during initialization steps. JTAG/I2C busses 622 are also coupled to L2 cache 604, Host-to-PCI bridge 606, and main memory 608 providing a communications path between the processor, the Service Processor, the L2 cache, the Host-to-PCI bridge, and the main memory. Service Processor 616 also has access to system power resources for powering down information handling device 601.
  • [0036]
    Peripheral devices and input/output (I/O) devices can be attached to various interfaces (e.g., parallel interface 662, serial interface 664, keyboard interface 668, and mouse interface 670) coupled to ISA bus 640. Alternatively, many I/O devices can be accommodated by a super I/O controller (not shown) attached to ISA bus 640.
  • [0037]
    In order to attach computer system 601 to another computer system to copy files over a network, LAN card 630 is coupled to PCI bus 610. Similarly, to connect computer system 601 to an ISP to connect to the Internet using a telephone line connection, modem 675 is connected to serial port 664 and PCI-to-ISA Bridge 635.
  • [0038]
    While the computer system described in FIG. 6 is capable of executing the processes described herein, this computer system is simply one example of a computer system. Those skilled in the art will appreciate that many other computer system designs are capable of performing the processes described herein.
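The preemption-and-reservation scheme that such systems execute (summarized in the Abstract: a release wakes the longest-waiting thread and reserves the lock for it; a request from any other thread while the reservation is pending is denied, queued, and clears the reservation, after which the next requester is granted the lock) can be modeled as a small state machine. The sketch below is illustrative only: `ReservationLock`, `try_acquire`, and `release` are hypothetical names not taken from the patent, and a real kernel implementation would block and wake threads rather than return booleans from a non-blocking request.

```python
class ReservationLock:
    """Single-threaded model of the lock preemption/reservation scheme."""

    def __init__(self):
        self.holder = None        # thread currently holding the lock
        self.reserved_for = None  # thread holding a reservation, if any
        self.wait_queue = []      # threads denied the lock, in arrival order

    def try_acquire(self, tid):
        """Grant the lock to tid if it is free and not reserved for another
        thread; otherwise deny the request, queue tid, and clear any
        reservation held by a different thread."""
        if self.holder is None and self.reserved_for in (None, tid):
            self.holder = tid
            self.reserved_for = None  # a reservation is consumed on acquire
            return True
        # Denied: the lock is held, or it is reserved for another thread.
        # A request from a non-reserving thread clears the reservation,
        # so the lock will be granted to the next thread that requests it.
        if self.reserved_for not in (None, tid):
            self.reserved_for = None
        self.wait_queue.append(tid)
        return False

    def release(self):
        """Release the lock and reserve it for the longest-waiting thread
        (which a real implementation would also wake up here)."""
        self.holder = None
        if self.wait_queue:
            self.reserved_for = self.wait_queue.pop(0)
```

For example, if T1 holds the lock and T2 is queued, T1's release reserves the lock for T2; a later request from T3 is denied and clears the reservation, so the next requester (here T4) is granted the lock.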
  • [0039]
    One of the preferred implementations of the invention is a client application, namely, a set of instructions (program code) or other functional descriptive material in a code module that may, for example, be resident in the random access memory of the computer. Until required by the computer, the set of instructions may be stored in another computer memory, for example, in a hard disk drive, or in a removable memory such as an optical disk (for eventual use in a CD ROM) or floppy disk (for eventual use in a floppy disk drive), or downloaded via the Internet or other computer network. Thus, the present invention may be implemented as a computer program product for use in a computer. In addition, although the various methods described are conveniently implemented in a general purpose computer selectively activated or reconfigured by software, one of ordinary skill in the art would also recognize that such methods may be carried out in hardware, in firmware, or in more specialized apparatus constructed to perform the required method steps. Functional descriptive material is information that imparts functionality to a machine. Functional descriptive material includes, but is not limited to, computer programs, instructions, rules, facts, definitions of computable functions, objects, and data structures.
  • [0040]
While particular embodiments of the present invention have been shown and described, it will be obvious to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from this invention and its broader aspects. Therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of this invention. Furthermore, it is to be understood that the invention is solely defined by the appended claims. It will be understood by those with skill in the art that if a specific number of an introduced claim element is intended, such intent will be explicitly recited in the claim, and in the absence of such recitation no such limitation is present. As a non-limiting example, as an aid to understanding, the following appended claims contain usage of the introductory phrases “at least one” and “one or more” to introduce claim elements. However, the use of such phrases should not be construed to imply that the introduction of a claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an”; the same holds true for the use in the claims of definite articles.
Patent Citations
Cited Patent | Filing date | Publication date | Applicant | Title
US5297283 * | Oct 16, 1992 | Mar 22, 1994 | Digital Equipment Corporation | Object transferring system and method in an object based computer operating system
US7383368 * | Sep 25, 2003 | Jun 3, 2008 | Dell Products L.P. | Method and system for autonomically adaptive mutexes by considering acquisition cost value
US20020078119 * | Dec 4, 2000 | Jun 20, 2002 | International Business Machines Corporation | System and method for improved complex storage locks
US20020078123 * | Aug 13, 2001 | Jun 20, 2002 | Jean-Francois Latour | Method and apparatus for resource access synchronization
US20020083063 * | Dec 26, 2000 | Jun 27, 2002 | Bull Hn Information Systems Inc. | Software and data processing system with priority queue dispatching
US20040019892 * | Jul 24, 2002 | Jan 29, 2004 | Sandhya E. | Lock management thread pools for distributed data systems
US20040216112 * | Apr 23, 2003 | Oct 28, 2004 | International Business Machines Corporation | System and method for thread prioritization during lock processing
US20060265373 * | May 20, 2005 | Nov 23, 2006 | Mckenney Paul E | Hybrid multi-threaded access to data structures using hazard pointers for reads and locks for updates
US20070101333 * | Oct 27, 2005 | May 3, 2007 | Mewhinney Greg R | System and method of arbitrating access of threads to shared resources within a data processing system
Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US7890707 * | Jun 27, 2007 | Feb 15, 2011 | Microsoft Corporation | Efficient retry for transactional memory
US7925908 * | Jun 18, 2007 | Apr 12, 2011 | Samsung Electronics Co., Ltd | Apparatus and method for controlling slotted mode of several systems using one sleep controller in a hybrid terminal of a mobile communication system
US8132171 * | Dec 22, 2006 | Mar 6, 2012 | Hewlett-Packard Development Company, L.P. | Method of controlling thread access to a synchronization object
US8140823 | Dec 3, 2007 | Mar 20, 2012 | Qualcomm Incorporated | Multithreaded processor with lock indicator
US8595729 * | Nov 6, 2006 | Nov 26, 2013 | Intel Corporation | Managing sequenced lock requests
US8615755 | Sep 15, 2010 | Dec 24, 2013 | Qualcomm Incorporated | System and method for managing resources of a portable computing device
US8631414 | Sep 2, 2011 | Jan 14, 2014 | Qualcomm Incorporated | Distributed resource management in a portable computing device
US8806502 | Sep 13, 2011 | Aug 12, 2014 | Qualcomm Incorporated | Batching resource requests in a portable computing device
US9098521 | Jan 12, 2012 | Aug 4, 2015 | Qualcomm Incorporated | System and method for managing resources and threshold events of a multicore portable computing device
US9152523 | Jan 27, 2012 | Oct 6, 2015 | Qualcomm Incorporated | Batching and forking resource requests in a portable computing device
US9411635 | Sep 18, 2012 | Aug 9, 2016 | Microsoft Technology Licensing, LLC | Parallel nested transactions in transactional memory
US9697055 * | Sep 22, 2015 | Jul 4, 2017 | International Business Machines Corporation | Almost fair busy lock
US20080059676 * | Aug 31, 2006 | Mar 6, 2008 | Charles Jens Archer | Efficient deferred interrupt handling in a parallel computing environment
US20080109669 * | Jun 18, 2007 | May 8, 2008 | Samsung Electronics Co., Ltd. | Apparatus and method for controlling slotted mode of several systems using one sleep controller in a hybrid terminal of a mobile communication system
US20080109807 * | Nov 6, 2006 | May 8, 2008 | Intel Corporation | Managing Sequenced Lock Requests
US20080155546 * | Dec 22, 2006 | Jun 26, 2008 | Hewlett-Packard Development Company, L.P. | Method of controlling thread access to a synchronization object
US20090007070 * | Jun 27, 2007 | Jan 1, 2009 | Microsoft Corporation | Efficient retry for transactional memory
US20090144519 * | Dec 3, 2007 | Jun 4, 2009 | Qualcomm Incorporated | Multithreaded Processor with Lock Indicator
US20090172686 * | Oct 9, 2008 | Jul 2, 2009 | Chen Chih-Ho | Method for managing thread group of process
US20120311605 * | May 31, 2011 | Dec 6, 2012 | International Business Machines Corporation | Processor core power management taking into account thread lock contention
US20130191839 * | Nov 6, 2012 | Jul 25, 2013 | Canon Kabushiki Kaisha | Information processing apparatus, control method therefor, and computer-readable storage medium
US20160139966 * | Sep 22, 2015 | May 19, 2016 | International Business Machines Corporation | Almost fair busy lock
CN104102547 A * | Jul 25, 2014 | Oct 15, 2014 | 珠海全志科技股份有限公司 | Multiprocessor system synchronization method and synchronization device
WO2009073722 A1 * | Dec 3, 2008 | Jun 11, 2009 | Qualcomm Incorporated | Multithreaded processor with lock indicator
WO2013085669 A1 * | Nov 9, 2012 | Jun 13, 2013 | Qualcomm Incorporated | Batching of resource requests into a transaction and forking of this transaction in a portable computing device
WO2016079622 A1 * | Oct 29, 2014 | May 26, 2016 | International Business Machines Corporation | Almost fair busy lock
Classifications
U.S. Classification: 718/100
International Classification: G06F9/46
Cooperative Classification: G06F2209/522, G06F9/526
European Classification: G06F9/52E
Legal Events
Date | Code | Event | Description
Jan 10, 2006 | AS | Assignment
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ACCAPADI, JOS M.;ACCAPADI, MATTHEW;DUNSHEA, ANDREW;AND OTHERS;REEL/FRAME:017175/0180;SIGNING DATES FROM 20051101 TO 20051107