CA2435148A1 - System and method for lock caching for compound atomic operations on shared memory - Google Patents

System and method for lock caching for compound atomic operations on shared memory

Info

Publication number
CA2435148A1
CA2435148A1 CA002435148A CA2435148A CA 2435148 A1
Authority
CA
Canada
Prior art keywords
lock
memory
accordance
locks
pool
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002435148A
Other languages
French (fr)
Inventor
Robert J. Blainey
Raul E. Silvera
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
IBM Canada Ltd
Original Assignee
IBM Canada Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by IBM Canada Ltd filed Critical IBM Canada Ltd
Priority to CA002435148A priority Critical patent/CA2435148A1/en
Priority to US10/864,777 priority patent/US7228391B2/en
Publication of CA2435148A1 publication Critical patent/CA2435148A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F9/526Mutual exclusion algorithms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/52Indexing scheme relating to G06F9/52
    • G06F2209/522Manager

Abstract

A system and method for lock caching for compound atomic operations (i.e. a read or write operation to more than one 4-byte word) on shared memory is provided. In a computer system including a memory shared among a plurality of processing entities, for example, multiple threads, a method of performing compound atomic operations comprises providing a pool of locks for synchronizing access to the memory;
assigning the locks among the plurality of entities to minimize lock contention; and performing the compound atomic operations using the assigned locks. Each lock may be assigned in accordance with an address of the shared memory from the processing entity's compound atomic operations. Assigning locks may be performed in a manner to minimize concurrent atomic updates to the same or overlapping portions of the shared memory. For example, the addresses of the memory from the compound atomic operations may be aliased in accordance with a known upper bound on the amount of the shared memory that may be affected by any atomic operation.

Description

SYSTEM AND METHOD FOR LOCK CACHING FOR COMPOUND
ATOMIC OPERATIONS ON SHARED MEMORY
TECHNICAL FIELD
[0001] This system and method are related to the field of shared memory and more particularly to compound atomic operations on shared memory.
BACKGROUND
[0002] One area of interest to computer program developers is parallel processing whereby computer code from an application is processed by two or more co-operating processors simultaneously using shared memory. A computer system having two or more co-operating processors coupled to shared memory, an operating system (OS) adapted for parallel processing, such as a multi-tasking, multi-threaded OS, and an application coded in a computer language adapted for parallel processing may provide significant performance advantages over a non-parallel processing implementation of the application.
[0003] When programming multiple threads using shared memory, synchronization is often necessary to communicate control commands or data between executing threads.
Synchronization may be implemented in a variety of manners including critical sections, barriers, and semaphores. A primitive form of synchronization is the atomic update of a single memory location. In a multi-threaded environment, for example, an atomic update of a shared memory location by one of the threads requires that no other thread can read or modify the shared memory location while the update is happening.
Synchronization is used to ensure that two or more threads competing for the same resource (i.e. the shared memory location) wait on the resource until the one thread having the resource is finished.
[0004] Often, the lower level instruction sets of many computer processor architectures include specific instructions to atomically update memory in specialized ways.
These instructions typically form the basis of other forms of synchronization.
Higher level programming languages such as C, C++ or Java include primitives that represent various forms of synchronization. For example, in the OpenMP™ application programming interface (API) extensions to C and C++, there are constructs for critical sections, semaphores, barriers and atomic updates. OpenMP is a trademark of the OpenMP Architecture Review Board. These forms of synchronization can be implemented correctly using primitive forms of synchronization as described above, but some known implementations require more efficient treatment. OpenMP supports multi-platform shared-memory parallel programming in C/C++ and Fortran on all architectures, including Unix™ platforms and Windows™ NT platforms.
[0005] In accordance with the OpenMP C and C++ API (Version 2.0, published March 2002, and available at http://www.openmp.org/specs/mp-documents/cspec20_bars.pdf), for example, there is provided an ATOMIC construct, a pragma or compiler directive to instruct a C/C++ compiler to generate code which ensures that a single memory location is updated atomically. If there are instructions from the target processor's instruction set that match the semantics of the atomic update then use of those instructions is often the best implementation of the construct. If, however, there are no appropriate hardware instructions available, other synchronization implementations are used to ensure that the update is indeed atomic. As an example of this problem, the OpenMP implementation of the ATOMIC pragma on PowerPC™ architecture processors is unable to exploit the available load word and reserve indexed (lwarx) and store word conditional indexed (stwcx) instructions for compound atomic updates of data items larger than 4 bytes (e.g. double or long long data types).
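For concreteness, the following minimal C/OpenMP sketch (illustrative only and not taken from the patent; the function and variable names are assumptions) shows the kind of compound atomic update described above: an 8-byte double updated under the standard OpenMP atomic directive, which cannot be mapped onto a single lwarx/stwcx pair on a 32-bit PowerPC target and therefore needs some other synchronization underneath.

    /* Compound atomic update: the target is a double (8 bytes), larger than
     * the 4-byte word covered by lwarx/stwcx on 32-bit PowerPC, so the
     * compiler must fall back to another synchronization mechanism. */
    void add_to_total(double *total, double x)
    {
        #pragma omp atomic
        *total += x;
    }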
[0006] A common implementation of these compound atomic operations (i.e. reads and writes to more than one 4-byte word) requires acquiring a semaphore or lock in another location, updating a particular shared memory location and then releasing the lock.
Because there is often ambiguity about which symbols in a computer source code language can refer to which memory locations (e.g. through the use of pointers), a correct solution must ensure that acquisition of a lock for an atomic update guarantees that no other atomic update on the same or overlapping locations in memory can happen concurrently.
[0007] One way to ensure this exclusivity is to require all atomic updates in a program to acquire a single shared lock. The problem with this solution is that threads are likely to contend for that single shared lock when, in fact, they are not contending to update the same or overlapping locations in memory.
[0008] The following pseudo-code illustrates an exemplary contention:
    double *p, *q;
    #pragma atomic
    *p += x
    #pragma atomic
    *q += y
[0009] In the example, the updates of the memory pointed to by p and q must be done exclusively only if p and q point to overlapping storage. If an implementation of the atomic construct uses a single shared lock and p and q do not, in fact, point at overlapping storage, then there may be unnecessary contention due to the shared lock.
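The following C sketch (the names and the use of POSIX threads are assumptions for illustration, not part of the patent text) makes the drawback concrete: with a single program-wide lock, updates through p and q both serialize on the same mutex even when the two pointers refer to disjoint storage.

    #include <pthread.h>

    /* Single-shared-lock fallback: every compound atomic update in the
     * program acquires this one mutex, so unrelated updates contend. */
    static pthread_mutex_t global_atomic_lock = PTHREAD_MUTEX_INITIALIZER;

    void atomic_add_double(double *target, double value)
    {
        pthread_mutex_lock(&global_atomic_lock);   /* acquire the shared lock */
        *target += value;                          /* 8-byte compound update  */
        pthread_mutex_unlock(&global_atomic_lock); /* release                 */
    }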
[0010] A solution to some or all of these shortcomings is therefore desired.
SUMMARY
[0011] The present invention is directed to a system and method for lock caching for compound atomic operations on shared memory.
[0012] For a computer system including a memory shared among a plurality of processing entities, there is provided a method of performing compound atomic operations by the processing entities. The method comprises providing a pool of locks for synchronizing access to the memory; assigning the locks among the plurality of entities to minimize lock contention; and performing the compound atomic operations using the assigned locks.

[0013] In accordance with a feature of this aspect, each lock may be assigned in accordance with an address of the shared memory from the entity's compound atomic operations. Assigning may comprise selecting a lock from the pool to minimize concurrent atomic updates to the same or overlapping portions of the shared memory.
Selecting may comprise aliasing the addresses in accordance with a known upper bound on the amount of the shared memory that may be affected by any atomic operation.
[0014] In accordance with a further feature, the method comprises configuring the pool of locks in accordance with a count of the plurality of processing entities expected to contend for the locks.
[0015] Further, the step of performing comprises, for each respective processing entity, synchronizing the entity in accordance with the availability of the lock assigned to the entity and further comprises releasing the lock assigned to the entity.
[0016] In accordance with another feature, the method includes providing a software tool to generate instructions for the computer system to define the pool and a mechanism to use said pool, the mechanism adapted to assign the locks.
[0017] The software tool may be adapted to generate instructions to define said processing entities, said entities including instructions to use said mechanism to perform said steps of assigning and performing. Optionally, the software tool operates in accordance with a standard for shared-memory parallel processing.
[0018] In accordance with another aspect of the invention, there is provided a lock sharing system for performing compound atomic operations in a computer system including a memory shared among a plurality of processing entities. The lock sharing system comprises a pool of locks for synchronizing access to the memory; and a mechanism for sharing the locks among the plurality of entities to minimize lock contention, said processing entities adapted to perform the compound atomic operations using the assigned locks.

[0019] The mechanism may be adapted to assign each lock in accordance with an address of the shared memory from the entity's compound atomic operations. The mechanism may select a lock from the pool to minimize concurrent atomic updates to the same or overlapping portions of the shared memory. Further, the mechanism can be adapted to alias the addresses in accordance with a known upper bound on the amount of the shared memory that may be affected by any atomic operation.
[0020] As a feature of the lock sharing system, the pool is configured in accordance with a count of the plurality of processing entities expected to contend for the locks.
[0021] Further, the mechanism may comprise a component to synchronize the processing entity in accordance with the availability of the lock assigned to the processing entity and, additionally, a component to release the lock assigned to the processing entity.
[0022] In accordance with a further feature, the lock sharing system includes a software tool component configured to generate instructions for the computer system for defining the pool and mechanism. The software tool component may further generate instructions to define the processing entities including instructions to use the pool and mechanism. Optionally, the software tool component operates in accordance with a standard for shared-memory parallel processing.
[0023] In accordance with another aspect, there is provided a software tool for generating instructions to perform compound atomic operations using shared locks to minimize lock contention. More particularly, there is provided a computer program product having a computer readable medium tangibly embodying computer executable code for generating instructions for a computer system including a memory to be shared among a plurality of processing entities. The computer program product of
this aspect comprises code to define a pool of locks for synchronizing access to the memory; code to define a mechanism for sharing the locks among the plurality of entities to minimize lock contention; and code to define the plurality of processing entities, said processing entities adapted to perform compound atomic operations using the mechanism.

Optionally, the computer program product is configured in accordance with a standard for shared-memory parallel processing.
[0024] In accordance with yet another aspect of the invention there is provided a computer program product having a computer readable medium tangibly embodying computer executable code for performing compound atomic operations in a computer system including a memory to be shared among a plurality of processing entities. This computer program product comprises code defining a pool of locks for synchronizing access to the memory; code defining a mechanism for sharing the locks among the plurality of entities to minimize lock contention; and code defining the plurality of processing entities, said processing entities adapted to perform compound atomic operations using the mechanism.
BRIEF DESCRIPTION OF THE DRAWINGS
[0025] Further features and advantages of aspects of the present invention will become apparent from the following detailed description, taken in combination with the appended drawings, in which:
[0026] Fig. 1 schematically illustrates a computer system embodying aspects of the invention;
[0027] Fig. 2 schematically illustrates, in greater detail, a portion of the computer system of Fig. 1;
[0028] Fig. 3 illustrates a memory portion of the computer system of Fig. 2;
[0029] Figs. 4A and 4B are flowcharts illustrating operational steps involved in an embodiment of the invention.
[0030] It will be noted that throughout the appended drawings, like features are identified by like reference numerals.

DETAILED DESCRIPTION
[0031] The following detailed description of the embodiments of the present invention does not limit the implementation of the invention to any particular computer programming language. The present invention may be implemented in any computer programming language provided that the OS (Operating System) provides the facilities that may support the requirements of the present invention. A preferred embodiment is implemented in C/C++ computer programming language. Any limitations presented would be a result of a particular type of operating system, data processing system, or computer programming language, and thus would not be a limitation of the present invention.
[0032] An embodiment of the invention, computer system 100, is illustrated in Fig. 1. A
computer system 100, which is illustrated for exemplary purposes as a computing device, is adapted to communicate with other computing devices (not shown) using network 102. As will be appreciated by those of ordinary skill in the art, network 102 may be embodied using conventional networking technologies and may include one or more of the following: local networks, wide area networks, intranets, the Internet, and the like.
[0033] Through the description herein, an embodiment of the invention is illustrated with aspects of the invention embodied solely on computer system 100. Aspects of the invention may be distributed amongst one or more networked computing devices which interact with computer system 100 using one or more networks such as, for example, network 102. However, for ease of understanding, aspects of the invention have been embodied in a single computing device - computer system 100.
[0034] Computing device 100 typically includes a processing system 104 which is enabled to communicate with the network 102, and various input devices 106 and output devices 108. Input devices 106, (a keyboard and a mouse are shown) may also include a scanner, an imaging system (e.g., a camera, etc.), or the like.
Similarly, output devices 108 (only a display is illustrated) may also include printers and the like.
Additionally, combination input/output (I/O) devices may also be in communication with processing system 104. Examples of conventional I/O devices (not shown in Fig. 1) include removable recordable media (e.g., floppy disk drives, tape drives, CD-ROM
drives, DVD-RW drives, etc.), touch screen displays, and the like.
[0035] Exemplary processing system 104 is illustrated in greater detail in Fig. 2. As illustrated, processing system 104 includes: two central processing units (CPUs) 202A and 202B, collectively 202, memory 204, network interface (I/F) 206 and input-output interface (I/O I/F) 208. Communication between various components of the processing system 104 may be facilitated via a suitable communications bus 210 as required.
[0036] CPU 202 is a processing unit, such as an IBM PowerPC™, Intel Pentium™, Sun Microsystems UltraSparc™ processor, or the like, suitable for the operations described herein. As will be appreciated by those of ordinary skill in the art, other embodiments of processing system 104 could use alternative CPUs and may include embodiments in which only one or more than two CPUs are employed (not shown). CPUs 202 may include various support circuits to enable communication with the other components of the processing system 104.
[0037] Memory 204 includes both volatile memory 212 and persistent memory 214 for the storage of: operational instructions for execution by CPUs 202, data registers, and the like. Memory 204 preferably includes a combination, possibly arranged in a hierarchy, of random access memory (RAM), read only memory (ROM) and persistent memory such as that provided by a hard disk drive, flash memory, or both.
[0038] Network I/F 206 enables communication between computing device 100 and other network computing devices (not shown) via network 102. Network I/F 206 may be embodied in one or more conventional communication devices. Examples of a conventional communication device include: an Ethernet card, a token ring card, a modem, or the like. Network I/F 206 may also enable the retrieval or transmission of instructions for execution by CPUs 202, from or to a remote storage media or device via network 102.

[0039] I/O I/F 208 enables communication between processing system 104 and the various I/O devices 106 and 108. I/O I/F 208 may include, for example, a video card for interfacing with an external display such as output device 108. Additionally, I/O I/F 208 may enable communication between processing system 104 and a removable media 216. Removable media 216 may comprise a conventional diskette or other removable memory devices such as Zip™ drives, flash cards, CD-ROMs, static memory devices and the like. Removable media 216 may be used to provide instructions for execution by CPUs 202 or as a removable data storage device.
[0040] Computer instructions/applications stored in memory 204 and executed by CPUs 202 (thus adapting the operation of the computer system 100 as described herein) are illustrated in functional block form in Fig. 3. As will be appreciated by those of ordinary skill in the art, the discrimination between aspects of the applications illustrated as functional blocks in Fig. 3 is somewhat arbitrary in that the various operations attributed to a particular application as described herein may, in an alternative embodiment, be subsumed by another application.
[0041] As illustrated for exemplary purposes only, memory 204 stores instructions and data for enabling the operation of lock caching for compound atomic operations on shared memory, comprising: an operating system (OS) 302, a communication suite 304, a compiler 306, an application 308 comprising computer source code 310 and executable code 312, lock cache 316 and a portion 318 of shared memory 204 where processes, threads or other processing entities 314 of application 308 may contend for atomic operations.
[0042] OS 302 is an operating system suitable for operation with selected CPUs and the operations described herein. Multi-tasking, multi-threaded OSes such as, for example, Microsoft Windows 2000™, UNIX™ or other UNIX-like OSes such as IBM AIX™, Linux™, etc. are expected to be preferred in many embodiments.
Communication suite 304 provides, through interaction with OS 302 and network I/F 206 (Fig. 2), suitable communication protocols to enable communication with other networked computing devices via network 102 (Fig. 1). Communication suite 304 may include one or more of such protocols such as TCP/IP, Ethernet, token ring and the like.
Communication suite 304 preferably includes asynchronous transport communication capabilities for communicating with other computing devices.
[0043] Compiler 306 is a software application for compiling and linking computer source code to create executable code for execution by CPUs 202 in the environment provided by OS 302 and communications suite 304. Application 308 comprises source code for compiling and linking by compiler 306 and executable code 312 produced thereby for use during runtime. Instructions of application 308 define or produce when compiled compound atomic operations (not shown) with respect to shared memory 204.
[0044] In accordance with an embodiment of the invention, compiler 306 is adapted to generate executable code which, when run, shares a plurality of shared memory locks from cache 316 among the processes, threads or other processing entities, collectively 314, of application 308 that share a portion 318 of memory 204 and perform compound atomic operations with the shared portion 318. Though portion 318 is shown as a single contiguous block of memory 204, it may comprise one or more contiguous blocks.
[0045] In the exemplary embodiment, the locks are themselves implemented as a portion of memory 204, defined as a lock cache 316, i.e. a pool or other storage construct of memory 204 that may be reserved by a first one of the entities 314 to lock out one or more other entities 314 from updating a particular part of portion 318 while the first one of the entities performs a compound atomic operation. In accordance with this embodiment, the lock cache is facilitated by a lookup function (not shown) to determine which lock of the lock cache 316 is associated with a particular memory address of portion 318 to be locked, a lock acquire function to reserve the determined lock during the atomic operations and a lock release function to give back the lock for re-use by the same or another of the entities 314.
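One way to realize such a lock cache is sketched below in C with POSIX threads; the pool size, the modulo mapping from address to lock, and all identifiers are assumptions for illustration rather than the patent's own implementation.

    #include <pthread.h>
    #include <stdint.h>

    #define LOCK_CACHE_SIZE 64   /* tuning parameter, see paragraph [0053] */

    static pthread_mutex_t lock_cache[LOCK_CACHE_SIZE];

    /* One-time initialization of the pool of locks. */
    static void lock_cache_init(void)
    {
        for (int i = 0; i < LOCK_CACHE_SIZE; i++)
            pthread_mutex_init(&lock_cache[i], NULL);
    }

    /* lookup: map a target memory address to one lock in the pool. */
    static pthread_mutex_t *lock_cache_lookup(const void *addr)
    {
        return &lock_cache[(uintptr_t)addr % LOCK_CACHE_SIZE];
    }

    /* acquire/release: reserve the selected lock around a compound update. */
    static void lock_acquire(pthread_mutex_t *l) { pthread_mutex_lock(l); }
    static void lock_release(pthread_mutex_t *l) { pthread_mutex_unlock(l); }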
[0046] In accordance with the exemplary embodiment, directives may be used in the source code of application 308 to instruct compiler 306 to invoke the pooled locking scheme. For example, the following pseudo-code illustrates a source code fragment for two or more of entities 314 and a representation of the code fragment generated by compiler 306 to implement the lock:
Entity Code:
    #pragma atomic
    *p += x

Generated Code Fragment:
    lock = lock_cache.lookup(p)
    lock.acquire();
    *p += x
    lock.release();
[0047] In accordance with object oriented programming principles, which persons of ordinary skill in the art will understand are not necessary to implement the invention, lock_cache.lookup returns a lock object from a cache 316 (lock cache) of such objects for a particular memory address pointed to by pointer variable p. The lookup determines the appropriate lock for the memory address from the cache but does not invoke the lock. Rather, the lock object has a method (acquire) for synchronizing a requesting entity in accordance with the availability of the lock. If the lock is reserved by another entity, the acquire method waits or locks out the requesting entity until the lock is available.
Advantageously, a requesting entity is not locked out when another requesting entity is performing atomic operations if that other requesting entity's memory address does not map to the same lock.
[0048] Persons of ordinary skill in the art will recognize that multiple different addresses might refer to overlapping storage such as is illustrated by the code fragment:
Entity Code:
    char *cp;
    double *dp;
    dp = &(x[i]);
    cp = (char *)dp + 4;
    #pragma atomic
    *dp += x
    ...
    #pragma atomic
    *cp += y

[0049] In this case, the same lock should be used to guard both atomic updates because the updates may use and modify the same memory location (dp+4). In accordance with a standard or other convention (e.g. OpenMP) where the target of an atomic update must be a scalar variable, the following embodiment may be usefully employed.
[0050] Where there is a known upper bound on the amount of storage that might be modified by any given atomic update (i.e. the largest scalar type that is supported), an appropriate cache lookup may be tuned. For example, if the largest supported type is 16 bytes long, a cache lookup using the next lower 32-byte aligned address instead of the target address can be performed and be assured that any pointers that might address overlapping storage will use the same lookup key.
[0051] The following pseudo-code fragment illustrates such a tuned locking scheme (based on a maximum scalar size of 16 bytes):
Entity Code:
    #pragma atomic
    *p += x

Generated Code:
    lock = lock_cache.lookup(p >> 5)
    lock.acquire();
    *p += x
    lock.release();
[0052] It will be appreciated by those of ordinary skill in the art that the 5-bit shift applied to the address key p may alternatively be performed as an AND operation (e.g. p & 0xFFFFFFE0), but the shift achieves the desired aliasing of a key to the lock cache and is computationally faster.
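A small C helper can make the aliasing concrete; it is a sketch under the stated assumption of a 16-byte maximum scalar size, and the helper name is illustrative. Dropping the low 5 bits of the address maps every byte of a 32-byte-aligned block to the same key, so dp and cp = (char *)dp + 4 from the earlier fragment select the same lock.

    #include <stdint.h>

    /* Alias an address to 32-byte granularity: addresses in the same
     * 32-byte-aligned block yield the same key and thus the same lock. */
    static inline uintptr_t lock_key(const void *addr)
    {
        return (uintptr_t)addr >> 5;
    }

    /* Example use with the lock cache sketched earlier (names assumed):
     *   pthread_mutex_t *l = &lock_cache[lock_key(p) % LOCK_CACHE_SIZE];  */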
[0053] The size of the cache (alternately, the number of bits of the address used in the lookup) is a tuning parameter since the cache size should be larger when a larger number of threads are active.
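The patent does not prescribe a sizing rule, but one plausible way to expose this tuning parameter is sketched below (the ratio of locks to threads and the helper name are assumptions): grow the pool with the expected thread count and keep it a power of two so a key can be reduced with a mask rather than a modulo.

    /* Illustrative sizing heuristic only; not specified by the patent. */
    static unsigned lock_cache_size_for(unsigned num_threads)
    {
        unsigned size = 16;                 /* assumed minimum pool size      */
        while (size < 4 * num_threads)      /* assumed locks-per-thread ratio */
            size <<= 1;                     /* power of two: key & (size - 1) */
        return size;
    }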
[0054] Fig. 4A illustrates operations 400 involved in coding and compiling an application in accordance with an embodiment of the invention. Following a start (Step 402), a developer or other programmer writes computer source code (Step 404) which, when executed, performs instructions for compound atomic operations using a specific memory location. The memory location may be shared with one or more processes, threads or other entities but need not be. The source code includes pragmas or other compiler directives, as discussed previously, for instructing the compiler to provide executable code that obtains a lock from a lock cache in accordance with the memory location to be updated, locks the lock to prevent other entities from updating the location, performs the update and releases the lock. The code is compiled using a compiler adapted to receive the pragmas and produce the executable code (Step 406) and the operations 400 end (Step 408).
[0055] Fig. 4B illustrates operations 420 involved at runtime for the executable code produced by operations 400. At start step 422, a thread of the application produced by the compiler 306 and including instructions for performing compound atomic operations is executed. The executable code obtains (e.g. by way of a lookup method or function) a lock from the cache 316 for a particular address of data memory 318 to be atomically updated (Step 424). In accordance with an embodiment of the invention, the memory address may be shifted or otherwise aliased to provide a key to the lock cache for assigning a particular lock from among the pool of locks maintained by the cache for the shared memory block 318. Once the particular lock is obtained, for example, by returning a lock object to the thread, the lock may be acquired exclusively by the thread to prevent other executing entities from updating the particular address (Step 426).
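Tying the runtime steps together, the following C sketch (building on the illustrative lock-cache and key helpers above, so all names are assumptions rather than the patent's own code) shows one thread performing Steps 424 and 426 around an 8-byte update.

    #include <pthread.h>
    #include <stdint.h>

    void pooled_atomic_add(double *target, double value)
    {
        /* Step 424: alias the address and look up a lock from the pool. */
        pthread_mutex_t *l = &lock_cache[lock_key(target) % LOCK_CACHE_SIZE];

        /* Step 426: acquire the lock exclusively, update, then release. */
        pthread_mutex_lock(l);
        *target += value;
        pthread_mutex_unlock(l);
    }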
[0056] It will be appreciated that modifications and extensions to the embodiments described may be made without departing from the scope of the invention. For example, it is described that compiler 306 may be instructed using directives to generate code for implementing the lock cache for compound operations. Other mechanisms, whether automatic or user initiated such as by direct programming, may be used to invoke a lock cache implementation.
[0057] It will be appreciated that modifications and extensions to the embodiments described may be made without departing from the scope of the invention. For example, it is described that compiler 306 may be instructed using directives to generate code for implementing the lock cache for compound operations. Other mechanisms may be used to invoke a lock cache implementation and may be automated or user initiated such as by direct programming.
[0058] Additionally, alternative software tools to compiler 306 may be adapted to provide lock caching in accordance with the invention. Such software tools may include interpreters, editors, preprocessors and the like.
[0059] The invention applies to shared memory that is shared among two or more processes/threads in a single processor system as well. Lock contention may be minimized in such a system by sharing locks from a cache or other mechanism in accordance with the invention.
[0060] The embodiments of the invention described above are intended to be exemplary only. The scope of the invention is therefore intended to be limited solely by the scope of the appended claims.

Claims (30)

1. In a computer system including a memory shared among a plurality of processing entities, a method of performing compound atomic operations by said processing entities comprising:
providing a pool of locks for synchronizing access to the memory;
assigning the locks among the plurality of entities to minimize lock contention;
and performing the compound atomic operations using the assigned locks.
2. The method of claim 1 wherein each lock is assigned in accordance with an address of the shared memory from the entity's compound atomic operations.
3. The method of claim 2 wherein the step of assigning comprises selecting a lock from the pool to minimize concurrent atomic updates to the same or overlapping portions of the shared memory.
4. The method of claim 3 wherein the step of selecting comprises aliasing the addresses in accordance with a known upper bound on the amount of the shared memory that may be affected by any atomic operation.
5. The method of claim 1 comprising configuring the pool of locks in accordance with a count of the plurality of processing entities expected to contend for the locks.
6. The method of claim 1 wherein the step of performing comprises, for each respective processing entity, synchronizing the entity in accordance with the availability of the lock assigned to the entity.
7. The method of claim 6 comprising releasing the lock assigned to the entity.
8. The method of claim 1 including providing a software tool to generate instructions for said computer system to define said pool and a mechanism to use said pool.
9. The method of claim 8 wherein said software tool further generates instructions to define said processing entities, said entities including instructions to use said mechanism to perform said steps of assigning and performing.
10. The method of claim 8 wherein said software tool operates in accordance with a standard for shared-memory parallel processing.
11. In a computer system including a memory shared among a plurality of processing entities, a lock sharing system for performing compound atomic operations by said processing entities comprising:
a pool of locks for synchronizing access to the memory; and a mechanism for sharing the locks among the plurality of entities to minimize lock contention, said processing entities adapted to perform the compound atomic operations using the assigned locks.
12. The lock sharing system of claim 11 wherein said mechanism is adapted to assign each lock in accordance with an address of the shared memory from the entity's compound atomic operations.
13. The lock sharing system of claim 12 wherein the mechanism selects a lock from the pool to minimize concurrent atomic updates to the same or overlapping portions of the shared memory.
14. The lock sharing system of claim 13 wherein the mechanism is adapted to alias the addresses in accordance with a known upper bound on the amount of the shared memory that may be affected by any atomic operation.
15. The lock sharing system of claim 11 wherein the pool is configured in accordance with a count of the plurality of processing entities expected to contend for the locks.
16. The lock sharing system of claim 11 wherein the mechanism comprises a component to synchronize the processing entity in accordance with the availability of the lock assigned to the processing entity.
17. The lock sharing system of claim 16 wherein the mechanism comprises a component to release the lock assigned to the processing entity.
18. The lock sharing system of claim 11 including a software tool component configured to generate instructions for said computer system for defining said pool and mechanism.
19. The lock sharing system of claim 18 wherein said software tool component further generates instructions to define said processing entities, said entities including instructions to use said pool and mechanism.
20. The lock sharing system of claim 18 wherein said software tool component operates in accordance with a standard for shared-memory parallel processing.
21. A computer program product having a computer readable medium tangibly embodying computer executable code for generating instructions for a computer system including a memory to be shared among a plurality of processing entities, said computer program product comprising:
code to define a pool of locks for synchronizing access to the memory;
code to define a mechanism for sharing the locks among the plurality of entities to minimize lock contention; and code to define the plurality of processing entities, said processing entities adapted to perform compound atomic operations using the mechanism.
22. The computer program product of claim 21 wherein the mechanism is adapted to assign a lock to a processing entity in accordance with an address of the shared memory from the entity's compound atomic operations.
23. The computer program product of claim 22 wherein the mechanism is adapted to select a lock from the pool to minimize concurrent atomic updates to the same or overlapping portions of the shared memory.
24. The computer program product of claim 21 wherein the code to define the pool is configured in accordance with a count of the plurality of processing entities expected to contend for the locks.
25. The computer program product of claim 21 configured in accordance with a standard for shared-memory parallel processing.
26. A computer program product having a computer readable medium tangibly embodying computer executable code for performing compound atomic operations in a computer system including a memory to be shared among a plurality of processing entities, said computer program product comprising:
code defining a pool of locks for synchronizing access to the memory;
code defining a mechanism for sharing the locks among the plurality of entities to minimize lock contention; and code defining the plurality of processing entities, said processing entities adapted to perform compound atomic operations using the mechanism.
27. The computer program product of claim 26 wherein the mechanism is adapted to assign a lock to a processing entity in accordance with an address of the shared memory from the entity's compound atomic operations.
28. The computer program product of claim 27 wherein the mechanism is adapted to select a lock from the pool to minimize concurrent atomic updates to the same or overlapping portions of the shared memory.
29. The computer program product of claim 26 wherein the pool is configured in accordance with a count of the plurality of processing entities expected to contend for the locks.
30. The computer program product of claim 26 wherein said code is defined by a software tool configured in accordance with a standard for shared-memory parallel processing.
CA002435148A 2003-07-15 2003-07-15 System and method for lock caching for compound atomic operations on shared memory Abandoned CA2435148A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CA002435148A CA2435148A1 (en) 2003-07-15 2003-07-15 System and method for lock caching for compound atomic operations on shared memory
US10/864,777 US7228391B2 (en) 2003-07-15 2004-06-08 Lock caching for compound atomic operations on shared memory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA002435148A CA2435148A1 (en) 2003-07-15 2003-07-15 System and method for lock caching for compound atomic operations on shared memory

Publications (1)

Publication Number Publication Date
CA2435148A1 true CA2435148A1 (en) 2005-01-15

Family

ID=33557687

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002435148A Abandoned CA2435148A1 (en) 2003-07-15 2003-07-15 System and method for lock caching for compound atomic operations on shared memory

Country Status (2)

Country Link
US (1) US7228391B2 (en)
CA (1) CA2435148A1 (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7356653B2 (en) * 2005-06-03 2008-04-08 International Business Machines Corporation Reader-initiated shared memory synchronization
US8402224B2 (en) * 2005-09-20 2013-03-19 Vmware, Inc. Thread-shared software code caches
US9118698B1 (en) 2005-12-02 2015-08-25 Branislav Radovanovic Scalable data storage architecture and methods of eliminating I/O traffic bottlenecks
US8347010B1 (en) * 2005-12-02 2013-01-01 Branislav Radovanovic Scalable data storage architecture and methods of eliminating I/O traffic bottlenecks
US7996632B1 (en) * 2006-12-22 2011-08-09 Oracle America, Inc. Device for misaligned atomics for a highly-threaded x86 processor
US8060880B2 (en) * 2007-05-04 2011-11-15 Microsoft Corporation System using backward inter-procedural analysis for determining alternative coarser grained lock when finer grained locks exceeding threshold
US9032100B2 (en) * 2008-04-03 2015-05-12 International Business Machines Corporation I/O hub-supported atomic I/O operations
US8032706B2 (en) * 2008-08-05 2011-10-04 Intel Corporation Method and apparatus for detecting a data access violation
US8145817B2 (en) * 2009-04-28 2012-03-27 Microsoft Corporation Reader/writer lock with reduced cache contention
US9141439B2 (en) * 2010-10-11 2015-09-22 Sap Se System and method for reporting a synchronization event in a runtime system of a computer system
US9280444B2 (en) 2010-10-11 2016-03-08 Sap Se System and method for identifying contention of shared resources in a runtime system
US9081628B2 (en) 2011-05-27 2015-07-14 Intel Corporation Detecting potential access errors in a multi-threaded application
KR101667097B1 (en) 2011-06-28 2016-10-17 휴렛 팩커드 엔터프라이즈 디벨롭먼트 엘피 Shiftable memory
US8954409B1 (en) * 2011-09-22 2015-02-10 Juniper Networks, Inc. Acquisition of multiple synchronization objects within a computing device
KR20140065477A (en) 2011-10-27 2014-05-29 휴렛-팩커드 디벨롭먼트 컴퍼니, 엘.피. Shiftable memory supporting atomic operation
KR101660611B1 (en) 2012-01-30 2016-09-27 휴렛 팩커드 엔터프라이즈 디벨롭먼트 엘피 Word shift static random access memory(ws-sram)
WO2013130109A1 (en) 2012-03-02 2013-09-06 Hewlett-Packard Development Company L.P. Shiftable memory defragmentation
US9569281B1 (en) * 2015-08-13 2017-02-14 International Business Machines Corporation Dynamic synchronization object pool management
US10331353B2 (en) 2016-04-08 2019-06-25 Branislav Radovanovic Scalable data access system and methods of eliminating controller bottlenecks
US20170328876A1 (en) * 2016-05-12 2017-11-16 Radiant Innovation Inc. Gas concentration detection device and detection method thereof
US20200401412A1 (en) * 2019-06-24 2020-12-24 Intel Corporation Hardware support for dual-memory atomic operations
CN110471779B (en) * 2019-07-22 2023-11-14 创新先进技术有限公司 Method and device for realizing lock resource processing

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA1300758C (en) 1988-03-07 1992-05-12 Colin H. Cramm Mechanism for lock-up free cache operation with a remote address translation unit
US5404482A (en) 1990-06-29 1995-04-04 Digital Equipment Corporation Processor and method for preventing access to a locked memory block by recording a lock in a content addressable memory with outstanding cache fills
US5408629A (en) 1992-08-13 1995-04-18 Unisys Corporation Apparatus and method for controlling exclusive access to portions of addressable memory in a multiprocessor system
US5787465A (en) 1994-07-01 1998-07-28 Digital Equipment Corporation Destination indexed miss status holding registers
US5694567A (en) 1995-02-09 1997-12-02 Integrated Device Technology, Inc. Direct-mapped cache with cache locking allowing expanded contiguous memory storage by swapping one or more tag bits with one or more index bits
JP2916420B2 (en) * 1996-09-04 1999-07-05 株式会社東芝 Checkpoint processing acceleration device and data processing method
EP0856798B1 (en) 1997-01-30 2004-09-29 STMicroelectronics Limited A cache system
US6092159A (en) 1998-05-05 2000-07-18 Lsi Logic Corporation Implementation of configurable on-chip fast memory using the data cache RAM
US6772299B2 (en) 2001-07-16 2004-08-03 Sun Microsystems, Inc. Method and apparatus for caching with variable size locking regions
US6829762B2 (en) * 2002-10-10 2004-12-07 International Business Machines Corporation Method, apparatus and system for allocating and accessing memory-mapped facilities within a data processing system
US6829698B2 (en) * 2002-10-10 2004-12-07 International Business Machines Corporation Method, apparatus and system for acquiring a global promotion facility utilizing a data-less transaction

Also Published As

Publication number Publication date
US20050010729A1 (en) 2005-01-13
US7228391B2 (en) 2007-06-05

Similar Documents

Publication Publication Date Title
US7228391B2 (en) Lock caching for compound atomic operations on shared memory
Kim et al. Multicore desktop programming with intel threading building blocks
US9323586B2 (en) Obstruction-free data structures and mechanisms with separable and/or substitutable contention management mechanisms
McKenney Exploiting deferred destruction: an analysis of read-copy-update techniques in operating system kernels
US8438341B2 (en) Common memory programming
Yoo et al. Performance evaluation of Intel® transactional synchronization extensions for high-performance computing
Agesen et al. An efficient meta-lock for implementing ubiquitous synchronization
Saha et al. McRT-STM: a high performance software transactional memory system for a multi-core runtime
KR100437704B1 (en) Systems and methods for space-efficient object tracking
Orr et al. Synchronization using remote-scope promotion
US20170031815A1 (en) Wand: Concurrent Boxing System For All Pointers With Or Without Garbage Collection
Balaji Programming models for parallel computing
Ziarek et al. A uniform transactional execution environment for Java
Mannarswamy et al. Compiler aided selective lock assignment for improving the performance of software transactional memory
Vießmann et al. Effective Host-GPU Memory Management Through Code Generation
Fanfarillo et al. Caf events implementation using mpi-3 capabilities
US20050149945A1 (en) Method and system of re-reserving object locks without causing reservation thrash
Yamada et al. SAW: Java synchronization selection from lock or software transactional memory
Chabbi et al. Efficient abortable-locking protocol for multi-level NUMA systems: Design and correctness
Chen et al. Adaptive lock-free data structures in haskell: a general method for concurrent implementation swapping
Dokulil Consistency model for runtime objects in the Open Community Runtime
Tian et al. GPU First--Execution of Legacy CPU Codes on GPUs
Nord et al. TORTIS: Retry-free software transactional memory for real-time systems
D. Jardim et al. An extension for Transactional Memory in OpenMP
Baker et al. OpenSHMEM Specification 1.4

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued