|Publication number||US6957435 B2|
|Application number||US 09/838,057|
|Publication date||Oct 18, 2005|
|Filing date||Apr 19, 2001|
|Priority date||Apr 19, 2001|
|Also published as||EP1390839A1, EP1390839A4, US20020156824, WO2002086698A1|
|Inventors||William Joseph Armstrong, Mark Gregory Manges, Naresh Nayar, Jeffrey Jay Scheel, Craig Alden Wilcox|
|Original Assignee||International Business Machines Corporation|
|Patent Citations (23), Non-Patent Citations (19), Referenced by (141), Classifications (7), Legal Events (3)|
|External Links: USPTO, USPTO Assignment, Espacenet|
The present invention relates generally to digital data processing, and more particularly to the logical partitioning of components of a digital computer system.
A modern computer system typically comprises a central processing unit (CPU) and supporting hardware necessary to store, retrieve and transfer information, such as communications busses and memory. It also includes hardware necessary to communicate with the outside world, such as input/output controllers or storage controllers, and devices attached thereto such as keyboards, monitors, tape drives, disk drives, communication lines coupled to a network, etc. The CPU is the heart of the system. It executes the instructions which comprise a computer program and directs the operation of the other system components.
From the standpoint of the computer's hardware, most systems operate in fundamentally the same manner. Processors are capable of performing a limited set of very simple operations, such as arithmetic, logical comparisons, and movement of data from one location to another. But each operation is performed very quickly. Programs which direct a computer to perform massive numbers of these simple operations give the illusion that the computer is doing something sophisticated. What is perceived by the user as a new or improved capability of a computer system is made possible by performing essentially the same set of very simple operations, but doing it much faster. Therefore continuing improvements to computer systems require that these systems be made ever faster.
The overall speed of a computer system (also called the “throughput”) may be crudely measured as the number of operations performed per unit of time. Conceptually, the simplest of all possible improvements to system speed is to increase the clock speeds of the various components, and particularly the clock speed of the processor. E.g., if everything runs twice as fast but otherwise works in exactly the same manner, the system will perform a given task in half the time. Early computer processors, which were constructed from many discrete components, were susceptible to significant speed improvements by shrinking component size, reducing component number, and eventually, packaging the entire processor as an integrated circuit on a single chip. The reduced size made it possible to increase the clock speed of the processor, and accordingly increase system speed.
Despite the enormous improvement in speed obtained from integrated circuitry, the demand for ever faster computer systems has continued. Hardware designers have been able to obtain still further improvements in speed by greater integration (i.e., increasing the number of circuits packed onto a single chip), by further reducing the size of the circuits, and by various other techniques. However, designers can see that physical size reductions can not continue indefinitely, and there are limits to their ability to continue to increase clock speeds of processors. Attention has therefore been directed to other approaches for further improvements in overall speed of the computer system.
Without changing the clock speed, it is possible to improve system throughput by using multiple copies of certain components, and in particular, by using multiple processors. The modest cost of individual processors and other components packaged on integrated circuit chips has made this practical. As a result, many current large-scale system designs include multiple processors, caches, buses, I/O drivers, storage devices and so forth.
The proliferation of system components introduces various architectural issues involved in managing these resources. For example, multiple processors typically share the same main memory (although each processor may have its own cache). If two processors have the capability to concurrently read and update the same data, there must be mechanisms to assure that each processor has authority to access the data, and that the resulting data is not gibberish. Another architectural issue is the allocation of processing resources to different tasks in an efficient and “fair” manner, i.e., one which allows all tasks to obtain reasonable access to system resources. There are further architectural issues, which need not be enumerated in great detail here.
One recent development in response to this increased system complexity is to support logical partitioning of the various resources of a large computer system. Conceptually, logical partitioning means that multiple discrete partitions are established, and the system resources of certain types are assigned to respective partitions. Each task executes within a logical partition, meaning that it can use only the resources assigned to that partition, and not resources assigned to other partitions.
Logical partitions are generally allocated by a system administrator or user with similar authority. I.e., the allocation is performed by issuing commands to appropriate management software resident on the system, rather than by physical reconfiguration of hardware components. It is expected, and indeed one of the benefits of logical partitioning is, that the authorized user can re-allocate system resources in response to changing needs or improved understanding of system performance.
One of the resources commonly partitioned is the set of processors. In supporting the allocation of resources, and particularly processor resources, it is desirable to provide an easy-to-use interface which gives the authorized user predictable results. Current partitioning support may cause unwanted side-effects when shared processor allocations are changed: changing the processor allocation for one partition can affect the performance of other partitions which are themselves unchanged. A need therefore exists for resource allocation methods and apparatus which enable an administrator to reallocate processor resources more conveniently, and which isolate the effects of reallocation to the specific targeted logical partitions.
A processor allocation mechanism for a logically partitionable computer system supports the allocation of processor resources to different partitions. An authorized user (administrator) specifies processing capability allocable to each partition as a scalar quantity representing a number of processors, where the processing capability may be specified as a non-integer value. This processing capability value is unaffected by changes to the processing capability values of other partitions. Preferably, the administrator may designate multiple sets of processors, and assign each physical processor of the system to a respective processor set. Each logical partition is constrained to execute in an assigned processor set.
In the preferred embodiment, certain processor sets are referred to as “pools”, while others are dedicated to respective single partitions. A processor pool may be assigned to a single partition, or may be shared by more than one partition.
In the preferred embodiment, the administrator may designate a logical partition in a processor pool as either capped or uncapped. A capped partition is constrained to utilize no more than the specified processing capability allocable to the partition, even if processors are idle due to lack of available work from other partitions. An uncapped partition may utilize spare processing capability beyond its allocation, provided that it may not execute its tasks on physical processors outside its assigned processor pool.
In the preferred embodiment, the administrator may further specify a number of virtual processors for each partition in a processor pool. Such a specification will divide the processing capability available to the partition into the specified number of virtual processors.
The resource allocation mechanism described herein thus gives an administrator an effective interface for regulating processor resources among multiple tasks running in multiple logical partitions.
The details of the present invention, both as to its structure and operation, can best be understood in reference to the accompanying drawings, in which like reference numerals refer to like parts, and in which:
Logical Partitioning Overview
Logical partitioning is a technique for dividing a single large computer system into multiple partitions, each of which behaves in some respects as a separate computer system. Certain resources of the system may be allocated into discrete sets, such that there is no sharing of a single resource among different partitions, while other resources may be shared on a time interleaved or other basis. Examples of resources which may be partitioned are central processors, main memory, I/O processors and adapters, and I/O devices. Each user task executing in a logically partitioned computer system is assigned to one of the logical partitions (“executes in the partition”), meaning that it can use only the system resources assigned to that partition, and not resources assigned to other partitions.
Logical partitioning is indeed logical rather than physical. A general purpose computer typically has physical data connections, such as buses, running between a resource in one partition and one in a different partition, and from a physical configuration standpoint, there is typically no distinction made with regard to logical partitions. Generally, logical partitioning is enforced by low-level encoded data, referred to as “licensed internal code”, although there may be a certain amount of hardware support for logical partitioning, such as hardware registers which hold state information. E.g., from a hardware standpoint, there is nothing which prevents a task executing in partition A from writing to an I/O device in partition B. Low-level licensed internal code functions and/or hardware prevent access to the resources in other partitions.
Code enforcement of logical partitioning constraints means that it is possible to alter the logical configuration of a logically partitioned computer system, i.e., to change the number of logical partitions or re-assign resources to different partitions, without reconfiguring hardware. Generally, a logical partition management tool is provided for this purpose. This management tool is intended for use by a single authorized user, or a small group of such users, herein designated the system administrator. In the preferred embodiment described herein, this management tool is referred to as the “hypervisor”. A portion of this management tool used for creating or altering a configuration executes in one of the logical partitions, herein designated the “primary partition”.
Logical partitioning of a large computer system has several potential advantages. As noted above, it is flexible in that reconfiguration and re-allocation of resources is easily accomplished without changing hardware. It isolates tasks or groups of tasks, helping to prevent any one task or group of tasks from monopolizing system resources. It facilitates the regulation of resources provided to particular users; this is important where the computer system is owned by a service provider which provides computer service to different users on a fee-per-resource-used basis. Finally, it makes it possible for a single computer system to concurrently support multiple operating systems, since each logical partition can run a different operating system.
Additional background information regarding logical partitioning can be found in the following commonly owned patents and patent applications, which are herein incorporated by reference: Ser. No. 09/672,043, filed Sep. 29, 2000, entitled Technique for Configuring Processors in System With Logical Partitions; U.S. Pat. No. 6,436,671 to Doing et al., entitled Generating Partition Corresponding Real Address in Partitioned Mode Supporting System; U.S. Pat. No. 6,467,007 to Armstrong et al., entitled Processor Reset Generated Via Memory Access Interrupt; U.S. Pat. No. 6,681,240 to Armstrong et al., entitled Apparatus and Method for Specifying Maximum Interactive Performance in a Logical Partition of a Computer; Ser. No. 09/314,324, filed May 19, 1999, entitled Management of a Concurrent Use License in a Logically Partitioned Computer; U.S. Pat. No. 6,691,146 to Armstrong et al., entitled Logical Partition Manager and Method; U.S. Pat. No. 6,279,046 to Armstrong et al., entitled Event-Driven Communications Interface for Logically Partitioned Computer; U.S. Pat. No. 5,659,786 to George et al.; and U.S. Pat. No. 4,843,541 to Bean et al. The latter two patents describe implementations using the IBM S/360, S/370, S/390 and related architectures, while the remaining patents and applications describe implementations using the IBM AS/400 and related architectures.
The major hardware components of a multiprocessor computer system 100 for utilizing a logical partitioning management tool according to the preferred embodiment of the present invention are shown in FIG. 1. Multiple central processing units (CPUs) 101A-101H concurrently perform basic machine processing functions on instructions and data from main memory 102. Each processor contains or controls a respective cache. These cache structures are shown conceptually in
A pair of memory buses 103A, 103B connect the various CPUs, main memory, and I/O bus interface unit 105. I/O bus interface unit 105 communicates with multiple I/O processing units (IOPs) 111-117 through respective system I/O buses 110A, 110B. In the preferred embodiment, each system I/O bus is an industry standard PCI bus. The IOPs support communication with a variety of storage and I/O devices, such as direct access storage devices (DASD), tape drives, workstations, printers, and remote communications lines for communication with remote devices or other computer systems. While eight CPUs, two memory buses, two I/O buses, and various numbers of IOPs and other devices are shown in
While various system components have been described and shown at a high level, it should be understood that a typical computer system contains many other components not shown, which are not essential to an understanding of the present invention. In the preferred embodiment, computer system 100 is a multiprocessor computer system based on the IBM AS/400 or I/Series architecture, it being understood that the present invention could be implemented on other multiprocessor computer systems.
As shown in FIG. 2 and explained earlier, logical partitioning is a code-enforced concept. At the hardware level 201, logical partitioning does not exist. As used herein, hardware level 201 represents the collection of physical devices (as opposed to data stored in devices), such as processors, memory, buses, I/O devices, etc., shown in
Immediately above the hardware is a common low-level hypervisor base 202, also called partitioning licensed internal code (PLIC), which enforces logical partitioning. As represented in
Above hypervisor 202 is another level of machine management code herein identified as the “OS kernel” 204A-204D. At the level of the OS kernel, each partition behaves differently, and therefore
Above the OS kernel are a set of high-level operating system functions 205A-205D, and user application code and data 206A-206D. A user may create code in levels 206A-206D which invokes one of high level operating system functions 205A-205D to access the OS kernel, or may directly access the OS kernel. This is represented in
One and only one of the logical partitions is designated the primary partition, which is the partition used by the system administrator to manage logical partitioning. The primary partition contains a special portion of hypervisor code 203 which shares the level of OS kernel 204A. Hypervisor portion 203 contains code necessary to create or alter logical partition definitions. Collectively, hypervisor portion 203 and hypervisor base 202 constitute the hypervisor. Additionally, a user-to-hypervisor interface 208 is provided at the OS kernel level in the primary partition. Interface 208 provides functions for interacting with a user (system administrator) to obtain user-specified partitioning parameters. The functions available in interface 208 may be used directly in a direct-attach terminal, or may be accessed through a set of APIs from other interface code (not shown) in any device (such as an intelligent workstation) connected to computer system 100. The hypervisor is super-privileged code which is capable of accessing resources, and specifically processor resources, in any partition. The hypervisor causes state values to be written to various hardware registers and other structures, which define the boundaries and behavior of the logical partitions.
In accordance with the preferred embodiment, the administrator defines multiple logical partitions and the resources available to each. With respect to processing resource, the administrator specifies four things: the number of virtual processors available to each partition, the processing capacity available to the partition, whether the assigned processing capacity is capped, and the assignment of physical processors to partitions. Any or all of these parameters may be changed by the administrator, effecting an altered configuration. These parameters are explained with reference to the examples below.
In the example of
A physical processor allocation constrains a task executing in an associated partition to run on only the processors allocated to the processor set to which the partition is assigned. In this embodiment, a set of one or more processors may be assigned to a partition in dedicated mode, or may be assigned to a processor pool, to which one or more partitions are in turn assigned. Dedicated mode means simply that the full capacity of the set of physical processors is dedicated to a single partition. In a pooled mode, the processors are assigned to a pool, which is typically (although not necessarily) shared among more than one partition. Dedicated mode is functionally equivalent to a pool to which only one logical partition is assigned, and in which the full capacity and number of virtual processors of the pool are given to the one partition.
Thus, in the example of
The processing capacity allocation specifies the amount of equivalent processing power allocated to a partition in processor units. I.e., one processor unit is the equivalent of a single physical processor executing 100% of the time. The sum of the processing capacity allocations of all partitions assigned to a particular processor pool can not exceed the number of physical processors in the pool, although it may be less than the number of physical processors in the pool (in which case, there is unallocated processor capacity). Thus, if the administrator changes the processing capacity allocation of a single partition assigned to a pool, this change has no effect on the processing capacity allocations to the remaining partitions assigned to the same pool. The unallocated processor capacity is merely increased or decreased accordingly.
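The accounting just described can be illustrated with a minimal Python sketch. The `Pool` class and its method names are hypothetical, invented for illustration; the patent does not prescribe an implementation.

```python
# Illustrative sketch of the pool capacity invariant: the sum of the
# processing capacity allocations of all partitions assigned to a pool
# may not exceed the number of physical processors in the pool, and
# changing one partition's allocation leaves the others untouched.
class Pool:
    def __init__(self, num_processors):
        self.num_processors = num_processors
        self.allocations = {}              # partition name -> processor units

    def set_capacity(self, partition, units):
        """Set one partition's capacity without disturbing the others."""
        others = sum(u for p, u in self.allocations.items() if p != partition)
        if others + units > self.num_processors:
            raise ValueError("pool capacity exceeded")
        self.allocations[partition] = units

    def unallocated(self):
        return self.num_processors - sum(self.allocations.values())

pool = Pool(4)                             # four physical processors
pool.set_capacity("A", 1.5)
pool.set_capacity("B", 1.0)
assert pool.unallocated() == 1.5           # unallocated capacity remains
pool.set_capacity("A", 2.0)                # raising A's allocation...
assert pool.allocations["B"] == 1.0        # ...has no effect on B
```

Note that increasing one partition's allocation merely shrinks the unallocated capacity; the remaining partitions are unaffected, as the text above requires.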
In the example of
The virtual processor assignment specifies the number of virtual processors seen by each respective partition which is assigned to a pool of processors. To the partition, the underlying hardware and dispatching code behave like the specified number of virtual processors, each of which runs at some fraction of the power of a single physical processor, the fraction being the number of processing units allocated to the partition divided by the number of virtual processors. Thus, in the example of
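As a concrete arithmetic check (the figures are hypothetical), a partition allocated 1.5 processing units and divided into three virtual processors sees each virtual processor run at half the power of a physical processor:

```python
# Each virtual processor runs at (processing units) / (virtual processors)
# of the power of one physical processor; hypothetical figures.
def virtual_processor_fraction(capacity_units, num_virtual_processors):
    return capacity_units / num_virtual_processors

assert virtual_processor_fraction(1.5, 3) == 0.5
assert virtual_processor_fraction(2.0, 4) == 0.5
```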
A logical partition assigned to a pool may be designated either capped or uncapped. A capped partition can not use more processing capacity than its allocation, even if processors are idle due to lack of available work from other partitions in the same pool. Capping assures that a particular logical partition will not exceed its allocated usage, which is desirable in some circumstances. An uncapped partition may utilize spare processing capability beyond its allocation, provided that it may not execute its tasks on physical processors outside its assigned processor pool. Capping does not apply to partitions having dedicated processors.
Referring to the example of
Selective use of capping allows the administrator to configure different pools for different environments. For example, a partition having large fluctuations in expected workload might be configured to run in the same pool as a lower priority but more constant work stream partition, allowing the latter to use the excess capacity of the former. On the other hand, partitions which should be limited to a particular processor capacity (e.g., because the end user is paying for a certain capacity) may run together in a capped environment.
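The capped/uncapped distinction reduces to a single dispatch-time question: may this partition be given further processor time? The following sketch is illustrative only; the function and parameter names are not taken from the patent.

```python
# Illustrative decision: a partition within its allocation may always run;
# beyond its allocation, only an uncapped partition may run, and only on
# otherwise-idle capacity of its own pool.
def may_dispatch(capped, used_units, allocated_units, pool_has_spare):
    if used_units < allocated_units:
        return True                    # within its own allocation
    if capped:
        return False                   # capped: hard ceiling at allocation
    return pool_has_spare              # uncapped: may consume pool's idle time

assert may_dispatch(True, 0.5, 1.0, False) is True     # under allocation
assert may_dispatch(True, 1.0, 1.0, True) is False     # capped at ceiling
assert may_dispatch(False, 1.0, 1.0, True) is True     # uncapped, spare exists
assert may_dispatch(False, 1.0, 1.0, False) is False   # no spare in pool
```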
The configuration of
In the preferred embodiment, the hypervisor enters state values in registers and memory, which define the partitions and cause partitioning constraints to be enforced. The hypervisor obtains this information from the administrator.
As shown in
For each defined partition, the following steps 404-407 are performed. The administrator assigns the partition to one and only one of the previously defined processor sets (step 404), meaning that tasks within the partition will execute only in the physical processors assigned to that processor set. If the assigned set is a pool, then steps 405-407 are performed; if the assigned set is a set of dedicated processors, these steps are unnecessary.
The administrator specifies the processing capacity value allocable to the partition as a decimal number (step 405). The total processing capacity value of all partitions assigned to a particular processor pool can not exceed the number of physical processors in the pool. Input from the administrator can be solicited in any of various ways, but it is appropriate to display, either graphically or numerically, the remaining unused processor capacity of the pool to which the partition has been assigned. The system will not accept a processor value which is too high.
The administrator designates the partition as either capped or uncapped (step 406), this designation having the meaning previously explained. Finally, the administrator specifies the number of virtual processors for the partition (step 407), this specification also being explained above. The number of virtual processors must be an integer, and must be equal to or greater than the processor capacity value.
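The checks behind steps 405-407 can be sketched as a single validation routine (illustrative names; the patent does not prescribe code):

```python
# Illustrative validation of the parameters gathered in steps 405-407:
# the capacity value must fit within the pool's unused capacity (step 405),
# and the virtual processor count must be an integer equal to or greater
# than the capacity value (step 407).
def validate_partition_params(capacity, num_virtual, pool_unused):
    if capacity > pool_unused:
        raise ValueError("capacity exceeds unused capacity of the pool")
    if not isinstance(num_virtual, int) or num_virtual < capacity:
        raise ValueError("virtual processors must be an integer >= capacity")
    return True

assert validate_partition_params(1.5, 2, 3.0)          # accepted
for bad in [(1.5, 1, 3.0),                             # too few virtual processors
            (3.5, 4, 3.0)]:                            # exceeds unused pool capacity
    try:
        validate_partition_params(*bad)
        raise AssertionError("should have been rejected")
    except ValueError:
        pass
```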
The administrator may specify additional parameters of each logical partition (step 408). For example, the administrator may specify the amount of memory allocation for each partition, I/O devices to be used, and so forth.
When the partitions have been defined and characterized as explained above, the hypervisor stores this information in various registers, tables and other constructs, which effectively configures the system as a logically partitioned system. Hypervisor 202 and hardware 201 thereafter enforce logical partitioning in accordance with these state values, as explained in greater detail below.
The user interface presented to an administrator to obtain partitioning data as described above may take any appropriate form, e.g., the interface may be textual or a graphical user interface (GUI). It will be appreciated that certain steps depicted in
With state data entered in appropriate registers and tables to configure logical partitioning, hypervisor 202 and hardware 201 enforce logical partitioning and, in particular, enforce constraints on the use of processor resources.
In operation, pooled processor constraints are enforced by taking time slices of system operation, and setting timers for each of various processes in a time slice. When the timers time out, some action is taken, such as limiting further execution of a task. The various actions may be triggered, e.g., by interrupts or similar hardware signals generated by a time-out, a task termination, a task becoming idle, etc.
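One way to picture this mechanism is as per-slice budget accounting. The sketch below is an assumed illustration consistent with the description above, not the patent's actual dispatch code; the slice length and names are hypothetical.

```python
# Assumed sketch: each time slice, a partition receives an execution budget
# proportional to its capacity; when the budget's timer expires, a capped
# partition is preempted, while an uncapped partition may continue on an
# otherwise-idle processor of its pool.
SLICE_MS = 10                          # hypothetical slice length

def slice_budget_ms(capacity_units):
    return capacity_units * SLICE_MS

def on_budget_expired(capped, pool_has_idle_processor):
    if capped:
        return "preempt"
    return "continue" if pool_has_idle_processor else "preempt"

assert slice_budget_ms(1.5) == 15      # 1.5 processor units -> 15 ms per slice
assert on_budget_expired(True, True) == "preempt"
assert on_budget_expired(False, True) == "continue"
assert on_budget_expired(False, False) == "preempt"
```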
It will be recognized that in steps 524, 526, 543 and 545, there could be multiple virtual processors meeting the applicable criteria, and the virtual processor dispatcher in the hypervisor may have various other priorities for selecting one from among multiple potentially eligible virtual processors, such as length of time in queue, user assigned priority of the underlying task, and so forth.
In general, the routines executed to implement the illustrated embodiments of the invention, whether implemented as part of an operating system or a specific application, program, object, module or sequence of instructions, may be referred to herein as “computer programs” or simply “programs”. The computer programs typically comprise instructions which, when read and executed by one or more processors in the devices or systems of a computer system consistent with the invention, cause those devices or systems to execute steps or generate elements embodying the various aspects of the present invention. Moreover, while the invention has been and hereinafter will be described in the context of fully functioning computer systems, the various embodiments of the invention are capable of being distributed as a program product in a variety of forms, and the invention applies equally regardless of the particular type of signal-bearing media used to actually carry out the distribution. Examples of signal-bearing media include, but are not limited to, recordable-type media such as volatile and non-volatile memory devices, floppy disks, hard-disk drives, CD-ROMs, DVDs, and magnetic tape, and transmission-type media such as digital and analog communications links, including wireless communications links. Examples of signal-bearing media are illustrated in
In the preferred embodiment described above, the computer system utilizes an IBM AS/400 or I/Series architecture. It will be understood that certain implementation details above described are specific to this architecture, and that logical partitioning management mechanisms in accordance with the present invention may be implemented on different architectures, and certain implementation details may vary.
While the invention has been described in connection with what is currently considered the most practical and preferred embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4843541||Jul 29, 1987||Jun 27, 1989||International Business Machines Corporation||Logical resource partitioning of a data processing system|
|US5095427||Jun 14, 1989||Mar 10, 1992||Hitachi, Ltd.||Dispatch control of virtual machine|
|US5222215||Aug 29, 1991||Jun 22, 1993||International Business Machines Corporation||Cpu expansive gradation of i/o interruption subclass recognition|
|US5325525||Apr 4, 1991||Jun 28, 1994||Hewlett-Packard Company||Method of automatically controlling the allocation of resources of a parallel processor computer system by calculating a minimum execution time of a task and scheduling subtasks against resources to execute the task in the minimum time|
|US5325526 *||May 12, 1992||Jun 28, 1994||Intel Corporation||Task scheduling in a multicomputer system|
|US5357632 *||Aug 8, 1991||Oct 18, 1994||Hughes Aircraft Company||Dynamic task allocation in a multi-processor system employing distributed control processors and distributed arithmetic processors|
|US5504670||Mar 31, 1993||Apr 2, 1996||Intel Corporation||Method and apparatus for allocating resources in a multiprocessor system|
|US5535321||Feb 14, 1991||Jul 9, 1996||International Business Machines Corporation||Method and apparatus for variable complexity user interface in a data processing system|
|US5574914 *||Sep 8, 1994||Nov 12, 1996||Unisys Corporation||Method and apparatus for performing system resource partitioning|
|US5659786||Feb 13, 1995||Aug 19, 1997||International Business Machines Corporation||System and method for dynamically performing resource reconfiguration in a logically partitioned data processing system|
|US5692193||Mar 31, 1994||Nov 25, 1997||Nec Research Institute, Inc.||Software architecture for control of highly parallel computer systems|
|US5872963||Feb 18, 1997||Feb 16, 1999||Silicon Graphics, Inc.||Resumption of preempted non-privileged threads with no kernel intervention|
|US6199093 *||Jul 18, 1996||Mar 6, 2001||Nec Corporation||Processor allocating method/apparatus in multiprocessor system, and medium for storing processor allocating program|
|US6247109 *||Jun 10, 1998||Jun 12, 2001||Compaq Computer Corp.||Dynamically assigning CPUs to different partitions each having an operation system instance in a shared memory space|
|US6269391||Jun 19, 1997||Jul 31, 2001||Novell, Inc.||Multi-processor scheduling kernel|
|US6418460||Feb 18, 1997||Jul 9, 2002||Silicon Graphics, Inc.||System and method for finding preempted threads in a multi-threaded application|
|US6542926 *||Jun 10, 1998||Apr 1, 2003||Compaq Information Technologies Group, L.P.||Software partitioned multi-processor system with flexible resource sharing levels|
|US6587938||Sep 28, 1999||Jul 1, 2003||International Business Machines Corporation||Method, system and program products for managing central processing unit resources of a computing environment|
|US6598069 *||Sep 28, 1999||Jul 22, 2003||International Business Machines Corporation||Method and apparatus for assigning resources to logical partition clusters|
|US6625638||Apr 30, 1998||Sep 23, 2003||International Business Machines Corporation||Management of a logical partition that supports different types of processors|
|US6647508 *||Jun 10, 1998||Nov 11, 2003||Hewlett-Packard Development Company, L.P.||Multiprocessor computer architecture with multiple operating system instances and software controlled resource allocation|
|US20010014905||Dec 15, 2000||Aug 16, 2001||Tamiya Onodera||Method and apparatus for managing a lock for an object|
|US20030014466||Jun 29, 2001||Jan 16, 2003||Joubert Berger||System and method for management of compartments in a trusted operating system|
|1||*||Ayachi et al. "A hierarchical processor scheduling policy for multiprocessor systems" 1996 IEEE, pp. 100-109.|
|2||*||Bakshi "Partitioning and pipelining for performance-constrained hardware/software systems" 1999 IEEE, pp. 419-432.|
|3||David L. Black, "Scheduling Support for Concurrency and Parallelism in the Mach Operating System," Computer, IEEE Computer Society, vol. 23, No. 5, May 1, 1990, pp. 35-43.|
|4||IBM AS/400e Logical Partitions: Creating. (C) 1999, 2000. http://publib.boulder.ibm.com/pubs/html/as400/v4r5/ic2924/info/rzaj7.pdf.|
|5||IBM AS/400e Logical Partitions: Learning About. (C) 1999, 2000. http://publib.boulder.ibm.com/pubs/html/as400/v4r5/ic2924/info/rzajx.pdf.|
|6||IBM AS/400e Logical Partitions: Managing. (C) 1999, 2000. http://publib.boulder.ibm.com/pubs/html/as400/v4r5/ic2924/info/rzaj6.pdf.|
|7||IBM AS/400e Logical Partitions: Planning for. (C) 1999, 2000. http://publib.boulder.ibm.com/pubs/html/as400/v4r5/ic2924/info/rzait.pdf.|
|8||IBM AS/400e Logical Partitions: Troubleshooting. (C) 1999, 2000. http://publib.boulder.ibm.com/pubs/html/as400/v4r5/ic2924/info/rzaj8.pdf.|
|9||IBM Corporation, "AS/400 Logical Partitions Hardware Planning Guide", (C) 1999.|
|10||IBM Corporation, "LPAR Configuration and Management", First Edition, (C) Apr. 2002.|
|11||IBM Corporation, S/390 Processor Resource/Systems Manager Planning Guide (IBM Pub. No. GA22-7236-04, 5th Edition, Mar. 1999).|
|12||Leutenegger et al. "A Modeling Study of the TPC-C Benchmark", Proceedings of the 1993 ACM SIGMOD Int'l Conference on Management of Data, 1993, pp. 22-31.|
|13||Levine, C. "Order-of-Magnitude Advantage on TPC-C Through Massive Parallelism", Proceedings of the 1995 ACM SIGMOD Int'l Conference on Management of Data, 1995, pp. 464-465.|
|14||Marisa Gil et al., "The Enhancement of a User-level Thread Package Scheduling on Multiprocessors," Sep. 1994, Euromicro Workshop on Parallel and Distributed Processing, pp. 228-236.|
|15||Menasce, D. et al. "Capacity Planning and Performance Modeling", ISBN 0-13-035494-5, (C) 1994.|
|16||Schimunek, G. et al. "Slicing the AS/400 With Logical Partitioning: A How to Guide", Aug. 1999.|
|17||Shigekazu Inohara et al., "A Thread Facility Based on User/Kernel Cooperation in the XERO Operating System," Computer Software and Applications Conference, 1991, Sep. 11, 1991, pp. 398-405.|
|18||T. L. Borden et al., "Multiple Operating Systems on One Processor Complex," IBM Systems Journal, vol. 28, No. 1, 1989, pp. 104-122.|
|19||U.S. Appl. No. 09/672,043, entitled "Technique for Configuring Processors in System With Logical Partitions", filed Sep. 29, 2000.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7219346 *||Dec 5, 2000||May 15, 2007||Microsoft Corporation||System and method for implementing a client side HTTP stack|
|US7275180 *||Apr 17, 2003||Sep 25, 2007||International Business Machines Corporation||Transparent replacement of a failing processor|
|US7389512||Jan 27, 2004||Jun 17, 2008||Sun Microsystems, Inc.||Interprocess communication within operating system partitions|
|US7437556||Jan 21, 2004||Oct 14, 2008||Sun Microsystems, Inc.||Global visibility controls for operating system partitions|
|US7461080||Dec 22, 2003||Dec 2, 2008||Sun Microsystems, Inc.||System logging within operating system partitions using log device nodes that are access points to a log driver|
|US7490074||Jan 28, 2004||Feb 10, 2009||Sun Microsystems, Inc.||Mechanism for selectively providing mount information to processes running within operating system partitions|
|US7526421 *||Feb 27, 2004||Apr 28, 2009||International Business Machines Corporation||System and method for modeling LPAR behaviors in a simulation tool|
|US7526774 *||Jan 20, 2004||Apr 28, 2009||Sun Microsystems, Inc.||Two-level service model in operating system partitions|
|US7530067 *||May 12, 2003||May 5, 2009||International Business Machines Corporation||Filtering processor requests based on identifiers|
|US7546406 *||Jul 20, 2007||Jun 9, 2009||International Business Machines Corporation||Virtualization of a global interrupt queue|
|US7567985||Jan 28, 2004||Jul 28, 2009||Sun Microsystems, Inc.||Mechanism for implementing a sparse file system for an operating system partition|
|US7617375 *||Mar 28, 2007||Nov 10, 2009||International Business Machines Corporation||Workload management in virtualized data processing environment|
|US7698530 *||Mar 28, 2007||Apr 13, 2010||International Business Machines Corporation||Workload management in virtualized data processing environment|
|US7698531 *||Mar 28, 2007||Apr 13, 2010||International Business Machines Corporation||Workload management in virtualized data processing environment|
|US7748004||Jan 19, 2007||Jun 29, 2010||Microsoft Corporation||System and method for implementing a client side HTTP stack|
|US7793289||Jan 20, 2004||Sep 7, 2010||Oracle America, Inc.||System accounting for operating system partitions|
|US7793293 *||Nov 1, 2004||Sep 7, 2010||Hewlett-Packard Development Company, L.P.||Per processor set scheduling|
|US7797512 *||Oct 31, 2007||Sep 14, 2010||Oracle America, Inc.||Virtual core management|
|US7802073||Jul 23, 2007||Sep 21, 2010||Oracle America, Inc.||Virtual core management|
|US7805726||Feb 3, 2004||Sep 28, 2010||Oracle America, Inc.||Multi-level resource limits for operating system partitions|
|US7827021 *||Apr 8, 2009||Nov 2, 2010||International Business Machines Corporation||System for modeling LPAR behaviors in a simulation tool|
|US7831569||Oct 10, 2007||Nov 9, 2010||International Business Machines Corporation||Preserving a query plan cache|
|US7843961 *||Jul 25, 2005||Nov 30, 2010||International Business Machines Corporation||Hardware device emulation|
|US7853949 *||Mar 13, 2006||Dec 14, 2010||International Business Machines Corporation||Method and apparatus for assigning fractional processing nodes to work in a stream-oriented computer system|
|US7865899 *||Jul 13, 2006||Jan 4, 2011||Hitachi, Ltd.||Virtual computer systems and computer virtualization programs|
|US7882227||Mar 14, 2006||Feb 1, 2011||Oracle America, Inc.||Mechanism for implementing file access control across a network using labeled containers|
|US7882274||Sep 19, 2008||Feb 1, 2011||Virtual Desktop Technologies, Inc.||Computer system with multiple terminals|
|US7885975||Feb 23, 2006||Feb 8, 2011||Oracle America, Inc.||Mechanism for implementing file access control using labeled containers|
|US7945913 *||Jan 19, 2006||May 17, 2011||International Business Machines Corporation||Method, system and computer program product for optimizing allocation of resources on partitions of a data processing system|
|US7987464 *||Jul 25, 2006||Jul 26, 2011||International Business Machines Corporation||Logical partitioning and virtualization in a heterogeneous architecture|
|US7991763 *||Apr 13, 2007||Aug 2, 2011||International Business Machines Corporation||Database query optimization utilizing remote statistics collection|
|US8024728 *||Dec 28, 2006||Sep 20, 2011||International Business Machines Corporation||Virtual machine dispatching to maintain memory affinity|
|US8024738||Aug 25, 2006||Sep 20, 2011||International Business Machines Corporation||Method and system for distributing unused processor cycles within a dispatch window|
|US8082547 *||Oct 31, 2006||Dec 20, 2011||Hewlett-Packard Development Company, L.P.||Reallocating hardware resources among workloads in accordance with license rights|
|US8108196||Jul 17, 2008||Jan 31, 2012||International Business Machines Corporation||System for yielding to a processor|
|US8126873||Apr 13, 2007||Feb 28, 2012||International Business Machines Corporation||Portable and iterative re-usable suboptimization of database queries|
|US8156502||Jun 8, 2007||Apr 10, 2012||Hewlett-Packard Development Company, L.P.||Computer resource allocation as a function of demand type|
|US8180997 *||Jul 2, 2008||May 15, 2012||Board Of Regents, University Of Texas System||Dynamically composing processor cores to form logical processors|
|US8181182||Nov 16, 2004||May 15, 2012||Oracle America, Inc.||Resource allocation brokering in nested containers|
|US8185907||Aug 20, 2007||May 22, 2012||International Business Machines Corporation||Method and system for assigning logical partitions to multiple shared processor pools|
|US8225315 *||Oct 31, 2007||Jul 17, 2012||Oracle America, Inc.||Virtual core management|
|US8230425||Jul 30, 2007||Jul 24, 2012||International Business Machines Corporation||Assigning tasks to processors in heterogeneous multiprocessors|
|US8234642 *||May 1, 2009||Jul 31, 2012||International Business Machines Corporation||Filtering processor requests based on identifiers|
|US8261281 *||Mar 28, 2008||Sep 4, 2012||International Business Machines Corporation||Optimizing allocation of resources on partitions of a data processing system|
|US8271990 *||Feb 27, 2009||Sep 18, 2012||International Business Machines Corporation||Removing operating system jitter-induced slowdown in virtualized environments|
|US8281308||Oct 2, 2012||Oracle America, Inc.||Virtual core remapping based on temperature|
|US8302102||Feb 27, 2008||Oct 30, 2012||International Business Machines Corporation||System utilization through dedicated uncapped partitions|
|US8312456 *||May 30, 2008||Nov 13, 2012||International Business Machines Corporation||System and method for optimizing interrupt processing in virtualized environments|
|US8352950||Jan 11, 2008||Jan 8, 2013||International Business Machines Corporation||Algorithm to share physical processors to maximize processor cache usage and topologies|
|US8365182 *||Sep 26, 2007||Jan 29, 2013||International Business Machines Corporation||Method and system for provisioning of resources|
|US8386391 *||May 1, 2007||Feb 26, 2013||Hewlett-Packard Development Company, L.P.||Resource-type weighting of use rights|
|US8387041||Jan 9, 2008||Feb 26, 2013||International Business Machines Corporation||Localized multi-element processor resource sharing among logical partitions|
|US8397239||Dec 7, 2010||Mar 12, 2013||Hitachi, Ltd.||Virtual computer systems and computer virtualization programs|
|US8495627||Jun 27, 2007||Jul 23, 2013||International Business Machines Corporation||Resource allocation based on anticipated resource underutilization in a logically partitioned multi-processor environment|
|US8516160||Apr 27, 2004||Aug 20, 2013||Oracle America, Inc.||Multi-level administration of shared network resources|
|US8533722 *||Jun 3, 2008||Sep 10, 2013||International Business Machines Corporation||Method and apparatus for assigning fractional processing nodes to work in a stream-oriented computer system|
|US8543843||Oct 31, 2007||Sep 24, 2013||Sun Microsystems, Inc.||Virtual core management|
|US8544013 *||Mar 8, 2006||Sep 24, 2013||Qnx Software Systems Limited||Process scheduler having multiple adaptive partitions associated with process threads accessing mutexes and the like|
|US8762680 *||Jan 26, 2012||Jun 24, 2014||International Business Machines Corporation||Scaling energy use in a virtualized environment|
|US8856332||Oct 9, 2007||Oct 7, 2014||International Business Machines Corporation||Integrated capacity and architecture design tool|
|US8892878||Jan 30, 2004||Nov 18, 2014||Oracle America, Inc.||Fine-grained privileges in operating system partitions|
|US8935699||Oct 28, 2011||Jan 13, 2015||Amazon Technologies, Inc.||CPU sharing techniques|
|US8938473||Feb 23, 2006||Jan 20, 2015||Oracle America, Inc.||Secure windowing for labeled containers|
|US8938554||Mar 2, 2006||Jan 20, 2015||Oracle America, Inc.||Mechanism for enabling a network address to be shared by multiple labeled containers|
|US9032180 *||Mar 15, 2013||May 12, 2015||International Business Machines Corporation||Managing CPU resources for high availability micro-partitions|
|US9043575 *||Mar 15, 2013||May 26, 2015||International Business Machines Corporation||Managing CPU resources for high availability micro-partitions|
|US9058218||Mar 12, 2013||Jun 16, 2015||International Business Machines Corporation||Resource allocation based on anticipated resource underutilization in a logically partitioned multi-processor environment|
|US9081627||Jul 31, 2007||Jul 14, 2015||Hewlett-Packard Development Company, L.P.||Workload management with resource transfer sequence planned as a function of ranking of resource allocations|
|US9104485 *||Oct 28, 2011||Aug 11, 2015||Amazon Technologies, Inc.||CPU sharing techniques|
|US9158470 *||Mar 15, 2013||Oct 13, 2015||International Business Machines Corporation||Managing CPU resources for high availability micro-partitions|
|US9189381 *||Mar 15, 2013||Nov 17, 2015||International Business Machines Corporation||Managing CPU resources for high availability micro-partitions|
|US9195508||May 8, 2007||Nov 24, 2015||Hewlett-Packard Development Company, L.P.||Allocation of resources among computer partitions using plural utilization prediction engines|
|US9244825||Mar 15, 2013||Jan 26, 2016||International Business Machines Corporation||Managing CPU resources for high availability micro-partitions|
|US9244826||Mar 15, 2013||Jan 26, 2016||International Business Machines Corporation||Managing CPU resources for high availability micro-partitions|
|US20030061262 *||Sep 25, 2001||Mar 27, 2003||Hahn Stephen C.||Method and apparatus for partitioning resources within a computer system|
|US20040221193 *||Apr 17, 2003||Nov 4, 2004||International Business Machines Corporation||Transparent replacement of a failing processor|
|US20040226015 *||Feb 3, 2004||Nov 11, 2004||Leonard Ozgur C.||Multi-level computing resource scheduling control for operating system partitions|
|US20040226017 *||Jan 29, 2004||Nov 11, 2004||Leonard Ozgur C.||Mechanism for associating resource pools with operating system partitions|
|US20040226019 *||Jan 30, 2004||Nov 11, 2004||Tucker Andrew G.||Fine-grained privileges in operating system partitions|
|US20040226023 *||Jan 27, 2004||Nov 11, 2004||Tucker Andrew G.||Interprocess communication within operating system partitions|
|US20040230976 *||May 12, 2003||Nov 18, 2004||International Business Machines Corporation||Filtering processor requests based on identifiers|
|US20050021788 *||Jan 21, 2004||Jan 27, 2005||Tucker Andrew G.||Global visibility controls for operating system partitions|
|US20050108710 *||Dec 5, 2000||May 19, 2005||Kestutis Patiejunas||System and method for implementing a client side HTTP stack|
|US20050137897 *||Dec 23, 2003||Jun 23, 2005||Hoffman Philip M.||Method and system for performance redistribution in partitioned computer systems|
|US20050192781 *||Feb 27, 2004||Sep 1, 2005||Martin Deltch||System and method for modeling LPAR behaviors in a simulation tool|
|US20050198461 *||Jan 12, 2004||Sep 8, 2005||Shaw Mark E.||Security measures in a partitionable computing system|
|US20060095908 *||Nov 1, 2004||May 4, 2006||Norton Scott J||Per processor set scheduling|
|US20060168214 *||Oct 29, 2004||Jul 27, 2006||International Business Machines Corporation||System for managing logical partition preemption|
|US20060212840 *||Mar 16, 2005||Sep 21, 2006||Danny Kumamoto||Method and system for efficient use of secondary threads in a multiple execution path processor|
|US20060288348 *||Jul 13, 2006||Dec 21, 2006||Shinichi Kawamoto||Virtual computer systems and computer virtualization programs|
|US20070019671 *||Jul 25, 2005||Jan 25, 2007||International Business Machines Corporation||Hardware device emulation|
|US20070061809 *||Mar 8, 2006||Mar 15, 2007||Dan Dodge||Process scheduler having multiple adaptive partitions associated with process threads accessing mutexes and the like|
|US20070118596 *||Jan 19, 2007||May 24, 2007||Microsoft Corporation||System and method for implementing a client side http stack|
|US20070169127 *||Jan 19, 2006||Jul 19, 2007||Sujatha Kashyap||Method, system and computer program product for optimizing allocation of resources on partitions of a data processing system|
|US20070214458 *||Mar 13, 2006||Sep 13, 2007||Nikhil Bansal||Method and apparatus for assigning fractional processing nodes to work in a stream-oriented computer system|
|US20070226735 *||Mar 22, 2006||Sep 27, 2007||Anthony Nguyen||Virtual vector processing|
|US20070271560 *||May 18, 2006||Nov 22, 2007||Microsoft Corporation||Deploying virtual machine to host based on workload characterizations|
|US20080015712 *||Jul 20, 2007||Jan 17, 2008||Armstrong William J||Virtualization of a global interrupt queue|
|US20080028408 *||Jul 25, 2006||Jan 31, 2008||Day Michael N||Logical partitioning and virtualization in a heterogeneous architecture|
|US20080052713 *||Aug 25, 2006||Feb 28, 2008||Diane Garza Flemming||Method and system for distributing unused processor cycles within a dispatch window|
|US20080077919 *||Sep 19, 2007||Mar 27, 2008||Haruo Shida||Logically partitioned multifunctional apparatus|
|US20080082983 *||Sep 26, 2007||Apr 3, 2008||Michael Groetzner||Method and System for Provisioning of Resources|
|US20080163203 *||Dec 28, 2006||Jul 3, 2008||Anand Vaijayanthimala K||Virtual machine dispatching to maintain memory affinity|
|US20080184253 *||Mar 28, 2008||Jul 31, 2008||Sujatha Kashyap||Method, system and computer program product for optimizing allocation of resources on partitions of a data processing system|
|US20080244213 *||Mar 28, 2007||Oct 2, 2008||Flemming Diane G||Workload management in virtualized data processing environment|
|US20080244214 *||Mar 28, 2007||Oct 2, 2008||Flemming Diane G||Workload management in virtualized data processing environment|
|US20080244215 *||Mar 28, 2007||Oct 2, 2008||Flemming Diane G||Workload management in virtualized data processing environment|
|US20080244568 *||Mar 28, 2007||Oct 2, 2008||Flemming Diane G||Method to capture hardware statistics for partitions to enable dispatching and scheduling efficiency|
|US20080256024 *||Apr 13, 2007||Oct 16, 2008||Robert Victor Downer||Portable and Iterative Re-Usable Suboptimization of Database Queries|
|US20080256025 *||Apr 13, 2007||Oct 16, 2008||Robert Joseph Bestgen||Database Query Optimization Utilizing Remote Statistics Collection|
|US20080271036 *||Jun 3, 2008||Oct 30, 2008||Nikhil Bansal|
|US20080276060 *||May 1, 2007||Nov 6, 2008||Erik Bostrom||Pre-Configured Partitions With Use-Rights Limitations|
|US20080276246 *||Jul 17, 2008||Nov 6, 2008||International Business Machines Corporation||System for yielding to a processor|
|US20090007125 *||Jun 27, 2007||Jan 1, 2009||Eric Lawrence Barsness||Resource Allocation Based on Anticipated Resource Underutilization in a Logically Partitioned Multi-Processor Environment|
|US20090013160 *||Jul 2, 2008||Jan 8, 2009||Board Of Regents, The University Of Texas System||Dynamically composing processor cores to form logical processors|
|US20090037911 *||Jul 30, 2007||Feb 5, 2009||International Business Machines Corporation||Assigning tasks to processors in heterogeneous multiprocessors|
|US20090055830 *||Aug 20, 2007||Feb 26, 2009||Carl Phillip Gusler||Method and system for assigning logical partitions to multiple shared processor pools|
|US20090083450 *||Sep 19, 2008||Mar 26, 2009||C & S Operations, Inc.||Computer system with multiple terminals|
|US20090083829 *||Sep 19, 2008||Mar 26, 2009||C & S Operations, Inc.||Computer system|
|US20090094355 *||Oct 9, 2007||Apr 9, 2009||International Business Machines Corporation||Integrated capacity and architecture design tool|
|US20090100114 *||Oct 10, 2007||Apr 16, 2009||Robert Joseph Bestgen||Preserving a Query Plan Cache|
|US20090150650 *||Dec 7, 2007||Jun 11, 2009||Microsoft Corporation||Kernel Processor Grouping|
|US20090178049 *||Jan 9, 2008||Jul 9, 2009||Steven Joseph Branda||Multi-Element Processor Resource Sharing Among Logical Partitions|
|US20090183166 *||Jan 11, 2008||Jul 16, 2009||International Business Machines Corporation||Algorithm to share physical processors to maximize processor cache usage and topologies|
|US20090192779 *||Jul 30, 2009||Martin Deitch||System for modeling LPAR behaviors in a simulation tool|
|US20090217280 *||Feb 21, 2008||Aug 27, 2009||Honeywell International Inc.||Shared-Resource Time Partitioning in a Multi-Core System|
|US20090217283 *||Feb 27, 2008||Aug 27, 2009||International Business Machines Corporation||System utilization through dedicated uncapped partitions|
|US20090240908 *||May 1, 2009||Sep 24, 2009||International Business Machines Corporation||Filtering processor requests based on identifiers|
|US20090300317 *||May 30, 2008||Dec 3, 2009||International Business Machines Corporation||System and method for optimizing interrupt processing in virtualized environments|
|US20100223616 *||Feb 27, 2009||Sep 2, 2010||International Business Machines Corporation||Removing operating system jitter-induced slowdown in virtualized environments|
|US20110083134 *||Oct 1, 2010||Apr 7, 2011||Samsung Electronics Co., Ltd.||Apparatus and method for managing virtual processing unit|
|US20110173493 *||Jul 14, 2011||International Business Machines Corporation||Cluster availability management|
|US20110219373 *||Sep 8, 2011||Electronics And Telecommunications Research Institute||Virtual machine management apparatus and virtualization method for virtualization-supporting terminal platform|
|US20130139272 *||Jan 24, 2013||May 30, 2013||Hewlett-Packard Development Company, L.P.||Resource-Type Weighting of Use Rights|
|US20130198484 *||Jan 26, 2012||Aug 1, 2013||International Business Machines Corporation||Scaling energy use in a virtualized environment|
|US20140259013 *||May 19, 2014||Sep 11, 2014||International Business Machines Corporation||Virtualization across physical partitions of a multi-core processor (mcp)|
|US20140281287 *||Mar 15, 2013||Sep 18, 2014||International Business Machines Corporation||Managing cpu resources for high availability micro-partitions|
|US20140281288 *||Mar 15, 2013||Sep 18, 2014||International Business Machines Corporation||Managing cpu resources for high availability micro-partitions|
|US20140281346 *||Mar 15, 2013||Sep 18, 2014||International Business Machines Corporation||Managing cpu resources for high availability micro-partitions|
|US20140281348 *||Mar 15, 2013||Sep 18, 2014||International Business Machines Corporation||Managing cpu resources for high availability micro-partitions|
|WO2012034793A1||Aug 10, 2011||Mar 22, 2012||Ibm United Kingdom Limited||Real address accessing in a coprocessor executing on behalf of an unprivileged process|
|U.S. Classification||718/104, 712/13, 711/173|
|Cooperative Classification||G06F2209/5012, G06F9/5077|
|Apr 19, 2001||AS||Assignment|
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ARMSTRONG, WILLIAM J.;MANGES, MARK G.;NAYER, NARESH;AND OTHERS;REEL/FRAME:011737/0048;SIGNING DATES FROM 20010410 TO 20010417
|Apr 9, 2009||FPAY||Fee payment|
Year of fee payment: 4
|Mar 29, 2013||FPAY||Fee payment|
Year of fee payment: 8