US7490074B1 - Mechanism for selectively providing mount information to processes running within operating system partitions


Info

Publication number: US7490074B1
Authority: US (United States)
Prior art keywords: operating system; global operating; global; partition; particular non
Legal status: Active, expires (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: US10/767,235
Inventors: Ozgur C. Leonard; Andrew G. Tucker
Current Assignee: Sun Microsystems Inc (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Sun Microsystems Inc
Events: priority to US10/767,235; application filed by Sun Microsystems Inc; assigned to SUN MICROSYSTEMS, INC. (assignors: Tucker, Andrew G.; Leonard, Ozgur C.); application granted; publication of US7490074B1; currently active; adjusted expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3466Performance evaluation by tracing or monitoring
    • G06F11/3476Data logging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99931Database or file accessing
    • Y10S707/99941Database schema or data structure
    • Y10S707/99943Generating database or data structure, e.g. via user interface

Definitions

  • an operating system executing on the computer maintains an overall file system.
  • This overall file system provides the infrastructure needed to enable items, such as directories and files, to be created, stored, and accessed in an organized manner.
  • the overall file system may also comprise one or more mounts. These mounts enable devices, other file systems, and other entities to be “mounted” onto particular mount points of the overall file system to enable them to be accessed in the same manner as directories and files. For example, a floppy disk drive, a hard drive, a CDROM drive, etc., may be mounted onto a particular mount point of the overall file system. Similarly, a file system such as a process file system (ProcFS) or a network file system (NFS) may be mounted onto a particular mount point of the overall file system.
  • the mounted entities may be accessed by processes running on the computer.
  • a process may submit a request to the operating system for a list of all of the mounts in the overall file system.
  • the process may select one of the mounts, and submit a request to the operating system to access the mounted entity on that mount. Barring some problem or error, the operating system will grant the request; thus, the process is able to access the mounted entity.
  • Such an access mechanism works well when all processes are allowed to have knowledge of and hence, access to, all mounts in the overall file system. In some implementations, however, it may not be desirable to allow all processes to have knowledge of, and access to, all mounts. In such implementations, the above access mechanism cannot be used effectively.
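The conventional access mechanism just described can be sketched as a simplified model (the names and table layout here are hypothetical illustrations, not any actual operating system interface):

```python
# Simplified model of the conventional, non-selective mechanism: any
# process may list every mount in the overall file system and then
# access any mounted entity.
mounts = {
    "/floppy": "floppy disk drive",
    "/proc":   "process file system (ProcFS)",
    "/net":    "network file system (NFS)",
}

def list_all_mounts():
    """Return every mount point; no caller is ever filtered."""
    return sorted(mounts)

def access(mount_point):
    """Barring some problem or error, grant access to the mounted entity."""
    if mount_point not in mounts:
        raise FileNotFoundError(mount_point)
    return mounts[mount_point]
```

Because list_all_mounts() takes no account of who is asking, every process can view (and hence attempt to access) every mount, which is exactly what becomes undesirable once processes must be isolated from one another.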
  • In accordance with one embodiment of the present invention, there is provided a mechanism for selectively providing mount information to processes running within operating system partitions. By providing mount information in a selective manner, it is possible to control which processes are allowed to view which mounts.
  • a non-global partition is created within a global operating system environment provided by an operating system.
  • This non-global partition (which may be one of a plurality of non-global partitions within the global operating system environment) serves to isolate processes running within the partition from other non-global partitions within the global operating system environment.
  • a file system is maintained for this non-global partition.
  • This file system may comprise zero or more mounts, and may be part of a larger, overall file system maintained for the global operating system environment.
  • the overall file system may comprise additional mounts, which are not part of the file system maintained for the non-global partition.
  • FIG. 1 is a functional block diagram of an operating system environment in accordance with one embodiment of the present invention.
  • FIG. 2 is an operational flow diagram showing a high level overview of one embodiment of the present invention.
  • FIG. 3 shows, in tree format, a portion of an overall file system, which includes a file system for a particular operating system zone.
  • FIG. 4 shows the overall file system of FIG. 3 , which further includes a file system for another operating system zone.
  • FIG. 5 shows a sample mount tracking data structure, which may be used to keep track of all the mounts in a file system, in accordance with one embodiment of the present invention.
  • FIG. 6 is a block diagram of a general purpose computer system in which one embodiment of the present invention may be implemented.
  • one embodiment of the present invention provides a mechanism for selectively providing mount information to processes running within operating system partitions.
  • An operational flow diagram showing a high-level overview of this embodiment is provided in FIG. 2 .
  • a non-global partition is created (block 202 ) within a global operating system environment provided by an operating system.
  • This non-global partition (which may be one of a plurality of non-global partitions within the global operating system environment) serves to isolate processes running within the partition from other non-global partitions within the global operating system environment.
  • processes running within this non-global partition may not access or affect processes running within any other non-global partition.
  • processes running in other non-global partitions may not access or affect processes running within this non-global partition.
  • a file system is maintained (block 204 ) for this non-global partition.
  • This file system may comprise zero or more mounts, and may be part of a larger, overall file system maintained for the global operating system environment.
  • the overall file system may comprise additional mounts, which are not part of the file system maintained for the non-global partition.
  • the file system maintained for this non-global partition may be accessed by processes running within this non-global partition, but not by processes running within any other non-global partition.
  • a process running within the non-global partition may request information pertaining to mounts.
  • a determination is made (block 208 ) as to which partition the process is running in. Because the process is running within the non-global partition, only selected information is provided to the process. More specifically, in one embodiment, only information pertaining to the mounts that are within the file system maintained for the non-global partition is provided (block 210 ) to the process. Information pertaining to mounts outside the file system maintained for the non-global partition is not provided. As a result, the process is limited to viewing only those mounts that are part of the non-global partition's file system (hence, the process can view only those mounts that it can access). In this manner, the file system boundaries of the non-global partition, as they relate to mounts, are enforced.
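The selective behavior of blocks 208 and 210 can be sketched as follows (a minimal illustration assuming invented names and a flat mount table; the kernel structures actually used are described later):

```python
# Sketch of blocks 208/210: tag each mount with the partition whose file
# system it belongs to, tag each process with its partition, and return
# only the mounts the requesting process is entitled to view.
GLOBAL = "global"

mounts = [
    {"path": "/Zones/ZoneA/Root/A",      "zone": "zoneA"},   # floppy drive
    {"path": "/Zones/ZoneA/Root/ProcFS", "zone": "zoneA"},
    {"path": "/Zones/ZoneA/Root/NFS",    "zone": "zoneA"},
    {"path": "/mount5",                  "zone": GLOBAL},    # outside zone A
]

process_zone = {"init_a": "zoneA", "global_admin": GLOBAL}

def list_mounts(pid):
    """Block 208: determine the caller's partition; block 210: filter."""
    zone = process_zone[pid]
    if zone == GLOBAL:                      # the global zone sees every mount
        return [m["path"] for m in mounts]
    return [m["path"] for m in mounts if m["zone"] == zone]
```

A process in zone A thus views only the mounts it can actually access, and the zone's file system boundaries are enforced.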
  • FIG. 1 illustrates a functional block diagram of an operating system (OS) environment 100 in accordance with one embodiment of the present invention.
  • OS environment 100 may be derived by executing an OS in a general-purpose computer system, such as computer system 600 illustrated in FIG. 6 , for example.
  • In one embodiment, the OS is Solaris, manufactured by Sun Microsystems, Inc. of Santa Clara, Calif.
  • the concepts taught herein may be applied to any OS, including but not limited to Unix, Linux, Windows, MacOS, etc.
  • OS environment 100 may comprise one or more zones (also referred to herein as partitions), including a global zone 130 and zero or more non-global zones 140 .
  • the global zone 130 is the general OS environment that is created when the OS is booted and executed, and serves as the default zone in which processes may be executed if no non-global zones 140 are created.
  • Within the global zone 130 , administrators and/or processes having the proper rights and privileges can perform generally any task and access any device/resource that is available on the computer system on which the OS is run.
  • an administrator can administer the entire computer system.
  • the non-global zones 140 represent separate and distinct partitions of the OS environment 100 .
  • One of the purposes of the non-global zones 140 is to provide isolation.
  • a non-global zone 140 can be used to isolate a number of entities, including but not limited to processes 170 , one or more file systems 180 , and one or more logical network interfaces 182 . Because of this isolation, processes 170 executing in one non-global zone 140 cannot access or affect processes in any other zone. Similarly, processes 170 in a non-global zone 140 cannot access or affect the file system 180 of another zone, nor can they access or affect the network interface 182 of another zone.
  • each non-global zone 140 behaves like a virtual standalone computer. While processes 170 in different non-global zones 140 cannot access or affect each other, it should be noted that they may be able to communicate with each other via a network connection through their respective logical network interfaces 182 . This is similar to how processes on separate standalone computers communicate with each other.
  • non-global zones 140 that are isolated from each other may be desirable in many implementations. For example, if a single computer system running a single instance of an OS is to be used to host applications for different competitors (e.g. competing websites), it would be desirable to isolate the data and processes of one competitor from the data and processes of another competitor. That way, it can be ensured that information will not be leaked between the competitors. Partitioning an OS environment 100 into non-global zones 140 and hosting the applications of the competitors in separate non-global zones 140 is one possible way of achieving this isolation.
  • each non-global zone 140 may be administered separately. More specifically, it is possible to assign a zone administrator to a particular non-global zone 140 and grant that zone administrator rights and privileges to manage various aspects of that non-global zone 140 . With such rights and privileges, the zone administrator can perform any number of administrative tasks that affect the processes and other entities within that non-global zone 140 . However, the zone administrator cannot change or affect anything in any other non-global zone 140 or the global zone 130 . Thus, in the above example, each competitor can administer his/her zone, and hence, his/her own set of applications, but cannot change or affect the applications of a competitor. In one embodiment, to prevent a non-global zone 140 from affecting other zones, the entities in a non-global zone 140 are generally not allowed to access or control any of the physical devices of the computer system.
  • a global zone administrator may administer all aspects of the OS environment 100 and the computer system as a whole.
  • a global zone administrator may, for example, access and control physical devices, allocate and control system resources, establish operational parameters, etc.
  • a global zone administrator may also access and control processes and entities within a non-global zone 140 .
  • enforcement of the zone boundaries is carried out by the kernel 150 . More specifically, it is the kernel 150 that ensures that processes 170 in one non-global zone 140 are not able to access or affect processes 170 , file systems 180 , and network interfaces 182 of another zone (non-global or global). In addition to enforcing the zone boundaries, kernel 150 also provides a number of other services. These services include but are certainly not limited to mapping the network interfaces 182 of the non-global zones 140 to the physical network devices 120 of the computer system, and mapping the file systems 180 of the non-global zones 140 to an overall file system and a physical storage 110 of the computer system. The operation of the kernel 150 will be discussed in greater detail in a later section.
  • each non-global zone 140 has its own associated file system 180 .
  • This file system 180 is used by the processes 170 running within the associated zone 140 , and cannot be accessed by processes 170 running within any other non-global zone 140 (although it can be accessed by a process running within the global zone 130 if that process has the appropriate privileges).
  • To illustrate how a separate file system may be maintained for each non-global zone 140 , reference will be made to FIGS. 3 and 4 .
  • FIG. 3 shows, in tree format, a portion of an overall file system maintained by the kernel 150 for the global zone 130 .
  • This overall file system comprises a “/” directory 302 , which acts as the root for the entire file system.
  • Under this root directory 302 are all of the directories, subdirectories, files, and mounts of the overall file system.
  • Included in this overall file system is a path to a root directory 322 of a file system 180 for a particular non-global zone 140 .
  • In the example shown, the path is /Zones/ZoneA/Root (as seen from the global zone 130 ), and the non-global zone is zone A 140 ( a ) ( FIG. 1 ).
  • This root 322 acts as the root of the file system 180 ( a ) for zone A 140 ( a ), and everything underneath this root 322 is part of that file system 180 ( a ).
  • Because root 322 is the root of the file system 180 ( a ) for zone A 140 ( a ), processes 170 ( a ) within zone A 140 ( a ) cannot traverse up the file system hierarchy beyond root 322 .
  • processes 170 ( a ) cannot see or access any of the directories above root 322 , or any of the subdirectories that can be reached from those directories.
  • To processes 170 ( a ), it is as if the other portions of the overall file system did not exist.
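The confinement described above amounts to a prefix rule on paths: a name is visible inside zone A only if it lies at or below the zone's root. A sketch (the helper names are invented for illustration):

```python
import os.path

ZONE_A_ROOT = "/Zones/ZoneA/Root"   # zone A's root, as seen from the global zone

def visible_in_zone_a(global_path):
    """True only for paths at or below zone A's root; everything above
    the root effectively does not exist for processes in the zone."""
    return os.path.commonpath([ZONE_A_ROOT, global_path]) == ZONE_A_ROOT

def zone_to_global(zone_path):
    """Translate a path as seen inside zone A (where the zone's root
    appears as "/") into the corresponding global-zone path."""
    return ZONE_A_ROOT + zone_path
```

So the directory the zone sees as /ETC is, from the global zone's point of view, /Zones/ZoneA/Root/ETC, while /Zones itself is simply invisible from inside the zone.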
  • FIG. 4 shows the same overall file system, except that another file system for another non-global zone 140 has been added.
  • the other non-global zone is zone B 140 ( b ) ( FIG. 1 )
  • the path to the root 422 of the file system 180 ( b ) for zone B 140 ( b ) is /Zones/ZoneB/Root.
  • Root 422 acts as the root of the file system 180 ( b ) for zone B 140 ( b ), and everything underneath it is part of that file system 180 ( b ).
  • Because root 422 is the root of the file system 180 ( b ) for zone B 140 ( b ), processes 170 ( b ) within zone B 140 ( b ) cannot traverse up the file system hierarchy beyond root 422 .
  • processes 170 ( b ) cannot see or access any of the directories above root 422 , or any of the subdirectories that can be reached from those directories.
  • To processes 170 ( b ) it is as if the other portions of the overall file system did not exist.
  • By organizing the file systems in this manner, it is possible to maintain, within an overall file system maintained for the global zone 130 , a separate file system 180 for each non-global zone 140 . It should be noted that this is just one way of maintaining a separate file system for each non-global zone. Other methods may be used, and all such methods are within the scope of the present invention.
  • the root of a non-global zone's file system may have any number of directories, subdirectories, and files underneath it.
  • these directories may include some directories, such as ETC 332 , which contain files specific to a zone 140 ( a ) (for example, program files that are to be executed within the zone 140 ( a )), and some directories, such as USR 324 , which contain operating system files that are used by the zone 140 ( a ).
  • These and other directories and files may be included under the root 322 , or a subdirectory thereof.
  • the root of a non-global zone's file system may also have one or more mounts underneath it.
  • one or more mount points may exist under the root (or a subdirectory thereof), on which entities may be mounted.
  • a mount point A 330 may exist under root 322 on which a floppy drive may be mounted.
  • a mount point ProcFS 328 may also exist on which a process file system (ProcFS) may be mounted.
  • a mount point NFS 326 may exist on which a network file system (NFS) may be mounted (ProcFS and NFS are well known to those in the art and will not be explained in detail herein).
  • Mounts may exist in various other portions of the overall file system.
  • the file system 180 ( b ) for Zone B 140 ( b ) may have a mount point D 430 on which a CDROM drive may be mounted, a ProcFS mount point 428 on which a ProcFS may be mounted, and an NFS mount point 426 on which an NFS may be mounted.
  • Mounts may also exist in other portions (not shown) of the overall file system, which are not within any file system of any non-global zone. Overall, mounts may exist in any part of the overall file system.
  • a non-global zone 140 may take on one of four states: (1) Configured; (2) Installed; (3) Ready; and (4) Running.
  • When a non-global zone 140 is in the Configured state, it means that an administrator in the global zone 130 has invoked an operating system utility (in one embodiment, zonecfg(1M)) to specify all of the configuration parameters of a non-global zone 140 , and has saved that configuration in persistent physical storage 110 .
  • an administrator may specify a number of different parameters.
  • These parameters may include, but are not limited to, a zone name, a zone path to the root directory of the zone's file system 180 , specification of zero or more mount points and entities to be mounted when the zone is readied, specification of zero or more network interfaces, specification of devices to be configured when the zone is created, etc.
  • Once a non-global zone 140 has been configured, a global administrator may invoke another operating system utility (in one embodiment, zoneadm(1M)) to put the zone into the Installed state.
  • the operating system utility interacts with the kernel 150 to install all of the necessary files and directories into the zone's root directory, or a subdirectory thereof.
  • To put an Installed non-global zone 140 into the Ready state, a global administrator invokes an operating system utility (in one embodiment, zoneadm(1M) again), which causes a zoneadmd process 162 to be started (there is a zoneadmd process associated with each non-global zone).
  • zoneadmd 162 runs within the global zone 130 and is responsible for managing its associated non-global zone 140 . After zoneadmd 162 is started, it interacts with the kernel 150 to establish the non-global zone 140 .
  • In establishing the non-global zone 140 , a number of operations are performed, including but not limited to creating the zone 140 (e.g. starting a zsched process; zsched is a kernel process; however, it runs within the non-global zone 140 , and is used to track kernel resources associated with the non-global zone 140 ), maintaining a file system 180 , plumbing network interfaces 182 , and configuring devices.
  • Putting a non-global zone 140 into the Ready state gives rise to a virtual platform on which one or more processes may be executed.
  • This virtual platform provides the infrastructure necessary for enabling one or more processes to be executed within the non-global zone 140 in isolation from processes in other non-global zones 140 .
  • the virtual platform also makes it possible to isolate other entities such as file system 180 and network interfaces 182 within the non-global zone 140 , so that the zone behaves like a virtual standalone computer. Notice that when a non-global zone 140 is in the Ready state, no user or non-kernel processes are executing inside the zone (recall that zsched is a kernel process, not a user process).
  • the virtual platform provided by the non-global zone 140 is independent of any processes executing within the zone.
  • the zone and hence, the virtual platform exists even if no user or non-kernel processes are executing within the zone.
  • a non-global zone 140 can remain in existence from the time it is created until either the zone or the OS is terminated.
  • the life of a non-global zone 140 need not be limited to the duration of any user or non-kernel process executing within the zone.
  • a non-global zone 140 After a non-global zone 140 is in the Ready state, it can be transitioned into the Running state by executing one or more user processes in the zone. In one embodiment, this is done by having zoneadmd 162 start an init process 172 in its associated zone. Once started, the init process 172 looks in the file system 180 of the non-global zone 140 to determine what applications to run. The init process 172 then executes those applications to give rise to one or more other processes 174 . In this manner, an application environment is initiated on the virtual platform of the non-global zone 140 . In this application environment, all processes 170 are confined to the non-global zone 140 ; thus, they cannot access or affect processes, file systems, or network interfaces in other zones. The application environment exists so long as one or more user processes are executing within the non-global zone 140 .
  • Zoneadmd 162 can be used to initiate and control a number of zone administrative tasks. These tasks may include, for example, halting and rebooting the non-global zone 140 . When a non-global zone 140 is halted, it is brought from the Running state down to the Installed state. In effect, both the application environment and the virtual platform are terminated. When a non-global zone 140 is rebooted, it is brought from the Running state down to the Installed state, and then transitioned from the Installed state through the Ready state to the Running state. In effect, both the application environment and the virtual platform are terminated and restarted. These and many other tasks may be initiated and controlled by zoneadmd 162 to manage a non-global zone 140 on an ongoing basis during regular operation.
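The lifecycle just described (Configured, Installed, Ready, Running, plus halt and reboot) can be summarized as a small state machine. The sketch below is an illustration of the described transitions, not actual zoneadm or zoneadmd code:

```python
# The four zone states and the transitions described above. Halting a
# Running zone returns it to Installed; rebooting is a halt followed by
# readying and running again.
ALLOWED = {
    ("Configured", "Installed"),   # install files into the zone's root
    ("Installed",  "Ready"),       # establish the virtual platform
    ("Ready",      "Running"),     # start init and other user processes
    ("Running",    "Installed"),   # halt: tear down app env and platform
}

def transition(state, new_state):
    """Move a zone between states, rejecting transitions not described."""
    if (state, new_state) not in ALLOWED:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

def reboot(state):
    """Running -> Installed -> Ready -> Running, as described for reboot."""
    for nxt in ("Installed", "Ready", "Running"):
        state = transition(state, nxt)
    return state
```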
  • mount information is provided to processes running within operating system partitions (zones) in a selective manner.
  • certain acts/operations are performed during each of the four states of a non-global zone 140 .
  • the acts/operations performed in each of these states will be discussed separately below.
  • zone A 140 ( a ) will be used as the sample zone.
  • the parameters pertaining to the file system include but are not limited to: (1) a path to the root directory of the file system; (2) a specification of all of the directories and subdirectories that are to be created under the root directory, and all of the files that are to be installed into those directories and subdirectories; and (3) a list of all mount points and the entities that are to be mounted onto those mount points.
  • In the current example involving zone A 140 ( a ), the following sets of information are specified: (1) the path to the root directory 322 of the file system 180 ( a ) for zone A 140 ( a ) is /Zones/ZoneA/Root; (2) the directories to be created are ETC 332 and USR 324 , and certain packages of files are to be installed under these directories; and (3) the mount points or mount directories are A, ProcFS, and NFS, and the entities to be mounted are a floppy drive, a process file system (ProcFS), and a network file system (NFS), respectively.
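The configuration information for zone A can be pictured as a simple record (a hypothetical representation for illustration; the actual storage format used by the configuration utility is not specified here):

```python
# Zone A's configuration, as specified in the example above, sketched as
# a plain record. Field names are invented for illustration.
zone_a_config = {
    "name": "zoneA",
    "root": "/Zones/ZoneA/Root",       # path to the zone's root directory
    "directories": ["ETC", "USR"],     # created at install time
    "mounts": [                        # entities mounted when the zone is readied
        {"dir": "A",      "entity": "floppy drive"},
        {"dir": "ProcFS", "entity": "process file system"},
        {"dir": "NFS",    "entity": "network file system"},
    ],
}

def mount_directories(config):
    """The mount directories that must be created during installation."""
    return [m["dir"] for m in config["mounts"]]
```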
  • To put the zone into the Installed state, an administrator in the global zone 130 invokes an operating system utility (in one embodiment, zoneadm(1M)), which accesses the configuration information associated with the non-global zone, and interacts with the kernel 150 to carry out the installation process.
  • the following installation operations are performed for the current example of non-global zone A 140 ( a ).
  • the root of the file system 180 ( a ) is determined to be root 322 .
  • the directories ETC 332 and USR 324 are then created under this root 322 .
  • all of the specified files are installed into these directories.
  • the mount points A, ProcFS, and NFS are extracted from the configuration information.
  • a mount directory is created for each mount point.
  • directories A 330 , ProcFS 328 , and NFS 326 are created under root 322 . These directories are now ready to have entities mounted onto them.
  • To transition zone A 140 ( a ) into the Ready state, an administrator in the global zone 130 invokes an operating system utility (in one embodiment, zoneadm(1M) again), which causes a zoneadmd process 162 to be started.
  • zoneadmd 162 ( a ) is started.
  • zoneadmd 162 ( a ) interacts with the kernel 150 to establish zone A 140 ( a ).
  • In establishing zone A 140 ( a ), several operations are performed, including but not limited to creating zone A 140 ( a ), and maintaining a file system 180 ( a ) for zone A 140 ( a ). In one embodiment, in creating zone A 140 ( a ), a number of operations are performed, including but not limited to assigning a unique zone ID to zone A 140 ( a ), and creating a data structure associated with zone A 140 ( a ). This data structure will be used to store a variety of information associated with zone A 140 ( a ), including for example, the zone ID.
  • the path (/Zones/ZoneA/Root) to the root directory 322 of the file system 180 ( a ) is initially extracted from the configuration information for zone A 140 ( a ).
  • This directory 322 is established as the root of the file system 180 ( a ) for zone A 140 ( a ). This may involve, for example, storing the path to root 322 in the zone A data structure for future reference.
  • the configuration information for zone A 140 ( a ) indicates that a floppy drive is to be mounted onto directory A 330 , a process file system (ProcFS) is to be mounted onto directory ProcFS 328 , and a network file system (NFS) is to be mounted onto directory NFS 326 .
  • FIG. 5 shows a sample data structure that may be used for this purpose.
  • When an entity is mounted, it is incorporated into the overall file system maintained by the kernel 150 for the global zone 130 . Thus, that mounted entity should be associated with the global zone 130 .
  • If the mounted entity is mounted within the file system of a non-global zone 140 , it should also be associated with that non-global zone 140 .
  • the data structure of FIG. 5 shows how these associations may be maintained, in accordance with one embodiment of the present invention.
  • In one embodiment, when an entity is mounted, an entry 510 corresponding to the mount is added to a mount tracking data (MTD) structure 500 .
  • This entry comprises various sets of information pertaining to the mount, including for example, the path to the mount and certain semantics of the mount (e.g. the type of mount, etc.).
  • the entry may also include one or more zone-specific pointers that reference a next entry. These pointers make it easy to traverse the MTD structure 500 at a later time to determine all of the mounts that are associated with a particular zone.
  • MTD structure 500 comprises a global zone entry 502 .
  • this entry 502 is inserted into the MTD structure 500 when the kernel 150 is initialized.
  • a mount 1 entry 510 ( 1 ) representing the initial mount is inserted.
  • the mount 1 entry 510 ( 1 ) comprises information, such as the information discussed previously, pertaining to the initial mount.
  • a pointer 520 ( 1 ) in the global zone entry 502 is updated to point to entry 510 ( 1 ); thus, mount 1 510 ( 1 ) is associated with the global zone 130 .
  • entry 510 ( 1 ) does not point to any other entry.
  • zone A 140 ( a ) is transitioned from the Installed state to the Ready state.
  • entities are mounted onto the mount points of the file system 180 ( a ) for zone A 140 ( a ).
  • a floppy drive is mounted onto mount point A 330
  • a process file system (ProcFS) is mounted onto mount point ProcFS 328
  • a network file system (NFS) is mounted onto mount point NFS 326 .
  • a zone A entry 504 is inserted into the MTD structure 500 .
  • a mount A entry 510 ( 2 ) is added.
  • the mount A entry 510 ( 2 ) comprises information pertaining to the floppy drive mount.
  • several pointers are updated.
  • the pointer 530 ( 1 ) in the zone A entry 504 is updated to point to entry 510 ( 2 ); thus, the mount A entry 510 ( 2 ) is associated with zone A 140 ( a ).
  • a global zone pointer 520 ( 2 ) inside the mount 1 entry 510 ( 1 ) is updated to point to entry 510 ( 2 ).
  • the mount A entry 510 ( 2 ) is also associated with the global zone 130 .
  • a mount ProcFS entry 510 ( 3 ) is added.
  • This entry 510 ( 3 ) comprises information pertaining to the ProcFS mount.
  • several pointers are updated.
  • a zone A pointer 530 ( 2 ) inside the mount A entry 510 ( 2 ) is updated to point to entry 510 ( 3 ); thus, the mount ProcFS entry 510 ( 3 ) is associated with zone A 140 ( a ).
  • a global zone pointer 520 ( 3 ) inside the mount A entry 510 ( 2 ) is updated to point to entry 510 ( 3 ).
  • the mount ProcFS entry 510 ( 3 ) is also associated with the global zone 130 .
  • a mount NFS entry 510 ( 4 ) is added.
  • This entry 510 ( 4 ) comprises information pertaining to the NFS mount.
  • several pointers are updated.
  • a zone A pointer 530 ( 3 ) inside the mount ProcFS entry 510 ( 3 ) is updated to point to entry 510 ( 4 ); thus, the mount NFS entry 510 ( 4 ) is associated with zone A 140 ( a ).
  • a global zone pointer 520 ( 4 ) inside the mount ProcFS entry 510 ( 3 ) is updated to point to entry 510 ( 4 ); thus, the mount NFS entry 510 ( 4 ) is also associated with the global zone 130 .
  • after zone A 140 ( a ) is readied, another entity is mounted and hence, incorporated into the overall file system maintained for the global zone 130 .
  • another entry 510 ( 5 ) is added to the MTD structure 500 .
  • This entry 510 ( 5 ) comprises information pertaining to the mount (mount 5 in this example).
  • when the mount 5 entry 510 ( 5 ) is added, the global zone pointer 520 ( 5 ) in the mount NFS entry 510 ( 4 ) is updated to point to entry 510 ( 5 ); thus, the mount 5 entry 510 ( 5 ) is associated with the global zone 130 .
  • because mount 5 is not within the file system 180 ( a ) of zone A 140 ( a ), there is no zone A pointer in the NFS mount entry 510 ( 4 ) that points to entry 510 ( 5 ). Thus, mount 5 is not associated with zone A 140 ( a ).
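The chain of pointer updates described in the bullets above can be sketched as a small linked structure. This is a hypothetical illustration only: the class name `MountEntry`, the field names, and the use of Python are assumptions for clarity, not the patent's actual kernel implementation.

```python
class MountEntry:
    """One entry in the mount tracking (MTD) structure of FIG. 5."""
    def __init__(self, name):
        self.name = name
        self.global_next = None  # 520(x): chains every mount in the global zone
        self.zone_next = None    # 530(x): chains only the mounts of zone A

# One entry per mount, in the order the bullets above add them.
mount1 = MountEntry("mount 1")
mount_a = MountEntry("mount A")
mount_procfs = MountEntry("mount ProcFS")
mount_nfs = MountEntry("mount NFS")
mount5 = MountEntry("mount 5")

global_head = mount1   # pointer 520(1) in the global zone entry 502
zone_a_head = mount_a  # pointer 530(1) in the zone A entry 504

# Global-zone chain: every mount is associated with the global zone.
mount1.global_next = mount_a          # pointer 520(2)
mount_a.global_next = mount_procfs    # pointer 520(3)
mount_procfs.global_next = mount_nfs  # pointer 520(4)
mount_nfs.global_next = mount5        # pointer 520(5)

# Zone A chain: only the mounts inside file system 180(a).
mount_a.zone_next = mount_procfs      # pointer 530(2)
mount_procfs.zone_next = mount_nfs    # pointer 530(3)
# No zone A pointer to mount 5: it lies outside zone A's file system.
```

Traversing from `zone_a_head` via `zone_next` visits only mount A, mount ProcFS, and mount NFS, while traversing from `global_head` via `global_next` visits all five mounts.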
  • zoneadmd 162 ( a ) starts the init process 172 ( a ).
  • the init process 172 ( a ) is associated with zone A 140 ( a ). This may be done, for example, by creating a data structure for process 172 ( a ), and storing the zone ID of zone A 140 ( a ) within that data structure.
  • the init process 172 ( a ) looks in the file system 180 ( a ) (for example, in directory ETC 332 ) of zone A 140 ( a ) to determine what applications to run.
  • the init process 172 ( a ) then executes those applications to give rise to one or more other processes 174 .
  • as each process 174 is started, it is associated with zone A 140 ( a ). This may be done in the same manner as that discussed above in connection with the init process 172 ( a ) (e.g. by creating a data structure for the process 174 and storing the zone ID of zone A 140 ( a ) within that data structure).
  • in this manner, processes 170 ( a ) are started and associated with (and hence, are running within) zone A 140 ( a ).
  • one or more processes may submit a request to the kernel 150 for information pertaining to mounts.
  • the kernel 150 determines the zone in which that process is running, which may be the global zone 130 or one of the non-global zones 140 .
  • the kernel 150 then selectively provides the mount information appropriate for processes in that zone.
  • when a process 170 ( a ) submits such a request, the kernel 150 determines the zone in which that process 170 ( a ) is running. This determination may be made, for example, by accessing the data structure associated with the process 170 ( a ), and extracting the zone ID of zone A 140 ( a ) therefrom. The zone ID may then be used to determine that the process 170 ( a ) is running within zone A 140 ( a ). After this determination is made, the kernel 150 traverses the MTD structure 500 shown in FIG. 5 to determine all of the mounts that are within the file system 180 ( a ) for zone A 140 ( a ).
  • once the kernel 150 determines which mounts are within the file system 180 ( a ) for zone A 140 ( a ), it provides information pertaining to just those mounts to the requesting process 170 ( a ). Information pertaining to all of the other mounts (e.g. mount 1 and mount 5 ) in the overall file system is not provided. That way, the process 170 ( a ) is made aware of, and hence, can view, only those mounts that are within the file system 180 ( a ) of the zone 140 ( a ) in which the process 170 ( a ) is running. In this manner, the kernel 150 enforces the file system boundary (from a mount standpoint) of a zone.
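The selective lookup just described can be sketched as follows. The function and variable names are hypothetical, and the per-zone mount lists stand in for a traversal of the MTD structure; this is a sketch of the idea, not the actual kernel code.

```python
# Each process record carries the zone ID stored in its data structure
# when the process was started (as described for init and its children).
processes = {
    1: {"zone_id": "global"},
    1234: {"zone_id": "zone A"},
}

# Mounts reachable from each zone's chain in the MTD structure.
zone_mounts = {
    "global": ["mount 1", "mount A", "mount ProcFS", "mount NFS", "mount 5"],
    "zone A": ["mount A", "mount ProcFS", "mount NFS"],
}

def get_mount_info(pid):
    """Return mount information for the requesting process, limited to
    the mounts within the file system of the zone it runs in."""
    zone_id = processes[pid]["zone_id"]  # determine the requester's zone
    return zone_mounts[zone_id]          # report only that zone's mounts
```

A process running in zone A (pid 1234 in this sketch) is never shown mount 1 or mount 5, which enforces the zone's file system boundary from a mount standpoint.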
  • mount information is presented in a particular way.
  • the kernel 150 shows the root of a file system as a mount. This is done even if the root is not actually a mount. For example, the root 322 of the file system 180 ( a ) for zone A 140 ( a ) is not an actual mount. Nonetheless, in one embodiment, the kernel 150 shows root 322 as a mount when responding to a request for mount information from a process 170 ( a ) running within zone A 140 ( a ).
  • the full path to the mount may be filtered. This helps to hide the fact that the file system associated with a zone may be part of a larger overall file system.
  • the full path to the root 322 of the file system 180 ( a ) for zone A 140 ( a ) is /Zones/ZoneA/Root. This full path is not shown to a process 170 ( a ) running within zone A 140 ( a ), however. Instead, the process 170 ( a ) is shown just “/” (indicating a root) so that the process 170 ( a ) is kept unaware of the existence of the other directories (e.g. Zones, Zone A, etc.).
  • mounts 330 , 328 , and 326 are shown as /A, /ProcFS, and /NFS, respectively, instead of /Zones/ZoneA/Root/A, /Zones/ZoneA/Root/ProcFS, and /Zones/ZoneA/Root/NFS.
  • the kernel 150 perpetuates the illusion to the process 170 ( a ) that the file system 180 ( a ) of zone A 140 ( a ) is the entire file system.
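The path filtering described above might be sketched as the following helper. The function name and its behavior at the edges are assumptions for illustration, not the patent's implementation.

```python
def filter_mount_path(full_path, zone_root):
    """Strip the zone's root prefix from a mount path so that a process
    in the zone never sees the enclosing directories (e.g. /Zones/ZoneA)."""
    if full_path == zone_root:
        return "/"                          # the zone's root is shown as "/"
    if full_path.startswith(zone_root + "/"):
        return full_path[len(zone_root):]   # keep only the zone-relative part
    return None                             # path is outside this zone

zone_a_root = "/Zones/ZoneA/Root"
```

With `zone_a_root` as above, /Zones/ZoneA/Root/NFS is shown as /NFS and the root itself as /, so the process is kept unaware of the larger overall file system.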
  • an MTD structure 500 may be constructed and used to determine the mounts that are within a particular file system associated with a particular zone. While this is an effective method, it should be noted that other methods may also be used within the scope of the present invention.
  • a full path to each mount may be stored, and this path may be compared with the path to the root of a file system to determine whether that mount is within the file system.
  • the path to mount NFS 326 is /Zones/ZoneA/Root/NFS.
  • the path to the root 322 of the file system 180 ( a ) for zone A 140 ( a ) is /Zones/ZoneA/Root.
  • the path to the root 422 of the file system 180 ( b ) for zone B 140 ( b ) is /Zones/ZoneB/Root.
  • mount NFS 326 is not under the root of file system 180 ( b ); hence, mount NFS 326 is not within file system 180 ( b ).
  • by comparing the path to each mount with the path to the root of a particular file system, it can be determined which mounts are under the root, and hence, are part of the file system. This and other methods may be used to determine the mounts that are within a particular file system. All such methods are within the scope of the present invention.
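The path-comparison alternative can be sketched in a few lines. The function name is hypothetical, and the prefix test assumes normalized absolute paths.

```python
def mount_in_file_system(mount_path, fs_root):
    """A mount is part of a zone's file system iff its full path lies at
    or under the root of that file system."""
    return mount_path == fs_root or mount_path.startswith(fs_root + "/")
```

For example, /Zones/ZoneA/Root/NFS is under /Zones/ZoneA/Root but not under /Zones/ZoneB/Root, matching the determination above. Appending "/" to the root before the comparison prevents a sibling directory such as /Zones/ZoneAX from matching by accident.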
  • FIG. 6 is a block diagram that illustrates a computer system 600 upon which an embodiment of the invention may be implemented.
  • Computer system 600 includes a bus 602 for facilitating information exchange, and one or more processors 604 coupled with bus 602 for processing information.
  • Computer system 600 also includes a main memory 606 , such as a random access memory (RAM) or other dynamic storage device, coupled to bus 602 for storing information and instructions to be executed by processor 604 .
  • Main memory 606 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor 604 .
  • Computer system 600 may further include a read only memory (ROM) 608 or other static storage device coupled to bus 602 for storing static information and instructions for processor 604 .
  • a storage device 610 , such as a magnetic disk or optical disk, is provided and coupled to bus 602 for storing information and instructions.
  • Computer system 600 may be coupled via bus 602 to a display 612 , such as a cathode ray tube (CRT), for displaying information to a computer user.
  • An input device 614 is coupled to bus 602 for communicating information and command selections to processor 604 .
  • another type of user input device is cursor control 616 , such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 604 and for controlling cursor movement on display 612 .
  • This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
  • bus 602 may be any mechanism and/or medium that enables information, signals, data, etc., to be exchanged between the various components.
  • bus 602 may be a set of conductors that carries electrical signals.
  • Bus 602 may also be a wireless medium (e.g. air) that carries wireless signals between one or more of the components.
  • Bus 602 may also be a medium (e.g. air) that enables signals to be capacitively exchanged between one or more of the components.
  • Bus 602 may further be a network connection that connects one or more of the components.
  • any mechanism and/or medium that enables information, signals, data, etc., to be exchanged between the various components may be used as bus 602 .
  • Bus 602 may also be a combination of these mechanisms/media.
  • processor 604 may communicate with storage device 610 wirelessly.
  • the bus 602 , from the standpoint of processor 604 and storage device 610 , would be a wireless medium, such as air.
  • processor 604 may communicate with ROM 608 capacitively.
  • the bus 602 would be the medium (such as air) that enables this capacitive communication to take place.
  • processor 604 may communicate with main memory 606 via a network connection.
  • the bus 602 would be the network connection.
  • processor 604 may communicate with display 612 via a set of conductors. In this instance, the bus 602 would be the set of conductors.
  • bus 602 may take on different forms.
  • Bus 602 functionally represents all of the mechanisms and/or media that enable information, signals, data, etc., to be exchanged between the various components.
  • the invention is related to the use of computer system 600 for implementing the techniques described herein. According to one embodiment of the invention, those techniques are performed by computer system 600 in response to processor 604 executing one or more sequences of one or more instructions contained in main memory 606 . Such instructions may be read into main memory 606 from another machine-readable medium, such as storage device 610 . Execution of the sequences of instructions contained in main memory 606 causes processor 604 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
  • machine-readable medium refers to any medium that participates in providing data that causes a machine to operate in a specific fashion.
  • various machine-readable media are involved, for example, in providing instructions to processor 604 for execution.
  • Such a medium may take many forms, including but not limited to machine-readable storage media (e.g. non-volatile media, volatile media, etc.), and transmission media.
  • Non-volatile media includes, for example, optical or magnetic disks, such as storage device 610 .
  • Volatile media includes dynamic memory, such as main memory 606 .
  • Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 602 . Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
  • Machine-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punchcards, papertape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
  • Various forms of machine-readable media may be involved in carrying one or more sequences of one or more instructions to processor 604 for execution.
  • the instructions may initially be carried on a magnetic disk of a remote computer.
  • the remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem.
  • a modem local to computer system 600 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal.
  • An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 602 .
  • Bus 602 carries the data to main memory 606 , from which processor 604 retrieves and executes the instructions.
  • the instructions received by main memory 606 may optionally be stored on storage device 610 either before or after execution by processor 604 .
  • Computer system 600 also includes a communication interface 618 coupled to bus 602 .
  • Communication interface 618 provides a two-way data communication coupling to a network link 620 that is connected to a local network 622 .
  • communication interface 618 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line.
  • communication interface 618 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN.
  • Wireless links may also be implemented.
  • communication interface 618 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
  • Network link 620 typically provides data communication through one or more networks to other data devices.
  • network link 620 may provide a connection through local network 622 to a host computer 624 or to data equipment operated by an Internet Service Provider (ISP) 626 .
  • ISP 626 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 628 .
  • Internet 628 uses electrical, electromagnetic or optical signals that carry digital data streams.
  • the signals through the various networks and the signals on network link 620 and through communication interface 618 which carry the digital data to and from computer system 600 , are exemplary forms of carrier waves transporting the information.
  • Computer system 600 can send messages and receive data, including program code, through the network(s), network link 620 and communication interface 618 .
  • a server 630 might transmit a requested code for an application program through Internet 628 , ISP 626 , local network 622 and communication interface 618 .
  • the received code may be executed by processor 604 as it is received, and/or stored in storage device 610 , or other non-volatile storage for later execution. In this manner, computer system 600 may obtain application code in the form of a carrier wave.

Abstract

A mechanism is disclosed for selectively providing mount information to processes running within operating system partitions. In one implementation, a non-global operating system partition is created within a global operating system environment. A file system is maintained for this non-global partition. This file system comprises zero or more mounts, and may be part of a larger, overall file system. When a process running within the non-global partition requests information pertaining to mounts, a determination is made as to which partition the process is running in. Because the process is running within the non-global partition, only selected information is provided to the process. More specifically, only information pertaining to the mounts that are within the file system maintained for the non-global partition is provided to the process. By doing so, the process is limited to viewing only those mounts that are part of the non-global partition's file system.

Description

RELATED APPLICATIONS
This application claims priority to U.S. Provisional Application Ser. No. 60/469,558, filed May 9, 2003, entitled OPERATING SYSTEM VIRTUALIZATION by Andrew G. Tucker, et al., the entire contents of which are incorporated herein by this reference.
BACKGROUND
In a typical computer, an operating system executing on the computer maintains an overall file system. This overall file system provides the infrastructure needed to enable items, such as directories and files, to be created, stored, and accessed in an organized manner.
In addition to having directories and files, the overall file system may also comprise one or more mounts. These mounts enable devices, other file systems, and other entities to be “mounted” onto particular mount points of the overall file system to enable them to be accessed in the same manner as directories and files. For example, a floppy disk drive, a hard drive, a CDROM drive, etc., may be mounted onto a particular mount point of the overall file system. Similarly, a file system such as a process file system (ProcFS) or a network file system (NFS) may be mounted onto a particular mount point of the overall file system. Once mounted, these entities (referred to herein as “mounted entities”) may be accessed in the same manner (from the standpoint of a user) as any other directory or file. Thus, they are in effect incorporated into the overall file system.
The mounted entities may be accessed by processes running on the computer. To access a mounted entity, a process may submit a request to the operating system for a list of all of the mounts in the overall file system. When the operating system returns the requested list, the process may select one of the mounts, and submit a request to the operating system to access the mounted entity on that mount. Barring some problem or error, the operating system will grant the request; thus, the process is able to access the mounted entity. Such an access mechanism works well when all processes are allowed to have knowledge of and hence, access to, all mounts in the overall file system. In some implementations, however, it may not be desirable to allow all processes to have knowledge of, and access to, all mounts. In such implementations, the above access mechanism cannot be used effectively.
SUMMARY
In accordance with one embodiment of the present invention, there is provided a mechanism for selectively providing mount information to processes running within operating system partitions. By providing mount information in a selective manner, it is possible to control which processes are allowed to view which mounts.
In one embodiment, a non-global partition is created within a global operating system environment provided by an operating system. This non-global partition (which may be one of a plurality of non-global partitions within the global operating system environment) serves to isolate processes running within the partition from other non-global partitions within the global operating system environment.
A file system is maintained for this non-global partition. This file system may comprise zero or more mounts, and may be part of a larger, overall file system maintained for the global operating system environment. The overall file system may comprise additional mounts, which are not part of the file system maintained for the non-global partition.
When a process running within the non-global partition requests information pertaining to mounts, a determination is made as to which partition the process is running in. Because the process is running within the non-global partition, only selected information is provided to the process. More specifically, in one embodiment, only information pertaining to the mounts that are within the file system maintained for the non-global partition is provided to the process. Information pertaining to mounts outside the file system maintained for the non-global partition is not provided. As a result, the process is limited to viewing only those mounts that are part of the non-global partition's file system, which helps to enforce the file system boundaries of the non-global partition.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a functional block diagram of an operating system environment in accordance with one embodiment of the present invention.
FIG. 2 is an operational flow diagram showing a high level overview of one embodiment of the present invention.
FIG. 3 shows, in tree format, a portion of an overall file system, which includes a file system for a particular operating system zone.
FIG. 4 shows the overall file system of FIG. 3, which further includes a file system for another operating system zone.
FIG. 5 shows a sample mount tracking data structure, which may be used to keep track of all the mounts in a file system, in accordance with one embodiment of the present invention.
FIG. 6 is a block diagram of a general purpose computer system in which one embodiment of the present invention may be implemented.
DETAILED DESCRIPTION OF EMBODIMENT(S) Conceptual Overview
In some computer system implementations, it may be desirable to impose limits on which mounts may be viewed by which processes. To accommodate such implementations, one embodiment of the present invention provides a mechanism for selectively providing mount information to processes running within operating system partitions. An operational flow diagram showing a high-level overview of this embodiment is provided in FIG. 2.
As shown in FIG. 2, a non-global partition is created (block 202) within a global operating system environment provided by an operating system. This non-global partition (which may be one of a plurality of non-global partitions within the global operating system environment) serves to isolate processes running within the partition from other non-global partitions within the global operating system environment. In one embodiment, processes running within this non-global partition may not access or affect processes running within any other non-global partition. Likewise, processes running in other non-global partitions may not access or affect processes running within this non-global partition.
A file system is maintained (block 204) for this non-global partition. This file system may comprise zero or more mounts, and may be part of a larger, overall file system maintained for the global operating system environment. The overall file system may comprise additional mounts, which are not part of the file system maintained for the non-global partition. In one embodiment, the file system maintained for this non-global partition may be accessed by processes running within this non-global partition, but not by processes running within any other non-global partition.
During regular operation, a process running within the non-global partition may request information pertaining to mounts. When such a request is received (block 206), a determination is made (block 208) as to which partition the process is running in. Because the process is running within the non-global partition, only selected information is provided to the process. More specifically, in one embodiment, only information pertaining to the mounts that are within the file system maintained for the non-global partition is provided (block 210) to the process. Information pertaining to mounts outside the file system maintained for the non-global partition is not provided. As a result, the process is limited to viewing only those mounts that are part of the non-global partition's file system (hence, the process can view only those mounts that it can access). In this manner, the file system boundaries of the non-global partition, as they relate to mounts, are enforced.
This embodiment will be described in greater detail in the following sections.
System Overview
FIG. 1 illustrates a functional block diagram of an operating system (OS) environment 100 in accordance with one embodiment of the present invention. OS environment 100 may be derived by executing an OS in a general-purpose computer system, such as computer system 600 illustrated in FIG. 6, for example. For illustrative purposes, it will be assumed that the OS is Solaris manufactured by Sun Microsystems, Inc. of Santa Clara, Calif. However, it should be noted that the concepts taught herein may be applied to any OS, including but not limited to Unix, Linux, Windows, MacOS, etc.
As shown in FIG. 1, OS environment 100 may comprise one or more zones (also referred to herein as partitions), including a global zone 130 and zero or more non-global zones 140. The global zone 130 is the general OS environment that is created when the OS is booted and executed, and serves as the default zone in which processes may be executed if no non-global zones 140 are created. In the global zone 130, administrators and/or processes having the proper rights and privileges can perform generally any task and access any device/resource that is available on the computer system on which the OS is run. Thus, in the global zone 130, an administrator can administer the entire computer system. In one embodiment, it is in the global zone 130 that an administrator executes processes to configure and to manage the non-global zones 140.
The non-global zones 140 represent separate and distinct partitions of the OS environment 100. One of the purposes of the non-global zones 140 is to provide isolation. In one embodiment, a non-global zone 140 can be used to isolate a number of entities, including but not limited to processes 170, one or more file systems 180, and one or more logical network interfaces 182. Because of this isolation, processes 170 executing in one non-global zone 140 cannot access or affect processes in any other zone. Similarly, processes 170 in a non-global zone 140 cannot access or affect the file system 180 of another zone, nor can they access or affect the network interface 182 of another zone. As a result, the processes 170 in a non-global zone 140 are limited to accessing and affecting the processes and entities in that zone. Isolated in this manner, each non-global zone 140 behaves like a virtual standalone computer. While processes 170 in different non-global zones 140 cannot access or affect each other, it should be noted that they may be able to communicate with each other via a network connection through their respective logical network interfaces 182. This is similar to how processes on separate standalone computers communicate with each other.
Having non-global zones 140 that are isolated from each other may be desirable in many implementations. For example, if a single computer system running a single instance of an OS is to be used to host applications for different competitors (e.g. competing websites), it would be desirable to isolate the data and processes of one competitor from the data and processes of another competitor. That way, it can be ensured that information will not be leaked between the competitors. Partitioning an OS environment 100 into non-global zones 140 and hosting the applications of the competitors in separate non-global zones 140 is one possible way of achieving this isolation.
In one embodiment, each non-global zone 140 may be administered separately. More specifically, it is possible to assign a zone administrator to a particular non-global zone 140 and grant that zone administrator rights and privileges to manage various aspects of that non-global zone 140. With such rights and privileges, the zone administrator can perform any number of administrative tasks that affect the processes and other entities within that non-global zone 140. However, the zone administrator cannot change or affect anything in any other non-global zone 140 or the global zone 130. Thus, in the above example, each competitor can administer his/her zone, and hence, his/her own set of applications, but cannot change or affect the applications of a competitor. In one embodiment, to prevent a non-global zone 140 from affecting other zones, the entities in a non-global zone 140 are generally not allowed to access or control any of the physical devices of the computer system.
In contrast to a non-global zone administrator, a global zone administrator with proper rights and privileges may administer all aspects of the OS environment 100 and the computer system as a whole. Thus, a global zone administrator may, for example, access and control physical devices, allocate and control system resources, establish operational parameters, etc. A global zone administrator may also access and control processes and entities within a non-global zone 140.
In one embodiment, enforcement of the zone boundaries is carried out by the kernel 150. More specifically, it is the kernel 150 that ensures that processes 170 in one non-global zone 140 are not able to access or affect processes 170, file systems 180, and network interfaces 182 of another zone (non-global or global). In addition to enforcing the zone boundaries, kernel 150 also provides a number of other services. These services include but are certainly not limited to mapping the network interfaces 182 of the non-global zones 140 to the physical network devices 120 of the computer system, and mapping the file systems 180 of the non-global zones 140 to an overall file system and a physical storage 110 of the computer system. The operation of the kernel 150 will be discussed in greater detail in a later section.
File System for a Non-Global Zone
As noted above, each non-global zone 140 has its own associated file system 180. This file system 180 is used by the processes 170 running within the associated zone 140, and cannot be accessed by processes 170 running within any other non-global zone 140 (although it can be accessed by a process running within the global zone 130 if that process has the appropriate privileges). To illustrate how a separate file system may be maintained for each non-global zone 140, reference will be made to FIGS. 3 and 4.
FIG. 3 shows, in tree format, a portion of an overall file system maintained by the kernel 150 for the global zone 130. This overall file system comprises a / directory 302, which acts as the root for the entire file system. Under this root directory 302 are all of the directories, subdirectories, files, and mounts of the overall file system.
As shown in FIG. 3, under the / directory 302 is a path to a root directory 322 of a file system 180 for a particular non-global zone 140. In the example shown, the path is /Zones/ZoneA/Root (as seen from the global zone 130), and the non-global zone is zone A 140(a) (FIG. 1). This root 322 acts as the root of the file system 180(a) for zone A 140(a), and everything underneath this root 322 is part of that file system 180(a). Because root 322 is the root of the file system 180(a) for zone A 140(a), processes 170(a) within zone A 140(a) cannot traverse up the file system hierarchy beyond root 322. Thus, processes 170(a) cannot see or access any of the directories above root 322, or any of the subdirectories that can be reached from those directories. To processes 170(a), it is as if the other portions of the overall file system did not exist.
FIG. 4 shows the same overall file system, except that another file system for another non-global zone 140 has been added. In the example shown, the other non-global zone is zone B 140(b) (FIG. 1), and the path to the root 422 of the file system 180(b) for zone B 140(b) is /Zones/ZoneB/Root. Root 422 acts as the root of the file system 180(b) for zone B 140(b), and everything underneath it is part of that file system 180(b). Because root 422 is the root of the file system 180(b) for zone B 140(b), processes 170(b) within zone B 140(b) cannot traverse up the file system hierarchy beyond root 422. Thus, processes 170(b) cannot see or access any of the directories above root 422, or any of the subdirectories that can be reached from those directories. To processes 170(b), it is as if the other portions of the overall file system did not exist. By organizing the file systems in this manner, it is possible to maintain, within an overall file system maintained for the global zone 130, a separate file system 180 for each non-global zone 140. It should be noted that this is just one way of maintaining a separate file system for each non-global zone. Other methods may be used, and all such methods are within the scope of the present invention.
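The confinement just described, where a process cannot traverse the file system hierarchy above its zone's root, can be sketched with a hypothetical path-resolution step. The function name and the clamping behavior are assumptions for illustration; an actual kernel resolves paths through its own lookup machinery.

```python
def step(zone_root, cwd, component):
    """Resolve one path component for a zone-confined process; '..' at the
    zone's root stays at the root, so directories above it are unreachable."""
    if component == "..":
        if cwd == zone_root:
            return zone_root          # cannot climb above the zone's root
        return cwd.rsplit("/", 1)[0]  # normal parent directory
    return cwd + "/" + component
```

From /Zones/ZoneA/Root, repeated ".." steps never reach /Zones; to processes in zone A, the rest of the overall file system effectively does not exist.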
The root of a non-global zone's file system may have any number of directories, subdirectories, and files underneath it. Using root 322 as an example, these directories may include some directories, such as ETC 332, which contain files specific to a zone 140(a) (for example, program files that are to be executed within the zone 140(a)), and some directories, such as USR 324, which contain operating system files that are used by the zone 140(a). These and other directories and files may be included under the root 322, or a subdirectory thereof.
The root of a non-global zone's file system may also have one or more mounts underneath it. Put another way, one or more mount points may exist under the root (or a subdirectory thereof), on which entities may be mounted. Using root 322 as an example, a mount point A 330 may exist under root 322 on which a floppy drive may be mounted. A mount point ProcFS 328 may also exist on which a process file system (ProcFS) may be mounted. In addition, a mount point NFS 326 may exist on which a network file system (NFS) may be mounted (ProcFS and NFS are well known to those in the art and will not be explained in detail herein). Basically, any number of mount points, on which any number and any type of entities (e.g. devices, file systems, etc.) may be mounted, may exist under the root of a non-global zone's file system.
Mounts may exist in various other portions of the overall file system. For example, the file system 180(b) for Zone B 140(b) may have a mount point D 430 on which a CDROM drive may be mounted, a ProcFS mount point 428 on which a ProcFS may be mounted, and an NFS mount point 426 on which an NFS may be mounted. Mounts may also exist in other portions (not shown) of the overall file system, which are not within any file system of any non-global zone. Overall, mounts may exist in any part of the overall file system.
Non-Global Zone States
In one embodiment, a non-global zone 140 may take on one of four states: (1) Configured; (2) Installed; (3) Ready; and (4) Running. When a non-global zone 140 is in the Configured state, it means that an administrator in the global zone 130 has invoked an operating system utility (in one embodiment, zonecfg(1M)) to specify all of the configuration parameters of a non-global zone 140, and has saved that configuration in persistent physical storage 110. In configuring a non-global zone 140, an administrator may specify a number of different parameters. These parameters may include, but are not limited to, a zone name, a zone path to the root directory of the zone's file system 180, specification of zero or more mount points and entities to be mounted when the zone is readied, specification of zero or more network interfaces, specification of devices to be configured when the zone is created, etc.
Once a zone is in the Configured state, a global administrator may invoke another operating system utility (in one embodiment, zoneadm(1M)) to put the zone into the Installed state. When invoked, the operating system utility interacts with the kernel 150 to install all of the necessary files and directories into the zone's root directory, or a subdirectory thereof.
To put an Installed zone into the Ready state, a global administrator invokes an operating system utility (in one embodiment, zoneadm(1M) again), which causes a zoneadmd process 162 to be started (there is a zoneadmd process associated with each non-global zone). In one embodiment, zoneadmd 162 runs within the global zone 130 and is responsible for managing its associated non-global zone 140. After zoneadmd 162 is started, it interacts with the kernel 150 to establish the non-global zone 140. In establishing a non-global zone 140, a number of operations are performed, including but not limited to creating the zone 140 (e.g. assigning a zone ID, creating a zone data structure, etc.), starting a zsched process 164 (zsched is a kernel process; however, it runs within the non-global zone 140, and is used to track kernel resources associated with the non-global zone 140), maintaining a file system 180, plumbing network interfaces 182, and configuring devices. These and other operations put the non-global zone 140 into the Ready state to prepare it for normal operation.
Putting a non-global zone 140 into the Ready state gives rise to a virtual platform on which one or more processes may be executed. This virtual platform provides the infrastructure necessary for enabling one or more processes to be executed within the non-global zone 140 in isolation from processes in other non-global zones 140. The virtual platform also makes it possible to isolate other entities such as file system 180 and network interfaces 182 within the non-global zone 140, so that the zone behaves like a virtual standalone computer. Notice that when a non-global zone 140 is in the Ready state, no user or non-kernel processes are executing inside the zone (recall that zsched is a kernel process, not a user process). Thus, the virtual platform provided by the non-global zone 140 is independent of any processes executing within the zone. Put another way, the zone and hence, the virtual platform, exists even if no user or non-kernel processes are executing within the zone. This means that a non-global zone 140 can remain in existence from the time it is created until either the zone or the OS is terminated. The life of a non-global zone 140 need not be limited to the duration of any user or non-kernel process executing within the zone.
After a non-global zone 140 is in the Ready state, it can be transitioned into the Running state by executing one or more user processes in the zone. In one embodiment, this is done by having zoneadmd 162 start an init process 172 in its associated zone. Once started, the init process 172 looks in the file system 180 of the non-global zone 140 to determine what applications to run. The init process 172 then executes those applications to give rise to one or more other processes 174. In this manner, an application environment is initiated on the virtual platform of the non-global zone 140. In this application environment, all processes 170 are confined to the non-global zone 140; thus, they cannot access or affect processes, file systems, or network interfaces in other zones. The application environment exists so long as one or more user processes are executing within the non-global zone 140.
After a non-global zone 140 is in the Running state, its associated zoneadmd 162 can be used to manage it. Zoneadmd 162 can be used to initiate and control a number of zone administrative tasks. These tasks may include, for example, halting and rebooting the non-global zone 140. When a non-global zone 140 is halted, it is brought from the Running state down to the Installed state. In effect, both the application environment and the virtual platform are terminated. When a non-global zone 140 is rebooted, it is brought from the Running state down to the Installed state, and then transitioned from the Installed state through the Ready state to the Running state. In effect, both the application environment and the virtual platform are terminated and restarted. These and many other tasks may be initiated and controlled by zoneadmd 162 to manage a non-global zone 140 on an ongoing basis during regular operation.
Selective Provision of Mount Information
As noted previously, in one embodiment, mount information is provided to processes running within operating system partitions (zones) in a selective manner. To enable this result to be achieved, certain acts/operations are performed during each of the four states of a non-global zone 140. The acts/operations performed in each of these states will be discussed separately below. To facilitate discussion, zone A 140(a) will be used as the sample zone.
Configured State
As discussed above, when a non-global zone is configured, various configuration parameters are specified for the zone, with some parameters pertaining to the file system for the zone and other parameters pertaining to other aspects of the zone. In one embodiment, the parameters pertaining to the file system include but are not limited to: (1) a path to the root directory of the file system; (2) a specification of all of the directories and subdirectories that are to be created under the root directory, and all of the files that are to be installed into those directories and subdirectories; and (3) a list of all mount points and the entities that are to be mounted onto those mount points.
In the present example of zone A 140(a), the following sets of information are specified: (1) the path to the root directory 322 of the file system 180(a) for zone A 140(a) is /Zones/ZoneA/Root; (2) the directories to be created are ETC 332 and USR 324, and certain packages of files are to be installed under these directories; and (3) the mount points or mount directories are A, ProcFS, and NFS, and the entities to be mounted are a floppy drive, a process file system (ProcFS), and a network file system (NFS), respectively.
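The configuration for zone A might be represented along the following lines. This is a hypothetical sketch only: the field names (`zone_path`, `mounts`, and so on) are illustrative and do not reflect the actual on-disk format produced by the zonecfg utility.

```python
# Hypothetical sketch of the configuration saved for non-global zone A;
# field names are illustrative, not the utility's actual format.
zone_a_config = {
    "name": "zoneA",
    "zone_path": "/Zones/ZoneA/Root",   # root of the zone's file system
    "directories": ["ETC", "USR"],      # created and populated at install time
    "mounts": [                         # entities mounted when the zone is readied
        {"mount_point": "A",      "entity": "floppy drive"},
        {"mount_point": "ProcFS", "entity": "process file system"},
        {"mount_point": "NFS",    "entity": "network file system"},
    ],
    "network_interfaces": [],           # zero or more interfaces
    "devices": [],                      # devices configured when the zone is created
}
```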
Installed State
To transition a non-global zone from the Configured state to the Installed state, an administrator in the global zone 130 invokes an operating system utility (in one embodiment, zoneadm(1M)), which accesses the configuration information associated with the non-global zone, and interacts with the kernel 150 to carry out the installation process. In one embodiment, the following installation operations are performed for the current example of non-global zone A 140(a).
Initially, using the path specification in the configuration information, the root of the file system 180(a) is determined to be root 322. The directories ETC 332 and USR 324 are then created under this root 322. Thereafter, all of the specified files are installed into these directories. In addition, the mount points A, ProcFS, and NFS are extracted from the configuration information. For each mount point, a mount directory is created. Thus, directories A 330, ProcFS 328, and NFS 326 are created under root 322. These directories are now ready to have entities mounted onto them.
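The installation step just described can be sketched as follows. This is a minimal illustration under stated assumptions (the function name `install_zone` is hypothetical, and a temporary directory stands in for the real root): the zone's directories and empty mount-point directories are created under the zone's root, after which the mount points are ready to receive mounted entities.

```python
import pathlib
import tempfile

# Hypothetical sketch of the installation step: create the configured
# directories and empty mount-point directories under the zone's root.
def install_zone(root, directories, mount_points):
    root = pathlib.Path(root)
    for name in directories:       # e.g. ETC, USR (specified files go here)
        (root / name).mkdir(parents=True, exist_ok=True)
    for name in mount_points:      # e.g. A, ProcFS, NFS (entities mounted later)
        (root / name).mkdir(parents=True, exist_ok=True)

# Exercise the sketch in a throwaway directory standing in for "/".
with tempfile.TemporaryDirectory() as tmp:
    zone_root = pathlib.Path(tmp) / "Zones" / "ZoneA" / "Root"
    install_zone(zone_root, ["ETC", "USR"], ["A", "ProcFS", "NFS"])
    names = sorted(p.name for p in zone_root.iterdir())
    # names == ['A', 'ETC', 'NFS', 'ProcFS', 'USR']
```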
Ready State
To transition a non-global zone from the Installed state to the Ready state, an administrator in the global zone 130 invokes an operating system utility (in one embodiment, zoneadm(1M) again), which causes a zoneadmd process 162 to be started. In the present example with zone A 140(a), zoneadmd 162(a) is started. Once started, zoneadmd 162(a) interacts with the kernel 150 to establish zone A 140(a).
In establishing zone A 140(a), several operations are performed, including but not limited to creating zone A 140(a), and maintaining a file system 180(a) for zone A 140(a). In one embodiment, in creating zone A 140(a), a number of operations are performed, including but not limited to assigning a unique zone ID to zone A 140(a), and creating a data structure associated with zone A 140(a). This data structure will be used to store a variety of information associated with zone A 140(a), including for example, the zone ID.
To maintain the file system 180(a) for zone A 140(a), the path (/Zones/ZoneA/Root) to the root directory 322 of the file system 180(a) is initially extracted from the configuration information for zone A 140(a). This directory 322 is established as the root of the file system 180(a) for zone A 140(a). This may involve, for example, storing the path to root 322 in the zone A data structure for future reference.
After the root directory 322 is determined and established, a determination is made as to whether there are any entities to be mounted. In the current example, the configuration information for zone A 140(a) indicates that a floppy drive is to be mounted onto directory A 330, a process file system (ProcFS) is to be mounted onto directory ProcFS 328, and a network file system (NFS) is to be mounted onto directory NFS 326. Thus, as part of maintaining the file system 180(a), these entities are mounted onto their respective mount points.
To keep track of the mounts for future reference, one or more data structures may be maintained. FIG. 5 shows a sample data structure that may be used for this purpose. Generally, whenever an entity is mounted, it is incorporated into the overall file system maintained by the kernel 150 for the global zone 130. Thus, that mounted entity should be associated with the global zone 130. In addition, if the mounted entity is mounted within the file system of a non-global zone 140, it should also be associated with that non-global zone 140. The data structure of FIG. 5 shows how these associations may be maintained, in accordance with one embodiment of the present invention.
Specifically, whenever an entity is mounted, an entry 510 corresponding to the mount is added to a mount tracking data (MTD) structure 500. This entry comprises various sets of information pertaining to the mount, including for example, the path to the mount and certain semantics of the mount (e.g. the type of mount, etc.). In addition, the entry may also include one or more zone-specific pointers that reference a next entry. These pointers make it easy to traverse the MTD structure 500 at a later time to determine all of the mounts that are associated with a particular zone.
To illustrate how the MTD structure 500 may be maintained, a sample walk-through of the creation and maintenance of the structure 500 will now be provided. As shown, MTD structure 500 comprises a global zone entry 502. In one embodiment, this entry 502 is inserted into the MTD structure 500 when the kernel 150 is initialized. When an initial entity is mounted and hence, incorporated into the overall file system maintained for the global zone 130, a mount 1 entry 510(1) representing the initial mount is inserted. The mount 1 entry 510(1) comprises information, such as the information discussed previously, pertaining to the initial mount. A pointer 520(1) in the global zone entry 502 is updated to point to entry 510(1); thus, mount 1 510(1) is associated with the global zone 130. At this point, there are no other mounts in the overall file system. Thus, entry 510(1) does not point to any other entry.
Suppose now that zone A 140(a) is transitioned from the Installed state to the Ready state. As part of this transition, entities are mounted onto the mount points of the file system 180(a) for zone A 140(a). Specifically, a floppy drive is mounted onto mount point A 330, a process file system (ProcFS) is mounted onto mount point ProcFS 328, and a network file system (NFS) is mounted onto mount point NFS 326.
In one embodiment, when zone A 140(a) is created, a zone A entry 504 is inserted into the MTD structure 500. Then, when the floppy drive is mounted onto mount point A 330, a mount A entry 510(2) is added. The mount A entry 510(2) comprises information pertaining to the floppy drive mount. Upon the addition of entry 510(2), several pointers are updated. The pointer 530(1) in the zone A entry 504 is updated to point to entry 510(2); thus, the mount A entry 510(2) is associated with zone A 140(a). In addition, a global zone pointer 520(2) inside the mount 1 entry 510(1) is updated to point to entry 510(2). As a result, the mount A entry 510(2) is also associated with the global zone 130.
Thereafter, when the process file system (ProcFS) is mounted onto mount point ProcFS 328, a mount ProcFS entry 510(3) is added. This entry 510(3) comprises information pertaining to the ProcFS mount. Upon the addition of entry 510(3), several pointers are updated. A zone A pointer 530(2) inside the mount A entry 510(2) is updated to point to entry 510(3); thus, the mount ProcFS entry 510(3) is associated with zone A 140(a). In addition, a global zone pointer 520(3) inside the mount A entry 510(2) is updated to point to entry 510(3). As a result, the mount ProcFS entry 510(3) is also associated with the global zone 130.
Thereafter, when the network file system (NFS) is mounted onto mount point NFS 326, a mount NFS entry 510(4) is added. This entry 510(4) comprises information pertaining to the NFS mount. Upon the addition of entry 510(4), several pointers are updated. A zone A pointer 530(3) inside the mount ProcFS entry 510(3) is updated to point to entry 510(4); thus, the mount NFS entry 510(4) is associated with zone A 140(a). In addition, a global zone pointer 520(4) inside the mount ProcFS entry 510(3) is updated to point to entry 510(4). As a result, the mount NFS entry 510(4) is also associated with the global zone 130.
Suppose that, after zone A 140(a) is readied, another entity is mounted and hence, incorporated into the overall file system maintained for the global zone 130. When this occurs, another entry 510(5) is added to the MTD structure 500. This entry 510(5) comprises information pertaining to the mount (mount 5 in this example). When the mount 5 entry 510(5) is added, the global zone pointer 520(5) in the mount NFS entry 510(4) is updated to point to entry 510(5); thus, the mount 5 entry 510(5) is associated with the global zone 130. However, because mount 5 is not within the file system 180(a) of zone A 140(a), there is no zone A pointer in the NFS mount entry 510(4) that points to entry 510(5). Thus, mount 5 is not associated with zone A 140(a).
With the MTD structure 500 in place, determining all of the mounts that are associated with a particular zone (global or non-global) becomes a simple task. All that needs to be done is to follow the zone pointers in the entries 510.
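The MTD walk-through above can be modeled in miniature as follows. This is an illustrative sketch, not Sun's actual kernel code: the class names `MountEntry` and `MTD` and the method names are hypothetical. It captures the two essential properties of FIG. 5: every mount entry is linked into the global zone's chain, and a mount that falls within a non-global zone's file system is additionally linked into that zone's chain, so that enumerating a zone's mounts is a simple pointer traversal.

```python
# Illustrative sketch (not the patented kernel implementation) of the mount
# tracking data (MTD) structure of FIG. 5.
class MountEntry:
    def __init__(self, name, path):
        self.name = name
        self.path = path      # path to the mount (other semantics omitted)
        self.next = {}        # zone name -> next entry in that zone's chain

class MTD:
    def __init__(self):
        self.heads = {"global": None}   # zone name -> first entry in chain
        self.tails = {"global": None}   # zone name -> last entry (for appends)

    def add_zone(self, zone):
        # Corresponds to inserting a zone entry (e.g. zone A entry 504).
        self.heads[zone] = self.tails[zone] = None

    def add_mount(self, entry, zones):
        # Every mount is associated with the global zone; it may also be
        # associated with the non-global zone whose file system contains it.
        for zone in ["global"] + zones:
            if self.tails[zone] is None:
                self.heads[zone] = entry
            else:
                self.tails[zone].next[zone] = entry
            self.tails[zone] = entry

    def mounts_for(self, zone):
        # Determining a zone's mounts is just following that zone's pointers.
        mounts, entry = [], self.heads[zone]
        while entry is not None:
            mounts.append(entry.name)
            entry = entry.next.get(zone)
        return mounts

# Replay the walk-through: mount 1, then zone A's three mounts, then mount 5.
mtd = MTD()
mtd.add_mount(MountEntry("mount 1", "/"), [])
mtd.add_zone("zoneA")
mtd.add_mount(MountEntry("mount A", "/Zones/ZoneA/Root/A"), ["zoneA"])
mtd.add_mount(MountEntry("mount ProcFS", "/Zones/ZoneA/Root/ProcFS"), ["zoneA"])
mtd.add_mount(MountEntry("mount NFS", "/Zones/ZoneA/Root/NFS"), ["zoneA"])
mtd.add_mount(MountEntry("mount 5", "/mnt/other"), [])

mtd.mounts_for("zoneA")    # ['mount A', 'mount ProcFS', 'mount NFS']
mtd.mounts_for("global")   # all five mounts, in mount order
```

As in the walk-through, mounts 1 and 5 appear only on the global chain, so a query on behalf of zone A never sees them.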
Running State
To transition a non-global zone from the Ready state to the Running state, the zoneadmd process associated with the non-global zone starts an init process. In the current example, zoneadmd 162(a) starts the init process 172(a). When started, the init process 172(a) is associated with zone A 140(a). This may be done, for example, by creating a data structure for process 172(a), and storing the zone ID of zone A 140(a) within that data structure. Once started, the init process 172(a) looks in the file system 180(a) (for example, in directory ETC 332) of zone A 140(a) to determine what applications to run. The init process 172(a) then executes those applications to give rise to one or more other processes 174. In one embodiment, as each process 174 is started, it is associated with zone A 140(a). This may be done in the same manner as that discussed above in connection with the init process 172(a) (e.g. by creating a data structure for the process 174 and storing the zone ID of zone A 140(a) within that data structure). In the manner described, processes 170(a) are started and associated with (and hence, are running within) zone A 140(a).
During regular operation, one or more processes may submit a request to the kernel 150 for information pertaining to mounts. In one embodiment, when the kernel 150 receives such a request from a process, it determines the zone in which that process is running, which may be the global zone 130 or one of the non-global zones 140. The kernel 150 then selectively provides the mount information appropriate for processes in that zone.
For example, when the kernel 150 receives a request from one of the processes 170(a) running within zone A 140(a), the kernel 150 determines the zone in which that process 170(a) is running. This determination may be made, for example, by accessing the data structure associated with the process 170(a), and extracting the zone ID of zone A 140(a) therefrom. The zone ID may then be used to determine that the process 170(a) is running within zone A 140(a). After this determination is made, the kernel 150 traverses the MTD structure 500 shown in FIG. 5 to determine all of the mounts that are within the file system 180(a) for zone A 140(a). This may be done, for example, by starting with the zone A entry 504, following the pointer 530(1) in the zone A entry 504 to the mount A entry 510(2), following the zone A pointer 530(2) in the mount A entry 510(2) to the mount ProcFS entry 510(3), and following the zone A pointer 530(3) in the mount ProcFS entry 510(3) to the mount NFS entry 510(4).
After the kernel 150 determines which mounts are within the file system 180(a) for zone A 140(a), it provides information pertaining to just those mounts to the requesting process 170(a). Information pertaining to all of the other mounts (e.g. mount 1 and mount 5) in the overall file system is not provided. That way, the process 170(a) is made aware of, and hence, can view only those mounts that are within the file system 180(a) of the zone 140(a) in which the process 170(a) is running. In this manner, the kernel 150 enforces the file system boundary (from a mount standpoint) of a zone.
Presentation of Mount Information
One of the purposes of selectively providing mount information to a process is to create an illusion for the process that the file system associated with the zone in which the process is running is the entire file system. In one embodiment, to perpetuate this illusion, mount information is presented in a particular way.
First, most processes expect the root of a file system to be a mount (this mount may, for example, be a mount for a hard drive on which most or all of the files are stored). Because of this expectation, the kernel 150 shows the root of a file system as a mount. This is done even if the root is not actually a mount. For example, the root 322 of the file system 180(a) for zone A 140(a) is not an actual mount. Nonetheless, in one embodiment, the kernel 150 shows root 322 as a mount when responding to a request for mount information from a process 170(a) running within zone A 140(a).
Also, when information pertaining to a mount is provided, the full path to the mount may be filtered. This helps to hide the fact that the file system associated with a zone may be part of a larger overall file system. For example, the full path to the root 322 of the file system 180(a) for zone A 140(a) is /Zones/ZoneA/Root. This full path is not shown to a process 170(a) running within zone A 140(a), however. Instead, the process 170(a) is shown just “/” (indicating a root) so that the process 170(a) is kept unaware of the existence of the other directories (e.g. Zones, Zone A, etc.). Likewise, mounts 330, 328, and 326 are shown as /A, /ProcFS, and /NFS, respectively, instead of /Zones/ZoneA/Root/A, /Zones/ZoneA/Root/ProcFS, and /Zones/ZoneA/Root/NFS. By doing so, the kernel 150 perpetuates the illusion to the process 170(a) that the file system 180(a) of zone A 140(a) is the entire file system.
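The path filtering described above amounts to stripping the zone root's prefix and presenting the root itself as a mount at "/". The following sketch is hypothetical (the function name `visible_mount_paths` is an assumption, and the real kernel operates on its own mount records rather than path strings), but it reproduces the presentation described for zone A.

```python
# Hypothetical sketch of filtering mount paths before showing them to a
# process in a non-global zone: the zone root's prefix is stripped, and the
# root itself is reported as a mount at "/" even if it is not an actual mount.
def visible_mount_paths(zone_root, mount_paths):
    visible = ["/"]                              # the zone root, shown as a mount
    for path in mount_paths:
        assert path.startswith(zone_root + "/")  # only this zone's mounts arrive here
        visible.append(path[len(zone_root):])    # hide the enclosing directories
    return visible

shown = visible_mount_paths(
    "/Zones/ZoneA/Root",
    ["/Zones/ZoneA/Root/A",
     "/Zones/ZoneA/Root/ProcFS",
     "/Zones/ZoneA/Root/NFS"],
)
# shown == ['/', '/A', '/ProcFS', '/NFS']
```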
Alternative Method for Determining Mounts
It was disclosed previously that an MTD structure 500 may be constructed and used to determine the mounts that are within a particular file system associated with a particular zone. While this is an effective method, it should be noted that other methods may also be used within the scope of the present invention.
For example, instead of maintaining an MTD structure 500, a full path to each mount may be stored, and this path may be compared with the path to the root of a file system to determine whether that mount is within the file system. For example, the path to mount NFS 326 is /Zones/ZoneA/Root/NFS. The path to the root 322 of the file system 180(a) for zone A 140(a) is /Zones/ZoneA/Root. By comparing these two paths, it can be determined that mount NFS 326 is under the root of file system 180(a). Thus, mount NFS 326 is within file system 180(a). Similarly, the path to the root 422 of the file system 180(b) for zone B 140(b) is /Zones/ZoneB/Root. By comparing the path to mount NFS 326 with the path to root 422, it can be determined that mount NFS 326 is not under the root of file system 180(b). Thus, mount NFS 326 is not within file system 180(b). By comparing the path to each mount with the path to the root of a particular file system, it can be determined which mounts are under the root, and hence, are part of the file system. This and other methods may be used to determine the mounts that are within a particular file system. All such methods are within the scope of the present invention.
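The path-comparison alternative can be sketched as a single predicate. This illustration is an assumption about how such a comparison might be coded (the function name is hypothetical); note that a naive string-prefix test alone would be too loose, since for example /Zones/ZoneAB would match a prefix test against /Zones/ZoneA, so the comparison should respect path-component boundaries.

```python
# Sketch of the alternative method: decide whether a mount lies within a
# zone's file system by comparing the mount's full path with the path to the
# root of that file system.
def mount_in_file_system(mount_path, root_path):
    # Require a path-component boundary after the root, so that e.g.
    # /Zones/ZoneAB/... is not mistaken for a path under /Zones/ZoneA.
    return mount_path == root_path or mount_path.startswith(root_path + "/")

mount_in_file_system("/Zones/ZoneA/Root/NFS", "/Zones/ZoneA/Root")  # True
mount_in_file_system("/Zones/ZoneA/Root/NFS", "/Zones/ZoneB/Root")  # False
```

Applying this predicate to every mount against a given zone's root yields the same set of mounts that the MTD chains would produce, at the cost of a path comparison per mount.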
Hardware Overview
FIG. 6 is a block diagram that illustrates a computer system 600 upon which an embodiment of the invention may be implemented. Computer system 600 includes a bus 602 for facilitating information exchange, and one or more processors 604 coupled with bus 602 for processing information. Computer system 600 also includes a main memory 606, such as a random access memory (RAM) or other dynamic storage device, coupled to bus 602 for storing information and instructions to be executed by processor 604. Main memory 606 also may be used for storing temporary variables or other intermediate information during execution of instructions by processor 604. Computer system 600 may further include a read only memory (ROM) 608 or other static storage device coupled to bus 602 for storing static information and instructions for processor 604. A storage device 610, such as a magnetic disk or optical disk, is provided and coupled to bus 602 for storing information and instructions.
Computer system 600 may be coupled via bus 602 to a display 612, such as a cathode ray tube (CRT), for displaying information to a computer user. An input device 614, including alphanumeric and other keys, is coupled to bus 602 for communicating information and command selections to processor 604. Another type of user input device is cursor control 616, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 604 and for controlling cursor movement on display 612. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
In computer system 600, bus 602 may be any mechanism and/or medium that enables information, signals, data, etc., to be exchanged between the various components. For example, bus 602 may be a set of conductors that carries electrical signals. Bus 602 may also be a wireless medium (e.g. air) that carries wireless signals between one or more of the components. Bus 602 may also be a medium (e.g. air) that enables signals to be capacitively exchanged between one or more of the components. Bus 602 may further be a network connection that connects one or more of the components. Overall, any mechanism and/or medium that enables information, signals, data, etc., to be exchanged between the various components may be used as bus 602.
Bus 602 may also be a combination of these mechanisms/media. For example, processor 604 may communicate with storage device 610 wirelessly. In such a case, the bus 602, from the standpoint of processor 604 and storage device 610, would be a wireless medium, such as air. Further, processor 604 may communicate with ROM 608 capacitively. In this instance, the bus 602 would be the medium (such as air) that enables this capacitive communication to take place. Further, processor 604 may communicate with main memory 606 via a network connection. In this case, the bus 602 would be the network connection. Further, processor 604 may communicate with display 612 via a set of conductors. In this instance, the bus 602 would be the set of conductors. Thus, depending upon how the various components communicate with each other, bus 602 may take on different forms. Bus 602, as shown in FIG. 6, functionally represents all of the mechanisms and/or media that enable information, signals, data, etc., to be exchanged between the various components.
The invention is related to the use of computer system 600 for implementing the techniques described herein. According to one embodiment of the invention, those techniques are performed by computer system 600 in response to processor 604 executing one or more sequences of one or more instructions contained in main memory 606. Such instructions may be read into main memory 606 from another machine-readable medium, such as storage device 610. Execution of the sequences of instructions contained in main memory 606 causes processor 604 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
The term “machine-readable medium” as used herein refers to any medium that participates in providing data that causes a machine to operate in a specific fashion. In an embodiment implemented using computer system 600, various machine-readable media are involved, for example, in providing instructions to processor 604 for execution. Such a medium may take many forms, including but not limited to machine-readable storage media (e.g. non-volatile media, volatile media, etc.), and transmission media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 610. Volatile media includes dynamic memory, such as main memory 606. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 602. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
Common forms of machine-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave as described hereinafter, or any other medium from which a computer can read.
Various forms of machine-readable media may be involved in carrying one or more sequences of one or more instructions to processor 604 for execution. For example, the instructions may initially be carried on a magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to computer system 600 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 602. Bus 602 carries the data to main memory 606, from which processor 604 retrieves and executes the instructions. The instructions received by main memory 606 may optionally be stored on storage device 610 either before or after execution by processor 604.
Computer system 600 also includes a communication interface 618 coupled to bus 602. Communication interface 618 provides a two-way data communication coupling to a network link 620 that is connected to a local network 622. For example, communication interface 618 may be an integrated services digital network (ISDN) card or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 618 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, communication interface 618 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
Network link 620 typically provides data communication through one or more networks to other data devices. For example, network link 620 may provide a connection through local network 622 to a host computer 624 or to data equipment operated by an Internet Service Provider (ISP) 626. ISP 626 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 628. Local network 622 and Internet 628 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 620 and through communication interface 618, which carry the digital data to and from computer system 600, are exemplary forms of carrier waves transporting the information.
Computer system 600 can send messages and receive data, including program code, through the network(s), network link 620 and communication interface 618. In the Internet example, a server 630 might transmit a requested code for an application program through Internet 628, ISP 626, local network 622 and communication interface 618.
The received code may be executed by processor 604 as it is received, and/or stored in storage device 610, or other non-volatile storage for later execution. In this manner, computer system 600 may obtain application code in the form of a carrier wave.
At this point, it should be noted that although the invention has been described with reference to a specific embodiment, it should not be construed to be so limited. Various modifications may be made by those of ordinary skill in the art with the benefit of this disclosure without departing from the spirit of the invention. Thus, the invention should not be limited by the specific embodiments used to illustrate it but only by the scope of the issued claims.

Claims (36)

1. A machine-implemented method, comprising:
creating, by an operating system, a plurality of non-global operating system partitions within a global operating system environment provided by the operating system, wherein each non-global operating system partition serves to isolate processes running within that non-global operating system partition from other non-global operating system partitions within the global operating system environment, wherein enforcement of boundaries between the non-global operating system partitions is carried out by the operating system, and wherein the plurality of non-global operating system partitions comprises a particular non-global operating system partition;
maintaining a file system for the particular non-global operating system partition, the file system comprising one or more mounts;
receiving a request from a process running within the particular non-global operating system partition to view information for mounts;
determining that the process is running within the particular non-global operating system partition; and
providing to the process information for only those mounts that are within the file system for the particular non-global operating system partition.
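The steps of claim 1 resemble the per-partition mount filtering used in Solaris Zones: every mount records which partition owns it, and a mount-listing request is answered only with that partition's entries. A minimal sketch in C, with all names (`mount_entry_t`, `zone_id`, `list_mounts_for`) hypothetical rather than taken from the patent:

```c
#include <stddef.h>

/* Hypothetical model: every mount in the overall file system records the
 * partition (zone) that owns it, and a request to view mounts is answered
 * only with the entries owned by the requesting process's partition. */
typedef struct mount_entry {
    const char *mnt_path;           /* mount point path */
    int zone_id;                    /* owning partition's unique id */
    struct mount_entry *next;       /* all mounts kept as a linked list */
} mount_entry_t;

typedef struct {
    int zone_id;                    /* partition the process runs in */
} process_t;

/* Walk the global mount list and report only the mounts belonging to
 * the partition in which 'proc' is running (the last step of claim 1). */
static size_t list_mounts_for(const process_t *proc,
                              const mount_entry_t *all_mounts,
                              const mount_entry_t **out, size_t max)
{
    size_t n = 0;
    for (const mount_entry_t *m = all_mounts; m != NULL && n < max; m = m->next)
        if (m->zone_id == proc->zone_id)
            out[n++] = m;
    return n;
}
```

The linked list here corresponds to the "mount data tracking structure" of claims 4–6, collapsed into a single global list with per-entry ownership for brevity.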
2. The method of claim 1, wherein the file system for the particular non-global operating system partition is part of an overall file system maintained for the global operating system environment, and wherein the overall file system comprises one or more other mounts that are not within the file system for the particular non-global operating system partition.
3. The method of claim 1, wherein maintaining comprises:
associating the one or more mounts with the particular non-global operating system partition.
4. The method of claim 3, wherein the particular non-global operating system partition has a mount data tracking structure associated therewith, and wherein associating comprises:
adding entries corresponding to the one or more mounts to the mount data tracking structure associated with the particular non-global operating system partition.
5. The method of claim 4, wherein the mount data tracking structure associated with the particular non-global operating system partition comprises a linked list of mount entries.
6. The method of claim 4, wherein providing comprises:
accessing the mount data tracking structure associated with the particular non-global operating system partition; and
determining, based upon the mount data tracking structure associated with the particular non-global operating system partition, the one or more mounts within the file system for the particular non-global operating system partition.
7. The method of claim 1, wherein the file system for the particular non-global operating system partition has a root directory, and wherein providing comprises:
determining which mounts are within the file system for the particular non-global operating system partition by determining which mounts are under the root directory, or a subdirectory thereof.
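Claim 7's membership test (is this mount under the partition's root directory?) can be sketched as a path-prefix check. This is a hypothetical illustration, not code from the patent; note that a bare prefix comparison is insufficient, since "/zones/z10" must not match a root of "/zones/z1":

```c
#include <string.h>

/* Hypothetical check for claim 7: a mount is within a partition's file
 * system iff its mount point equals the partition's root directory or
 * lies under it (the root, or a subdirectory thereof). The character
 * after the matched prefix must be a path separator or the end of the
 * string, so "/zones/z10" does not match a root of "/zones/z1". */
static int mount_in_partition(const char *mnt_path, const char *zone_root)
{
    size_t len = strlen(zone_root);
    return strncmp(mnt_path, zone_root, len) == 0 &&
           (mnt_path[len] == '/' || mnt_path[len] == '\0');
}
```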
8. The method of claim 1, wherein:
the operating system isolates the process within the particular non-global operating system partition by not allowing the process to access processes running in any other non-global operating system partition.
9. The method of claim 1, wherein creating comprises assigning a unique identifier to the particular non-global operating system partition.
10. The method of claim 9, wherein determining comprises:
extracting, from a data structure associated with the process, a partition identifier; and
using the partition identifier to determine the particular non-global operating system partition.
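Claims 9–10 pair a unique identifier handed out at partition creation with a later lookup that maps a process back to its partition. A minimal sketch under assumed names (`zone_t`, `zone_create`, `zone_of`; the "data structure associated with the process" is reduced to a struct holding the id):

```c
#include <stddef.h>

/* Hypothetical sketch of claims 9-10: partition creation assigns a
 * unique identifier, and the kernel later determines a process's
 * partition by extracting that identifier from the process structure. */
typedef struct {
    int zone_id;                    /* unique id assigned at creation */
    const char *zone_root;          /* root of this partition's file system */
} zone_t;

#define MAX_ZONES 8
static zone_t zone_table[MAX_ZONES];
static int next_zone_id = 1;        /* id 0 reserved for the global zone */

/* Create a partition, assigning the next unique identifier (claim 9). */
static zone_t *zone_create(const char *root)
{
    if (next_zone_id >= MAX_ZONES)
        return NULL;
    zone_t *z = &zone_table[next_zone_id];
    z->zone_id = next_zone_id++;
    z->zone_root = root;
    return z;
}

/* Data structure associated with a process, carrying its partition id. */
typedef struct { int zone_id; } proc_t;

/* Extract the partition identifier from the process structure and use
 * it to determine the partition (claim 10). */
static zone_t *zone_of(const proc_t *p)
{
    return (p->zone_id > 0 && p->zone_id < next_zone_id)
               ? &zone_table[p->zone_id] : NULL;
}
```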
11. The method of claim 1, wherein the file system for the particular non-global operating system partition has a root directory, and wherein providing comprises:
indicating to the process that the root directory is one of the one or more mounts.
12. The method of claim 1, wherein the file system for the particular non-global operating system partition has a root directory, wherein the root directory has an associated path, wherein each of the one or more mounts is under the root directory, or a subdirectory thereof, and wherein providing comprises:
showing, to the process, each of the one or more mounts without including the path to the root directory.
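Claims 11–12 describe how mounts are rendered to the in-partition process: the partition's root directory itself appears as a mount, and every path is shown with the root's path stripped, so the process sees "/usr" rather than "/zones/z1/usr". A hypothetical rendering helper (`visible_path` is an assumed name):

```c
#include <string.h>

/* Hypothetical rendering step for claims 11-12: mount points are shown
 * with the partition root's path removed, so the root itself appears
 * as "/" and a mount at "<root>/usr" appears as "/usr". */
static const char *visible_path(const char *mnt_path, const char *zone_root)
{
    size_t len = strlen(zone_root);
    if (strncmp(mnt_path, zone_root, len) != 0)
        return NULL;                /* not within this partition */
    return mnt_path[len] == '\0' ? "/" : mnt_path + len;
}
```

Stripping the prefix keeps the global-zone layout (where each partition's file system actually lives) invisible to the partitioned process.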
13. An apparatus, comprising:
one or more processors; and
a storage having stored therein instructions which, when executed by the one or more processors, cause the one or more processors to perform the operations of:
implementing an operating system that creates a plurality of non-global operating system partitions within a global operating system environment provided by the operating system, wherein each non-global operating system partition serves to isolate processes running within that non-global operating system partition from other non-global operating system partitions within the global operating system environment, wherein enforcement of boundaries between the non-global operating system partitions is carried out by the operating system, and wherein the plurality of non-global operating system partitions comprises a particular non-global operating system partition;
maintaining a file system for the particular non-global operating system partition, the file system comprising one or more mounts;
receiving a request from a process running within the particular non-global operating system partition to view information for mounts;
determining that the process is running within the particular non-global operating system partition; and
providing to the process information for only those mounts that are within the file system for the particular non-global operating system partition.
14. The apparatus of claim 13, wherein the file system for the particular non-global operating system partition is part of an overall file system maintained for the global operating system environment, and wherein the overall file system comprises one or more other mounts that are not within the file system for the particular non-global operating system partition.
15. The apparatus of claim 13, wherein maintaining comprises:
associating the one or more mounts with the particular non-global operating system partition.
16. The apparatus of claim 15, wherein the particular non-global operating system partition has a mount data tracking structure associated therewith, and wherein associating comprises:
adding entries corresponding to the one or more mounts to the mount data tracking structure associated with the particular non-global operating system partition.
17. The apparatus of claim 16, wherein the mount data tracking structure associated with the particular non-global operating system partition comprises a linked list of mount entries.
18. The apparatus of claim 16, wherein providing comprises:
accessing the mount data tracking structure associated with the particular non-global operating system partition; and
determining, based upon the mount data tracking structure associated with the particular non-global operating system partition, the one or more mounts within the file system for the particular non-global operating system partition.
19. The apparatus of claim 13, wherein the file system for the particular non-global operating system partition has a root directory, and wherein providing comprises:
determining which mounts are within the file system for the particular non-global operating system partition by determining which mounts are under the root directory, or a subdirectory thereof.
20. The apparatus of claim 13, wherein the operating system isolates the process within the particular non-global operating system partition by not allowing the process to access processes running in any other non-global operating system partition.
21. The apparatus of claim 13, wherein implementing the operating system comprises assigning a unique identifier to the particular non-global operating system partition.
22. The apparatus of claim 21, wherein determining comprises:
extracting, from a data structure associated with the process, a partition identifier; and
using the partition identifier to determine the particular non-global operating system partition.
23. The apparatus of claim 13, wherein the file system for the particular non-global operating system partition has a root directory, and wherein providing comprises:
indicating to the process that the root directory is one of the one or more mounts.
24. The apparatus of claim 13, wherein the file system for the particular non-global operating system partition has a root directory, wherein the root directory has an associated path, wherein each of the one or more mounts is under the root directory, or a subdirectory thereof, and wherein providing comprises:
showing, to the process, each of the one or more mounts without including the path to the root directory.
25. A machine-readable storage medium, comprising:
instructions for causing one or more processors to implement an operating system that creates a plurality of non-global operating system partitions within a global operating system environment provided by the operating system, wherein each non-global operating system partition serves to isolate processes running within that non-global operating system partition from other non-global operating system partitions within the global operating system environment, wherein enforcement of boundaries between the non-global operating system partitions is carried out by the operating system, and wherein the plurality of non-global operating system partitions comprises a particular non-global operating system partition;
instructions for causing one or more processors to maintain a file system for the particular non-global operating system partition, the file system comprising one or more mounts;
instructions for causing one or more processors to receive a request from a process running within the particular non-global operating system partition to view information for mounts;
instructions for causing one or more processors to determine that the process is running within the particular non-global operating system partition; and
instructions for causing one or more processors to provide to the process information for only those mounts that are within the file system for the particular non-global operating system partition.
26. The machine-readable storage medium of claim 25, wherein the file system for the particular non-global operating system partition is part of an overall file system maintained for the global operating system environment, and wherein the overall file system comprises one or more other mounts that are not within the file system for the particular non-global operating system partition.
27. The machine-readable storage medium of claim 25, wherein the instructions for causing one or more processors to maintain comprises:
instructions for causing one or more processors to associate the one or more mounts with the particular non-global operating system partition.
28. The machine-readable storage medium of claim 27, wherein the particular non-global operating system partition has a mount data tracking structure associated therewith, and wherein the instructions for causing one or more processors to associate comprises:
instructions for causing one or more processors to add entries corresponding to the one or more mounts to the mount data tracking structure associated with the particular non-global operating system partition.
29. The machine-readable storage medium of claim 28, wherein the mount data tracking structure associated with the particular non-global operating system partition comprises a linked list of mount entries.
30. The machine-readable storage medium of claim 28, wherein the instructions for causing one or more processors to provide comprises:
instructions for causing one or more processors to access the mount data tracking structure associated with the particular non-global operating system partition; and
instructions for causing one or more processors to determine, based upon the mount data tracking structure associated with the particular non-global operating system partition, the one or more mounts within the file system for the particular non-global operating system partition.
31. The machine-readable storage medium of claim 25, wherein the file system for the particular non-global operating system partition has a root directory, and wherein the instructions for causing one or more processors to provide comprises:
instructions for causing one or more processors to determine which mounts are within the file system for the particular non-global operating system partition by determining which mounts are under the root directory, or a subdirectory thereof.
32. The machine-readable storage medium of claim 25, wherein the operating system isolates the process within the particular non-global operating system partition by not allowing the process to access processes running in any other non-global operating system partition.
33. The machine-readable storage medium of claim 25, wherein the instructions for causing one or more processors to implement the operating system comprises instructions for causing one or more processors to assign a unique identifier to the particular non-global operating system partition.
34. The machine-readable storage medium of claim 33, wherein the instructions for causing one or more processors to determine comprises:
instructions for causing one or more processors to extract, from a data structure associated with the process, a partition identifier; and
instructions for causing one or more processors to use the partition identifier to determine the particular non-global operating system partition.
35. The machine-readable storage medium of claim 25, wherein the file system for the particular non-global operating system partition has a root directory, and wherein the instructions for causing one or more processors to provide comprises:
instructions for causing one or more processors to indicate to the process that the root directory is one of the one or more mounts.
36. The machine-readable storage medium of claim 25, wherein the file system for the particular non-global operating system partition has a root directory, wherein the root directory has an associated path, wherein each of the one or more mounts is under the root directory, or a subdirectory thereof, and wherein the instructions for causing one or more processors to provide comprises:
instructions for causing one or more processors to show, to the process, each of the one or more mounts without including the path to the root directory.
US10/767,235 2003-05-09 2004-01-28 Mechanism for selectively providing mount information to processes running within operating system partitions Active 2026-09-13 US7490074B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/767,235 US7490074B1 (en) 2003-05-09 2004-01-28 Mechanism for selectively providing mount information to processes running within operating system partitions

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US46955803P 2003-05-09 2003-05-09
US10/767,235 US7490074B1 (en) 2003-05-09 2004-01-28 Mechanism for selectively providing mount information to processes running within operating system partitions

Publications (1)

Publication Number Publication Date
US7490074B1 true US7490074B1 (en) 2009-02-10

Family

ID=40073863

Family Applications (7)

Application Number Title Priority Date Filing Date
US10/744,360 Active 2025-03-19 US7461080B1 (en) 2003-05-09 2003-12-22 System logging within operating system partitions using log device nodes that are access points to a log driver
US10/761,622 Active 2025-12-19 US7526774B1 (en) 2003-05-09 2004-01-20 Two-level service model in operating system partitions
US10/762,067 Active 2029-04-22 US7793289B1 (en) 2003-05-09 2004-01-20 System accounting for operating system partitions
US10/767,118 Active 2027-07-13 US7567985B1 (en) 2003-05-09 2004-01-28 Mechanism for implementing a sparse file system for an operating system partition
US10/767,235 Active 2026-09-13 US7490074B1 (en) 2003-05-09 2004-01-28 Mechanism for selectively providing mount information to processes running within operating system partitions
US10/771,698 Active 2027-03-07 US7805726B1 (en) 2003-05-09 2004-02-03 Multi-level resource limits for operating system partitions
US10/833,474 Active 2031-03-08 US8516160B1 (en) 2003-05-09 2004-04-27 Multi-level administration of shared network resources

Family Applications Before (4)

Application Number Title Priority Date Filing Date
US10/744,360 Active 2025-03-19 US7461080B1 (en) 2003-05-09 2003-12-22 System logging within operating system partitions using log device nodes that are access points to a log driver
US10/761,622 Active 2025-12-19 US7526774B1 (en) 2003-05-09 2004-01-20 Two-level service model in operating system partitions
US10/762,067 Active 2029-04-22 US7793289B1 (en) 2003-05-09 2004-01-20 System accounting for operating system partitions
US10/767,118 Active 2027-07-13 US7567985B1 (en) 2003-05-09 2004-01-28 Mechanism for implementing a sparse file system for an operating system partition

Family Applications After (2)

Application Number Title Priority Date Filing Date
US10/771,698 Active 2027-03-07 US7805726B1 (en) 2003-05-09 2004-02-03 Multi-level resource limits for operating system partitions
US10/833,474 Active 2031-03-08 US8516160B1 (en) 2003-05-09 2004-04-27 Multi-level administration of shared network resources

Country Status (1)

Country Link
US (7) US7461080B1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080046610A1 (en) * 2006-07-20 2008-02-21 Sun Microsystems, Inc. Priority and bandwidth specification at mount time of NAS device volume
US20090037718A1 (en) * 2007-07-31 2009-02-05 Ganesh Perinkulam I Booting software partition with network file system
US8301597B1 (en) 2011-09-16 2012-10-30 Ca, Inc. System and method for network file system server replication using reverse path lookup
US9552367B2 (en) 2011-09-16 2017-01-24 Ca, Inc. System and method for network file system server replication using reverse path lookup

Families Citing this family (59)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7970789B1 (en) * 2003-06-11 2011-06-28 Symantec Corporation Sublayered application layered system
AU2003298560A1 (en) * 2002-08-23 2004-05-04 Exit-Cube, Inc. Encrypting operating system
US20080313282A1 (en) 2002-09-10 2008-12-18 Warila Bruce W User interface, operating system and architecture
US8892878B2 (en) * 2003-05-09 2014-11-18 Oracle America, Inc. Fine-grained privileges in operating system partitions
US7698402B2 (en) * 2004-06-30 2010-04-13 Hewlett-Packard Development Company, L.P. Method and apparatus for enhanced design of multi-tier systems
US7607011B1 (en) * 2004-07-16 2009-10-20 Rockwell Collins, Inc. System and method for multi-level security on a network
US8171479B2 (en) 2004-09-30 2012-05-01 Citrix Systems, Inc. Method and apparatus for providing an aggregate view of enumerated system resources from various isolation layers
US7853947B2 (en) * 2004-09-30 2010-12-14 Citrix Systems, Inc. System for virtualizing access to named system objects using rule action associated with request
US8095940B2 (en) 2005-09-19 2012-01-10 Citrix Systems, Inc. Method and system for locating and accessing resources
US20060069662A1 (en) * 2004-09-30 2006-03-30 Citrix Systems, Inc. Method and apparatus for remapping accesses to virtual system resources
US7680758B2 (en) * 2004-09-30 2010-03-16 Citrix Systems, Inc. Method and apparatus for isolating execution of software applications
US8117559B2 (en) * 2004-09-30 2012-02-14 Citrix Systems, Inc. Method and apparatus for virtualizing window information
US8307453B1 (en) * 2004-11-29 2012-11-06 Symantec Corporation Zone breakout detection
US8219823B2 (en) * 2005-03-04 2012-07-10 Carter Ernst B System for and method of managing access to a system using combinations of user information
US20070050770A1 (en) * 2005-08-30 2007-03-01 Geisinger Nile J Method and apparatus for uniformly integrating operating system resources
US8131825B2 (en) 2005-10-07 2012-03-06 Citrix Systems, Inc. Method and a system for responding locally to requests for file metadata associated with files stored remotely
US20070083620A1 (en) * 2005-10-07 2007-04-12 Pedersen Bradley J Methods for selecting between a predetermined number of execution methods for an application program
US8539481B2 (en) * 2005-12-12 2013-09-17 Microsoft Corporation Using virtual hierarchies to build alternative namespaces
US7996841B2 (en) * 2005-12-12 2011-08-09 Microsoft Corporation Building alternative views of name spaces
US8312459B2 (en) * 2005-12-12 2012-11-13 Microsoft Corporation Use of rules engine to build namespaces
US10303783B2 (en) * 2006-02-16 2019-05-28 Callplex, Inc. Distributed virtual storage of portable media files
US7720777B2 (en) * 2006-04-11 2010-05-18 Palo Alto Research Center Incorporated Method, device, and program product to monitor the social health of a persistent virtual environment
US7716149B2 (en) * 2006-04-11 2010-05-11 Palo Alto Research Center Incorporated Method, device, and program product for a social dashboard associated with a persistent virtual environment
US8108872B1 (en) * 2006-10-23 2012-01-31 Nvidia Corporation Thread-type-based resource allocation in a multithreaded processor
US8171483B2 (en) 2007-10-20 2012-05-01 Citrix Systems, Inc. Method and system for communicating between isolation environments
US20090158299A1 (en) * 2007-10-31 2009-06-18 Carter Ernst B System for and method of uniform synchronization between multiple kernels running on single computer systems with multiple CPUs installed
US8365274B2 (en) * 2008-09-11 2013-01-29 International Business Machines Corporation Method for creating multiple virtualized operating system environments
US20100223366A1 (en) * 2009-02-27 2010-09-02 At&T Intellectual Property I, L.P. Automated virtual server deployment
US8392481B2 (en) * 2009-04-22 2013-03-05 International Business Machines Corporation Accessing snapshots of a time based file system
US8090797B2 (en) 2009-05-02 2012-01-03 Citrix Systems, Inc. Methods and systems for launching applications into existing isolation environments
US8627451B2 (en) * 2009-08-21 2014-01-07 Red Hat, Inc. Systems and methods for providing an isolated execution environment for accessing untrusted content
WO2011069837A1 (en) * 2009-12-11 2011-06-16 International Business Machines Corporation A method for processing trace data
US9684785B2 (en) * 2009-12-17 2017-06-20 Red Hat, Inc. Providing multiple isolated execution environments for securely accessing untrusted content
US20120102314A1 (en) * 2010-04-01 2012-04-26 Huizhou TCL Mobile Communications Co., Ltd. Smart phone system and booting method thereof
US8412754B2 (en) * 2010-04-21 2013-04-02 International Business Machines Corporation Virtual system administration environment for non-root user
US8995982B2 (en) 2011-11-16 2015-03-31 Flextronics Ap, Llc In-car communication between devices
WO2012044546A2 (en) * 2010-10-01 2012-04-05 Imerj, Llc Auto-waking of a suspended os in a dockable system
US9092149B2 (en) 2010-11-03 2015-07-28 Microsoft Technology Licensing, Llc Virtualization and offload reads and writes
WO2012103231A1 (en) * 2011-01-25 2012-08-02 Google Inc. Computing platform with resource constraint negotiation
US9027151B2 (en) 2011-02-17 2015-05-05 Red Hat, Inc. Inhibiting denial-of-service attacks using group controls
US9146765B2 (en) 2011-03-11 2015-09-29 Microsoft Technology Licensing, Llc Virtual disk storage techniques
US8725782B2 (en) 2011-04-25 2014-05-13 Microsoft Corporation Virtual disk storage techniques
US9519496B2 (en) 2011-04-26 2016-12-13 Microsoft Technology Licensing, Llc Detecting and preventing virtual disk storage linkage faults
US9817582B2 (en) 2012-01-09 2017-11-14 Microsoft Technology Licensing, Llc Offload read and write offload provider
US9547656B2 (en) * 2012-08-09 2017-01-17 Oracle International Corporation Method and system for implementing a multilevel file system in a virtualized environment
US9778860B2 (en) 2012-09-12 2017-10-03 Microsoft Technology Licensing, Llc Re-TRIM of free space within VHDX
US8924972B2 (en) * 2012-09-27 2014-12-30 Oracle International Corporation Method and system for logging into a virtual environment executing on a host
US9071585B2 (en) 2012-12-12 2015-06-30 Microsoft Technology Licensing, Llc Copy offload for disparate offload providers
US9251201B2 (en) 2012-12-14 2016-02-02 Microsoft Technology Licensing, Llc Compatibly extending offload token size
US20150169373A1 (en) * 2012-12-17 2015-06-18 Unisys Corporation System and method for managing computing resources
US9367547B2 (en) * 2013-03-14 2016-06-14 Oracle International Corporation Method and system for generating and deploying container templates
US9940167B2 (en) 2014-05-20 2018-04-10 Red Hat Israel, Ltd. Identifying memory devices for swapping virtual machine memory pages
US10229272B2 (en) 2014-10-13 2019-03-12 Microsoft Technology Licensing, Llc Identifying security boundaries on computing devices
US9584317B2 (en) 2014-10-13 2017-02-28 Microsoft Technology Licensing, Llc Identifying security boundaries on computing devices
US10423465B2 (en) * 2018-02-21 2019-09-24 Rubrik, Inc. Distributed semaphore with adjustable chunk sizes
US11461843B2 (en) * 2019-05-23 2022-10-04 Capital One Services, Llc Multi-lender platform that securely stores proprietary information for pre-qualifying an applicant
CN111400127B (en) * 2020-02-28 2022-09-09 深圳平安医疗健康科技服务有限公司 Service log monitoring method and device, storage medium and computer equipment
CN111782474A (en) * 2020-06-30 2020-10-16 广东小天才科技有限公司 Log processing method and device, electronic equipment and medium
CN114124680B (en) * 2021-09-24 2023-11-17 绿盟科技集团股份有限公司 File access control alarm log management method and device

Citations (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0389151A2 (en) 1989-03-22 1990-09-26 International Business Machines Corporation System and method for partitioned cache memory management
US5155809A (en) 1989-05-17 1992-10-13 International Business Machines Corp. Uncoupling a central processing unit from its associated hardware for interaction with data handling apparatus alien to the operating system controlling said unit and hardware
US5283868A (en) 1989-05-17 1994-02-01 International Business Machines Corp. Providing additional system characteristics to a data processing system through operations of an application program, transparently to the operating system
US5291597A (en) 1988-10-24 1994-03-01 Ibm Corp Method to provide concurrent execution of distributed application programs by a host computer and an intelligent work station on an SNA network
US5325526A (en) 1992-05-12 1994-06-28 Intel Corporation Task scheduling in a multicomputer system
US5325517A (en) 1989-05-17 1994-06-28 International Business Machines Corporation Fault tolerant data processing system
US5437032A (en) 1993-11-04 1995-07-25 International Business Machines Corporation Task scheduler for a multiprocessor system
US5590314A (en) 1993-10-18 1996-12-31 Hitachi, Ltd. Apparatus for sending message via cable between programs and performing automatic operation in response to sent message
US5784706A (en) 1993-12-13 1998-07-21 Cray Research, Inc. Virtual to logical to physical address translation for distributed memory massively parallel processing systems
US5841869A (en) 1996-08-23 1998-11-24 Cheyenne Property Trust Method and apparatus for trusted processing
US5845116A (en) 1994-04-14 1998-12-01 Hitachi, Ltd. Distributed computing system
US5963911A (en) 1994-03-25 1999-10-05 British Telecommunications Public Limited Company Resource allocation
US6064811A (en) 1996-06-17 2000-05-16 Network Associates, Inc. Computer memory conservation system
US6074427A (en) 1997-08-30 2000-06-13 Sun Microsystems, Inc. Apparatus and method for simulating multiple nodes on a single machine
US6075938A (en) 1997-06-10 2000-06-13 The Board Of Trustees Of The Leland Stanford Junior University Virtual machine monitors for scalable multiprocessors
WO2000045262A2 (en) 1999-01-22 2000-08-03 Sun Microsystems, Inc. Techniques for permitting access across a context barrier in a small footprint device using global data structures
EP1043658A1 (en) 1999-04-07 2000-10-11 Bull S.A. Method for improving the performance of a multiprocessor system including a tasks waiting list and system architecture thereof
US6279046B1 (en) 1999-05-19 2001-08-21 International Business Machines Corporation Event-driven communications interface for logically-partitioned computer
US6289462B1 (en) 1998-09-28 2001-09-11 Argus Systems Group, Inc. Trusted compartmentalized computer operating system
US20020069369A1 (en) 2000-07-05 2002-06-06 Tremain Geoffrey Donald Method and apparatus for providing computer services
US20020083367A1 (en) * 2000-12-27 2002-06-27 Mcbride Aaron A. Method and apparatus for default factory image restoration of a system
US6438594B1 (en) 1999-08-31 2002-08-20 Accenture Llp Delivering service to a client via a locally addressable interface
US20020120660A1 (en) 2001-02-28 2002-08-29 Hay Russell C. Method and apparatus for associating virtual server identifiers with processes
US20020156824A1 (en) 2001-04-19 2002-10-24 International Business Machines Corporation Method and apparatus for allocating processor resources in a logically partitioned computer system
EP1253516A2 (en) 2001-04-25 2002-10-30 Sun Microsystems, Inc. Apparatus and method for scheduling processes on a fair share basis
US20020174215A1 (en) 2001-05-16 2002-11-21 Stuart Schaefer Operating system abstraction and protection layer
US20020173984A1 (en) 2000-05-22 2002-11-21 Robertson James A. Method and system for implementing improved containers in a global ecosystem of interrelated services
US20030014466A1 (en) 2001-06-29 2003-01-16 Joubert Berger System and method for management of compartments in a trusted operating system
EP1282038A2 (en) 2001-07-16 2003-02-05 Matsushita Electric Industrial Co., Ltd. Distributed processing system and distributed job processing method
EP1300766A2 (en) 2001-09-25 2003-04-09 Sun Microsystems, Inc. Method and apparatus for partitioning resources within a computer system
US20030069939A1 (en) 2001-10-04 2003-04-10 Russell Lance W. Packet processing in shared memory multi-computer systems
US6557168B1 (en) 2000-02-25 2003-04-29 Sun Microsystems, Inc. System and method for minimizing inter-application interference among static synchronized methods
US6633963B1 (en) 2000-03-31 2003-10-14 Intel Corporation Controlling access to multiple memory zones in an isolated execution environment
US20040010624A1 (en) 2002-04-29 2004-01-15 International Business Machines Corporation Shared resource support for internet protocol
US6681258B1 (en) 2000-05-31 2004-01-20 International Business Machines Corporation Facility for retrieving data from a network adapter having a shared address resolution table
US6681238B1 (en) 1998-03-24 2004-01-20 International Business Machines Corporation Method and system for providing a hardware machine function in a protected virtual machine
US6701460B1 (en) 1999-10-21 2004-03-02 Sun Microsystems, Inc. Method and apparatus for testing a computer system through software fault injection
US6725457B1 (en) 2000-05-17 2004-04-20 Nvidia Corporation Semaphore enhancement to improve system performance
US6738832B2 (en) 2001-06-29 2004-05-18 International Business Machines Corporation Methods and apparatus in a logging system for the adaptive logger replacement in order to receive pre-boot information
US20040162914A1 (en) 2003-02-13 2004-08-19 Sun Microsystems, Inc. System and method of extending virtual address resolution for mapping networks
US6792514B2 (en) 2001-06-14 2004-09-14 International Business Machines Corporation Method, system and computer program product to stress and test logical partition isolation features
US20040210760A1 (en) 2002-04-18 2004-10-21 Advanced Micro Devices, Inc. Computer system including a secure execution mode-capable CPU and a security services processor connected via a secure communication path
US20040215848A1 (en) 2003-04-10 2004-10-28 International Business Machines Corporation Apparatus, system and method for implementing a generalized queue pair in a system area network
US6813766B2 (en) 2001-02-05 2004-11-02 Interland, Inc. Method and apparatus for scheduling processes based upon virtual server identifiers
US20050021788A1 (en) * 2003-05-09 2005-01-27 Tucker Andrew G. Global visibility controls for operating system partitions
US6859926B1 (en) 2000-09-14 2005-02-22 International Business Machines Corporation Apparatus and method for workload management using class shares and tiers
US6944699B1 (en) 1998-05-15 2005-09-13 Vmware, Inc. System and method for facilitating context-switching in a multi-context computer system
US6961941B1 (en) 2001-06-08 2005-11-01 Vmware, Inc. Computer configuration for resource management in systems including a virtual machine
US7051340B2 (en) 2001-11-29 2006-05-23 Hewlett-Packard Development Company, L.P. System and method for isolating applications from each other
US7076634B2 (en) 2003-04-24 2006-07-11 International Business Machines Corporation Address translation manager and method for a logically partitioned computer system
US7095738B1 (en) 2002-05-07 2006-08-22 Cisco Technology, Inc. System and method for deriving IPv6 scope identifiers and for mapping the identifiers into IPv6 addresses
US7096469B1 (en) 2000-10-02 2006-08-22 International Business Machines Corporation Method and apparatus for enforcing capacity limitations in a logically partitioned system
US7188120B1 (en) 2003-05-09 2007-03-06 Sun Microsystems, Inc. System statistics virtualization for operating systems partitions

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69113181T2 (en) 1990-08-31 1996-05-02 Ibm Method and device for cross-division control in a distributed processing environment.
JP2671804B2 (en) 1994-05-27 1997-11-05 日本電気株式会社 Hierarchical resource management method
US5983270A (en) 1997-03-11 1999-11-09 Sequel Technology Corporation Method and apparatus for managing internetwork and intranetwork activity
US5925102A (en) 1997-03-28 1999-07-20 International Business Machines Corporation Managing processor resources in a multisystem environment in order to provide smooth real-time data streams, while enabling other types of applications to be processed concurrently
US6247109B1 (en) 1998-06-10 2001-06-12 Compaq Computer Corp. Dynamically assigning CPUs to different partitions each having an operation system instance in a shared memory space
US6356915B1 (en) 1999-02-22 2002-03-12 Starbase Corp. Installable file system having virtual file system drive, virtual device driver, and virtual disks
DE19954500A1 (en) 1999-11-11 2001-05-17 Basf Ag Carbodiimides with carboxyl or caboxylate groups
US6938169B1 (en) 1999-12-10 2005-08-30 Sun Microsystems, Inc. Channel-specific file system views in a private network using a public-network infrastructure
US7140020B2 (en) 2000-01-28 2006-11-21 Hewlett-Packard Development Company, L.P. Dynamic management of virtual partition computer workloads through service level optimization
US7225223B1 (en) * 2000-09-20 2007-05-29 Hewlett-Packard Development Company, L.P. Method and system for scaling of resource allocation subject to maximum limits
US7032222B1 (en) * 2000-10-13 2006-04-18 Hewlett-Packard Development Company, L.P. Method and system for determining resource allocation to users by granting request based on user associated different limits and resource limit
US7461144B1 (en) * 2001-02-16 2008-12-02 Swsoft Holdings, Ltd. Virtual private server with enhanced security
US7099948B2 (en) 2001-02-16 2006-08-29 Swsoft Holdings, Ltd. Virtual computing environment
JP4185363B2 (en) 2001-02-22 2008-11-26 ビーイーエイ システムズ, インコーポレイテッド System and method for message encryption and signing in a transaction processing system
US7076633B2 (en) 2001-03-28 2006-07-11 Swsoft Holdings, Ltd. Hosting service providing platform system and method
US7194439B2 (en) * 2001-04-30 2007-03-20 International Business Machines Corporation Method and system for correlating job accounting information with software license information
US7227841B2 (en) * 2001-07-31 2007-06-05 Nishan Systems, Inc. Packet input thresholding for resource distribution in a network switch
US7103745B2 (en) 2002-10-17 2006-09-05 Wind River Systems, Inc. Two-level operating system architecture
US7673308B2 (en) * 2002-11-18 2010-03-02 Symantec Corporation Virtual OS computing environment
US7027463B2 (en) 2003-07-11 2006-04-11 Sonolink Communications Systems, Llc System and method for multi-tiered rule filtering

Patent Citations (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5291597A (en) 1988-10-24 1994-03-01 Ibm Corp Method to provide concurrent execution of distributed application programs by a host computer and an intelligent work station on an SNA network
EP0389151A2 (en) 1989-03-22 1990-09-26 International Business Machines Corporation System and method for partitioned cache memory management
US5155809A (en) 1989-05-17 1992-10-13 International Business Machines Corp. Uncoupling a central processing unit from its associated hardware for interaction with data handling apparatus alien to the operating system controlling said unit and hardware
US5283868A (en) 1989-05-17 1994-02-01 International Business Machines Corp. Providing additional system characteristics to a data processing system through operations of an application program, transparently to the operating system
US5325517A (en) 1989-05-17 1994-06-28 International Business Machines Corporation Fault tolerant data processing system
US5325526A (en) 1992-05-12 1994-06-28 Intel Corporation Task scheduling in a multicomputer system
US5590314A (en) 1993-10-18 1996-12-31 Hitachi, Ltd. Apparatus for sending message via cable between programs and performing automatic operation in response to sent message
US5437032A (en) 1993-11-04 1995-07-25 International Business Machines Corporation Task scheduler for a miltiprocessor system
US5784706A (en) 1993-12-13 1998-07-21 Cray Research, Inc. Virtual to logical to physical address translation for distributed memory massively parallel processing systems
US5963911A (en) 1994-03-25 1999-10-05 British Telecommunications Public Limited Company Resource allocation
US5845116A (en) 1994-04-14 1998-12-01 Hitachi, Ltd. Distributed computing system
US6064811A (en) 1996-06-17 2000-05-16 Network Associates, Inc. Computer memory conservation system
US5841869A (en) 1996-08-23 1998-11-24 Cheyenne Property Trust Method and apparatus for trusted processing
US6075938A (en) 1997-06-10 2000-06-13 The Board Of Trustees Of The Leland Stanford Junior University Virtual machine monitors for scalable multiprocessors
US6074427A (en) 1997-08-30 2000-06-13 Sun Microsystems, Inc. Apparatus and method for simulating multiple nodes on a single machine
US6681238B1 (en) 1998-03-24 2004-01-20 International Business Machines Corporation Method and system for providing a hardware machine function in a protected virtual machine
US6944699B1 (en) 1998-05-15 2005-09-13 Vmware, Inc. System and method for facilitating context-switching in a multi-context computer system
US6289462B1 (en) 1998-09-28 2001-09-11 Argus Systems Group, Inc. Trusted compartmentalized computer operating system
WO2000045262A2 (en) 1999-01-22 2000-08-03 Sun Microsystems, Inc. Techniques for permitting access across a context barrier in a small footprint device using global data structures
US6993762B1 (en) 1999-04-07 2006-01-31 Bull S.A. Process for improving the performance of a multiprocessor system comprising a job queue and system architecture for implementing the process
EP1043658A1 (en) 1999-04-07 2000-10-11 Bull S.A. Method for improving the performance of a multiprocessor system including a tasks waiting list and system architecture thereof
US6279046B1 (en) 1999-05-19 2001-08-21 International Business Machines Corporation Event-driven communications interface for logically-partitioned computer
US6438594B1 (en) 1999-08-31 2002-08-20 Accenture Llp Delivering service to a client via a locally addressable interface
US6701460B1 (en) 1999-10-21 2004-03-02 Sun Microsystems, Inc. Method and apparatus for testing a computer system through software fault injection
US6557168B1 (en) 2000-02-25 2003-04-29 Sun Microsystems, Inc. System and method for minimizing inter-application interference among static synchronized methods
US6633963B1 (en) 2000-03-31 2003-10-14 Intel Corporation Controlling access to multiple memory zones in an isolated execution environment
US6725457B1 (en) 2000-05-17 2004-04-20 Nvidia Corporation Semaphore enhancement to improve system performance
US20020173984A1 (en) 2000-05-22 2002-11-21 Robertson James A. Method and system for implementing improved containers in a global ecosystem of interrelated services
US6681258B1 (en) 2000-05-31 2004-01-20 International Business Machines Corporation Facility for retrieving data from a network adapter having a shared address resolution table
US20020069369A1 (en) 2000-07-05 2002-06-06 Tremain Geoffrey Donald Method and apparatus for providing computer services
US6859926B1 (en) 2000-09-14 2005-02-22 International Business Machines Corporation Apparatus and method for workload management using class shares and tiers
US7096469B1 (en) 2000-10-02 2006-08-22 International Business Machines Corporation Method and apparatus for enforcing capacity limitations in a logically partitioned system
US20020083367A1 (en) * 2000-12-27 2002-06-27 Mcbride Aaron A. Method and apparatus for default factory image restoration of a system
US6813766B2 (en) 2001-02-05 2004-11-02 Interland, Inc. Method and apparatus for scheduling processes based upon virtual server identifiers
US20020120660A1 (en) 2001-02-28 2002-08-29 Hay Russell C. Method and apparatus for associating virtual server identifiers with processes
US6957435B2 (en) 2001-04-19 2005-10-18 International Business Machines Corporation Method and apparatus for allocating processor resources in a logically partitioned computer system
US20020156824A1 (en) 2001-04-19 2002-10-24 International Business Machines Corporation Method and apparatus for allocating processor resources in a logically partitioned computer system
US20020161817A1 (en) 2001-04-25 2002-10-31 Sun Microsystems, Inc. Apparatus and method for scheduling processes on a fair share basis
EP1253516A2 (en) 2001-04-25 2002-10-30 Sun Microsystems, Inc. Apparatus and method for scheduling processes on a fair share basis
US20020174215A1 (en) 2001-05-16 2002-11-21 Stuart Schaefer Operating system abstraction and protection layer
US6961941B1 (en) 2001-06-08 2005-11-01 Vmware, Inc. Computer configuration for resource management in systems including a virtual machine
US6792514B2 (en) 2001-06-14 2004-09-14 International Business Machines Corporation Method, system and computer program product to stress and test logical partition isolation features
US20030014466A1 (en) 2001-06-29 2003-01-16 Joubert Berger System and method for management of compartments in a trusted operating system
US6738832B2 (en) 2001-06-29 2004-05-18 International Business Machines Corporation Methods and apparatus in a logging system for the adaptive logger replacement in order to receive pre-boot information
EP1282038A2 (en) 2001-07-16 2003-02-05 Matsushita Electric Industrial Co., Ltd. Distributed processing system and distributed job processing method
EP1300766A2 (en) 2001-09-25 2003-04-09 Sun Microsystems, Inc. Method and apparatus for partitioning resources within a computer system
US20030069939A1 (en) 2001-10-04 2003-04-10 Russell Lance W. Packet processing in shared memory multi-computer systems
US7051340B2 (en) 2001-11-29 2006-05-23 Hewlett-Packard Development Company, L.P. System and method for isolating applications from each other
US20040210760A1 (en) 2002-04-18 2004-10-21 Advanced Micro Devices, Inc. Computer system including a secure execution mode-capable CPU and a security services processor connected via a secure communication path
US20040010624A1 (en) 2002-04-29 2004-01-15 International Business Machines Corporation Shared resource support for internet protocol
US7095738B1 (en) 2002-05-07 2006-08-22 Cisco Technology, Inc. System and method for deriving IPv6 scope identifiers and for mapping the identifiers into IPv6 addresses
US20040162914A1 (en) 2003-02-13 2004-08-19 Sun Microsystems, Inc. System and method of extending virtual address resolution for mapping networks
US20040215848A1 (en) 2003-04-10 2004-10-28 International Business Machines Corporation Apparatus, system and method for implementing a generalized queue pair in a system area network
US7076634B2 (en) 2003-04-24 2006-07-11 International Business Machines Corporation Address translation manager and method for a logically partitioned computer system
US20050021788A1 (en) * 2003-05-09 2005-01-27 Tucker Andrew G. Global visibility controls for operating system partitions
US7188120B1 (en) 2003-05-09 2007-03-06 Sun Microsystems, Inc. System statistics virtualization for operating systems partitions

Non-Patent Citations (47)

* Cited by examiner, † Cited by third party
Title
Claims As Filed in European Patent Application No. 04252690.5 (6 pgs.).
Claims, Application No. 04252689.7-1243, 5 pages.
Dalton, Chris and Choo, Tse Huong, "An Operating System Approach to Securing E-Services", Communications of the ACM (ISSN: 0001-0782), vol. 44, issue 2, © 2001, 8 pgs.
Current Claims in EPO patent application No. 04 252 690.5-2211, 9 pgs, attached.
Current Claims, European patent application 04252689.7, 6 pages.
Current Claims, Foreign application No. 04 252 690.5-2211, 9 pages.
Czajkowski, G., "Application isolation in the Java Virtual Machine", 2000, ACM Press, Proceedings of the 15th ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, pp. 354-366.
Czajkowski, G., "Multitasking without compromise: a virtual machine evolution", ACM Press, Proceedings of the 16th ACM SIGPLAN Conference on Object Oriented Programming, Systems, Languages, and Applications, dated Oct. 2001, pp. 125-138.
European Patent Office, "Communication pursuant to Article 94(3) EPC", Foreign application No. 04 252 690.5-2211, 5 pages.
European Patent Office, "European Search Report," application No. 04252689.7, mailing date Jul. 28, 2005, 3 pages.
European Patent Office, "Result of Consultation", Application No. 04252689.7-1243, dated Aug. 11, 2008, 2 pages.
European Search Report from the European Patent Office for Foreign Patent Application No. 04252690.5 (3 pgs.).
Hewlett-Packard, "Installing and Managing HP-UX Virtual Partitions (vPars)", Third Edition, Part No. T1335-90018, Copyright Hewlett-Packard Company, Nov. 2002, pp. 1-4, 17-44, 72-75, and 157-161.
Hope, "Using Jails in FreeBSD for fun and profit", ;Login: The Magazine of USENIX &SAGE, vol. 27, No. 3, dated Jun. 2002, pp. 48-55.
Hope, Paco, "Using Jails in FreeBSD for Fun and Profit", ;Login: The Magazine of Usenix and Sage, vol. 27, No. 3, Jun. 2002, 9 pages.
IBM, "Partitioning for the IBM eServer pSeries 690 System", © Copyright IBM Corp. 2001, 12 pgs.
Chapman, Mark T., "Effective Server Consolidation and Resource Management with System Partitioning", IBM System Partitioning on IBM eServer xSeries Servers, IBM Server Group, dated Dec. 2001, 23 pgs.
Kamp, Poul-Henning and Watson, Robert N.M., "Jails: Confining the omnipotent root", The FreeBSD Project, http://www.servetheweb.com/, 15 pgs.
Kamp, Poul-Henning, "Rethinking /dev and devices in the UNIX kernel", BSDCon 2002 Paper, retrieved from website <http://www.usenix.org/events/bsdcon02/full-papers/kamp/kamp-html/index.html> Printed May 1, 2007, 18 pages.
McDougall, Richard, et al., "Resource Management", Prentice Hall, 1999, 25 pages.
Hinden, R. (Nokia) and Deering, S. (Cisco Systems), "IP Version 6 Addressing Architecture", Network Working Group, dated Jul. 1998, 28 pgs.
Noordende et al., "Secure and Portable Confinement of Untrusted Programs", ACM, 2002, 14 pages.
Official Action from EPO for foreign patent application No. 04 252 690.5-2211 dated Jun. 10, 2005, 6 pgs, attached.
Official Action from EPO for foreign patent application No. 04 252 690.5-2211 dated Nov. 23, 2005, 5 pgs, attached.
Osman, S., et al., "The design and implementation of Zap: a system for migrating computing environments", SIGOPS Operating Systems Review, vol. 36, issue SI, dated Dec. 2000, pp. 361-376.
Presotto et al., "Interprocess Communication in the Ninth Edition Unix System", John Wiley & Sons, Ltd., dated Mar. 1990, 4 pages.
Stevens, "Advanced programming in the Unix Environment", Addison-Wesley, 1993, pp. 427-436.
Sun Microsystems, "Sun Enterprise™ 1000 Server: Dynamic System Domains," White Paper Online, Feb. 26, 2003, retrieved from the internet at <http://www.sun.com/servers/highend/whitepapers/domains.html?facet=-1>, retrieved on Jun. 21, 2005, XP-002332946, 7 pages.
Faden, Glenn, "Server Virtualization with Trusted Solaris™ 8 Operating Environment", Sun Microsystems, Inc., Sun BluePrints™ OnLine, Feb. 2002, http://www.sun.com/blueprints, 21 pgs.
SunSoft, a Sun Microsystems, Inc. Business, "File System Administration", © 1994 Sun Microsystems, Inc., 62 pgs.
Thompson, K., "UNIX Implementation", Bell Laboratories, The Bell System Technical Journal, 1978, pp. 1-9.
U.S. Appl. No. 10/744,360, filed Dec. 22, 2003.
U.S. Appl. No. 10/761,622, filed Jan. 20, 2004.
U.S. Appl. No. 10/762,066, filed Jan. 20, 2004.
U.S. Appl. No. 10/762,067, filed Jan. 20, 2004.
U.S. Appl. No. 10/763,147, filed Jan. 21, 2004.
U.S. Appl. No. 10/766,094, filed Jan. 27, 2004.
U.S. Appl. No. 10/767,003, filed Jan. 28, 2004.
U.S. Appl. No. 10/767,117, filed Jan. 28, 2004.
U.S. Appl. No. 10/767,118, filed Jan. 28, 2004.
U.S. Appl. No. 10/768,303, filed Jan. 29, 2004.
U.S. Appl. No. 10/769,415, filed Jan. 30, 2004.
U.S. Appl. No. 10/771,698, filed Feb. 3, 2004.
U.S. Appl. No. 10/771,827, filed Feb. 3, 2004.
U.S. Appl. No. 10/833,474, filed Apr. 27, 2004.
"Virtual Private Servers and Security Contexts", dated May 10, 2004, http://www.solucorp.qc.ca/miscprj/s-content.hc?prjstate=1&nodoc=0, 2 pgs.
Watson, "TrustedBSD: Adding Trusted Operating System Features to FreeBSD", The USENIX Association, 2001, 14 pages.

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080046610A1 (en) * 2006-07-20 2008-02-21 Sun Microsystems, Inc. Priority and bandwidth specification at mount time of NAS device volume
US8095675B2 (en) * 2006-07-20 2012-01-10 Oracle America, Inc. Priority and bandwidth specification at mount time of NAS device volume
US20090037718A1 (en) * 2007-07-31 2009-02-05 Ganesh Perinkulam I Booting software partition with network file system
US7900034B2 (en) * 2007-07-31 2011-03-01 International Business Machines Corporation Booting software partition with network file system
US8301597B1 (en) 2011-09-16 2012-10-30 Ca, Inc. System and method for network file system server replication using reverse path lookup
US9552367B2 (en) 2011-09-16 2017-01-24 Ca, Inc. System and method for network file system server replication using reverse path lookup

Also Published As

Publication number Publication date
US7567985B1 (en) 2009-07-28
US7805726B1 (en) 2010-09-28
US7461080B1 (en) 2008-12-02
US7793289B1 (en) 2010-09-07
US7526774B1 (en) 2009-04-28
US8516160B1 (en) 2013-08-20

Similar Documents

Publication Publication Date Title
US7490074B1 (en) Mechanism for selectively providing mount information to processes running within operating system partitions
US7389512B2 (en) Interprocess communication within operating system partitions
US7437556B2 (en) Global visibility controls for operating system partitions
US7882227B2 (en) Mechanism for implementing file access control across a network using labeled containers
JP4297790B2 (en) Persistent key-value repository with pluggable architecture abstracting physical storage
US6895400B1 (en) Dynamic symbolic link resolution
US5946685A (en) Global mount mechanism used in maintaining a global name space utilizing a distributed locking mechanism
EP0605959B1 (en) Apparatus and methods for making a portion of a first name space available as a portion of a second name space
US7392261B2 (en) Method, system, and program for maintaining a namespace of filesets accessible to clients over a network
US6714949B1 (en) Dynamic file system configurations
US7885975B2 (en) Mechanism for implementing file access control using labeled containers
JPH11327919A (en) Method and device for object-oriented interruption system
US8892878B2 (en) Fine-grained privileges in operating system partitions
US7188120B1 (en) System statistics virtualization for operating systems partitions
US8938554B2 (en) Mechanism for enabling a network address to be shared by multiple labeled containers
US8938473B2 (en) Secure windowing for labeled containers
US7337445B1 (en) Virtual system console for virtual application environment
EP1480124B1 (en) Method and system for associating resource pools with operating system partitions
US20230131665A1 (en) Updating virtual images of computing environments
AU2003220549B2 (en) Key-value repository with a pluggable architecture

Legal Events

Date Code Title Description
AS Assignment

Owner name: SUN MICROSYSTEMS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LEONARD, OZGUR C.;TUCKER, ANDREW G.;REEL/FRAME:014946/0897;SIGNING DATES FROM 20040125 TO 20040127

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12