CA2005463C - Address translation mechanism for multiple-sized pages - Google Patents

Address translation mechanism for multiple-sized pages

Info

Publication number
CA2005463C
Authority
CA
Canada
Prior art keywords
address
page
virtual
bits
real
Prior art date
Legal status
Expired - Fee Related
Application number
CA002005463A
Other languages
French (fr)
Other versions
CA2005463A1 (en)
Inventor
Steven Wayne White
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Publication of CA2005463A1 publication Critical patent/CA2005463A1/en
Application granted granted Critical
Publication of CA2005463C publication Critical patent/CA2005463C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10 Address translation
    • G06F12/1027 Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • G06F12/1036 Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/65 Details of virtual memory and virtual address translation
    • G06F2212/652 Page size control

Abstract

A dynamic address translation mechanism includes a first directory-look-aside-table (DLAT) for 4KB page sizes and a second DLAT for 1MB page sizes. The page size need not be known prior to DLAT presentation.
When a virtual address is presented for translation, it is applied simultaneously to both DLATs for translation by either DLAT if it contains a page address entry corresponding to the virtual address presented. If a DLAT "miss" occurs, segment/page table searching is initiated. The DLAT page sizes are preferably made equal to the segment/page sizes and placed on 4KB and 1MB boundaries. Virtual page addresses lie within either a 1MB page or a 4KB page, and an entry for any virtual address can exist in only one (not both) of the DLATs.

Description

ADDRESS TRANSLATION MECHANISM FOR MULTIPLE-SIZED PAGES

Background of the Invention

Dynamic address translation provides the ability to interrupt the execution of a program, record it and its data in auxiliary storage, such as a direct access storage device (DASD), and at a later time return the program and data to different main storage locations for resumption of execution. It can provide a user with a system wherein storage appears to be larger than the main storage. This apparent (virtual) storage uses virtual addresses to designate locations therein and is normally maintained in auxiliary storage. It occurs in blocks of addresses, called pages, and only the most recently referred-to pages are assigned to occupy blocks of physical main storage.
As the user refers to pages of virtual storage that do not appear in main storage, they are brought in to replace pages in main storage that are less likely to be needed in the near future.
Virtual addressing has become a key feature in the architecture of many large computers. Virtual addressing allows programs to appear logically contiguous to the user while not being physically contiguous in the storage system. Recently accessed portions of the virtual space are mapped into the main storage unit. The mapping information is often stored hierarchically in a directory comprising a segment table with entries corresponding to contiguous 1MB (megabyte) segments and page tables with entries for 4KB (kilobyte) pages within a segment.
Translation of the virtual address to a real address requires the mapping information that can be gained by searching the appropriate segment and page tables. As this searching process is time consuming, the number of full searches is reduced by retaining information for some of the recent translations in a Directory-Look-Aside-Table (DLAT). For virtual addresses covered by the DLAT, the translation process, which is required for almost every storage access, requires only a couple of machine cycles. For addresses not covered by the DLAT, the process of searching the directory ranges from about 15 to 60 cycles if the segment and page tables are in main storage. Each DLAT entry contains the information for mapping an entire page of storage, frequently 4KB. The amount of storage covered by the DLAT depends on the number of entries in the DLAT and the size of the page.
When determining the optimal size for a page, a compromise is struck between a page large enough to amortize the overhead of swapping pages and a page small enough to incur minimal degradation due to granularity, i.e., not waste storage by committing a large page for a small object. While small (4KB) pages may be appropriate for code segments (instructions) and small data objects, high performance scientific and engineering machines often benefit from large pages. In recent years, both data objects and storage capacities have grown substantially; now a large page would often be more efficient. In comparison to data objects which are hundreds of megabytes, a 1MB page offers sufficiently fine granularity.
Furthermore, with the introduction of fast, mass-storage devices in the order of one or more gigabytes (GB), such as the IBM Expanded Storage, the data transfer times for 4KB and 1MB pages have been reduced to tens of cycles and thousands of cycles, respectively. With the thousands of cycles of software overhead which are incurred while resolving a page fault, resolving a 1MB page fault may require only three to five times as long as a 4KB page fault. For transfers of large, contiguous blocks of data, a 4KB page system will incur the overhead hundreds of times that for page faults in a 1MB page system.
Large pages provide many side benefits. For example, vector fetches and stores benefit from large pages simply by decreasing the number of possible page crossings. Even if the next required page is resident in physical storage, the vector pipe may be interrupted on a page crossing to verify that the page has been brought into the memory. This interruption can result in a noticeable performance degradation.
Large pages in the order of 1MB may be useful as a possible solution for scientific applications which incur performance degradation as a result of the use of small pages. To optimize the page size for an application (or a portion of the application), multiple-size pages are desirable. To date, known DLATs are designed to handle a single page size; however, to support other than a uniform page size, modifications to the current DLAT(s) would be required.
The CDC (registered trademark of Control Data Corporation) 7600 and CYBER 205 have both small and large pages. The CYBER 205 offers small (4KB, 16KB, and 64KB) pages in which the page size selection must be made "via an operating system software installation parameter." Therefore, a given job can allow only one size of small page. However, a large page (512KB) is also allowed. The CYBER translation unit handles multiple-sized pages by creating a list of "associative words". Each word contains information similar to a DLAT entry. This list is stored in main storage and the upper (most recently used) 16 entries can be loaded into an internal set of registers by instruction. The translation process consists of first searching within the internal registers. If a match (a "hit") is found, the entry is moved to the top of the list and the address is resolved. If no match is found in the first 16 entries, the rest of the list, in memory, is searched two entries at a time. If a matching entry is found, it is moved to the top of the list. Otherwise, a page fault is generated. This can require a large number of machine cycles.
Although the CYBER scheme allows a translation table to handle entries which are not of uniform size, it is not an attractive solution in the future high performance scientific processor market. The list of associative words defines all of the pages in physical storage. Although this may be practical for small storage requirements, future jobs require substantially more storage than is available on the CYBER. Some manufacturers are providing main storage of 256MB and even 2GB (gigabytes). These large real storages imply even greater data object sizes and larger virtual spaces to contain them. Even with 1MB pages, a 1GB data object would require 1000 pages; searching 2 entries per cycle could degrade performance substantially.
Increasing the number of entries contained in the associative registers may be prohibitive based on the amounts of logic required.
U.S. Patent 4,285,040 shows the support of multiple (two) page sizes, 128B and 4KB. The patented solution has many shortcomings; it is not feasible with large address spaces such as those required in high-performance scientific and engineering processors.
The implementation described requires that registers be available to concurrently retain the base address values for all of the segments in the virtual address space. Even for large pages, such as 1MB, current address spaces (2GB) would require 2048 such registers.
Furthermore, it is expected that user requirements will force the designers to allow substantially larger address spaces in the near future, thereby dramatically increasing the number of segment registers required for the approach described in the patent.
Due to the "cache-like" structure used to retain this segment information (lMB) in the present application, the number of equivalent registers can be substantially reduced to approximately forty or fifty and still provide acceptable "hit-ratios", thereby allowing nearly optimal performance.
For accesses to data in small paqes, additional storage accesses are required for the ap~roach described in the patent. Quoting from near the top of the column labeled "3", "The subsegment descriptors are contained in a table stored in the stora~e of the system. Therefore, when the mechanism is operatinq in the second or subsegment mode, it is necessary to make ~ an extra cycle to select and fetch one of the subsegment descriptors." This "extra cycle" is a storage reference cycle and on many processors (with larqe amounts of storaqe) this will translate to manY
processor cycles.
Due to the multiple "cache-like" structures of the present invention, the most recently used "page information" (subsegment descriptors) is readily available, resulting in significant performance improvement. Generally, only a single search cycle is required.
Although the design described in the patent has good performance for accesses to "large" pages, the number of registers required to retain all of the segment pointers is unattractive for large address spaces. Furthermore, due to the exponentially larger number of page table entries, these entries must be stored in system storage. Therefore, for accesses to small pages, each "user" storage access results in two "real" storage accesses, one for the subsegment descriptor and one for the user data. As storage delays and storage contention contribute heavily to system performance degradation, such an approach would severely handicap a medium to high performance processor.
U.S. Patent 3,675,215 describes a sequential search process which "continues with the count being incremented by one for each mismatch until the ID of the fetched entry matches the requested virtual address, or until the count exceeds the number of addresses in the subset, in which case a missing address exception occurs". A set of "chains" of translation information is maintained. The virtual address translation process consists of searching a chain. For each "page fault", an entire chain is searched to detect the occurrence of the page fault.
The DLAT access of the present application is always as fast (sometimes tens of times faster) as the chain approach in providing the information required to translate the virtual address. This is primarily due to the associative (parallel) search inherent in the DLAT structure, whereas the "chain" structure requires a serial search. The minimum and maximum number of cycles for finding a translated page in the "cache-like" structure of the present application is approximately two cycles. Furthermore, on the occurrence of a page fault, recognition of the fault is dramatically quicker with the DLAT/segment/page table scheme in comparison to the chain approach.
A preferred form of the present improvement includes a set associative arrangement which is used in systems such as the 308X and 3090 families marketed by International Business Machines, Inc. Set associative arrangements are well known and, for example, are used in the structures described in U.S. Patents 4,638,426 and 4,695,950. Some of the virtual address bits are used to select one set of entries; the set of entries (usually two) is then associatively searched. The set associative arrangement allows fast access to a large number of entries (usually 256 to 512) but requires a small associative search (usually two or four entries).
The problem in applying this method to include multiple-sized pages comes in the selection of the bits which are used to select the set of entries. If DLAT entries can cover different page sizes, the bits must be selected from those which differentiate between storage segments that are at least as large as the largest page size. However, if these bits are used to select a set of DLAT entries, when the entries are for small pages, only a small contiguous block of memory can be covered. If a two-way set associative scheme is used with both 4KB and 1MB pages, the congruence class must cover a contiguous segment which is at least 1MB. However, when entries correspond to 4KB, only 8KB (two entries) of the 1MB contiguous space can be covered.
The present improvement provides a DLAT structure which can handle multiple-size pages concurrently. For purposes of illustration, the two page sizes used in this description will be 4KB and 1MB. A 1MB page is considered since it is the equivalent of a non-pageable segment and segments are currently part of the preferred translation process. Once the segment information has been obtained, rather than continuing the process (determining a page within a segment), an entire segment would be considered a single entity.
Since segments are on 1MB virtual boundaries, this forces the 1MB virtual pages to be on 1MB boundaries.
This diminishes the fragmentation problem encountered in multiple-size page systems and allows simplified hardware; the low order page offset bits simply pass through unchanged.
Since the IBM 3090 is a well-known machine, its DLAT facility will be used to describe the present improvement. The DLAT is two-way set associative with 128 pairs of entries, each entry representing a 4KB page. To allow the DLAT to cover a large contiguous portion of storage, the pair of DLAT entries are selected using the congruence class selection address bits immediately above the offset bits which define words within a page, i.e. adjacent pages are covered by different DLAT pairs. However, the 128 pairs of entries only provide coverage for a small portion (e.g. 1024KB in one arrangement) of a presumed 2GB space.
Without loss of generality, this improvement will be described by focusing on an implementation of such a DLAT with the exception that entries for both 4KB and 1MB will be allowed. In place of a single 128-pair DLAT in the 3090, the improvement will use two 64-pair DLAT structures, one for 4KB pages and one for 1MB pages. An inherent advantage of large pages is that a single 1MB page provides coverage for a large contiguous portion of storage with a single entry, thereby increasing the probability of a DLAT hit and eliminating the costly search through the segment and page tables.
Accordingly, it is a primary object of the present invention to provide, in high performance data processing systems having very large fast main storage devices, a dynamic address translation mechanism which is capable of far more efficient performance than those described in the prior art.
It is a more particular object of the present improvement to provide in a translation mechanism of the type described a pair of directory-lookaside-tables which are accessed simultaneously by a virtual address presented to the translation mechanism.

Summary of the Invention

The above objectives are achieved in a preferred embodiment of the invention by providing a DLAT facility for 4KB pages and a second DLAT facility for 1MB pages. Each DLAT is a two-way set associative array, each with 64 pairs of entries. Although the partitions are different for the two facilities, the virtual address is partitioned into three fields. The low order bits which describe a byte within a page are the displacement bits (A20-A31 for 4KB pages and A12-A31 for the 1MB pages). The 6 bits adjacent to the displacement bits are the congruence class selection bits. These bits (A14-A19 for 4KB pages and A6-A11 for 1MB pages) determine which pair of entries are referenced in each DLAT facility. The remaining (high-order) bits are the "tag" field.
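As a concrete illustration of this partitioning (not taken from the patent text), the following C sketch extracts the three fields for each DLAT facility, assuming the 32-bit virtual address is held in a uint32_t with IBM bit A0 as the most significant bit and A31 as the least significant bit; the helper names are my own.

```c
#include <stdint.h>

/* 4KB DLAT: displacement A20-A31, congruence class A14-A19, tag A0-A13 */
static inline uint32_t disp_4k  (uint32_t va) { return va & 0xFFFu; }          /* 12 bits */
static inline uint32_t cclass_4k(uint32_t va) { return (va >> 12) & 0x3Fu; }   /*  6 bits */
static inline uint32_t tag_4k   (uint32_t va) { return va >> 18; }             /* 14 bits */

/* 1MB DLAT: displacement A12-A31, congruence class A6-A11, tag A0-A5 */
static inline uint32_t disp_1m  (uint32_t va) { return va & 0xFFFFFu; }        /* 20 bits */
static inline uint32_t cclass_1m(uint32_t va) { return (va >> 20) & 0x3Fu; }   /*  6 bits */
static inline uint32_t tag_1m   (uint32_t va) { return va >> 26; }             /*  6 bits */
```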
In each DLAT facility, a first pair of entries are assigned to any pages having a congruence class selection field of "0". A second pair of entries are assigned to any page having a congruence class selection field of "1", another pair for "2", and so on until the final pair of entries, which are assigned any page in the "63" congruence class. By using the congruence class selection bits of each address being translated to address a pair of entries in each DLAT facility, it is possible to determine whether or not the translation information for the address exists in these DLATs. Each entry in the DLAT includes a first portion which consists of the bits of the tag field of the corresponding virtual page, a second portion which includes the real page frame address of this page in real storage, and a third section which includes a validity bit indicating whether or not the tag and the associated real page frame number are in fact valid, that is, such a page does presently exist in main storage. Non-pertinent DLAT fields, such as keys, etc., are not included in the present description.
When a virtual page address is presented to the DLATs by the system processor, respective congruence class selection bits (A14-A19 or A6-A11) read a pair of entries from each respective DLAT, and logic compares the tag field of each entry with corresponding bits of the virtual address presented to the DLATs. If there is an equal compare, i.e. a directory hit, and the valid bit is "on", the real page address of the entry which caused the hit is placed on the real address bus of the system for accessing main store. At the same time the displacement address bits are concatenated to the real page address for addressing the selected byte or bytes in main storage.
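A minimal C sketch of this lookup, reusing the field helpers above and assuming a hypothetical entry layout of tag, real page frame number and valid bit (keys and other non-pertinent fields are omitted, as in the description):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical DLAT entry: tag field, real page frame number, valid bit. */
typedef struct { uint32_t tag; uint32_t real_frame; bool valid; } dlat_entry;

/* Each DLAT is two-way set associative with 64 congruence classes. */
typedef struct { dlat_entry set[64][2]; } dlat;

/* Probe one DLAT.  'cclass' and 'tag' come from the field-extraction helpers;
 * 'disp_bits' is the page-offset width (12 for 4KB pages, 20 for 1MB pages).
 * Returns true on a hit and writes the translated real address.              */
static bool dlat_lookup(const dlat *d, uint32_t cclass, uint32_t tag,
                        uint32_t displacement, unsigned disp_bits,
                        uint32_t *real_addr)
{
    for (int way = 0; way < 2; way++) {             /* associative search of the pair */
        const dlat_entry *e = &d->set[cclass][way];
        if (e->valid && e->tag == tag) {
            /* concatenate the real page frame with the byte displacement */
            *real_addr = (e->real_frame << disp_bits) | displacement;
            return true;
        }
    }
    return false;                                   /* DLAT miss */
}
```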
Since both DLATs are accessed simultaneously by the virtual address, and since their pairs of entries are read out and compared with selected virtual address bits at the same time, there is no loss in performance in the DLAT structure by having two DLATs instead of one.
In the event that there is no DLAT hit, a conventional segment/page table translation sequence is initiated.
4KB pages and 1MB pages are used exclusively with only one DLAT. Each page has an entry in only one of the DLATs; an address (virtual or real) is in either a 4KB page or a 1MB page. Hardware allows 4KB and 1MB page table entries to be placed only in the DLAT for 4KB and 1MB pages respectively. Therefore a hit in one of the two DLATs guarantees a miss in the other DLAT.
As in any multiple-size page memory-management system, the operating system plays a role in enforcing certain restrictions. It is assumed that the operating system will keep 1MB pages (real and virtual) on 1MB boundaries. Currently 4KB pages (real and virtual) are maintained on 4KB boundaries. The 1MB page size is preferred since System 370 operating systems support 1MB segments. A 1MB page (virtual) is therefore almost equivalent to existing segments in these operating systems which are on 1MB boundaries.
The major change in such operating systems would be to require real address space to have 1MB pages on megabyte boundaries. Today there are only 4KB page frames which exist on 4KB boundaries. By modifying the operating systems to block out physical memory into 1MB partitions, rather than 4KB pages, several advantages can be obtained. Some of the 1MB blocks can be used for backing up 1MB pages while the remainder of the 1MB blocks can be partitioned into 256 4KB pages. The operating system can therefore manage some number of 4KB pages and some number of 1MB pages. If the number of 1MB and 4KB pages are determined statically, currently used algorithms can be used to manage 4KB and 1MB pages. Future management schemes can allow a more efficient dynamic conversion between a 1MB block of 4KB page frames and a 1MB page of physical storage. As more pages of a given size are required, some pages of the other size can be converted. To aid in gathering contiguous "free" 4KB pages to form a 1MB page, a conceptual dividing line (which can be varied) can be used to partition the upper portion of the storage (1MB pages) from the lower portion (4KB pages).
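The patent leaves the operating-system algorithms unspecified; purely as an illustration of the dividing-line idea (the 256MB real-storage size, structure and function names are assumptions), a conversion step from the 4KB pool to the 1MB pool might be sketched as:

```c
#include <stdbool.h>

#define N_BLOCKS 256                       /* e.g. 256MB of real storage in 1MB blocks */

typedef struct {
    unsigned divide;                       /* index of the first block in the 1MB-page pool */
    bool     block_free[N_BLOCKS];         /* free map for whole 1MB blocks                 */
    unsigned free_4k_frames[N_BLOCKS];     /* free 4KB frames per subdivided block          */
} real_storage;

/* Convert the free 1MB block just below the dividing line from the 4KB
 * pool to the 1MB pool when demand for large pages grows; the reverse
 * conversion is symmetric.                                              */
static bool grow_1mb_pool(real_storage *rs)
{
    if (rs->divide == 0)
        return false;                      /* no subdivided blocks remain */
    unsigned cand = rs->divide - 1;        /* block adjacent to the dividing line */
    if (rs->free_4k_frames[cand] != 256)
        return false;                      /* block still has 4KB frames in use */
    rs->divide = cand;                     /* move the dividing line down */
    rs->block_free[cand] = true;           /* now usable as a 1MB page frame */
    return true;
}
```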
Keeping 1MB pages on 1MB boundaries reduces fragmentation problems. This restriction also greatly simplifies the hardware. The low order bits (which select bytes within a page) can simply pass through the translation unit without modification. Furthermore, it decreases the number of bits required for each entry in the page/segment tables and in the DLAT.
The foregoing and other objects, features and advantages of the present improvement will be apparent from the following more particular description of the preferred embodiment of the invention as illustrated in the accompanying drawings.

Brief Description of the Drawings

Fig. 1 diagrammatically illustrates mapping of 4KB and 1MB virtual pages into 4KB and 1MB real address spaces;
Fig. 2 is a fragmentary diagrammatic illustration of the 4KB DLAT array and representative entries which are found therein in accordance with the mapping of Fig. 1;
Fig. 3 is a fragmentary diagrammatic illustration of the 1MB DLAT array and representative entries which are found therein in accordance with the mapping of Fig. 1;
Fig. 4 is a diagrammatic/schematic illustration of the DLAT arrays and hardware logic used to translate presented virtual addresses into real addresses for presentation to a main storage; and
Fig. 5 is a block diagram illustrating a preferred form of the dynamic address translation mechanism using segment tables and page tables together with the improved DLAT unit for fast translation of recently used virtual addresses.

Description of the Preferred Embodiment

With reference to Fig. 1, it will be seen that both the virtual address space 1 and the real address space 2 are divided into a number of 1MB blocks from 0MB to nMB. The virtual space blocks 3 and 4 beginning with 0MB and 50MB address values are divided into groups of 256 4KB pages 3-0 to 3-255 and 4-0 to 4-255.
On the other hand the 1MB blocks 5-8 with starting virtual addresses of 1MB, 63MB, 64MB and 65MB define 1MB pages in the virtual address space. The blocks 5-8 are mapped into real address blocks 9-12 respectively.
Certain of the 4KB pages in the blocks 3 and 4 are mapped into respective 4KB page frames 13-0 to 13-255 and 14-0 to 14-255 of real address blocks 13 and 14.
For purposes of illustrating the present improvement, it will be assumed that all of the virtual pages described above have been transferred into (mapped) main storage 2 (the real address space) from DASD (not shown). It is further assumed that certain of the virtual pages in space 1, connected by arrows 15-1 to 15-n to respective page frames in main storage 2, are the most recently referenced pages in their congruence classes and are therefore found as entries in the 4K page DLAT 20 (Fig. 2) and the 1MB page DLAT 21 (Fig. 3). Hardware (not shown) automatically reloads DLAT entries from page tables in main storage 2 as required.
The congruence classes 0-63 for DLAT 20 are assigned to page addresses 0, 4K, 8K ... 252K; for DLAT 21, to page addresses 0, 1M, 2M, 3M ... 63M.
Each of the DLATs 20 and 21 of the present improvement is preferably a two-way set associative array with sixty-four (64) pairs of entries (22-0 to 22-63 and 23-0 to 23-63), each pair representing one congruence class for 4KB and 1MB pages respectively.
The two entries in each pair are labeled A and B respectively.
The DLAT pair 22-0 (Fig. 2) includes in entry B (1) an address value of 50MB, the value of tag bits A0-A13 of virtual page 4-0 of Fig. 1, (2) the real page address value 3MB + 4KB of the page in main store 2 which has stored therein the information contained in (i.e. the contents of) the virtual page, and (3) a valid bit = 1.
Entry A of pair 22-0 includes (1) the value (0MB + 256KB) of tag bits A0-A13 for virtual page 3-64 (Fig. 1), (2) the real page address (3MB + 16KB) in main store 2 into which the virtual page was mapped and (3) a valid bit = 1.
Similarly, DLAT entry pairs 22-1 and 22-63 contain valid entries (tag bit values and real page addresses) for virtual pages 4-1, 3-65, 3-255, and 3-63 of Fig. 1 and the main store 2 page frames into which their contents have been stored.

In entry pair 23-0 of DLAT 21, entry B has been rendered invalid (valid bit = 0) and entry A includes the valid entry (tag field value = 64MB and real page address = 4MB) for the virtual page 7 of Fig. 1 and page 11 in main store 2 into which the contents of virtual page 7 have been stored.
Similarly entry pair 23-1A, B and entry 23-63A include valid entries for virtual pages 5, 8 and 6 respectively and the main store pages 9, 12 and 10 into which their contents have been stored.
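As a hedged usage example, continuing the illustrative C structures from the Summary section (none of the names appear in the patent), the two example mappings just described land in the tables as follows; the shift arithmetic reproduces the congruence classes and tag values of Figs. 2 and 3.

```c
/* Fill in the Fig. 2/3 example entries using the earlier sketch. */
static void install_fig_examples(dlat *dlat_4k, dlat *dlat_1m)
{
    /* Virtual page 4-0 at 50MB was mapped to real address 3MB + 4KB.
     * Congruence class (50MB >> 12) & 0x3F = 0 and 14-bit tag 50MB >> 18 = 200
     * (the A0-A13 bits of 50MB), so it occupies entry B of pair 22-0.        */
    uint32_t va = 50u << 20;
    dlat_entry *b = &dlat_4k->set[cclass_4k(va)][1];
    b->tag = tag_4k(va);
    b->real_frame = ((3u << 20) + 4096u) >> 12;   /* 3MB + 4KB as a 4KB frame number */
    b->valid = true;

    /* Virtual page 7 at 64MB was mapped to real block 11 at 4MB.
     * Congruence class (64MB >> 20) & 0x3F = 0 and 6-bit tag 64MB >> 26 = 1,
     * so it occupies entry A of pair 23-0 in the 1MB DLAT.                   */
    va = 64u << 20;
    dlat_entry *a = &dlat_1m->set[cclass_1m(va)][0];
    a->tag = tag_1m(va);
    a->real_frame = (4u << 20) >> 20;             /* 4MB as a 1MB frame number = 4 */
    a->valid = true;
}
```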
When (during program execution) a virtual address is presented by a processor to the DLATs 20 and 21, virtual address bits A14-A19 (Fig. 5) select a pair of entries from DLAT 20 and virtual address bits A6 to A11 (Fig. 4) select a pair of entries from DLAT 21.
However, the operating system, when it maps a virtual block (4K or 1MB), permits a virtual memory entry to only one of the DLATs 20 and 21 by assigning a block size (4KB or 1MB) to all of the virtual addresses in that block (page). Thus when a new entry is made for an accessed virtual address, hardware (not shown) notes the page size, assigns the virtual page to the appropriate DLAT 20 or 21, and stores the appropriate tag bits A0-A13 into DLAT 20 or A0-A5 into DLAT 21.
Accordingly, when a presented virtual address causes the selection of a pair of entries (as described above), only one pair from one DLAT can possibly contain the appropriate tag bits for a DLAT "hit".
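A short sketch of this placement rule, under the same assumed structures (way selection and the reload hardware itself are not modelled): the page size noted when the block is mapped decides which DLAT receives the entry, so a given virtual page can ever hit in at most one table.

```c
typedef enum { PAGE_4KB, PAGE_1MB } page_size;

/* Install a new translation into exactly one DLAT, chosen by page size. */
static void dlat_install(dlat *dlat_4k, dlat *dlat_1m, uint32_t va,
                         uint32_t real_frame, page_size size)
{
    if (size == PAGE_4KB) {
        dlat_entry *e = &dlat_4k->set[cclass_4k(va)][0];  /* replacement-way choice omitted */
        e->tag = tag_4k(va);  e->real_frame = real_frame;  e->valid = true;
    } else {
        dlat_entry *e = &dlat_1m->set[cclass_1m(va)][0];
        e->tag = tag_1m(va);  e->real_frame = real_frame;  e->valid = true;
    }
}
```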
Reference is directed to Fig. 4 which shows the DLATs 20 and 21 and the virtual address bits A0-A31 received on processor address bus lines B0-B31.
Congruence selection bits A14-A19 and A6-A11 are applied to DLATs 20 and 21 via bus lines B14-B19 and B6-B11. The DLAT 20 has entry A and B outputs 25, 26, 27 and 28, 29, 30. DLAT 21 has entry A and B outputs 31, 32, 33 and 34, 35, 36. Virtual address tag bits A0-A13 in one pair of A and B DLAT entries selected by congruence class bits A14-A19 are applied to outputs 25 and 28; valid bits in the entry pair are applied to outputs 26 and 29; and real address bits are applied to outputs 27 and 30.
Similarly tag, valid and real address bits in one DLAT entry pair, selected by congruence class bits A6-A11, are applied to outputs 31, 34 and 32, 35 and 33, 36 of DLAT 21.
Outputs 25 and 28 form inputs to compare circuits 40, 41 respectively; bus lines B0-B13 form second inputs to the compare circuits 40, 41. The outputs 42, 43 of compare circuits 40, 41 form inputs to logical AND circuits 44, 45, and entry A and B valid outputs 26, 29 form second inputs to the AND circuits 44, 45.
The outputs 46, 47 of the AND circuits 44, 45 are applied to a logical OR circuit 48; and output 46 forms a select input to a multiplexor 50. The real address outputs 27 and 30 form inputs to the multiplexor 50.
The output 51 of multiplexor 50 is concatenated to the offset bit bus lines B20-B31 at junction 52 and applied to one input of a multiplexor 53.
Identical logical means and connections are provided for DLAT 21, including entry A and B compare circuits 55, 56, AND gates 57, 58, OR circuit 59 and multiplexor 60. The output of multiplexor 60 is concatenated at junction 61 to page offset bit bus lines B12-B31 and both are coupled to second inputs of the multiplexor 53.
The outputs 62 and 63 of OR circuits 48 and 59 are used to gate signals on the lines at junction 52 or 61 through the multiplexor 53 to the real address lines A0-A31 of main storage address bus 64.
However, this occurs during a DLAT access, i.e. OR circuit 48 produces a logical "1" signal on 4KB page hit line 62 or OR circuit 59 produces a logical "1" signal on 1MB page hit line 63, only if a compare equal occurs in one of the circuits 40, 41 or 55, 56 and the valid bit (corresponding to the one compare circuit) equals "1".
The operation of the DLAT facilities in Fig. 4 will now be described. When a processor (not shown) issues a command to read data from or write data to main storage 2, virtual address bits A0-A31 are placed on bus lines B0-B31. The congruence class bits A14-A19 and A6-A11 select corresponding pairs of entries in the DLATs 20 and 21. Compare circuits 40 and 41 compare the tag bits A0-A13 of the selected entries A and B of DLAT 20 with bits A0-A13 on bus lines B0-B13; and circuits 55, 56 compare tag bits A0-A5 stored in the selected entries A and B of DLAT 21 with virtual address bits A0-A5 on bus lines B0-B5.
If one of the circuits 40, 41, 55, 56 finds an equal compare and the corresponding valid bit equals "1", then the corresponding AND gate 44, 45, 57 or 58 produces a logical "1" output signal which is applied by one of the OR circuits 48 or 59 to page "hit" line 62 or 63. Both multiplexors 50 and 60 gate through one of the two real page frame addresses applied to their inputs depending upon the logical "1" or "0" output state of the AND gate 44 and AND gate 57.
The outputs on lines 62, 63 notify the processor (not shown) that (1) a directory (DLAT) "hit" was made in a 4KB or 1MB page and the real address is on bus 64, or (2) no directory "hit" was made and the real address must be obtained by searching the segment and page tables in main storage (as shown in Fig. 5). Such table search operations are well known and are shown and described at page 205 of An Introduction to Operating Systems by H. M. Deitel, reprinted 1984, and in greater detail in the IBM System/370 Principles of Operation (GA22-7000-10) published by International Business Machines in September 1987, starting at page 3-20.
In the event of a page fault, i.e. the desired page has no entry in the segment/page tables because the page is not in main store 2, the processor must access the page from an auxiliary device such as DASD via an I/O operation.
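The combinational behaviour just described can be summarised in a small behavioural model (a software sketch of Fig. 4, not the circuit itself, reusing the hypothetical helpers from the earlier sketches): both DLATs are probed with the same virtual address, the two hit indications correspond to lines 62 and 63, and the final selection corresponds to multiplexor 53 placing the winning real address on bus 64.

```c
typedef struct { bool hit_4k, hit_1m; uint32_t real_addr; } translate_result;

static translate_result translate(const dlat *dlat_4k, const dlat *dlat_1m,
                                  uint32_t va)
{
    translate_result r = { false, false, 0 };
    uint32_t ra4 = 0, ra1 = 0;

    r.hit_4k = dlat_lookup(dlat_4k, cclass_4k(va), tag_4k(va),
                           disp_4k(va), 12, &ra4);   /* 4KB page hit (line 62) */
    r.hit_1m = dlat_lookup(dlat_1m, cclass_1m(va), tag_1m(va),
                           disp_1m(va), 20, &ra1);   /* 1MB page hit (line 63) */

    if (r.hit_4k)      r.real_addr = ra4;            /* select the 4KB path   */
    else if (r.hit_1m) r.real_addr = ra1;            /* or the 1MB path       */
    /* neither hit: a segment/page table search (Fig. 5) must be initiated */
    return r;
}
```

Because each page is entered in only one table, at most one of the two hit flags can be set for any presented address.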
Briefly, a DLAT unit 70 (Fig. 5) is comprised of DLAT arrays and associated hardware logic of the type shown and described with respect to Fig. 4 herein. The unit 70 is coupled to a source 71 of the virtual address for a desired unit of information. The virtual address includes a segment number (bits A0-A11), a page number (bits A12-A19) and a displacement (or offset) address value (bits A20-A31). This assumes that segments are on 1MB boundaries in the virtual address space, and pages are on 4KB boundaries.
Address bits A0-A31 are applied to the DLAT unit 70 in the manner described with respect to Fig. 4. If a DLAT "hit" occurs, the real address bits RA0-RA31 are applied to a real address destination 72 via bus 64.
If no DLAT "hit" occurs, the segment/page table mechanism of Fig. 5 is rendered active to locate the desired page. The segment number is added at 73 to a segment table origin value in register 74 to access an entry 75 in a segment table 76 in main storage 2.

Further action depends on whether the virtual address is in a 1MB or a 4KB page. For 1MB pages, the entry 75 will have been filled with the real (physical) page address bits RA0-RA11, which will be directed to destination 72 without further table searching; and the page number bits RA12-RA19 and displacement bits 20-31 together form the offset for the desired data in the 1MB page accessed from entry 75.
For 4KB pages, the entry 75 will have been filled with the starting address of page table 79 of the segment defined by entry 75.
The address value in entry 75 is concatenated with the page number at 77 to access an entry 78 in a page table 79 in main storage 2. The entry 78 includes the page frame bits RA0-RA19 of the real address of the desired page. The page offset bits RA20-RA31 are concatenated with bits RA0-RA19 at 72.
This assumes that the desired page is a valid page presently found in main storage 2. If the page is not found in main storage 2 by this table search, a page fault occurs; and the page must be accessed from an auxiliary storage device via I/O operations.
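A hedged, self-contained C sketch of the Fig. 5 walk under assumed table-entry layouts (the patent does not spell out entry formats): a segment entry either maps a whole 1MB page directly or points to a 256-entry page table for its 4KB pages, and a false return stands for the page-fault case.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct { bool valid, is_1mb_page; uint32_t frame_or_pt; } seg_entry; /* 1MB frame, or page-table index */
typedef struct { bool valid; uint32_t frame; } pte;                          /* 4KB frame number */

/* Returns true and fills *real_addr, or false to signal a page fault. */
static bool table_walk(const seg_entry seg_table[4096],
                       const pte *page_tables,         /* flat array of 256-entry page tables */
                       uint32_t va, uint32_t *real_addr)
{
    uint32_t seg  = va >> 20;          /* segment number, bits A0-A11  */
    uint32_t page = (va >> 12) & 0xFF; /* page number,    bits A12-A19 */

    const seg_entry *se = &seg_table[seg];
    if (!se->valid) return false;                      /* segment fault */

    if (se->is_1mb_page) {                             /* 1MB page: stop at the segment level */
        *real_addr = (se->frame_or_pt << 20) | (va & 0xFFFFF);
    } else {                                           /* 4KB page: one more table level */
        const pte *pe = &page_tables[se->frame_or_pt * 256 + page];
        if (!pe->valid) return false;                  /* page fault */
        *real_addr = (pe->frame << 12) | (va & 0xFFF);
    }
    return true;
}
```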
It will be apparent from the above description of the preferred embodiment of the present improvement, that changes may be made by those skilled in the art without departing from the true spirit and scope of the present invention; and the appended claims are intended to cover all such changes.


Claims (9)

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. A dynamic storage address translation system connected between a central processing unit and a storage system for translating virtual addresses supplied by said central processing unit to real addresses and for accessing blocks of data in said storage system, each said block of data being one of a plurality of different block sizes, said dynamic storage address translation system comprising:
a plurality of dynamic address translation means, one for each block size, and each said dynamic address translation means including a set-associative table with a plurality of congruence class locations;
at least one entry in each congruence class location for storing high order tag bits of a predetermined virtual page, a real block address of the predetermined virtual page and a page valid bit, each said dynamic address translation means responsive to a respective, unique congruence class selection field and a unique tag bit field of a supplied virtual address for determining whether or not the real block address corresponding to the supplied virtual address is stored in a congruence class entry of its set-associative table addressed by its unique congruence class selection field of the supplied virtual address.
2. The dynamic storage address translation system of Claim 1 having first and second block sizes equal respectively to segment and page sizes in the storage system, wherein the blocks of data are maintained on address boundaries of segments and pages, and wherein a virtual address entry can be placed only in the table corresponding to its block size.
3. The dynamic storage address translation system of Claim 2 wherein two translation means are provided for block sizes of 4 KB and 1 MB on 4 KB and 1 MB boundaries respectively, and wherein each virtual address has an entry in only that set-associative table corresponding to the block size in which the address lies.
4. The dynamic storage address translation system of Claim 1 further comprising means responsive to a page valid bit in said congruence class entry for providing a real block address to said storage system.
5. The dynamic storage address translation system of Claim 4 further comprising circuit means for comparing corresponding high order bits of a supplied virtual address with the tag bit fields or entries in the set-associative tables addressed by the congruence class selection fields of the supplied virtual address.
6. The dynamic storage address translation system of Claim 4 further comprising means responsive to said means for providing a real block address for concatenating, to said real block address, a block offset portion of said supplied virtual address for presentation to said storage system.
7. A method performed by a dynamic storage address translation system connected between a central processing unit and a storage system for converting virtual addresses supplied by said central processing unit into real storage addresses and for accessing blocks of data in said storage system, each block of data being one of a plurality of different block sizes, each block size being equal to the size of a corresponding virtual page size within a virtual space, comprising the steps of:
storing in congruence class entries of a plurality of set-associative arrays, one array for each block size, high order virtual address tag bits and real block address bits of recently-referenced virtual pages mapped into real blocks of storage space;
simultaneously accessing those congruence class entries in each set-associative array corresponding to its unique congruence class selection field in a supplied virtual address;
simultaneously comparing the tag bits in the accessed entries of each set-associative array with corresponding high order bits in the supplied virtual address; and selecting real block address bits of an entry which compares equal.
8. In a large, high performance data processing system having a virtual address space arranged in pages including a large page size and a significantly smaller page size, wherein said large and small pages are mapped respectively into large and small real storage blocks of corresponding size on large and small page boundaries, a method performed by a dynamic address translation system connected between a central processing unit and a storage system for converting virtual addresses having high order tag bits, congruence class bits and low order page offset bits, supplied by said central processing unit into real storage addresses for accessing blocks of data in said storage comprising the steps of:
storing in congruence class entries of a pair of set-associative arrays, one for each block size, high order virtual address tag bits and real block address bits of recently-referenced virtual pages;
simultaneously accessing congruence class entries in each array corresponding to its unique congruence class selection bits in a supplied virtual address;
simultaneously comparing the tag bits in the accessed entries of each array with corresponding bits in the supplied virtual address;
selecting real block address bits of an accessed entry which compares equal and concatenating said real block address bits to page offset bits of the supplied virtual address to form a real storage address; and in the event of no compare equal between the tag bits of the accessed entries and the supplied virtual address:
(1) searching a segment table for a valid real block address corresponding to the supplied virtual address of a large page and concatenating the offset bits of the supplied virtual address to the valid real block address in a successful search to form a real storage address, or (2) searching a segment table and page tables for a valid real block address corresponding to the supplied virtual address of a small page and concatenating a smaller group of offset bits of the supplied virtual address to the valid real block address in a successful search to form a real storage address.
9. The method of Claim 8 wherein the large pages are 1 MB in size mapped into 1 MB real storage blocks on 1 MB boundaries, and wherein the small pages are 4 KB in size mapped into 4 KB real storage blocks on 4 KB
boundaries within 1 MB virtual and real spaces assigned to a 4 KB page and block sizes.
CA002005463A 1988-12-15 1989-12-13 Address translation mechanism for multiple-sized pages Expired - Fee Related CA2005463C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US07/285,176 1988-12-15
US07/285,176 US5058003A (en) 1988-12-15 1988-12-15 Virtual storage dynamic address translation mechanism for multiple-sized pages

Publications (2)

Publication Number Publication Date
CA2005463A1 CA2005463A1 (en) 1990-06-15
CA2005463C true CA2005463C (en) 1995-06-27

Family

ID=23093089

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002005463A Expired - Fee Related CA2005463C (en) 1988-12-15 1989-12-13 Address translation mechanism for multiple-sized pages

Country Status (5)

Country Link
US (1) US5058003A (en)
EP (1) EP0373780B1 (en)
JP (1) JPH02189659A (en)
CA (1) CA2005463C (en)
DE (1) DE68923437T2 (en)

Families Citing this family (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2858795B2 (en) * 1989-07-14 1999-02-17 株式会社日立製作所 Real memory allocation method
CA2045789A1 (en) * 1990-06-29 1991-12-30 Richard Lee Sites Granularity hint for translation buffer in high performance processor
US5222222A (en) * 1990-12-18 1993-06-22 Sun Microsystems, Inc. Apparatus and method for a space saving translation lookaside buffer for content addressable memory
US5263140A (en) * 1991-01-23 1993-11-16 Silicon Graphics, Inc. Variable page size per entry translation look-aside buffer
EP0506236A1 (en) * 1991-03-13 1992-09-30 International Business Machines Corporation Address translation mechanism
EP0508577A1 (en) * 1991-03-13 1992-10-14 International Business Machines Corporation Address translation mechanism
JPH04360252A (en) * 1991-06-06 1992-12-14 Mitsubishi Electric Corp Address conversion system for virtual storage in computer
JPH0546447A (en) * 1991-08-08 1993-02-26 Hitachi Ltd Idle area retrieving method
JPH0581133A (en) * 1991-09-19 1993-04-02 Nec Corp Information processor
GB2260628A (en) * 1991-10-11 1993-04-21 Intel Corp Line buffer for cache memory
CA2285096C (en) * 1991-11-12 2000-05-09 Ibm Canada Limited-Ibm Canada Limitee Logical mapping of data objects using data spaces
JPH05165715A (en) * 1991-12-12 1993-07-02 Nec Corp Information processor
JP3183719B2 (en) * 1992-08-26 2001-07-09 三菱電機株式会社 Array type recording device
US5712998A (en) * 1993-07-13 1998-01-27 Intel Corporation Fast fully associative translation lookaside buffer with the ability to store and manage information pertaining to at least two different page sizes
US5479627A (en) * 1993-09-08 1995-12-26 Sun Microsystems, Inc. Virtual address to physical address translation cache that supports multiple page sizes
US5526504A (en) * 1993-12-15 1996-06-11 Silicon Graphics, Inc. Variable page size translation lookaside buffer
JPH07182239A (en) * 1993-12-24 1995-07-21 Nec Corp Segment division managing system
DE69428881T2 (en) * 1994-01-12 2002-07-18 Sun Microsystems Inc Logically addressable physical memory for a computer system with virtual memory that supports multiple page sizes
US5652872A (en) * 1994-03-08 1997-07-29 Exponential Technology, Inc. Translator having segment bounds encoding for storage in a TLB
US5440710A (en) * 1994-03-08 1995-08-08 Exponential Technology, Inc. Emulation of segment bounds checking using paging with sub-page validity
DE19524925A1 (en) * 1994-07-09 1996-02-01 Gmd Gmbh Address conversion system for memory management unit
US5907867A (en) * 1994-09-09 1999-05-25 Hitachi, Ltd. Translation lookaside buffer supporting multiple page sizes
US5946715A (en) * 1994-09-23 1999-08-31 Ati Technologies Inc. Page address space with varying page size and boundaries
US5682495A (en) * 1994-12-09 1997-10-28 International Business Machines Corporation Fully associative address translation buffer having separate segment and page invalidation
US5752275A (en) * 1995-03-31 1998-05-12 Intel Corporation Translation look-aside buffer including a single page size translation unit
US6643765B1 (en) * 1995-08-16 2003-11-04 Microunity Systems Engineering, Inc. Programmable processor with group floating point operations
US5802594A (en) * 1995-09-06 1998-09-01 Intel Corporation Single phase pseudo-static instruction translation look-aside buffer
US5784701A (en) * 1995-09-19 1998-07-21 International Business Machines Corporation Method and system for dynamically changing the size of a hardware system area
US5708790A (en) * 1995-12-12 1998-01-13 International Business Machines Corporation Virtual memory mapping method and system for address translation mapping of logical memory partitions for BAT and TLB entries in a data processing system
US6026476A (en) * 1996-03-19 2000-02-15 Intel Corporation Fast fully associative translation lookaside buffer
US5765190A (en) * 1996-04-12 1998-06-09 Motorola Inc. Cache memory in a data processing system
US5918251A (en) * 1996-12-23 1999-06-29 Intel Corporation Method and apparatus for preloading different default address translation attributes
US5930830A (en) * 1997-01-13 1999-07-27 International Business Machines Corporation System and method for concatenating discontiguous memory pages
US6012132A (en) * 1997-03-31 2000-01-04 Intel Corporation Method and apparatus for implementing a page table walker that uses a sliding field in the virtual addresses to identify entries in a page table
US6088780A (en) * 1997-03-31 2000-07-11 Institute For The Development Of Emerging Architecture, L.L.C. Page table walker that uses at least one of a default page size and a page size selected for a virtual address space to position a sliding field in a virtual address
US5983322A (en) * 1997-04-14 1999-11-09 International Business Machines Corporation Hardware-managed programmable congruence class caching mechanism
US6000014A (en) * 1997-04-14 1999-12-07 International Business Machines Corporation Software-managed programmable congruence class caching mechanism
US6026470A (en) * 1997-04-14 2000-02-15 International Business Machines Corporation Software-managed programmable associativity caching mechanism monitoring cache misses to selectively implement multiple associativity levels
US6112285A (en) * 1997-09-23 2000-08-29 Silicon Graphics, Inc. Method, system and computer program product for virtual memory support for managing translation look aside buffers with multiple page size support
US6182089B1 (en) 1997-09-23 2001-01-30 Silicon Graphics, Inc. Method, system and computer program product for dynamically allocating large memory pages of different sizes
JP2000276404A (en) * 1999-03-29 2000-10-06 Nec Corp Method and device for virtual storage and recording medium
US6857058B1 (en) * 1999-10-04 2005-02-15 Intel Corporation Apparatus to map pages of disparate sizes and associated methods
US6970992B2 (en) * 1999-10-04 2005-11-29 Intel Corporation Apparatus to map virtual pages to disparate-sized, non-contiguous real pages and methods relating thereto
US6665785B1 (en) * 2000-10-19 2003-12-16 International Business Machines, Corporation System and method for automating page space optimization
US6760826B2 (en) * 2000-12-01 2004-07-06 Wind River Systems, Inc. Store data in the system memory of a computing device
US7028139B1 (en) 2003-07-03 2006-04-11 Veritas Operating Corporation Application-assisted recovery from data corruption in parity RAID storage using successive re-reads
US7076632B2 (en) 2003-10-16 2006-07-11 International Business Machines Corporation Fast paging of a large memory block
US7296139B1 (en) 2004-01-30 2007-11-13 Nvidia Corporation In-memory table structure for virtual address translation system with translation units of variable range size
US7278008B1 (en) 2004-01-30 2007-10-02 Nvidia Corporation Virtual address translation system with caching of variable-range translation clusters
US7334108B1 (en) 2004-01-30 2008-02-19 Nvidia Corporation Multi-client virtual address translation system with translation units of variable-range size
US7475219B2 (en) * 2004-08-27 2009-01-06 Marvell International Ltd. Serially indexing a cache memory
US20080028181A1 (en) * 2006-07-31 2008-01-31 Nvidia Corporation Dedicated mechanism for page mapping in a gpu
JP2009009545A (en) * 2007-01-31 2009-01-15 Hewlett-Packard Development Co Lp Data processing system and method
US8037278B2 (en) * 2008-01-11 2011-10-11 International Business Machines Corporation Dynamic address translation with format control
US8335906B2 (en) * 2008-01-11 2012-12-18 International Business Machines Corporation Perform frame management function instruction for clearing blocks of main storage
US8019964B2 (en) * 2008-01-11 2011-09-13 International Business Machines Corporation Dynamic address translation with DAT protection
US8041922B2 (en) * 2008-01-11 2011-10-18 International Business Machines Corporation Enhanced dynamic address translation with load real address function
US8151083B2 (en) 2008-01-11 2012-04-03 International Business Machines Corporation Dynamic address translation with frame management
US8417916B2 (en) * 2008-01-11 2013-04-09 International Business Machines Corporation Perform frame management function instruction for setting storage keys and clearing blocks of main storage
US8041923B2 (en) 2008-01-11 2011-10-18 International Business Machines Corporation Load page table entry address instruction execution based on an address translation format control field
US8082405B2 (en) * 2008-01-11 2011-12-20 International Business Machines Corporation Dynamic address translation with fetch protection
US8117417B2 (en) 2008-01-11 2012-02-14 International Business Machines Corporation Dynamic address translation with change record override
US8677098B2 (en) 2008-01-11 2014-03-18 International Business Machines Corporation Dynamic address translation with fetch protection
US8103851B2 (en) 2008-01-11 2012-01-24 International Business Machines Corporation Dynamic address translation with translation table entry format control for indentifying format of the translation table entry
US8086811B2 (en) 2008-02-25 2011-12-27 International Business Machines Corporation Optimizations of a perform frame management function issued by pageable guests
US8095773B2 (en) 2008-02-26 2012-01-10 International Business Machines Corporation Dynamic address translation with translation exception qualifier
US8862859B2 (en) * 2010-05-07 2014-10-14 International Business Machines Corporation Efficient support of multiple page size segments
US8745307B2 (en) 2010-05-13 2014-06-03 International Business Machines Corporation Multiple page size segment encoding
US20150378904A1 (en) * 2014-06-27 2015-12-31 International Business Machines Corporation Allocating read blocks to a thread in a transaction using user specified logical addresses
US10114752B2 (en) 2014-06-27 2018-10-30 International Business Machines Corporation Detecting cache conflicts by utilizing logical address comparisons in a transactional memory
KR20220048864A (en) * 2020-10-13 2022-04-20 에스케이하이닉스 주식회사 Storage device and operating method thereof

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE37305E1 (en) * 1982-12-30 2001-07-31 International Business Machines Corporation Virtual memory address translation mechanism with controlled data persistence
US4823259A (en) * 1984-06-29 1989-04-18 International Business Machines Corporation High speed buffer store arrangement for quick wide transfer of data
US4695950A (en) * 1984-09-17 1987-09-22 International Business Machines Corporation Fast two-level dynamic address translation method and means
JPS61148551A (en) * 1984-12-24 1986-07-07 Hitachi Ltd Address converting system
JPH0656594B2 (en) * 1985-05-07 1994-07-27 株式会社日立製作所 Vector processor
JPS62222344A (en) * 1986-03-25 1987-09-30 Hitachi Ltd Address converting mechanism
US4774659A (en) * 1986-04-16 1988-09-27 Astronautics Corporation Of America Computer system employing virtual memory
US4797814A (en) * 1986-05-01 1989-01-10 International Business Machines Corporation Variable address mode cache
JPH0812636B2 (en) * 1986-12-24 1996-02-07 株式会社東芝 Virtual memory control type computer system
US4914577A (en) * 1987-07-16 1990-04-03 Icon International, Inc. Dynamic memory management system and method
US4980816A (en) * 1987-12-18 1990-12-25 Nec Corporation Translation look-aside buffer control system with multiple prioritized buffers
US4905141A (en) * 1988-10-25 1990-02-27 International Business Machines Corporation Partitioned cache memory with partition look-aside table (PLAT) for early partition assignment identification

Also Published As

Publication number Publication date
DE68923437D1 (en) 1995-08-17
CA2005463A1 (en) 1990-06-15
JPH0555900B2 (en) 1993-08-18
EP0373780B1 (en) 1995-07-12
EP0373780A2 (en) 1990-06-20
EP0373780A3 (en) 1991-04-10
US5058003A (en) 1991-10-15
DE68923437T2 (en) 1996-03-07
JPH02189659A (en) 1990-07-25

Similar Documents

Publication Publication Date Title
CA2005463C (en) Address translation mechanism for multiple-sized pages
US5375214A (en) Single translation mechanism for virtual storage dynamic address translation with non-uniform page sizes
US6493812B1 (en) Apparatus and method for virtual address aliasing and multiple page size support in a computer system having a prevalidated cache
US5475827A (en) Dynamic look-aside table for multiple size pages
US5426750A (en) Translation lookaside buffer apparatus and method with input/output entries, page table entries and page table pointers
EP0642086B1 (en) Virtual address to physical address translation cache that supports multiple page sizes
US5787494A (en) Software assisted hardware TLB miss handler
CA1283218C (en) Variable address mode cache
US6308247B1 (en) Page table entry management method and apparatus for a microkernel data processing system
US6874077B2 (en) Parallel distributed function translation lookaside buffer
US5230045A (en) Multiple address space system including address translator for receiving virtual addresses from bus and providing real addresses on the bus
US6014732A (en) Cache memory with reduced access time
US5123101A (en) Multiple address space mapping technique for shared memory wherein a processor operates a fault handling routine upon a translator miss
EP0036110A2 (en) Cache addressing mechanism
JPH07200405A (en) Circuit and method for cache of information
JP7062695B2 (en) Cache structure using logical directories
JP3210637B2 (en) Method and system for accessing a cache memory in a data processing system
JPH0529942B2 (en)
JPH035851A (en) Buffer storage device
JPS623354A (en) Cache memory access system
US6226731B1 (en) Method and system for accessing a cache memory within a data-processing system utilizing a pre-calculated comparison array
GB2395588A (en) Apparatus supporting multiple page sizes with address aliasing
JPS58141490A (en) Associative page addressing system

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed