WO1997033225A1 - Norris flash file system - Google Patents


Info

Publication number
WO1997033225A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
memory
file
creating
data segment
Prior art date
Application number
PCT/US1997/003622
Other languages
French (fr)
Inventor
Norbert P. Daberko
Original Assignee
Norris Communications, Inc.
Priority date
Filing date
Publication date
Family has litigation
First worldwide family litigation filed
Application filed by Norris Communications, Inc.
Priority to AU20731/97A
Publication of WO1997033225A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0804 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with main memory updating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14 Error detection or correction of the data by redundancy in operation
    • G06F11/1402 Saving, restoring, recovering or retrying
    • G06F11/1415 Saving, restoring, recovering or retrying at system level
    • G06F11/1435 Saving, restoring, recovering or retrying at system level using file system or storage system metadata
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/0223 User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023 Free address space management
    • G06F12/0238 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
    • G06F12/0246 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0626 Reducing size or complexity of storage systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638 Organizing or formatting or addressing of data
    • G06F3/0643 Management of files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0673 Single storage device
    • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 Digital recording or reproducing
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C7/00 Arrangements for writing information into, or reading information out from, a digital store
    • G11C7/16 Storage of analogue signals in digital stores using an arrangement comprising analogue/digital [A/D] converters, digital memories and digital/analogue [D/A] converters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/72 Details relating to flash memory management
    • G06F2212/7201 Logical to physical mapping or translation of blocks or pages
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C2207/00 Indexing scheme relating to arrangements for writing information into, or reading information out from, a digital store
    • G11C2207/16 Solid state audio
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00 Data processing: database and file management or data structures
    • Y10S707/99941 Database schema or data structure
    • Y10S707/99943 Generating database or data structure, e.g. via user interface
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00 Data processing: database and file management or data structures
    • Y10S707/99951 File or database maintenance
    • Y10S707/99952 Coherency, e.g. same view to multiple users
    • Y10S707/99953 Recoverability
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00 Data processing: database and file management or data structures
    • Y10S707/99951 File or database maintenance
    • Y10S707/99956 File allocation

Definitions

  • This invention pertains to a system for memory management in a non-volatile, long-term storage medium. More particularly, this system organizes flash memory such that data storage and retrieval are optimized so as to decrease system overhead and thereby increase data throughput, system stability and fault tolerance.
  • long-term storage media typically provide the benefit of substantially non-volatile retention of data.
  • Reliable data retrieval is becoming increasingly crucial in our information society because information management is now an indispensable tool of successful business.
  • implementation of a system which can provide reliable data retrieval typically comes at a price of reduced performance and convenience.
  • long-term data storage media are usually slower, more difficult to work with, and are relatively bulky in comparison to short-term storage media.
  • Typical long-term storage media include but are not limited to flash memory, hard disks, floppy disks, tape and CD-ROMs. The main distinguishing feature of these media is that withdrawal of applied power does not result in a loss of the data stored therein.
  • short-term storage media include random access memory (RAM) which loses memory upon removal of applied power.
  • RAM is faster, draws relatively small amounts of power because there are no mechanical components, and RAM does not need to erase data stored at a particular address within RAM before new data can be written to that same address. Nevertheless, the challenge of providing virtually uninterruptible power to RAM also comes with a hardware penalty of redundant power systems.
  • the long-term storage medium which conspicuously differs from those mentioned is flash memory.
  • Using the same criteria for comparison as above, data can be written to and read from flash memory at a speed closer to RAM; flash memory draws power at a rate similar to RAM because it also has no moving mechanical components; but flash memory must first erase data from a particular address before new data may be written to the same physical address. Flash memory is also of a comparable size to RAM because both are manufactured on computer chips as integrated circuits (ICs). Therefore, it would be advantageous to be able to replace RAM with a long-term storage medium when the substantial benefits of non-volatile data retention are required.
  • The Microsoft DOS operating system, hereinafter referred to as DOS, allocates long-term memory space through the use of a file allocation table (FAT).
  • the FAT is a method for the DOS operating system to determine which sectors on a disk are used for a file.
  • the FAT is essentially a map showing the location of files in the sectors by providing to the operating system the information needed to access data, such as the sector number of a file and possibly the length of the data in that file.
  • the DOS system then typically saves the FAT on the disk at a dedicated location where data is not permitted to overwrite it.
  • the allocation of sectors in the FAT is the manner by which the operating system determines how much memory space is available.
  • One prior art approach is Ban, U.S. Patent No. 5,404,485.
  • Ban still takes the more conventional and disadvantageous approach of manipulating data stored in flash memory by first reading the data out to a large RAM, manipulating the data in RAM, erasing the flash memory where the data was originally stored, and then writing the data from RAM back to a contiguous block of flash memory.
  • Ban also disadvantageously creates a file structure similar to DOS which maps the location of stored data.
  • Ban creates several severe overhead burdens on the system which substantially hurt system performance. More specifically, Ban uses a virtual memory mapping system similar to the DOS FAT, the virtual memory map converting virtual addresses to physical addresses. Using this method of indirection, Ban attempts to facilitate use of flash memory as RAM. The problem with this approach is that Ban creates the need for this indirection because data manipulation takes place outside of flash memory. Ban mistakenly teaches that the time wasted copying blocks of data from flash memory to RAM for manipulation then back into flash memory is unavoidable.
  • A further significant drawback to Ban is the lack of fault tolerance in a system that utilizes a virtual map stored partially in RAM.
  • the system is inherently unstable because any loss of power to RAM destroys the map which must then be reconstructed before the system can read or write data to flash memory.
  • Another drawback of Ban is that the RAM requirement grows as flash memory grows. This is the consequence of using a virtual map whose size must be a ratio of the larger flash memory media in order to reflect a scaled version of what is stored in physical addresses. Ban essentially teaches that it is necessary to follow the method already used in the conventional DOS operating system which also relies on long-term storage in conjunction with significant RAM resources. That is to say, the access to and structure of storage media is changed as little as possible so that the operating system does not have to be significantly altered to utilize flash memory.
  • the challenge is to use a non-volatile, long-term storage medium such as flash memory which can take advantage of increased fault tolerance to power interruption, significantly reduced RAM resources, and reduced system overhead caused by data transfers between RAM and the storage medium, while overcoming the erasure drawbacks unique to flash memory.
  • Another object is to provide a file system which efficiently and transparently erases flash memory in the background to further improve system performance. Yet another object is to reduce cost, space and power consumption by eliminating battery backup of information stored in a non-volatile primary memory.
  • Still another object is to provide a file system which uses absolute physical memory addresses to avoid the additional overhead created by memory mapping.
  • the present invention comprises two methods of data storage and subsequent retrieval.
  • Common to both methods is the reality that the order in which data segments are subsequently recalled is typically not the order in which the data segments were saved to memory.
  • Unique to each method is the order in which data segments are logically linked to each other.
  • the first and simpler method enables sequential recall of data in the order in which it was saved in memory. This sequential recall is referred to as logical recall because it refers to the sequence in time in which data was saved, and not the physical order which the data occupies in memory.
  • This method is particularly useful for flash memory, which has the characteristic of not being able to overwrite data in memory without first erasing previously recorded data.
  • the second and more complex method allows for more sophisticated linking of files. Instead of strictly linking files based on the order in which they are received, files are logically linked based on subject matter. Specifically, files are logically grouped together within directories and subdirectories. Yet because all files can be traced back in origin to a single root directory, all files are still logically linked to each other. Both methods comprise the minimum steps of logically dividing the primary memory into equal size blocks, each block being the smallest amount of data which can be read from or written to memory in a single read or write operation. A cache memory the size of at least one of the read/write blocks is then coupled to the primary memory and provides temporary storage space for data being written to and read from primary memory.
  • a header is placed at the beginning of the segment.
  • the header indicates which logically related data segment precedes the new data segment.
  • the header also indicates the location of the next logically related and subsequent data segment.
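The linkage these bullets describe can be pictured as a header prepended to every segment, whose link fields are absolute flash addresses. A minimal sketch in C; field names and widths are our assumptions, not the patent's. The all-ones NULL_ADDR mirrors the erased state of flash cells, which conveniently allows the next-segment field to be programmed in place later, since flash can change bits from 1 to 0 without an erase:

```c
#include <stdint.h>

/* Illustrative segment header: every data segment written to flash is
 * preceded by a header holding the absolute physical addresses of the
 * logically preceding and next segments. Field names/widths are assumed. */
#define NULL_ADDR 0xFFFFFFFFu   /* all ones: the erased state of flash cells */

typedef struct {
    uint32_t prev_addr;   /* absolute flash address of preceding logical segment */
    uint32_t next_addr;   /* absolute flash address of next logical segment */
    uint32_t data_len;    /* bytes of payload following this header */
} SegmentHeader;

/* Link a newly written segment to its logical predecessor by recording
 * each segment's absolute address in the other's header. */
static inline void link_segments(SegmentHeader *prev, uint32_t prev_addr,
                                 SegmentHeader *next, uint32_t next_addr)
{
    next->prev_addr = prev_addr;
    prev->next_addr = next_addr;
}
```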
  • FIG. 1 is a flowchart of the steps of Ban for writing data to primary memory with the use of a virtual memory map.
  • FIG. 2 is a flowchart of the steps of Ban for reading data from primary memory to a RAM resource.
  • FIG. 3A is a block diagram of the components of a preferred embodiment of the present invention which utilizes the file system of the present invention.
  • FIG. 3B is a block diagram of the components of an alternative embodiment of the present invention which utilizes a different buffering scheme than shown in FIG. 3A for utilization of the file system of the present invention.
  • FIG. 3C is a block diagram of the components of another alternative embodiment of the present invention which utilizes a different buffering scheme than shown in FIGs. 3A and 3B for utilization of the file system of the present invention.
  • FIG. 4 is a block diagram of the physical structure of primary memory in a preferred embodiment.
  • FIG. 5 is a block diagram of the logical memory structure which overlays the physical structure of FIG. 4. The illustration provides examples of Headers, Data Segments, Free Space and Bad Blocks which occupy memory in Logical Blocks.
  • FIG. 6 is a block diagram illustrating Memory Block Mapping. Each logical block as defined in FIG. 5 contains a memory block map.
  • FIG. 7A is a functional diagram of the data structure linkage implemented in a preferred embodiment of the file system of the present invention.
  • FIG. 7B is a representation of a DOS file structure depicting a root directory, directories within the root, and subdirectories within the directories.
  • FIG. 8 is a LIFO stack which represents one implementation of a recursive search method for moving backward through data stored in memory.
  • FIG. 9 is a functional representation of a playback tree structure as implemented in the present invention.
  • FIG. 10 provides a functional diagram of a playback tree before insertion of a new message within an existing message, and after insertion of the new message as implemented according to the principles of the present invention.
  • FIG. 11 illustrates a pre-existing message before and after message deletion as implemented according to the principles of the present invention.
  • flash memory requires that previously written data be erased before the occupied space is used to save new data. If data is not erased, the new data is either corrupted or not written at all.
  • flash memory is of course non-volatile.
  • Ban is typical of previous methods of imitating RAM because Ban teaches that manipulation of data cannot be done in flash memory directly. Instead, data is always erased from flash memory, manipulated in RAM, and resaved to physically create a segment of manipulated data that appears in its complete and contiguous form. For example, step 10 looks for a contiguous block of memory large enough to store the entire manipulated file.
  • the present invention realizes that data does not have to be contiguous in order to be readable in a logical or relational order.
  • the present invention claims being able to manipulate data directly in flash memory because the flash file system of the present invention enables data to be read in a logical order regardless of how many segments the file is comprised of, and where these segments are saved in memory. That is to say, implementation of the flash file system requires that each data segment have written to it a header. Within the header in predetermined fields, absolute physical addresses are saved. These addresses are physical locations within flash memory of the next logical data segment. Thus, regardless of an actual physical location, data segments are retrievable in a seemingly contiguous order despite the fact that typically the data segments are not stored contiguously.
  • Ban continues to actually make all logically related data segments physically contiguous, while this invention advantageously only makes related data segments logically contiguous by creating headers which contain pointers to absolute physical locations within flash memory which provide a logical path to the data segments.
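The logically contiguous read described above can be sketched as a walk over headers in a simulated flash array. The header layout, addresses, and 16-bit field sizes below are illustrative assumptions only; the point is that segments stored out of physical order are still recovered in logical order:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Sketch: recover a file in logical order by following the next-segment
 * address stored in each header, even though the segments sit at arbitrary
 * physical locations. Layout is an assumption for illustration. */
enum { FLASH_SIZE = 4096, END_ADDR = 0xFFFF };

struct Header {            /* header stored in-line before each data segment */
    uint16_t next_addr;    /* absolute offset of the next logical segment */
    uint16_t data_len;     /* payload bytes following the header */
};

static uint8_t flash[FLASH_SIZE];

/* Write one segment (header + payload) at an absolute offset. */
static void write_segment(uint16_t addr, uint16_t next_addr,
                          const char *data, uint16_t len)
{
    struct Header h = { next_addr, len };
    memcpy(flash + addr, &h, sizeof h);
    memcpy(flash + addr + sizeof h, data, len);
}

/* Read a whole file into `out` by chasing next_addr links from `start`. */
static size_t read_file(uint16_t start, char *out)
{
    size_t total = 0;
    for (uint16_t addr = start; addr != END_ADDR; ) {
        struct Header h;
        memcpy(&h, flash + addr, sizeof h);
        memcpy(out + total, flash + addr + sizeof h, h.data_len);
        total += h.data_len;
        addr = h.next_addr;         /* jump to wherever the next segment lives */
    }
    return total;
}
```

In the usage below the first logical segment sits at a higher physical address than the second, yet the file reads back in order.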
  • one of the advantages of the method of the present invention is that overhead (an unproductive operation of a flash memory file system) is significantly reduced.
  • Ban requires that all related data segments (an entire file which could be several megabytes long) be read from flash memory to RAM, the modification to the data segments be made in RAM (even the insertion or deletion of a single data bit), and then the data segments transferred back to flash memory only if a completely contiguous flash memory location exists. If no such contiguous location is available, flash memory would then be erased. If this process produces contiguous flash memory space which is insufficiently large, then other data segments adjacent to the desired memory space would also have to be read to RAM, erased from flash memory, and transferred back to flash memory at a new location so as to create a sufficiently large contiguous memory space for the originally manipulated file.
  • FIG. 1 shows in a relatively simplified flowchart the process for writing data to flash memory by Ban. A quick walk-through will highlight the important features. The method involves a two-step translation process.
  • a virtual address is mapped to a logical unit address in step 10.
  • a logical address is examined to verify if it is unallocated in steps 12 and 14. If the logical address is free, the logical address is mapped to a physical address in step 16, and then the data is written to the physical address in step 18. If the logical address was allocated, logical addresses are examined to find free memory in step 20, and then the appropriate mapping and writing is accomplished as above.
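The two-step translation of FIG. 1 amounts to two table lookups before any write can land. A toy model of that indirection follows; the table sizes, contents, and the trivial "physical allocator" are invented purely to illustrate the double lookup (and the remapping when a logical unit is already occupied) that the text criticizes:

```c
/* Toy model of Ban-style indirection: a virtual address maps to a logical
 * unit (step 10), and the logical unit maps to a physical address (step 16).
 * All values here are illustrative. */
enum { UNITS = 8, FREE = -1 };

static int v2l[UNITS] = { 2, 0, 1, 3, 4, 5, 6, 7 };                  /* virtual -> logical  */
static int l2p[UNITS] = { 5, FREE, 4, FREE, FREE, FREE, FREE, FREE };/* logical -> physical */
static int next_phys = 6;                                            /* toy allocator       */

/* Write for virtual unit v: map v -> logical; if that logical unit is
 * allocated, scan for a free one (steps 12-20 of FIG. 1), remap, then
 * bind it to a physical unit. Returns the physical unit, or -1 if full. */
static int ban_write(int v)
{
    int l = v2l[v];
    if (l2p[l] != FREE) {                 /* logical unit occupied: find another */
        int i;
        for (i = 0; i < UNITS && l2p[i] != FREE; i++)
            ;
        if (i == UNITS)
            return -1;                    /* no free logical unit anywhere */
        l = i;
        v2l[v] = l;                       /* remap the virtual unit */
    }
    l2p[l] = next_phys++;                 /* map logical -> physical and "write" */
    return l2p[l];
}
```

Note that every write may touch both tables; this bookkeeping is exactly the overhead the present invention avoids by storing absolute physical addresses in segment headers.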
  • Ban's method uses the concept of dereferencing to allow frequent movement of data. Dereferencing of data is a common programming technique that is partly used because it allows frequently updated blocks of data to be moved within RAM without losing track of the data.
  • Losing track of mapped data is the dreaded "dangling pointer" problem associated with dereferenced data.
  • a dangling pointer occurs when a pointer providing the memory location of data is not updated with a new address of the data after it is moved.
  • FIG. 2 illustrates Ban's relatively simpler steps involved in reading data from flash memory, as opposed to the steps of writing described in FIG. 1.
  • Ban's method requires indirection through virtual mapping to compensate for the frequent movements of data.
  • the technique apparently enables flash memory to imitate the look of RAM, but at the crippling overhead cost of significant data movement when any modification is made.
  • Ban requires a relatively large amount of RAM. This large RAM requirement is necessitated by the complication that the largest block of data which is modifiable must be able to fit entirely in RAM for modification. Providing a RAM resource for the worst case scenario is costly to provide and to supply with power. Also stored in RAM is the virtual map. Ban apparently attempts to limit reliance on RAM and increase fault tolerance by forcing a portion of the virtual map into flash memory. By splitting the virtual map, Ban also succeeds in increasing overhead again by creating a non-trivial process for locating the physical address of data as illustrated in FIG. 1. The attempt at increased fault tolerance is also only partially successful.
  • any modified data in RAM is permanently lost if power is removed. Obviously the data loss can be significant because an entire file in RAM whose previously occupied space in flash memory has just been erased would be irrevocably lost if power were interrupted. Furthermore, all virtual mapping in RAM is also lost, and must be reconstructed by scanning a block usage map that resides at the top of each block. This process must also be executed at system startup.
  • the objectives of the Ban patent are highly desirable, but implementation using the technique of virtual mapping leaves any system using the Ban method not only vulnerable to significant data loss, but tied to a method which inherently cripples itself with overhead requirements.
  • the result is a costly device which requires significant RAM resources, a powerful processor to manage the overhead, and a larger and heavier power source to supply the needs of the system.
  • Thus, the advantages of flash memory, such as being non-volatile, small, fast, and a relatively conservative user of power, are lost.
  • the present invention takes a very different approach to memory management. This new approach, embodied in a method and apparatus, overcomes the significant drawbacks of Ban. This is accomplished by taking advantage of the properties of flash memory, instead of treating them as a liability.
  • The apparatus of the present invention is shown in block diagram form in FIG. 3A.
  • the components of a system utilizing the memory management method of the present invention include an I/O device 30, a processor 32, a small cache memory 34 for temporary storage of data, and in a preferred embodiment, a relatively larger flash memory 36 for non-volatile, long-term storage of data.
  • the cache memory 34 in a preferred embodiment is only as large as a single read/write block of data as defined for the flash memory and to be explained in detail.
  • the read/write block is typically as small as 256 bytes of data, and consequently the size of the cache memory 34 is also the same as the read/write block.
  • the size of the cache memory 34 is also constant in the present invention. This is also significantly different from Ban which admits that the size of RAM must grow accordingly relative to the size of flash memory provided by the system. This is because Ban uses RAM for memory mapping. More flash memory as primary memory in Ban requires an appropriately larger sized RAM, whereas this invention strictly uses a cache memory 34 to temporarily store a block of read/write data.
  • the file system of the present invention is adaptable to use many conventions of existing file structures of operating systems currently in use so that it may be considered hardware independent. This feature of portability is crucial in this day of multiple hardware platforms for storing digital data. Therefore, the Norris Flash File System (referred to hereinafter as "file system") can also be viewed at different levels of implementation. That is to say, the file system is working at several different levels so as to be compatible with existing hardware. For example, there is a physical memory structure, memory block mapping, and a file system logical overlay of the physical memory structure as shown in FIGS. 5 and 6. There is also a hierarchy for cataloging data saved in the memory. For example, the file system is compatible with DOS found on many Intel microprocessor-based personal computers today.
  • the file system is only making the chosen long-term storage medium appear as RAM which an operating system such as DOS expects to use for data manipulation.
  • the present invention provides a file system which appears to have significant RAM resources, when in fact there is only a non-volatile long-term storage medium and a small cache, typically comprised of RAM.
  • flash memory is chosen as the non-volatile long-term storage medium of this preferred embodiment because of its desirable characteristics, and its limitations are then dealt with and overcome by the file system. However, it is imperative to understand the organization and limitations of the preferred memory.
  • FIG. 4 is a diagram of memory structure used in the preferred embodiment. Shown at its simplest level, flash memory is divided up logically into blocks 40. A block 40 represents the smallest amount of physical memory which must be manipulated to accomplish a specific file system operation. The operations which concern this invention are reading, writing, and the particularly unusual requirement of erasure uniquely applicable to flash memory.
  • flash memory reads and writes to the same size block 40 of physical memory.
  • erasing is a function that is presently limited to larger blocks 42.
  • a typical erase block 42 is shown in relative proportion to the read/write blocks 40.
  • the specific sizes of the read/write and erase blocks 40, 42 are typically a function of the technology which is used for flash memory, as well as desired optimization of operation.
  • There are presently two types of flash memory chips on the market. A first chip uses NAND technology and is presently the preferred type of memory for this invention. The second chip uses NOR technology. The specific attributes of these chips are summarized in the table below which contrasts typical 1 megabyte (8 megabit) NOR and NAND devices.
  • Streaming data comprises using a first cache memory 50 for reading from and writing to a primary memory 30 while using a second cache memory 52 for storing data that is being processed.
  • Advanced streaming includes making the two cache memories switch tasks as shown in FIG. 3C, allowing the file system to read or write data to primary memory concurrently.
  • NOR flash can actually read data from and write data to flash memory and process it without caching data, but for consistency in a preferred embodiment, the NOR and NAND technology flash memories are treated the same. Essentially, this leaves the NOR flash memory with double the normal size cache memory just for buffering of data if streaming is implemented.
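The task-swapping of the two cache memories in FIGs. 3B and 3C is classic double buffering. A schematic sketch follows; the 256-byte block size, function names, and the toy checksum "processing" step are our assumptions, and the fill and processing phases, which would overlap in the real system, are shown sequentially for clarity:

```c
#include <stdint.h>
#include <string.h>

/* Double-buffering sketch of the streaming scheme: one cache block is filled
 * from primary memory while the other holds data being processed, and the two
 * swap roles each block. Block size and "processing" are illustrative. */
enum { BLOCK = 256 };

static uint8_t cache_a[BLOCK], cache_b[BLOCK];

static unsigned stream_process(const uint8_t *src, size_t len)
{
    uint8_t *fill = cache_a;      /* cache currently being filled */
    uint8_t *work = cache_b;      /* cache whose turn to be processed is next */
    unsigned sum = 0;

    for (size_t off = 0; off < len; off += BLOCK) {
        size_t n = (len - off < BLOCK) ? len - off : BLOCK;
        memcpy(fill, src + off, n);            /* read one block from "primary memory" */
        for (size_t i = 0; i < n; i++)         /* process the freshly filled block */
            sum += fill[i];
        uint8_t *tmp = fill;                   /* swap buffer roles for the next block */
        fill = work;
        work = tmp;
    }
    return sum;
}
```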
  • the level of memory structure above the physical read/write and erase blocks 40, 42 is the logical memory structure shown in FIG. 5.
  • the file system of the present invention advantageously organizes a logical memory structure over the physical memory structure.
  • the logical layer maps "logical" blocks 60 onto the physical memory 62.
  • a logical block 60 is composed of one or more erase blocks 42.
  • the physical memory 62 is divided into at least 1024 logical blocks 60, provided that 1024 erase blocks 42 are available. While not apparent, the number of logical blocks 60 is important because it represents a key balance between performance and overhead of the file system. Having 1024 logical blocks 60 allows for an effective 0.1% erase block 42 (and bad block) granularity.
  • With larger read/write and erase blocks in NOR flash memory, there is typically one logical block 60 for each erase block 42. With a 2 megabyte NOR flash memory, only 32 logical blocks 60 would be allocated. With a 2 megabyte NAND flash memory and a 4 kilobyte erase block size, only 512 erase blocks 42 exist, and only 512 logical blocks 60 could be allocated. However, if the NAND flash memory size is increased to 8 megabytes, 1024 logical blocks 60 would be allocated, each containing 2 erase blocks 42. To calculate the number of erase blocks 42 per logical block 60, the inventors use the formula:
  • Blocks = int(((Total memory / Size of erase block) + 1023) / 1024)
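The formula can be checked with a short sketch; the function name is invented, and the memory sizes are those quoted in the text.

```python
def erase_blocks_per_logical_block(total_bytes, erase_block_bytes):
    """Implements Blocks = int(((Total memory / Size of erase block) + 1023) / 1024),
    i.e. a ceiling division packing the device's erase blocks into at most
    1024 logical blocks."""
    erase_blocks = total_bytes // erase_block_bytes
    return (erase_blocks + 1023) // 1024

# 8 megabyte NAND with 4 kilobyte erase blocks: 2048 erase blocks,
# so each of the 1024 logical blocks contains 2 erase blocks.
assert erase_blocks_per_logical_block(8 * 2**20, 4 * 2**10) == 2

# 2 megabyte NAND with 4 kilobyte erase blocks: only 512 erase blocks,
# hence 512 logical blocks of 1 erase block each.
assert erase_blocks_per_logical_block(2 * 2**20, 4 * 2**10) == 1
```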
  • Each logical block 60 may also be divided into one or more segments. Segments can be sized from a minimum of the read/write block 40 up to the maximum size of the logical block 60. As a rule, at least one segment is allocated for each file, file header, directory header, etc. Thus, large files that span multiple logical blocks 60 will have at least one segment for the file header and another segment for each logical block 60 only partially utilized. It should be apparent that a segment is sized according to the requirements of the data to be saved. As will be shown, different headers occupy different amounts of memory, and are coupled to different sizes of data segments.
  • FIG. 5 quickly illustrates some of the basic memory concepts explained above.
  • a logical block 60 is comprised of four erase blocks 42, each of which is in turn comprised of a plurality of read/write blocks 40. Also shown are examples of occupied 64, free 66 and bad 68 memory segments.
  • a last memory principle requiring explanation is the file system concept of memory block mapping.
  • Logical blocks 60 shown in FIG. 6 are comprised of up to four erase blocks 42.
  • the memory block map indicates good erase blocks 42 and bad erase blocks 68. As indicated previously, a single bad bit within an entire erase block 42 renders the erase block useless because there is no way of preventing the system from writing to the bad bit, thereby corrupting data.
  • a bad block 68 is indicated in the memory block ID mapping 70 as a 0 bit.
  • a logical block control map 74 contains within it the beginning physical address of each block 76, the physical address of the beginning of the next available memory space within the logical block 78, and a map of bad blocks 80 within the logical block 60 to which data should not be written.
  • the Memory Block Mapping is shown in FIG. 5 as the Block List 82 section of each Logical Block 60.
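The bad-block bookkeeping above can be sketched as a scan of the block ID bitmap; the function name and the list-of-bits representation are assumptions for illustration.

```python
def good_erase_blocks(block_id_map):
    """In the memory block ID mapping (70), a 0 bit marks a bad erase
    block (68) that must never be written, since a single bad bit
    corrupts the whole erase block; return the writable block indices."""
    return [i for i, bit in enumerate(block_id_map) if bit == 1]

# A logical block of four erase blocks where erase block 2 contains a bad bit:
assert good_erase_blocks([1, 1, 0, 1]) == [0, 1, 3]
```

The logical block control map 74 would consult this list so that the next-available-space pointer 78 never lands inside a bad block 80.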
  • the file system stores more than simply data files. It is the organization of the memory which comprises a key point of novelty of the present invention.
  • the file system is organized through the use of headers. Headers store most of the critical information which enables the file system to read data from memory, as well as to be able to search for data. While headers per se are not novel, the information stored in the headers of the file system create unique benefits to the user.
  • FIG. 5, which depicts the logical-to-physical memory structure, hinted at the use of headers by showing Volume 90, File 92, Directory 94 and Device 96 Headers in Logical Blocks 2, 3, 4 and 5.
  • the six headers listed in hierarchical relevance are the Device Header, Volume Header, Directory Header, File Header, Data Header, and the Tree Header.
  • the Device Header, most importantly, contains information relevant to the hardware of the particular primary memory medium being organized by the file system.
  • the file system looks to the Device Header when it creates the logical memory layout over the physical memory as discussed previously. For example, the erase block size 42 and logical block size 60 are written here.
  • FIG. 5 correctly shows only a single Device Header 96 in memory, because there is typically only one type of flash memory or any type of primary memory used at one time. More plainly, this means it is not possible for the file system in a preferred embodiment to control both a NAND and NOR flash memory simultaneously. Nevertheless, by providing a Device Header 96, it is possible to implement a combination of the two.
  • the next most relevant header structure is the Volume 90 Header.
  • the elements of the Volume Header are shown below in Table 3.
  • the Volume Header information pertains principally to the logical mapping memory structure shown in FIG. 5.
  • the Volume Header is easily analogized to the format information of a typical hard disk or floppy disk medium. Until formatted, a hard disk is useless to an operating system's file system. Formatting provides the logical structure overlay which a file system uses to determine the logical and corresponding physical location of data stored therein. Therefore, an element of the Volume Header is the expected DOS volume name.
  • the present invention also requires only a single Volume Header 90 for the selected memory medium. Beneath the Volume Header 90 in the hierarchical structure comes the Directory Header 94. Those familiar with DOS will recognize the root directory as essentially the root node from which various subdirectories branch.
  • Directories are created so as to organize related data files. However, the root directory in DOS is no different in structure than a subdirectory which resides in the root directory, except for its logical location. Similarly, a first Directory Header 94 forms the root directory of the file system. All subsequently created directories are logically located within the root directory. The elements of the Directory Header are listed in Table 4 below.
  • directories also have a maximum eight-character name and a three-character extension.
  • the Device, Volume and Directory Headers 96, 90, 94 described above are the minimal file structure Headers that must exist for the file system to operate.
  • the Device, Volume and root Directory Headers 96, 90, 94 are thus created when a flash memory device is first initialized. These Headers all exist, however, to support the file structure in being able to logically and physically store data, which is the purpose of the file structure. It is left to the File Header 92 structure to actually begin storing data.
  • the elements of the File Header are shown in the table below.
  • Table 6 The Device, Volume, Directory, File and Data Headers form the basis for the file system. It might be thought that these are sufficient to organize data within any file system. However, to stop here ignores the perils of flash memory. Therefore, the file system, to be adaptable to any storage medium, must contain sufficient structure to adapt to flash memory even if not used by other storage media.
  • because flash memory is to serve as primary memory with a relatively small RAM resource which cannot easily be used for data manipulation, the file system must also provide a method for easily accessing the data stored in flash memory. Providing easy access is what makes the use of flash memory an advantage instead of a liability.
  • the present invention uses two additional header structures to enable the file system to see flash memory as RAM. They are the Tree Header and a header-like data structure, the Tree Node. These structures are listed below in Tables 7 and 8 respectively.
  • End Data Pointer 4 DWord Address of ending Data Segment.
  • Header structures described in Tables 2 through 8 are the minimum essential elements of the file system required to be able to organize data such that it can be saved and recalled from primary memory. However, access to data does not necessarily mean that the access provided by the file system is sufficiently rapid in real-time for particular applications.
  • the file system of the present invention is particularly useful for the flash memory used as primary memory in U.S. Patent No. 5,XXX,XXX for a HANDHELD RECORD AND PLAYBACK DEVICE WITH FLASH MEMORY by Norris et al.
  • This patent is incorporated by reference herein, and provides a detailed example of the benefits of flash memory in a portable recorder.
  • the essential feature of playback provided by the handheld recorder illustrates the requirement for real-time recall of data.
  • a dictaphone typically operates by storing data by time only.
  • organizing voice messages by subject matter is now possible.
  • the present invention provides a novel method and apparatus for sequential recall of logically ordered data.
  • real-time implementation requires rapid data recall.
  • This method is implemented and assured through the use of the Tree Header and Tree Node.
  • the Tree Header and the Tree Node are a novel and essential part of the file system.
  • the information stored therein is used to facilitate rapid data access to more accurately imitate the abilities of RAM through the use of a playback tree structure.
  • the last header structures of the present invention are the Segment Index Entry and the Secondary Segment Index, whose members are shown in Tables 9 and 10 respectively.
  • the Segment Index Entry is an element of every Header structure shown above, and provides essential data structure linkage to be explained.
  • the Secondary Segment Index fulfills the function of
  • FIG. 7A is a functional diagram of the data structure linkage in a preferred embodiment.
  • the preferred embodiment includes logically linking data segments by subject matter, where directories are used to define different subjects, as opposed to linking data segments only in the order in which they were recorded.
  • Headers and the file and data segments to which they are appended are all linked in a manner which provides sequential recall of data organized by subject matter. It is easy to see how this concept is important to the Norris Patent by observing that data segments which represent portions of a single voice message, but are saved noncontiguously in memory, must be recalled as they were spoken. In a like manner, several discrete but logically related voice messages could be played back in sequence if logically stored in the same directory. It is memory file fragmentation which causes voice data not to be stored sequentially, thereby complicating playback.
  • the file system as shown in FIG. 7A enables logical data recall.
  • in FIG. 7B, a representation of a DOS file structure is shown for illustration purposes.
  • the lowest level of the file structure is the root directory 100.
  • Within the root directory 100 is at least one directory 102, but there may be many more.
  • Within each directory 102 a variable number of subdirectories 104 may also be created which typically contain subsets of data related to the directory in which they are located.
  • the Directory Headers 110 and 112 may be considered to be directories 102 within the root directory 100 of FIG. 7B. While Table 4 shows all of the information stored in the Directory Header, the essential functional linkage information is shown as the ID 114, the Next File ID 116, and the First File ID 118. These functional fields correspond to the fields in the Segment Index Entry 112 of Table 9 which, as stated above, is appended to every Header.
  • the Next File ID 116 provides a pointer to the physical address of the next Directory Header ID 120.
  • the Next File ID 116 can be thought of as being more accurately referred to in the Directory Header 110 as a "Next Directory ID".
  • the Directory Header 110 is just another file, so the label is still appropriate, and it is kept for the sake of consistency.
  • the First File ID 118 is naturally associated with an actual File Header.
  • File Headers also have an appended Segment Index Entry such as File Header 124.
  • the Next File ID 126 actually points to a next File Header ID 128 within the Directory 112, instead of pointing to another Directory.
  • the First File ID 130 has also been replaced with the First Data ID 132. This field, as shown, points to a Data Header 134 having an ID 136, a Next Data ID 130, and an actual data field 140.
  • the File Header 124, and not the Data Header 134, is shown pointing to a subsequent File Header 146 and to a Directory (subdirectory) Header 148.
  • the reason for the more complicated linkage structure and the method for providing backward linkage are as follows.
  • the present invention is distinctly suited for the handheld recorder of the Norris patent because a dedicated dictaphone-type device using a magnetic storage tape does not require a very complicated hierarchical data linkage structure: all messages can only be stored sequentially. However, a more sophisticated data structure can provide the ability to put all recorded voice messages related to a certain topic within a particular directory so that recall can be facilitated by searching for subject matter. Likewise, an even more detailed data structure might further categorize subjects within a single subject by providing subdirectories within a directory. Therefore, it is more advantageous for the file system to be able to logically link related data segments within directories and subdirectories, rather than link data segments in the exact time sequence order in which the data was recorded.
  • a voice recorder such as the one taught by Norris
  • if the user records a first message regarding topic A, then records a second message regarding topic B, then records a third message regarding topic A, the first message could be the "Large File" 150, the third message the "Small File" 152, and the second message the File Header 154.
  • the present invention makes it possible to recall data which is likely not stored contiguously in primary memory. This enables the file system to ignore the sequence in which messages were recorded, thereby facilitating subject matter organization of files. More specifically, the headers of the present invention are written so as to contain pointers which point to files which a user deems to be logically related by subject. Therefore, while each Directory Header, File Header, and Data Header is comprised of a contiguously stored data segment in memory, logically related data segments such as 156, 158 and 160 may be physically stored in very different locations in memory. This is very advantageous because trying to move data so that all logically related segments 156, 158 and 160 reside in contiguous memory would place a very heavy overhead burden on a processor which is otherwise engaged in playback or in digitizing and compressing the voice messages.
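The forward linkage just described can be sketched as a traversal of Next IDs. The Header class and field names below are hypothetical stand-ins for the patent's header structures.

```python
class Header:
    """Hypothetical header carrying the Segment Index Entry linkage fields:
    an ID (114), a Next ID (116/126) naming the next header at the same
    level, and a physical address showing where the segment actually sits."""

    def __init__(self, hid, name, phys_addr):
        self.hid = hid
        self.name = name
        self.phys_addr = phys_addr  # segments may sit anywhere in flash
        self.next_id = None

def recall_in_order(headers, start_id):
    """Follow Next IDs so data is recalled in its logical order even though
    the segments are physically scattered in primary memory."""
    names, hid = [], start_id
    while hid is not None:
        names.append(headers[hid].name)
        hid = headers[hid].next_id
    return names

# Three logically related messages stored at non-contiguous addresses:
a = Header(1, "topic_A_msg1", phys_addr=0x8000)
b = Header(2, "topic_A_msg2", phys_addr=0x0400)  # physically earlier in flash
c = Header(3, "topic_A_msg3", phys_addr=0xC000)
a.next_id, b.next_id = 2, 3
headers = {1: a, 2: b, 3: c}
assert recall_in_order(headers, 1) == ["topic_A_msg1", "topic_A_msg2", "topic_A_msg3"]
```

No data moves: only the pointer fields in the headers define the logical order, which is why playback needs no defragmentation pass.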
  • the first entry in the LIFO stack is always the location of the root directory Directory Header 100 as shown in FIG. 8.
  • the next entry would be the location of the next directory Directory Header 112.
  • the user then went into a file 124 within the next directory 112.
  • subsequent stack entries would be pointers to sequentially subsequent Data Headers 134, 164 and 166.
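The LIFO stack behavior above can be sketched directly; the entry labels are placeholders keyed to the figure numerals.

```python
# The first entry is always the root Directory Header (100); deeper
# locations are pushed as the user descends, and popped to back out.
lifo_stack = []
lifo_stack.append("directory_header_100")  # root directory
lifo_stack.append("directory_header_112")  # next directory entered
lifo_stack.append("file_header_124")       # file within that directory
lifo_stack.append("data_header_134")       # data segment being played

assert lifo_stack.pop() == "data_header_134"   # step back out of the data
assert lifo_stack[-1] == "file_header_124"     # now positioned at the file
```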
  • a playback tree structure as implemented in the present invention is shown in FIG. 9.
  • the figure shows a playback tree which is used to logically recall data which is probably not stored sequentially in physical memory.
  • the present invention is ideal for recalling data which has been stored in a critical sequence. The nature of file fragmentation typically precludes related data from being stored contiguously.
  • the playback tree has a single Tree Header 180 and at least one Tree Node 182, 184, 186, but typically many more.
  • the playback tree essentially functions as a shorthand method of recalling data.
  • the data represents recorded voice messages. Data must be recalled at a fairly consistent rate to provide accurate voice or sound reproduction. Rapid data recall is implemented in a preferred embodiment by setting aside an area of primary memory as "work" memory. Work memory is reserved at all times for playback trees. The work memory essentially becomes a mirror of the structure of File and Data Headers in primary memory.
  • One of the novel advantages of the present invention includes a method for seamless insertion of a voice message within another voice message.
  • Seamless insertion means that a user can interrupt an existing voice message at any point, record a new message, and play back the existing message with the new message seamlessly interrupting the existing message at the point the new message was recorded, and then continuing on with the rest of the existing message when the new message finishes playing.
  • the present invention accomplishes this advantageous and seamless insertion process through the use of the headers already explained above. More specifically, the present invention uses the functional structure shown in FIG. 10.
  • FIG. 10 provides a functional diagram of a playback tree before insertion of a new message 190 within an existing message 188, and after insertion of the new message 190. Normally, the branch node 192 of a tree node is not used. However, when a message is inserted, Tree Node 1 (182) branches to Tree Node 2 (194).
  • the Data Size field 196 is modified to reflect the portion of the original message to be played up to the insertion point, referred to as pre-insertion data 198. After the shortened data segment 198 has been played, Tree Node 2 (194) branches to Tree Node 3 (200).
  • Tree Node 3 consists of the inserted message 190.
  • Tree Node 3 branches to Tree Node 4 (202), which consists of a data segment holding all of the original message after the insertion point, referred to as post-insertion data 204.
  • if the playback tree structure is then used to play back the entire new message consisting of data segments 198, 190 and 204, the message seamlessly plays the data segments as if it were a contiguously stored message comprised of Pre-insertion Data 198, Inserted Data 190, and Post-insertion Data 204.
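The insertion mechanics can be sketched without moving any stored data: the original message is split, by pointers only, around the new message. Function and node names below are illustrative.

```python
def insert_message(original, new, point):
    """Build the post-insertion playback tree of FIG. 10: Tree Node 2 plays
    the pre-insertion data (198), Tree Node 3 the inserted message (190),
    Tree Node 4 the post-insertion data (204).  Nothing is copied in flash;
    only the nodes' data pointers and Data Size fields change."""
    return [
        ("node2_pre",  original[:point]),  # shortened Data Size field 196
        ("node3_ins",  new),
        ("node4_post", original[point:]),
    ]

def play(tree):
    # Walking the tree in order reproduces one seamless message.
    return b"".join(segment for _, segment in tree)

tree = insert_message(b"HELLO WORLD", b"[NEW]", point=5)
assert play(tree) == b"HELLO[NEW] WORLD"
```

Deletion (FIG. 11) works the same way with only pre- and post-erasure nodes, omitting the middle segment during playback.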
  • FIG. 11 illustrates a pre-existing message 210 before and after message deletion.
  • the message structure 210 before deletion is essentially the same as before insertion of a message as shown in FIG. 10.
  • instead of creating a Tree Node for the inserted data as well as for the pre- and post-insertion segments, only pre- and post-erasure Tree Nodes 212 and 214 are created.
  • Data Header 2 (216) is created to keep track of the deleted data for edit history purposes. This is critical when using flash memory, because the data must eventually be erased before new data can be stored in the same physical memory space. By keeping track of what memory is used, free and marked for erasure, flash memory space can be optimized.
  • Application Program Instructions (APIs)
  • Table 11 summarizes all APIs implemented by the file system of the present invention. These instructions are executed by the file system in response to commands received by a processor. These API commands can be considered to be "high level" instructions executed by the file system to manipulate files stored in primary and cache memory. To understand how these API function calls operate, an example will now be given which utilizes each of the calls so as to clearly illustrate proper use and important features.
  • nfsChDir: Change the current directory
  • nfsClose: Close a file
  • nfsCreate: Create a file
  • nfsDelete: Delete a file
  • nfsErase: Erase data within a file
  • nfsFind: Find a file (first, next, etc.)
  • nfsGetAttr: Retrieve the DOS attributes of a file
  • nfsGetItem: Retrieve selected information items from a file
  • nfsGetSystem: Retrieve global file system parameters
  • nfsInitialize: Set up NFS and optionally erase or reorganize user data
  • nfsLength: Return the length of the current file
  • nfsMkDir: Make a new subdirectory
  • nfsOpen: Open a file
  • nfsOptimize: Erase and pack all unused space
  • nfsRead: Read data from a file
  • nfsRename: Rename a file
  • nfsRmDir: Remove a subdirectory
  • nfsSeek: Reposition the current file pointer
  • the first API executed is Initialize. This API is activated when power is first applied to a device utilizing the flash file system of the present invention. It might be easier to conceptually visualize the API as two different commands in DOS. They are the format command and the fdisk command. In other words, when a power switch is toggled on, the Initialize API prepares the file system for operation. Preparation includes bulk erasure of the flash memory if the memory has been previously marked for bulk erasure. This is the format aspect. Initialization also includes the process known as the fdisk command. In DOS, fdisk partitions the medium and divides the medium into sectors so that the FAT can map files to specific predefined sector addresses. Fdisk also creates the initial root directory to which all directories, subdirectories and files within them can trace their origin.
  • the process of initialization also includes placing in cache memory a "message pointer" which can be analogized to a cursor in memory. Just as a cursor indicates where insertion or deletion of data is to occur, the message pointer is a cursor in primary memory which indicates where reading or modification of data will occur if executed. Upon initialization, the message pointer is located at the root directory.
  • the concept of a message pointer is crucial to the present invention because of the sequentially and logically linked data segments of the present invention.
  • Initialization of the file system thus includes the step of determining a current position of the message pointer in primary memory. Initialization also includes the step of first creating a Directory Header if a bulk erase function call was executed. A Directory Header to a root directory can be considered to be the lowest minimum setup of the file system if no files or subdirectories yet exist.
  • creating a file uses the nfsCreate command, which involves the steps of creating a new file, opening it in the current file system, writing a file header and preparing for the writing of data.
  • the utility functions which can be executed without files existing include changing the current directory with nfsChDir, retrieving global file system parameters with nfsGetSystem, erasing and packing all unused space with nfsOptimize, and setting global file system parameters with nfsSetSystem.
  • the nfsChDir requires that the specified directory must reside in the current directory, and the directory's ID is used to locate it if the file pointer does not already have the directory's address.
  • the nfsOptimize process erases and packs data segments to recover unused space. It is important to note that premature interruption of the process compromises file structure integrity. The optimize process must begin again to restore the file system to a usable condition without loss of data.
  • the remaining APIs almost exclusively perform some function call on files which have now been created.
  • the function call nfsErase, for erasing data within a file, marks a section of data in the current file as deleted. Specifically, "erased" data is marked from the position specified by lStart for the number of bytes specified by lSize, as represented in the presentation map created by nfsOpen.
  • the function call nfsFind, for finding a file, finds the file specified in the current directory, and has the user-selectable options of finding the first name, finding the first file in the current directory, finding the last file in the current directory, finding the next file following any previously used file, finding the file preceding any previously used file, finding a normal file, finding a read-only file, finding a hidden file, finding a system file, and finding a subdirectory.
  • the function call nfsGetAttr retrieves the DOS attributes of a file from the DOS attributes flags of the current file.
  • the DOS attributes flags are found in Table 12 below.
  • nfsGetItem for retrieving selected information from a file will retrieve one of the following attributes: file creation time, data type, description, access password, read password, author ID, job number, priority level or the transcriber ID.
  • the function call nfsLength returns the length of the current file in bytes. If a file is not currently open, the call fails.
  • the function call nfsOpen finds the specified file in the directory and opens it for both reading and writing. Since a file may consist of randomly sequenced segments of data and erasures, the function call must create a presentation map of the unerased data to be able to allow access to it as one continuous stream via call to nfsRead and nfsSeek.
  • the function call nfsRead reads data from the current file. The requested data is taken from the continuous stream using the presentation map created by nfsOpen.
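The presentation-map idea behind nfsOpen, nfsErase and nfsRead can be sketched as follows; the function name and the list-of-ranges representation of erasures are assumptions for this example.

```python
def present(stream, erased_ranges):
    """Rebuild the continuous view that nfsRead sees: every (lStart, lSize)
    range marked deleted by nfsErase is skipped during presentation rather
    than physically erased from flash."""
    out, pos = bytearray(), 0
    for start, size in sorted(erased_ranges):
        out += stream[pos:start]   # keep unerased bytes before the hole
        pos = start + size         # skip the erased range
    out += stream[pos:]            # keep the tail
    return bytes(out)

raw = b"ABCDEFGH"                           # file data as stored in flash
assert present(raw, [(2, 3)]) == b"ABFGH"   # bytes 2..4 marked erased
assert present(raw, []) == raw              # nothing erased
```

The actual erasure of the underlying blocks is deferred until nfsOptimize reclaims the space, matching flash memory's erase-before-write constraint.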
  • the function call nfsRename renames the currently opened file. A new file header is created to record the new name, the old file header is marked as deleted, and optimization must be performed to update the file system's index of segments making the file accessible by the new name.
  • nfsRmDir removes a subdirectory from the current directory. This also applies to directories in the root directory since every directory is a subdirectory of the root. The directory header is marked "deleted", but the space is not reclaimed until optimization. In addition, all files must be removed prior to the deletion of the directory.
  • the function call nfsSeek repositions the current file pointer.
  • the current file pointer was created by the Initialize command, and represents the position in the sequential presentation of the data using the map created by nfsOpen, without regard to the physical blocks referenced by the presentation map.
  • the function call nfsStat is used to obtain information from the current file.
  • the retrievable statistics include total file size, the file's creation date, "last modified" time, and the edit time at the current position. Time is defined as the time since January 1, 1970.
  • the function call nfsTell returns the sequential position in bytes of the current file. If a file is not currently open, the call fails.
  • nfsWrite writes data to the current file.
  • the current file position becomes the insertion point for the data. This is recorded when the segment is closed and used to update the presentation map created by nfsOpen.
  • the initial call to nfsWrite creates a data segment to be appended to the current file's list of data segments. Subsequent calls to nfsWrite append data to this segment.
  • the segment is completed and closed when any call is made to either nfsRead, nfsSeek or nfsClose.
  • BIOS calls are typically implemented in firmware or hardware, such as on an EEPROM integrated circuit which typically comes with a processor board in a personal computer. BIOS calls are lower level function calls because they are written specifically for particular hardware. Changing the hardware, such as the memory media, would require an update to BIOS. Table 13 below lists the BIOS calls implemented in a preferred embodiment of the present invention.
  • ClipConfig Returns the configuration of the user data media.
  • CopyDataToData Moves data from one location to another
  • GetDataByte retrieves one byte of user data from the current location
  • GetWorkByte retrieves one byte of work area data from the current location.
  • PutDataByte writes one byte to the current user data location.
  • PutWorkByte writes one byte to the current work area location.
  • the first BIOS call is ClipConfig which queries the storage media and returns configuration parameters. This command informs the file system if any particular API function calls will not function. For example, nfsWrite will not be able to operate in an overwrite mode when ClipConfig determines the memory media consists of flash memory.
  • the BIOS call CopyDataToData is useful for making bulk memory block copies from one location to another.
  • This command is particularly useful when using flash memory because it allows the file system to quickly create large blocks of contiguous memory.
  • the BIOS call EraseBlock erases entire blocks of memory rapidly so that contiguous memory space can be freed for writing long data files.
  • the BIOS calls GetDataByte, GetWorkByte, PutDataByte and PutWorkByte are important for low level manipulation of data at the byte level.
  • the BIOS call GetTime is defined as returning the time since January 1, 1970. This call works in conjunction with nfsStat as previously explained.
  • SetDataAddress and SetWorkAddress change the current address to the user data area or user work area, respectively.

Abstract

A method of memory management for a primary memory created from a non-volatile, long-term storage medium, in particular flash memory, which enables direct manipulation of data segments stored therein. The data segments (10) of a single file are typically not stored contiguously in relation to the order in which they are stored and subsequently recalled (12-20), yet the method enables recall in the logical order in which the data segments were created. This method is particularly useful for flash memory, which has the characteristic of not being able to overwrite data in memory without first erasing previously recorded data (20).

Description

NORRIS FLASH FILE SYSTEM
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention pertains to a system for memory management in a non-volatile, long-term storage medium. More particularly, this system organizes flash memory such that data storage and retrieval is optimized so as to decrease system overhead and thereby increase data throughput, system stability and fault tolerance.
2. Prior Art
Efficient organization of data stored in long-term storage media is becoming increasingly important because of the substantial benefits which they provide over short-term counterparts. That is to say, long-term storage media typically provide the benefit of substantially non-volatile retention of data. Reliable data retrieval is becoming increasingly crucial in our information society because information management is now an indispensable tool of successful business. However, implementation of a system which can provide reliable data retrieval typically comes at a price of reduced performance and convenience. For example, long- term data storage media are usually slower, more difficult to work with, and are relatively bulky in comparison to short-term storage media.
Typical long-term storage media include but are not limited to flash memory, hard disks, floppy disks, tape and CD-ROMs. The main distinguishing feature of these media is that withdrawal of applied power does not result in a loss of the data stored therein. In contrast, short-term storage media include random access memory (RAM), which loses memory upon removal of applied power.
Despite the drawback of losing data stored in RAM when power is withdrawn, the benefits of RAM are substantial and not dismissed without consequences. For example, RAM is faster, draws relatively small amounts of power because there are no mechanical components, and RAM does not need to erase data stored at a particular address within RAM before new data can be written to that same address. Nevertheless, the challenge of providing virtually uninterruptable power to RAM also comes with a hardware penalty of redundant power systems.
A quick comparison of the benefits listed above for RAM versus the benefits of long-term storage media shows that a user has had to be satisfied with a data storage system such as a hard disk, floppy disk, tape backup or CD-ROM that is not as fast as RAM, draws significantly greater amounts of power, is bulkier and thus less convenient, but is relatively much more reliable. Most of the drawbacks can be blamed on all of these media requiring moving mechanical components. In contrast, RAM is very convenient, fast and draws substantially less power, yet cannot be relied upon not to lose data without the considerable expense and bulk of redundant power systems.
The long-term storage medium which conspicuously differs from those mentioned is flash memory. Using the same criteria for comparison as above, data can be written to and read from flash memory at a speed closer to that of RAM, and flash memory draws power at a rate similar to RAM because it also has no moving mechanical components, but flash memory must first erase data from a particular address before new data may be written to the same physical address. Flash memory is also of a comparable size to RAM because both are manufactured on computer chips as integrated circuits (ICs). Therefore, it would be advantageous to be able to replace RAM with a long-term storage medium when the substantial benefits of non-volatile data retention are required. When examining the characteristics of the long-term storage media above, it is apparent that the storage medium most similar to RAM in size, power requirements, and speed is flash memory. Yet there remains the issue of flash memory requiring erasure of data before the memory space occupied by that data is reusable. Therefore, it would be advantageous to have a method of managing flash memory which would compensate for the erasure requirement. It would be a further advantage if this method minimized system overhead while increasing throughput of data storage and retrieval. It would also be an advantage if this same method could be applied to the other long-term storage media.
Before discussing the present invention, it is helpful to look at existing methods of memory management which are commonly referred to as operating system file structures. The Microsoft DOS operating system, hereinafter referred to as DOS, allocates long-term memory space through the use of a file allocation table
(FAT) associated with the particular memory. When a hard disk drive is first used, it is divided into sectors, each of a fixed size and residing at a fixed place on the drive media. Each sector is then given a number. The FAT is a method for the DOS operating system to determine which sectors on a disk are used for a file. The FAT is essentially a map showing the location of files in the sectors by providing to the operating system the information needed to access data, such as the sector number of a file and possibly the length of the data in that file. The DOS system then typically saves the FAT on the disk at a dedicated location where data is not permitted to overwrite it. Thus the allocation of sectors in the FAT is the manner by which the operating system determines how much memory space is available.
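By way of illustration only, the sector-chaining idea behind a FAT can be sketched as follows. This is an editor's minimal sketch in C, not code from the patent or from DOS; the table size, sector numbers and end-of-chain marker are invented for the example.

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Hypothetical miniature FAT: fat[s] holds the number of the next
   sector in a file's chain, or FAT_EOF when s is the last sector.
   The table size and FAT_EOF value are illustrative assumptions. */
#define FAT_SECTORS 16
#define FAT_EOF     (-1)

/* Walk the chain that starts at 'first', storing up to 'max' sector
   numbers in 'out'.  Returns how many sectors the file occupies. */
size_t fat_chain(const int fat[FAT_SECTORS], int first, int *out, size_t max)
{
    size_t n = 0;
    for (int s = first; s != FAT_EOF && n < max; s = fat[s])
        out[n++] = s;
    return n;
}
```

The point of the sketch is that the file's sectors need not be physically contiguous; the table alone supplies their order.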
A prior art method of file management designed specifically for flash memory is explained in Ban, U.S. Patent No. 5,404,485. Ban, however, still takes the more conventional and disadvantageous approach of manipulating data stored in flash memory by first reading the data out to a large RAM, manipulating the data in RAM, erasing the flash memory where the data was originally stored, and then writing the data from RAM back to a contiguous block of flash memory. Ban also disadvantageously creates a file structure similar to DOS which maps the location of stored data.
The method of Ban creates several severe overhead burdens on the system which substantially hurt system performance. More specifically, Ban uses a virtual memory mapping system similar to the DOS FAT, the virtual memory map converting virtual addresses to physical addresses. Using this method of indirection, Ban attempts to facilitate use of flash memory as RAM. The problem with this approach is that Ban creates the need for this indirection because data manipulation takes place outside of flash memory. Ban mistakenly teaches that the time wasted copying blocks of data from flash memory to RAM for manipulation then back into flash memory is unavoidable.
A further significant drawback to Ban is the lack of fault tolerance in a system that utilizes a virtual map stored partially in RAM. The system is inherently unstable because any loss of power to RAM destroys the map which must then be reconstructed before the system can read or write data to flash memory.
Another drawback of Ban is that the RAM requirement grows as flash memory grows. This is the consequence of using a virtual map whose size must be a ratio of the larger flash memory media in order to reflect a scaled version of what is stored in physical addresses. Ban essentially teaches that it is necessary to follow the method already used in the conventional DOS operating system which also relies on long-term storage in conjunction with significant RAM resources. That is to say, the access to and structure of storage media is changed as little as possible so that the operating system does not have to be significantly altered to utilize flash memory.
While the objective of making a system see flash memory as RAM with its accompanying benefits of non-volatility is desirable, the approach taken by Ban fails to take full advantage of flash memory by continuing to rely heavily on RAM resources. This system then suffers from lack of fault tolerance which not only jeopardizes reliability, but slows down the entire system by requiring large data transfers between RAM and flash memory.
Accordingly, the challenge is to use a non-volatile, long-term storage medium such as flash memory which can take advantage of increased fault tolerance to power interruption, significantly reduced RAM resources, and reduced system overhead caused by data transfers between RAM and the storage medium, while overcoming the erasure drawbacks unique to flash memory.
OBJECTS AND SUMMARY OF THE INVENTION
It is an object of the present invention to provide a file system for non-volatile, long-term storage media which has a low processing overhead requirement, thus increasing data throughput.
It is another object of this invention to provide a file system which has particular application to the storage medium of flash memory.
It is yet another object of the present invention to provide a file system which is significantly fault tolerant, only losing data stored in a relatively small cache memory if power to the system is interrupted. It is a further object of the invention to provide a file system which does not require significant random access memory (RAM) resources.
It is yet another object to provide a file system which further reduces RAM requirements by replacing a memory map with logically linked serial data segments.
Another object is to provide a file system which efficiently and transparently erases flash memory in the background to further improve system performance. Yet another object is to reduce cost, space and power consumption by eliminating battery backup of information stored in a non-volatile primary memory.
Still another object is to provide a file system which uses absolute physical memory addresses to avoid the additional overhead created by memory mapping.
These and other objects are realized in a method of memory management for a primary memory created from non-volatile, long-term storage media, in particular flash memory, which enables direct manipulation of data segments stored therein. Furthermore, the present invention comprises two methods of data storage and subsequent retrieval. Common to both methods is the reality that the order in which data segments are subsequently recalled is typically not the order in which the data segments were saved to memory. Unique to each method, however, is the order in which data segments are logically linked to each other. The first and simpler method enables sequential recall of data in the order in which it was saved in memory. This sequential recall is referred to as logical recall because it refers to a sequence in time in which data was saved, and not the physical order which the data occupies in memory. This method is particularly useful for flash memory which has the characteristic of not being able to overwrite data in memory without first erasing previously recorded data. The second and more complex method allows for more sophisticated linking of files. Instead of strictly linking files based on the order in which they are received, files are logically linked based on subject matter. Specifically, files are logically grouped together within directories and subdirectories. Yet because all files can be traced back in origin to a single root directory, all files are still logically linked to each other. Both methods comprise the minimum steps of logically dividing the primary memory into equal size blocks, each block being the smallest amount of data which can be read from or written to memory in a single read or write operation. A cache memory the size of at least one of the read/write blocks is then coupled to the primary memory and provides temporary storage space for data being written to and read from primary memory.
In a write operation of a new data segment, a header is placed at the beginning of the segment. The header indicates which logically related data segment precedes the new data segment. The header also indicates the location of the next logically related and subsequent data segment. By defining a "current data segment" as being the physical address of the data segment (and header) where an operation such as reading or writing would occur, previous or subsequent data segments can be accessed by moving from data segment to data segment by reading the appropriate header entry for a previous or subsequent data segment. The header entry contains an absolute physical address in primary memory of the target data segment.
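The header-linked traversal just described can be sketched in C. This is a hedged illustration, not the patent's actual header layout: the field names, widths and the NO_SEGMENT marker are the editor's assumptions, and flash memory is emulated with an ordinary byte array.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Illustrative segment header: each data segment begins with the
   absolute addresses of the logically previous and next segments.
   NO_SEGMENT marks either end of the chain. */
#define NO_SEGMENT 0xFFFFFFFFu

typedef struct {
    uint32_t prev;  /* absolute address of previous logical segment */
    uint32_t next;  /* absolute address of next logical segment     */
    uint32_t len;   /* bytes of data following this header          */
} seg_header;

/* Emulated flash: read the header stored at absolute address 'addr'. */
static const seg_header *read_header(const uint8_t *flash, uint32_t addr)
{
    return (const seg_header *)(flash + addr);
}

/* Follow 'next' pointers from 'start' and return the address of the
   last segment in logical order - no RAM-resident map is consulted. */
uint32_t last_segment(const uint8_t *flash, uint32_t start)
{
    uint32_t addr = start;
    const seg_header *h = read_header(flash, addr);
    while (h->next != NO_SEGMENT) {
        addr = h->next;
        h = read_header(flash, addr);
    }
    return addr;
}
```

Note that the segments visited may sit anywhere in memory; the headers alone impose the logical order.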
These and other objects, features, advantages and alternative aspects of the present invention will become apparent to those skilled in the art from a consideration of the following detailed description taken in combination with the accompanying drawings.
DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart of the steps of Ban for writing data to primary memory with the use of a virtual memory map. FIG. 2 is a flowchart of the steps of Ban for reading data from primary memory to a RAM resource.
FIG. 3A is a block diagram of the components of a preferred embodiment of the present invention which utilizes the file system of the present invention. FIG. 3B is a block diagram of the components of an alternative embodiment of the present invention which utilizes a different buffering scheme than shown in FIG. 3A for utilization of the file system of the present invention. FIG. 3C is a block diagram of the components of another alternative embodiment of the present invention which utilizes a different buffering scheme than shown in FIGs. 3A and 3B for utilization of the file system of the present invention. FIG. 4 is a block diagram of the physical structure of primary memory in a preferred embodiment.
FIG. 5 is a block diagram of the logical memory structure which overlays the physical structure of FIG. 4. The illustration provides examples of Headers, Data Segments, Free Space and Bad Blocks which occupy memory in Logical Blocks.
FIG. 6 is a block diagram illustrating Memory Block Mapping. Each logical block as defined in FIG. 5 contains a memory block map. FIG. 7A is a functional diagram of the data structure linkage implemented in a preferred embodiment of the file system of the present invention.
FIG. 7B is a representation of a DOS file structure depicting a root directory, directories within the root, and subdirectories within the directories. FIG. 8 is a LIFO stack which represents one implementation of a recursive search method for moving backward through data stored in memory.
FIG. 9 is a functional representation of a playback tree structure as implemented in the present invention.
FIG. 10 provides a functional diagram of a playback tree before insertion of a new message within an existing message, and after insertion of the new message as implemented according to the principles of the present invention.
FIG. 11 illustrates a pre-existing message before and after message deletion as implemented according to the principles of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
Reference will now be made to the drawings in which the various elements of the present invention will be given numerical designations and in which the invention will be discussed so as to enable one skilled in the art to make and use the invention.
Before discussing the details of implementation of the file system and the resulting benefits, it is beneficial to be reminded of the basic principles of flash memory as they apply to the present invention. First, as has already been stated, flash memory requires that previously written data be erased before the occupied space is used to save new data. If data is not erased, the new data is either corrupted or not written at all. Second, flash memory is of course non-volatile.
This means that no back-up power source is required to retain the data once it has been written to memory.
The previously mentioned Ban patent, U.S. Patent No. 5,404,485, describes a virtual memory mapping method of memory management developed specifically for flash memory. The basic teaching of the patent is apparently an attempt to make flash memory appear as RAM. The problem, as stated previously, is that flash memory must erase previously saved data before new data can be written to the same memory space. Ban recognizes the inherent difficulty of using flash memory because of this limitation, and approaches the problem as illustrated in FIG. 1.
Ban is typical of previous methods of imitating RAM because Ban teaches that manipulation of data cannot be done in flash memory directly. Instead, data is always erased from flash memory, manipulated in RAM, and resaved to physically create a segment of manipulated data that appears in its complete and contiguous form. For example, step 10 looks for a contiguous block of memory large enough to store the entire manipulated file.
The present invention, however, realizes that data does not have to be contiguous in order to be readable in a logical or relational order. The present invention claims being able to manipulate data directly in flash memory because the flash file system of the present invention enables data to be read in a logical order regardless of how many segments the file is comprised of, and where these segments are saved in memory. That is to say, implementation of the flash file system requires that each data segment have written to it a header. Within the header in predetermined fields, absolute physical addresses are saved. These addresses are physical locations within flash memory of the next logical data segment. Thus, regardless of an actual physical location, data segments are retrievable in a seemingly contiguous order despite the fact that typically the data segments are not stored contiguously. Therefore, a key distinction between Ban and the present invention is that Ban continues to actually make all logically related data segments physically contiguous, while this invention advantageously only makes related data segments logically contiguous by creating headers which contain pointers to absolute physical locations within flash memory which provide a logical path to the data segments. Stated more explicitly then, one of the advantages of the method of the present invention is that overhead (an unproductive operation of a flash memory file system) is significantly reduced. For example, Ban requires that all related data segments (an entire file which could be several megabytes long) be read from flash memory to RAM, the modification to the data segments be made in RAM (even the insertion or deletion of a single data bit), and then the data segments transferred back to flash memory only if a completely contiguous flash memory location exists. If no such contiguous location is available, flash memory would then be erased.
If this process produces contiguous flash memory space which is insufficiently large, then other data segments adjacent to the desired memory space would also have to be read to RAM, erased from flash memory, and transferred back to flash memory at a new location so as to create a sufficiently large contiguous memory space for the originally manipulated file.
The process of Ban described above reveals the tremendous overhead which its file system not only can produce, but is likely to produce. Overhead is likely because fragmentation of memory is typical of storage media. Flash memory is particularly vulnerable because of the requirement that new data cannot be written directly over old data as in most other storage media. The present invention thus overcomes the significant overhead problem of Ban by completely eliminating the transfer of any data segment which is to be modified. All modifications take place as changes to data segment headers only. Therefore, modifications are done directly in flash memory, and this process will now be explained more fully. FIG. 1 shows in a relatively simplified flowchart the process for writing data to flash memory by Ban. A quick walk-through will highlight the important features. The method involves a two-step translation process. First, a virtual address is mapped to a logical unit address in step 10. Secondly, a logical address is examined to verify if it is unallocated in steps 12 and 14. If the logical address is free, the logical address is mapped to a physical address in step 16, and then the data is written to the physical address in step 18. If the logical address was allocated, logical addresses are examined to find free memory in step 20, and then the appropriate mapping and writing is accomplished as above. In Ban, the method uses the concept of dereferencing to allow frequent movement of data. Dereferencing of data is a common programming technique that is partly used because it allows frequently updated blocks of data to be moved within RAM without losing track of the data. Losing track of mapped data is the dreaded "dangling pointer" problem associated with dereferenced data. A dangling pointer occurs when a pointer providing the memory location of data is not updated with a new address of the data after it is moved.
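For illustration, the two-step translation attributed to Ban above can be reduced to the following C sketch. This is the editor's simplification, not code from the Ban patent; the table contents are invented, and the real method also handles allocation and free-space search.

```c
#include <stdint.h>
#include <assert.h>

/* Two-step translation: a virtual unit is first mapped to a logical
   unit, and the logical unit is then mapped to a physical address.
   All table contents below are invented for illustration. */
#define UNITS 4

static const uint32_t virt_to_logical[UNITS] = {2, 0, 3, 1};
static const uint32_t logical_to_phys[UNITS] = {0x4000, 0x0000, 0xC000, 0x8000};

uint32_t ban_translate(uint32_t virt_unit, uint32_t offset)
{
    uint32_t logical = virt_to_logical[virt_unit]; /* step 1: virtual -> logical  */
    return logical_to_phys[logical] + offset;      /* step 2: logical -> physical */
}
```

Every access pays for this double indirection, which is precisely the overhead the header-linked approach of the present invention avoids.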
In Ban's method, movement is required because all modifications to data must take place in RAM, then be saved back into flash memory. The implication of this process is obvious; overhead is tremendously taxing of processor resources as the processor becomes dedicated to the steps of FIG. 1. For example, even the slightest change to data requires reading all related data from flash memory to RAM, erasing the data from flash memory, modifying the data in RAM, and then saving all of the data back to flash memory in contiguous memory space. If all remaining contiguous data blocks are not sufficiently large for the data in RAM, memory must be rearranged to create a contiguous memory space.
FIG. 2 illustrates Ban's relatively simpler steps involved in reading data from flash memory, as opposed to the steps of writing described in FIG. 1. The key feature to recognize is that Ban's method requires indirection through virtual mapping to compensate for the frequent movements of data. The technique apparently enables flash memory to imitate the look of RAM, but at the crippling overhead cost of significant data movement when any modification is made.
A last significant feature of Ban which must be illustrated applies both to the method of Ban and the required hardware for implementation of the method. Specifically, Ban requires a relatively large amount of RAM. This large RAM requirement is necessitated by the complication that the largest block of data which is modifiable must be able to fit entirely in RAM for modification. A RAM resource sized for the worst-case scenario is costly to provide and to supply with power. Also stored in RAM is the virtual map. Ban apparently attempts to limit reliance on RAM and increase fault tolerance by forcing a portion of the virtual map into flash memory. By splitting the virtual map, Ban also succeeds in increasing overhead again by creating a non-trivial process for locating the physical address of data as illustrated in FIG. 1. The attempt at increased fault tolerance is also only partially successful. For example, and this point must be emphasized, any modified data in RAM is permanently lost if power is removed. Obviously the data loss can be significant because an entire file in RAM whose previously occupied space in flash memory has just been erased would be irrevocably lost if power were interrupted. Furthermore, all virtual mapping in RAM is also lost, and must be reconstructed by scanning a block usage map that resides at the top of each block. This process must also be executed at system startup.
The objectives of the Ban patent are highly desirable, but implementation using the technique of virtual mapping leaves any system using the Ban method not only vulnerable to significant data loss, but tied to a method which inherently cripples itself with overhead requirements. The result is a costly device which requires significant RAM resources, a powerful processor to manage the overhead, and a larger and heavier power source to supply the needs of the system. It seems that the benefits of flash memory, such as being non-volatile, small, fast, and a relatively conservative user of power are lost. The present invention takes a very different approach to memory management. This new approach, embodied in a method and apparatus, overcomes the significant drawbacks of Ban. This is accomplished by taking advantage of the properties of flash memory, instead of treating them as a liability.
To understand the new method, it is necessary to have an understanding of the arrangement of the underlying hardware. The apparatus of the present invention is shown in block diagram form in FIG. 3A. As illustrated, the components of a system utilizing the memory management method of the present invention include an I/O device 30, a processor 32, a small cache memory 34 for temporary storage of data, and in a preferred embodiment, a relatively larger flash memory 36 for non-volatile, long-term storage of data.
What is important to understand about FIG. 3A which significantly differentiates the present invention from Ban is that the cache memory 34 in a preferred embodiment is only as large as a single read/write block of data as defined for the flash memory and to be explained in detail. The read/write block is typically as small as 256 bytes of data, and consequently the size of the cache memory 34 is also the same as the read/write block.
The size of the cache memory 34 is also constant in the present invention. This is also significantly different from Ban which admits that the size of RAM must grow accordingly relative to the size of flash memory provided by the system. This is because Ban uses RAM for memory mapping. More flash memory as primary memory in Ban requires an appropriately larger sized RAM, whereas this invention strictly uses a cache memory 34 to temporarily store a block of read/write data.
Having stated some of the most significant differences in physical structure of systems using the present invention and Ban, it is now possible to see how the method of this invention is able to significantly reduce overhead while increasing fault tolerance.
The file system of the present invention is adaptable to use many conventions of existing file structures of operating systems currently in use so that it may be considered hardware independent. This feature of portability is crucial in this day of multiple hardware platforms for storing digital data. Therefore, the Norris Flash File System (referred to hereinafter as "file system") can also be viewed at different levels of implementation. That is to say, the file system is working at several different levels so as to be compatible with existing hardware. For example, there is a physical memory structure, memory block mapping, and a file system logical overlay of the physical memory structure as shown in FIGS. 5 and 6. There is also a hierarchy for cataloging data saved in the memory. For example, the file system is compatible with DOS found on many Intel microprocessor-based personal computers today. This is accomplished by including with the file system elements which make files saved by the file system recognizable to DOS. This includes using DOS conventions such as a maximum of eight characters for a file name followed by a three-character extension. More features of the file system, to be explained later, include implementation of an Application Program Interface (API) which consists of higher level file system function calls, as well as BIOS (hardware specific) function calls which execute the functions of the API calls at the hardware level.
However, it is important to remember that the file system is only making the chosen long-term storage medium appear as RAM, which an operating system such as DOS expects to use for data manipulation. Furthermore, the present invention provides a file system which appears to have significant RAM resources, when in fact there is only a non-volatile long-term storage medium and a small cache, typically comprised of RAM.
As stated before, flash memory is chosen as the non-volatile long-term storage medium of this preferred embodiment because of its desirable characteristics, and its limitations are then dealt with and overcome by the file system. However, it is imperative to understand the organization and limitations of the preferred memory.
Understanding the organization of any flash memory begins with FIG. 4 which is a diagram of memory structure used in the preferred embodiment. Shown at its simplest level, flash memory is divided up logically into blocks 40. A block 40 represents the smallest amount of physical memory which must be manipulated to accomplish a specific file system operation. The operations which concern this invention are reading, writing, and the particularly unusual requirement of erasure uniquely applicable to flash memory.
Typically, flash memory reads and writes to the same size block 40 of physical memory. However, erasing is a function that is presently limited to larger blocks 42. A typical erase block 42 is shown in relative proportion to the read/write blocks 40. The specific sizes of the read/write and erase blocks 40, 42 are typically a function of the technology which is used for flash memory, as well as desired optimization of operation. There are presently two types of flash memory chips on the market. A first chip uses NAND technology and is presently the preferred type of memory for this invention. The second chip uses NOR technology. The specific attributes of these chips are summarized in the table below which contrasts typical 1 megabyte (8 megabit) NOR and NAND devices.
Type    Erase Block Size    Number of Erase Blocks    Read/Write Block Size    Number of Read/Write Blocks
NOR     64K                 16                        1                        1M
NAND    4K                  256                       256                      4K
Table 1
While Table 1 shows that the minimum read/write block 40 size is 256 bytes in NAND flash memory, in practice it has been found that it is sometimes advantageous to stream data as shown in FIG. 3B. Streaming data comprises using a first cache memory 50 for reading from and writing to a primary memory 30 while using a second cache memory 52 for storing data that is being processed. Advanced streaming includes making the two cache memories switch tasks as shown in FIG. 3C, allowing the file system to read or write data to primary memory concurrently. It is important to note that NOR flash can actually read data from and write data to flash memory and process it without caching data, but for consistency in a preferred embodiment, the NOR and NAND technology flash memories are treated the same. Essentially, this leaves the NOR flash memory with double the normal size cache memory just for buffering of data if streaming is implemented.
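The two-cache streaming scheme can be sketched as a simple ping-pong copy. This C fragment is an editor's illustration under stated assumptions (a 256-byte block, plain memory-to-memory copies standing in for flash reads and for processing); on real hardware the two roles would proceed concurrently rather than back to back.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

#define BLOCK 256  /* read/write block size assumed from Table 1 (NAND) */

/* Copy 'nblocks' BLOCK-sized blocks from 'src' (standing in for
   primary memory) to 'dst', alternating between two block-sized
   caches the way FIGs. 3B/3C alternate read and process buffers. */
void stream_copy(const uint8_t *src, uint8_t *dst, size_t nblocks)
{
    uint8_t cache_a[BLOCK], cache_b[BLOCK];
    uint8_t *bufs[2] = {cache_a, cache_b};
    for (size_t i = 0; i < nblocks; i++) {
        uint8_t *cache = bufs[i % 2];            /* ping-pong selection */
        memcpy(cache, src + i * BLOCK, BLOCK);   /* "read" the block    */
        memcpy(dst + i * BLOCK, cache, BLOCK);   /* "process/write" it  */
    }
}
```

Only two fixed 256-byte caches are ever resident, regardless of how large the streamed file is.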
In comparison, conventional file systems of computer operating systems which do not page memory load the entire file into RAM for manipulation. Depending on the application, the largest file size could be multiple megabytes, requiring a correspondingly large RAM resource. For example, in a file creation mode, conventional file systems build the entire file in RAM, and then copy the file to flash memory or some other long-term storage medium such as a hard disk when the creation of the file is complete. The obvious benefit of the file system, then, is avoiding the use of a large RAM resource and yet accomplishing typical RAM-like functions.
The level of memory structure above the physical read/write and erase blocks 40, 42 is the logical memory structure shown in FIG. 5. In other words, the file system of the present invention advantageously organizes a logical memory structure over the physical memory structure. The logical layer maps "logical" blocks 60 onto the physical memory 62. Typically for NAND flash memory, a logical block 60 is composed of one or more erase blocks 42. Preferably, the physical memory 62 is divided into at least 1024 logical blocks 60, provided that 1024 erase blocks 42 are available. While not apparent, the number of logical blocks 60 is important because it represents a key balance between performance and overhead of the file system. Having 1024 logical blocks 60 allows for an effective 0.1% erase block 42 (and bad block) granularity.
Thus, with larger read/write and erase block NOR flash memory, there is typically one logical block 60 for each erase block 42. With a 2 megabyte NOR flash memory, only 32 logical blocks 60 would be allocated. With a 2 megabyte NAND flash memory and a 4 kilobyte erase block size, only 512 erase blocks 42 exist, and only 512 logical blocks 60 could be allocated. However, if the NAND flash memory size is increased to 8 megabytes, 1024 logical blocks 60 would be allocated, each containing 2 erase blocks 42. To calculate the number of erase blocks 42 per logical block 60, the inventors use the formula:
Blocks = int(((Total memory / Size of erase block) + 1023) / 1024)
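The formula can be checked directly in C; integer division supplies the int() truncation, and the figures above (32 logical blocks for a 2 megabyte NOR device, 512 for a 2 megabyte NAND device, and 1024 logical blocks of 2 erase blocks each for an 8 megabyte NAND device) all fall out of it. The function names are the editor's, not the patent's.

```c
#include <stdint.h>
#include <assert.h>

/* Erase blocks per logical block, per the patent's formula:
   int(((total memory / erase block size) + 1023) / 1024). */
uint32_t erase_blocks_per_logical(uint32_t total_bytes, uint32_t erase_block_bytes)
{
    uint32_t erase_blocks = total_bytes / erase_block_bytes;
    return (erase_blocks + 1023) / 1024;   /* integer division = int() */
}

/* Resulting number of logical blocks for the volume. */
uint32_t logical_block_count(uint32_t total_bytes, uint32_t erase_block_bytes)
{
    uint32_t erase_blocks = total_bytes / erase_block_bytes;
    return erase_blocks / erase_blocks_per_logical(total_bytes, erase_block_bytes);
}
```

The +1023 term rounds up so that the volume never exceeds roughly 1024 logical blocks, preserving the 0.1% granularity discussed above.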
Each logical block 60 may also be divided into one or more segments. Segments can be sized from a minimum of the read/write block 40 up to the maximum size of the logical block 60. As a rule, at least one segment is allocated for each file, file header, directory header, etc. Thus, large files that span multiple logical blocks 60 will have at least one segment for the file header and another segment for each logical block 60 only partially utilized. It should be apparent that a segment is sized according to the requirements of the data to be saved. As will be shown, different headers occupy different amounts of memory, and are coupled to different sizes of data segments.
The logical memory structure mapped to the physical memory structure shown in FIG. 5 quickly illustrates some of the basic memory concepts explained above. As an example, a logical block 60 is comprised of four erase blocks 42, each of which is in turn comprised of a plurality of read/write blocks 40. Also shown are examples of occupied 64, free 66 and bad memory 68 segments.
A last memory principle requiring explanation is the file system concept of memory block mapping.
Logical blocks 60 shown in FIG. 6 are comprised of up to four erase blocks 42. The memory block map indicates good erase blocks 42 and bad erase blocks 68. As indicated previously, a single bad bit within an entire erase block 42 renders the erase block useless because there is no way of preventing the system from writing to the bad bit, thereby corrupting data. A bad block 68 is indicated in the memory block ID mapping 70 as a 0 bit 72. As part of memory block mapping, a logical block control map 74 contains within it the beginning physical address of each block 76, the physical address of the beginning of the next available memory space within the logical block 78, and a map of bad blocks 80 within the logical block 60 to which data should not be written. The Memory Block Mapping is shown in FIG. 5 as the Block List 82 section of each Logical Block 60.
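The memory block map might be rendered in C as follows. This is an illustrative sketch only: the structure and field names are the editor's, and the bit encoding (bit i set for a good erase block i, 0 for bad, matching the 0-bit bad-block indication above) is an assumed convention.

```c
#include <stdint.h>
#include <assert.h>

/* Sketch of the logical block control map of FIG. 6: up to four
   erase blocks per logical block, with a bitmap in which a 0 bit
   marks a bad erase block that must never be written. */
typedef struct {
    uint32_t base_addr;   /* physical start of the logical block     */
    uint32_t next_free;   /* next unwritten address inside the block */
    uint8_t  good_map;    /* bit i set => erase block i is usable    */
} logical_block_ctl;

int erase_block_usable(const logical_block_ctl *lb, unsigned i)
{
    return (lb->good_map >> i) & 1u;
}

/* Return the index of the first usable erase block, or -1 if all
   four are bad (the whole logical block is then unusable). */
int first_usable(const logical_block_ctl *lb)
{
    for (unsigned i = 0; i < 4; i++)
        if (erase_block_usable(lb, i))
            return (int)i;
    return -1;
}
```

Because the map travels with each logical block, no global RAM-resident table is needed to avoid bad blocks.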
Keeping these aspects of memory structure in mind, it is now possible to discuss the type of data which is stored in primary and cache memory, but not yet how it is stored. As indicated previously, the file system stores more than simply data files. It is the organization of the memory which comprises a key point of novelty of the present invention. The file system is organized through the use of headers. Headers store most of the critical information which enables the file system to read data from memory, as well as to be able to search for data. While headers per se are not novel, the information stored in the headers of the file system creates unique benefits to the user.
FIG. 5, which depicts the logical-to-physical memory structure, hinted at the use of headers when it showed Volume 90, File 92, Directory 94 and Device 96 Headers in Logical Blocks 2, 3, 4 and 5. In the preferred embodiment of the present invention, there are six possible header structures, and several other header-type data structures. The six headers listed in hierarchical relevance are the Device Header, Volume Header, Directory Header, File Header, Data Header, and the Tree Header.
The elements of the Device Header are shown in Table 2 below.

Name Size Type Description
Segment ID 2 Word Unique number assigned by system
Size 2 Word Number of bytes used after this field
Type 2 Byte A501h
Version 2 Byte MSB: Major, LSB: Minor of formatting host
Attributes 2 Word DOS attributes & "deleted" attribute
Hardware Configuration 2 Word
Manufacturing Data Code 2 Word
Lot # 2 Word
Serial Number 4 DWord
Erase Block Size 4 Dword
Erase Block Count 4 DWord
Logical Block Size 2 Word Number of Erase Blocks per Logical Blocks
Creation Date/Time 4 Long Seconds since January 1, 1980
Erase-in-Process 1 Char 00h-In process, 01h-Complete, FFh-Restore
Delete-in-Process 1 Char 00h-In process, 01h-Complete, FFh-Restore.
Number of Bad Blocks 2 Word
Bad Block List 2x32 Word Block numbers of bad blocks
Reserved 154 Char Reserved for Norris
Table 2

The Device Header, most importantly, contains information relevant to the hardware of the particular primary memory medium being organized by the file system. The file system looks to the Device Header when it creates the logical memory layout over the physical memory as discussed previously. For example, the erase block size 42 and logical block size 60 are written here. FIG. 5 correctly shows only a single Device Header 96 in memory, because typically only one type of flash memory, or of any primary memory, is used at one time. More plainly, this means it is not possible for the file system in a preferred embodiment to control both a NAND and a NOR flash memory simultaneously. Nevertheless, by providing a Device Header 96, it is possible to implement a combination of the two.
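It may be observed that the header tables of this disclosure all begin with the same three fields (Segment ID, Size, Type). Purely as a hedged sketch, reading that common prefix from raw flash bytes might look like the following in C; the little-endian on-flash byte order is an assumption, not stated in the text.

```c
#include <assert.h>
#include <stdint.h>

/* Common prefix shared by the header tables above. */
typedef struct {
    uint16_t segment_id;  /* unique number assigned by system          */
    uint16_t size;        /* number of bytes used after the Size field */
    uint16_t type;        /* e.g. A501h Device, A502h Volume, ...      */
} HeaderPrefix;

/* Read a 16-bit word, assuming little-endian on-flash layout. */
static uint16_t rd16(const uint8_t *p)
{
    return (uint16_t)(p[0] | ((uint16_t)p[1] << 8));
}

static HeaderPrefix read_header_prefix(const uint8_t *raw)
{
    HeaderPrefix h;
    h.segment_id = rd16(raw);
    h.size       = rd16(raw + 2);
    h.type       = rd16(raw + 4);
    return h;
}
```

The Type word then distinguishes Device (A501h), Volume (A502h), Directory (A508h), File (A510h) and Data (A520h/A521h) segments when scanning memory.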
The next most relevant header structure is the Volume 90 Header. The elements of the Volume Header are shown below in Table 3.
Name Size Type Description
Segment ID 2 Word Unique number assigned by system
Size 2 Word Number of bytes used after this field
Type 2 Byte A502h
Version 2 Word MSB: Major, LSB: Minor of formatting host.
Attributes 2 Word DOS attributes & "deleted" attribute
Name 11 Char DOS Volume Name
(filler) 1 Char
Creation Date/Time 4 Long Seconds since January 1, 1980
Description 20 Char
Access Password 20 Char
Read Password 20 Char
Reserved 42 Char Reserved for Norris
Primary Segment Index 2x32 Struct Entries vary with number & size of Logical Block
Next Segment Index 4 DWord Address of First Secondary Segment Index
Table 3
The Volume Header information pertains principally to the logical memory mapping structure shown in FIG. 5. The Volume Header is easily analogized to the format information of a typical hard disk or floppy disk medium. Until formatted, a hard disk is useless to an operating system's file system. Formatting provides the logical structure overlay which a file system uses to determine the logical and corresponding physical location of data stored therein. Therefore, an element of the Volume Header is the expected DOS volume name. Typically, the present invention also requires only a single Volume Header 90 for the selected memory medium.

Beneath the Volume Header 90 in the hierarchical structure comes the Directory Header 94. Those familiar with DOS will recognize the root directory as essentially the root node from which various subdirectories branch. Directories are created so as to organize related data files. However, the root directory in DOS is no different in structure from a subdirectory which resides in the root directory, except for its logical location. Similarly, a first Directory Header 94 forms the root directory of the file system. All subsequently created directories are logically located within the root directory. The elements of the Directory Header are listed in Table 4 below.
Name Size Type Description
Segment ID 2 Word Unique number assigned by system.
Size 2 Word Number of bytes used after this field.
Type 2 Byte A508h
Version 2 Word MSB: Major, LSB: Minor of formatting host
Attributes 2 Word DOS attributes & "deleted" attribute.
Owner ID 2 Word Segment ID of Directory Header (0 if root)
Directory ID 2 Word Unique number assigned by system.
Name 8 Char DOS file name.
Extension 3 Char DOS file extension.
(filler) 1 Char
Creation Date/Time 4 Long Seconds since January 1, 1980
Description 20 Char
Access Password 20 Char
Read Password 20 Char
Reserved 38 Char
Table 4

Like DOS file names, directory names are comprised of a maximum eight-character name and a three-character extension.
The Device, Volume and Directory Headers 96, 90, 94 described above are the minimal file structure Headers that must exist for the file system to operate. The Device, Volume and a root Directory Header 96, 90, 94 are thus created when a flash memory device is first initialized. These Headers all exist, however, to support the file structure in being able to logically and physically store data, which is the purpose of the file structure. It is left to the File Header 92 structure to actually begin storing data. The elements of the File Header are shown in Table 5 below.
Name Size Type Description
Segment ID 2 Word Unique number assigned by system.
Size 2 Word Number of bytes used after this field
Type 2 Byte A510h
Version 2 Word MSB: Major, LSB: Minor of formatting host.
Attributes 2 Word DOS attributes & "deleted" attribute
Owner ID 2 Word Segment ID of Directory Header
File ID 2 Word Unique number assigned by system.
Name 8 Char DOS file name.
Extension 3 Char DOS file extension
(filler) 1 Char
Creation Date/Time 4 Long Seconds since January 1, 1980
Data Type 2 Word
Description 20 Char
Access Password 20 Char
Read Password 20 Char
Author ID 2 Word ID assigned to author.
Job Number 2 Word Job number.
Priority Level 2 Word Job priority
Transcriber ID 4 DWord
Reserved 26 Char
Table 5
Notable additions to the elements are a data type and description. One File Header 92 precedes every data file within the file system regardless of the number of data segments required to store the file data. Closely connected to the File Header is the Data Header. The elements of the Data Header are listed in Table 6 below.

Name Size Type Description
Segment ID 2 Word Unique number assigned by system
Size 2 Word Number of bytes used after this field
Type 2 Byte A520h (data)/A521h (erasure)
Version 2 Word MSB: Major, LSB: Minor of formatting host
Attributes 2 Word DOS attributes & "deleted" attribute
Owner ID 2 Word Segment ID of File Header
Data size 2 Word Size of user data
Insert ID 2 Word Segment ID to insert this data (FFFFh = beginning)
Insert Offset 2 Word Offset within insert Segment's data
Edit Date Time 4 Long Seconds since January 1, 1980
Reserved 42 Char
Table 6

The Device, Volume, Directory, File and Data Headers form the basis for the file system. It might be thought that these are sufficient to organize data within any file system. However, to stop here ignores the perils of flash memory. Therefore, the file system, to be adaptable to any storage medium, must contain sufficient structure to adapt to flash memory even if that structure is not used by other storage media.
If flash memory is to serve as primary memory with a relatively small RAM resource which cannot easily be used for data manipulation, the file system must also provide a method for easily accessing the data stored in flash memory. Providing easy access is what makes the use of flash memory an advantage instead of a liability.
The present invention uses two additional header structures to enable the file system to see flash memory as RAM. They are the Tree Header and a header-like data structure, the Tree Node. These structures are listed below in Tables 7 and 8 respectively.
Name Size Type Description
Size 2 Word Number of bytes in header
File ID 2 Word Message ID of file
First Node 2 Word Address of first node in tree
Next Tree 2 Word Address of next tree header
Table 7
Name Size Type Description
Size 2 Word Number of bytes in header.
Next Node 2 Word Address of next node in tree.
Start Data ID 2 Word Segment ID of starting Data Segment.
Start Data Pointer 2 Word Address of starting Data Segment.
Start Data Offset 4 DWord Starting offset within starting Data Segment.
End Data ID 2 Word Segment ID of ending Data Segment.
End Data Pointer 4 DWord Address of ending Data Segment.
End Data Offset 2 Word Ending offset within ending Data Segment.
Table 8
The Header structures described in Tables 2 through 8 are the minimum essential elements of the file system required to be able to organize data such that it can be saved and recalled from primary memory. However, access to data does not necessarily mean that the access provided by the file system is sufficiently rapid in real-time for particular applications.
For example, the file system of the present invention is particularly useful for the flash memory used as primary memory in U.S. Patent No. 5,XXX,XXX for a HANDHELD RECORD AND PLAYBACK DEVICE WITH FLASH MEMORY by Norris et al. This patent is incorporated by reference herein, and provides a detailed example of the benefits of flash memory in a portable recorder. The essential feature of playback provided by the handheld recorder illustrates the requirement for real-time recall of data. In particular, it was mentioned that there are two methods of memory storage, organized by time or by subject matter. A dictaphone typically operates by storing data by time only. However, it should now be apparent that organizing voice messages by subject matter is possible.
To facilitate rapid recall of data, it is not enough that the present invention provide a novel method and apparatus for sequential recall of logically ordered data. To show that the features of RAM are actually provided by the present invention, real-time implementation requires rapid data recall. This is implemented and assured through the use of the Tree Header and Tree Node. The Tree Header and the Tree Node are a novel and essential part of the file system. The information stored therein is used to facilitate rapid data access to more accurately imitate the abilities of RAM through the use of a playback tree structure. The last header structures of the present invention are the Segment Index Entry and the Secondary Segment Index, whose members are shown in Tables 9 and 10 respectively.

Name Size Type Description
Segment ID 2 Word ID of Segment
Type 2 Word Segment Type (directory, file, data, etc.)
Next ID 2 Word ID of next Segment in chain (FFFFh = end)
First ID 2 Word ID of first Segment in chain (FFFFh = end)
Address 4 DWord Physical address of Segment
Table 9
Name Size Type Description
Segment ID 2 Word Unique number assigned by system.
Size 2 Word Number of bytes used after this field.
Type 2 Byte A504h
Version 2 Word MSB: Major, LSB: Minor of formatting host.
Attributes 2 Word DOS attributes & "deleted" attribute.
Segment Index 12 x ? Struct Entries vary with block size & chips used.
Next Segment Index 4 DWord Address of next Secondary Segment Index.
Table 10

The Segment Index Entry is an element of every Header structure shown above, and provides essential data structure linkage to be explained. The Secondary Segment Index fulfills the function of extending the Primary Segment Index of the Volume Header when more segments exist than the Primary Segment Index can list, with each Secondary Segment Index pointing to the next through its Next Segment Index field.
The disclosure to this point has merely hinted at the potential of the file structure. The building blocks of the file structure have at least now been sufficiently explained such that their implementation as parts of the file structure can be observed more closely at a functional level.
The functional description should begin with a look at how some of the Headers are logically linked to accomplish the objects of the present invention. To that end, FIG. 7A is a functional diagram of the data structure linkage in a preferred embodiment. The preferred embodiment includes logically linking data segments by subject matter, where directories are used to define different subjects. This is in contrast to linking data segments organized only by the order in which they were recorded.
Essentially, Headers and the file and data segments to which they are appended are all linked in a manner which provides sequential recall of data organized by subject matter. It is easy to see how this concept is important to the Norris Patent by observing that data segments which represent portions of a single voice message, but are saved noncontiguously in memory, must be recalled as they were spoken. In a like manner, several discrete but logically related voice messages could be played back in sequence if logically stored in the same directory. It is the limitation of memory file fragmentation which causes voice data not to be stored sequentially, thereby complicating playback. The file system as shown in FIG. 7A enables logical data recall.
Referring first to FIG. 7B, a representation of a DOS file structure is shown for illustration purposes. The lowest level of the file structure is the root directory 100. Within the root directory 100 is at least one directory 102, but there may be many more. Within each directory 102 a variable number of subdirectories 104 may also be created, which typically contain subsets of data related to the directory in which they are located.
Referring back to FIG. 7A, the Directory Headers 110 and 112 may be considered to be directories 102 within the root directory 100 of FIG. 7B. While Table 4 shows all of the information stored in the Directory Header, the essential functional linkage information is shown as the ID 114, the Next File ID 116, and the First File ID 118. These functional fields correspond to the fields in the Segment Index Entry 112 of Table 9 which, as stated above, is appended to every Header.
As shown, the Next File ID 116 provides a pointer to the physical address where the next Directory Header ID 120 resides. Thus, the Next File ID 116 might more accurately be referred to in the Directory Header 110 as a "Next Directory ID". However, functionally, the Directory Header 110 is just another file, so the label remains appropriate and is kept for the sake of consistency. The First File ID 118 is naturally associated with an actual File Header.
As shown in FIG. 7A, File Headers such as File Header 124 also have an appended Segment Index Entry. However, two fields have different functions. The Next File ID 126 actually points to a next File Header ID 128 within the Directory 112, instead of pointing to another Directory. The First File ID 130 has also been replaced with the First Data ID 132. This field, as shown, points to a Data Header 134 having an ID 136, a Next Data ID 130, and an actual data field 140.
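The forward linkage of FIG. 7A can be illustrated with a small C sketch that follows a directory's First ID and then successive Next IDs, using the Segment Index Entry fields of Table 9. The in-RAM index array and function names are assumptions for illustration; FFFFh terminates a chain as stated in Table 9.

```c
#include <assert.h>
#include <stdint.h>

#define END_ID 0xFFFFu   /* FFFFh = end of chain (Table 9) */

/* Subset of the Segment Index Entry of Table 9. */
typedef struct {
    uint16_t seg_id;    /* ID of Segment                  */
    uint16_t next_id;   /* ID of next Segment in chain    */
    uint16_t first_id;  /* ID of first child Segment      */
} SegIndexEntry;

/* Locate an entry by Segment ID in a hypothetical in-RAM index. */
static const SegIndexEntry *find_seg(const SegIndexEntry *idx, int n,
                                     uint16_t id)
{
    for (int i = 0; i < n; i++)
        if (idx[i].seg_id == id)
            return &idx[i];
    return 0;
}

/* Count segments linked under a directory by following its First ID
 * and then each Next ID until the FFFFh terminator. */
static int count_children(const SegIndexEntry *idx, int n, uint16_t dir_id)
{
    const SegIndexEntry *d = find_seg(idx, n, dir_id);
    int count = 0;
    uint16_t id = d ? d->first_id : END_ID;
    while (id != END_ID) {
        const SegIndexEntry *e = find_seg(idx, n, id);
        if (!e)
            break;
        count++;
        id = e->next_id;
    }
    return count;
}
```

The same walk applies one level down, where a File Header's First Data ID starts a chain of Data Headers.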
While the data structure linkage illustrated functionally in FIG. 7A appears rather straightforward, with directories pointing to other directories or files within themselves and the files in turn pointing to related data, there are still several features which are not obvious, and bear explanation to fully understand the advantages of the present invention. For example, while the linkage shows how to move down through the layers of directories, subdirectories, files and data segments, the arrows 142 are distinctly unidirectional. That is, they only point down a link, but not back out. It was stated rather simplistically before that all data within the primary memory was sequentially linked. This might lead one to believe that FIG. 7A is then in error, because the last data segment of a data header such as data segment 144 does not point to the beginning of a next file or directory. On the contrary, File Header 124 and not the Data Header 134 is shown to point to a subsequent File Header 146 and to a Directory (subdirectory) Header 148. The reason for the more complicated linkage structure and the method for providing backward linkage are as follows.
When it was stated that the present invention is distinctly suited for the handheld recorder of the Norris patent, that is because a dedicated dictaphone-type device using a magnetic storage tape does not require a very complicated hierarchical data linkage structure. All messages can only be stored sequentially. However, a more sophisticated data structure can provide the ability to put all recorded voice messages related to a certain topic within a particular directory so that recall can be facilitated by looking for subject matter. Likewise, an even more detailed data structure might further categorize topics within a single subject by providing subdirectories within a directory. Therefore, it is more advantageous for the file system to be able to logically link related data segments within directories and subdirectories, rather than link data segments in the exact time sequence order in which the data was recorded.
Therefore, in a voice recorder such as the one taught by Norris, if the user records a first message regarding topic A, then records a second message regarding topic B, then records a third message regarding topic A, it would be more advantageous to provide a method and apparatus such as the present invention for logically linking the first and third messages together within the same directory such that the first and third messages could be played sequentially without also playing the second message. For example, in FIG. 7A, the first message could be the "Large File" 150, the third message could be the "Small File" 152, and the second message could be File Header 154.
Throughout all of the discussion above, it is imperative to remember that the present invention makes it possible to recall data which is likely not stored contiguously in primary memory. This enables the present invention to ignore the sequence in which messages were recorded, thereby facilitating subject matter organization of files. More specifically, the headers of the present invention are written so as to contain pointers which point to files which a user deems to be logically related by subject. Therefore, while each Directory Header, File Header, and Data Header is comprised of a contiguously stored data segment in memory, logically related data segments such as 156, 158 and 160 may be physically stored in very different locations in memory. This is very advantageous because trying to move data so that all logically related segments 156, 158 and 160 reside in contiguous memory would place a very heavy overhead burden on a processor which is otherwise engaged in playback or digitizing and compression of the voice messages.
Having discussed the advantages of providing the data structure of FIG. 7A, it is still necessary to discuss how the present invention provides the necessary backward linkage. Specifically, being able to get down to File Header 124 leaves the task of getting back out to Directory Header 112 so as to be able to access other directories and files. This recursive searching is accomplished through the ID fields such as 114, 120, and 162. By tracking ID fields and their absolute physical locations in primary memory, it is always possible to go back "up" a link just as easily as it was to move down the link. One possible implementation of this linkage is to create a "Last In, First Out" (LIFO) stack.
For example, assume the first entry in the LIFO stack is always the location of the root directory Directory Header 100 as shown in FIG. 8. Following the route illustrated in FIG. 7A, the next entry would be the location of the next Directory Header 112. The user then enters a file 124 within that directory 112. At this point, if the user goes down into the file, subsequent stack entries are pointers to sequentially subsequent Data Headers 134, 164 and 166. When the last Data Header 166 in the file 124 is reached, the user simply backs out of the LIFO stack by pulling from the stack the absolute locations in memory of previous headers until reaching a Header which provides the location of the next file within the directory, or continues backwards through the LIFO, popping off addresses of hierarchically greater Headers until reaching the desired branching point, shown in FIG. 8 as the File Header 124. Those skilled in the art will appreciate other methods of backward linkage.
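One possible LIFO stack of header locations can be sketched in C as follows; the stack depth and names are assumptions for illustration only, not a disclosed implementation.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical LIFO stack of absolute header locations in primary
 * memory, one way to provide the backward linkage described above.
 * The depth of 16 is an assumption for illustration. */
#define NAV_DEPTH 16

typedef struct {
    uint32_t addr[NAV_DEPTH];
    int top;
} NavStack;

/* Record a header location as the user moves "down" a link. */
static void nav_push(NavStack *s, uint32_t header_addr)
{
    if (s->top < NAV_DEPTH)
        s->addr[s->top++] = header_addr;
}

/* Back "up" a link: return the most recently visited header location,
 * or 0 if the stack is empty (root has been reached). */
static uint32_t nav_pop(NavStack *s)
{
    return s->top > 0 ? s->addr[--s->top] : 0;
}
```

Pushing the root Directory Header, a subdirectory Header and a File Header in that order lets the user pop them back in exactly the reverse order, retracing the path out of the hierarchy.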
The discussion above has focused on the data linkage structure which provides the framework for the file system to move from data segment to data segment, backwards or forwards as desired. Forward linkage is accomplished by looking to Next File IDs, First File IDs, and First Data IDs shown in FIG. 7A. Backward linkage may be accomplished through such techniques as a LIFO stack. However, another file system structure is provided by the present invention when previously recorded data is to be recalled. For example, if a voice message is to be played, if a new voice message is to be inserted within an existing message, or if an existing portion of a message is to be deleted, these operations all take place in a tree structure in a portion of primary memory permanently set aside for manipulating data in these ways. A playback tree structure provides a place in memory without any Directory Headers or otherwise irrelevant data which has nothing to do with the task of manipulating a particular data segment, in this case a voice message.
For example, a playback tree structure as implemented in the present invention is shown in FIG. 9. The figure shows a playback tree which is used to logically recall data which is probably not stored sequentially in physical memory. As stated previously, the present invention is ideal for recalling data which has been stored in a critical sequence. The nature of file fragmentation typically precludes related data from being stored contiguously. As shown in FIG. 9, the playback tree has a single Tree Header 180 and at least one Tree Node 182, 184, 186, but typically many more. The playback tree essentially functions as a shorthand method of recalling data. In the case of the handheld recorder of Norris, the data represents recorded voice messages. Data must be recalled at a fairly consistent rate to provide accurate voice or sound reproduction. Rapid data recall is implemented in a preferred embodiment by setting aside an area of primary memory as "work" memory. Work memory is reserved at all times for playback trees. The work memory essentially becomes a mirror of the structure of File and Data Headers in primary memory.
One of the novel advantages of the present invention includes a method for seamless insertion of a voice message within another voice message. Seamless insertion means that a user can interrupt an existing voice message at any point, record a new message, and play back the existing message with the new message seamlessly interrupting the existing message at the point where the new message was recorded, then continuing on with the rest of the existing message when the new message finishes playing. The present invention accomplishes this advantageous and seamless insertion process through the use of the headers already explained above. More specifically, the present invention uses the functional structure in FIG. 10. FIG. 10 provides a functional diagram of a playback tree before insertion of a new message 190 within an existing message 188, and after insertion of the new message 190. Normally, the branch node 192 of a tree node is not used. However, when a message is inserted, Tree Node 1 (182) branches to Tree Node 2 (194). The Data Size field 196 is modified to reflect the portion of the original message to be played up to the insertion point, referred to as pre-insertion data 198. After the shortened data segment 198 has been played, Tree Node 2 (194) branches to Tree Node 3 (200). Tree Node 3 consists of the inserted message 190. Finally, Tree Node 3 branches to Tree Node 4 (202), which consists of a data segment comprising all of the original message after the insertion point, referred to as post-insertion data 204.
If the playback tree structure is then to be used to play back the entire new message consisting of data segments 198, 190 and 204, the message would seamlessly play the data segments as if the message consisted of a contiguously stored message comprised of Pre-insertion Data 198, Inserted Data 190, and Post-insertion Data 204.
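The insertion sequence of FIG. 10 can be sketched with a simplified C model. The node fields here are assumptions standing in for the Tree Node fields of Table 8 (start/end offsets within a data segment and a next-node link), and the in-RAM strings stand in for data segments in flash.

```c
#include <assert.h>
#include <string.h>

/* Simplified playback-tree node: a byte range within a data segment
 * plus a link to the next node in playback order. Field names are
 * assumptions for illustration, not the Table 8 layout. */
typedef struct TreeNode {
    const char *data;       /* data segment this node presents */
    int start, end;         /* byte range within that segment  */
    struct TreeNode *next;  /* next node in playback order     */
} TreeNode;

/* Split `orig` at `offset` and splice the inserted node between the
 * pre-insertion and post-insertion halves, as in FIG. 10. */
static void insert_node(TreeNode *orig, TreeNode *ins, TreeNode *post,
                        int offset)
{
    *post = *orig;                       /* post-insertion data        */
    post->start = orig->start + offset;
    orig->end = orig->start + offset;    /* shorten to pre-insertion   */
    ins->next = post;
    orig->next = ins;
}

/* "Play back" the tree: concatenate the ranges in node order. */
static void play(const TreeNode *n, char *out)
{
    for (; n; n = n->next) {
        memcpy(out, n->data + n->start, (size_t)(n->end - n->start));
        out += n->end - n->start;
    }
    *out = '\0';
}
```

Splitting an eight-byte original at offset 3 and splicing in a three-byte message yields playback of pre-insertion data, inserted data, and post-insertion data as one seamless stream.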
The process above is essentially the same as that used for deleting or editing out a portion of an existing message. FIG. 11 illustrates a pre-existing message 210 before and after message deletion. The message structure 210 before deletion is essentially the same as before insertion of a message as shown in FIG. 10. However, instead of creating a Tree Node for the inserted data, as well as for the Pre and Post insertion segments, only Pre and Post erasure Tree Nodes 212 and 214 are created. Data Header 2 (216) is created to keep track of the deleted data for edit history purposes. This is critical when using flash memory, because the data must eventually be erased before new data can be stored in the same physical memory space. By keeping track of which memory is used, free and marked for erasure, flash memory space can be optimized.
The description of the present invention given above has focused on describing it at a functional level. Still to be described are the Application Program Instructions (APIs) which carry out the high level function calls to implement the functional operations already described.
Table 11 summarizes all APIs implemented by the file system of the present invention. These instructions are executed by the file system in response to commands received by a processor. These API commands can be considered to be "high level" instructions executed by the file system to manipulate files stored in primary and cache memory. To understand how these API function calls operate, an example will now be given which utilizes each of the calls so as to clearly illustrate proper use and important features.

Function Description
nfsChDir Change the current directory
nfsClose Close a file
nfsCreate Create a file
nfsDelete Delete a file
nfsErase Erase data within a file
nfsFind Find a file (first, next, etc.)
nfsGetAttr Retrieve the DOS attributes of a file
nfsGetItem Retrieve selected information items from a file
nfsGetSystem Retrieve global file system parameters
nfsInitialize Setup NFS and optionally erase or reorganize user data
nfsLength Return the length of a file's data
nfsMkDir Make a new subdirectory
nfsOpen Open a file
nfsOptimize Erase & pack all unused space
nfsRead Read data from a file
nfsRename Rename a file
nfsRmDir Remove a subdirectory
nfsSeek Seek to a position of data within a file
nfsSetAttr Change the attributes of a file
nfsSetItem Change selected information items in a file
nfsSetSystem Set global file system parameters
nfsStat Obtain information about a file
nfsTell Return the current position within a file
nfsWrite Write data to a file
Table 11

The first API executed is Initialize. This API is activated when power is first applied to a device utilizing the flash file system of the present invention. It might be easier to conceptually visualize the API as two different commands in DOS: the format command and the fdisk command. In other words, when a power switch is toggled on, the Initialize API prepares the file system for operation. Preparation includes bulk erasure of the flash memory if the memory has been previously marked for bulk erasure. This is the format aspect. Initialization also includes the process known as the fdisk command. In DOS, fdisk partitions the medium and divides the medium into sectors so that the FAT can map files to specific predefined sector addresses. Fdisk also creates the initial root directory to which all directories, subdirectories and files within them can trace their origin.
The process of initialization also includes the process of placing in cache memory a "message pointer" which can be analogized to a cursor in memory. Just as a cursor indicates where insertion or deletion of data is to occur, the message pointer is a cursor in primary memory which indicates where reading or modification of data will occur if executed. Upon initialization, the message pointer is located at the root directory. The concept of a message pointer is crucial to the present invention because of the sequentially and logically linked data segments of the present invention. Initialization of the file system thus includes the step of determining a current position of the message pointer in primary memory. Initialization also includes the step of first creating a Directory Header if a bulk erase function call was executed. A Directory Header for a root directory can be considered to be the minimum setup of the file system if no files or subdirectories yet exist.
After initialization of the file system, it will be assumed for this example that no data presently exists in flash memory. At this point, the user has the options of creating a directory within the root directory (essentially every directory created by the user is a subdirectory of the root directory), creating a file within the root directory, or carrying out some utility function call other than one which acts upon existing data, because no file has yet been created. Creating a directory is the nfsMkDir (Norris File System Make Directory) command, which creates a subdirectory in the current directory. Creating a file is the nfsCreate command, which involves the steps of creating a new file, opening a file in the current file system, writing a file header and preparing for the writing of data.
The utility functions which can be executed without files existing include changing the current directory with nfsChDir, retrieving global file system parameters with nfsGetSystem, erasing and packing all unused space with nfsOptimize, and setting global file system parameters with nfsSetSystem. The nfsChDir call requires that the specified directory reside in the current directory, and the directory's ID is used to locate it if the file pointer does not already have the directory's address. The nfsOptimize process erases and packs data segments to recover unused space. It is important to note that premature interruption of the process compromises file structure integrity. The optimize process must begin again to restore the file system to a usable condition without loss of data.
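The pack step of nfsOptimize can be illustrated with a simplified C sketch. The segment layout here (a deletion flag and a small payload) is an assumption standing in for the full header formats given above; the point is only that live segments are compacted toward the front so the space freed by deleted segments can be reclaimed by erasure.

```c
#include <assert.h>

/* Simplified in-RAM model of a segment for illustrating the pack
 * step: a deletion mark plus payload. Not the on-flash layout. */
typedef struct {
    int  deleted;   /* nonzero if segment is marked deleted */
    int  len;
    char data[8];
} Seg;

/* Compact live (undeleted) segments to the front of the array,
 * preserving their order; return the new live-segment count. */
static int pack_segments(Seg *segs, int n)
{
    int out = 0;
    for (int i = 0; i < n; i++)
        if (!segs[i].deleted)
            segs[out++] = segs[i];
    return out;
}
```

After packing, everything past the returned count is reclaimable space, corresponding to the blocks the optimize process can erase.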
The remaining APIs almost exclusively perform some function call on files which have now been created. These include nfsClose for closing a file, which closes and flushes any data segment currently open from calls to nfsWrite, and nfsDelete for deleting a file, which marks the file and data headers to signify deletion and does not affect the data until an optimization is done. Furthermore, if specified in the global flags, the optimization will be performed automatically upon deletion of a file. The function call nfsErase for erasing data within a file marks a section of data in the current file as deleted. Specifically, "erased" data is marked from the position specified by lStart for the number of bytes specified by lSize as represented by the presentation map created by nfsOpen.
The function call nfsFind for finding a file (first, next, etc.) finds the file specified in the current directory, and has the user-selectable options of finding the first name, finding the first file in the current directory, finding the last file in the current directory, finding the next file following any previously used file, finding the file preceding any previously used file, finding a normal file, finding a read-only file, finding a hidden file, finding a system file, and finding a subdirectory.
The function call nfsGetAttr retrieves the DOS attributes of a file from the DOS attributes flags of the current file. The DOS attributes flags are found in Table 12 below.
Name            Size  Type       Description
Read Only Flag  1     Bit 0      Set if file has 'Read Only' attribute
Archived Flag   1     Bit 1      Set if file has 'Archived' attribute
Hidden Flag     1     Bit 2      Set if file has 'Hidden' attribute
System Flag     1     Bit 3      Set if file has 'System' attribute
Unused          11    Bits 4-14  Not used
Deleted Flag    1     Bit 15     Set if segment is deleted
Table 12
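The attribute word of Table 12 maps naturally onto bit masks. A minimal sketch follows; the macro and function names are assumptions (the patent specifies only the bit positions):

```c
#include <stdint.h>

/* Bit positions from Table 12; identifier names are illustrative only. */
#define NFS_ATTR_READONLY  (1u << 0)   /* Bit 0: 'Read Only' attribute  */
#define NFS_ATTR_ARCHIVED  (1u << 1)   /* Bit 1: 'Archived' attribute   */
#define NFS_ATTR_HIDDEN    (1u << 2)   /* Bit 2: 'Hidden' attribute     */
#define NFS_ATTR_SYSTEM    (1u << 3)   /* Bit 3: 'System' attribute     */
#define NFS_ATTR_DELETED   (1u << 15)  /* Bit 15: segment is deleted    */

/* Returns 1 if the attribute word marks the segment deleted, else 0. */
int nfs_attr_is_deleted(uint16_t attr)
{
    return (attr & NFS_ATTR_DELETED) != 0;
}
```

Under this encoding a read-only system file carries the word 0x0009, and a deleted segment is recognized purely by its high bit, which matches the description of nfsDelete marking headers without touching data.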
The function call nfsGetItem for retrieving selected information from a file will retrieve one of the following attributes: file creation time, data type, description, access password, read password, author ID, job number, priority level or the transcriber ID.
The function call nfsLength returns the length of the current file in bytes. If a file is not currently open, the call fails. The function call nfsOpen finds the specified file in the directory and opens it for both reading and writing. Since a file may consist of randomly sequenced segments of data and erasures, the function call must create a presentation map of the unerased data in order to present it as one continuous stream via calls to nfsRead and nfsSeek. The function call nfsRead reads data from the current file. The requested data is taken from the continuous stream using the presentation map created by nfsOpen. The function call nfsRename renames the currently opened file. A new file header is created to record the new name, the old file header is marked as deleted, and optimization must be performed to update the file system's index of segments, making the file accessible by the new name.
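The presentation map built by nfsOpen can be sketched as an ordered list of unerased extents whose lengths sum to the continuous stream seen by nfsRead and nfsSeek. The types, field names and demo values below are assumptions for illustration, not structures from the patent:

```c
#include <stddef.h>

/* One unerased extent of a file as presented by the map (hypothetical). */
typedef struct {
    long phys_addr;  /* physical address of the extent in flash */
    long length;     /* bytes of unerased data in this extent   */
} Extent;

/* Translates a sequential stream position (as used by nfsSeek/nfsRead)
 * into the physical address holding that byte; returns -1 past EOF. */
long map_position(const Extent *map, size_t n, long pos)
{
    for (size_t i = 0; i < n; i++) {
        if (pos < map[i].length)
            return map[i].phys_addr + pos;
        pos -= map[i].length;   /* skip over this extent */
    }
    return -1;  /* position beyond the end of the presented stream */
}

/* A two-extent demo map: 4 bytes at address 1000, then 6 at 5000. */
static const Extent demo_map[2] = { {1000, 4}, {5000, 6} };
long demo_lookup(long pos) { return map_position(demo_map, 2, pos); }
```

Stream position 4 falls on the first byte of the second extent even though the two extents are physically far apart, which is exactly the continuity the presentation map provides.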
The function call nfsRmDir removes a subdirectory from the current directory. This also applies to directories in the root directory since every directory is a subdirectory of the root. The directory header is marked "deleted", but the space is not reclaimed until optimization. In addition, all files must be removed prior to the deletion of the directory.
The function call nfsSeek repositions the current file pointer. The current file pointer was created with the command Initialize, and represents the position in the sequential presentation of the data using the map created by nfsOpen and is without regard to the physical blocks referenced by the presentation map.
The function call nfsStat is used to obtain information from the current file. The retrievable statistics include total file size, the file's creation date, "last modified" time, and the edit time at the current position. The time is defined as the time since January 1, 1970.
The function call nfsTell returns the sequential position in bytes of the current file. If a file is not currently open, the call fails.
The function call nfsWrite writes data to the current file. Upon creation of a new segment, the current file position becomes the insertion point for the data. This is recorded when the segment is closed and is used to update the presentation map created by nfsOpen. The initial call to nfsWrite creates a data segment to be appended to the current file's list of data segments. Subsequent calls to nfsWrite append data to this segment. The segment is completed and closed when any call is made to nfsRead, nfsSeek or nfsClose. An important note, applicable to a more generic memory implementation, is that a parameter used by nfsWrite can specify whether the data will be inserted or will overwrite a corresponding section of existing data. Obviously, previously saved data cannot be overwritten in flash memory before erasure; therefore, this overwrite option is disabled when using flash memory. Just as important, however, is the adaptability of the operating system to other memory media which can overwrite without having to erase.
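The segment life cycle described above (first write opens a segment, later writes append, any read/seek/close completes it) can be modeled with a small state machine. This is a behavioral sketch only; all names are hypothetical:

```c
/* Minimal model of the nfsWrite segment life cycle: the first write
 * creates a data segment, subsequent writes append to it, and any
 * nfsRead/nfsSeek/nfsClose completes and closes the segment. */
typedef struct {
    int  segment_open;      /* is a data segment currently open?   */
    long bytes_in_segment;  /* bytes appended to the open segment  */
    int  segments_written;  /* completed segments in the file      */
} WriteState;

void model_write(WriteState *s, long nbytes)
{
    if (!s->segment_open) {          /* first write: create a segment */
        s->segment_open = 1;
        s->bytes_in_segment = 0;
    }
    s->bytes_in_segment += nbytes;   /* subsequent writes append      */
}

void model_close_segment(WriteState *s)  /* on nfsRead/nfsSeek/nfsClose */
{
    if (s->segment_open) {
        s->segment_open = 0;
        s->segments_written++;
    }
}

/* Drives write, write, seek (closing the segment), write, close, and
 * returns how many data segments the file ends up with. */
int demo_sequence(void)
{
    WriteState s = { 0, 0, 0 };
    model_write(&s, 10);
    model_write(&s, 20);
    model_close_segment(&s);  /* an intervening nfsSeek */
    model_write(&s, 5);
    model_close_segment(&s);  /* nfsClose */
    return s.segments_written;
}
```

The demo shows why interleaving seeks with writes fragments a file into multiple segments, which the presentation map then stitches back into one stream.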
The description of the API function calls provides the high level function calls of the file system. More specific to hardware, however, are the BIOS calls. BIOS calls are typically implemented in firmware or hardware, such as on an EEPROM integrated circuit which typically comes with a processor board in a personal computer. BIOS calls are lower level function calls because they are written specifically for particular hardware. Changing the hardware, such as the memory media, would require an update to BIOS. Table 13 below lists the BIOS calls implemented in a preferred embodiment of the present invention.
Function        Description
ClipConfig      Returns the configuration of the user data media.
CopyDataToData  Moves data from one location to another.
EraseBlock      Erases one user data block (flash block).
GetDataByte     Retrieves one byte of user data from the current location.
GetTime         Retrieves the current date & time in DOS format.
GetWorkByte     Retrieves one byte of work area data from the current location.
PutDataByte     Writes one byte to the current user data location.
PutWorkByte     Writes one byte to the current work area location.
SetDataAddress  Repositions the user data area pointer.
SetWorkAddress  Repositions the work area pointer.
Table 13

The first BIOS call is ClipConfig, which queries the storage media and returns configuration parameters. This command informs the file system if any particular API function calls will not function. For example, nfsWrite will not be able to operate in an overwrite mode when ClipConfig determines the memory media consists of flash memory. The BIOS call CopyDataToData is useful for making bulk memory block copies from one location to another.
This command is particularly useful when using flash memory because it allows the file system to quickly create large blocks of contiguous memory.
This is particularly important because the file system is only capable of writing data to contiguous memory.
Of substantial importance to the present invention is the BIOS call EraseBlock which erases entire blocks of memory rapidly so that contiguous memory space can be freed for writing long data files.
The BIOS calls of GetDataByte, GetWorkByte, PutDataByte and PutWorkByte are important for low level manipulation of data at the byte level. The BIOS call GetTime is defined as returning the time since January 1, 1970. This call works in conjunction with nfsStat as previously explained.
Finally, SetDataAddress and SetWorkAddress change the current address to the user data area or user work area, respectively.
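One common way to realize such a hardware-specific BIOS layer is a function-pointer table: the file system invokes media operations only through the table, so supporting new memory media means supplying a new table rather than rewriting the API layer. The struct layout, signatures, and the can_overwrite field below are assumptions, not structures from the patent:

```c
/* Hypothetical dispatch table for a subset of the Table 13 BIOS calls. */
typedef struct {
    int (*GetDataByte)(long addr);                  /* read one user byte  */
    int (*PutDataByte)(long addr, unsigned char b); /* write one user byte */
    int (*EraseBlock)(int block);                   /* erase one flash block */
    int can_overwrite;  /* from ClipConfig: 0 for flash (erase-before-write) */
} NfsBios;

/* RAM-backed demo implementation of the two byte-level calls. */
static unsigned char demo_ram[16];
static int demo_get(long a) { return demo_ram[a]; }
static int demo_put(long a, unsigned char b) { demo_ram[a] = b; return 0; }

/* Writes a byte through the table and reads it back. */
int demo_roundtrip(void)
{
    NfsBios bios = { demo_get, demo_put, 0, 1 }; /* RAM media: overwrite OK */
    bios.PutDataByte(3, 0x5A);
    return bios.GetDataByte(3);
}
```

With flash media the same table would report can_overwrite = 0, which is how the file system would know to disable the overwrite mode of nfsWrite, as described for ClipConfig above.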
It is to be understood that the above-described embodiments are only illustrative of the application of the principles of the present invention. Numerous modifications and alternative arrangements may be devised by those skilled in the art without departing from the spirit and scope of the present invention. The appended claims are intended to cover such modifications and arrangements.

Claims

What is claimed is:
1. A method of memory management for a primary memory created from a non-volatile, long-term storage medium, said method enabling direct manipulation of contiguous and non-contiguous discrete data segments stored therein by a file system, and comprising the steps of:
(a) creating the primary memory from a non-volatile, long-term storage medium, wherein the primary memory comprises a plurality of equal size blocks in which the data segments are to be stored;
(b) coupling a cache memory to the primary memory, said cache memory providing temporary and volatile storage for at least one of the data segments;
(c) writing a new data segment from the cache memory to the primary memory by linking said new data segment to a previous logical data segment by the following steps:
(1) receiving the new data segment in the cache memory;
(2) moving the new data segment from the cache memory to a next available space within primary memory such that the new data segment is stored in a contiguous memory space;
(3) identifying the previous logical data segment in primary memory;
(4) creating a logical link between the previous logical data segment and the new data segment such that the logical link provides a path for serially accessing the data segments within the primary memory; and
(5) creating additional serial and logical links as subsequent new data segments are written to primary memory, said logical links providing the path for serially accessing the data segments regardless of contiguity of the data segments relative to each other within the primary memory.
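The serial logical links of claim 1 amount to a singly linked chain of segment headers whose physical addresses need not be contiguous or ordered. A minimal sketch, with illustrative type and field names:

```c
/* Hypothetical data segment header carrying the logical link of
 * claim 1, step (c)(4): segments are chained regardless of their
 * physical placement in the primary memory. */
typedef struct Segment {
    long addr;             /* physical address; need not be contiguous */
    struct Segment *next;  /* logical link to the next segment, or 0   */
} Segment;

/* Serially walks the chain of logical links and counts segments. */
int chain_length(const Segment *s)
{
    int n = 0;
    while (s) {
        n++;
        s = s->next;
    }
    return n;
}

/* Three segments whose physical addresses are deliberately out of
 * order; the logical links alone define the serial access path. */
int demo_chain(void)
{
    Segment c = { 9000, 0 };
    Segment b = { 1000, &c };
    Segment a = { 5000, &b };
    return chain_length(&a);
}
```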
2. The method of claim 1 wherein the method comprises the additional step of reading at least one data segment from primary memory by following the steps of:
(a) moving backwards or forwards from a current data segment to other serially and logically linked data segments within the primary memory along a path of serial and logical links;
(b) identifying the at least one data segment when it is the current data segment; and
(c) retrieving the current data segment from the primary memory by reading said data segment into the cache memory.
3. The method as defined in claim 2 wherein the method comprises the additional step of creating a current file pointer, said current file pointer being a physical address stored in a memory corresponding to a location of a file in the primary memory, and wherein the file system accesses files stored in the primary memory by retrieving data beginning at the physical address stored in the current file pointer.
4. The method as defined in claim 3 wherein the step of moving forwards to other serially and logically linked data segments from a current data segment comprises the additional steps of:
(a) locating a next data segment address pointer stored in the current data segment which stores an address of the next data segment which is logically linked to the current data segment;
(b) retrieving the address from the next data segment address pointer; and
(c) storing the address in the current file pointer.
5. The method as defined in claim wherein the step of moving backwards to other serially and logically linked data segments from a current data segment comprises the additional steps of creating a history of movement by the current file pointer along the linked data segments:
(a) creating a memory stack for storing address pointers in memory, said stack following a last in, first out process for pushing on and popping off address pointers from the stack;
(b) pushing an address pointer of a root directory onto the stack when the file system is initialized; and
(c) sequentially pushing subsequent address pointers onto the stack from the current file pointer as the current file pointer moves down a logical link to a last data segment from which the file system needs to backtrack.
6. The method as defined in claim 5 wherein the step of moving backwards to other serially and logically linked data segments from a current data segment comprises the additional steps of:
(a) sequentially popping a top address pointer from the stack, said top address being an address pointer to a preceding data segment;
(b) storing the popped top address pointer to the current file pointer; and
(c) repeating steps (a) and (b) until the current address pointer contains the address of a desired data segment.
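Claims 5 and 6 describe a last-in, first-out address stack: addresses are pushed as the current file pointer moves forward, and backtracking pops them back into the current file pointer. A minimal sketch with hypothetical names and addresses:

```c
/* LIFO stack of address pointers, per claims 5 and 6 (names assumed). */
#define STACK_MAX 16
typedef struct {
    long addrs[STACK_MAX];
    int  top;
} AddrStack;

void push_addr(AddrStack *s, long a) { s->addrs[s->top++] = a; }
long pop_addr(AddrStack *s)          { return s->addrs[--s->top]; }

/* Walk root -> segment A -> segment B, pushing each visited address,
 * then backtrack one step; returns the address the current file
 * pointer holds afterward (segment A). Addresses are arbitrary. */
long demo_backtrack(void)
{
    AddrStack s = { {0}, 0 };
    long current = 100;        /* root directory address, claim 5(b) */
    push_addr(&s, current);
    current = 200;             /* moved to segment A */
    push_addr(&s, current);
    current = 300;             /* moved to segment B */
    current = pop_addr(&s);    /* backtrack: pointer returns to A */
    return current;
}
```

Repeating the pop, as claim 6 recites, retraces the full history of movement back toward the root.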
7. The method as defined in claim 1 wherein the step of creating the primary memory from a non-volatile, long-term storage medium comprises the more specific step of selecting flash memory as the primary memory.
8. The method as defined in claim 7 wherein the step of creating the primary memory from flash memory comprises the more specific step of selecting NAND technology flash memory as primary memory.
9. The method as defined in claim 1 wherein the step of coupling the cache memory to the primary memory comprises the more specific step of coupling a random access memory to the primary memory.
10. The method as defined in claim 1, wherein the step of creating a logical link between the previous logical data segment and the new data segment comprises the steps of:
(a) appending a header to new data segments when writing the new data segments to primary memory by moving the new data segment from the cache memory to the next available contiguous memory space within primary memory; and
(b) including in the header at least one address pointer so as to logically link the header to other data segments.
11. The method as defined in claim 10 wherein the step of appending a header to the new data segments comprises the more specific step of selecting a header from the group consisting of a device, volume, directory, file, data, and tree header.
12. The method as defined in claim 11 wherein the step of appending a header to the new data segments comprises the more specific step of selecting a header-like data structure from the group consisting of a tree node, a segment index entry, and a secondary segment index entry.
13. The method as defined in claim 10 wherein the step of appending a header to the new data segments so as to create the logical link to the other data segments comprises the more specific steps of including within the header an identification field wherein unique identification data is stored such that no headers in the primary memory have the same identification data.
14. The method as described in claim 10 wherein the method of memory management for a primary memory comprises the additional step of reserving a fixed amount of primary memory for an edit history which provides a record of all modifications to at least one file stored in primary memory.
15. The method as described in claim 14 wherein the method of memory management which reserves the fixed amount of primary memory for an edit history comprises the more specific step of creating a playback tree structure, said playback tree structure providing temporary storage space in the work memory for a copy of sequentially related data segments comprising the at least one file stored in primary memory.
16. The method as defined in claim 15 wherein the step of creating a playback structure in temporary storage space comprises the steps of:
(a) creating a first tree header including a first tree node address pointer and a next tree node address pointer, said first tree header corresponding to a directory in which the current file pointer resides;
(b) creating at least one tree node including a next tree node address pointer, a branch node pointer, a data address pointer and a data size field, said at least one tree node corresponding to a file whose address is contained within the current file pointer.
17. The method as defined in claim 15 wherein the step of creating a playback structure in temporary storage space which provides seamless playback of data stored in a file of primary memory includes the ability to logically insert new data in an existing data segment by the steps of:
(a) creating a second tree node which enables retrieval of a portion of the existing data segment which is designated as pre-insertion data;
(b) creating a third tree node which enables retrieval of data from a new data header, said new data header containing the new data to be logically inserted, and said new data being seamlessly retrievable after retrieval of the pre-existing data; and
(c) creating a fourth tree node which enables seamless retrieval of a portion of the existing data segment which is designated as post-insertion data after retrieval of the new data.
18. The method as defined in claim 15 wherein the step of creating a playback structure in temporary storage space which provides seamless playback of data stored in a file stored in primary memory includes the ability to logically delete a portion of data from an existing data segment without physically deleting said portion, and comprising the steps of:
(a) creating a second tree node which enables retrieval of a portion of the existing data segment which is designated as pre-erasure data; and
(b) creating a third tree node which enables seamless retrieval of a portion of the existing data segment which is designated as post-erasure data after retrieval of the pre-erasure data.
19. The method as defined in claim 18 wherein the method comprises the additional step of creating a new data header which records location information about the portion of data being logically deleted from the existing data segment.
20. The method as defined in claim 2 wherein the file system includes application program instructions which execute functions of the file system which are compatible with DOS compliant computer devices.
21. The method as defined in claim 20 wherein the file system includes BIOS function calls which are specific to flash memory as the primary memory such that application program instructions which violate flash memory capabilities are ignored by the file system.
22. The method as defined in claim 2 wherein the method of memory management includes the step of creating logical blocks which overlay a physical memory structure of primary memory, said logical blocks comprising at least one erase block.
23. The method as defined in claim 22 wherein the step of creating logical blocks which overlay the physical memory structure further includes the step of creating logical memory block maps within each logical block, said memory block map indicating whether the at least one erase block is nonfunctional.
24. The method as defined in claim 23 wherein the step of creating logical memory blocks maps includes the step of creating a logical block control map, said map including a physical address of the physical memory over which the logical block corresponds, an address of a next available memory equal size block in which data may be stored, and a nonfunctional block map.
25. The method as defined in claim 15 wherein the method of memory management comprises the more specific steps of separating the work memory from a user memory in primary memory, said work memory being dedicated to storing the playback tree structures.
PCT/US1997/003622 1996-03-07 1997-03-06 Norris flash file system WO1997033225A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU20731/97A AU2073197A (en) 1996-03-07 1997-03-06 Norris flash file system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/612,772 1996-03-07
US08/612,772 US5787445A (en) 1996-03-07 1996-03-07 Operating system including improved file management for use in devices utilizing flash memory as main memory

Publications (1)

Publication Number Publication Date
WO1997033225A1 true WO1997033225A1 (en) 1997-09-12

Family

ID=24454600

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/003622 WO1997033225A1 (en) 1996-03-07 1997-03-06 Norris flash file system

Country Status (3)

Country Link
US (2) US5787445A (en)
AU (1) AU2073197A (en)
WO (1) WO1997033225A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1008024A2 (en) * 1998-03-10 2000-06-14 Baxter International Inc. Systems and methods for storing, retrieving, and manipulating data in medical processing devices
EP1385096A2 (en) * 2002-07-22 2004-01-28 Samsung Electronics Co., Ltd. Apparatus and method for managing memory in mobile communication terminal
US7127478B1 (en) 1998-04-29 2006-10-24 Siemens Aktiengesellschaft Data base for persistent data

Families Citing this family (161)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5354695A (en) 1992-04-08 1994-10-11 Leedy Glenn J Membrane dielectric isolation IC fabrication
JP3215237B2 (en) * 1993-10-01 2001-10-02 富士通株式会社 Storage device and method for writing / erasing storage device
US6978342B1 (en) 1995-07-31 2005-12-20 Lexar Media, Inc. Moving sectors within a block of information in a flash memory mass storage architecture
US6728851B1 (en) 1995-07-31 2004-04-27 Lexar Media, Inc. Increasing the memory performance of flash memory devices by writing sectors simultaneously to multiple flash memory devices
US8171203B2 (en) 1995-07-31 2012-05-01 Micron Technology, Inc. Faster write operations to nonvolatile memory using FSInfo sector manipulation
US5845313A (en) 1995-07-31 1998-12-01 Lexar Direct logical block addressing flash memory mass storage architecture
JP3241597B2 (en) * 1996-05-24 2001-12-25 一博 小野 Personal audio playback device
KR980013092A (en) * 1996-07-29 1998-04-30 김광호 File management apparatus and method of exchange system
EP0834812A1 (en) * 1996-09-30 1998-04-08 Cummins Engine Company, Inc. A method for accessing flash memory and an automotive electronic control system
JPH10254631A (en) * 1997-03-14 1998-09-25 Hitachi Ltd Computer system
US6551857B2 (en) 1997-04-04 2003-04-22 Elm Technology Corporation Three dimensional structure integrated circuits
US6272503B1 (en) * 1997-05-30 2001-08-07 Oracle Corporation Tablespace-relative database pointers
US6549901B1 (en) * 1997-05-30 2003-04-15 Oracle Corporation Using transportable tablespaces for hosting data of multiple users
GB2328531A (en) * 1997-08-23 1999-02-24 Ibm Storing a long record in a set of shorter keyed records
US6571211B1 (en) * 1997-11-21 2003-05-27 Dictaphone Corporation Voice file header data in portable digital audio recorder
US7203288B1 (en) 1997-11-21 2007-04-10 Dictaphone Corporation Intelligent routing of voice files in voice data management system
US6671567B1 (en) 1997-11-21 2003-12-30 Dictaphone Corporation Voice file management in portable digital audio recorder
KR100287366B1 (en) * 1997-11-24 2001-04-16 윤순조 Portable device for reproducing sound by mpeg and method thereof
JPH11242850A (en) * 1998-02-25 1999-09-07 Hitachi Ltd Real time data recording system
US6044346A (en) * 1998-03-09 2000-03-28 Lucent Technologies Inc. System and method for operating a digital voice recognition processor with flash memory storage
US6067278A (en) * 1998-04-06 2000-05-23 Recoton Corporation Digital recorder for car radio
US6311155B1 (en) * 2000-02-04 2001-10-30 Hearing Enhancement Company Llc Use of voice-to-remaining audio (VRA) in consumer applications
JP4304734B2 (en) * 1998-04-17 2009-07-29 ソニー株式会社 REPRODUCTION DEVICE, DATA REPRODUCTION METHOD, AND RECORDING MEDIUM
US6247024B1 (en) * 1998-09-25 2001-06-12 International Business Machines Corporation Method and system for performing deferred file removal in a file system
GB9822841D0 (en) * 1998-10-20 1998-12-16 Koninkl Philips Electronics Nv File systems supporting data sharing
AU2153100A (en) * 1998-11-20 2000-06-13 Eric J. Peter Digital dictation card and method of use in business
US6366544B1 (en) 1999-02-09 2002-04-02 Advanced Communication Design, Inc. Universal CD player
AU3000700A (en) * 1999-02-17 2000-09-04 Musicmaker.Com, Inc. System for storing user-specified digital data onto a digital medium
MY122279A (en) 1999-03-03 2006-04-29 Sony Corp Nonvolatile memory and nonvolatile memory reproducing apparatus
EP1041574B1 (en) * 1999-03-03 2008-02-20 Sony Corporation Nonvolatile memory
JP4135049B2 (en) 1999-03-25 2008-08-20 ソニー株式会社 Non-volatile memory
JP4406988B2 (en) 1999-03-29 2010-02-03 ソニー株式会社 Nonvolatile recording medium, recording method, and recording apparatus
DE60043409D1 (en) * 1999-03-03 2010-01-14 Sony Corp Non-volatile recording medium, recording medium, and recording method
WO2000054470A1 (en) 1999-03-12 2000-09-14 Lextron Systems, Inc. System for controlling processing of data passing through network gateways between two disparate communications networks
US6694200B1 (en) 1999-04-13 2004-02-17 Digital5, Inc. Hard disk based portable device
US6535949B1 (en) * 1999-04-19 2003-03-18 Research In Motion Limited Portable electronic device having a log-structured file system in flash memory
EP1210666A1 (en) * 1999-04-29 2002-06-05 Centennial Technologies, Inc. Linear flash memory compatible with compactflash mechanical interface
US6353870B1 (en) * 1999-05-11 2002-03-05 Socket Communications Inc. Closed case removable expansion card having interconnect and adapter circuitry for both I/O and removable memory
US6599147B1 (en) 1999-05-11 2003-07-29 Socket Communications, Inc. High-density removable expansion module having I/O and second-level-removable expansion memory
US7107302B1 (en) 1999-05-12 2006-09-12 Analog Devices, Inc. Finite impulse response filter algorithm for implementation on digital signal processor having dual execution units
DE19922035A1 (en) * 1999-05-12 2000-11-23 Transonic Ind Ltd Portable audio playback device for audio game or music
US7111155B1 (en) 1999-05-12 2006-09-19 Analog Devices, Inc. Digital signal processor computation core with input operand selection from operand bus for dual operations
US6859872B1 (en) 1999-05-12 2005-02-22 Analog Devices, Inc. Digital signal processor computation core with pipeline having memory access stages and multiply accumulate stages positioned for efficient operation
US6820189B1 (en) * 1999-05-12 2004-11-16 Analog Devices, Inc. Computation core executing multiple operation DSP instructions and micro-controller instructions of shorter length without performing switch operation
US6493672B2 (en) 1999-05-26 2002-12-10 Dictaphone Corporation Automatic distribution of voice files
EP2365482A3 (en) * 1999-07-07 2011-10-05 Gibson Guitar Corp. Musical instrument digital recording device with communication interface
EP1610337B1 (en) * 1999-08-27 2008-02-20 Sony Corporation Non-volatile memory
KR100727799B1 (en) * 1999-08-27 2007-06-14 소니 가부시끼 가이샤 Reproducing apparatus, and reproducing method
KR100577380B1 (en) * 1999-09-29 2006-05-09 삼성전자주식회사 A flash-memory and a it's controling method
US7017161B1 (en) 1999-10-11 2006-03-21 Dictaphone Corporation System and method for interfacing a radiology information system to a central dictation system
US9818386B2 (en) 1999-10-19 2017-11-14 Medialab Solutions Corp. Interactive digital music recorder and player
US7176372B2 (en) * 1999-10-19 2007-02-13 Medialab Solutions Llc Interactive digital music recorder and player
US7078609B2 (en) * 1999-10-19 2006-07-18 Medialab Solutions Llc Interactive digital music recorder and player
US6910082B1 (en) * 1999-11-18 2005-06-21 International Business Machines Corporation Method, system and program products for reducing data movement within a computing environment by bypassing copying data between file system and non-file system buffers in a server
DE19960114A1 (en) * 1999-12-08 2001-06-13 Heidenhain Gmbh Dr Johannes Method for storing data in a file of a data storage system
JP4842417B2 (en) * 1999-12-16 2011-12-21 ソニー株式会社 Recording device
WO2001052523A1 (en) * 2000-01-12 2001-07-19 Advanced Communication Design, Inc. Compression and remote storage apparatus for data, music and video
US6877658B2 (en) 2000-01-24 2005-04-12 En-Vision America, Inc. Apparatus and method for information challenged persons to determine information regarding pharmaceutical container labels
EP1126466A1 (en) * 2000-02-18 2001-08-22 STMicroelectronics S.r.l. Electronic device for the recording/reproduction of voice data
WO2001067448A1 (en) * 2000-03-09 2001-09-13 Advanced Communication Design, Inc. Universal cd player
US6760535B1 (en) * 2000-03-27 2004-07-06 Ati International Srl Method and apparatus for cache management for a digital VCR archive
US7187947B1 (en) 2000-03-28 2007-03-06 Affinity Labs, Llc System and method for communicating selected information to an electronic device
CN1174615C (en) * 2000-04-18 2004-11-03 松下电器产业株式会社 Storage medium, data obtaining apparatus, data holding apparatus, data obtaining method and data holding method
EP1158525A1 (en) * 2000-05-18 2001-11-28 STMicroelectronics S.r.l. Voice message managing method, in particular for voice data recording/playing/editing electronic devices
US6675180B2 (en) * 2000-06-06 2004-01-06 Matsushita Electric Industrial Co., Ltd. Data updating apparatus that performs quick restoration processing
US6606281B2 (en) 2000-06-15 2003-08-12 Digital Networks North America, Inc. Personal audio player with a removable multi-function module
WO2001097223A2 (en) * 2000-06-15 2001-12-20 Sonicblue Incorporated Personal audio player with a removable multi-function module
US6594674B1 (en) * 2000-06-27 2003-07-15 Microsoft Corporation System and method for creating multiple files from a single source file
US7295443B2 (en) 2000-07-06 2007-11-13 Onspec Electronic, Inc. Smartconnect universal flash media card adapters
US7278051B2 (en) * 2000-07-06 2007-10-02 Onspec Electronic, Inc. Field-operable, stand-alone apparatus for media recovery and regeneration
US7493437B1 (en) * 2000-07-06 2009-02-17 Mcm Portfolio Llc Flashtoaster for reading several types of flash memory cards with or without a PC
US7252240B1 (en) * 2000-07-06 2007-08-07 Onspec Electronics, Inc. Memory module which includes a form factor connector
US6438638B1 (en) * 2000-07-06 2002-08-20 Onspec Electronic, Inc. Flashtoaster for reading several types of flash-memory cards with or without a PC
US7167944B1 (en) 2000-07-21 2007-01-23 Lexar Media, Inc. Block management for mass storage
US6664459B2 (en) * 2000-09-19 2003-12-16 Samsung Electronics Co., Ltd. Music file recording/reproducing module
FR2816090B1 (en) * 2000-10-26 2003-01-10 Schlumberger Systems & Service DEVICE FOR SHARING FILES IN AN INTEGRATED CIRCUIT DEVICE
JP2002162999A (en) * 2000-11-28 2002-06-07 Sharp Corp Sound processing system
US7136630B2 (en) * 2000-12-22 2006-11-14 Broadcom Corporation Methods of recording voice signals in a mobile set
US6823449B2 (en) * 2001-03-09 2004-11-23 Sun Microsystems, Inc. Directory structure-based reading of configuration ROM
JP4622129B2 (en) * 2001-03-26 2011-02-02 ソニー株式会社 File management method, file management method program, recording medium recording file management method program, and file management apparatus
US6577458B2 (en) * 2001-04-20 2003-06-10 Richard Paul Day Memo tape recorder and reader system and method
CA2463922C (en) 2001-06-27 2013-07-16 4 Media, Inc. Improved media delivery platform
US20030004947A1 (en) * 2001-06-28 2003-01-02 Sun Microsystems, Inc. Method, system, and program for managing files in a file system
GB0123421D0 (en) 2001-09-28 2001-11-21 Memquest Ltd Power management system
GB0123415D0 (en) 2001-09-28 2001-11-21 Memquest Ltd Method of writing data to non-volatile memory
GB0123410D0 (en) 2001-09-28 2001-11-21 Memquest Ltd Memory system for data storage and retrieval
GB0123416D0 (en) 2001-09-28 2001-11-21 Memquest Ltd Non-volatile memory control
JP3757849B2 (en) * 2001-11-06 2006-03-22 住友電気工業株式会社 Method for evaluating carrier density of wafer including compound semiconductor surface layer containing In
EP1326228B1 (en) * 2002-01-04 2016-03-23 MediaLab Solutions LLC Systems and methods for creating, modifying, interacting with and playing musical compositions
US7076035B2 (en) * 2002-01-04 2006-07-11 Medialab Solutions Llc Methods for providing on-hold music using auto-composition
US20080120436A1 (en) * 2002-01-31 2008-05-22 Sigmatel, Inc. Expansion Peripheral Techniques for Portable Audio Player
US6732222B1 (en) * 2002-02-01 2004-05-04 Silicon Motion, Inc. Method for performing flash memory file management
US7231643B1 (en) 2002-02-22 2007-06-12 Lexar Media, Inc. Image rescue system including direct communication between an application program and a device driver
US8386921B2 (en) * 2002-03-01 2013-02-26 International Business Machines Corporation System and method for developing a website
US20030167211A1 (en) * 2002-03-04 2003-09-04 Marco Scibora Method and apparatus for digitally marking media content
US7440774B2 (en) 2002-04-08 2008-10-21 Socket Mobile, Inc. Wireless enabled memory module
WO2004015764A2 (en) 2002-08-08 2004-02-19 Leedy Glenn J Vertical system integration
AU2003276886A1 (en) * 2002-09-12 2004-04-30 Nline Corporation Flow control method for maximizing resource utilization of a remote system
US20040054846A1 (en) * 2002-09-16 2004-03-18 Wen-Tsung Liu Backup device with flash memory drive embedded
JP4416996B2 (en) * 2002-11-01 2010-02-17 Mitsubishi Electric Corporation Map information processing apparatus and map information providing apparatus
WO2006043929A1 (en) * 2004-10-12 2006-04-27 Madwaves (Uk) Limited Systems and methods for music remixing
US7928310B2 (en) * 2002-11-12 2011-04-19 MediaLab Solutions Inc. Systems and methods for portable audio synthesis
US7169996B2 (en) * 2002-11-12 2007-01-30 Medialab Solutions Llc Systems and methods for generating music using data/music data file transmitted/received via a network
US7107414B2 (en) * 2003-01-15 2006-09-12 Avago Technologies Fiber Ip (Singapore) Pte. Ltd. EEPROM emulation in a transceiver
US7966188B2 (en) * 2003-05-20 2011-06-21 Nuance Communications, Inc. Method of enhancing voice interactions using visual messages
US6973519B1 (en) 2003-06-03 2005-12-06 Lexar Media, Inc. Card identification compatibility
US20040267520A1 (en) * 2003-06-27 2004-12-30 Roderick Holley Audio playback/recording integrated circuit with filter co-processor
US20060164907A1 (en) * 2003-07-22 2006-07-27 Micron Technology, Inc. Multiple flash memory device management
US7873684B2 (en) 2003-08-14 2011-01-18 Oracle International Corporation Automatic and dynamic provisioning of databases
US8311974B2 (en) 2004-02-20 2012-11-13 Oracle International Corporation Modularized extraction, transformation, and loading for a database
US7240144B2 (en) * 2004-04-02 2007-07-03 Arm Limited Arbitration of data transfer requests
US7725628B1 (en) 2004-04-20 2010-05-25 Lexar Media, Inc. Direct secondary device interface by a host
US7370166B1 (en) 2004-04-30 2008-05-06 Lexar Media, Inc. Secure portable storage device
US7571173B2 (en) * 2004-05-14 2009-08-04 Oracle International Corporation Cross-platform transportable database
US8554806B2 (en) 2004-05-14 2013-10-08 Oracle International Corporation Cross platform transportable tablespaces
US7240065B2 (en) * 2004-05-27 2007-07-03 Oracle International Corporation Providing mappings between logical time values and real time values
US7251660B2 (en) 2004-06-10 2007-07-31 Oracle International Corporation Providing mappings between logical time values and real time values in a multinode system
KR100659767B1 (en) * 2004-08-17 2006-12-20 KPBIO Trading Co., Ltd. Automatic playing and recording apparatus for acoustic/electric guitar
US7594063B1 (en) 2004-08-27 2009-09-22 Lexar Media, Inc. Storage capacity status
US7464306B1 (en) 2004-08-27 2008-12-09 Lexar Media, Inc. Status of overall health of nonvolatile memory
US20060073813A1 (en) * 2004-10-06 2006-04-06 Bernhard Reus Method and system of a voice recording device and a mobile computing device
JP4956922B2 (en) * 2004-10-27 2012-06-20 Sony Corporation Storage device
US20060224817A1 (en) * 2005-03-31 2006-10-05 Atri Sunil R NOR flash file allocation
US7634494B2 (en) * 2005-05-03 2009-12-15 Intel Corporation Flash memory directory virtualization
EP1887560A4 (en) * 2005-05-19 2008-09-10 Kenji Yoshida Audio information recording device
WO2006123575A1 (en) * 2005-05-19 2006-11-23 Kenji Yoshida Audio information recording device
US20060270373A1 (en) * 2005-05-27 2006-11-30 Nasaco Electronics (Hong Kong) Ltd. In-flight entertainment wireless audio transmitter/receiver system
US7984084B2 (en) * 2005-08-03 2011-07-19 SanDisk Technologies, Inc. Non-volatile memory with scheduled reclaim operations
US7571275B2 (en) * 2005-08-31 2009-08-04 Hamilton Sundstrand Corporation Flash real-time operating system for small embedded applications
KR100689849B1 (en) * 2005-10-05 2007-03-08 Samsung Electronics Co., Ltd. Remote controller, display device, display system comprising the same, and control method thereof
US7610314B2 (en) * 2005-10-07 2009-10-27 Oracle International Corporation Online tablespace recovery for export
CA2567021A1 (en) * 2005-11-01 2007-05-01 Vesco Oil Corporation Audio-visual point-of-sale presentation system and method directed toward vehicle occupant
US7877540B2 (en) * 2005-12-13 2011-01-25 Sandisk Corporation Logically-addressed file storage methods
US8909599B2 (en) * 2006-11-16 2014-12-09 Oracle International Corporation Efficient migration of binary XML across databases
US20080119178A1 (en) * 2006-11-21 2008-05-22 Samsung Electronics Co., Ltd. Allocating Compression-Based Firmware Over the Air
US9885739B2 (en) 2006-12-29 2018-02-06 Electro Industries/Gauge Tech Intelligent electronic device capable of operating as a USB master device and a USB slave device
US9063181B2 (en) * 2006-12-29 2015-06-23 Electro Industries/Gauge Tech Memory management for an intelligent electronic device
US7814360B2 (en) * 2007-01-25 2010-10-12 Oracle International Corporation Synchronizing cluster time to a master node with a faster clock
US7525842B2 (en) 2007-01-25 2009-04-28 Micron Technology, Inc. Increased NAND flash memory read throughput
US20080288436A1 (en) * 2007-05-15 2008-11-20 Harsha Priya N V Data pattern matching to reduce number of write operations to improve flash life
US20090240863A1 (en) * 2007-10-23 2009-09-24 Psion Teklogix Inc. Distributed power regulation
JP4551940B2 (en) 2008-03-01 2010-09-29 Toshiba Corporation Memory system
US20090310760A1 (en) * 2008-06-17 2009-12-17 Judith Neely Coltman Audio Message Recorder with Flexible Control
US8169856B2 (en) * 2008-10-24 2012-05-01 Oracle International Corporation Time synchronization in cluster systems
US8412880B2 (en) * 2009-01-08 2013-04-02 Micron Technology, Inc. Memory system controller to manage wear leveling across a plurality of storage nodes
US8225042B1 (en) * 2009-05-05 2012-07-17 Micron Technology, Inc. Method and apparatus for preventing foreground erase operations in electrically writable memory devices
US8447908B2 (en) * 2009-09-07 2013-05-21 Bitmicro Networks, Inc. Multilevel memory bus system for solid-state mass storage
US20130297840A1 (en) 2009-12-01 2013-11-07 Electro Industries/Gaugetech Intelligent electronic device capable of operating as a usb master device and a usb slave device
WO2011096046A1 (en) * 2010-02-02 2011-08-11 Toshiba Corporation Communication device having storage function
US9396104B1 (en) 2010-03-22 2016-07-19 Seagate Technology, Llc Accessing compressed data of varying-sized quanta in non-volatile memory
TWI431491B (en) * 2010-12-20 2014-03-21 King Yuan Electronics Co Ltd Comparison device and method for comparing test pattern files of a wafer tester
US20120173789A1 (en) * 2011-01-03 2012-07-05 Daleth Music Group Album usb
US9007836B2 (en) 2011-01-13 2015-04-14 Kabushiki Kaisha Toshiba Non-volatile semiconductor memory device
US8560922B2 (en) 2011-03-04 2013-10-15 International Business Machines Corporation Bad block management for flash memory
US8892844B2 (en) 2011-03-07 2014-11-18 Micron Technology, Inc. Methods of accessing memory cells, methods of distributing memory requests, systems, and memory controllers
US20130086315A1 (en) * 2011-10-04 2013-04-04 Moon J. Kim Direct memory access without main memory in a semiconductor storage device-based system
KR102168169B1 (en) * 2014-01-07 2020-10-20 Samsung Electronics Co., Ltd. Mapping methods of non-volatile memory system and system for providing the same
US9927470B2 (en) 2014-05-22 2018-03-27 Electro Industries/Gauge Tech Intelligent electronic device having a memory structure for preventing data loss upon power loss
AU2016220081A1 (en) 2015-02-18 2017-08-31 Pilldrill, Inc. System and method for activity monitoring
USD939988S1 (en) 2019-09-26 2022-01-04 Electro Industries/Gauge Tech Electronic power meter
CN110928682B (en) * 2019-11-13 2023-06-09 Shenzhen Guoweixin Technology Co., Ltd. Method for accessing computer memory by external device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4953122A (en) * 1986-10-31 1990-08-28 Laserdrive Ltd. Pseudo-erasable and rewritable write-once optical disk memory system
US5361340A (en) * 1990-01-05 1994-11-01 Sun Microsystems, Inc. Apparatus for maintaining consistency in a multiprocessor computer system using virtual caching
US5404485A (en) * 1993-03-08 1995-04-04 M-Systems Flash Disk Pioneers Ltd. Flash file system
US5581736A (en) * 1994-07-18 1996-12-03 Microsoft Corporation Method and system for dynamically sharing RAM between virtual memory and disk cache

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4685057A (en) * 1983-06-06 1987-08-04 Data General Corporation Memory mapping system
US5268870A (en) * 1988-06-08 1993-12-07 Eliyahou Harari Flash EEPROM system and intelligent programming and erasing methods therefor
US5070032A (en) * 1989-03-15 1991-12-03 Sundisk Corporation Method of making dense flash eeprom semiconductor memory structures
US5172338B1 (en) * 1989-04-13 1997-07-08 Sandisk Corp Multi-state eeprom read and write circuits and techniques
DE69034191T2 (en) * 1989-04-13 2005-11-24 Sandisk Corp., Sunnyvale EEPROM system with multi-chip block erasure
US5200959A (en) * 1989-10-17 1993-04-06 Sundisk Corporation Device and method for defect handling in semi-conductor memory
US5263160A (en) * 1991-01-31 1993-11-16 Digital Equipment Corporation Augmented doubly-linked list search and management method for a system having data stored in a list of data elements in memory
US5437020A (en) * 1992-10-03 1995-07-25 Intel Corporation Method and circuitry for detecting lost sectors of data in a solid state memory disk
US5337275A (en) * 1992-10-30 1994-08-09 Intel Corporation Method for releasing space in flash EEPROM memory array to allow the storage of compressed data
US5357475A (en) * 1992-10-30 1994-10-18 Intel Corporation Method for detaching sectors in a flash EEPROM memory array
US5341330A (en) * 1992-10-30 1994-08-23 Intel Corporation Method for writing to a flash memory array during erase suspend intervals
US5448577A (en) * 1992-10-30 1995-09-05 Intel Corporation Method for reliably storing non-data fields in a flash EEPROM memory array
US5454103A (en) * 1993-02-01 1995-09-26 Lsc, Inc. Method and apparatus for file storage allocation for secondary storage using large and small file blocks
US5581723A (en) * 1993-02-19 1996-12-03 Intel Corporation Method and apparatus for retaining flash block structure data during erase operations in a flash EEPROM memory array
US5551020A (en) * 1994-03-28 1996-08-27 Flextech Systems, Inc. System for the compacting and logical linking of data blocks in files to optimize available physical storage
US5491774A (en) * 1994-04-19 1996-02-13 Comp General Corporation Handheld record and playback device with flash memory
US5586291A (en) * 1994-12-23 1996-12-17 Emc Corporation Disk controller with volatile and non-volatile cache memories

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1008024A2 (en) * 1998-03-10 2000-06-14 Baxter International Inc. Systems and methods for storing, retrieving, and manipulating data in medical processing devices
EP1008024A4 (en) * 1998-03-10 2006-11-02 Baxter Int Systems and methods for storing, retrieving, and manipulating data in medical processing devices
US7127478B1 (en) 1998-04-29 2006-10-24 Siemens Aktiengesellschaft Data base for persistent data
EP1385096A2 (en) * 2002-07-22 2004-01-28 Samsung Electronics Co., Ltd. Apparatus and method for managing memory in mobile communication terminal
EP1385096A3 (en) * 2002-07-22 2006-04-19 Samsung Electronics Co., Ltd. Apparatus and method for managing memory in mobile communication terminal

Also Published As

Publication number Publication date
US5787445A (en) 1998-07-28
US5839108A (en) 1998-11-17
AU2073197A (en) 1997-09-22

Similar Documents

Publication Publication Date Title
US5787445A (en) Operating system including improved file management for use in devices utilizing flash memory as main memory
EP0466389B1 (en) File system with read/write memory and write once-read many (WORM) memory
US8065473B2 (en) Method for controlling memory card and method for controlling nonvolatile semiconductor memory
Quinlan, "A cached WORM file system"
CA2325810C (en) Method and system for file system management using a flash-erasable, programmable, read-only memory
US8549051B2 (en) Unlimited file system snapshots and clones
US7072916B1 (en) Instant snapshot
EP2176795B1 (en) Hierarchical storage management for a file system providing snapshots
KR100324028B1 (en) Method for performing a continuous over-write of a file in a nonvolatile memory
US7594062B2 (en) Method for changing data of a data block in a flash memory having a mapping area, a data area and an alternative area
CN107180092B (en) File system control method and device and terminal
US8019925B1 (en) Methods and structure for dynamically mapped mass storage device
US8315995B1 (en) Hybrid storage system
US6434678B1 (en) Method for data storage organization
US6606628B1 (en) File system for nonvolatile memory
KR101447188B1 (en) Method and apparatus for controlling I/O to optimize flash memory
US8818950B2 (en) Method and apparatus for localized protected imaging of a file system
JP2006294031A (en) Memory drive for operation on network, method of accessing file data in sequential access storage medium from network, memory logic including logic for converting command based on file and logic for storing toc, magnetic tape, and logic for accessing data of tape and toc region
KR20070060070A (en) FAT analysis for optimized sequential cluster management
KR20110039417A (en) Apparatus, system, and method for efficient mapping of virtual and physical addresses
JP2003196142A (en) Write-once type memory device and file management method
KR20010037155A (en) Flash file system
US5787446A (en) Sub-volume with floating storage space
JP4221959B2 (en) Bridge file system, computer system, data management method and recording medium using bridge file system
JP2774691B2 (en) File system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN YU AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 97531977

Format of ref document f/p: F

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA