WO2003083668A1 - Morphing memory pools - Google Patents

Morphing memory pools Download PDF

Info

Publication number
WO2003083668A1
WO2003083668A1 (PCT/IB2003/001008)
Authority
WO
WIPO (PCT)
Prior art keywords
memory
configuration
packets
packet
pool
Prior art date
Application number
PCT/IB2003/001008
Other languages
French (fr)
Inventor
Hendrikus C. W. Van Heesch
Egidius G. P. Van Doren
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to AU2003209598A priority Critical patent/AU2003209598A1/en
Priority to EP03745348A priority patent/EP1499979A1/en
Priority to KR10-2004-7015677A priority patent/KR20040101386A/en
Priority to US10/509,456 priority patent/US20050172096A1/en
Priority to JP2003581024A priority patent/JP2005521939A/en
Publication of WO2003083668A1 publication Critical patent/WO2003083668A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/0223User address space allocation, e.g. contiguous or non contiguous base addressing
    • G06F12/023Free address space management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10Providing a specific technical effect
    • G06F2212/1041Resource optimization
    • G06F2212/1044Space efficiency improvement


Abstract

The invention relates to a method, the use of such a method, and an integrated circuit for altering memory configurations in a physical memory. A memory configuration comprising memory pools of memory packets can be changed into a new memory configuration by detecting a released memory packet within a memory pool of said first memory configuration, assigning memory from said released memory packet to said second memory configuration, determining the size of said assigned free memory of said second memory configuration, and allocating within said assigned free memory a required amount of memory for a memory packet of a pool of said second memory configuration in case said assigned free memory size satisfies said allocation request. By this transition a seamless mode change may be applied, and memory packets released within a first mode may already be used by said second mode. Fragmentation may be avoided.

Description

Morphing memory pools
The invention relates to a method for altering memory configurations in a physical memory where a first memory configuration and at least a second memory configuration are defined by at least one memory pool comprising at least one memory packet, respectively. The invention further relates to the use of such a method.
In many applications physical memory is limited and must be used efficiently. To use the physical memory, an allocator has to allocate free memory blocks within the provided physical memory. As memory blocks are allocated and deallocated over time, the physical memory becomes fragmented, which means that blocks of unallocated memory appear between allocated blocks. These so-called holes prevent the application from using all available physical memory.
From "Dynamic Storage Allocation: A Survey and Critical Review", Paul R. Wilson, et al., Department of Computer Sciences, University of Texas at Austin, allocators, and mechanisms for avoiding fragmentation in memories, are known. Allocators are categorised by the mechanism they use for recording which areas of memory are free and for merging adjacent free blocks into larger free blocks. Important for an allocator are its policy and strategy, i.e. whether the allocator properly exploits the regularities in real request streams. An allocator provides the functions of allocating new blocks of memory and releasing a given block of memory. Different applications require different strategies of allocation, as well as different memory sizes. One strategy for allocation is to use pools of equally sized memory blocks. These equally sized memory blocks may also be called packets. Each allocation request is mapped onto a request for a packet from a pool that satisfies the request. When packets are allocated and released within a pool, external fragmentation is avoided. Fragmentation within a pool can only occur when a requested memory block does not fit exactly into a packet of the selected pool.
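The pool strategy described above can be sketched as a fixed-size-packet allocator with an intrusive free list. The sketch below is illustrative only; the names, packet count, and packet size are assumptions, not taken from the survey or from the patent:

```c
#include <assert.h>
#include <stddef.h>

/* A minimal fixed-size-packet pool: free packets are kept on a singly
 * linked list of indices, so allocation and release are O(1) and no
 * external fragmentation can occur inside the pool. */
#define POOL_PACKETS 4
#define PACKET_SIZE  64

typedef struct {
    unsigned char storage[POOL_PACKETS * PACKET_SIZE];
    int next_free[POOL_PACKETS]; /* index of next free packet, -1 = end */
    int free_head;               /* first free packet, -1 = pool exhausted */
} Pool;

static void pool_init(Pool *p)
{
    for (int i = 0; i < POOL_PACKETS - 1; i++)
        p->next_free[i] = i + 1;
    p->next_free[POOL_PACKETS - 1] = -1;
    p->free_head = 0;
}

/* Returns a PACKET_SIZE block, or NULL when the pool is exhausted. */
static void *pool_alloc(Pool *p)
{
    if (p->free_head < 0)
        return NULL;
    int idx = p->free_head;
    p->free_head = p->next_free[idx];
    return p->storage + (size_t)idx * PACKET_SIZE;
}

static void pool_release(Pool *p, void *block)
{
    int idx = (int)(((unsigned char *)block - p->storage) / PACKET_SIZE);
    p->next_free[idx] = p->free_head;
    p->free_head = idx;
}
```

A request smaller than PACKET_SIZE still consumes a whole packet, which is exactly the internal fragmentation within a pool that the survey mentions.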
In streaming systems, the streaming data is processed by a graph of processing nodes. The processing nodes process the data using data packets. Each packet corresponds to a memory block in a memory which is shared by all processing nodes. A streaming graph is created when it is known which processing steps have to be carried out on the streaming data. The size of the packets within the pools depends on the data to be streamed. Audio data requires packet sizes of a few kilobytes, and video data requires packet sizes of up to one megabyte.
In case a streaming graph has to be changed, the configuration of memory pools also has to be changed. A streaming graph might be changed when different applications and their data streams are supported within one system. The processing steps of a data stream might also be changed, which requires including or removing processing nodes from the streaming graph. As most systems are memory-constrained, not all application data can be stored in the memory at one time. That means that memory pools needed for a first application have to be released for memory pools of a second application. By releasing and allocating memory, fragmentation of that memory may occur.
In case a user decides that a certain audio- or video-filter needs to be inserted into, or removed from, the streaming graph, the configuration of the memory has to be changed. This configuration change has to be carried out without losing data. In streaming systems in particular, data keeps streaming into the system at a fixed rate. It is not possible to stop processing the data in the nodes, wait until one pool is completely released and finally allocate its memory to a new pool. Such a procedure would require buffering of the streaming data, which is not possible with limited memory.
Software streaming is based on a graph of processing nodes where the communication between the nodes is done using memory packets. Each memory packet corresponds to a memory block in a memory shared by all nodes. Fixed-size memory pools are provided in streaming systems. In these memory pools fixed-size memory packets are allocated. Each processing node may have different requirements for its packets, so there are typically multiple different pools. A change in the streaming graph, which means that the processing of data is changed, requires a change of memory configuration, because different packet sizes might be required in new memory pools. To allow a seamless change between memory configurations, the use of released memory packets for a new memory pool has to be allowed prior to the release of all memory packets of a previous memory pool.
As current allocators do not provide a sufficient method for such a seamless change between processing modes, it is an object of the invention to limit the amount of extra buffering while changing the mode of operation. It is a further object of the invention to allow shifting of the same piece of memory between at least two pools in different modes. It is yet a further object of the invention to reuse the same memory by different memory pools in different modes.
These objects of the invention are solved by a method comprising the steps of detecting a released memory packet within a memory pool of said first memory configuration, assigning memory from said released memory packet to said second memory configuration, determining the size of said assigned free memory of said second memory configuration, and allocating within said assigned free memory a required amount of memory for a memory packet of a pool of said second memory configuration in case said assigned free memory size satisfies said allocation request. The advantages are that transitions between operation modes are seamless, no extra hardware is needed, and only a little extra memory is required. Memory fragmentation only occurs during the transition between different modes.
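The four steps above can be read as bookkeeping on a single quantity: the free memory released by the first configuration and assigned to the second one. The C sketch below models only that bookkeeping, using sizes rather than addresses; all names, the pending-request list, and the array bound are illustrative assumptions:

```c
#include <assert.h>

/* Size-only bookkeeping sketch of steps a)-d). */
typedef struct {
    int assigned_free;  /* steps b/c: released A-memory assigned to B */
    int pending[8];     /* packet sizes B still has to allocate       */
    int n_pending;
} Transition;

/* Steps a+b: a packet of the first configuration was released;
 * its memory is assigned to the second configuration. */
static void on_release(Transition *t, int packet_size)
{
    t->assigned_free += packet_size;
}

/* Steps c+d: allocate every pending packet of the second configuration
 * that now fits into the assigned free memory.
 * Returns the number of packets allocated by this call. */
static int drain_pending(Transition *t)
{
    int done = 0;
    for (int i = 0; i < t->n_pending; ) {
        if (t->pending[i] <= t->assigned_free) {
            t->assigned_free -= t->pending[i];
            t->pending[i] = t->pending[--t->n_pending]; /* swap-remove */
            done++;
        } else {
            i++;
        }
    }
    return done;
}
```

The key property of the method shows up directly: packets of the new configuration are allocated as soon as enough released memory has accumulated, well before the old configuration is fully released.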
A memory configuration provides a defined number of memory pools, each comprising a certain number of memory packets, whereby a memory pool is made up of at least one memory packet.
When a processing node has processed a data packet, the memory of this data packet may be released, as the processed data is sent to the next processing node. This means that the allocator releases a memory packet after the stored data has been processed.
In case a memory packet within a first memory configuration is released, this memory packet can be assigned to a second memory configuration. It is also possible that a transition to a further memory configuration may be carried out.
After assigning free memory to at least said second memory configuration, the overall size of this assigned free memory is determined. This is the size of all released memory packets from said first memory configuration which are assigned to at least said second memory configuration and which have not been reallocated yet.
In case the size of the assigned free memory satisfies a memory request for a memory packet for a pool of said second memory configuration, this memory packet is allocated within said assigned free memory. That means that released free memory may be used by a second memory configuration prior to the release of all allocated memory packets of said first memory configuration.
To apply configuration changes between more than two memory configurations, a method according to claim 2 is preferred. In that case, a transition to a further memory configuration may be carried out even though the previous transition is not yet complete. To assure that all memory packets of a first configuration are released and assigned to a second configuration, a method according to claim 3 is preferred.
In some cases not all memory is used by a memory configuration. Thus, a method according to claim 4 is preferred. In that case free memory may be allocated to memory packets of said second memory configuration ahead of releasing any memory packets of said first memory configuration. It is also possible that memory is assigned to memory packets of more than one following memory configuration.
To allow allocation of memory packets, a method according to claim 5 is preferred. In that case, memory configurations are fixed in advance for all configurations. When streaming data is processed, equally sized memory packets according to claim 6 are preferred.
To assure a mode change within a certain time, releasing memory packets according to claim 7 is preferred.
To allow an efficient allocation of memory pools and memory packets in case a memory configuration is changed, a method according to claim 8 is preferred. Prior to changing from a first configuration to a second configuration, the allocator knows the second configuration; that is, the allocator knows the number of memory pools and the sizes of the memory packets within said pools.
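Since the allocator knows the target configuration in advance, a configuration can be described by a small table of pools. The struct below is a hypothetical sketch of such a description; the names and the pool limit are assumptions:

```c
#include <assert.h>

/* A memory configuration that is fixed in advance: per pool, the
 * packet size and the packet count. */
#define MAX_POOLS 4

typedef struct {
    int packet_size;
    int packet_count;
} PoolSpec;

typedef struct {
    int n_pools;
    PoolSpec pools[MAX_POOLS];
} MemConfig;

/* Total memory footprint of a configuration. */
static int config_total_size(const MemConfig *c)
{
    int total = 0;
    for (int i = 0; i < c->n_pools; i++)
        total += c->pools[i].packet_size * c->pools[i].packet_count;
    return total;
}
```

With the Fig. 2 values (pool A1: three packets of size 2, A2: one packet of size 3; B1: three packets of size 1, B2: two packets of size 3) both configurations occupy the same nine address units, which is what makes an in-place morph between them possible.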
The use of a previously described method in streaming systems, in particular in video- and audio-streaming systems, where a memory configuration is based on a defined streaming graph, is a further aspect of the invention.
An integrated circuit, in particular a digital signal processor, a digital video processor, or a digital audio processor, providing a memory allocation according to previously described method is yet another aspect of the invention.
These and other aspects of the invention will be apparent from, and elucidated with reference to, the embodiments described hereinafter. Fig. 1 shows a flowchart of a method according to the invention; Fig. 2 shows a diagrammatic view of a memory configuration.
Fig. 1 depicts a flowchart of a method according to the invention. In step 2 a configuration A is defined and allocated within a memory. Configuration A describes the number of memory pools and the number and size of memory blocks (packets) within each of said memory pools. In case a mode change is requested in step 6, a new memory configuration B has to be determined in step 4. The memory configuration B is determined based on the needs of the requested mode.
In step 8 all free memory of configuration A is assigned to configuration B. In step 10 it is determined whether any memory requests are still pending. These requests are determined based on the memory configuration B, which has been determined previously in step 4. The allocator knows whether memory packets still have to be allocated to configure the memory according to configuration B or not.
In case there are pending memory requests, it is determined in step 12 whether the assigned free memory for configuration B is large enough for a memory packet of configuration B. In case the free memory assigned to configuration B is large enough for a memory packet of a pool of configuration B, this memory packet is allocated within the assigned free memory in step 14.
In case the size of the assigned free memory is smaller than any requested memory packet of any pool of configuration B, step 16 is processed: it is determined whether any packets are still allocated for configuration A. In case there are still memory packets allocated for configuration A, the release of a memory packet within configuration A is awaited in step 18.
After a memory packet within configuration A is released, the released memory packet is assigned to configuration B in step 19. Steps 10, 12, 14, 16, 18 and 19 are processed until no more memory requests are pending.
If it is detected in step 10 that configuration B is wholly configured and no more memory requests are pending, steps 10, 16, 18 and 19 are processed until all memory packets of configuration A are released. When this is the case, the mode transition is ended in step 20. After steps 2 to 20 have been processed, the memory is configured according to configuration B and no further memory packets are allocated for configuration A.
During transition from configuration A to configuration B, memory packets may be used in configuration B before all memory packets of configuration A are released.
In Fig. 2 a diagrammatic view of a memory configuration is depicted. The memory 22 is addressable via memory addresses 22₀ to 22₈. In configuration A, memory 22 is divided into two pools A1, A2, pool A1 comprising three packets of size 2 and pool A2 one packet of size 3. During transition 25 from configuration A to configuration B, the memory 22 is reorganised into two pools B1, B2, pool B1 comprising three packets of size 1 and pool B2 two packets of size 3. In step 18₁ packet A2₁ at address 22₆ is released and the released memory is assigned to configuration B0. In step 14₁ the assigned free memory B0 is allocated to memory packet B2₂. In step 18₂ memory packet A1₁ at address 22₀ is released and assigned to free memory B0. In step 14₂ memory packets B1₁, B1₂ are allocated at memory addresses 22₀, 22₁ within free memory B0. In step 18₃ memory packet A1₂ is released at memory address 22₂ and in step 14₃ memory packet B1₃ is allocated within free memory B0. In step 18₄ memory packet A1₃ is released and assigned to free memory B0. Finally, in step 14₄ memory packet B2₁ is allocated within free memory B0 at address 22₃.
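The Fig. 2 transition can be replayed with size-only bookkeeping (addresses omitted, helper names assumed), which makes it easy to check that every allocation of configuration B is covered by memory already released from configuration A:

```c
#include <assert.h>

/* free_b0 tracks the memory released by configuration A and assigned
 * to configuration B but not yet reused. Sizes only, no addresses. */
static int free_b0 = 0;

static void release_a(int size) { free_b0 += size; }  /* steps 18_n */

static int alloc_b(int size)                          /* steps 14_n */
{
    if (free_b0 < size)
        return 0;
    free_b0 -= size;
    return 1;
}

/* Replay of Fig. 2:
 *   release A2_1 (3) -> allocate B2_2 (3)
 *   release A1_1 (2) -> allocate B1_1 (1), B1_2 (1)
 *   release A1_2 (2) -> allocate B1_3 (1); one unit stays assigned
 *   release A1_3 (2) -> allocate B2_1 (3); free_b0 is zero again
 */
```

Note the next-to-last step: B2₁ (size 3) cannot be placed until A1₃ is released, exactly the waiting loop of steps 16 and 18 in Fig. 1.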
By applying the inventive method, a pool is placed in both configurations at a same memory position and the amount of packets that can be added to pools of new configurations can be maximised when a packet from a previous configuration is released.
By using the extra knowledge of where a packet will need to be allocated in a future mode, fragmentation may be prevented. Furthermore, memory pools can be allocated incrementally, which reduces the latency of a streaming system and thus the amount of memory that is required for seamless mode changes.

Claims

CLAIMS:
1. Method for altering memory configurations in a physical memory where a first memory configuration and at least a second memory configuration are defined by at least one memory pool comprising at least one memory packet, respectively, comprising the steps of: a) detecting a released memory packet within a memory pool of said first memory configuration, b) assigning memory from said released memory packet to said second memory configuration, c) determining the size of said assigned free memory of said second memory configuration, and d) allocating within said assigned free memory a required amount of memory for a memory packet of a pool of said second memory configuration in case said assigned free memory size satisfies said allocation request.
2. Method according to claim 1, characterized by repeating the steps a-d until all allocated memory packets of said first memory configuration are released and all memory packets of said second memory configuration are allocated.
3. Method according to claim 1, characterized by carrying out an alteration of said memory configurations according to steps a-d to a further memory configuration prior to the release of all memory packets of said previous memory configurations.
4. Method according to claim 1, characterized by assigning all free memory of said first memory configuration to at least said second memory configuration prior to step a.
5. Method according to claim 1, characterized by configuring said memory configurations by allocating a fixed memory location to said at least one memory pool, and assigning memory packets within each of said at least two memory pools.
6. Method according to claim 1, characterized by allocating equally sized memory packets within a memory pool.
7. Method according to claim 1, characterized by releasing memory packets of said first memory configuration within a finite time.
8. Method according to claim 1, characterized by determining said second configuration prior to step a.
9. Use of a method according to claim 1 in streaming systems, in particular in video- and audio-streaming systems, where a memory configuration is based on a defined streaming graph.
10. Integrated circuit, in particular a digital signal processor, a digital video processor, or a digital audio processor, providing a memory allocation method according to claim 1.
PCT/IB2003/001008 2002-04-03 2003-03-14 Morphing memory pools WO2003083668A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
AU2003209598A AU2003209598A1 (en) 2002-04-03 2003-03-14 Morphing memory pools
EP03745348A EP1499979A1 (en) 2002-04-03 2003-03-14 Morphing memory pools
KR10-2004-7015677A KR20040101386A (en) 2002-04-03 2003-03-14 Morphing memory pools
US10/509,456 US20050172096A1 (en) 2002-04-03 2003-03-14 Morphing memory pools
JP2003581024A JP2005521939A (en) 2002-04-03 2003-03-14 Memory pool transformation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP02076271.2 2002-04-03
EP02076271 2002-04-03

Publications (1)

Publication Number Publication Date
WO2003083668A1 true WO2003083668A1 (en) 2003-10-09

Family

ID=28459538

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2003/001008 WO2003083668A1 (en) 2002-04-03 2003-03-14 Morphing memory pools

Country Status (7)

Country Link
US (1) US20050172096A1 (en)
EP (1) EP1499979A1 (en)
JP (1) JP2005521939A (en)
KR (1) KR20040101386A (en)
CN (1) CN1647050A (en)
AU (1) AU2003209598A1 (en)
WO (1) WO2003083668A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7516291B2 (en) * 2005-11-21 2009-04-07 Red Hat, Inc. Cooperative mechanism for efficient application memory allocation
CN101594478B (en) * 2008-05-30 2013-01-30 新奥特(北京)视频技术有限公司 Method for processing ultralong caption data
JP5420972B2 (en) * 2009-05-25 2014-02-19 株式会社東芝 Memory management device
US20140149697A1 (en) * 2012-11-28 2014-05-29 Dirk Thomsen Memory Pre-Allocation For Cleanup and Rollback Operations
US20150172096A1 (en) * 2013-12-17 2015-06-18 Microsoft Corporation System alert correlation via deltas
CN107203477A (en) 2017-06-16 2017-09-26 深圳市万普拉斯科技有限公司 Memory allocation method, device, electronic equipment and readable storage medium storing program for executing

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5544327A (en) * 1994-03-01 1996-08-06 International Business Machines Corporation Load balancing in video-on-demand servers by allocating buffer to streams with successively larger buffer requirements until the buffer requirements of a stream can not be satisfied
US7093097B2 (en) * 2001-11-27 2006-08-15 International Business Machines Corporation Dynamic self-tuning memory management method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JOHNSON ET AL.: "SPATIAL PLACEMENT ALGORITHM FOR A MEMORY SCHEDULING PROBLEM", IBM TECHNICAL DISCLOSURE BULLETIN, vol. 15, no. 9, February 1973 (1973-02-01), NEW YORK US, pages 2843 - 2844, XP002242502 *

Also Published As

Publication number Publication date
EP1499979A1 (en) 2005-01-26
AU2003209598A1 (en) 2003-10-13
KR20040101386A (en) 2004-12-02
CN1647050A (en) 2005-07-27
US20050172096A1 (en) 2005-08-04
JP2005521939A (en) 2005-07-21

Similar Documents

Publication Publication Date Title
KR100724438B1 (en) Memory control apparatus for bsae station modem
EP1492295B1 (en) Stream data processing device, stream data processing method, program, and medium
US7818503B2 (en) Method and apparatus for memory utilization
US7596659B2 (en) Method and system for balanced striping of objects
US6760795B2 (en) Data queue system
US20060136779A1 (en) Object-based storage device with low process load and control method thereof
US20080086603A1 (en) Memory management method and system
WO2020073233A1 (en) System and method for data recovery in parallel multi-tenancy ssd with finer granularity
US10552936B2 (en) Solid state storage local image processing system and method
US7453878B1 (en) System and method for ordering of data transferred over multiple channels
US6009471A (en) Server system and methods for conforming to different protocols
JP2005500620A (en) Memory pool with moving memory blocks
US6614709B2 (en) Method and apparatus for processing commands in a queue coupled to a system or memory
US20050172096A1 (en) Morphing memory pools
US6647439B1 (en) Arrangement with a plurality of processors sharing a collective memory
EP1178643B1 (en) Using a centralized server to coordinate assignment of identifiers in a distributed system
JP2005084907A (en) Memory band control unit
JP2009163325A (en) Information processor
US11592986B2 (en) Methods for minimizing fragmentation in SSD within a storage system and devices thereof
US20060230246A1 (en) Memory allocation technique using memory resource groups
US8166272B2 (en) Method and apparatus for allocation of buffer
WO2010082604A1 (en) Data processing device, method of memory management, and memory management program
US20080270676A1 (en) Data Processing System and Method for Memory Defragmentation
US20140068220A1 (en) Hardware based memory allocation system with directly connected memory
TW201706849A (en) A packet processing system, method and device to optimize packet buffer space

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PH PL PT RO RU SC SD SE SG SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2003745348

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 10509456

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 20038076500

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 2003581024

Country of ref document: JP

Ref document number: 1020047015677

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 1020047015677

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 2003745348

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 2003745348

Country of ref document: EP