|Publication number||US6895128 B2|
|Application number||US 09/870,389|
|Publication date||May 17, 2005|
|Filing date||May 29, 2001|
|Priority date||May 29, 2001|
|Also published as||US20020186899|
|Original Assignee||Mevis Breastcare Gmbh & Co. Kg|
This application is related to the following patent applications, filed on the same day as this application and assigned to the same assignee, MeVis Technology GmbH & Co. KG.
The present invention relates to a method and apparatus for the operation of a cache buffer store in a data processing system, such as a computer system for rendering of medical images. More particularly, the invention relates to a method and apparatus including a procedure for prefetching of images and a corresponding algorithm for replacing least recently used images from the cache.
In many data processing systems, there is provided between the working memory of the central processing unit and the main memory a high-speed buffer storage unit that is commonly called a “cache”. This unit enables a relatively fast access to a subset of data that were previously transferred from the main storage to the cache, and thus, improves the speed of operation of a data processing system.
U.S. Pat. No. 4,807,110 discloses a prefetching mechanism for a cache and a two-level shadow directory. When an information block is accessed, a parent identifier derived from the block address is stored in a first level of the shadow directory. The address of a subsequently accessed block is stored in the second level of the shadow directory, in a position associated with the first-level position of the respective parent identifier. With each access to an information block, a check is made whether the respective parent identifier is already stored in the first level of the shadow directory. If it is found, then a descendant address from the associated second-level position is used to prefetch an information block to the cache if it is not already resident therein. This mechanism is to reduce the occurrence of cache misses.
U.S. Pat. No. 6,154,767 shows a method and apparatus for using attribute transition probability models for prefetching resources. Idle bandwidth of a client is utilized for the prefetching; resource prefetching by the resource server utilizes idle processing and/or data bus resources of the server.
U.S. Pat. No. 6,151,662 discloses a method of data transaction typing for caching and prefetching. A microprocessor assigns a data transaction type to each instruction. The data transaction type is based upon the encoding of the instruction, and indicates an access mode for memory operations corresponding to the instruction. The access mode may, for example, specify caching and prefetching characteristics for the memory operation. The access mode for each data transaction type is selected to enhance the speed of access by the microprocessor to the data, or to enhance the overall cache and prefetching efficiency of the microprocessor by inhibiting caching and/or prefetching for those memory operations.
From U.S. Pat. No. 6,098,064 a method for prefetching and caching documents according to a probability ranked need list is known. Documents are prefetched and cached on a client computer from servers located on the Internet in accordance with their computed need probability. Those documents with a higher need probability are prefetched and cached before documents with lower need probabilities. The need probability for a document is computed using both a document context factor and a document history factor. The context factor of the need probability of a document is determined by computing the correlation between words in the document and a context of the operating environment. The history factor of the need probability of a document is determined by integrating both how recently the document was used and the frequency of document use.
U.S. Pat. No. 6,154,826 discloses a method for maximizing memory system bandwidth by accessing data in a dynamically determined order. The order in which to access said information is selected based on the location of information stored in the memory. The information is repeatedly accessed from memory and stored in the temporary storage until all streamed information is accessed. The information is stored until required by the data processor. The selection of the order in which to access information maximizes bandwidth and decreases the retrieval time.
Caching is an important technique in all fields of data processing where large amounts of data have to be handled, such as image processing. A particular field of image processing is the rendering, archival, retrieval, processing, transformation, analysis and display of medical images, such as, in the field of digital radiography.
U.S. Pat. No. 6,041,135 shows an interactive off-line processing method for radiographic images. In this off-line image processing method for radiographic images an image is decomposed into detail images and multiple resolution levels and a residual image; detail images are modified up to a preset resolution level and a processed image is reconstructed by means of the modified detail images and the residual image. Interactive processing is performed with different parameter settings.
U.S. Pat. No. 6,127,669 discloses a method for computer aided determination of window and level settings for filmless radiology. Input image data is transformed into an image histogram. This histogram is then segmented into a small number of parts corresponding to structures of interest. Segmentation of the histogram is done by a Viterbi optimal runlength-constrained approximation nonlinear filter.
Window and level settings are calculated to correspond to this segmentation. A readable image of one of the structures of interest is then displayed. In another embodiment, the tool provides a menu of optimal window and level settings corresponding to the different types of information a radiologist may be interested in.
A common disadvantage of prior art workstations for the processing and display of digital radiographic images is the latency time experienced by the user of the system when a new image is loaded.
This is due to the fact that a single digital radiographic image can be over 50 MB in size. For example, in digital mammography a case to be reviewed by a radiologist is composed of about 4 to 8 different images, each 60 MB in size for certain modalities. A complete case thus is about 500 MB of data.
Even with high-speed processors and memory components, such volumes of data result in substantial latency times that are often not acceptable for the users of such workstations. As a consequence, usage of data processing systems for the archiving, retrieval and transformation of digital radiographic images has been limited so far.
It is therefore an object of the present invention to provide an improved method and apparatus for prefetching in order to reduce latency times experienced by a user, in particular with respect to applications for digital radiographic images.
It is a further object of the invention to provide a method and system having an improved “least recently used algorithm” for freeing a cache memory of image data which is no longer needed.
It is a still further object to provide such a method and system for coupling to a central picture archiving and communication system (PACS).
The invention accomplishes the above and other objects and advantages by providing a method and computer system for the prefetching of data, such as image data, for example medical images from the field of digital radiography. The invention enables maximizing system throughput and minimizing latency times experienced by a user. In brief, this is accomplished by taking a predefined workflow into consideration for the prefetching of the images. A corresponding method and system is disclosed in co-pending U.S. Patent Application entitled A Method And Computer System For Screening Of Medical Cases, Ser. No. 09/870,380, filed May 29, 2001.
This is of particular advantage in so-called “screening programs” where a large number of medical cases is reviewed sequentially by one or a group of radiologists. Typically the cases of such screening programs are all structured in the same or in a similar way. For example each case can consist of the same sequence of image formats and views.
The foregoing and other objects and advantages of the invention will become more apparent from the detailed description of the preferred embodiment of the invention described in detail below with reference to the accompanying drawings in which:
The invention will be described in detail as set forth in the preferred embodiments illustrated in the drawings. Although these embodiments depict the invention in its preferred application, such as a digital radiology image system, it should be readily apparent that the invention has equal application to any type of data processing system that encounters the same or similar problems.
The cache 3 is coupled to a data source 4. The data source 4 can be any type of data source, such as a file system stored on a hard drive or another storage medium or the output of an image processing device, such as a computer aided diagnosis tool.
The data processing system 1 further has a file 5 for storing a workflow to be performed. The file 5 has a sequence 6 of steps S1, S2, S3, . . . Si, . . . , Sn to be sequentially performed in the workflow. The sequence 6 has a pointer 7 which points to the current step being carried out by the data processing system and/or its user. In the example considered here, the current step to which the pointer 7 is directed, is the step S2.
A prefetcher 8 has access to the file 5 and is coupled to the cache 3 for requesting of data to be prefetched.
In operation, the application program 2 reads the sequence 6 to obtain the step to which the pointer 7 is directed. After the application program 2 has obtained the next step to be carried out, it issues a corresponding request to the cache 3 to request data which are required for the performance of the current step.
These data are transferred from the cache 3 to the application program 2 for display and/or further processing. If a so-called “cache miss” occurs the request for the data is passed to the data source 4 from the cache 3 in order to read the data from the data source 4 and store the data in the cache 3 for access by the application program 2.
A user of the application program 2 can also input a jump request to jump in the sequence 6 in a forward or backward direction. If such a jump occurs, then the pointer 7 is moved by a corresponding number of steps along the sequence 6.
In order to prevent the occurrence of cache misses, the prefetcher 8 reads from the sequence 6 the step consecutive to the current step and all remaining images for the current step. In the example considered here the current step is the step S2 and the consecutive step is the step S3. Hence, when the application program 2 works on the current step S2 the prefetcher 8 reads the consecutive step S3 and performs an access operation to the cache 3 in order to request the data which are required for the application program 2 to perform the step S3.
If the data for carrying out the step S3 is already in the cache 3, no further action is required. If the contrary is the case, the request is transferred to data source 4 in order to provide the cache 3 with the required data.
When the processing of the current step S2 by the application program 2 is complete, the position of the pointer 7 is incremented by one, such that the pointer 7 points to the consecutive step. The consecutive step S3 is read by the application program 2 and the application program 2 requests the data for the performance of the step S3—which is the new current step—from the cache 3.
As the corresponding data has been requested previously from the cache 3 by the prefetcher 8 while the application program 2 was busy with the processing of the step S2, it is ensured that the data for the performance of the step S3 is already loaded in cache 3 such that no cache miss occurs. Again the prefetcher 8 performs an access operation to the sequence 6 in order to read the consecutive step of the new current step S3 in order to request the data being required for the performance of the new consecutive step while the current step S3 is being carried out.
Alternatively the prefetcher 8 can also request the data for a number of consecutive steps or set of steps in the sequence 6 rather than just the data for the next consecutive step.
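The prefetching scheme described above can be condensed into a short sketch. The following Python is illustrative only; the class names (Workflow, Cache, Prefetcher) are hypothetical stand-ins for the sequence 6, the pointer 7, the cache 3 and the prefetcher 8, and the data source is modeled as a plain function.

```python
# Illustrative sketch of workflow-driven prefetching (hypothetical names).

class Cache:
    def __init__(self, source):
        self.source = source          # data source 4, e.g. a file reader
        self.store = {}               # key -> data

    def request(self, key):
        if key not in self.store:     # cache miss: pass request to the source
            self.store[key] = self.source(key)
        return self.store[key]

class Workflow:
    def __init__(self, steps):
        self.steps = steps            # e.g. [("S1", ["img_a"]), ("S2", ["img_b"])]
        self.pointer = 0              # pointer 7: index of the current step

    def consecutive(self):
        i = self.pointer + 1
        return self.steps[i] if i < len(self.steps) else None

    def jump(self, offset):           # a jump request moves the pointer
        self.pointer = max(0, min(len(self.steps) - 1, self.pointer + offset))

class Prefetcher:
    def __init__(self, workflow, cache):
        self.workflow, self.cache = workflow, cache

    def prefetch(self):
        nxt = self.workflow.consecutive()
        if nxt is not None:
            for key in nxt[1]:        # warm the cache for the consecutive step
                self.cache.request(key)
```

While the application works on the current step, calling `prefetch()` loads the data for the consecutive step; a jump merely moves the pointer, after which the same call warms the cache for the new consecutive step.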
In step 43 the application requests at least a subset of the data required for the performance of the complete step Si from the cache. During the processing of the step Si all of the data required for the performance of step Si is requested by the application in certain discrete data packages.
After the completion of the processing of the step Si, the application goes to the next step Si+1 in the workflow in step 44. From there, in the example considered here, the application jumps to a step Si+1+j in the workflow in response to a corresponding input operation of a user. The step Si+1+j is the new current step. The jump is performed in step 45.
In parallel, the data required for the performance of the current step Si and the consecutive step Si+1 of the current step are requested in step 46 from the cache for the purposes of prefetching. In step 47 it is decided whether the data is already in the cache.
If this is the case, an acknowledgement is sent to the prefetcher in step 48. If the contrary is the case, a data source is accessed in step 9 in order to load the data into the cache in step 10. Next, an acknowledgement is sent to the prefetcher in step 48. It is to be noted that steps 42, 46 to 48, 9 and 10 are carried out in parallel with steps 41 and 43 to 45 with respect to the then current step.
It is to be noted that image 3 and/or other images of the m images is skipped completely, depending on the user-specific initialization of the file 9 a.
The case stack 31 has a pointer 10 a which points to the current case being processed by the application 2. In the example considered here this is the case 2. Likewise the file 9 a has a pointer 11 pointing to the current image being processed within the current case. In the example considered here the current image is the image 1 (2) of the case 2.
The application 2 is coupled to a display buffer 12 that is coupled to a monitor 13.
Further the cache 3 is coupled to a data source manager 14. The data source manager 14 has a table 15 relating image formats (Data Tags) to corresponding data sources. In the example, the format 1 is related to the data source of file reader 16, the format 2 to the scaler 17, the format 3 to CLAHE 18 for performing of a contrast limited adaptive histogram equalization, and format 4 to wavelet enhancement 19.
In operation, the prefetcher 8 sequentially requests all of the images contained in the current case as indicated by the file 9 a from the cache 3, as well as, the images of the consecutive case. For example, when the prefetcher 8 requests the image 1 (1) of the consecutive case 3, the request is provided to the cache 3. If the corresponding data is already in the cache 3 this is acknowledged to the prefetcher 8.
If this is not the case, the request is passed onwards to the data source manager 14. The data source manager 14 identifies the appropriate data source for the request, which is the file reader 16. As a consequence, the image 1 (1) of the case 3 is requested from the file reader 16, which reads the corresponding data from mass storage, such as the hard drive of the data processing system 1, not shown in
When the prefetcher 8 requests the image 2 (3) of the case 3, a situation can occur where the image 2 is already in the cache 3, but not in the right format. As the image is not available in the required format the request is still passed onwards to the data source manager 14 for identification of the appropriate data source, which is CLAHE 18 in this case.
As a consequence, the CLAHE 18 is requested to provide the image 2 in the required format 3. This is done by the CLAHE 18 by requesting the corresponding image data in the format 1 from the cache 3, and then performing the contrast limited adaptive histogram equalization on the raw image data. The resulting transformed raw image data is stored in the format 3 in the cache 3 and an acknowledgement is sent to the prefetcher 8.
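The dispatch from the cache 3 through the data source manager 14 can be sketched as follows. All names are hypothetical, and the transforms are placeholders; in particular, the CLAHE stage here merely sorts the pixel list to stand in for the real contrast limited adaptive histogram equalization. What the sketch does model faithfully is the table relating formats to sources and the recursive request for the format-1 raw data.

```python
# Illustrative sketch: a table relates each image format to the data source
# that can produce it; a miss on (image, format) is dispatched to that source.

class ImageCache:
    def __init__(self):
        self.store = {}               # (image_id, fmt) -> image data
        self.manager = None           # set to a DataSourceManager

    def request(self, image_id, fmt):
        key = (image_id, fmt)
        if key not in self.store:     # miss: pass request to the manager
            self.store[key] = self.manager.produce(image_id, fmt)
        return self.store[key]

class DataSourceManager:
    def __init__(self, cache):
        self.cache = cache
        # table relating image formats to data sources (cf. table 15)
        self.table = {1: self.file_reader, 3: self.clahe}

    def produce(self, image_id, fmt):
        return self.table[fmt](image_id)

    def file_reader(self, image_id):  # format 1: raw data from mass storage
        return {"id": image_id, "pixels": [10, 200, 90]}

    def clahe(self, image_id):        # format 3: enhanced image
        raw = self.cache.request(image_id, 1)   # first obtain the raw data
        # placeholder for the contrast limited adaptive histogram equalization
        return {"id": image_id, "pixels": sorted(raw["pixels"])}
```

A request for an image in format 3 thus transparently pulls the format-1 raw data into the cache first, exactly as in the scenario described above.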
The application program 2 is coupled to the prefetcher 8 to inform the prefetcher 8 of a next case to be processed. Again the prefetcher requests the images required for the current case and a consecutive case from the image cache 3.
If a requested image is not in the image cache 3, the request is passed onwards to the image source 20. The operation of image source 20 corresponds to the data source manager 14. The image source 20 identifies an appropriate image source such as file reader 16 or scaler 17 or passes the request onwards to enhancement component 21, which passes the request onwards to CLAHE 18 or wavelet transformer 19 depending on the required image enhancement.
The requested image is loaded and/or generated by the respective image source and the image is stored in the image cache 3. This is acknowledged by the data source, which has loaded and/or generated the requested image by the message “Req_done” to all requestors of the image.
If a requestor of the image does not require the image anymore, a “release” message is sent to the image cache 3 from the requestor. Likewise a “cancel” message can be sent to the image cache 3 in case a request is to be cancelled before it has been completed. Such a situation can occur when the user jumps to another case in the case stack.
FIG. 6 shows the format of an image file 22 having a header 23 and a field 24 for storage of the pictorial data.
The header 23 contains an entry for the case number to which the image belongs, the image number for identification of the image, the image format, the requestors, the priority of the image for the purposes of a least recently used algorithm and a lock count. The utilization of those fields will become more apparent in the following.
Also, the application program can have more than one requestor. For example individual program components of the application program can take the role of individual requestors of images.
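The header fields enumerated above can be represented, for illustration, as a small record type; the field names are hypothetical and merely mirror the entries of the header 23.

```python
# Illustrative record for the header 23 of an image file (hypothetical names).
from dataclasses import dataclass, field

@dataclass
class ImageHeader:
    case_number: int                               # case to which the image belongs
    image_number: int                              # identifies the image
    image_format: int                              # e.g. 1 = raw, 3 = CLAHE
    requestors: list = field(default_factory=list) # who requested the image
    priority: int = 0                              # for the least recently used algorithm
    lock_count: int = 0                            # non-zero while a requestor holds it
```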
In step 71 the corresponding request is sent to the cache. In step 72 the cache decides whether the requested image is already available. If this is the case the lock count is incremented in step 73. In step 74 the priority of the image is incremented by +P2. For example, the value of P2 can be equal to 100. In step 75 an acknowledgement is sent to all requestors of the image; the requestors are identified by the corresponding data field in the header 23 of the requested image.
If the image is not in the cache the control goes to step 76 for generation of a header of the image in the cache. The priority of the image is set to be equal to a predefined value of P1 in step 77. For example the value of P1 can be equal to 100. In step 78 the lock count is set to be equal to 1.
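Steps 71 to 78 can be condensed into a small request handler. The constants P1 = P2 = 100 follow the example values above; the dictionary-based entry layout is an assumption for illustration only.

```python
# Illustrative sketch of the request path (steps 71-78).
P1, P2 = 100, 100   # example values from the description

def handle_request(cache, image_id, requestor):
    """cache maps image_id -> {"priority": int, "lock": int, "requestors": set}."""
    if image_id in cache:                        # image already resident: steps 73-75
        entry = cache[image_id]
        entry["lock"] += 1                       # step 73: increment lock count
        entry["priority"] += P2                  # step 74: raise priority by P2
        entry["requestors"].add(requestor)
        return "ack"                             # step 75: acknowledge to requestors
    # miss: steps 76-78, generate header with priority P1 and lock count 1
    cache[image_id] = {"priority": P1, "lock": 1, "requestors": {requestor}}
    return "load"                                # pictorial data must still be fetched
```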
In step 79 it is decided whether the cache has enough capacity for loading of the pictorial data of the requested image. If this is the case the pictorial data are requested from the appropriate image source in step 80. In step 81 the pictorial image data are loaded and/or generated by the image source.
This can require that the image source itself issues a request for the image in an appropriate format, for example to calculate the desired transformation in the requested image format. For example, if the image source is CLAHE, the CLAHE requires raw image data in the format 1. Correspondingly, the CLAHE requests the image in the format 1 such that the steps from step 70 onwards are invoked.
Once the requested image is present in the cache in the raw data format 1, the CLAHE image transformation is carried out to generate the image in the requested format 3. This way the step 81 is completed and an acknowledgement is provided to all requestors of the image in the format 3 in step 82.
If it is decided in step 79 that the cache does not have enough capacity, the control goes to step 83. In step 83 the contents of the cache are examined for images having a lock count equal to 0. All images having a lock count equal to 0 are grouped into a set S.
In step 84 an image IL of the set S having the lowest priority of the images in the set S is identified. This image IL is freed from the cache in step 85.
In step 86, all priorities of the images of the set S are decremented by one, but not below 0. From step 86, control returns to step 79 in order to repeat steps 83 to 86 if the available cache capacity is still insufficient.
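The replacement loop of steps 79 and 83 to 86 can be sketched as follows, assuming each cache entry carries the priority and lock count fields described above (the entry layout is an assumption for illustration):

```python
# Illustrative sketch of one pass of the replacement algorithm (steps 83-86).

def free_one(cache):
    # step 83: set S of all images whose lock count equals 0
    candidates = {k: v for k, v in cache.items() if v["lock"] == 0}
    if not candidates:
        return None                              # no image can be evicted
    # steps 84/85: identify and free the image IL with the lowest priority
    victim = min(candidates, key=lambda k: candidates[k]["priority"])
    del cache[victim]
    # step 86: decrement the remaining candidates' priorities, but not below 0
    for k in candidates:
        if k in cache:
            cache[k]["priority"] = max(0, cache[k]["priority"] - 1)
    return victim
```

The caller repeats `free_one` until enough capacity is available, mirroring the return from step 86 to step 79.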
The flow chart of
In step 81 a, a corresponding release message is received by the image cache and the header of the image Ik is identified in order to decrement the lock count in the header. This is done in step 81 a. In step 82 a, the priority is decremented by P3 but not below 0. For example, the value of P3 can be equal to 50. This way the released image can be replaced or removed from the cache in case capacity needs to be provided. The replacement is done in accordance with the least recently used algorithm as illustrated with respect to
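The release path of steps 81a and 82a can be sketched in the same style, with the hypothetical constant P3 = 50 taken from the example above: the lock count drops by one and the priority by P3, never below zero, so that a fully released image becomes eligible for replacement.

```python
# Illustrative sketch of the release path (steps 81a/82a).
P3 = 50   # example value from the description

def handle_release(cache, image_id):
    entry = cache[image_id]
    entry["lock"] = max(0, entry["lock"] - 1)           # step 81a
    entry["priority"] = max(0, entry["priority"] - P3)  # step 82a, floored at 0
```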
Each of the workstations 25 has the components of the data processing system 1 of FIG. 4. In addition each workstation 25 has a scheduler 28 that is connected to the case stack 31 and a keyboard 27 for entering jump commands and other entries. Further the workstation 25 has a local storage 29 for the storage of image data provided by the image server 26 and a user profile 30.
In operation the scheduler 28 identifies the case to be processed from case stack 31 and requests the corresponding image data from the image server 26. Preferably this is done when the workstation is not in use by the radiologist such as during night time.
This way the required data can be downloaded from the image server 26 over night, which takes a considerable amount of time in view of the large data volume. The image data is captured and stored on the storage 29 locally on the workstation 25. When the cases of the case stack 31 are reviewed by the radiologist, the required image data is obtained directly from the storage 29 and not from the image server 26 which further reduces latency times.
Although the present invention has been shown and described with respect to preferred embodiments, nevertheless, changes and modifications will be evident to those skilled in the art from the teachings of the invention. Such changes and modifications that embody the spirit, scope and teachings of the invention are deemed to fall within the purview of the invention as set forth in the appended claims.
List of reference numerals:
1 data processing system
14 data source manager
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4807110||Apr 6, 1984||Feb 21, 1989||International Business Machines Corporation||Prefetching system for a cache having a second directory for sequentially accessed blocks|
|US5917929||Jul 23, 1996||Jun 29, 1999||R2 Technology, Inc.||User interface for computer aided diagnosis system|
|US5987345||Nov 29, 1996||Nov 16, 1999||Arch Development Corporation||Method and system for displaying medical images|
|US6041135||Nov 6, 1997||Mar 21, 2000||Buytaert; Tom Guido||Fast interactive off-line processing method for radiographic images|
|US6098064||May 22, 1998||Aug 1, 2000||Xerox Corporation||Prefetching and caching documents according to probability ranked need S list|
|US6127669||Jan 29, 1998||Oct 3, 2000||University Of Maryland||Computer-aided determination of window and level settings for filmless radiology|
|US6151662||Dec 2, 1997||Nov 21, 2000||Advanced Micro Devices, Inc.||Data transaction typing for improved caching and prefetching characteristics|
|US6154767||Jan 15, 1998||Nov 28, 2000||Microsoft Corporation||Methods and apparatus for using attribute transition probability models for pre-fetching resources|
|US6154826||Feb 28, 1997||Nov 28, 2000||University Of Virginia Patent Foundation||Method and device for maximizing memory system bandwidth by accessing data in a dynamically determined order|
|US6574629 *||Dec 23, 1998||Jun 3, 2003||Agfa Corporation||Picture archiving and communication system|
|US20020102028 *||Feb 1, 2001||Aug 1, 2002||KELLER Scott||Image storage and display system|
|EP0766183A1 *||Sep 29, 1995||Apr 2, 1997||Hewlett-Packard Company||Browsing electronically stored information|
|U.S. Classification||382/305, 707/E17.031, 382/128, 382/100, 382/132|
|International Classification||G06F17/30, G06T7/00|
|Cooperative Classification||G06T7/0012, G06F17/3028|
|European Classification||G06T7/00B2, G06F17/30M9|