|Publication number||US7496217 B2|
|Application number||US 11/006,781|
|Publication date||Feb 24, 2009|
|Filing date||Dec 8, 2004|
|Priority date||Dec 8, 2003|
|Also published as||CN1655193A, CN100421128C, DE10357206A1, DE10357206B4, US20050123197|
|Original Assignee||Siemens Aktiengesellschaft|
The present application hereby claims priority under 35 U.S.C. §119 on German patent application number DE 103 57 206.6 filed Dec. 8, 2003, the entire contents of which are hereby incorporated herein by reference.
The invention generally relates to a method for segmentation of section image data of an examination object with the aid of a model based on parameters. The invention also generally relates to a method for production of a model based on parameters for use in a segmentation method such as this and to an image processing system by which a method such as this can be carried out.
The results of investigations by means of modalities which produce section images, such as CT scanners, magnetic resonance appliances and ultrasound appliances, generally include a number of series with a large number of section images of the relevant examination object. For further planning of the examination and/or in order to produce a diagnosis, this section image data must in many cases be processed further during the examination itself, or immediately after the examination. The so-called “segmentation” of anatomical structures plays a major role in the further processing of this section image data. A segmentation process such as this is used to break down the image data of the examination object such that specific object elements of an examination object, that is to say specific anatomical structures which are the focal point of the respective examination, are separated from the rest of the image data. One obvious example of this is separation of the bone structure of the pelvis from a CT or MR section image data record of a patient's lower body.
A further example is contrast agent angiography by way of computed tomography. In an examination such as this it is worthwhile, and maybe absolutely essential, to remove the interfering bone components from the volume data record in order subsequently to make it possible to produce diagnostically valid MIP representations (MIP=Maximum Intensity Projection) or other result images. This is particularly important for those examinations in the area of the skull or spinal column. Good segmentation of the already existing section image data also plays an important role in other areas of angiography, for operation planning or else for selecting the modality for further detailed images of an anatomical structure that is of interest.
One relatively simple segmentation algorithm is the so-called “threshold value method”. This method operates in such a way that the intensity values (which are referred to as “Hounsfield values” in computed tomography) of the individual voxels, that is to say of the individual 3-D pixels, are compared with a fixed threshold value setting. If the value of the voxel is above the threshold value, then this voxel is added to a specific structure. However, for magnetic resonance scans, this method can be used essentially only for contrast agent examinations or in order to separate the external skin surface from the environment.
In the case of computed tomography scans, this method may additionally also be used for separation of bone structures. This method is not suitable for separation of other tissue types. Furthermore, unfortunately, in many cases it is also impossible to use this method to separate different adjacent bone structures from one another, for example in order to separate the joint cavity in the pelvis structure from the joint head of the femur when scanning a hip joint, and to view these object elements separately. Furthermore, a simple threshold value method such as this often cannot be used reliably due to so-called partial volume effects and metal artifacts, which ensure that parts of the surface of the object element to be separated cannot be determined.
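As an illustration only (not part of the patented method), the threshold value method described above can be sketched as follows, using a hypothetical `threshold_segment` helper on a small synthetic voxel volume:

```python
import numpy as np

def threshold_segment(volume, threshold):
    """Threshold value method: mark every voxel whose intensity (e.g. its
    Hounsfield value in CT) lies above a fixed threshold as belonging to
    the structure of interest."""
    return np.asarray(volume) > threshold

# Synthetic 4x4x4 volume: bone-like voxels (~400 HU) embedded in soft
# tissue (~40 HU); the values and dimensions are illustrative only.
volume = np.full((4, 4, 4), 40.0)
volume[1:3, 1:3, 1:3] = 400.0
bone_mask = threshold_segment(volume, 200.0)
```

The resulting boolean mask marks the high-intensity voxels; as the text notes, this works for bone in CT but fails for soft tissue and wherever partial volume effects or metal artifacts corrupt the intensities.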
Thus, in many cases, only manual segmentation of the section image data is possible. Unfortunately, however, such manual segmentation is often very difficult to carry out owing to the complicated anatomy of the examination object, and is associated with a high time penalty.
In principle, the segmentation process can be improved by using so-called model-based methods, in which morphological knowledge of the examination object is included in the segmentation process.
Virtual models are already used in many fields of technology in order to simulate objects in specific states or to recognize objects again. By way of example, US 2001/0026272 A1 proposes a simulation method in which, taking into account mechanical and visual material characteristics of a piece of clothing, the physically correct fit of the relevant piece of clothing on the human body can be simulated. Furthermore, for example, U.S. Pat. No. 6,002,782 discloses a method for personal identification based on the correlation of recorded 2D images of the human face with simulated 2D images of already known optical 3D face scans, with the x axis of the 3D face scan being made to match the viewing direction of the camera associated with the 2D image.
The medical field, as well, already makes use of methods in order to produce virtual models of objects to be examined, on the basis of widely differing measurements, and these models can then be used as the basis for the further examination of the relevant object. By way of example, U.S. Pat. No. 6,028,907 describes a method in which a three-dimensional model of a spinal column to be examined is produced from two-dimensional CT section images and two-dimensional CT scout images.
In the case of the present problems of segmentation of section image data, image data which is missing, for example as a result of partial volume effects or metal artifacts, can be compensated for by matching a model to a target structure in the section image data (which includes the object element to be separated) in individual layers. This allows the complete reconstruction of the object element to be separated, for example the organ or the specific bone structure. However, in this method, the problem of segmentation is in the end changed to the problem of matching a model as well as possible to a target structure in the section image data.
One object of an embodiment of the present invention is to provide a corresponding method and/or an image processing system for simple and reliable segmentation of section image data of an examination object using a model. In particular, an object is to provide one wherein the model can be matched to the target structure satisfactorily, and/or with as little time penalty as possible.
An object can be achieved by a method and/or an image processing system.
On the basis of the method according to an embodiment of the invention, the anatomical norm model used in this case is a parameter-based norm model in which the model parameters are organized hierarchically on the basis of their influence on the anatomical overall geometry of the model, and whose geometry can be varied on the basis of the model parameters and can thus be matched to the target structure.
The “individualization” of the norm model, that is to say the matching to the target structure, is in this case carried out in a number of iteration steps, with the number of model parameters which can be set at the same time in the respective iteration step (and thus the number of degrees of freedom for model variation) being increased in accordance with the hierarchical order of the parameters as the number of iteration steps increases. This method ensures that, during the individualization process, those model parameters which have the greatest influence on the anatomical overall geometry of the model are adjusted first of all. Only then are the lower-level model parameters, which influence only parts of the overall geometry, adjusted on a gradual basis. This ensures an effective and, in consequence, time-saving procedure for model matching.
Finally, once the model has been matched in the desired manner to the target structure (for example when there are no longer any discrepancies between the model and the target structure, or the discrepancies are minimal or are below a specific threshold value), all of those pixels within the section image data which are within a contour of the individualized model or of a model element, or which differ from this by at most a specific difference value, are selected. The selection process may in this case be carried out by removing the relevant pixels, or by removing all the other pixels, that is to say with the relevant pixels being cut out. A “model element” should in this case be regarded as a part of the individualized norm model, for example the skull base of a skull model.
For this purpose, an image processing system according to an embodiment of the invention requires an interface for reception of the section image data which has been measured by a modality and is to be segmented, a target structure determination unit for determination of the target structure in the section image data, a memory device with a number of corresponding anatomical norm models for different target structures in the section image data, with the model parameters being organized hierarchically on the basis of their influence on the anatomical overall geometry of the model, and a selection unit for selection of one of the anatomical norm models on the basis of the section image data to be segmented. Furthermore, the image processing system requires an adaptation unit in order to match the selected norm model to the target structure in the section image data for individualization purposes in a number of iteration steps, with the number of model parameters which can be set being increased in accordance with the hierarchical organization as the number of iteration steps increases. Finally, the image processing system requires a separation unit in order, finally, to select all of the pixels within the section image data which lie within a contour of the individualized model or of a model element, or which differ from this by at most a specific difference value.
The model parameters are preferably each associated with one hierarchy class. This means that different model parameters may possibly also be associated with the same hierarchy class since they have approximately the same influence on the anatomical overall geometry of the model. All of the model parameters in one specific hierarchy class can then be added for the first time in one specific iteration step in order to be set. The model parameters in the hierarchy class below this are then added in the next iteration step, etc.
A model parameter may be associated with a hierarchy class on the basis of a discrepancy in the model geometry which occurs when the relevant model parameter is changed by a specific value. In this case, in one particularly preferred method, specific areas of discrepancies, for example numerical discrepancy intervals, are associated with different hierarchy classes. Thus, for example, a parameter is varied in order to place this parameter in a hierarchy class, and the resultant discrepancy between the geometrically changed model and the initial state is calculated. The extent of the discrepancy in this case depends on the nature of the norm model used.
One feature is that a precisely defined discrepancy measure should be determined which quantifies as accurately as possible the geometry change in the model before and after variation of the relevant model parameter, in order to ensure realistic comparison of the influence of the various model parameters on the model geometry. For this purpose, a standard step width is preferably used for each parameter type, that is to say for example for distance parameters, for which the distance between two points in the model is varied, or for angle parameters in which an angle between three points in the model is varied, in order to allow the geometry influence to be compared directly. The parameters are then split between the hierarchy classes simply by presetting numerical intervals for this discrepancy measure.
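The assignment of a parameter to a hierarchy class by presetting numerical intervals for the discrepancy measure might, for example, look like the following sketch; the function name and the interval bounds are purely hypothetical:

```python
def hierarchy_class(discrepancy, class_bounds):
    """Assign a model parameter to a hierarchy class from the discrepancy
    its variation (by a standard step width) causes in the model geometry.
    `class_bounds` holds descending lower bounds for the discrepancy
    measure; class 0 contains the most influential parameters."""
    for cls, bound in enumerate(class_bounds):
        if discrepancy >= bound:
            return cls
    return len(class_bounds)

# Hypothetical intervals: >= 100 mm of geometry change -> class 0,
# >= 10 mm -> class 1, anything smaller -> class 2.
bounds = [100.0, 10.0]
```

Because a standard step width is used per parameter type, the discrepancy values of distance and angle parameters become directly comparable before they are binned into classes.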
An uppermost hierarchy class whose model parameters can be set immediately in a first iteration step preferably contains at least those model parameters whose variation globally changes the norm model. These include, for example, the total of nine parameters relating to rotation of the overall model about the three model axes, the translation along the three model axes, and the scaling of the entire model on the three model axes.
The digital anatomical norm models which can be used may in principle be constructed in widely differing ways. One option, for example, is to model anatomical structures on a voxel basis, with specific software being required for editing of such volume data, although this software is generally expensive and is not widely used. Another option is modeling by way of so-called “finite elements”, with a model generally being formed from tetrahedrons. However, specific and expensive software is also required for models such as these.
Simple modeling of anatomical boundary surfaces by triangulation is relatively widely used. The corresponding data structures are supported by a large number of standard programs from the computer graphics field. Models constructed on this principle are referred to as so-called surface-oriented anatomical models. This is the lowest common denominator for the modeling of anatomical structures, since appropriate surface models can be derived not only from the first-mentioned volume models by triangulation of the voxels but also from finite element models by converting the tetrahedrons into triangles.
It is thus possible to use surface-oriented models built on a triangle basis as norm models. First of all, this method allows the models to be produced very easily and at very low cost. Secondly, models which have already been produced in a different form, in particular the volume models which have been mentioned, can be adopted by appropriate transformation, so that there is then no need to create an appropriate model from new.
In order to create such surface models from new, section image scans can, for example, be segmented using classic manual methods, with corresponding effort. The models can then be generated from the information obtained in this way about the individual structures, for example individual organs. In order to obtain human bone models, it is also possible to measure a human skeleton with the aid of laser scanners, or to scan it by means of a CT scanner, segment it and triangulate it.
When using surface models produced on a triangle basis, the discrepancy between the unchanged norm model and the changed norm model after variation of one parameter is preferably calculated on the basis of the sum of the geometric distances between the corresponding triangles in the models in the various states.
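Using triangle centroids as a simple stand-in for the geometric distance between corresponding triangles, this discrepancy could be computed roughly as follows; the function name and the centroid approximation are illustrative assumptions:

```python
import numpy as np

def triangle_distance_sum(model_a, model_b):
    """Discrepancy between two states of a triangle-based surface model:
    the sum, over corresponding triangles, of the distance between their
    centroids.  Both inputs have shape (n_triangles, 3 vertices, 3 coords)."""
    a = np.asarray(model_a, dtype=float)
    b = np.asarray(model_b, dtype=float)
    return float(np.linalg.norm(a.mean(axis=1) - b.mean(axis=1), axis=1).sum())
```

Since the triangles of the unchanged and changed norm model correspond one to one, no surface matching is needed; the sum directly quantifies how strongly the varied parameter deformed the model.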
The hierarchical organization of the individual model parameters may in principle be carried out during the segmentation of the section image data. It is then possible, for example, to first of all check in each iteration step which further model parameters have the greatest influence on the geometry, and then to add these parameters. However, since this is associated with considerable computation complexity, the classification or organization of the model parameters in the hierarchical order is particularly preferably done in advance, for example even while producing the norm model, but at least before the storage of the norm model in a model database or the like, for subsequent selection.
Thus, the model parameters are preferably organized in advance hierarchically on the basis of their influence on the anatomical overall geometry of the model, in an autonomous method for production of norm models, which are then available for use in said segmentation method. During this process, the model parameters can likewise be associated with corresponding hierarchy classes, with a parameter once again being associated with a hierarchy class on the basis of the discrepancy in the model geometry which occurs when the relevant model parameter is changed by a specific value.
This separation of the hierarchical arrangement of the model parameters into a separate method for production of a norm model has the advantage that the calculation of the hierarchical organization of model parameters need be carried out only once for each norm model, thus making it possible to save valuable computation time during the segmentation process. The hierarchical organization can be stored together with the norm model in a relatively simple manner, for example by organizing the parameters in hierarchy classes or by logically linking them with appropriate markers or the like in a file header, or by storing them at another normalized position in the file which also contains the further data for the relevant norm model.
There are various options for determination of the target geometry of the object element to be separated in the section image data. In one preferred method, the target geometry is at least partially determined automatically by way of a contour analysis method. Contour analysis methods such as these operate on the basis of gradients between adjacent pixels. Widely differing contour analysis methods are known to those skilled in the art.
The advantage of contour analysis methods such as these is that the methods can be used not only for CT scanner section image data but also for magnetic resonance section image data and for ultrasound section image data. One relatively good alternative is to use the threshold value method, which has already been described in the introduction and in which, for example, the intensity values of the individual pixels are analyzed to determine whether they exceed a specific threshold value. As has already been mentioned, this latter method is, however, suitable only for determination of the target geometries of the skin surface or for contrast agent examinations in CT and MR scans as well as for CT scans for determination of bone target structures.
In one particularly preferred variant, a current discrepancy value between the geometry of the modified norm model and of the target structure is in each case determined on the basis of a specific discrepancy function during the process of matching the norm model to the target structure. One possible calculation option is to add the squares of the minimum spatial distances between the model triangles and the target structure. This discrepancy value may be used in various ways depending on whether the method is being carried out manually, semi-automatically or fully automatically.
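The calculation option named above (summing the squares of the minimum spatial distances between the model triangles and the target structure) can be sketched as follows; for simplicity, this sketch approximates each triangle by its three vertices rather than computing exact point-to-triangle distances:

```python
import numpy as np

def discrepancy(model_triangles, target_points):
    """Discrepancy function: sum over all model triangles of the squared
    minimum spatial distance to the target structure (given here as a
    point set)."""
    target = np.asarray(target_points, dtype=float)
    total = 0.0
    for tri in np.asarray(model_triangles, dtype=float):
        # minimum distance from any vertex of this triangle to any target point
        d = np.linalg.norm(tri[:, None, :] - target[None, :, :], axis=2)
        total += float(d.min()) ** 2
    return total
```

Minimizing this value over the released model parameters is what drives both the displayed progress indication and the automatic matching described below.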
In the case of a manual method, the individual model parameters may be offered to the user for variation, for example by means of a graphic user interface, in each iteration step on the basis of their hierarchical organization.
In this case, the current discrepancy value is then preferably also indicated, so that the user will see immediately on variation of the relevant model parameter whether and to what extent the geometry discrepancies are reduced by his actions. In particular, in this case, it is also possible to determine discrepancy values individually for each model parameter and to indicate these instead of an overall discrepancy value, or in addition to it.
One typical example of this is to display the target structure and/or the norm model to be matched, or at least parts of these objects, on a graphical user interface at a terminal, in which case the user may, for example, use the keyboard or a pointing device such as a mouse or the like to adapt a specific model parameter, for example the distance between two points on the model. A moving bar or some similar easily identifiable visual means is used to indicate to the user the extent to which the discrepancies are reduced by his actions, in particular displaying on the one hand the total discrepancy of the model and, on the other hand, the discrepancy relating to the adaptation of the specific current model parameter (for example, in the case of the distance between two points in the model, its difference from the distance between the relevant points in the target structure).
In consequence, this method can also be used to achieve satisfactory discrepancy values with manual matching, in a convenient manner and with a relatively small time penalty.
The discrepancy function can preferably also be used in order to carry out the matching process completely automatically or at least partially automatically. In an automatic matching method such as this, the model parameters are likewise changed iteratively on the basis of their hierarchical organization, with the discrepancy function being minimized overall.
Automatic matching may in this case be carried out completely in the background, so that the operator can carry out other work and, in particular, can process other image data in parallel, or can control further measurements, on a console of the image processing system which is carrying out the segmentation process. In this case, it is possible for the process to be displayed permanently, for example on a screen or a part of a screen, while the method is being carried out automatically, so that the user can monitor the progress of the matching process. In this case, a current value of the discrepancy function, possibly also only for the parameter which is currently being varied, is preferably once again displayed to the user. In particular, it is also possible to indicate the discrepancy values on the screen permanently, for example in a task bar or the like, while the rest of the user interface is free for other work by the user.
In one very particularly preferred exemplary embodiment, the model parameters are in each case logically linked to a position of at least one anatomical landmark of the model such that the model has an anatomically sensible geometry for each parameter set. Typical examples of this are, on the one hand, global parameters such as rotation or translation of the overall model, in which the positions of all of the anatomical landmarks are changed appropriately with respect to one another. Other model parameters are, for example, the distance between two anatomical landmarks or an angle between three anatomical landmarks, for example in order to determine a knee position.
Such coupling of the model parameters to medically sensibly selected anatomical landmarks has the advantage that a diagnostic statement can always be made after the individualization process. Furthermore, the positions of such anatomical landmarks are described exactly in the anatomical specialist literature. A procedure such as this therefore makes it easier to carry out the segmentation process, since a medically trained user, for example a doctor or an MTA, is familiar with the anatomical landmarks, and these essentially determine the anatomy.
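Distance and angle parameters coupled to anatomical landmarks, such as those just mentioned, can be expressed as follows; the function names are illustrative:

```python
import numpy as np

def landmark_distance(l1, l2):
    """Distance parameter between two anatomical landmarks, e.g. the
    distantia spinarum d12 between the landmarks L1 and L2."""
    return float(np.linalg.norm(np.asarray(l1, float) - np.asarray(l2, float)))

def landmark_angle(l1, l2, l3):
    """Angle parameter in degrees at landmark l2, spanned by l1 and l3,
    e.g. to describe a knee position."""
    u = np.asarray(l1, float) - np.asarray(l2, float)
    v = np.asarray(l3, float) - np.asarray(l2, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Because each parameter is defined directly on landmark positions described exactly in the anatomical literature, a medically trained user can interpret every parameter value diagnostically.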
By way of example, the human pelvis can be described in a known manner by the following variables:
The selection unit, the adaptation unit and the separation unit of the image processing system may particularly preferably be in the form of software on a processor of a suitable image computer. This image computer should have an appropriate interface for reception of image data and a suitable memory device for anatomical norm models. In this case, this memory device need not necessarily be an integrated part of the image computer, and it is sufficient for the image computer to be able to access it as an external memory device. An implementation of the method according to the invention in the form of software has the advantage that existing image processing systems can also be retrofitted appropriately and in a relatively simple manner, by suitable updates. The image processing system according to the invention may, in particular, also be a drive unit for the modality which records the section image data itself, and has the necessary components for processing of the section image data according to the invention.
A separate method according to an embodiment of the invention, which is carried out before the segmentation process, in order to produce a norm model in which the model parameters are organized hierarchically on the basis of their influence on the anatomical overall geometry of the model, may likewise also be in the form of suitable software on a computer. In particular, in this case, it is also possible for the image processing system which carries out the segmentation process on the image data to be used for production of the norm models. By way of example, free computation capacities could be used at specific times at which the image processing system is loaded only lightly by current tasks in order to produce norm models with hierarchically organized model parameters, and to store them in a database for subsequent use.
The invention will be explained in more detail in the following text using exemplary embodiments and with reference to the attached drawings, in which:
The exemplary embodiment of an image processing system 1 according to the invention as illustrated in
In the exemplary embodiment illustrated in
The modality 2 is normally driven via the control device 3, which also acquires the data from the modality 2. The control device 3 may have its own console or the like for operation in situ, although this is not illustrated here. However, it is also possible for it to be controlled via the bus by way of a separate workstation which is located in the vicinity of the modality.
First of all, the section image data to be evaluated is defined in a first method step I. The section image data D may, for example, be supplied directly from the modality 2 or its control device 3 via the bus 4 to the image computer 10. However, this may also be section image data D which has already been recorded at an earlier time, and has been stored in a bulk memory 9.
A norm model M for the section image data D to be segmented is then selected for segmentation in a step II. Thus, an anatomical norm model is selected in accordance with the question, or task, on which the segmentation process is based. For example, a skull model is selected for segmentation of section image scans of a skull, a pelvis model is selected for segmentation of section image scans of a pelvis bone structure, and a knee model is selected for segmentation of section image scans of a knee. For this purpose, the image computer 10 has a memory 12 with widely differing norm models for different possible anatomical structures. These may be models which comprise a number of object elements.
One example of a norm model M which comprises a number of individual object elements is the pelvis norm model that is illustrated in
In a further step III, which may also be carried out in parallel with or before the method step II for model selection, target structures Z are defined within the section image data D. This may be done fully automatically, semi-automatically or completely manually, for example with the aid of contour analysis, as has already been mentioned. A threshold value method may also be used for certain structures and certain scanning methods, as has already been described further above.
In this case, the model M is selected automatically by way of a selection unit 14 on the basis of the segmentation task, which can, by way of example, be predetermined manually via the console 5 at the start of the segmentation method, and on the basis of the determination of a target structure Z by means of a target structure determination unit 17, which is in this case in the form of software in the processor 11 in the image computer 10.
As an example,
The individual model parameters are varied for this purpose in an adaptation process in a number of iteration steps S until, in the end, all of the parameters have been individualized, that is to say they have been set such that they match, or the individualization is sufficient, that is to say the discrepancy between the norm model M and the target structure Z is a minimum or is below a predetermined threshold value. Each iteration step S in this case comprises a number of process steps IV, V, VI, VII, which are carried out in the form of a loop.
The loop, or the first iteration step S, starts with the method step IV, in which the optimum parameters for translation, rotation and scaling are determined first of all. These are the parameters in the uppermost (referred to in the following text as the “0-th”) hierarchy class, since these parameters affect the overall geometry. The three translation parameters tx, ty, tz and the three rotation parameters rx, ry, rz are shown schematically around the three spatial axes in
Once this matching process has been carried out as far as possible, model parameters which have not yet been set are estimated from already determined parameters in a further step V. Thus, initial values for lower-level parameters are estimated from the settings of higher-level parameters. One example of this is the estimation of the knee width from the settings for a scaling parameter for the body height. This value is predetermined as the initial value for the subsequent setting of the relevant parameter. This makes it possible to speed up the method considerably. The relevant parameters are then set optimally in the method step VI.
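The estimation of an initial value for a lower-level parameter from an already determined higher-level parameter might, in the simplest case, be a scaling relation of the following kind; the linear relation and the norm value of 110 mm are purely hypothetical:

```python
def estimate_knee_width(body_height_scale, norm_knee_width=110.0):
    """Initial value for a lower-level parameter (knee width, in mm)
    estimated from an already determined higher-level parameter (the
    body-height scaling factor of the norm model)."""
    return norm_knee_width * body_height_scale
```

Starting the subsequent optimization of the knee-width parameter from this estimate, rather than from the unscaled norm value, is what speeds up the method in step V.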
According to an embodiment of the invention, the parameters are organized hierarchically on the basis of their influence on the anatomical overall geometry of the model. The greater the geometric effect of a parameter, the higher it is in the hierarchy. As the number of iteration steps S increases, the number of model parameters which can be set is increased at the same time, corresponding to the hierarchical organization.
Thus, in the first iteration step S, or within the first run through the loop, only the parameters in the 1st hierarchy class below the 0-th hierarchy class are used for setting the model in the step VI. In the second run, it is then possible to once again first of all subject the model to translation, rotation and scaling in the method step IV. Those model parameters in the 2nd hierarchy class which have not yet been determined are then estimated from already determined parameters in the method step V, and are then added for setting in step VI. This method is then repeated n-times, with all of the parameters in the n-th stage being optimized in the n-th iteration step, and the final step VII in the iteration step S once again being used to determine whether any further parameters are still available which have not yet been optimized.
A new (n+1)-th iteration step is then started once again, with the model first of all being moved, rotated or scaled appropriately once again, and, finally, it is possible to set all of the parameters again in series, with the parameters in the (n+1)-th class now also being available. Another check is then carried out in the method step VII to determine whether all of the parameters have been individualized, that is to say whether there are still any parameters which have not yet been optimized, or whether the desired matching has already been achieved.
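The overall iteration scheme described above (releasing one further hierarchy class per iteration step S and re-optimizing all released parameters) can be caricatured by the following toy sketch, in which "optimizing" a parameter simply moves it halfway toward its target value, standing in for a real numerical optimizer; all names and values are illustrative:

```python
def individualize(classes, target, sweeps=3):
    """Toy sketch of the iterative individualization loop.  `classes` lists
    parameter names per hierarchy class, from the 0-th (global) class
    downwards; `target` maps each parameter to the value that would best
    match the target structure.  Each iteration step S releases one more
    hierarchy class and re-optimizes every released parameter (steps IV
    and VI); here 'optimizing' moves a parameter halfway to its target."""
    params = {name: 0.0 for cls in classes for name in cls}
    for s in range(1, len(classes)):            # iteration steps S
        released = [n for cls in classes[:s + 1] for n in cls]
        for _ in range(sweeps):                 # inner optimization sweeps
            for name in released:
                params[name] += 0.5 * (target[name] - params[name])
    return params
```

Note how parameters released early (here "tx") end up closest to their targets, reflecting that the most influential parameters receive the most optimization passes.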
The iteration method described above ensures that matching is carried out as effectively and in as time-saving a manner as possible. In this case, both the target structure Z and the associated model M, as well as the currently calculated discrepancy values and the currently calculated value of a discrepancy function, can be displayed on the screen 6 of the console 5 at any time during the matching process. Further, the discrepancies can also be visualized, as illustrated in
The lower-level hierarchy classes are obtained from a quantitative analysis of each parameter's influence on the geometry. To do this, each parameter is changed in turn, and the resultant discrepancy between the geometrically changed model and its initial state is calculated. When triangle-based surface models are used, this discrepancy may be quantified, for example, by the sum of the geometric distances between corresponding model triangles, as illustrated in the figures.
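For two surface models that share the same mesh topology, the sum of distances between corresponding triangles can be approximated very simply, for example via the triangle centroids. The following sketch uses that centroid-based proxy (the patent does not fix a particular per-triangle distance measure, so the exact metric here is an assumption):

```python
import numpy as np

def triangle_discrepancy(verts_a, verts_b, faces):
    """Sum of distances between corresponding triangle centroids of two
    surface models sharing the same face list `faces` -- a simple proxy
    for the per-triangle geometric distances described above."""
    cent_a = verts_a[faces].mean(axis=1)   # (n_faces, 3) centroids of model A
    cent_b = verts_b[faces].mean(axis=1)   # centroids of the changed model B
    return float(np.sum(np.linalg.norm(cent_a - cent_b, axis=1)))

# Two copies of one tetrahedron, the second translated by 1 along x:
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
v = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
d = triangle_discrepancy(v, v + np.array([1.0, 0, 0]), faces)
```

Every centroid moves by exactly 1 here, so the four faces contribute a total discrepancy of 4.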
As already mentioned, this method preferably makes use of model parameters which are directly linked to the positions of one or more specific anatomical markers in the model. On the one hand, this has the advantage that only medically sensible transformations of the model are carried out. On the other hand, the medically trained user will generally know these anatomical landmarks and can thus work quite well with these parameters. One example of such a parameter is the distance d12 shown in the figures.
In the present implementation of the method according to an embodiment of the invention, the user can, for example, use a mouse pointer to select one of the anatomical landmarks L1, L2 on the spinae iliacae anteriores superiores and change its position interactively. In this way, he can vary the distantia spinarum, that is to say the length of the distance d12, and can thus vary the entire pelvis geometry.
In the same way, the user can also, for example, change the distance d34 between the anatomical marker L3 on the top of the sacrum T3 and the anatomical marker L4 on the symphysis pubica T4. Since the two parameters d12, d34 have approximately the same influence on the overall geometry of the norm model M of the pelvis, these parameters are in this case arranged in the same hierarchy class and can be changed by the user within the same iteration step S, or are varied automatically in this iteration step S.
Another typical example, in which the distances between two landmark pairs are organized in different classes, can be explained with reference to the anatomical markers on a skull model. A front view of a skull model with different anatomical landmarks is shown in the figures.
In this case, the geometric effect of the first parameter, which indicates the distance between the orbitals, is greater than the geometric effect of the second parameter, which indicates the distance between the processi styloidei. This can be examined by changing each parameter by one millimeter and observing the resulting variation in the model geometry. Since the processi styloidei are relatively small structures, the geometric change to the model is restricted to a small area around these bony projections.
In contrast, the orbital sockets are considerably larger structures. If the distance between the orbitals is changed, a much larger portion of the model changes its geometry, leading to a greater discrepancy. The parameter represented by the distance between the orbitals is thus arranged in a considerably higher hierarchy class than the distance between the processi styloidei since, in principle, parameters with a greater geometric range are placed at a higher level in the parameter hierarchy than parameters with a more local effect.
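The ranking of a globally acting parameter above a locally acting one can be sketched numerically. The following toy uses a point sampling of a unit sphere as a stand-in for the skull surface, and the names "inter_orbital" and "processus" are merely illustrative labels echoing the example above; the perturbation functions and the summed point displacement (as a proxy for summed triangle distances) are assumptions of this sketch:

```python
import numpy as np

def influence(vertices, perturb, delta=1.0):
    """Geometric effect of changing one parameter by `delta`: summed
    displacement of the surface points between the perturbed and the
    unperturbed model."""
    return float(np.sum(np.linalg.norm(perturb(vertices, delta) - vertices, axis=1)))

# Toy "skull" surface: 100 deterministic points on a unit sphere.
u = np.linspace(0.1, np.pi - 0.1, 10)
v = np.linspace(0.0, 2 * np.pi, 10, endpoint=False)
uu, vv = np.meshgrid(u, v)
pts = np.stack([np.sin(uu) * np.cos(vv), np.sin(uu) * np.sin(vv), np.cos(uu)],
               axis=-1).reshape(-1, 3)

# Global parameter: widening shifts every surface point outward a little.
widen = lambda p, d: p * (1 + 0.05 * d)

# Local parameter: only the 3 lowest points (a small bony projection) move.
low = np.argsort(pts[:, 2])[:3]
def bump(p, d):
    q = p.copy()
    q[low, 2] -= 0.2 * d
    return q

# Order the parameters by decreasing geometric influence.
ranked = sorted([("inter_orbital", widen), ("processus", bump)],
                key=lambda kv: -influence(pts, kv[1]))
```

The global widening touches all 100 points and therefore outweighs the local bump, so it lands in the higher hierarchy class.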
When a model parameter d12, d34 which, as described here, represents a distance between two anatomical landmarks L1, L2, L3, L4 in the norm model M is varied, the geometry of the norm model M is preferably deformed in an area along a straight line between the anatomical landmarks in proportion to the change in the distance. This is illustrated for the distance d12 in the figures.
In the event of variation of a model parameter which includes the change in the position of a first anatomical landmark, in this case as an example the landmark L1 of the norm model M, relative to adjacent landmarks, for example in this case the landmarks L3, L4, the geometry of the norm model M is preferably deformed in an appropriate manner in an area U surrounding the relevant first anatomical landmark L1, in the direction of the relevant adjacent landmarks L3, L4. In this case, the deformation advantageously decreases as the distance a from the relevant first anatomical landmark L1 increases. Thus, the deformation in the close area around the landmark L1 is greater than in areas which are further away from it, in order to achieve the effect illustrated in the figures.
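A deformation of this kind, which follows a moved landmark with a weight that decreases with the distance a, can be sketched as follows. The linear falloff and the cut-off radius are illustrative choices (the patent only requires that the deformation decreases with increasing distance, not a specific weighting function):

```python
import numpy as np

def move_landmark(vertices, landmark, shift, radius=2.0):
    """Deform the model around a moved landmark: each vertex follows the
    landmark's shift, weighted so that the deformation falls off to zero
    as the distance a from the landmark grows (linear falloff here)."""
    a = np.linalg.norm(vertices - landmark, axis=1)     # distance a to the landmark
    w = np.clip(1.0 - a / radius, 0.0, 1.0)[:, None]    # 1 at the landmark, 0 beyond radius
    return vertices + w * np.asarray(shift, dtype=float)

verts = np.array([[0.0, 0, 0], [1, 0, 0], [3, 0, 0]])
moved = move_landmark(verts, landmark=np.array([0.0, 0, 0]), shift=[0, 1, 0])
```

The vertex at the landmark follows the shift fully, the vertex at distance 1 moves by half, and the vertex outside the radius stays put, giving the localized deformation described above.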
When, finally, sufficient matching has been achieved, the actual segmentation process is carried out in method step VIII. This is done in a separation unit 16, which is likewise in the form of a software module within the processor 11. In this case, all the pixels of the section image data that are located within a contour of the model, or of a part of interest thereof, are selected. For this purpose, by way of example, the relevant pixels are extracted from the image data, or all the other data items are deleted, so that only the desired pixels remain. The separated element can then be processed further as required.
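For a single section image, this selection step amounts to masking: every pixel inside the matched model contour is kept and everything else is replaced by a background value. The sketch below assumes the contour has already been rasterized into a boolean mask (how that mask is produced from the model is not detailed here):

```python
import numpy as np

def extract_segment(slice_data, mask, background=0):
    """Step VIII in miniature: keep only the pixels inside the matched
    model contour (`mask`); all other data items are replaced by a
    background value, so the separated structure remains on its own."""
    out = np.full_like(slice_data, background)
    out[mask] = slice_data[mask]
    return out

img = np.arange(16).reshape(4, 4)        # toy section image
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                    # toy "inside the contour" region
seg = extract_segment(img, mask)
```

The separated pixel block can then be handed on to any further processing step, independently of the rest of the slice.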
At this point, it should once again be stated expressly that the system architectures and processes illustrated in the figures are only exemplary embodiments, whose details may be varied by those skilled in the art without any problems. In particular, the control device 3 (provided that, for example, it has an appropriate console) may also have all the corresponding components of the image computer 10 in order to carry out the image processing in accordance with the method according to the invention directly there.
In this case, in consequence, the control device 3 itself forms the image processing system according to an embodiment of the invention, and there is no need for a further workstation or a separate image computer. Incidentally, it is not absolutely essential for the various components of an image processing system according to the invention to be implemented in a single processor or image computer or the like; rather, the various components may also be distributed between a number of processors or between computers which are networked with one another.
Incidentally, it is possible to retrofit existing image processing systems (in which already known post-processing processes are implemented) with a process control unit according to the invention in order additionally to use these systems in accordance with the method according to the invention as described above. In many cases, it may also be sufficient to update the control software with suitable control software modules.
Exemplary embodiments being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the present invention, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US6002782||Nov 12, 1997||Dec 14, 1999||Unisys Corporation||System and method for recognizing a 3-D object by generating a 2-D image of the object from a transformed 3-D model|
|US6028907 *||May 15, 1998||Feb 22, 2000||International Business Machines Corporation||System and method for three-dimensional geometric modeling by extracting and merging two-dimensional contours from CT slice data and CT scout data|
|US6195409 *||Mar 19, 1999||Feb 27, 2001||Harbor-Ucla Research And Education Institute||Automatic scan prescription for tomographic imaging|
|US7242793 *||Apr 23, 2001||Jul 10, 2007||Washington University||Physically-based, probabilistic model for ultrasonic images incorporating shape, microstructure and system characteristics|
|US20010026272||Feb 26, 2001||Oct 4, 2001||Avihay Feld||System and method for simulation of virtual wear articles on virtual models|
|US20010033283 *||Feb 6, 2001||Oct 25, 2001||Cheng-Chung Liang||System for interactive 3D object extraction from slice-based medical images|
|US20030020714 *||Mar 5, 2002||Jan 30, 2003||Michael Kaus||Method of segmenting a three-dimensional structure contained in an object, notably for medical image analysis|
|US20030160786 *||Feb 28, 2003||Aug 28, 2003||Johnson Richard K.||Automatic determination of borders of body structures|
|WO2001078005A2 *||Apr 10, 2001||Oct 18, 2001||Cornell Research Foundation, Inc.||System and method for three-dimensional image rendering and analysis|
|WO2004075112A1 *||Feb 9, 2004||Sep 2, 2004||Philips Intellectual Property & Standards Gmbh||Image segmentation by assigning classes to adaptive mesh primitives|
|1||*||Ginneken, Bram van et al., "Active Shape Model Segmentation With Optimal Features", Aug. 2002, IEEE Transactions on Medical Imaging, vol. 21, No. 8, pp. 924-933.|
|2||*||Shen, D. et al., "HAMMER: Hierarchical Attribute Matching Mechanism for Elastic Registration", Nov. 2002, IEEE Transactions on Medical Imaging, vol. 21, No. 11, pp. 1421-1439.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7970190 *||Aug 31, 2007||Jun 28, 2011||Brainlab Ag||Method and device for determining the location of pelvic planes|
|US8103067 *||Jun 21, 2007||Jan 24, 2012||Siemens Aktiengesellschaft||Analysis method for image data records including automatic specification of analysis regions|
|US9117255 *||Apr 15, 2013||Aug 25, 2015||Anatomage Inc.||Patient-specific three-dimensional dentition model|
|US9224204 *||Mar 13, 2014||Dec 29, 2015||Siemens Medical Solutions Usa, Inc.||Method and apparatus for registration of multimodal imaging data using constraints|
|US9283061||Dec 12, 2011||Mar 15, 2016||Straumann Holding Ag||Method and analysis system for the geometrical analysis of scan data from oral structures|
|US20060251311 *||Jul 22, 2003||Nov 9, 2006||Humanitas Mirasole S.P.A.||Method and apparatus for analyzing biological tissue images|
|US20070297678 *||Jun 21, 2007||Dec 27, 2007||Siemens Aktiengesellschaft||Analysis method for image data records including automatic specification of analysis regions|
|US20080056433 *||Aug 31, 2007||Mar 6, 2008||Wolfgang Steinle||Method and device for determining the location of pelvic planes|
|US20130223718 *||Apr 15, 2013||Aug 29, 2013||Anatomage Inc.||Patient-Specific Three-Dimensional Dentition model|
|US20140270446 *||Mar 13, 2014||Sep 18, 2014||Siemens Medical Solutions Usa, Inc.||Method and Apparatus for Registration of Multimodal Imaging Data Using Constraints|
|US20160317806 *||Jun 17, 2016||Nov 3, 2016||Medtronic, Inc.||Implantable medical device|
|U.S. Classification||382/128, 382/154, 382/228|
|International Classification||G06T15/08, A61B8/00, G06T17/00, G06T5/00, A61B5/055, G06K9/34, A61B6/03, G06K9/00|
|Cooperative Classification||G06K9/00201, G06T2207/30008, G06T7/12, G06T2207/10081, G06T7/149|
|European Classification||G06K9/00D, G06T7/00S5, G06T7/00S2|
|Jan 21, 2005||AS||Assignment|
Owner name: SIEMENS AKTIENGESELLSCHAFT, GERMANY
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TANK, MARTIN;REEL/FRAME:016169/0456
Effective date: 20050102
|Jul 6, 2012||FPAY||Fee payment|
Year of fee payment: 4
|Jun 28, 2016||AS||Assignment|
Owner name: SIEMENS HEALTHCARE GMBH, GERMANY
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SIEMENS AKTIENGESELLSCHAFT;REEL/FRAME:039271/0561
Effective date: 20160610
|Jul 20, 2016||FPAY||Fee payment|
Year of fee payment: 8