WO2000022576A1 - Method for processing volumetric image data of an object visualized by texture mapping for adding shadows - Google Patents

Method for processing volumetric image data of an object visualized by texture mapping for adding shadows

Info

Publication number
WO2000022576A1
WO2000022576A1 (also published as WO0022576A1); application PCT/EP1999/007666 (EP9907666W)
Authority
WO
WIPO (PCT)
Prior art keywords
volume
shadow
values
data
image
Prior art date
Application number
PCT/EP1999/007666
Other languages
French (fr)
Inventor
Uwe Behrens
Ralf Ratering
Original Assignee
GMD - Forschungszentrum Informationstechnik GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US09/807,327 priority Critical patent/US6771263B1/en
Application filed by GMD - Forschungszentrum Informationstechnik GmbH
Priority to DE69901572T priority patent/DE69901572T2/en
Priority to EP99950711A priority patent/EP1121664B1/en
Publication of WO2000022576A1 publication Critical patent/WO2000022576A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/60Shadow generation

Abstract

Texture-based volume rendering is a technique to efficiently visualize volumetric data using texture mapping hardware. In this paper we present a method that extends this approach to render shadows for the volume. The method takes advantage of the fast framebuffer operations modern graphics hardware offers, but does not depend on any special-purpose hardware. The visual impression of the final image is significantly improved by bringing more structure and three-dimensional information into the often foggyish appearance of texture-based volume renderings. Although the method does not perform lighting calculations, the resulting image has a shaded appearance, which is a further visual cue to spatial understanding of the data and lets the images appear more realistic. As calculating the shadows is independent of the visualization process, it can be applied to any form of volume visualization, though volume rendering based on two- or three-dimensional texture mapping hardware makes the most sense. Compared to unshadowed texture-based volume rendering, performance decreases by less than 50 %, which is still sufficient to guarantee interactive manipulation of the volume data. In the special case where only the camera moves while the light position is fixed to the scene, there is no performance decrease at all, because recalculation is only needed when the position of the light source with respect to the volume changes.

Description

METHOD FOR PROCESSING VOLUMETRIC IMAGE DATA OF AN OBJECT VISUALIZED BY TEXTURE MAPPING FOR ADDING SHADOWS
INTRODUCTION
Texture-based volume rendering is a technique to efficiently visualize volumetric data using texture mapping hardware. The present invention relates to a method that extends this approach to render shadows for the volume. The method takes advantage of the fast framebuffer operations modern graphics hardware offers, but does not depend on any special-purpose hardware.
The visual impression of the final image is significantly improved by bringing more structure and three-dimensional information into the often foggyish appearance of texture-based volume renderings. Although the method does not perform lighting calculations, the resulting image has a shaded appearance, which is a further visual cue to spatial understanding of the data and lets the images appear more realistic.
As calculating the shadows is independent of the visualization process, it can be applied to any form of volume visualization, though volume rendering based on two- or three-dimensional texture mapping hardware makes the most sense. Compared to unshadowed texture-based volume rendering, performance decreases by less than 50 %, which is still sufficient to guarantee interactive manipulation of the volume data. In the special case where only the camera moves while the light position is fixed to the scene, there is no performance decrease at all, because recalculation is only needed when the position of the light source with respect to the volume changes.
In volume rendering, the volume data is typically treated as a semitransparent cloud of voxels emitting diffuse light. The task is to find, for each pixel, the total intensity of all voxels that contribute to it. The most straightforward method casts a ray through each image pixel and integrates the intensities of all voxels pierced by the ray [7]. Care must be taken to weight the contribution of voxels further away by the accumulated opacity of the voxels in front.
Otherwise, all voxels of the same luminance would have the same brightness on screen, and it would be practically impossible to distinguish front from back. Unfortunately, the ray-casting approach is computationally expensive and usually does not allow interactive manipulation of the data. To overcome this limitation, alternative methods have been developed that achieve comparable results in less time [5,6], of which texture-based volume rendering [2] promises the highest interactive frame rates.
2 VOLUME RENDERING WITH TEXTURES
The use of three-dimensional textures to perform volume rendering was described by Cabral et al. [2]. The method differs from so-called image-based volume rendering methods [5,6,7], which calculate the final image pixel by pixel, in its object-based approach and its extensive use of image composition and texturing hardware. Whereas image-based methods integrate luminance and opacity over a ray shot through the volume of interest in software, texture-based methods render the data as a stack of textured parallel slices, from back to front, leaving the integration of luminance and opacity to an image composition step that can efficiently be performed in hardware [2]. This speed advantage is impaired by the inferior quality of the created images: important secondary visual cues like illumination and shadows were formerly hard to integrate at interactive frame rates, if at all.
This is unfortunate, especially in applications like medical visualization, where a thorough understanding of the spatial relationships is crucial. In the context of a neurosurgical operation planning system, we observed that the image quality of texture-based volume rendering often does not sufficiently communicate the orientation of the data-set. While this can partly be overcome as long as the object is moved, the impression vanishes as soon as a still image is examined in closer detail. We therefore searched for methods that add visual cues to the image without spoiling interactivity.
2.1 Cabral's Basic Method
The basic method for rendering volume data with texture mapping works as follows: First, the three-dimensional data-set is loaded into the texture buffer of the graphics subsystem as a three-dimensional texture block. An image is generated by rendering a stack of parallel planes, perpendicular to the viewing direction, from back to front, with each plane being textured with an accordingly oriented slice from the texture block. Each new texture plane is then blended with the contents of the framebuffer using a suitable blending function. The back-to-front rendering ensures a proper depth relation between the slices, while the purpose of the blending is to ensure that voxels farther away appear with an intensity scaled by the transparency of the voxels in front.
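For illustration only, the inner loop of this basic method might be sketched in C++ with OpenGL as follows. This is a minimal sketch of our own, not code from [2]: it assumes a 3D texture is already loaded and bound, that the slice quads span the view, and that numSlices, zNear and zFar are supplied by the application; GL_TEXTURE_3D was available as the EXT_texture3D extension on OpenGL 1.1 hardware.

    #include <GL/gl.h>

    // Minimal sketch of the basic slicing loop (illustrative only).
    void renderVolumeSlices(int numSlices, float zNear, float zFar) {
        glEnable(GL_TEXTURE_3D);                            // EXT_texture3D on 1.1
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // "over" compositing
        for (int i = 0; i < numSlices; ++i) {               // back to front
            float t = (i + 0.5f) / numSlices;               // depth in texture space
            float z = zFar + (zNear - zFar) * t;            // eye-space slice depth
            glBegin(GL_QUADS);                              // one view-aligned quad
            glTexCoord3f(0, 0, t); glVertex3f(-1, -1, z);
            glTexCoord3f(1, 0, t); glVertex3f( 1, -1, z);
            glTexCoord3f(1, 1, t); glVertex3f( 1,  1, z);
            glTexCoord3f(0, 1, t); glVertex3f(-1,  1, z);
            glEnd();
        }
    }

In a real renderer the texture coordinates would additionally be transformed with the volume's orientation, so that the slices remain perpendicular to the viewing direction.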
2.2 Extensions
A number of extensions to the basic method have been proposed. Variations of the blending operation allow different properties to be extracted from the data. Examples are the common "over" operator [10], which implements attenuated integration similar to image-based volume rendering, or a maximum intensity projection (MIP) operator [8], which can be used to visualize contrast media in medical imaging. Van Gelder and Kim [14] extended the method to incorporate diffuse and specular illumination [9]. In a preprocessing step their method finds the gradients at each voxel. When rendering the volume, the gradient serves to calculate the total light intensity of each voxel. These results are then saved as a new 3D texture that can be rendered. Although this method gives impressive results in terms of additional realism and improved spatial comprehension, it decreases rendering performance on the order of a factor of 10 [14].
These modifications are motivated by the need to improve the visualization with respect to selected aspects of the data. The MIP variation of the composition operator is used to focus attention on the area of strongest contrast enhancement, while Van Gelder and Kim try to overcome the foggyish, dull appearance of the original method and emphasize the underlying isoluminance surfaces and thus the spatial context.
2.3 Introducing Shadows
Shadows are an important secondary visual cue for orientation in a volume. The superior visual impression of volume data rendered by raytracing techniques [4,7] is caused by the detailed modeling of illumination, including shadows. In the context of a neurosurgical operation planning system [1], we found that shadows can significantly improve the spatial understanding of volume data if rendered at interactive frame rates.
The central problem to be solved here is how to decide whether a voxel is shadowed by another voxel. A voxel is in shadow with respect to a light source if any other non-transparent voxel lies on the line of sight between itself and the light source. Thus, to decide whether a voxel is in shadow, a ray is traced to the light source, and the opacity of all intersected voxels is integrated. The amount of light reaching the final voxel is inversely proportional to the accumulated opacity. This technique is known as "shooting shadow rays" in raytracing. A similar, preprocessing version is at the core of Reeves' shadow map method [11]. In [13] Sobierajski presented an illumination model for volume rendering taking into account shadows, scattering and other subtle effects, which was successfully used in a state-of-the-art system. Earlier, Rushmeier [12] developed a similar model, although her approach was based on radiative heat transfer [3].
In [17] a method for generating the shadow of an arbitrarily shaped three-dimensional object is disclosed, wherein the object is totally opaque. Moreover, [18] discloses a method for generating shadows and lighting effects using texture mapping, wherein again the objects displayed are opaque.
3 OBJECT / SOLUTION
3.1 Object
It is the object of the present invention to provide a comparatively simple method to accurately render shadowed semitransparent volumes with only a moderate increase in rendering time. The method is fast, easy to implement, and can be extended to multiple light sources.
3.2 Summary of the invention
According to the invention this object is solved by a method for processing volumetric image data of a semitransparent object generating a shadow due to light of a light source impinging onto the object in a direction defined by a light vector, wherein each volumetric image data represents a value of transparency and a value of brightness and wherein the method comprises the steps of:
(a) dividing the volumetric image data into several image slice data representing individual layers of the object arranged sequentially behind each other,
(b) on the basis of the light vector and the values of transparency of the image slice data of the object layer being closest to the light source, generating a shadow value for each image slice data of the object layer being closest to the light source and storing these shadow values,
(c) overlaying the stored shadow values generated in step (b) and the brightness values of the image slice data of the object layer second most close to the light source according to the light vector and the distance between these two adjacent object layers,
(d) on the basis of the light vector and the values of transparency of the image slice data of the object layer second most close to the light source, generating shadow values for each image slice data of the second most object layer and adding these values to the stored shadow values,
(e) overlaying the stored shadow values generated in step (d) and the brightness values of the image slice data of the object layer third most close to the light source according to the light vector and the distance between these two adjacent object layers,
(f) performing steps (d) and (e) for each object layer which is next in the direction away from the light source and
(g) volume rendering the brightness values of the image slice data of the object layer closest to the light source and the overlaid shadow and brightness values of the image slice data of all the other object layers to display the semitransparent object including the shadow generated by the semitransparent object.
Preferably, overlaying comprises a step of blending the brightness and shadow values according to a blending function. According to the invention, the shadows cast by the individual volumetric image data (voxels) are calculated successively from layer to layer of the object. In order to render a volume, the volumetric image data are divided into several groups of image slice data, wherein the groups represent individual layers of the object arranged sequentially behind each other. According to the invention, the accumulated shadow generated by the individual image slice data of the layers is calculated. Each volumetric image data represents both a value of transparency and a value of brightness. Preferably, a brightness value is associated with each volumetric image data, wherein the transparency value can be regarded as the inverse of the brightness value.
After the shadow values of each image slice data of a respective layer have been calculated, these shadow values are combined with the brightness values of the image slice data of the object layer that is next in the direction of the light vector. These overlaid shadow and brightness values for each object layer are stored. After the image slice data of all the layers have been processed, i.e. the shadows they cast onto adjacent layers have been calculated, the stored overlaid shadow and brightness values are used for rendering the shadowed volume/object. In this rendering process, for showing the image slice data of the layer closest to the light source, the brightness values of its image slice data are used, since no shadow is cast onto this layer.
When overlaying the generated shadow values of the image slice data of a layer and the brightness values of the image slice data of the layer next in the direction of the light vector, the orientation of the light vector and, in particular, the angle between the direction of the light vector and the viewing direction, as well as the values of transparency of the image slice data, are taken into consideration. The location at which a shadow resulting from the transparency value of an image slice data hits the layer next in the direction of the light vector is calculated on the basis of the orientation of the light vector and the distance between two adjacent object layers. This distance is the distance between the centers of two voxels of two adjacent layers.
The advantage of the method according to the invention is that it is based on well-known texture-based volume rendering, which according to the invention is applied to a semitransparent object/volume including shadows. As the individual parts of the overall shadow generated by the object are calculated on a layer-to-layer basis, the result is again layer-oriented image slice data which can be rendered using texture mapping.
The invention will be described in more detail referring to a preferred embodiment as shown in the drawing.
Fig. 1 A schematic overview of the method according to the invention, taking an unshadowed volume V as input and producing a shadowed volume V* as output by doing framebuffer operations. Buffer 1 and buffer 2 are reserved pixel areas inside the framebuffer where the operations take place. In buffer 2 all shadows of slices p1,...,pi-1 are accumulated, while in buffer 1 this accumulated shadow si is applied to the actual slice pi. The resulting shadowed slice pi* is then transferred to the shadowed volume.
Fig. 2 This shows a simple trick: To get the shadow of p1, p2 is drawn with the same texture, but with a special blending function and an offset into the framebuffer. Then p1 is blended with the Over-operator into the framebuffer, and so the impression of a shadowed image is produced. In the example one can see a CT image with bones classified as being opaque and white and the surrounding tissue being grey and semitransparent. Note how the opaque bones cast a stronger shadow onto the background than the semitransparent tissue.
Fig. 3 Polygon p1 textured with a volume slice casting the shadow onto background polygon p2. The tangent of the angle φ in the triangle defines the shift ds for the shadow offset.
Fig. 4 A light source L shining from the top right onto the volume. The shadows, the dotted lines, are propagated through the volume and accumulate the voxels' α-values. The voxels in the resulting volume V* are then darkened according to their own transparency and to that of the shadows that fell onto them. Note how the semitransparent voxels in p4 are less darkened than the equally colored opaque voxels. Also note how in p6 the semitransparent voxels of p4 cast a lighter shadow than the opaque ones, and how the shadow from p2 is propagated to p6.
Fig. 5 Overview of the implemented method. We had to add a buffer 3 to hold a copy of the accumulated shadow si that can be mixed with pi's α in step 2.
Fig. 6 Texture-based volume rendering of a shadowed and an unshadowed CT data-set. The right image looks more realistic, which results from some kind of "fake illumination" that comes as a side effect with the shadows.
Fig. 7 Comparison of shadowed and unshadowed texture-based volume rendering of a geometrical data-set. This 128x128x64 voxel data-set was generated to verify the correct calculation of the shadows. The images were rendered using 256 parallel textured planes. The scene contains a large block on top with four smaller blocks of different transparencies below. In the shadowed image one can see how the different smaller blocks cast different shadows on the ground and how the shadow of the large block appears different on the smaller blocks. One can also see the fake diffuse illumination effect here, because surfaces that face away from the light source are darker than others.
Fig. 8 Screenshots from a volume rendering with the 128x128x64 voxel data-set mapped as a 3D texture onto 256 parallel planes. Note the shadow cast from the fetus' arm onto its face. Although the shadowed image looks more realistic, some details are covered by the shadow. But the data-set can now be explored by interactively moving the light source, so details will be revealed that cannot be seen in the unshadowed image.
4 DESCRIPTION OF A PREFERRED EMBODIMENT
4.1 Overview
The input to our method is the volume data V consisting of image slices p1,...,pn and a light vector L. Every time L changes, the method produces a new output volume V* that contains the shadowed slices p1*,...,pn*. This is done by calculating a shadow map si on the fly for each input volume slice pi in the framebuffer. Leaving out some details, we get the following overview of our method, as illustrated in Fig. 1 (a code sketch follows the list):
0. copy p1 to p1*
for all slices p2 to pn:
1. draw slice pi into buffer 1
2. get shadow si by mixing slice pi-1 into shadow si-1 in buffer 2
3. get pi* by mixing the new si into pi in buffer 1
4. read shadowed slice pi* out of buffer 1
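Expressed as code, the loop might be outlined as follows. This is a schematic sketch of our own; Slice and the four helper functions are placeholders for the framebuffer operations detailed in section 5, not part of the actual implementation:

    #include <cstddef>
    #include <vector>

    using Slice = std::vector<float>;      // one RGBA slice (placeholder type)
    void drawSlice(const Slice&);          // draw a slice into buffer 1
    void accumulateShadow(const Slice&);   // blend a slice into the shadow (buffer 2)
    void applyShadowToBuffer1();           // darken buffer 1 by the shadow
    Slice readBuffer1();                   // read the shadowed slice back

    std::vector<Slice> shadowVolume(const std::vector<Slice>& p) {
        std::vector<Slice> pStar(p.size());
        pStar[0] = p[0];                   // step 0: the first slice gets no shadow
        for (std::size_t i = 1; i < p.size(); ++i) {
            drawSlice(p[i]);               // step 1: slice p_i into buffer 1
            accumulateShadow(p[i - 1]);    // step 2: mix p_{i-1} into shadow s_{i-1}
            applyShadowToBuffer1();        // step 3: mix the new s_i into p_i
            pStar[i] = readBuffer1();      // step 4: read shadowed slice p_i*
        }
        return pStar;
    }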
The resulting shadowed volume V* can be visualized using Cabral's texture-based volume rendering or any other volume visualization method. Thus, whenever the volume is rotated or moved with respect to the light source, new shadow maps and a new shadowed volume are created. When light and volume positions remain fixed and only the camera moves, the voxels lying in shadow stay the same, so no reprocessing is necessary.
Note that the shadow inside the volume is built incrementally by this method. As the i-1 front slices contribute to the shadow for a slice pi, a naive method would have to go through all these slices p1,...,pi-1 again for each i. While this would give us O(n²) complexity, our method gets away with O(n), because only four additional operations are necessary to generate a shadowed slice out of an unshadowed one. As these operations are all pixel drawing, reading or copying operations, one can anticipate that this method performs very fast on modern graphics hardware.
The real work in our method is, of course, done in steps 2 and 3. For step 2 we have to find out how to accumulate the shadows inside the volume, while in step 3 the shadow must be applied to a volume slice. To understand how this works in detail, we will first show how to cast the shadow of a single slice onto a plain polygon, then see what happens if the shadow falls onto another slice, and last but not least, how the shadows are propagated through the volume.
4.2 Casting the Shadow of a Single Slice onto a Plain Polygon
Consider the situation when we want a polygon p1, textured with a single volume slice, to cast a shadow onto a plain background polygon p2. The trick at the core of our method is that we map the same texture onto p2 and then do the following two steps, as illustrated in Fig. 2:
1. Draw p2 with a special blending function and offset ds
2. Draw p1 with the Over-operator
This simple trick gives the impression of p1 casting a shadow onto p2. The two important details here are how to calculate the correct offset ds for p2, and how to blend p2 into the background.
4.2.1. Calculating the shadow offset
Looking at Fig. 2 again, one can imagine that the shadow is cast further to the left as the light moves towards the right. This shadow offset solely depends on the distance dp between the two polygons and the angle φ between the light vector L and the polygon normal N:
ds = dp · tan φ (1)
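As a concrete illustration (our own sketch; the vector type and function names are not from the original implementation), the offset can be computed from the light vector and the slice normal like this:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    float length(const Vec3& v) { return std::sqrt(dot(v, v)); }

    // Shadow offset per equation (1): ds = dp * tan(phi), where phi is the angle
    // between the light vector L and the slice normal N. The caller must keep
    // |phi| < 45 degrees (see the note below).
    float shadowOffset(const Vec3& L, const Vec3& N, float dp) {
        float phi = std::acos(dot(L, N) / (length(L) * length(N)));
        return dp * std::tan(phi);
    }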
Note that we restrict the light position by |φ| < 45° to avoid the discontinuities of the tangent. Note also that if φ = 45°, ds = dp, as shown in Fig. 3.
4.2.2 How to get a shadow from a volume slice
If we initially take a look at the framebuffer, we find it regularly filled with the background or destination color cd = (rd, gd, bd, αd).
What we want to achieve is to darken cd where the background polygon p2 is non-transparent and thus the image polygon p1 casts a shadow.
This is no binary decision, since the opacity of p1's pixels is proportional to the intensity of the shadow that is cast. If we use the following blending function to draw p2 into the framebuffer, we get the shadow of p1:
c = (1 - αs) · cd (2)
with cd being the color of the background, αs the α-value of p2, and c the resulting color in the framebuffer. To simplify the further formulas we assume shadows to be colorless, so we can leave out the color values of p1, since the shadow is only influenced by the transparency. This is not a real restriction though, because the blending function could be extended to model colored shadows as well.
4.3 Casting a Shadow onto another Volume Slice
While blending function (2) works correctly if the framebuffer is initially empty, things get a little more complicated if we want to cast the shadow onto an image that was already drawn into the framebuffer. We then have to take the α-values of the already drawn image into account in two ways: On the one hand we have to keep αd unchanged, because a shadow that falls onto an object should not manipulate its transparency. On the other hand we have to integrate αd into the blending equation, because shadows appear stronger on opaque objects than on semitransparent ones. Especially in the case αd = 0, no shadow should be drawn at all. So we get the modified blending equations:
c = (1 - αd · αs) · cd
α = αd (3)
Remember that the product αd · αs is high for strong shadows on opaque objects, so we have to take its inverse to attenuate the color components of the image that is to be shadowed.
Now that we have all the tools we need to let one volume slice cast a shadow onto the next one, we want to apply this procedure to a complete volume.
4.4 Shadowing a Complete Volume
If one wants to shadow a complete volume, care must be taken to ensure that shadows cast on transparent parts of one slice appear on the non-transparent parts of the next slice underneath. In other words, the shadow must be recast from slice to slice through the whole volume, as shown in Fig. 4. Several steps of this recasting may be necessary if more than one slice is transparent. This non-local property is the reason why shadows are hard to model in traditional, surface-oriented computer graphics. Fortunately, this can easily be integrated into our method.
Let us now take a closer look at the procedure in Fig. 1 again: Step 1 simply draws a single slice into the framebuffer. Step 2 adds one slice to the accumulated shadow. We already know that we have to take the offset ds into account here, according to equation (1), but we do not know yet which blending function to use to get the correct result.
Step 3 applies the accumulated shadow to the actual image. This is done by using blending function (3), because we want to cast a shadow onto a slice that contains different transparencies.
Step 4 reads the result out of the framebuffer.
So all that is left to do is to find an appropriate blending function for step 2 to accumulate the shadows. The blending function for this step does not need to involve color calculations, since for colorless shadows only α-values are needed. We have to decide how the α-value of a new volume slice contributes to the already accumulated shadow in buffer 2. In equation (2) we defined the shadow's intensity by the transparency of the shadow image, in that the shadow is lighter where the original volume slice is more transparent. This means the shadow's intensity is proportional to the α-value of the original image.
So let us assume the situation where we have the accumulated shadow of all i-1 front slices drawn into the framebuffer. Each time a new image polygon is added to the shadow image, the accumulated opacity must be increased according to the image's α-value. A blending function that describes this effect pretty well is:
α = 1 - (1 - αs) · (1 - αd) (4)
Note that the framebuffer has to be initially filled with α = 0.0 to get correct results.
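A quick numeric check of equation (4), as our own illustration: two slices with αs = 0.5 each accumulate to α = 0.75, i.e. the shadow gets denser with every semitransparent slice it passes.

    #include <initializer_list>

    int main() {
        float alpha = 0.0f;  // framebuffer initially filled with alpha = 0.0
        for (float as : {0.5f, 0.5f})                     // two 50 % opaque slices
            alpha = 1.0f - (1.0f - as) * (1.0f - alpha);  // equation (4)
        // after the first slice alpha = 0.5, after the second alpha = 0.75
        return 0;
    }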
By using blending function (4) for step 2 we ensure that the shadows are correctly combined as we cross the volume, while equation (3) lets shadows appear stronger on opaque voxels than on transparent ones. The next section will show how the method, and especially the two blending functions, can be implemented.
5 IMPLEMENTATION DETAILS
We implemented our method using the OpenGL graphics API [15]. It takes advantage of fast framebuffer operations like pixel drawing, copying and reading. We used the three-dimensional texture mapping extension to the OpenGL API to visualize the shadowed volume, though the method can also be implemented on platforms where only two-dimensional textures are available. This is possible because the process of shadowing the volume is independent of the visualization process. The method was implemented on a Silicon Graphics Octane workstation that was also used for benchmarking tests.
The main problem we first had to deal with in the implementation was that OpenGL does not offer direct equivalents to the blending functions (3) and (4). We had to consider how to modify the method so that the OpenGL blending functions could be applied.
5.1 Using OpenGL Blending Functions
The OpenGL API offers countless ways to combine the pixels of an image with those already in the framebuffer. This may be done either with blending functions or with logical operations. Unfortunately, our rather sophisticated equations (3) and (4) cannot be directly translated into OpenGL API calls.
So let us take a closer look at the two equations again. If we invert the α-values in (4), we get the following simpler blending function for the accumulation of the shadows:
α = (1 - αs) · αd (5)

This blending function has its OpenGL API counterpart in the glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA) call, so we are done with accumulating the shadows. Unfortunately, this function now causes strong shadows to have low α-values, which of course is not a very intuitive definition, because one would expect strong shadows to have a high opacity. But if we remember that we want to lower the colors of the image in the framebuffer, we can imagine that it will be handier to have the shadow's α-values inverted. This results in the following simple multiplication to darken the color values of pi according to the α-values of the shadow si:
c = αs · cd
α = αd (6)
This function can be implemented using the glBlendFunc(GL_ZERO, GL_SRC_ALPHA) call, if we first block the α-channel of the framebuffer by setting a color mask with glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE).
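Put together, applying function (6) might look like the following sketch (our own illustration; the buffer coordinates are placeholders for the reserved framebuffer regions, and we rely on the fact that fragments generated by glCopyPixels pass through blending in OpenGL 1.1):

    #include <GL/gl.h>

    // Darken the slice in buffer 1 by the shadow copy using blending function (6).
    void applyShadowToSlice(int buf1X, int buf1Y, int srcX, int srcY,
                            int width, int height) {
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);  // keep destination alpha
        glEnable(GL_BLEND);
        glBlendFunc(GL_ZERO, GL_SRC_ALPHA);                // c = alpha_s * c_d
        glRasterPos2i(buf1X, buf1Y);                       // destination: buffer 1
        glCopyPixels(srcX, srcY, width, height, GL_COLOR); // source: shadow copy
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);   // restore alpha writes
    }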
If we used function (6) alone, shadows would appear equally strong on structures with different transparencies. To avoid this, we have to introduce an additional blending function that blends the actual image polygon pi into the shadow si and lowers the α-value of si by its own α:
α = αs · αd (7)
As we already know, this equation corresponds to the glBlendFunc(GL_ZERO, GL_SRC_ALPHA) call. So for the implementation, we had to divide the two blending functions (3) and (4) into the three simpler ones (5), (6) and (7). To apply these functions we allocated an additional pixel area in the framebuffer and slightly modified our method, as illustrated in Fig. 5 (a code sketch follows the list):

for all slices p1 to pn:
1. draw image slice pi into buffer 1
2. get si* by mixing pi into si with (7) in buffer 3
3. get pi* by mixing si* into pi with (6) in buffer 1
4. get shadow si+1 by mixing pi into shadow si with (5) in buffer 2
5. make a copy of the new shadow si+1 in buffer 3
6. read shadowed slice pi* out of buffer 1
Note that the new method adds step 5 and splits step 2 of the original method into steps 2 and 3. In step 2 the actual image slice pi is blended into a copy of the accumulated shadow si in buffer 3, to reduce the shadow's α-value where pi is opaque. We have to keep a copy of si in buffer 2, because the shadow image is lost to our method once we have blended pi into it.
But even with this extended method, we only need to draw one volume slice into the framebuffer, do five copying operations and read the resulting image out of the framebuffer again to get one shadowed volume slice. The method is therefore still expected to run fast, because copying pixel areas inside the framebuffer can be done very quickly on today's graphics hardware.
5.2 Moving the Camera
One of the advantages of our method is that the camera can be moved with absolutely no performance decrease compared to the unshadowed volume, as long as the light position is fixed with regard to the volume. This is due to the fact that the shadow cast inside the volume depends only on the light position and is independent of the camera, as can be seen in Fig. 4. Although this is a nice feature, the best spatial information is achieved if camera and light move synchronously, so that the observer gets the impression of a moving object. Moving only the light source with a fixed camera position is also very helpful for understanding the three-dimensional properties of the data-set, as it is like exploring the data-set by shining a flashlight into the volume. Every movement of the light requires a recalculation of the shadows, though. But if the graphics hardware offers high pixel transfer rates from host memory to the framebuffer, this can be done at interactive frame rates, as the benchmarking results show.
5.3 Moving the Light
One issue we did not mention so far is the restriction on the light position to have an angle φ < 45° with the normal of the volume slices, as stated in equation (1). This restriction can easily be overcome by our method. If we think of the six faces of the volume cube and their normals N1,...,N6, we have to find the Ni with the smallest scalar product L · Ni. This normal determines in which direction to cross the volume, and so in which orientation the slices have to be taken out of the volume. As these slices are always parallel to the volume axes, there are different ways to efficiently draw them into the framebuffer. This is part of the next section.
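A small sketch of this face selection (our own illustration; the six normals are the axis-aligned normals of the volume cube):

    #include <cfloat>

    struct Vec3 { float x, y, z; };
    float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Pick the cube face whose normal N_i has the smallest scalar product with
    // the light vector L; its axis fixes the slicing direction through the volume.
    int sliceDirection(const Vec3& L) {
        const Vec3 N[6] = { {1,0,0}, {-1,0,0}, {0,1,0},
                            {0,-1,0}, {0,0,1}, {0,0,-1} };
        int best = 0;
        float bestDot = FLT_MAX;
        for (int i = 0; i < 6; ++i) {
            float d = dot(L, N[i]);
            if (d < bestDot) { bestDot = d; best = i; }
        }
        return best;   // face index 0..5; best/2 gives the volume axis
    }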
5.4 3D Texture Mapping vs. Framebuffer Drawing
To draw the single volume slices into the framebuffer, we used the three-dimensional texture mapping extension to the OpenGL API for our implementation. This extension offers an easy way to get arbitrary slices out of the volume and render them into the framebuffer. Step 1 of our method can thus be performed very fast, because the volume can stay resident in the texture cache as a three-dimensional texture and does not have to be transferred from host memory to the framebuffer. It also makes no difference in which direction we have to go through the volume according to the light sector, because the position of the slices in the volume can be chosen freely by assigning the appropriate texture coordinates.
Due to the fact that the slices are always sampled out of the volume parallel to the volume axes, the method can also be efficiently implemented on platforms where no 3D textures are available. This is done by using the glDrawPixels call, which transfers a pixel array from host memory to the framebuffer. We only have to keep three rotated volume copies resident in host memory, because slices can only be taken sequentially out of host memory with the glDrawPixels call. So for each light sector and its opposite sector there is one volume copy, which makes this process a fast alternative to the three-dimensional texture approach.
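Drawing slice i from such a pre-rotated host copy might then look like this (our own sketch; the memory layout, W*H RGBA bytes per slice stored contiguously, is an assumption):

    #include <GL/gl.h>
    #include <cstddef>

    // Draw slice i of the volume copy that matches the current light sector.
    void drawHostSlice(const unsigned char* volumeCopy, int i, int W, int H,
                       int x, int y) {
        glRasterPos2i(x, y);                                   // target position
        glDrawPixels(W, H, GL_RGBA, GL_UNSIGNED_BYTE,
                     volumeCopy + std::size_t(i) * W * H * 4); // slice i of the stack
    }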
In texture-based volume rendering it is common to color the volume data via the pixel map when it is transferred from host memory to the texture cache. This means that the 3D texture must be reloaded every time the lookup table for the volume changes. If we use the glDrawPixels call for our method, the pixel map is applied when the data is transferred from host memory to the framebuffer. If the lookup table is changed now, no texture has to be reloaded; the pixels are simply drawn as usual. This should speed up the shadow method for applications where many lookup table manipulations occur.
5.5 Adjusting Shadow Intensities by Pixel Mapping
As the properties of the volume data can vary considerably, depending on how the data was acquired, it may be necessary to adjust the intensity of the shadows. While in some cases one may want a strong shadow in the scene to get an aesthetically pleasing image, strong shadows may be rather disturbing in volumes with high transparencies. Fortunately, the shadow intensity can be regulated by applying the pixel map in any step of our method, because the pixel drawing, copying and reading operations can all be done via the pixel map. We did not have to make extensive use of this feature, as our results were fine with the method as described, but should the need arise, there are many ways of fine-tuning.
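One way this could be done, as our own sketch (the 0.5 scale factor is an arbitrary example), is to install an alpha-to-alpha pixel map before the shadow transfer operations:

    #include <GL/gl.h>

    // Attenuate the accumulated shadow opacity during pixel transfer.
    void installShadowAttenuation() {
        GLfloat alphaMap[256];
        for (int i = 0; i < 256; ++i)
            alphaMap[i] = 0.5f * i / 255.0f;       // halve every alpha value
        glPixelTransferi(GL_MAP_COLOR, GL_TRUE);   // enable the color/alpha maps
        glPixelMapfv(GL_PIXEL_MAP_A_TO_A, 256, alphaMap);
    }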
6 RESULTS
6.1 Image Quality
The images our method has produced so far show physically accurate shadows: semitransparent voxels cast a light shadow, and strong shadows appear lighter on semitransparent voxels than on opaque ones, as can be seen in Fig. 7. On fully opaque geometric objects the shadow even seems a little too sharp, though geometrically correct. But as volume data-sets usually have smooth transitions from opaque to more transparent voxels, this does not impair the realism that is added to volume renderings in medical applications, as shown in Fig. 8.
Figures 6 and 7 also show that the shadowed volumes look as if they were diffusely illuminated, although no illumination calculation was actually performed. This surprising effect stems from the fact that the shadows follow the contour of the volume and thus provide an additional visual cue about its shape. Together with the consistent darkening of those sides facing away from the light direction, the visual impression of a shaped, illuminated volume arises. This nice effect is particularly interesting because, as Van Gelder and Kim [14] pointed out, illumination helps in understanding the spatial structure of the volume (as shadows do), which is desirable on its own, and here it comes for free along with the shadows. Of course, this is no "real" illumination, it merely appears as if there were some, but it nevertheless increases the quality of the visualization.
6.2 Performance
The method was developed on a Silicon Graphics Octane MXI workstation with 4 MB of texture memory. We compared rendering results for shadowed and unshadowed volumes rendered in a 600x600 pixel window. The volume size was limited to 128x128x64 voxels to ensure that the 3D texture fits into the Octane's texture memory without having to resort to texture bricking.
volume size (voxels)   texture planes   unshadowed (fps)   shadowed (fps)
64x64x64                     64               6.3               3.6
64x64x64                    128               3.3               2.3
64x64x64                    256               1.7               1.4
128x128x64                   64               5.0               1.8
128x128x64                  128               2.6               1.4
128x128x64                  256               1.3               0.9

Table 1: Benchmarking results for the SGI Octane MXI
As one can see, the performance decrease for shadowing the volumes depends on the volume size and not on the number of texture planes used to visualize the data-set. This is not surprising, since shadowing the volume is a preprocessing step to the texture-based volume rendering. For practical applications we found that using about 256 texture planes gives the best trade-off between image quality and performance for unshadowed volume rendering. Fortunately, in this case our method decreases performance only by about 30 %. Even for volumes larger than 128x128x64 voxels the method would perform well compared to the unshadowed case if we also increased the number of texture planes. The method performs poorly when only a small number of texture planes is used for a large volume. But if one wants to render an image with only a few texture planes, one is surely not interested in details and will likely do without a shadowed volume anyway.
7 CONCLUSION
In conclusion, we developed and implemented a method that incorporates shadows in a texture-based volume renderer to increase the realism of the resulting images. By exploiting the incremental property of the shadow we were able to achieve this without decreasing performance by more than 50 %. Rendering and blending of data and shadows can be performed in real time on modern workstations. This is achieved by the fact that most operations take place in the graphics hardware's framebuffer, while only little mathematical calculation has to be done in processor memory. The method can be implemented with the standard OpenGL 1.1 API, although three-dimensional textures, which will be part of the upcoming OpenGL 1.2 standard [16], simplify the visualization process.
References
[1] U. Behrens, M. Bublat, M. Fieberg, G. Grunst, M. Jahnke, K. Kansy, R. Ratering, H.-J. Schwarzmaier, and P. Wisskirchen. Enabling systems for neurosurgery. In Proc. Computer Assisted Radiology CAR '98 (Tokyo, Japan, June 24-27), 1998.
[2] Brian Cabral, Nancy Cam, and Jim Foran. Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. In 1994 Symposium on Volume Visualization, pages 91-98. ACM SIGGRAPH, October 1994.
[3] Cindy M. Goral, Kenneth E. Torrance, Donald P. Greenberg, and Bennett Battaile. Modeling the interaction of light between diffuse surfaces. In Computer Graphics (SIGGRAPH '84 Proceedings), volume 18, pages 213-222, July 1984.
[4] O. Kühne, C. Poliwoda, C. Reinhart, T. Günther, J. Hesser, and R. Männer. Interactive segmentation and visualization of volume data sets. In Proc. Visualization '97, 1997.
[5] Philippe Lacroute and Marc Levoy. Fast volume rendering using a shear-warp factorization of the viewing transformation. In Proceedings of SIGGRAPH '94 (Orlando, Florida, July 24-29, 1994), Computer Graphics Proceedings, Annual Conference Series, pages 451-458. ACM SIGGRAPH, ACM Press, July 1994.
[6] David Laur and Pat Hanrahan. Hierarchical splatting: A progressive refinement algorithm for volume rendering. In Computer Graphics (SIGGRAPH '91 Proceedings), volume 25, pages 285-288, July 1991.
[7] Marc Levoy. Efficient ray tracing of volume data. ACM Transactions on Graphics, 9(3):245-261, July 1990.
[8] Tom McReynolds. Programming with OpenGL: Advanced Techniques. In SIGGRAPH '97 Course Notes, Course No. 11. 1997.
[9] Bui-Tuong Phong. Illumination for computer generated pictures. Communications of the ACM, 18(6):311-317, June 1975.
[10] Thomas Porter and Tom Duff. Compositing digital images. In Computer Graphics (SIGGRAPH '84 Proceedings), volume 18, pages 253-259, July 1984.
[11] William T. Reeves, David H. Salesin, and Robert L. Cook. Rendering antialiased shadows with depth maps. In Computer Graphics (SIGGRAPH '87 Proceedings), volume 21, pages 283-291, July 1987.
[12] Holly E. Rushmeier and Kenneth E. Torrance. The zonal method for calculating light intensities in the presence of a participating medium. In Computer Graphics (SIGGRAPH '87 Proceedings), volume 21, pages 293-302, July 1987.
[13] Lisa Sobierajski and Arie Kaufman. Volumetric ray tracing. In 1994 Symposium on Volume Visualization, pages 11-18. ACM SIGGRAPH, October 1994.
[14] Allen Van Gelder and Kwansik Kim. Direct volume rendering with shading via three-dimensional textures. In 1996 Volume Visualization Symposium, pages 23-30. IEEE, October 1996.
[15] Mason Woo, Jackie Neider, and Tom Davis. OpenGL Programming Guide: The Official Guide to Learning OpenGL. Addison-Wesley, 2nd edition, 1997.
[16] OpenGL Version 1.2. http://www.opengl.org/Developers/Version1.2/html/opengl1.2.html.
[17] "Method to generate the shadow of arbitrary-shaped three-dimensional objects", IBM Technical Disclosure Bulletin, vol. 33, no. 2, 1 July 1990, pages 146-148, XP000123567.
[18] M. Segal et al., "Fast shadows and lighting effects using texture mapping", Computer Graphics, vol. 26, no. 2, July 1992, pages 249-252, XP002096276, USA.

Claims

1. Method for processing volumetric image data of a semitransparent object generating a shadow due to light of a light source impinging onto the object in a direction defined by a light vector, wherein each volumetric image data represents a value of transparency and a value of brightness and wherein the method comprises the steps of:
(a) dividing the volumetric image data into several image slice data representing individual layers of the object arranged sequentially behind each other,
(b) on the basis of the light vector and the values of transparency of the image slice data of the object layer being closest to the light source, generating a shadow value for each image slice data of the object layer being closest to the light source and storing these shadow values,
(c) overlaying the stored shadow values generated in step (b) and the brightness values of the image slice data of the object layer second most close to the light source according to the light vector and the distance between these two adjacent object layers,
(d) on the basis of the light vector and the values of transparency of the image slice data of the object layer second most close to the light source, generating shadow values for each image slice data of the second most object layer and adding these values to the stored shadow values,
(e) overlaying the stored shadow values generated in step (d) and the brightness values of the image slice data of the object layer third most close to the light source according to the light vector and the distance between these two adjacent object layers,
(f) performing steps (d) and (e) for each object layer which is next in the direction away from the light source and
(g) volume rendering the brightness values of the image slice data of the object layer closest to the light source and the overlaid shadow and brightness values of the image slice data of all the other object layers to display the semitransparent object including the shadow generated by the semitransparent object.
2. Method according to claim 1, characterized in that the overlaying step (e) comprises a step of blending the brightness and shadow values according to a blending function.
PCT/EP1999/007666 1998-10-13 1999-10-12 Method for processing volumetric image data of an object visualized by texture mapping for adding shadows WO2000022576A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US09/807,327 US6771263B1 (en) 1998-10-13 1999-10-02 Processing volumetric image data with shadows
DE69901572T DE69901572T2 (en) 1998-10-13 1999-10-12 METHOD FOR VOLUMETRIC IMAGE DATA PROCESSING OF AN OBJECT VISUALIZED BY TEXTURE IMAGING
EP99950711A EP1121664B1 (en) 1998-10-13 1999-10-12 Method for processing volumetric image data of an object visualized by texture mapping for adding shadows

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP98119291.7 1998-10-13
EP98119291 1998-10-13

Publications (1)

Publication Number Publication Date
WO2000022576A1 true WO2000022576A1 (en) 2000-04-20

Family

ID=8232784

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP1999/007666 WO2000022576A1 (en) 1998-10-13 1999-10-12 Method for processing volumetric image data of an object visualized by texture mapping for adding shadows

Country Status (4)

Country Link
US (1) US6771263B1 (en)
EP (1) EP1121664B1 (en)
DE (1) DE69901572T2 (en)
WO (1) WO2000022576A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2392072A (en) * 2002-08-14 2004-02-18 Autodesk Canada Inc Generating shadow image data of a 3D object
US8130244B2 (en) * 2008-11-28 2012-03-06 Sony Corporation Image processing system

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004005364A (en) * 2002-04-03 2004-01-08 Fuji Photo Film Co Ltd Similar image retrieval system
EP1566773B1 (en) * 2004-02-18 2007-12-19 Harman Becker Automotive Systems GmbH Alpha blending based on a look-up table
TWI248764B (en) * 2004-09-01 2006-02-01 Realtek Semiconductor Corp Method and apparatus for generating visual effect
WO2006099490A1 (en) * 2005-03-15 2006-09-21 The University Of North Carolina At Chapel Hill Methods, systems, and computer program products for processing three-dimensional image data to render an image from a viewpoint within or beyond an occluding region of the image data
US7532214B2 (en) * 2005-05-25 2009-05-12 Spectra Ab Automated medical image visualization using volume rendering with local histograms
NZ542843A (en) * 2005-10-05 2008-08-29 Pure Depth Ltd Method of manipulating visibility of images on a volumetric display
US8041129B2 (en) 2006-05-16 2011-10-18 Sectra Ab Image data set compression based on viewing parameters for storing medical image data from multidimensional data sets, related systems, methods and computer products
US7830381B2 (en) * 2006-12-21 2010-11-09 Sectra Ab Systems for visualizing images using explicit quality prioritization of a feature(s) in multidimensional image data sets, related methods and computer products
US7961187B2 (en) * 2007-03-20 2011-06-14 The University Of North Carolina Methods, systems, and computer readable media for flexible occlusion rendering
JP2009018048A (en) * 2007-07-12 2009-01-29 Fujifilm Corp Medical image display, method and program
US7970237B2 (en) * 2007-08-01 2011-06-28 Adobe Systems Incorporated Spatially-varying convolutions for rendering glossy reflection effects
US7982734B2 (en) * 2007-08-01 2011-07-19 Adobe Systems Incorporated Spatially-varying convolutions for rendering soft shadow effects
US8698806B2 (en) * 2009-11-09 2014-04-15 Maxon Computer Gmbh System and method for performing volume rendering using shadow calculation
US8797461B2 (en) * 2012-12-28 2014-08-05 Behavioral Technologies LLC Screen time control device and method
US11321904B2 (en) 2019-08-30 2022-05-03 Maxon Computer Gmbh Methods and systems for context passing between nodes in three-dimensional modeling
US11714928B2 (en) 2020-02-27 2023-08-01 Maxon Computer Gmbh Systems and methods for a self-adjusting node workspace
US11373369B2 (en) 2020-09-02 2022-06-28 Maxon Computer Gmbh Systems and methods for extraction of mesh geometry from straight skeleton for beveled shapes

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1991018359A1 (en) * 1990-05-12 1991-11-28 Rediffusion Simulation Limited Image generator

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4750171A (en) * 1986-07-11 1988-06-07 Tadiran Electronics Industries Ltd. Data switching system and method
JP3203160B2 (en) * 1995-08-09 2001-08-27 三菱電機株式会社 Volume rendering apparatus and method
US6115048A (en) * 1997-01-21 2000-09-05 General Electric Company Fast method of creating 3D surfaces by `stretching cubes`
US6014143A (en) * 1997-05-30 2000-01-11 Hewlett-Packard Company Ray transform method for a fast perspective view volume rendering
US6298148B1 (en) * 1999-03-22 2001-10-02 General Electric Company Method of registering surfaces using curvature
US6407737B1 (en) * 1999-05-20 2002-06-18 Terarecon, Inc. Rendering a shear-warped partitioned volume data set
US6396492B1 (en) * 1999-08-06 2002-05-28 Mitsubishi Electric Research Laboratories, Inc Detail-directed hierarchical distance fields
US6603484B1 (en) * 1999-08-06 2003-08-05 Mitsubishi Electric Research Laboratories, Inc. Sculpting objects using detail-directed hierarchical distance fields
US6483518B1 (en) * 1999-08-06 2002-11-19 Mitsubishi Electric Research Laboratories, Inc. Representing a color gamut with a hierarchical distance field

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1991018359A1 (en) * 1990-05-12 1991-11-28 Rediffusion Simulation Limited Image generator

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2392072A (en) * 2002-08-14 2004-02-18 Autodesk Canada Inc Generating shadow image data of a 3D object
GB2392072B (en) * 2002-08-14 2005-10-19 Autodesk Canada Inc Generating Image Data
US7142709B2 (en) 2002-08-14 2006-11-28 Autodesk Canada Co. Generating image data
US8130244B2 (en) * 2008-11-28 2012-03-06 Sony Corporation Image processing system

Also Published As

Publication number Publication date
DE69901572T2 (en) 2003-06-05
US6771263B1 (en) 2004-08-03
EP1121664B1 (en) 2002-05-22
DE69901572D1 (en) 2002-06-27
EP1121664A1 (en) 2001-08-08

Similar Documents

Publication Publication Date Title
Behrens et al. Adding shadows to a texture-based volume renderer
EP1121664B1 (en) Method for processing volumetric image data of an object visualized by texture mapping for adding shadows
Everitt Interactive order-independent transparency
Cook et al. Distributed ray tracing
Diefenbach Pipeline rendering: interaction and realism through hardware-based multi-pass rendering
Westermann et al. Efficiently using graphics hardware in volume rendering applications
Wilhelms et al. A coherent projection approach for direct volume rendering
WO1999049417A1 (en) Fog simulation for partially transparent objects
Wyman Interactive image-space refraction of nearby geometry
US6396502B1 (en) System and method for implementing accumulation buffer operations in texture mapping hardware
McReynolds et al. Programming with opengl: Advanced rendering
Sintorn et al. Real-time approximate sorting for self shadowing and transparency in hair rendering
Nagy et al. Depth-peeling for texture-based volume rendering
Policarpo et al. Deferred shading tutorial
Wan et al. Interactive stereoscopic rendering of volumetric environments
McReynolds et al. Programming with opengl: Advanced techniques
Lambru et al. Hybrid global illumination: A novel approach combining screen and light space information
Měch Hardware-accelerated real-time rendering of gaseous phenomena
Nielsen et al. Fast texture-based form factor calculations for radiosity using graphics hardware
Kye et al. Interactive GPU-based maximum intensity projection of large medical data sets using visibility culling based on the initial occluder and the visible block classification
Demiris et al. 3-D visualization in medicine: an overview
Yang et al. Rendering hair with back-lighting
Leshonkov et al. Real-time Rendering of Small-scale Volumetric Structure on Animated Surfaces
Brüll et al. Billboard Ray Tracing for Impostors and Volumetric Effects
Archer Leveraging GPU advances for transparency, compositing and lighting using deep images and light grids

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA DE JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1999950711

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1999950711

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 09807327

Country of ref document: US

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWG Wipo information: grant in national office

Ref document number: 1999950711

Country of ref document: EP