
A Crash Course On Illustrative Visualization

Yuri Meiburg April 18, 2010


Abstract. Illustrative visualization is a young field within visualization that abstracts data in order to amplify the viewer's cognition of datasets. Many techniques serve this purpose; this paper focuses on one of them: halos. We start by explaining various usages, implementations, and functions of halos. Because illustrative visualization is a very broad field, we also describe another technique, called style transfer functions. This extension of ordinary transfer functions can be used to render different densities in volume data in different styles, which makes it possible to render certain datasets more accurately and to distinguish their parts even better. The general purpose of this paper is to provide some insight into the field of illustrative visualization.

1 Introduction

Illustrative visualization is the field that tries to amplify cognition through visual representations that are computer-supported or even entirely computer-generated [15, 17]. Most techniques applied in illustrative visualization have either proven useful in other computer visualization settings or are based on techniques used in traditional drawings. Previously, such drawings were made by hand, but acquiring digital data has become so easy that rendering it in (near) real time offers great advantages. Where these techniques were previously only used in books, for example to clarify a block of text, it is now possible to view (almost) real-time feature-highlighted images generated from a dataset. An example of a traditional scientific illustration is shown in Figure 1(a), and a computer-generated example is shown in Figure 1(b).

(a) A scientific illustration of the dissection of a frog; different features are emphasized using different drawing techniques.² (b) A generated view, showing fMRI data with depth-dependent halos, along with volume rendering to provide additional information about the brain.

Figure 1: Demonstration of the differences between hand-drawn scientific illustrations and computer-generated samples.

A common technique is applying halos to emphasize parts of an illustration; due to their frequent occurrence in illustrative visualization, the focus of this paper will mostly be on halos and their different implementations and functions. To demonstrate the breadth of illustrative visualization, this paper will also cover an extension of traditional transfer functions. The paper is structured as follows: Section 2.1 covers depth-dependent halos [6, 20], and Section 2.2 describes techniques in which halos play a supporting role. Section 3 covers style transfer functions, an extension of traditional transfer functions. The paper is concluded in Section 4.

² This image was created by Boymans, M. and is available online at: http://www.myrtheboymans.nl/

2 Various types of halos

2.1 Depth dependent halos

Depth-dependent halos [6] were proposed by Everts et al. as a technique that focuses on visualizing dense data bundles. One of the features of this technique is that it tries to emphasize depth information by generating halos of various sizes. An example image is shown in Figure 2.

Figure 2: DTI fiber tracts visualized using the depth-dependent halo technique [6] proposed by Everts et al. A varying halo thickness emphasizes relative depths, and by bundling colinear segments no information is lost.

This figure shows DTI fiber tracts, visualized using depth-dependent halos. When fiber tracts lie colinear in the image, they are emphasized by a thicker black line, while less structured lines are de-emphasized. Larger gaps between two bundles suggest a greater distance between them, while small halos suggest that two segments are close to each other. This gives an impression of the relative depth of different bundles. The technique can be rendered in real time.

2.1.1 Advantages of depth dependent halos

Traditionally, lines were rendered as shaded tubes, but this approach has proven somewhat limited. Because tubes are relatively thick (compared to simple lines), this technique tends to lose information about individual line orientations in dense areas. An example of this effect is shown in Figure 4(a). Plain line rendering, on the other hand, does not have this problem, but it suffers from a complete lack of depth perception, as Figure 4(b) shows. In 1979, Appel et al. proposed a halo effect for lines [1]. When this technique is applied to a sparse set of lines, it works very well: Figure 3 shows a cube with two highlighted quads. The left cube is ambiguous, as it is not clear whether the red or the green quad is in front. Using halos removes this ambiguity, as shown in the right cube: it is now clear that the red face is in front of the green face. Although this technique works fine for few lines, in dense areas it removes a lot of information, as can be seen in Figure 4(c).

Figure 3: The halo technique proposed by Appel et al. Using halos reduces the ambiguity of the left image; the right image shows much more clearly that the red face is the front face and the green face is the back face.

(a) Rendering tubes with shading removes orientation information of single lines. (b) Rendering plain lines loses all depth perception and gives a lot of visual clutter. (c) Simple halos remove a lot of lines in dense areas. (d) Depth-dependent halos emphasize colinear bundles, but do not remove any information.

Figure 4: Overview of different rendering techniques for lines.

The last figure, Figure 4(d), shows the proposed depth-dependent halos technique. This figure gives more insight into the data than the other three methods, especially with respect to the colinearity of lines. If a line is not colinear with a bundle, a halo is drawn around it to emphasize this.

2.1.2 How is this implemented?

Although this effect can easily be extended to point data, we will focus on line data, as that is what the method was intended to display. This means that initially there are only lines of zero width. On the central processing unit (CPU) all vertices are duplicated at the exact same position. Each vertex gets a direction attribute, calculated by linear interpolation of the two line segments adjacent to that vertex. To achieve interactive frame rates a small trick involving texture coordinates is used. Each vertex gets a coordinate (u, v), where u is the position along the line and v is 0 (left), or 1 (right) if it is a duplicate. This makes it possible to distinguish between left and right vertices on the graphics processing unit (GPU), so the lines can be converted into triangle strips. This conversion is done in the vertex shader, where each vertex is displaced according to Equation 1:

p_out = p_in + (V × D) / ‖V × D‖ · (v − 0.5) · w_strip    (1)

where D is the direction of the vertex, V is the viewing vector, p_in is the original location of the vertex, and w_strip is the width of the strip (the halo plus the black line). This gives triangle strips which are always aligned to face the viewer (similar to sprites). For each fragment the color is determined using Equation 2:

s = w_strip · |v − 0.5|;  black if s < 0.5 · w_line, else white    (2)

where w_line is the width of the black core line.
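As a rough illustration of Equations 1 and 2, the vertex displacement and the fragment colour test can be sketched in plain Python. The helper names, and the explicit black-line width w_line, are assumptions for this sketch, not the paper's actual shader code:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(a):
    n = math.sqrt(sum(x * x for x in a))
    return tuple(x / n for x in a)

def displace_vertex(p_in, line_dir, view_vec, v, w_strip):
    """Equation 1: offset a duplicated vertex sideways so the line
    becomes a view-facing triangle strip of width w_strip.
    v is 0 for the 'left' copy and 1 for the 'right' copy."""
    side = normalize(cross(view_vec, line_dir))  # perpendicular to view and line
    return tuple(p + s * (v - 0.5) * w_strip for p, s in zip(p_in, side))

def fragment_color(v, w_strip, w_line):
    """Equation 2: the distance of a fragment from the strip centre
    decides between the black core line and the white halo."""
    s = w_strip * abs(v - 0.5)
    return "black" if s < 0.5 * w_line else "white"
```

For example, a vertex on a line running along x, viewed along z, is pushed along y: the v = 0 copy to one side of the centre line, the v = 1 copy to the other, giving a strip of the requested width.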

This gives ordinary halos, as shown in Figure 5(a). To make these halos depth-dependent, they are folded downwards, as shown in Figure 5(b). This is done by displacing white fragments according to Equation 3:

d_new = d_old + d_max · f_displacement · (2 · |v − 0.5|)    (3)
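Equation 3 amounts to a simple per-fragment depth offset; a minimal sketch (function and parameter names are ours, not the paper's):

```python
def displaced_depth(d_old, d_max, f_disp, v):
    """Equation 3: fold halo fragments away from the viewer.
    v in [0, 1] runs across the strip; the centre (v = 0.5) keeps its
    depth, while the outer halo edges are pushed back by up to
    d_max * f_disp."""
    return d_old + d_max * f_disp * (2.0 * abs(v - 0.5))
```

The centre of the strip is left untouched, so the black line itself stays at the original depth; only the white halo is folded back, which is what makes nearby colinear lines merge without halos while distant crossings still show them.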

This adjustment is what makes the halos depth-dependent. If two strips are at the same depth and right next to each other, or cross one another, there will be no halo. At the same time, if two lines are colinear but at different depths in the dataset, the full halo of the frontmost line is shown. And when two lines cross closely, but not at exactly the same depth, a smaller halo is shown. A stepwise example is shown in Figure 6, which shows vertical lines crossing one horizontal line. The vertical lines are far away at the left and closer than the horizontal line at the right.

2.1.3 Additional enhancements

Even though the spatial relationships between bundles are already visualized relatively clearly, a further optimization was performed in the form of depth cueing. Elber proposed a method in which lines are thinned further in relation to their distance from the viewpoint [5]. This exaggerates the spatial relations even further. Figure 7 shows a comparison of a dataset rendered twice: on the left without depth cueing, on the right with depth cueing enabled.

(a) The result of creating triangle strips and the color determination of each fragment.

(b) The desired effect.

Figure 5: The halos as they are without displacement, and the desired effect with displacement.

Figure 6: Demonstration of depth dependent halos; the vertical lines are placed increasingly closer to the viewer (halfway, they come before the horizontal line).

(a) Depth dependent halos, without depth cueing.

(b) Depth dependent halos, with depth cueing.

Figure 7: Depth cueing using the method proposed by Elber; the further a line is from the viewpoint, the thinner it is displayed.

When a line ends in front of another, the rectangular end of its strip is clearly visible. This can be perceived as distracting, so the visual appearance of the lines is improved by tapering the halos at the end of the strip. This gradual narrowing of the halo only affects the end of a line, but is visually more appealing. A comparison of a part of a scene with and without tapering is shown in Figure 8.

2.1.4 Further development

After this work was published, it was picked up for further development by Svetachov et al. [20], who extended the method to also include context information in the rendering. This extension uses stippling and hatching (inspired by [22, 12]) to render volume data in a matching style, using methods from [24, 4] to extract silhouettes and feature lines from the volume data. Using cutting planes it is easy to view different layers of information. An example of this method is shown in Figure 9.

(a)

(b)

Figure 8: Small part of a scene where lines end in front of other lines: (a) rendered without tapering, (b) rendered with tapering.

Figure 9: Demonstration of pen-and-ink based volume rendering to provide additional information to depth dependent halos.

2.2 Halos as supporting technique

2.2.1 Molecular visualization

In some illustrative visualization techniques, the role of halos is little more than supporting another visualization method. One such example is the real-time visualization of molecules by Tarini et al. [21]. The focus of this research lies on the visualization of complex molecules of up to the order of 10⁶ atoms. Although research on the visualization of large molecules already exists (for instance [10, 16, 23, 19]), it is well known that one of the major problems is visualizing the structure. Tarini et al. propose a combination of various visualization techniques to overcome this problem. The main components are ambient occlusion and edge cueing, but their end result is composed of 8 different rendering techniques. These techniques are shown in Figure 10(a), and the end result is shown in Figure 10(b).

(a) The 8 different techniques added together to create the image in (b). (b) The end result of the 8 techniques from (a).

Figure 10: A rendering from QuteMol, and the decomposition of the different effects.

The majority of the output is defined by ambient occlusion, a technique introduced in 2002 by Landis [11] and meant as a crude approximation of global illumination. Global illumination is a group of algorithms that try to add more realistic lighting effects to 3D renderings; its disadvantage is that it is rather slow. Ambient occlusion is not exact, but it also creates visually appealing images and is much quicker than full global illumination. To demonstrate how much this technique contributes to QuteMol, see Figure 11 for a comparison of a scene with and without ambient occlusion.

(a) A large molecule without ambient occlusion. (b) A large molecule with ambient occlusion.

Figure 11: A comparison of a large molecule with and without ambient occlusion. Note how much depth information is emphasized with ambient occlusion.

The visualization technique of Tarini et al. also incorporates three different types of halos. The first type is referred to as depth-aware halos, and shows similarities to the method mentioned in Section 2.1 and to a technique from Luft et al. [13].
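To make the idea of ambient occlusion concrete, here is a deliberately crude CPU sketch: for each atom centre we shoot random rays and count how many are blocked by neighbouring atom spheres. This is a toy setup of our own, not QuteMol's GPU implementation (which bakes occlusion into per-atom textures):

```python
import math
import random

def ambient_occlusion(points, radius, n_dirs=64, max_dist=4.0, seed=1):
    """Return one brightness value in [0, 1] per atom: 1.0 means fully
    exposed, lower values mean more of the ambient light is blocked."""
    rng = random.Random(seed)

    def rand_dir():
        # Rejection-sample a uniform direction on the unit sphere.
        while True:
            d = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
            n = math.sqrt(sum(x * x for x in d))
            if 1e-6 < n <= 1.0:
                return tuple(x / n for x in d)

    def ray_hits_sphere(o, d, c, r):
        # Standard ray/sphere intersection; ray origin o, unit direction d.
        oc = tuple(a - b for a, b in zip(o, c))
        b = sum(x * y for x, y in zip(oc, d))
        disc = b * b - (sum(x * x for x in oc) - r * r)
        if disc < 0:
            return False
        t = -b - math.sqrt(disc)
        return 0 < t < max_dist

    occlusion = []
    for i, p in enumerate(points):
        hits = sum(
            1 for _ in range(n_dirs)
            if any(ray_hits_sphere(p, rand_dir(), q, radius)
                   for j, q in enumerate(points) if j != i)
        )
        occlusion.append(1.0 - hits / n_dirs)
    return occlusion
```

An isolated atom receives full brightness, while atoms buried inside a molecule are darkened, which is exactly the depth cue visible in Figure 11(b).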

2.2.2 Interactive halos

In 2007, Bruckner and Gröller [2] proposed an extension of the usual halos. This technique makes it possible to create and alter halos on volumetric datasets using transfer functions. Previous work in this area usually relies on a pre-processing step in which the halo contributions are computed, so that they can be rendered efficiently at runtime; the problem with this approach is that it is rather limited with regard to halo changes at runtime. Bruckner and Gröller instead find regions that emit halos in real time, by assigning what they call a halo seed. A halo seed represents how much a point in the dataset contributes to a halo. If a point is on a contour, the gradient vector ∇f_P is nearly orthogonal to the view vector v. This, combined with the magnitude of the gradient vector ‖∇f_P‖ to suppress noise, can be used to generate the halo seed [18]. The halo transfer function is computed according to Equation 4:

h(P) = h_v(P) · h_d(P) · h_p(P)    (4)

where h_v(P) is the value influence function, based on the data value at sample point P; h_d(P) is the directional influence function, based on the direction of the eye-space normal; and h_p(P) is the positional influence function, based on the distance of the sample point to a user-defined focus point. The final seed intensity s(P) is then computed according to Equation 5:

s(P) = h(P) · ‖∇f_P‖^α · (1 − |∇̂f_P · v|)^β    (5)

where α and β are additional control parameters to fine-tune the influence of the gradient magnitude and of the dot product, and ∇̂f_P denotes the normalized gradient. After computing the halo contribution, it needs to be mapped to a visual contribution. This is done using a halo profile function, which maps each non-zero value in the dataset to a color and an intensity; zero values are always mapped to fully transparent. A few sample transfer functions are shown in Figure 12(a). The size of the halos can be set with an additional parameter, whose influence is shown in Figure 12(b). A possible application of this technique is shown in Figure 13.

(a)

(b)

Figure 12: (a) Example transfer functions for interactive halos; (b) influence of the halo size parameter (linearly incremented).

(a)

(b)

Figure 13: A demonstration of the use of interactive halos: (a) without a halo, (b) with an interactive halo.
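The halo seed computation of Equations 4 and 5 can be sketched in a few lines of Python. The function signature and vector handling are our own assumptions; the three influence values are taken here as already-evaluated numbers:

```python
import math

def halo_seed(h_v, h_d, h_p, grad, view, alpha=1.0, beta=1.0):
    """Combine the value, directional, and positional influence
    functions into h(P) (Equation 4), then weight by the gradient
    magnitude and by how contour-like the point is, i.e. how nearly
    orthogonal the gradient is to the (normalized) view vector
    (Equation 5)."""
    h = h_v * h_d * h_p                                    # Equation 4
    g_mag = math.sqrt(sum(x * x for x in grad))
    if g_mag == 0.0:
        return 0.0                                         # no surface, no halo
    g_dir = tuple(x / g_mag for x in grad)
    dot = abs(sum(a * b for a, b in zip(g_dir, view)))
    return h * (g_mag ** alpha) * ((1.0 - dot) ** beta)    # Equation 5
```

A gradient orthogonal to the view direction (a contour point) keeps its full weighted seed, while a gradient parallel to it yields a seed of zero, so flat-on surfaces emit no halo.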

3 Style Transfer Functions

A technique which does not involve halos but also belongs to the field of illustrative visualization is style transfer functions [3], developed by Bruckner and Gröller in 2007. Style transfer functions are an extension of regular transfer functions, which provide a basic mapping from data intensities to a color and a transparency. Previous work on extensions of transfer functions includes [7, 9, 8, 14]. Bruckner and Gröller extend traditional transfer functions by replacing colors with different rendering styles. These styles are pre-generated by first rendering the style on a sphere, and then mapping the sphere to a 2D map (shown schematically in Figure 14(a)). This works because half a sphere shows all possible visible normals. Using these sphere maps it is relatively easy to translate a normal to a position on the sphere map, and thus obtain the appropriate style for that fragment. The proposed extension thus provides a method to go from Figure 14(b) to Figure 14(c).

(a) A sphere map is generated by projecting a sphere rendered in that style onto a 2D map.

(b) A traditional transfer function, displaying the (interpolated) color for each data intensity on the x-axis, with the opacity on the y-axis.

(c) The same graph, but now with style transfer functions.

Figure 14: (left) Mapping from a stylized sphere to a map; (right) the extension from transfer functions to style transfer functions.

These style transfer functions are curvature-controlled and can be interactively modified in the original implementation of Bruckner and Gröller; the technique renders at interactive frame rates. Style transfer functions can be used to create non-photorealistic renderings (NPR) of volume data. Because it intentionally abstracts real data in order to increase the information in the image, the technique belongs to illustrative visualization. An example of a rendering using style transfer functions is shown in Figure 15.

Figure 15: An example rendering using style transfer functions. The 6 different styles are displayed below the image.
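The core lookup described above can be sketched as follows. The lit-sphere mapping convention (the x and y components of the eye-space normal index the 2D map) and the interval-based style table are assumptions for this sketch, not the authors' exact implementation:

```python
def spheremap_uv(normal):
    """Map an eye-space unit normal to sphere-map coordinates in [0, 1]².
    The front half of a sphere shows every visible normal, so its x and y
    components suffice to index the pre-rendered style image."""
    nx, ny, nz = normal
    return (0.5 * nx + 0.5, 0.5 * ny + 0.5)

def style_transfer(value, styles):
    """Toy style transfer function: `styles` is a list of
    ((lo, hi), style_fn) pairs, where style_fn stands in for sampling
    a pre-rendered sphere map. A traditional transfer function would
    return a plain color here instead of a style."""
    for (lo, hi), style_fn in styles:
        if lo <= value < hi:
            return style_fn
    return None

def shade(value, normal, styles):
    """Shade one volume sample: pick the style for its data value,
    then sample that style at the normal's sphere-map position."""
    style_fn = style_transfer(value, styles)
    if style_fn is None:
        return None  # fully transparent
    u, v = spheremap_uv(normal)
    return style_fn(u, v)
```

Usage: with a flat grey style for low intensities and a two-tone "toon" style for high ones, a sample's rendered appearance depends on both its data value (which style) and its normal (where on the sphere map).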

4 Conclusion

Illustrative visualization is a relatively new field in computer visualization, but it has already proven to be incredibly broad. Although it serves the general purpose of amplifying cognition of certain types of information in a dataset, its applications are extremely versatile, and it can be applied to all kinds of datasets. A very common technique, incorporated in various research efforts, is halos. Halos have proven useful for distinguishing important sections of a dataset from the dataset as a whole; they are a basic technique for emphasizing certain parts. Because illustrative visualization is such a broad field, it is hard to label its techniques accordingly, and it is therefore not possible to write a survey paper covering the entire field.

Acknowledgements

We thank Isenberg for showing a lot of patience and pro-actively looking for solutions when problems arose during the research. We also thank Everts and Svetachov for providing their implementations for further development and use in this paper; we have also made extensive use of their material. Some images are taken from other papers to clarify the text explaining their techniques.

References

[1] Appel, A., Rohlf, F. J., and Stein, A. J. The haloed line effect for hidden line elimination. In SIGGRAPH '79: Proceedings of the 6th Annual Conference on Computer Graphics and Interactive Techniques (New York, NY, USA, 1979), ACM, pp. 151–157.

[2] Bruckner, S., and Gröller, M. E. Enhancing depth-perception with flexible volumetric halos. Tech. Rep. TR-186-2-07-04, Institute of Computer Graphics and Algorithms, Vienna University of Technology, Favoritenstrasse 9-11/186, A-1040 Vienna, Austria, April 2007.

[3] Bruckner, S., and Gröller, M. E. Style transfer functions for illustrative volume rendering. Computer Graphics Forum 26, 3 (September 2007), 715–724.

[4] Burns, M., Klawe, J., Rusinkiewicz, S., Finkelstein, A., and DeCarlo, D. Line drawings from volume data. ACM Transactions on Graphics (Proc. SIGGRAPH) 24, 3 (August 2005), 512–518.

[5] Elber, G. Line illustrations in computer graphics. The Visual Computer 11, 6 (1995), 290–296.

[6] Everts, M. H., Bekker, H., Roerdink, J. B. T. M., and Isenberg, T. Depth-dependent halos: Illustrative rendering of dense line data. IEEE Transactions on Visualization and Computer Graphics 15 (2009), 1299–1306.

[7] Hladůvka, J., König, A., and Gröller, E. Curvature-based transfer functions for direct volume rendering. In Spring Conference on Computer Graphics 2000 (2000), B. Falcidieno, Ed., pp. 58–65.

[8] Kindlmann, G., Whitaker, R., Tasdizen, T., and Möller, T. Curvature-based transfer functions for direct volume rendering: Methods and applications. In Proceedings of IEEE Visualization 2003 (October 2003), pp. 513–520.

[9] Kniss, J., Kindlmann, G., and Hansen, C. Interactive volume rendering using multi-dimensional transfer functions and direct manipulation widgets. In VIS '01: Proceedings of the Conference on Visualization '01 (Washington, DC, USA, 2001), IEEE Computer Society, pp. 255–262.

[10] Krone, M., Bidmon, K., and Ertl, T. Interactive visualization of molecular surface dynamics. IEEE Transactions on Visualization and Computer Graphics 15 (2009), 1391–1398.

[11] Landis, H. Production-ready global illumination. SIGGRAPH 2002 Course Note #16: RenderMan in Production (2002), 87–102.

[12] Lu, A., Morris, C. J., Taylor, J., Ebert, D. S., Hansen, C., Rheingans, P., and Hartner, M. Illustrative interactive stipple rendering. IEEE Transactions on Visualization and Computer Graphics (2003).

[13] Luft, T., Colditz, C., and Deussen, O. Image enhancement by unsharp masking the depth buffer. ACM Transactions on Graphics 25, 3 (July 2006), 1206–1213.

[14] Lum, E. B., and Ma, K.-L. Lighting transfer functions using gradient aligned sampling. In VIS '04: Proceedings of the Conference on Visualization '04 (Washington, DC, USA, 2004), IEEE Computer Society, pp. 289–296.

[15] Miller, G. A. WordNet Search 3.0. http://wordnetweb.princeton.edu/perl/webwn?s=illustration. [Online; accessed 10-March-2010].

[16] Pettersen, E., Goddard, T., Huang, C., Couch, G., Greenblatt, D., Meng, E., and Ferrin, T. UCSF Chimera - a visualization system for exploratory research and analysis. Journal of Computational Chemistry 25, 13 (2004), 1605–1612.

[17] Rautek, P., Bruckner, S., Gröller, E., and Viola, I. Illustrative visualization: New technology or useless tautology? SIGGRAPH Computer Graphics 42, 3 (2008), 1–8.

[18] Rheingans, P., and Ebert, D. Volume illustration: Nonphotorealistic rendering of volume models. IEEE Transactions on Visualization and Computer Graphics 7, 3 (2001), 253–264.

[19] Sayle, R., and Milner-White, E. J. RasMol: Biomolecular graphics for all. Trends in Biochemical Sciences (TIBS) 20, 9 (September 1995), 374.

[20] Svetachov, P., Everts, M. H., and Isenberg, T. DTI in context: Illustrating brain fiber tracts in situ. Computer Graphics Forum 29, 3 (June 2010). To appear.

[21] Tarini, M., Cignoni, P., and Montani, C. Ambient occlusion and edge cueing for enhancing real time molecular visualization. IEEE Transactions on Visualization and Computer Graphics 12, 5 (2006), 1237–1244.

[22] Treavett, S., and Chen, M. Pen-and-ink rendering in volume visualisation. In Proceedings of IEEE Visualization 2000 (2000).

[23] Hogue, C. W. V. Cn3D: A new generation of three-dimensional molecular structure viewer. Trends in Biochemical Sciences 22 (1997), 314–316.

[24] Yuan, X., and Chen, B. Illustrating surfaces in volume. In VisSym (2004), pp. 9–16, 337.

