Point-Based Graphics

Ebook, 869 pages
About this ebook

The polygon-mesh approach to 3D modeling was a huge advance, but today its limitations are clear. Longer render times for increasingly complex images effectively cap image complexity, or else stretch budgets and schedules to the breaking point.

Comprised of contributions from leaders in the development and application of this technology, Point-Based Graphics examines it from all angles, beginning with the way in which the latest photographic and scanning devices have enabled modeling based on true geometry, rather than appearance.

From there, it’s on to the methods themselves. Even though point-based graphics is in its infancy, practitioners have already established many effective, economical techniques for achieving all the major effects associated with traditional 3D Modeling and rendering. You’ll learn to apply these techniques, and you’ll also learn how to create your own. The final chapter demonstrates how to do this using Pointshop3D, an open-source tool for developing new point-based algorithms.

  • The first book on a major development in computer graphics by the pioneers in the field
  • Shows how 3D images can be manipulated as easily as 2D images are with Photoshop
Language: English
Release date: May 4, 2011
ISBN: 9780080548821

    Book preview

    Point-Based Graphics - Markus Gross


    1

    INTRODUCTION

    Markus Gross,      Computer Graphics Laboratory, ETH Zürich, Haldeneggsteig 4 / Weinbergstrasse, CH-8092 Zürich, Tel: +41-44-632 7114, Fax: +41-44-632 1596. E-mail address: grossm@inf.ethz.ch

    Hanspeter Pfister,      MERL - Mitsubishi Electric Research Laboratories, 201 Broadway, Cambridge, MA 02139, USA, Tel: +1 617 621 7566, Fax: +1 617 621 7550. E-mail address: pfister@merl.com

    1.1

    OVERVIEW

    Markus Gross and Hanspeter Pfister

    Point primitives have experienced a major renaissance in recent years, and considerable research has been devoted to the efficient representation, modeling, processing, and rendering of point-sampled geometry. There are two main reasons for this new interest in points: on one hand, we have witnessed a dramatic increase in the polygonal complexity of computer graphics models. The overhead of managing, processing, and manipulating very large polygonal-mesh connectivity information has led many researchers to question the future utility of polygons as the fundamental graphics primitive. On the other hand, modern three-dimensional (3D) digital photography and 3D scanning systems acquire both geometry and appearance of complex, real-world objects. These techniques generate huge volumes of point samples, which constitute the discrete building blocks of 3D object geometry and appearance—much as pixels are the digital elements for images.

    Over the past five years, point-based graphics has seen amazing growth. By the time of publication of this book, three symposia on point-based graphics will have concluded, the first of which was held in Zürich, Switzerland, in 2004. The large number of submissions to these conferences shows the huge interest in this young and exciting field and its potential for research and teaching.

    This interest, in combination with the huge success of various tutorials on this topic and thousands of downloads of Pointshop3D, a freeware software package for point-based graphics, has motivated us to create this textbook. It presents a comprehensive collection of both fundamental and more advanced topics in point-based computer graphics. The book is based on a series of courses that we and some of the authors taught over the past five years at major graphics conferences. We have extended our material significantly, and we have invited numerous prolific authors in the field to contribute to this publication.

    The book assumes familiarity with the standard computer graphics techniques for surface representation, modeling, and rendering. No previous knowledge about point-based methods is required. The book is suitable for both classroom and professional use. The comprehensive coverage of the topic makes the book a reference and teaching tool, and the in-depth coverage of algorithms as well as the inclusion of the Pointshop3D open-source system makes it very attractive for developers.

    The book is intended for researchers and developers with a background in traditional (polygon-based) computer graphics. They will obtain a state-of-the-art overview of the use of points to solve fundamental computer graphics problems such as surface data acquisition, representation, processing, modeling, and rendering. With this book, we hope to stimulate research and development of point-based methods in games, entertainment, special effects, visualization, digital content creation, and other areas. For instance, game developers will learn how to use point-based graphics for game characters and special effects (physics, water, etc.) employing real-time rendering on graphics processing units (GPUs). Developers in the movies and special effects industry will learn how to use points for offline, high-quality global illumination, character rendering, and physics. Engineers will learn how to process huge point clouds that naturally arise during object scanning. Architects of current GPUs (e.g., at NVIDIA and ATI) will learn what operations need to be implemented or accelerated to facilitate point-based graphics. Digital content creators and artists will use Pointshop3D for the creation of very complex models.

    We believe that point-based graphics holds huge potential for future research and development and might influence the way we do computer graphics in the future. We hope that this book will stimulate new ideas in this rapidly moving field and that it will convince more graphics researchers and developers of the utility of point-based graphics.

    1.2

    BOOK ORGANIZATION

    The book organization essentially follows the 3D content creation pipeline, as outlined in Figure 1.1.

    Figure 1.1 The 3D graphics content-creation pipeline serves as a model for the book’s organization.

    Historically, points have received relatively little attention in computer graphics. Yet, there has been fundamental work that laid ground for the more recent developments. In Chapter 2, Marc Levoy will present an historical perspective on the topic. He will highlight early work on point-based modeling and rendering, and will point out how this work provided a basis for the subsequent chapters of this book.

    The first stage in Figure 1.1 involves the acquisition of point clouds from real-world models by means of 3D scanning and reconstruction. Chapter 3 will give a comprehensive overview of the state of the art in 3D acquisition and scanning methods for point-sampled models. The authors focus on both geometry and appearance acquisition. The discussed algorithms and systems will familiarize the reader with the essentials of scanning technology, including a practical guide to building a low-cost 3D scanning system. The final topic of this chapter is devoted to sophisticated appearance acquisition using 3D photography.

    The next stage in the content creation pipeline includes mathematical methods to reconstruct surfaces from point clouds and to deal with the discrete nature of point sets. Chapter 4 acquaints the reader with the mathematical and algorithmic fundamentals of point-based surface representations. It describes the basic concepts of discrete differential geometry and topology as well as specific representations, such as the well-known moving least squares (MLS) method. Other topics of the chapter are discretization and sampling, as well as an overview of the most important data structures for point-based representations. The chapter concludes with a presentation of real-time, iterative refinement methods.
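    To make the MLS idea concrete, the following sketch performs one simplified MLS-style projection step in 2D: it fits a weighted least-squares line to the samples near a query point and projects the point onto that line. The function name, the Gaussian weight of width h, and the 2D setting are our own illustrative choices, not the book's notation.

```python
import math

def wls_line_projection(points, q, h=0.5):
    """Simplified 2D MLS-style step: fit a weighted least-squares line to
    the neighbors of q (Gaussian weights of width h) and project q onto it.
    Illustrative sketch only."""
    # Gaussian weights centered at the query point.
    w = [math.exp(-((x - q[0]) ** 2 + (y - q[1]) ** 2) / h ** 2)
         for x, y in points]
    sw = sum(w)
    # Weighted centroid of the neighborhood.
    cx = sum(wi * x for wi, (x, y) in zip(w, points)) / sw
    cy = sum(wi * y for wi, (x, y) in zip(w, points)) / sw
    # Weighted covariance entries; the line direction is the dominant
    # eigenvector of this symmetric 2x2 matrix.
    sxx = sum(wi * (x - cx) ** 2 for wi, (x, y) in zip(w, points))
    sxy = sum(wi * (x - cx) * (y - cy) for wi, (x, y) in zip(w, points))
    syy = sum(wi * (y - cy) ** 2 for wi, (x, y) in zip(w, points))
    # Closed-form angle of the dominant eigenvector.
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    dx, dy = math.cos(theta), math.sin(theta)
    # Project q onto the fitted line through the weighted centroid.
    t = (q[0] - cx) * dx + (q[1] - cy) * dy
    return (cx + t * dx, cy + t * dy)

# Noisy samples of the line y = 0; a point above the line is pulled back
# toward the fitted surface.
pts = [(0.0, 0.02), (0.2, -0.01), (0.4, 0.03), (0.6, -0.02), (0.8, 0.01)]
print(wls_line_projection(pts, (0.4, 0.2)))
```

    Repeating such a projection for every query point yields a smooth, continuously defined surface from the raw samples, which is the essence of the MLS construction the chapter develops.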

    Once the surface representations are in place, the next step in the content creation pipeline is the digital processing, filtering, modeling, and editing of point models. Chapter 5 is devoted to the digital processing of point-sampled models. It demonstrates the versatility of point-sampled representations that combine the simplicity of conventional image editing operations with the power of advanced 3D modeling methods. The chapter includes a variety of preprocessing methods, such as model cleaning, filtering, and feature extraction, as well as photo editing operations. More advanced shape modeling operations, like deformations and constructive solid geometry (CSG), will also be discussed. The chapter is closely related to the core functions of Pointshop3D, the software accompanying the book.

    The final stage in our content creation pipeline is high-quality and efficient display of the point model. Novel rendering pipelines and concepts had to be devised for point-based models. Chapter 6 presents a comprehensive overview of high-quality rendering methods for point-sampled geometry. It starts with a review of the fundamentals of surface splatting, one of the most widely used techniques for point rendering. More advanced and hardware-accelerated methods for point splatting will be discussed next. Finally, we explain ray-tracing methods for point-sampled geometry and acceleration structures for high-performance point rendering.

    Very often, graphics models have to be animated; i.e., their shape and attributes have to be controlled and altered over time. Due to the complexity of the topic, animation cannot be treated comprehensively. But Chapter 7 will describe physically based animation using point-sampled representations. This topic has emerged recently as a promising alternative to conventional finite element simulation. It is inspired by so-called meshless methods, where the continuum is discretized using unstructured point samples. We will demonstrate that such methods allow for a wide spectrum of material simulations, including brittle fracture, elastic and plastic deformations, and fluids. Such physical point representations are combined with high-resolution point-sampled surface geometry.

    The concluding Chapter 8 contains a collection of selected topics related to point-based computer graphics. One such method is the dynamic representation, compression, and display of 3D video. A second one is the modeling and analysis of uncertainty in point clouds. A further topic discusses point-based visualization of attributed datasets. Another contribution addresses the computation of global illumination in point-sampled scenes and shows how such methods are used in a production environment. The chapter demonstrates the versatility and application potential of point-based methods.

    1.3

    COMMON ISSUES AND RECURRING PATTERNS

    Points are clearly the simplest of all graphics primitives. Throughout the book, there are recurring issues inherent to point-based graphics that can be summarized as follows.

    Points generalize pixels and voxels toward irregular samples of geometry and appearance. The most significant conceptual difference from triangles is that points—much as voxels or pixels—carry all attributes needed for processing and rendering. There is no longer a distinction between vertex and fragment.

    As a sampled representation including geometry and (prefiltered) appearance, point representations allow one to carry over some of the computationally expensive fragment processing, such as filtering, to the preprocessing stage. This uniform treatment of geometry and appearance creates the potential for designing leaner graphics pipelines. Of course, this simplified processing comes at a price. Straightforward framebuffer projection leaves holes in the image that have to be filled for close-up views. Point models also require a denser sampling compared to triangle meshes. The higher resolution of the representation potentially leads to increased bandwidth requirements between the central processing unit (CPU) and GPU. In some sense, bandwidth has to be traded for processing speed.

    Points, in their purest form, do not store any connectivity or topology. Since many 3D acquisition algorithms generate point clouds as output, points naturally serve as the canonical representation for 3D acquisition systems. In contrast, triangle meshes are the result of 3D reconstruction algorithms and require prior assumptions on topology and sampling. The lack of topology and connectivity, however, is both a strength and a weakness. The atomic nature of a point sample gives the representation a built-in level of detail (LOD), making it possible, for instance, to stream and render point clouds progressively.

    Points have proven their ability to model complex geometry. Their lack of connectivity enables one to conveniently resample without the need to restructure the representation on the fly. Resampling, one of the key ingredients of many point graphics algorithms, can be accomplished in many different ways. Continuous surface reconstructions are provided by the many versions of MLS. The lack of connectivity makes changes of model topology more accessible, but comes at a cost. k-nearest neighborhoods, needed for many surface processing algorithms, have to be computed on the fly. This, in turn, requires more elaborate data structures, such as k-d trees or spatial hashing. Also, improperly sampled point models give no guarantees on topological correctness, which may or may not be a problem. The flexibility of dynamic adjacency computation is especially efficient if the model size is large and the operations are local. Some researchers have resorted to caching strategies to retain some static adjacency in the representation.
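    As a rough illustration of such on-the-fly neighborhood queries, here is a minimal spatial-hash sketch in Python: points are bucketed into a uniform grid, and a k-nearest-neighbor query widens the searched block of cells until the k-th best candidate provably lies inside it. The names and the fixed search bound are our own; production systems use tuned k-d trees or hash grids.

```python
from collections import defaultdict

def build_hash(points, cell):
    """Bucket 3D points into a uniform grid keyed by integer cell coords."""
    grid = defaultdict(list)
    for p in points:
        grid[tuple(int(c // cell) for c in p)].append(p)
    return grid

def k_nearest(grid, cell, q, k):
    """Return the k stored points nearest to q, expanding the searched
    block of cells until the answer is provably complete."""
    qk = tuple(int(c // cell) for c in q)
    d2 = lambda p: sum((a - b) ** 2 for a, b in zip(p, q))
    cand = []
    for r in range(1, 64):  # arbitrary safety bound for sparse data
        # Gather every point in the (2r+1)^3 block of cells around q.
        cand = [p
                for dx in range(-r, r + 1)
                for dy in range(-r, r + 1)
                for dz in range(-r, r + 1)
                for p in grid.get((qk[0] + dx, qk[1] + dy, qk[2] + dz), [])]
        if len(cand) >= k:
            cand.sort(key=d2)
            # The block contains all points within r*cell of q, so the
            # result is exact once the k-th candidate lies inside that radius.
            if d2(cand[k - 1]) <= (r * cell) ** 2:
                return cand[:k]
    return sorted(cand, key=d2)[:k]

# A 5x5 grid of samples in the z = 0 plane.
samples = [(float(x), float(y), 0.0) for x in range(5) for y in range(5)]
grid = build_hash(samples, cell=1.0)
print(k_nearest(grid, 1.0, (0.1, 0.1, 0.0), 3))
```

    The cost of each query depends only on the local sample density, which is why dynamic adjacency pays off when the model is large and operations are local.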

    Similar observations hold for physically based simulations. Meshless methods have successfully been applied to compute elastic and plastic deformations as well as fracturing of solid objects. It has been shown that the absence of a rigid mesh structure facilitates the modeling of phase transitions, for example, during melting. The proposed methods are robust and render visually plausible results. In addition to the use of points for the discretization of computational domains, some research has been done to reconstruct and animate the corresponding surfaces using point samples. Again, the previously discussed properties of point representations help to conveniently change topology (fracture, melting) or resample dynamically (deformation).

    In summary, point primitives constitute a simple and versatile low-level graphics and visualization primitive. As a representation, points have different strengths and weaknesses compared to other graphics primitives. They are not going to replace the existing ones, but have proven their ability to complement them. Many technical issues related to point-based graphics boil down to reconstruction and resampling. As a sample-based approach to graphics, points stimulate us to take a signal-processing view of graphics and visualization.

    1.4

    ACKNOWLEDGMENTS

    This book reflects a significant part of our own research and experience in this area, collected and carried out over the past years. There are many individuals who have contributed to its completion, in small and in large ways.

    First, we would like to thank all authors for all the work and effort they have put into their contributions to bring the book to completion. We also thank all reviewers for providing very valuable feedback in various stages of the manuscript. In particular, Mario Botsch and Miguel Otaduy from the Computer Graphics Laboratory in Zürich helped greatly with the revision of the manuscript.

    We were delighted that Turner Whitted from Microsoft Research, one of the pioneers of point-based graphics, agreed to write the excellent foreword to our book. We were also very pleased to read the endorsements from some of the most distinguished senior researchers of our community, including Michael Cohen from Microsoft Research, Fredo Durand from MIT, Henry Fuchs from University of North Carolina, Leo Guibas from Stanford University, and Arie Kaufman from SUNY at Stony Brook.

    Our special thanks go to Rolf Adelsberger, who did an invaluable job in keeping the sources of the document consistent, tracking authors and changes, compiling the manuscript for the publisher, checking figures and references, and assembling the Pointshop3D resources. Richard Keiser also helped in many ways to put together the CD accompanying this book. Martin Wicke and Doo-Young Kwon assisted us in setting up internal websites for communication with our authors. We also thank all the students and collaborators involved with the projects that are summarized in this book. Without their help and effort the presented results would not have been possible.

    We thank our publisher, Elsevier, for accepting our proposal and everybody involved for bringing this book to life. Special thanks go to Tim Cox for his patience and support of the project, and to Dave Eberly for his encouragement and help. Our gratitude also goes to our production editors, Alan Rose and Darice Moore from Multiscience Press, our project managers, Dawnmarie Simpson, Michelle Ward and Michele Cronin, our acquisitions editor, Tiffany Gasbarrini, and our publisher, Denise E. M. Penrose.

    Many thanks go also to our employers, ETH Zürich and Mitsubishi Electric Research Laboratories (MERL), for giving us the freedom and flexibility to work on this project. Finally, our work and careers would be meaningless without the love and support of our wives and children, Jennifer, Lilly, and Audrey Pfister, and Lisa, Jana, and Adrian Gross.

    2

    THE EARLY HISTORY OF POINT-BASED GRAPHICS

    Marc Levoy,      Stanford University, Computer Graphics Laboratory, Stanford, CA 94305, USA, Tel: +1 650 725 4089, Fax: +1 650 723 0033. E-mail address: levoy@cs.stanford.edu

    Marc Levoy

    Why is it worthwhile to study where an idea came from? Thomas Kuhn, writing in The Structure of Scientific Revolutions, notes that scientists like to see their discipline’s past developing linearly toward its present vantage [Kuh62]. As a result, textbooks often discard or obscure the origins of ideas, thereby robbing students of the experience of a scientific revolution. This in turn makes them unable to realize when one is upon them and ignorant about how to act in these circumstances. I do not claim that point-based rendering was a scientific revolution, at least not in 1985 when Turner Whitted and I wrote our first paper on the topic. However, that paper was written in response to a scientific crisis, which bears some of the same characteristics. As a technical achievement, our paper was a failure. However, as a story of crisis and response it is instructive. In this spirit I offer the following historical account.

    2.1

    SAMPLE-BASED REPRESENTATIONS OF GEOMETRY

    Since the beginning of computer graphics, a creative tension has existed between representing scenes as geometry versus as collections of samples. Early sample-based representations included textures, sprites, range images, and density volumes. More recent examples include light fields, layered depth images, image caches, and so on. Points are another such representation, often used to approximate curved surfaces as this book amply demonstrates. In each case researchers faced a common set of challenges: how to edit the scene by manipulating its samples, how to store and compress these samples, how to transform and shade them, and how to render them with correct sampling, visibility, and filtering.

    However, to understand the early history of point rendering, we must understand a different tension that existed in the early history of computer graphics, one between image-order and object-order algorithms for displaying geometric primitives. It was in response to this tension that Turner Whitted and I proposed points as a way to display curved surfaces [LW85]. And it was on the shoals of sampling, visibility, and filtering that our idea ran aground. Let us see why.

    2.2

    IMAGE-ORDER VERSUS OBJECT-ORDER VISIBILITY AND ANTIALIASING

    In their seminal paper on hidden-surface algorithms [SSS74], Ivan Sutherland et al. showed that visibility is tantamount to sorting. As any student of computing knows, sorting N objects into P bins can be done using a gather or a scatter. In computer graphics, the gather strategy leads to an image-order algorithm. One example is ray tracing [Whi80]; for the viewing ray associated with each image pixel, search among the geometric primitives in a scene for the frontmost primitive intersecting that ray. By contrast, the scatter strategy leads to an object-order algorithm. The most common of these is the Z-buffer [Cat74]; create an array as large as the screen, and for each primitive decide which pixel it falls into. While building such an array was expensive in the 1970s, causing Sutherland et al. [SSS74] to dismiss the Z-buffer algorithm as hopelessly impractical, a steady decline in semiconductor memory prices eventually made this and other object-order algorithms both practical and attractive. Image-order traversal is particularly easy to implement because the number of samples that should be taken of the primitive is obvious: one per pixel. For an object-order algorithm, enough samples must be taken to avoid leaving any pixels uncovered, but not so many that the algorithm becomes inefficient.
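    The scatter strategy is easy to sketch for point primitives. The following toy Z-buffer (our own minimal Python rendition, not code from the chapter) keeps, per pixel, the frontmost of all points that land there:

```python
def zbuffer_points(points, width, height):
    """Object-order (scatter) visibility for point primitives: each point
    lands in one pixel, and the smallest z wins."""
    INF = float("inf")
    depth = [[INF] * width for _ in range(height)]
    color = [[None] * width for _ in range(height)]
    for x, y, z, c in points:
        px, py = int(x), int(y)  # the pixel this point falls into
        if 0 <= px < width and 0 <= py < height and z < depth[py][px]:
            depth[py][px] = z    # this point is now the frontmost sample
            color[py][px] = c
    return color

# Two points projecting to the same pixel (1, 0); the nearer one survives.
pts = [(1.2, 0.7, 5.0, "red"), (1.6, 0.3, 2.0, "blue")]
img = zbuffer_points(pts, 4, 4)
print(img[0][1])
```

    The memory cost Sutherland et al. objected to is visible here: the depth and color arrays scale with the screen, not with the scene.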

    To avoid aliasing artifacts in computer-generated images, each pixel should be assigned not a point sample of the scene but instead a sample of the convolution of the scene by a filter function. Repeating this process for every pixel in a two-dimensional (2D) image, and assuming the filter is a discrete 2D function, we obtain four nested loops. Since convolution is linear, these loops can be rearranged so that the outer loop is over image pixels, leading to an image-order algorithm, or over points on the scene primitives at some resolution, leading to an object-order algorithm. As was the case for visibility, antialiasing poses fewer problems if implemented in image order. In an influential early paper, Edwin Catmull [Cat78] observed that to compute a correct color in a pixel, only those primitives or portions of primitives that lie frontmost within the filter kernel centered at the pixel should be included in the convolution. This is easy in an image-order algorithm, because all primitives that might contribute to the pixel are evaluated at once. In an object-order algorithm, solving this problem requires retaining subpixel geometry for every primitive in every pixel. To avoid this difficulty, researchers have proposed computing visibility at a higher resolution than the pixel spacing (by supersampling and averaging down), approximating subpixel geometry using a bitmask [Car84], or summarizing it as a scalar value (called alpha), leading to digital compositing [Wal81, PD84]. In some rendering algorithms, subpixel geometry has been used as both an alpha value and a filter weight, leading to problems of correctness to which I will return later.
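    The loop-reordering argument can be checked directly. In the 1D sketch below (our own illustration, with a hypothetical 3-tap kernel), the image-order gather and the object-order scatter compute exactly the same filtered image, since each merely reorders the terms of the same convolution sum:

```python
def gather(samples, kernel, n):
    """Image-order: the outer loop is over output pixels, each of which
    gathers the filtered contributions of every scene sample."""
    r = len(kernel) // 2
    out = [0.0] * n
    for i in range(n):                       # loop over image pixels
        for j, s in enumerate(samples):      # loop over scene samples
            k = i - j + r
            if 0 <= k < len(kernel):
                out[i] += s * kernel[k]
    return out

def scatter(samples, kernel, n):
    """Object-order: the outer loop is over scene samples, each of which
    scatters (splats) its filtered footprint into the image."""
    r = len(kernel) // 2
    out = [0.0] * n
    for j, s in enumerate(samples):          # loop over scene samples
        for k, w in enumerate(kernel):       # loop over kernel taps
            i = j + k - r
            if 0 <= i < n:
                out[i] += s * w
    return out

s = [0.0, 0.0, 1.0, 0.0, 0.0, 2.0, 0.0]
tent = [0.25, 0.5, 0.25]
print(gather(s, tent, 7))
print(scatter(s, tent, 7))
```

    Both loops add the identical set of terms s[j] * kernel[i - j + r] into out[i]; only the traversal order differs, which is exactly why the choice between image order and object order is a question of efficiency and visibility, not of the filtered result.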

    2.3

    THE CHALLENGE POSED BY PROCEDURAL MODELING

    If it is easier to render scenes in image order, why did researchers develop object-order algorithms? The answer lies in the convenience of procedural modeling, which may be loosely defined as the generation of scene geometry using a computer algorithm (rather than interactively or by sensing). Examples of procedural modeling include fractal landscapes [Car80], clouds [Gar85], plants [PL90], and generative surface models [Sny92]. Although some cite Levoy and Whitted [LW85] as introducing points as primitives, procedurally generated points or particles had already been used to model smoke [CHP+79], clouds [Bli82b], fire [Ree83], and tree leaves and grass [Ree85].

    To render a procedurally defined object using favored image-order algorithms, one must be able to compute for a given pixel which part of the object (if any) lands there. If the procedure is expensive to invert in this sense, or even uninvertible, then an object-order rendering algorithm must be used. During the 1970s and early 1980s, researchers invested considerable effort in resolving this conflict between rendering order and geometry traversal order. As an example, Reeves [Ree85] modeled tree leaves as circular particles with semitransparent fringes. To decide how many particles to draw for each tree, he examined its approximate size on the screen. He rendered these particles using an object-order algorithm. In this algorithm, transparency could be used either as a filter weight or a compositing alpha, but not both, as noted earlier. To resolve this ambiguity, Reeves sorted his particles into buckets by screen location and Z-depth, treated transparency as weight, and additively accumulated color and weight in each pixel. When a bucket was finished, it would be combined with other buckets using digital compositing, with the accumulated weight in each pixel now serving as its alpha value. While not exact, this algorithm worked well for irregular geometry like trees and grass.
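    The bucket scheme described above can be caricatured in a few lines. This is a loose Python sketch of the idea, not Reeves's actual algorithm: within a bucket, weighted colors and weights are accumulated additively, and a finished bucket is then combined with the image behind it using the accumulated weight as alpha in an over-style composite (premultiplied colors assumed; all names are ours):

```python
def accumulate_bucket(splats, n):
    """Within one bucket: additively accumulate weighted color and total
    weight per pixel, treating particle transparency as a filter weight."""
    col = [0.0] * n
    wgt = [0.0] * n
    for px, c, w in splats:      # (pixel index, color, weight) per particle
        col[px] += c * w
        wgt[px] += w
    return col, wgt

def composite_over(front, back):
    """Combine a finished bucket with the image behind it. The bucket's
    accumulated weight now plays the role of alpha in the over operator."""
    (fc, fa), (bc, ba) = front, back
    out_c = [fci + bci * max(0.0, 1.0 - fai)
             for fci, bci, fai in zip(fc, bc, fa)]
    out_a = [min(1.0, fai + bai * max(0.0, 1.0 - fai))
             for fai, bai in zip(fa, ba)]
    return out_c, out_a

# Two white-ish particles land in pixel 0 of a one-pixel bucket, then the
# bucket is composited over a mid-gray, fully opaque background.
col, wgt = accumulate_bucket([(0, 1.0, 0.5), (0, 1.0, 0.3)], 1)
print(composite_over((col, wgt), ([0.5], [1.0])))
```

    The inexactness the text mentions is visible here: the same scalar serves first as a filter weight and then as a coverage alpha, which is only an approximation of the true subpixel geometry.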

    Another important class of procedurally defined objects is parametric surfaces. For given values of the parameters s and t, it is straightforward to evaluate the surface functional, yielding an (x, y) position on the screen. However, for a given pixel position it may be difficult to determine whether the surface touches it. For parametric bicubic surfaces, some researchers attacked this inverse problem head-on, developing scanline algorithms that directly computed the curves of intersection [Bli78, Whi78]. However, these algorithms were fragile and difficult to implement efficiently. Others proposed an object-order approach, subdividing the surface recursively in parametric space into patches until their projection covered no more than one pixel [Cat74]. Still others proposed hybrid solutions, subdividing the surface recursively until it was locally flat enough [Cla79, LCWB80] (or detailed enough in the case of fractal surfaces [Car80]) to represent using a simpler primitive that could be rendered using an image-order algorithm. Another hybrid solution was to partially evaluate the procedural geometry, producing an estimate of its spatial extent in the form of an image space decomposition [RW80] or collection of bounding boxes [Kaj83]; the overlap between these extents and screen pixels could then be evaluated in image order.
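    The hybrid subdivide-until-flat strategy is easy to demonstrate on a curve. The sketch below (our own 2D illustration using a cubic Bezier rather than a bicubic patch) splits recursively with de Casteljau's construction until each piece is flat to a tolerance, then emits its endpoints as renderable segments:

```python
def subdivide_flat(ctrl, tol=0.01, out=None):
    """Recursively split a 2D cubic Bezier until each piece is locally
    flat, then emit its endpoints as a polyline approximation."""
    if out is None:
        out = [ctrl[0]]
    p0, p1, p2, p3 = ctrl

    def dist(p, a, b):
        # Distance from point p to the infinite line through a and b.
        (ax, ay), (bx, by), (px, py) = a, b, p
        dx, dy = bx - ax, by - ay
        L = (dx * dx + dy * dy) ** 0.5 or 1.0
        return abs((px - ax) * dy - (py - ay) * dx) / L

    # Flatness test: how far do the inner control points stray from the
    # chord p0-p3?
    if max(dist(p1, p0, p3), dist(p2, p0, p3)) < tol:
        out.append(p3)          # flat enough: a line segment suffices
        return out

    mid = lambda a, b: ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    # de Casteljau split at t = 0.5 into two smaller cubics.
    a, b, c = mid(p0, p1), mid(p1, p2), mid(p2, p3)
    d, e = mid(a, b), mid(b, c)
    f = mid(d, e)
    subdivide_flat((p0, a, d, f), tol, out)
    subdivide_flat((f, e, c, p3), tol, out)
    return out

pts = subdivide_flat(((0, 0), (1, 2), (2, 2), (3, 0)))
print(len(pts), "points")
```

    Because each emitted piece is flat to within the tolerance, the resulting segments can be handed to any ordinary image-order rasterizer, which is precisely the division of labor the hybrid schemes above exploit.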
