
Visible-Surface Determination

Given a set of 3-D objects and a viewing position, we wish to determine which lines or surfaces of the objects are visible, either from the COP (for perspective projections) or along the direction of projection (for parallel projections), so that we can display only the visible lines or surfaces. Visibility tests are conducted to determine which surfaces are visible from a given viewpoint. This process is known as visible-line or visible-surface determination, or hidden-line or hidden-surface elimination. There are two fundamental approaches to visible-surface determination, according to whether they deal with the projected images of the objects or with the object definitions directly. These two approaches are called the image-space approach and the object-space approach, respectively.

1. The first approach (image-space) determines which of the n objects in the scene is visible at each pixel in the image. The pseudocode for this approach is:

    for (each pixel in the image) {
        determine the object closest to the viewer that is pierced by the projector through the pixel;
        draw the pixel in the appropriate color;
    }

That is, in an image-space algorithm, visibility is decided point by point at each pixel position on the projection plane. If the number of objects is n and the number of pixels is p, the effort is proportional to n.p. Image-space approaches require two buffers: one for storing the pixel intensities and another for recording the depth of the visible surface from the view plane at each pixel. A small C++ sketch of this per-pixel loop is given below.

2. The second approach (object-space) compares all objects directly with each other within the scene definition and eliminates those objects, or portions of objects, that are not visible. In terms of pseudocode, we have:

    for (each object in the world) {
        determine those parts of the object whose view is unobstructed (not blocked) by other parts of it or by any other object;
        draw those parts in the appropriate color;
    }

This approach compares each of the n objects to itself and to the other objects, discarding invisible portions. Thus, the computational effort is proportional to n^2.
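To make the image-space loop concrete, here is a minimal C++ sketch under deliberately simple assumptions: each object projects to an axis-aligned screen rectangle at a constant depth, so the "projector through the pixel" test is just a point-in-rectangle check. The Object fields and the renderImageSpace name are illustrative choices, not from the text.

    #include <vector>
    #include <limits>

    struct Color { float r, g, b; };

    struct Object {
        int x0, y0, x1, y1;   // screen rectangle covered by the projection
        float z;              // constant depth (a deliberate simplification)
        Color color;
        // Depth at which the projector through (x, y) pierces the object,
        // or a negative value if it misses.
        float depthAt(int x, int y) const {
            return (x >= x0 && x < x1 && y >= y0 && y < y1) ? z : -1.0f;
        }
    };

    // Image-space visibility: for each pixel, keep the closest object hit.
    void renderImageSpace(const std::vector<Object>& scene, std::vector<Color>& frame,
                          int width, int height, Color background) {
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                float nearest = std::numeric_limits<float>::max();
                Color c = background;
                for (const Object& obj : scene) {     // test all n objects per pixel
                    float t = obj.depthAt(x, y);
                    if (t >= 0.0f && t < nearest) { nearest = t; c = obj.color; }
                }
                frame[y * width + x] = c;             // draw the pixel
            }
    }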

1. Back-Face Detection Method


A fast and simple object-space method for identifying the back faces of a polyhedron is based on the "inside-outside" test. A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if

A.x + B.y + C.z + D < 0

When an inside point is along the line of sight to the surface, the polygon must be a back face (we are inside that face and cannot see the front of it from our viewing position). We can simplify this test by considering the normal vector N to a polygon surface, which has Cartesian components (A, B, C). In general, if V is a vector in the viewing direction from the eye (or "camera") position, then this polygon is a back face if

V . N > 0

Furthermore, if object descriptions have been converted to projection coordinates and our viewing direction is parallel to the viewing z-axis, then V = (0, 0, Vz) and

V . N = Vz . C

so that we only need to consider the sign of C, the z component of the normal vector N. In a right-handed viewing system with viewing direction along the negative zv axis, the polygon is a back face if C < 0. Also, we cannot see any face whose normal has z component C = 0, since our viewing direction is grazing that polygon. Thus, in general, we can label any polygon as a back face if its normal vector has a z component value satisfying:

C <= 0

Similar methods can be used in packages that employ a left-handed viewing system. In these packages, plane parameters A, B, C and D can be calculated from polygon vertex coordinates specified in a clockwise direction (instead of the counterclockwise direction used in a right-handed system). Also, back faces have normal vectors that point away from the viewing position, and they are identified by C >= 0 when the viewing direction is along the positive zv axis. By examining parameter C for the different planes defining an object, we can immediately identify all the back faces. A small sketch of this test is given below.
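The back-face test is just a sign check on a dot product. A minimal C++ sketch, assuming the plane parameters (A, B, C) are the components of the polygon's outward normal:

    struct Vec3 { double x, y, z; };

    double dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // General test: the polygon is a back face if V . N > 0,
    // where V is the viewing direction and N = (A, B, C) is the normal.
    bool isBackFace(const Vec3& viewDir, const Vec3& normal) {
        return dot(viewDir, normal) > 0.0;
    }

    // Special case from the text: right-handed system, viewing along
    // the negative zv axis, so only the sign of C matters.
    bool isBackFaceRH(double C) {
        return C <= 0.0;   // C == 0 means the view direction grazes the face
    }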

2. Depth-buffer (or z-buffer) Method

The depth-buffer method is a fast and simple technique for identifying visible surfaces. It is also referred to as the z-buffer method, since object depth is usually measured from the view plane along the z-axis of the viewing system. The algorithm compares surface depths at each pixel position (x, y) on the view plane. Here we make the following assumptions: 1. The plane of projection is the z = 0 plane. 2. Orthographic parallel projection.

For each pixel position (x, y) on the view plane, the surface with the smallest z-coordinate at that position is visible. For example, Figure 3 shows three surfaces S1, S2, and S3, of which surface S1 has the smallest z-value at position (x, y). So surface S1 is visible at that position, and its surface intensity value at (x, y) is saved in the refresh-buffer.

Here the projection is orthographic and the projection plane is taken as the xy-plane. So, each (x, y, z) position on the polygon surfaces corresponds to the orthographic projection point (x, y) on the projection plane. Therefore, for each pixel position (x, y) on the view plane, object depths can be compared by comparing z-values, as shown in the figure. Implementing the z-buffer algorithm requires two buffer areas (two 2-D arrays): 1) Depth-buffer: z-buffer(i, j), to store the smallest z-value encountered so far for each (x, y) position on the view plane. 2) Refresh-buffer: COLOR(i, j), to store the intensity value for each position. Algorithm: Given: a list of polygons {P1, P2, ..., Pn}. Step 1: Initially all positions (x, y) in the depth-buffer are set to 1.0 (maximum depth) and the refresh-buffer is initialized to the background intensity, i.e., z-buffer(x, y) := 1.0 and COLOR(x, y) := background color.

Step 2: Each polygon surface (listed in the polygon table) is then processed one scan line at a time, calculating the depth (z-value) at each (x, y) pixel position. The calculated depth is compared to the value previously stored in the depth-buffer at that position to determine visibility. a) If the calculated z-depth is less than the value stored in the depth-buffer, the new depth value is stored in the depth-buffer, and the surface intensity at that position is determined and placed in the same (x, y) location in the refresh-buffer, i.e., if z-depth < z-buffer(x, y), then set z-buffer(x, y) = z-depth and COLOR(x, y) = Isurf(x, y), where Isurf(x, y) is the projected intensity value of the polygon surface Pi at pixel position (x, y). b) After all surfaces have been processed, the depth-buffer contains depth values for the visible surfaces and the refresh-buffer contains the corresponding intensity values for those surfaces.

Calculation of depth values z for a surface position (x, y): We know that for any polygon face, the equation of the plane is of the form:

A.x + B.y + C.z + D = 0 --------------------(1)

where A, B, C, D are known to us. To calculate the depth value z, we solve the plane equation (1) for z:

z = (-A.x - B.y - D)/C --------(2)

Consider a polygon intersected by scan-lines at y and y-1 on the y-axis.

Now, if at position (x, y) equation (2) evaluates to depth z, then at the next position (x+1, y) along the scan line, the depth zH can be obtained as:

zH = [-A.(x+1) - B.y - D]/C --------------(3)

From equations (2) and (3), we have

z - zH = A/C, i.e., zH = z - A/C -----------------(4)

The ratio A/C is constant for each surface, so we can obtain succeeding depth values across a scan-line from the preceding values by a single addition. On each scan-line, we start by calculating the depth on the left edge of the polygon that intersects that scan-line, and then proceed to calculate the depth at each successive position across the scan-line by equation (4) till we reach the right edge of the polygon. Similarly, if we are processing down the polygon, consider the vertical line x intersecting the (y-1)th scan-line at the point (x, y-1). From equation (2) the depth zv is obtained as:

zv = [-A.x - B.(y-1) - D]/C = ([-A.x - B.y - D]/C) + B/C = z + B/C ----------------(5)

Starting at the top vertex, we can recursively calculate the x position down the left edge of the polygon from the relation x' = x - 1/m, where m is the slope of the edge. Using this x position, the depth z' at (x', y-1) on the (y-1)th scan-line is obtained as:

z' = [-A.x' - B.(y-1) - D]/C = [-A.(x - 1/m) - B.(y-1) - D]/C = z + (A/m + B)/C ---------------------(6)

Since m = infinity for a vertical edge, equation (6) reduces to equation (5). Thus, if we are processing down a vertical edge, we can obtain succeeding depth values from the preceding values by a single addition, using equation (5): zv = z + B/C. A sketch of the scan-line depth update is given below.
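A minimal C++ sketch of the core depth-buffer update, using the incremental step of equation (4) across a scan line. The span inputs (left pixel, right pixel, and the depth at the left edge) are assumed to come from polygon scan conversion, which is omitted here:

    #include <vector>

    struct ZBuffer {
        int width, height;
        std::vector<float> depth;   // z-buffer(x, y), initialized to 1.0 (max depth)
        std::vector<int> color;     // COLOR(x, y), initialized to the background

        ZBuffer(int w, int h, int background)
            : width(w), height(h), depth(w * h, 1.0f), color(w * h, background) {}

        // One scan-line span of a polygon with plane A.x + B.y + C.z + D = 0.
        // zLeft is the depth at (xLeft, y); equation (4) gives each next depth.
        void drawSpan(int y, int xLeft, int xRight, float zLeft,
                      float A, float C, int surfaceColor) {
            float z = zLeft;
            for (int x = xLeft; x <= xRight; ++x) {
                int i = y * width + x;
                if (z < depth[i]) {        // closer than anything seen so far
                    depth[i] = z;
                    color[i] = surfaceColor;
                }
                z -= A / C;                // zH = z - A/C, one addition per pixel
            }
        }
    };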

Advantages (z-buffer method): 1) The z-buffer method is easy to implement and requires no sorting of the surfaces in a scene. 2) In the z-buffer algorithm, an arbitrary number of objects can be handled, because each object is processed one at a time; the number of objects is limited only by the computer's memory available to store them. 3) Simple hardware implementation. 4) Online algorithm (i.e., we don't need to load all polygons at once in order to run the algorithm). Disadvantages: 1) Doubles the memory requirements (at least): one buffer for depth and one for the refresh-buffer. 2) Device dependent and memory intensive. 3) Wasted computation on drawing distant points that are later drawn over with closer points occupying the same pixel.

4) Spends time rendering polygons that are not visible. 5) Requires recalculation when the scale changes.

3. Scan-Line Method
In contrast to the z-buffer method, where we consider one surface at a time, the scan-line method deals with multiple surfaces. As it processes each scan-line, all polygons intersected by that scan-line are examined to determine which surfaces are visible. The visibility test involves comparing the depths of the overlapping surfaces to determine which one is closer to the view plane; the closest surface is declared visible, and its intensity values at the positions along the scan-line are entered into the refresh-buffer. Assumptions: 1. The plane of projection is the z = 0 plane. 2. Orthographic parallel projection. 3. Objects are made up of polygon faces. The scan-line algorithm solves the hidden-surface problem one scan-line at a time, usually processing scan-lines from the bottom to the top of the display. We require two arrays, intensity[x] and depth[x], to hold the values for a single scan-line.

Here, at Q1 and Q2, both polygons are active (i.e., they overlap on this span of the scan-line). Compare the z-values at Q1 for both planes (P1 and P2). Let z1(1), z1(2) be the z-values at Q1 corresponding to the P1 and P2 polygons, respectively. Similarly, z2(1), z2(2) are the z-values at Q2 corresponding to the P1 and P2 polygons, respectively.

Case 1: If P1 is closer at both points (i.e., z1(1) < z1(2) and z2(1) < z2(2)), the span Q1Q2 is filled with the color of P1.

Case 2: If P2 is closer at both points, the span Q1Q2 is filled with the color of P2. Case 3: The two surfaces intersect between Q1 and Q2. In this case we have to go back pixel by pixel and determine which plane is closer, then choose the color of each pixel accordingly. Algorithm (scan-line): For each scan-line, perform steps (1) through (3). 1) For all pixels on the scan-line, set depth[x] = 1.0 (max value) and intensity[x] = background color. 2) For each polygon in the scene, find all pixels on the current scan-line that lie within the polygon. For each of these x-values: a) calculate the depth z of the polygon at (x, y); b) if z < depth[x], set depth[x] = z and intensity[x] to the polygon's shading. 3) After all polygons have been considered, the values contained in the intensity array represent the solution and can be copied into the frame-buffer. A sketch of this per-scan-line loop is given below. Advantage of the scan-line algorithm: here we work each time with a one-dimensional array, x[0..x_max], for color, rather than a 2-D array as in the depth-buffer algorithm. Ques: Distinguish between the z-buffer method and the scan-line method. What are the visibility tests made in these methods? Ans: In the z-buffer algorithm, every pixel position on the projection plane is considered in determining the visibility of surfaces with respect to that pixel. On the other hand, in the scan-line method all surfaces intersected by a scan-line are examined together for visibility. The visibility test in the z-buffer method involves comparing the depths of surfaces with respect to a pixel on the projection plane; the surface closest to the pixel position is considered visible. The visibility test in the scan-line method compares depth calculations for each overlapping surface to determine which surface is nearest to the view plane, so that it is declared visible.
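A minimal C++ sketch of the per-scan-line loop of this algorithm. The PolySpan record (the x-range a polygon covers on the current scan-line, its depth at the left end, and its per-pixel depth increment) is an assumed product of edge-table scan conversion, which is not shown:

    #include <vector>

    struct PolySpan {
        int xLeft, xRight;   // pixels this polygon covers on the scan-line
        float zLeft, dzdx;   // depth at xLeft, and the per-pixel depth step
        int shade;           // the polygon's intensity value
    };

    // Solve one scan-line, given the spans of every polygon crossing it.
    void scanLine(const std::vector<PolySpan>& spans, std::vector<float>& depth,
                  std::vector<int>& intensity, int background) {
        for (std::size_t x = 0; x < depth.size(); ++x) {  // step 1: reset buffers
            depth[x] = 1.0f;                              // maximum depth
            intensity[x] = background;
        }
        for (const PolySpan& s : spans) {                 // step 2: depth compare
            float z = s.zLeft;
            for (int x = s.xLeft; x <= s.xRight; ++x) {
                if (z < depth[x]) {                       // closer surface wins
                    depth[x] = z;
                    intensity[x] = s.shade;
                }
                z += s.dzdx;
            }
        }
        // Step 3: 'intensity' now holds this scan-line and can be copied
        // into the frame-buffer.
    }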

4. Area-Subdivision Method
This method is essentially an image-space method, but it uses object-space operations for the ordering (sorting) of surfaces according to depth. The method takes advantage of area coherence in a scene by locating those view areas that represent part of a single surface. In this method we successively subdivide the total viewing (screen) area, usually a rectangular window, into smaller rectangles until each small area is the projection of part of a single visible surface or of no surface at all. Assumptions: 1. The plane of projection is the z = 0 plane. 2. Orthographic parallel projection. 3. The direction of projection is d = (0, 0, -1). 4. The viewing (screen) area is a square. 5. Objects are made up of polygon faces.

To implement the area-subdivision method, we need to identify, by means of visibility tests, whether an area is part of a single surface or is more complex. If the tests indicate that the view is sufficiently complex, we subdivide the area. Next, we apply the tests to each of the smaller areas and subdivide further if the tests indicate that the visibility of a single surface is still uncertain. We continue this process until each subdivision is easily analyzed as belonging to a single surface, or until the subdivisions are reduced to the size of a single pixel. Starting with the full screen as the initial area, the algorithm divides an area at each stage into four smaller areas, as shown in the figure, similar to the quad-tree approach.

Tests to determine the visibility of a single surface are made by comparing surfaces (i.e., polygons P) with respect to a given screen area A. There are four possible relationships:

1) Surrounding polygon: a polygon that completely contains the area. 2) Intersecting (or overlapping) polygon: a polygon that intersects the area. 3) Contained polygon: a polygon that is completely contained within the area. 4) Disjoint polygon: a polygon that is completely outside the area.

No further subdivision of a specified area is needed if one of the following conditions is true. Case 1: All the polygons are disjoint from the area. In this case, the background color can be displayed in the area. Case 2: Exactly one polygon face, after projection, is intersecting or contained in the square area. In this case the area is first filled with the background color, and then the part of the polygon contained in the area is scan converted. Case 3: There is a single surrounding polygon, but no intersecting or contained polygons. In this case the area is filled with the color of the surrounding polygon. To determine which of the four categories a polygon falls into, we perform the following tests. Test 1 (disjoint test, using the min-max test): Suppose we have two polygons P1 and P2. The given polygons P1 and P2 are disjoint if any of the following four conditions is satisfied (see figures 11(a) and 11(b)); these four comparisons together are called the min-max test: i) x(1)max < x(2)min ii) x(2)max < x(1)min iii) y(1)max < y(2)min iv) y(2)max < y(1)min. A sketch of this test follows.
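A minimal sketch of the min-max test, assuming each polygon carries its precomputed projected extents (bounding box):

    struct Extent { float xmin, xmax, ymin, ymax; };   // projected bounding box

    // Min-max test: P1 and P2 are certainly disjoint when their bounding
    // boxes fail to overlap in x or in y (conditions i-iv above).
    bool disjointByMinMax(const Extent& p1, const Extent& p2) {
        return p1.xmax < p2.xmin || p2.xmax < p1.xmin ||
               p1.ymax < p2.ymin || p2.ymax < p1.ymin;
    }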

Test 2 (intersection test): If the min-max test fails, we go on to the intersection test. Here we take the edges one by one and check whether they intersect: for each edge of P1 we test for intersection against all the edges of P2 (see figure 12). A sketch of the edge-pair test follows.
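A sketch of the edge-pair check, using the standard orientation (cross-product sign) test for whether two 2-D segments properly cross:

    struct Pt { double x, y; };

    // Sign of the cross product (b - a) x (c - a): tells which side of
    // the line through a and b the point c lies on.
    double orient(const Pt& a, const Pt& b, const Pt& c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    // True if segments ab and cd properly cross. Touching endpoints and
    // collinear overlap are ignored here for brevity.
    bool segmentsIntersect(const Pt& a, const Pt& b, const Pt& c, const Pt& d) {
        double d1 = orient(a, b, c), d2 = orient(a, b, d);
        double d3 = orient(c, d, a), d4 = orient(c, d, b);
        return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
    }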

Test 3 (containment test): If the intersection test also fails, then the polygon must be either a contained polygon or a surrounding polygon, so we perform the containment test. For this test we have the following three cases, shown in figures 13(a), (b) and (c): a) P1 contains P2. b) P2 contains P1. c) P1 and P2 are disjoint.

Case a: Verify whether a vertex of P2 lies inside P1. If it does, then (since no edges intersect) P2 is completely inside P1. Case b: If case (a) does not hold, verify whether P2 contains a vertex of P1. If it does, then P2 contains P1. Case c: If both case (a) and case (b) fail, we conclude that P1 and P2 are disjoint. For a given screen area, we keep a potentially visible polygons list (PVPL) of those polygons in categories 1, 2 and 3. (Disjoint polygons are clearly not visible in the area.) A point-in-polygon sketch for the containment test is given below.
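For the vertex-inside check, the standard even-odd (ray-crossing) rule suffices; a minimal sketch (Pt as in the previous sketch):

    #include <vector>

    struct Pt { double x, y; };   // as in the previous sketch

    // Even-odd rule: cast a ray from p to the right and count edge
    // crossings; an odd count means p is inside the polygon.
    bool pointInPolygon(const Pt& p, const std::vector<Pt>& poly) {
        bool inside = false;
        for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
            if ((poly[i].y > p.y) != (poly[j].y > p.y)) {   // edge spans p.y
                double xCross = poly[j].x + (p.y - poly[j].y) *
                                (poly[i].x - poly[j].x) / (poly[i].y - poly[j].y);
                if (p.x < xCross) inside = !inside;
            }
        }
        return inside;
    }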

Subdivision Algorithm
1) Initialize the area to be the whole screen. 2) Create the PVPL with respect to the area, sorted on zmin (the smallest z coordinate of each polygon within the area). Place the polygons in their appropriate categories. Remove polygons hidden by a surrounding polygon, and remove disjoint polygons. 3) Perform the visibility decision tests: a) If the list is empty, set all pixels to the background color. b) If there is exactly one polygon in the list and it is classified as intersecting (category 2) or contained (category 3), color (scan-convert) the polygon, and color the remaining area with the background color. c) If there is exactly one polygon in the list and it is a surrounding one, color the area with the color of the surrounding polygon. d) If the area is a single pixel (x, y) and neither a, b, nor c applies, compute the z coordinate z(x, y) at pixel (x, y) of all polygons on the PVPL. The pixel is then set to the color of the polygon with the smallest z coordinate. 4) If none of the above cases has occurred, subdivide the screen area into fourths. For each area, go to step 2. A recursive sketch of this subdivision loop follows.
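A structural C++ sketch of the recursive driver. The Polygon contents, the classify test, and the paint routine are placeholders for the category tests and scan conversion described above; the PVPL is assumed already sorted on zmin, the screen is assumed to be a power-of-two square, and the removal of polygons hidden behind a surrounding polygon is omitted:

    #include <vector>
    #include <functional>

    struct Polygon { int shade; };                   // scene polygon (details omitted)
    struct Area { int x, y, size; };                 // square screen region

    enum Category { SURROUNDING, INTERSECTING, CONTAINED, DISJOINT };

    // classify: which of the four categories a polygon is in w.r.t. an area.
    // paint: color an area from a polygon, or with the background if null.
    using Classifier = std::function<Category(const Polygon&, const Area&)>;
    using Painter    = std::function<void(const Polygon*, const Area&)>;

    void subdivide(const Area& area, const std::vector<const Polygon*>& pvpl,
                   const Classifier& classify, const Painter& paint) {
        std::vector<const Polygon*> kept;            // PVPL for this area
        for (const Polygon* p : pvpl)
            if (classify(*p, area) != DISJOINT) kept.push_back(p);

        if (kept.empty())     { paint(nullptr, area); return; }  // test (a)
        if (kept.size() == 1) { paint(kept[0], area); return; }  // tests (b), (c)
        if (area.size == 1)   { paint(kept[0], area); return; }  // test (d): kept[0]
                                                                 // has the smallest zmin
        int h = area.size / 2;                       // subdivide into fourths
        subdivide({area.x,     area.y,     h}, kept, classify, paint);
        subdivide({area.x + h, area.y,     h}, kept, classify, paint);
        subdivide({area.x,     area.y + h, h}, kept, classify, paint);
        subdivide({area.x + h, area.y + h, h}, kept, classify, paint);
    }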

5. BSP Trees
Binary space partition trees (BSP trees) are very useful for people who work with 2-D and 3-D computer graphics. The word binary means "having two parts". You may have heard the term "binary star": a binary star is actually two stars that move around the same center. The word space as used here simply means the area that we are working with. The word partition means to divide things up. Binary space partitioning means that we divide a space into two parts, then take each of the smaller parts and divide it into two parts, and continue dividing the smaller parts, organizing them in a specific way, until the parts are sufficiently small for our use. A BSP tree is simply a sorted list of all the polygons of such a scene, where the sorting is done in a particular way so that we can use the information in the list very quickly.

Figure 1: A small segment of a "no frills" game world.

General algorithm for creating a BSP tree:

Step 1: Make a list of all the walls.

Figure 2: Figure 1 viewed from above.

Step 2: Choose one of the walls, the "splitter wall", to split up the space into two parts, P1 and P2. Step 3: Put all of the walls that are completely in front of the splitter wall into one list, called "front". Step 4: Put all of the walls that are completely in back of the splitter wall into another list, called "back". Step 5: If the splitter wall intersects any of the walls, split that wall into two parts, putting the part of the wall in front of the splitter wall into the front list and the other part into the back list. Step 6: The splitter wall is placed in the BSP tree. The first wall simply becomes the root of the tree. Thereafter, a wall is placed in the BSP tree to the right of its parent wall if, in the scene, the wall is in front of the parent wall, and to the left of the parent wall if, in the scene, the wall is in back of the parent wall. Step 7: Recursively split up the P1 space until you are left with one line segment only; then recursively split up the P2 space until there is only one line segment remaining. We will now follow these steps to create a BSP tree of the world in figure 1. Figure 1 shows an example of a portion of a game world where the player moves through rooms and halls. We can label each of the walls in this scene A through H, as shown in both figures 3 and 4. Note that in the view in figure 3, walls D and G are not visible to the player.

Figure 3: Figure 1 with letters added to each visible wall.

Figure 4: Figure 3 as viewed from above, with all walls labelled.

Step 1: Following the algorithm, we start with a list of all of the polygons in one of the levels of the game world. Here (in figures 3 and 4) we have just a portion of a world, so our list of lines is quite small. List of all lines: { A, B, C, D, E, F, G, H }. In a real game level, this list would be huge, perhaps 10,000 or more polygons. Step 2: In order to sort the line segments, we choose one segment with which to split up the world. There are ways to choose which line is best used to split up the world, but for simplicity's sake they are not used here. Here, line segment C is the chosen splitting line, shown in red in figure 6:

Figure 6: The red line is the first splitting line. The arrow points to the front of line C.

Steps 3 and 4: After splitting the world into two parts using line segment C as the "splitter", we put all the other lines into lists as follows: Lines that are in front of C: { G, H }. Lines that are in back of C: { A, B, D, E }. Lines that C splits: { F }. Step 5: Because line F is split by C, we break F up into the part of F that is in front of C and the part of F that is in back of C, labelled in figure 6 as F1 and F2. Now we have finished the first split. Instead of the original list of all the lines, we now have two lists: Lines in front of C: { G, H, F1 }. Lines in back of C: { A, B, D, E, F2 }. We now have the first node (the root node) of our binary space partitioning tree, namely C. Step 6: We next repeat this splitting process until we have assigned each line segment to a place in the BSP tree. More specifically, we use the next line in the "front" list, which is line G, to split that part of the world again:

Figure 7: The second split.

Now we have the following lists: Lines in front of G: { F1 }. Lines in back of G: { H }. Note that G doesn't split any lines. Also, lines H and F1 do not yield any further divisions of our tree, so they become leaf nodes, as shown below in figure 8.

The BSP tree now looks as follows:

Figure 8

Next we look at the lines in the list that are behind line C, splitting up the space behind that line and creating the following BSP tree:

Figure 9: The BSP tree representing the figure 1 scene.

Step 7: This BSP tree shows all the relations between all the walls in figure 1. If a node, call it N1, is lower and to the right of a node N2, then the wall represented by N1 is in front of the wall represented by N2. If node N3 is lower and to the left of node N1, then the wall represented by N3 is in back of the wall represented by N1. For example: walls G, H and F1 are in front of wall C, while wall H is in back of wall G.
Our goal is to draw each of the parts of the scene (that is what rendering means). Basically, we use the current node as the dividing line. If the viewpoint is in front of the dividing line, we know that we should draw everything behind this line first, before drawing the node itself, since we are drawing back to front. If the viewpoint is in back of the dividing line, then we want to draw everything in front of the node before the node itself gets drawn. The resulting order of nodes for this scene is: F2, E, D, B, A, C, F1, G, H. Comparing this order to figure 1, all of the walls behind wall C (F2, E, D, B and A) are drawn first, farthest from the viewpoint to nearest; then all of the walls in front of node C are drawn, also farthest to nearest. A sketch of this back-to-front traversal follows.
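Below is a minimal C++ sketch of this back-to-front (painter's) traversal. The Wall contents, the viewer side test, and drawWall are trivial stand-ins for the game's real geometry and renderer:

    #include <cstdio>

    struct Wall { const char* name; };   // plus plane data in a real engine

    struct BSPNode {
        Wall wall;
        BSPNode* front = nullptr;   // subtree of walls in front of this wall
        BSPNode* back  = nullptr;   // subtree of walls in back of this wall
    };

    // Stand-ins: the real test compares the viewpoint against the wall's
    // plane, and the real renderer scan-converts the wall.
    bool viewerInFrontOf(const Wall&) { return true; }
    void drawWall(const Wall& w) { std::printf("draw %s\n", w.name); }

    // Painter's algorithm: always draw the far side first, then the wall
    // itself, then the near side, so nearer walls overwrite farther ones.
    void renderBackToFront(const BSPNode* node) {
        if (node == nullptr) return;
        if (viewerInFrontOf(node->wall)) {
            renderBackToFront(node->back);    // far side first
            drawWall(node->wall);
            renderBackToFront(node->front);   // near side last
        } else {
            renderBackToFront(node->front);
            drawWall(node->wall);
            renderBackToFront(node->back);
        }
    }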

Algorithm:

BSPTree constructBSPTree(PolygonSet S)
Assumptions: we are given a method makeTree(P, frontTree, backTree) which constructs a binary tree node and returns a pointer to it (the root node).
Input: a set of polygons, S.
Output: a binary space partitioning tree of S.
{
    PolygonSet FrontSet, BackSet;
    if (S is empty) return an empty tree;
    choose a polygon P from the set S;
    let FrontSet and BackSet be empty polygon sets;
    for (each polygon Q in S - {P}) {
        if (Q is in front of the plane containing P)
            insert Q in FrontSet;
        else if (Q is in back of the plane containing P)
            insert Q in BackSet;
        else {
            split Q into two polygons Qfront (in front of P's plane)
                and Qback (in back of P's plane);
            insert Qfront in FrontSet;
            insert Qback in BackSet;
        }
    }
    return makeTree(P, constructBSPTree(FrontSet), constructBSPTree(BackSet));
}

6. Ray Tracing

Ray tracing is a global-illumination-based rendering method for generating realistic images on the computer. Ray tracing handles shadows, multiple specular reflections, and texture mapping in a very straightforward manner. In ray tracing, a ray of light is traced in a backwards direction: that is, we start from the eye or camera and trace the ray through a pixel in the image plane into the scene to determine what it intersects. The pixel is then set to the color values returned by the ray. If the ray misses all objects, that pixel is shaded the background color.

Tracing rays from the light source to the eye. Lots of rays are wasted because they never reach the eye.
Algorithm: When a ray hits a surface, it can generate up to three new types of rays: reflection, refraction, and shadow. A reflected ray continues on in the mirror-reflection direction from a shiny surface; it is then intersected with the objects in the scene, and the closest object it intersects is what will be seen in the reflection. Refraction rays traveling through transparent material work similarly, with the addition that a refractive ray can be entering or exiting the material. To determine whether a surface point receives light, a shadow ray is used to test whether the point is visible to a light source: when a ray hits a surface at some point and the surface at this point faces a light, a ray (to the computer, a line segment) is traced between the intersection point and the light. If any opaque object is found between the surface and the light, the surface is in shadow, and that light does not contribute to its shade. This layer of ray calculation adds considerable realism to ray-traced images. A recursive sketch of the whole process is given below.
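The following is a minimal recursive C++ sketch of this process. The scene-query routines (nearestHit, shadowed) and the ray-bending helpers are trivial stand-ins, as are the single light position and the material fields; only the recursive structure of shadow, reflection, and refraction rays is the point:

    struct Vec3  { float x, y, z; };
    struct Color { float r, g, b; };
    struct Ray   { Vec3 origin, dir; };
    struct Hit {
        bool  found;
        Vec3  point;
        float reflectivity, transparency;  // material properties at the hit
        Color local;                       // locally shaded color at the hit
    };

    // Stand-ins for the real geometric routines (assumptions for this sketch).
    Hit  nearestHit(const Ray&)                 { return {false, {}, 0, 0, {}}; }
    bool shadowed(const Vec3&, const Vec3&)     { return false; }
    Ray  reflectRay(const Ray& r, const Hit& h) { return {h.point, r.dir}; }
    Ray  refractRay(const Ray& r, const Hit& h) { return {h.point, r.dir}; }

    Color add(Color a, Color b)   { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
    Color scale(Color a, float s) { return {a.r * s, a.g * s, a.b * s}; }

    const Color BACKGROUND = {0, 0, 0};
    const Vec3  LIGHT_POS  = {0, 10, 0};   // assumed single light source
    const int   MAX_DEPTH  = 5;            // preset recursion level

    Color trace(const Ray& ray, int depth) {
        Hit h = nearestHit(ray);
        if (!h.found || depth > MAX_DEPTH) return BACKGROUND;
        Color c = {0, 0, 0};
        // Shadow ray: the light contributes only if no opaque object blocks it.
        if (!shadowed(h.point, LIGHT_POS)) c = h.local;
        // Secondary rays, each traced recursively against the whole scene.
        if (h.reflectivity > 0)
            c = add(c, scale(trace(reflectRay(ray, h), depth + 1), h.reflectivity));
        if (h.transparency > 0)
            c = add(c, scale(trace(refractRay(ray, h), depth + 1), h.transparency));
        return c;
    }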

Explanation: A primary ray is shot through each pixel and tested for intersection against all objects in the scene. If there is an intersection with an object, then several other rays are generated. Shadow rays are sent towards all light sources to determine whether any objects occlude the intersection spot. In the figure below, the shadow rays are labeled Si and are sent towards the two light sources LA and LB. If the surface is reflective, a reflected ray Ri is generated. If the surface is not opaque, a transmitted ray Ti is generated. Each of the secondary rays is tested against all the objects in the scene.

The reflective and/or transmitted rays are continually generated until the ray leaves the scene without hitting any object or a preset recursion level has been reached. This then generates a ray tree, as shown below.

Advantages of ray tracing

Ray tracing's popularity stems from its basis in a realistic simulation of lighting, compared with other rendering methods (such as scanline rendering or ray casting). Effects such as reflections and shadows, which are difficult to simulate using other algorithms, are a natural result of the ray tracing algorithm. Relatively simple to implement yet yielding impressive visual results, ray tracing often represents a first foray into graphics programming.

Disadvantages of ray tracing

A serious disadvantage of ray tracing is performance. Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately.

Types:
1. Forward ray tracing: rays from the light source bounce off objects before reaching the camera; much of the computation is wasted on rays that never reach the eye.

2. Backward ray tracing: track only those rays that finally make it to the camera, by starting from the eye.

Ray Tracing Algorithm (an alternative walkthrough):


As before, we start from the eye or camera and trace each ray through a pixel in the image plane into the scene to determine what it intersects.

Sometimes the ray misses all of the objects:

and sometimes the ray will hit an object:

If the ray hits an object, we want to know if that point on the object is in a shadow. So, when the ray hits an object, a secondary ray, called a "shadow" ray, is shot towards the light sources.

If this shadow ray hits another object before it hits a light source, then the first intersection point is in the shadow of the second object. For a simple illumination model this means that we only apply the ambient term for that light source.

First intersection point in the shadow of the second object

Also, when a ray hits an object, a reflected ray is generated which is tested against all of the objects in the scene.

If the reflected ray hits an object, then a local illumination model is applied at the point of intersection and the result is carried back to the first intersection point.

Contribution from the reflected ray

If the intersected object is transparent, then a transmitted ray is generated and tested against all the objects in the scene.

Transmitted ray

As with the reflected ray, if the transmitted ray hits an object, then a local illumination model is applied at the point of intersection and the result is carried back to the first intersection point.

Contribution from the transmitted ray

The reflected rays can generate other reflected rays, which can generate other reflected rays, and so on. The next sequence of three images shows a simple scene with no reflection, a single reflection, and then a double reflection.

Scene with no reflection rays

Scene with one layer of reflection

Scene with two layers of reflection
