
Computer Graphics

Unit 1
Q 1. List the applications of graphics systems.
Ans: The following are commonly considered graphics applications:
Paint programs: allow you to create rough freehand drawings. The images are stored as bit maps and can easily be edited.
Illustration/design programs: support more advanced features than paint programs, particularly for drawing curved lines. The images are usually stored in vector-based formats. Illustration/design programs are often called draw programs.
Presentation graphics software: lets you create bar charts, pie charts, graphs, and other types of images for slide shows and reports. The charts can be based on data imported from spreadsheet applications.
Animation software: enables you to chain and sequence a series of images to simulate movement. Each image is like a frame in a movie.
CAD software: enables architects and engineers to draft designs.
Desktop publishing: provides a full set of word-processing features as well as fine control over placement of text and graphics, so that you can create newsletters, advertisements, books, and other types of documents.

Q 2. Define aspect ratio, resolution and persistence.
Aspect ratio: the aspect ratio of an image is the ratio of the width of the image to its height, expressed as two numbers separated by a colon. That is, for an x:y aspect ratio, no matter how big or small the image is, if the width is divided into x units of equal length and the height is measured using this same length unit, the height will measure y units.
Resolution: image resolution describes the detail an image holds. The term applies to digital images, film images, and other types of images. Higher resolution means more image detail. Image resolution can be measured in various ways; essentially, resolution quantifies how close lines can be to each other and still be visibly resolved.
Persistence: like burn-in on CRTs, image persistence on LCD monitors is caused by the continuous display of static graphics on the screen for extended periods of time. This causes the LCD crystals to retain a memory of their state at that location. When a different color is then displayed at that location, the color is off from what it should be, leaving a faint image of what was previously displayed.

Q 3. Explain the midpoint circle algorithm for drawing a circle.
Ans: Beginning with the equation of a circle:
x^2 + y^2 = r^2

We could solve for y in terms of x and use this equation to compute the pixels of the circle. Instead, we use the discriminating (decision) function f(x, y) = x^2 + y^2 - r^2, which is negative for points inside the circle, zero on the circle and positive outside it, to keep the trajectory of drawn pixels as close as possible to the desired circle. Luckily, we can start with a point on the circle, (x0, y0 + r) (or (0, r) in our adjusted coordinate system). As we move along in steps of x, we note that the slope is less than zero and greater than negative one at points near our known point on the circle in the direction we are heading. Thus we need only figure out, at each step, whether to step down in y or to keep y the same.
1. We compute points between x = 0 and x = y and then draw the 8 matching symmetric points.
2. In that region, the slope of the curve is between 0 and -1.
3. From each point (x, y), the next one is either E = (x+1, y) or SE = (x+1, y-1).
4. We decide between them by looking at the midpoint M between the two candidates:
   M inside the circle (f < 0) => the next point is E;
   M outside the circle (f > 0) => the next point is SE.
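A minimal Python sketch of this procedure, assuming a plot_pixel(x, y) helper supplied by the caller; the decision value d plays the role of the midpoint test f described above.

def midpoint_circle(xc, yc, r, plot_pixel):
    # Start at the top of the circle: (0, r) in circle-centered coordinates.
    x, y = 0, r
    d = 1 - r                      # initial decision value for the first midpoint
    while x <= y:
        # Plot the eight symmetric points for the octant point (x, y).
        for px, py in [(x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)]:
            plot_pixel(xc + px, yc + py)
        if d < 0:                  # midpoint inside the circle: next point is E
            d += 2 * x + 3
        else:                      # midpoint outside the circle: next point is SE
            d += 2 * (x - y) + 5
            y -= 1
        x += 1

For example, midpoint_circle(0, 0, 3, print) prints the pixels of a circle of radius 3 around the origin.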

We can find any point's symmetric complement about these lines by permuting and negating the coordinates. For example, the point (x, y) has the complementary point (y, x) about the line x = y, and the total set of complements of (x, y) is {(x, -y), (-x, y), (-x, -y), (y, x), (y, -x), (-y, x), (-y, -x)}.

Q 4. Explain Bresenham's line drawing algorithm.
Ans: The Bresenham line algorithm determines which points in an n-dimensional raster should be plotted in order to form a close approximation to a straight line between two given points. It is commonly used to draw lines on a computer screen, as it uses only integer addition, subtraction and bit shifting, all of which are very cheap operations in standard computer architectures. It is one of the earliest algorithms developed in the field of computer graphics. The common conventions will be used: pixel coordinates increase in the right and down directions (e.g. the pixel at (1,1) is directly above the pixel at (1,2)), and pixel centers have integer coordinates.

The endpoints of the line are the pixels at (x0, y0) and (x1, y1), where the first coordinate of the pair is the column and the second is the row. Bresenham's algorithm chooses the integer y corresponding to the pixel center that is closest to the ideal (fractional) y for the same x; on successive columns y can remain the same or increase by 1. The general equation of the line through the endpoints is given by:

(y - y0) / (y1 - y0) = (x - x0) / (x1 - x0)

Since we know the column, x, the pixel's row, y, is given by rounding this quantity to the nearest integer:

y = (y1 - y0) / (x1 - x0) * (x - x0) + y0

The slope (y1 - y0) / (x1 - x0) depends on the endpoint coordinates only and can be precomputed, and the ideal y for successive integer values of x can be computed starting from y0 and repeatedly adding the slope.
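A minimal sketch of the integer-only formulation these remarks lead to, restricted to the first octant (x0 < x1 and 0 <= slope <= 1); plot_pixel is an assumed helper, and the other octants follow by swapping and mirroring coordinates.

def bresenham_line(x0, y0, x1, y1, plot_pixel):
    dx = x1 - x0
    dy = y1 - y0
    d = 2 * dy - dx                # decision value: only integer addition/subtraction is needed
    y = y0
    for x in range(x0, x1 + 1):
        plot_pixel(x, y)
        if d > 0:                  # the ideal line has moved above the pixel center: step in y
            y += 1
            d -= 2 * dx
        d += 2 * dy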

The Bresenham line algorithm has the following advantages: it is a fast incremental algorithm, and it uses only integer calculations such as addition/subtraction and bit shifting, so its main advantage is speed. The disadvantage of such a simple algorithm is that it is meant for basic line drawing; the "advanced" topic of anti-aliasing isn't part of Bresenham's algorithm, so to draw smooth lines you'd want to look into a different algorithm.

Q 5. Explain the scan line polygon fill algorithm.
Ans: The scanline fill algorithm is an ingenious way of filling in irregular polygons. The algorithm begins with a set of points. Each point is connected to the next, and the line between them is considered to be an edge of the polygon. The points of each edge are adjusted to ensure that the point with the smaller y value appears first. Next, a data structure is created that contains a list of edges that begin on each scanline of the image. The program progresses from the first scanline upward. For each line, any pixels that contain an intersection between this scanline and an edge of the polygon are filled in. Then, the algorithm progresses along the scanline, turning on when it reaches a polygon pixel and turning off when it reaches another one, all the way across the scanline.
There are two special cases handled by this algorithm. First, a problem may arise if the polygon contains edges that are partially or completely outside the image. The algorithm solves this problem by moving pixel values that are outside the image to the boundaries of the image. This is preferable to eliminating the pixel completely, because its deletion could result in a "backwards" coloring of the scanline, i.e. pixels that should be on are off and vice versa. The second case has to do with the concavity of the polygon. If the polygon has a concave portion, the algorithm will work correctly: the pixel on which the two edges meet is marked twice, so that it is turned off and then on. If, however, the polygon is convex at the intersection of two edges, the coloring would turn on and then immediately off, resulting in "backwards" coloring for the rest of the scanline. This problem is solved by using the vertical location of the next point in the polygon to determine the concavity of the current portion. Overall, the algorithm is very robust. The only difficulty comes with polygons that have large numbers of edges, like circles and ellipses; filling such a polygon would be very costly, but there are better ways to fill circles and ellipses.
Drawing polygons: while the point-in-polygon algorithm is useful for determining whether a few points are inside a polygon, it is woefully inefficient for filling a polygon, because it requires checking every side of the polygon for every pixel in the image. To speed things up tremendously, we will check each side of the polygon only once per pixel row. It works like this:

Figure 1 shows a polygon. We are about to render one row of pixels. All the pixels on that row have the same Y coordinate, which is represented by the red line in the figure. Loop through the polygon's sides and build a list of threshold-crossing nodes, just as in the point-in-polygon algorithm, but instead of comparing them with an X coordinate, store them all in a list. Figure 1 shows the indices (0 through 5) of the nodes. In this example, the polygon starts at the blue corner and is traced counter-clockwise, which generates a fairly random horizontal order for the nodes.

Next, sort the node list so that it proceeds from left-to-right, as shown in Figure 2. This takes a little time, but we have to do it only once per pixel row.
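A minimal sketch of the whole row-by-row procedure: for each pixel row it builds the node list, sorts it, and then fills between pairs of nodes as described in the step that follows. The polygon is assumed to be a list of (x, y) vertex tuples, and set_pixel is an assumed helper.

def scanline_fill(vertices, y_min, y_max, set_pixel):
    n = len(vertices)
    for y in range(y_min, y_max + 1):
        # Build the list of x positions where this pixel row crosses a polygon edge.
        nodes = []
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            if (y1 < y) != (y2 < y):                       # the edge straddles this row
                nodes.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
        nodes.sort()                                       # left-to-right order
        # Fill between successive pairs of crossings: node 0 to 1, 2 to 3, 4 to 5, ...
        for i in range(0, len(nodes) - 1, 2):
            for x in range(int(round(nodes[i])), int(round(nodes[i + 1])) + 1):
                set_pixel(x, y)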

Now, as shown in Figure 3, it is a simple matter to fill all the pixels between each pair of nodes: from node 0 to 1, node 2 to 3, and node 4 to 5.

Q 6. Explain the Cohen-Sutherland line clipping algorithm.
Ans: The Cohen-Sutherland line clipping algorithm quickly detects and dispenses with two common and trivial cases. To clip a line, we need to consider only its endpoints. If both endpoints of a line lie inside the window, the entire line lies inside the window; it is trivially accepted and needs no clipping. On the other hand, if both endpoints of a line lie entirely to one side of the window, the line must lie entirely outside of the window; it is trivially rejected and needs to be neither clipped nor displayed.
Algorithm: the Cohen-Sutherland algorithm uses a divide-and-conquer strategy. The line segment's endpoints are tested to see if the line can be trivially accepted or rejected. If not, an intersection of the line with a window edge is determined and the trivial reject/accept test is repeated. This process continues until the line is accepted or rejected.

To perform the trivial acceptance and rejection tests, we extend the edges of the window to divide the plane of the window into nine regions. Each endpoint of the line segment is then assigned the 4-bit code of the region in which it lies.
1. Given a line segment with endpoints P1 = (x1, y1) and P2 = (x2, y2).
2. Compute the 4-bit codes for each endpoint. If both codes are 0000 (the bitwise OR of the codes yields 0000), the line lies completely inside the window: pass the endpoints to the draw routine. If both codes have a 1 in the same bit position (the bitwise AND of the codes is not 0000), the line lies completely outside the window and can be trivially rejected.
3. If the line can be neither trivially accepted nor rejected, at least one of the two endpoints lies outside the window and the line segment crosses a window edge. The line must be clipped at the window edge before being passed to the drawing routine.
4. Examine one of the endpoints, say P1 = (x1, y1). Read P1's 4-bit code in order: left-to-right, bottom-to-top.
5. When a set bit (1) is found, compute the intersection I of the corresponding window edge with the line from P1 to P2. Replace P1 with I and repeat the algorithm.
Illustration of line clipping (before clipping):

1. Consider the line segment AD. Point A has an outcode of 0000 and point D has an outcode of 1001. The logical AND of these outcodes is zero; therefore, the line cannot be trivially rejected. Also, the logical OR of the outcodes is not zero; therefore, the line cannot be trivially accepted. The algorithm then chooses D as the outside point (its outcode contains 1s). By our testing order, we first use the top edge to clip AD at B. The algorithm then recomputes B's outcode as 0000. With the next iteration of the algorithm, AB is tested, trivially accepted and displayed.
2. Consider the line segment EI. Point E has an outcode of 0100, while point I's outcode is 1010. The results of the trivial tests show that the line can neither be trivially rejected nor accepted. Point E is determined to be an outside point, so the algorithm clips the line against the bottom edge of the window. Now line EI has been clipped to line FI. Line FI is tested and cannot be trivially accepted or rejected. Point F has an outcode of 0000, so the algorithm chooses point I as the outside point, since its outcode is 1010. The line FI is clipped against the window's top edge, yielding the new line FH. Line FH cannot be trivially accepted or rejected. Since H's outcode is 0010, the next iteration of the algorithm clips against the window's right edge, yielding line FG. The next iteration tests FG, and it is trivially accepted and displayed.

After clipping: having clipped the segments AD and EI, the result is that only the line segments AB and FG can be seen in the window.
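A minimal sketch of the outcode test and the clipping loop described above, for a clip window [xmin, xmax] x [ymin, ymax] with y increasing upward; it returns the clipped endpoints, or None if the line is rejected.

INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    code = INSIDE
    if x < xmin:   code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin:   code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def cohen_sutherland_clip(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    while True:
        if not (c0 | c1):              # both endpoints inside: trivially accept
            return x0, y0, x1, y1
        if c0 & c1:                    # both endpoints share an outside region: trivially reject
            return None
        # Pick an endpoint that lies outside and clip it against one window edge.
        c = c0 if c0 else c1
        if c & TOP:
            x = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0); y = ymax
        elif c & BOTTOM:
            x = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0); y = ymin
        elif c & RIGHT:
            y = y0 + (y1 - y0) * (xmax - x0) / (x1 - x0); x = xmax
        else:                          # LEFT
            y = y0 + (y1 - y0) * (xmin - x0) / (x1 - x0); x = xmin
        if c == c0:
            x0, y0 = x, y
            c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
        else:
            x1, y1 = x, y
            c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)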

Q 7. Explain the various transformation operations that can be performed on an image.
Ans: Transformations, in their strictest and broadest mathematical sense, are mathematical operations that map a set of items X onto another set Y. In geometric terms, this means that we can apply a mathematical operation to a set of objects in space to map them onto a new space geometry.
Translation
Translation is to move, or parallel shift, a figure. We use a single point as a starting point, since this is a simple operation that is easy to formulate mathematically. We want to move the point P1 to a new position P2.

P1 = (x1, y1) = (3, 3)
P2 = (x2, y2) = (8, 5)
We see that x2 = x1 + 5 and y2 = y1 + 2. This means that translation is defined by adding an offset in the x and y directions, tx and ty:
x2 = x1 + tx
y2 = y1 + ty
We assume that we can move whole figures by moving all of their single points. For a many-sided figure, a polygon, this means moving all the corners.

Scaling

Again we will use a single point as a starting point, P1 = (x1, y1) = (3, 3). We "scale" the point by multiplying it with a scaling factor in the x-direction, sx = 2, and one in the y-direction, sy = 3, and get P2 = (x2, y2) = (6, 9). The relation is:
x2 = sx * x1
y2 = sy * y1

It may seem a bit strange to say that we scale a point, since a point in a geometric sense doesn't have any area. It is better to say that we are scaling a vector.

If we consider the operation on a polygon, we see that the effect becomes a little more complicated than with translation. In addition to the single points being moved, the polygon's angles and area are also changed. Note: scaling is expressed relative to the origin.

Rotation

Rotation is more complicated to express and we have to use trigonometry to formulate it. Let P1 lie at distance r from the origin at angle v1, and let P2 be P1 rotated by the angle v2:
P1 = (x1, y1) = (r cos(v1), r sin(v1))
P2 = (x2, y2) = (r cos(v1 + v2), r sin(v1 + v2))

We'll use the trigonometric formulas for the sum of two angles:
sin(a + b) = cos(a) sin(b) + sin(a) cos(b)
cos(a + b) = cos(a) cos(b) - sin(a) sin(b)
and get:
P1 = (r cos(v1), r sin(v1))
P2 = (r cos(v1) cos(v2) - r sin(v1) sin(v2), r cos(v1) sin(v2) + r sin(v1) cos(v2))
We insert x1 = r cos(v1) and y1 = r sin(v1) into P2's coordinates:
P2 = (x2, y2) = (x1 cos(v2) - y1 sin(v2), x1 sin(v2) + y1 cos(v2))
and we have expressed P2's coordinates using P1's coordinates and the rotation angle v2:
x2 = x1 cos(v2) - y1 sin(v2)
y2 = x1 sin(v2) + y1 cos(v2)
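A minimal sketch applying the three operations above to a single point; the rotation angle v is in radians and, like scaling, rotation is expressed relative to the origin.

import math

def translate(x, y, tx, ty):
    return x + tx, y + ty

def scale(x, y, sx, sy):
    return x * sx, y * sy

def rotate(x, y, v):
    return (x * math.cos(v) - y * math.sin(v),
            x * math.sin(v) + y * math.cos(v))

# The values used above: P1 = (3, 3) translated by (5, 2) gives (8, 5),
# and P1 = (3, 3) scaled by (2, 3) gives (6, 9).
print(translate(3, 3, 5, 2), scale(3, 3, 2, 3))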

Q 8. What is clipping? Explain the Liang-Barsky 2D clipping algorithm.
Ans: Clipping refers to the removal of part of a scene. Internal clipping removes parts of a picture outside a given region; external clipping removes parts inside a region. We'll explore internal clipping, but external clipping can almost always be accomplished as a by-product. The Liang-Barsky algorithm is optimized for clipping to an upright rectangular clip window (the Cyrus-Beck algorithm is similar but clips to a more general convex polygon). By using parametric equations, clip window edge normals and inner products, Liang-Barsky can improve the efficiency of line clipping over Cohen-Sutherland. Let
L(t) = p0 + t (p1 - p0), 0 <= t <= 1,

denote the parametric equation of the line segment from p0 to p1. Let Ne denote the outward pointing normal of the clip window edge e, and let pe be an arbitrary point on edge e. Consider the vector L(t) - pe from pe to a point on the line L(t); Figure 2 shows several of these vectors for different values of t. At the intersection of L(t) and edge e, the inner product of Ne and L(t) - pe is zero (see Figure 2). In fact, we have
Ne . (L(t) - pe) = Ne . (p0 + t (p1 - p0) - pe) = 0

which, if we solve for t, yields
t = Ne . (pe - p0) / (Ne . (p1 - p0))

(Note that a check must be made that the denominator above is not zero.) Figure 2: the setup for Liang-Barsky clipping.

Using the 4 edge normals for an upright rectangular clip window and 4 points, one on each edge, we can calculate the 4 parameter values where L(t) intersects each edge. Let's call these parameter values tL, tR, tB, tT. Note that any t outside the interval [0, 1] can be discarded, since it corresponds to a point before p0 (when t < 0) or after p1 (when t > 1). The remaining t values are characterized as "potentially entering" (PE) or "potentially leaving" (PL). The parameter ti is PE if, when travelling along the (extended) line from p0 to p1, we move from the outside to the inside of the window with respect to edge i. The parameter ti is PL if, when travelling along the (extended) line from p0 to p1, we move from the inside to the outside of the window with respect to edge i. (Figure 3: potentially entering and leaving edge intersections.)
The inner product of the outward pointing edge normal Ni with p1 - p0 can be used to classify the parameter ti as either PE or PL:
1. If Ni . (p1 - p0) < 0, the parameter ti is potentially entering (PE). The vectors Ni and p1 - p0 point in opposite directions; since Ni points outward, the vector p1 - p0 from p0 to p1 points inward.
2. If Ni . (p1 - p0) > 0, the parameter ti is potentially leaving (PL). The vectors Ni and p1 - p0 point in similar directions; since Ni points outward, the vector p1 - p0 from p0 to p1 points outward too.
3. Let tpe be the largest PE parameter value and tpl the smallest PL parameter value.
4. The clipped line extends from L(tpe) to L(tpl), where 0 <= tpe <= tpl <= 1.
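A minimal sketch of the resulting test for an upright clip window; p and q below are the usual per-edge quantities (one pair per window edge), tpe is the largest potentially-entering parameter and tpl the smallest potentially-leaving one.

def liang_barsky_clip(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    dx, dy = x1 - x0, y1 - y0
    p = [-dx, dx, -dy, dy]                        # left, right, bottom, top edges
    q = [x0 - xmin, xmax - x0, y0 - ymin, ymax - y0]
    tpe, tpl = 0.0, 1.0
    for pi, qi in zip(p, q):
        if pi == 0:
            if qi < 0:                            # parallel to this edge and outside it
                return None
        else:
            t = qi / pi
            if pi < 0:                            # potentially entering
                tpe = max(tpe, t)
            else:                                 # potentially leaving
                tpl = min(tpl, t)
    if tpe > tpl:                                 # no visible portion
        return None
    return (x0 + tpe * dx, y0 + tpe * dy,
            x0 + tpl * dx, y0 + tpl * dy)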

Q 9. Explain the Sutherland-Hodgeman polygon clipping algorithm.
Ans: The Sutherland-Hodgeman Polygon Clipping Algorithm
Introduction

It is often necessary, particularly in graphics applications, to "clip" a given polygon with another. Figure 1 shows an example. In the figure, the clipping polygon is drawn with a dashed line, the clipped polygon with a regular line, and the resulting polygon is drawn with a heavy line. In this article we'll look at the particular case where the clipping polygon is a rectangle which is oriented parallel with the axes. For this case, the Sutherland-Hodgeman algorithm is often employed. This is the algorithm that we will explore.
The Sutherland-Hodgeman Algorithm

The Sutherland-Hodgeman polygon clipping algorithm is relatively straightforward and is easily implemented in C. It does have some limitations, which we'll explore later. First, let's see how it works. For each of the four sides of the clipping rectangle, consider the line L through the two points which define that side. For each side, this line creates two planes, one which includes the clipping rectangle and one which does not. We'll call the one which does not the "clipping plane" (Figure 2).

For each of the four clipping planes, we must remove any vertices of the clipped polygon which lie inside the plane and create new points where the segments associated with these vertices cross the line L (Figure 3). After clipping against each of the four planes, we are left with the clipped polygon.
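A minimal sketch of this one-edge-at-a-time procedure for an axis-aligned clipping rectangle; inside and intersect are small helpers built per clip edge inside cut.

def clip_against_edge(polygon, inside, intersect):
    # polygon is a list of (x, y) vertices traversed in order.
    output = []
    for i in range(len(polygon)):
        current, previous = polygon[i], polygon[i - 1]
        if inside(current):
            if not inside(previous):
                output.append(intersect(previous, current))
            output.append(current)
        elif inside(previous):
            output.append(intersect(previous, current))
    return output

def sutherland_hodgeman(polygon, xmin, ymin, xmax, ymax):
    def cut(poly, inside, axis, value):
        def intersect(p, c):
            t = (value - p[axis]) / (c[axis] - p[axis])
            return (p[0] + t * (c[0] - p[0]), p[1] + t * (c[1] - p[1]))
        return clip_against_edge(poly, inside, intersect)
    poly = cut(polygon, lambda p: p[0] >= xmin, 0, xmin)   # left edge
    poly = cut(poly,    lambda p: p[0] <= xmax, 0, xmax)   # right edge
    poly = cut(poly,    lambda p: p[1] >= ymin, 1, ymin)   # bottom edge
    poly = cut(poly,    lambda p: p[1] <= ymax, 1, ymax)   # top edge
    return poly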

Limitations
This algorithm always produces a single output polygon, even if the clipped polygon is concave and arranged in such a way that multiple output polygons might reasonably be expected. Instead, the polygons will be linked with overlapping segments along the edge of the clipping rectangle (Figure 4). This may or may not be the desired result, depending on your application.

Q 10. What are the various ways to represent a curve? Derive a cubic Bezier curve equation.
Ans: Curves. Mathematically, a curve is a continuous map from a one-dimensional space to an n-dimensional space. Intuitively, think of a curve as something you can draw with a (thin) pen on a piece of paper. You cannot create filled regions, but you can create the outlines of things. A curve is an infinitely large set of points. The points in a curve have the property that any point has 2 neighbors, except for a small number of points that have one neighbor (these are the endpoints). Some curves have no endpoints, either because they are infinite (like a line) or because they are closed (they loop around and connect to themselves). The problem that we need to address is how to describe a curve: to give names or representations to all curves so that we can represent them on a computer. For some curves, the problem of naming them is easy since they have known shapes: line segments, circles, elliptical arcs, etc. A general curve that does not have a named shape is sometimes called a free-form curve. Because free-form curves can take on just about any shape, they are much harder to describe. There are three main ways to describe curves mathematically:
Implicit curve representations define the set of points on a curve by giving a procedure that can test whether a point is on the curve. Usually, an implicit curve is defined by an implicit function of the form f(x, y) = 0, so that the curve is the set of points for which this equation is true. Note that the implicit function is a scalar function (it returns a single real number).
Explicit or parametric curve descriptions provide a mapping from a free parameter to the set of points on the curve. That is, this free parameter (a single number) provides an index to the points on the curve. The parametric form of a curve defines a function that assigns positions to values of the free parameter. Intuitively, if you think of a curve as something you can draw with a pen on a piece of paper, the free parameter is time, ranging from the time that we began drawing the curve to the time that we finish. The parametric function of this curve tells us where the pen was at any instant in time:

(x, y) = f(t). Note that the parametric function is a vector-valued function that returns a vector (a point position). Since we are working in 2D, these will be 2-vectors, but in 3D they would be 3-vectors. Generative or procedural curve descriptions provide procedures that can generate the points on curves that do not fall into the first two categories; examples include subdivision schemes and fractals. Some curves can be easily represented in both explicit and implicit forms. For example, a circle with its center at the origin and radius 1 can be written in implicit form as f(x, y) = x^2 + y^2 - 1 = 0, or in parametric form as (x, y) = f(u) = (cos u, sin u). Different representations of curves have advantages and disadvantages. For example, parametric curves are much easier to draw because we can sample the free parameter. Generally, parametric forms are the most commonly used in computer graphics since they are easier to work with. Our focus will be on parametric curves.
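As a concrete parametric example, here is a minimal sketch that evaluates the cubic Bézier curve discussed in the next paragraph, directly in its Bernstein form, from four control points given as (x, y) tuples.

def cubic_bezier_point(p0, p1, p2, p3, t):
    s = 1 - t
    # Bernstein weights of degree 3; they are non-negative and sum to 1 for t in [0, 1].
    b0, b1, b2, b3 = s**3, 3 * s**2 * t, 3 * s * t**2, t**3
    return (b0 * p0[0] + b1 * p1[0] + b2 * p2[0] + b3 * p3[0],
            b0 * p0[1] + b1 * p1[1] + b2 * p2[1] + b3 * p3[1])

# Sample the curve at 11 parameter values between 0 and 1.
points = [cubic_bezier_point((0, 0), (1, 2), (3, 2), (4, 0), i / 10) for i in range(11)]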

A cubic Bézier curve is defined by four points. Two are endpoints: (x0, y0) is the origin endpoint and (x3, y3) is the destination endpoint. The points (x1, y1) and (x2, y2) are control points. Two equations define the points on the curve. Both are evaluated for an arbitrary number of values of t between 0 and 1; one equation yields values for x, the other yields values for y. As increasing values of t are supplied to the equations, the point defined by (x(t), y(t)) moves from the origin to the destination. This is how the equations are defined in Adobe's PostScript references:
x(t) = ax*t^3 + bx*t^2 + cx*t + x0
x1 = x0 + cx/3
x2 = x1 + (cx + bx)/3
x3 = x0 + cx + bx + ax
y(t) = ay*t^3 + by*t^2 + cy*t + y0
y1 = y0 + cy/3
y2 = y1 + (cy + by)/3
y3 = y0 + cy + by + ay
This method of definition can be reverse-engineered so that it gives up the coefficient values based on the points described above:
cx = 3(x1 - x0)
bx = 3(x2 - x1) - cx
ax = x3 - x0 - cx - bx
cy = 3(y1 - y0)
by = 3(y2 - y1) - cy
ay = y3 - y0 - cy - by
Now, simply by knowing the coordinates of any four points, you can create the equations for a simple Bézier curve.

Q 11. Short note: rational B-spline curve.
Ans: Rational B-splines have all of the properties of non-rational B-splines plus the following two useful features:

They produce the correct results under projective transformations (while non-rational B-splines only produce the correct results under affine transformations). They can be used to represent lines, conics and non-rational B-splines; and, when generalised to patches, can represent planes, quadrics and tori.
The antonym of rational is non-rational. Non-rational B-splines are a special case of rational B-splines, just as uniform B-splines are a special case of non-uniform B-splines. Thus, non-uniform rational B-splines encompass almost every other possible 3D shape definition. Non-uniform rational B-spline is a bit of a mouthful, and so it is generally abbreviated to NURBS. We have already learnt all about the B-spline bit of NURBS and about the non-uniform bit, so now all we need to know is the meaning of the rational bit and we will fully understand NURBS.
Rational B-splines are defined simply by applying the B-spline equation to homogeneous coordinates, rather than normal 3D coordinates. We discussed homogeneous coordinates in the IB course. You will remember that these are 4D coordinates where the transformation from 4D to 3D is:
(x, y, z, w) -> (x/w, y/w, z/w)
and the inverse transform is:
(x, y, z) -> (x, y, z, 1).
This year we are going to be more cunning and attach a weight wi to each control point, so that a 3D point (x, y, z) with weight w corresponds to the 4D point (wx, wy, wz, w). Thus our 3D control point, Pi = (xi, yi, zi), becomes the homogeneous control point Pi^h = (wi xi, wi yi, wi zi, wi). A NURBS curve is thus defined by applying the ordinary B-spline formula to these homogeneous control points and projecting the result back to 3D:
P(t) = Σi Ni,k(t) wi Pi / Σi Ni,k(t) wi
where the Ni,k(t) are the B-spline basis functions of order k.

Q 12. How does a B-spline differ from a Bézier curve?
Ans: In CAGD applications, a curve may have so complicated a shape that it cannot be represented by a single cubic Bézier curve, since the shape of a cubic curve is not rich enough. Increasing the degree of a Bézier curve adds flexibility to the curve for shape design. However, this significantly increases the processing effort for curve evaluation and manipulation, and a Bézier curve of high degree may cause numerical noise in computation. For these reasons, we often split the curve so that each subdivided segment can be represented by a lower-degree Bézier curve. This technique is known as piecewise representation. A curve that is made of several Bézier curves is called a composite Bézier curve or a Bézier spline curve. In some areas (e.g., computer data exchange), a composite Bézier cubic curve is known as the PolyBézier. If a composite Bézier curve of degree n has m Bézier curves, then the composite Bézier curve has in total mn+1 control vertices. A curve with a complex shape may be represented by a composite Bézier curve formed by joining a number of Bézier curves with some constraints at the joints. The default constraint is that the curves are joined smoothly. This in turn requires the continuity of the first-order derivative at the joint, which is known as first-order parametric continuity. We may relax the constraint to require only the continuity of the tangent directions at the joint, which is known as first-order geometric continuity. Increasing the order of continuity usually improves the smoothness of a composite Bézier curve. Although a composite Bézier curve may be used to describe a complex shape in CAGD applications, there are primarily two disadvantages associated with its use:
1. It is considerably involved to join Bézier curves with some order of derivative continuity.
2. For a reason that will become clear later, a composite Bézier curve requires more control vertices than a B-spline curve.

These disadvantages can be eliminated by working with spline curves. Originally, a spline curve was a draughtsman's aid. It was a thin elastic wooden or metal strip that was used to draw curves through certain fixed points (called nodes). The resulting curve minimizes the internal strain energy in the splines and hence is considered to be smooth. The mathematical equivalent is the cubic polynomial spline. However, conventional polynomial splines are not popular in CAD systems since they are not intuitive for iterative shape design. B-splines (sometimes, interpreted as basis splines) were investigated by a number of researchers in the 1940s. But B-splines did not gain popularity in industry until de Boor and Cox published their work in the early 1970s. Their recurrence formula to derive B-splines is still the most useful tool for computer implementation.
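A minimal sketch of that recurrence, the Cox-de Boor formula, assuming a non-decreasing knot vector; basis(i, k, t, knots) returns the value at parameter t of the i-th B-spline basis function of order k (degree k - 1).

def basis(i, k, t, knots):
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    value = 0.0
    left_den = knots[i + k - 1] - knots[i]
    if left_den > 0:
        value += (t - knots[i]) / left_den * basis(i, k - 1, t, knots)
    right_den = knots[i + k] - knots[i + 1]
    if right_den > 0:
        value += (knots[i + k] - t) / right_den * basis(i + 1, k - 1, t, knots)
    return value

A point on a B-spline curve of order k with control points P0, ..., Pm is then the sum over i of Pi * basis(i, k, t, knots), where the knot vector has m + k + 1 entries.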

Q 13. Explain the Bézier curve with its characteristics.
Ans: A Bézier curve is a curved line or path defined by mathematical equations. It was named after Pierre Bézier, a French mathematician and engineer who developed this method of computer drawing in the late 1960s while working for the car manufacturer Renault. Most graphics software includes a pen tool for drawing paths with Bézier curves. The most basic Bézier curve is made up of two end points and control handles attached to each node. The control handles define the shape of the curve on either side of the common node. Drawing Bézier curves may seem baffling at first; it's something that requires some study and practice to grasp the geometry involved. But once mastered, Bézier curves are a wonderful way to draw!

(Figure: a Bézier curve with three nodes; the center node is selected and the control handles are visible.)
Characteristics
1. A Bézier curve is defined on n+1 points P0, ..., Pn and is represented as a parametric polynomial curve of degree n.
2. Bézier curves are invariant under affine transformations, but they are not invariant under projective transformations.

3. Bézier curves are also invariant under affine parameter transformations. That is, while the curve is usually defined on the parametric interval [0, 1], a simple affine transformation of the parameter to the interval [a, b] yields the same curve.
4. The Bézier curve starts at the first control point and stops at the last control point. In general, it will not pass through any other control point, but its shape mimics that of the control polygon.
5. The vector tangent to the Bézier curve at the start (stop) is parallel to the line connecting the first two (last two) control points.
6. A Bézier curve will always be completely contained inside the convex hull of its control points. For planar curves, imagine that each control point is a nail pounded into a board; the shape a rubber band would take on when snapped around the control points is the convex hull. For Bézier curves whose control points do not all lie in a common plane, imagine the control points are tiny balls in space, and imagine the shape a balloon would take on if it collapsed over the balls; that shape is the convex hull in this case. In any event, a Bézier curve always lies entirely inside its planar or volumetric convex hull.
7. Bézier curves exhibit a symmetry property: the same Bézier curve shape is obtained if the control points are specified in the opposite order. The only difference is the parametric direction of the curve; the direction of increasing parameter reverses when the control points are specified in reverse order.
8. Adjusting the position of a control point changes the shape of the curve in a predictable manner: intuitively, the curve "follows" the control point. There is no local control of this shape modification; every point on the curve (with the exception of the first and last) moves whenever any interior control point is moved. For example, a curve defined in terms of four control points changes along its whole length when one of its control points is moved to the right.

Q 14. Explain the subdivision method.
Ans: The Subdivision Method. At each stage in the globally adaptive algorithm a selected subregion must be subdivided. A simple, natural subdivision method is to cut at the midpoint of each edge, but this produces many pieces. Another problem with this subdivision is that it is not adaptive: it is done without any analysis of the integrand, even though the error for the integral over a selected subregion is often due to irregularity of the integrand in only a small number of directions. The subdivision strategy used here divides the largest-error subregion into at most four new pieces and takes account of differences in integrand behavior in different directions, which allows the algorithm to proceed from one stage to the next in a controlled manner. Once a subregion has been selected for subdivision, the globally adaptive algorithm used by CUBPACK recommends a subdivision into at most 2, 3 or 4 pieces, depending on the current progress of the integration. The subdivision procedure then cuts one, two or three edges of the selected subregion to produce a 2-division, 3-division or 4-division of that subregion, respectively.
An n-simplex has n(n+1)/2 edge directions, and the algorithm chooses its subdivision directions from among them. For the algorithm to be efficient, a method is needed for selecting good edges for subdivision, and therefore some measure of integrand irregularity is needed. A popular measure of integrand irregularity that has been used successfully with adaptive algorithms for hyper-rectangles is a fourth difference of the integrand. Following this approach, the simplex algorithm computes a fourth difference of the integrand, centered at the centroid of the selected (largest-error) simplex, along each edge direction; a scaling factor provides some bias toward dividing very long edges, and all of the points used by these differences lie inside the subregion. For vector integrands, the differences of the individual components are combined into a single measure per direction. The edges chosen for subdivision are those with the largest fourth differences.
If a 2-division has been recommended, the selected edge is bisected and two new subregions are produced. If a 4-division has been recommended and the two edges with the largest difference values have similar values, the algorithm first divides the simplex into two pieces using the 2-division and then halves each of the two new pieces by bisecting an edge of each piece. If a 3-division has been recommended, or a 4-division is recommended but one difference value is significantly larger than the others, the algorithm considers two possible 3-divisions: either the largest-value edge is trisected, producing three new equal-volume subregions, or that edge is cut at one point and the subregion that keeps the original edge is then divided into two pieces by cutting the edge at its midpoint, so that the final result is again three new equal-volume subregions.

Q 14. Explain the Beta spline.

Ans: The Beta-spline introduced by Barsky is a generalization of the uniform cubic B-spline: parametric discontinuities are introduced in such a way as to preserve continuity of the unit tangent and curvature vectors at joints (geometric continuity), while providing bias and tension parameters, independent of the position of the control vertices, by which the shape of a curve or surface can be manipulated. A practical method allows different values of the bias and tension to be specified at each point along a curve, the actual position being determined by substituting these values into the equations for a uniformly-shaped Beta-spline. The resulting piecewise polynomial curves and surfaces have, as an important characteristic, a local response when either the position of a control vertex or the value of a shape parameter is altered. There is also a conceptually simple and obvious way to directly generalize the equations defining the uniformly-shaped Beta-splines so that each shape parameter may have a distinct value at every joint; unfortunately, the curves which result lack many desirable properties.

Unit 3

Q 1. What are the different types of projection?
Ans:
Q 2. What do you mean by vanishing point?
Ans:
Q 3. Write the 3-D rotation transformation matrix with respect to the z axis.
Ans: Rotation about the z axis by an angle θ leaves z unchanged and rotates x and y as in the 2D case. In homogeneous coordinates:

Rz(θ) = | cos θ   -sin θ   0   0 |
        | sin θ    cos θ   0   0 |
        |   0        0     1   0 |
        |   0        0     0   1 |
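A minimal sketch applying this rotation to a 3D point; theta is in radians and the z coordinate is left unchanged.

import math

def rotate_z(point, theta):
    x, y, z = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta),
            z)

print(rotate_z((1.0, 0.0, 5.0), math.pi / 2))   # approximately (0, 1, 5)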

Unit 2

Q 1. What does the term spline mean?
Ans: In computer graphics, splines are popular curves because of the simplicity of their construction, their ease and accuracy of evaluation, and their capacity to approximate complex shapes through curve fitting and interactive curve design.

Q 2. Write the parametric form of a circle and a parabola.
Ans: Parabola. For example, the simplest equation for a parabola,
y = x^2,
can be parametrized by using a free parameter t, and setting
x = t, y = t^2.

Circle. Although the preceding example is a somewhat trivial case, consider the following parametrization of a circle of radius a:
x = a cos(t), y = a sin(t)

where t is in the range 0 to 2 pi.

Q 3. Define the convex hull property.
Ans: For a cubic Bézier curve we can rewrite x(t) as follows:
x(t) = x0*B0,3(t) + x1*B1,3(t) + x2*B2,3(t) + x3*B3,3(t)
The analogous representation of y(t) is:
y(t) = y0*B0,3(t) + y1*B1,3(t) + y2*B2,3(t) + y3*B3,3(t)
and thus:
P(t) = P0*B0,3(t) + P1*B1,3(t) + P2*B2,3(t) + P3*B3,3(t)
The functions Bi,3(t) are the Bernstein polynomials of the third degree:
Bi,3(t) = C(3, i) * t^i * (1 - t)^(3-i)
They determine, for each t, the weight of the control point Pi in P(t). On a plot of these polynomials one can see how the influence of each control point changes with t. Such functions are called blending functions.
Remark that:
P(0) = P0;
P(1) = P3;
P'(0) = 3(P1 - P0) and P'(1) = 3(P3 - P2).
The first and the third remark imply that P(t) leaves P0 in the direction of P1. The second and the third remark imply that P(t) arrives in P3 coming from the direction of P2. The previous remarks only show that our solution can match our 4 conditions. The next remarks are very important:
Bi,3(t) >= 0 for 0 <= t <= 1, and
B0,3(t) + B1,3(t) + B2,3(t) + B3,3(t) = 1 for every t.
This implies that P(t) is a weighted average of the control points. The point P(t) (with t between 0 and 1) is always situated inside the convex polygon (quadrangle or triangle) formed by the control points. In other words, if we wrap a rubber band around the control points, it contains the curve. One calls this the convex hull property. The next figure illustrates this.
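A minimal numeric check of this property for the cubic Bernstein weights given above: for any t in [0, 1] the weights are non-negative and sum to one, so P(t) is a convex combination of the control points.

def bernstein_cubic_weights(t):
    s = 1 - t
    return [s**3, 3 * s**2 * t, 3 * s * t**2, t**3]

for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    w = bernstein_cubic_weights(t)
    assert all(wi >= 0 for wi in w)
    assert abs(sum(w) - 1.0) < 1e-12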

Q 4. Explain continuity conditions on curves.
Ans: An alternate method for joining two successive curve sections is to specify conditions for geometric continuity. In this case, we only require parametric derivatives of the two sections to be proportional to each other at their common boundary, instead of equal to each other. Zero-order geometric continuity, described as G0 continuity, is the same as zero-order parametric continuity; that is, the two curve sections must have the same coordinate position at the boundary point. First-order geometric continuity, or G1 continuity, means that the parametric first derivatives are proportional at the intersection of two successive sections. If we denote the parametric position on the curve as P(u), the direction of the tangent vector P'(u), but not necessarily its magnitude, will be the same for two successive curve sections at their joining point under G1 continuity. Second-order geometric continuity, or G2 continuity, means that both the first and second parametric derivatives of the two curve sections are proportional at their boundary; under G2 continuity, the curvatures of two curve sections match at the joining position. A curve generated with geometric continuity conditions is similar to one generated with parametric continuity, but with slight differences in curve shape. The figures below provide a comparison of geometric and parametric continuity. With geometric continuity, the curve is pulled toward the section with the greater tangent vector.

Figure 1: Curves with G1 continuity

Figure 2: Curves with C1 continuity

Q 5. What is a spline? Explain its characteristics.
Ans: A spline is a type of piecewise polynomial function. In mathematics, splines are often used in a type of interpolation known as spline interpolation. Spline curves are also used in computer graphics and computer-aided design (CAD) to approximate complex shapes. We need to know what the essential characteristics of splines are before we consider how to construct a basis system for them. Spline functions are formed by joining polynomials together at fixed points called knots. That is, we divide the interval extending from a lower limit tL to an upper limit tU, over which we wish to approximate a curve, into L+1 sub-intervals separated by L interior boundaries called knots, or sometimes breakpoints. (There is a distinction between these two terms, but we will come to this later.) Consider the simplest case, in which a single breakpoint divides the interval [tL, tU] into two sub-intervals. The spline function is, within each interval, a polynomial of specified degree (the highest power defining the polynomial) or order (the number of coefficients defining the polynomial, which is one more than its degree). Let's use m to designate the order of the polynomial, so that the degree is m - 1. At the interior breakpoint, the two polynomials are required to join smoothly. In the most common case, this means that the derivatives match up to the order one less than the degree. In fact, if they matched up to the derivative whose order equalled the degree, they would be the same polynomial. Thus, a spline function defined in this way has one more degree of freedom than a polynomial extending over the entire interval. For example, let each polynomial be a straight line segment, and therefore of degree one. Then they join at the breakpoint with matching derivatives up to degree 0; in short, they simply join, having identical values at the breakpoint. Since the first polynomial has two degrees of freedom (slope and intercept), and the second, having its value already defined at the breakpoint, is left with only one degree of freedom (slope), the total polygonal line has three degrees of freedom. Correspondingly, if both polynomials are quadratics, then they match both in terms of value and in terms of slope (first derivative) at the breakpoint. The first polynomial has three degrees of freedom, but the second loses two because of the constraint on its value and slope at the breakpoint, and thus retains only one. This leaves a total of four degrees of freedom for the spline function formed in this way, as opposed to three for a quadratic polynomial over the entire interval [tL, tU]. Figure 1 shows these linear and quadratic cases with a single breakpoint.

Q 6. Continuity conditions for Bézier curves and Beta spline curves.
Ans: Bézier Curve
Bézier curves are a class of approximating splines. They are defined using control points, but do not necessarily pass through all the control points. Instead the control points act as "handles" to define the shape of the curve. The general form of a Bézier curve is
P(u) = Σk pk Bk,n(u)

where k = 0, ..., n, pk is the k-th control point, and Bk,n(u) is a Bernstein polynomial:
Bk,n(u) = C(n, k) u^k (1 - u)^(n-k)
where C(n, k) is the binomial coefficient n! / (k! (n-k)!).
Continuity
You should note that each Bézier curve is independent of any other Bézier curve. If we wish two Bézier curves to join with any type of continuity, then we must explicitly position the control points of the second curve so that they bear the appropriate relationship with the control points of the first curve. Any Bézier curve is infinitely differentiable within itself, and is therefore continuous to any degree (Cn-continuous). We therefore only need concern ourselves with continuity across the joins between curves. Assume that we have two Bézier curves of the same order: P(t), defined by (P0, P1, ..., Pn), and Q(t), defined by (Q0, Q1, ..., Qn).
C0-continuity (continuity of position) can be achieved by setting P(1) = Q(0). This gives a formula for Q0 in terms of the Pi's:
Q0 = Pn
Similarly, for C1-continuity we need C0-continuity and P'(1) = Q'(0), giving:
Q1 - Q0 = Pn - Pn-1
Combining these equations gives a formula for Q1 in terms of the Pi's:
Q1 = 2Pn - Pn-1
Continuing in this vein, we find that the requirements for C2-continuity (i.e. C1-continuity and P''(1) = Q''(0)) give:
Q2 - 2Q1 + Q0 = Pn - 2Pn-1 + Pn-2
Combining the equations gives a formula for Q2 in terms of the Pi's:
Q2 = Pn-2 - 4Pn-1 + 4Pn

Q 7. What are B-spline curves? Explain their characteristics.
Ans: B-splines are not used very often in 2D graphics software but are used quite extensively in 3D modeling software. They have an advantage over Bézier curves in that they are smoother and easier to control. B-splines consist entirely of smooth curves, but sharp corners can be introduced by joining two spline curve segments. The continuous curve of a B-spline is defined by control points. While the curve is shaped by the control points, it generally does not pass through them.

Affine Invariance
A B-spline curve has many nice properties. The first one we would like to show is affine invariance.
Translational invariance: suppose we translate the control polygon first, and then produce a new curve based on this newly positioned control polygon. The new curve is exactly the same as the one we get if we translate the old curve, for all the points on the curve, point by point. This is called translation invariance.
Rotation invariance: if we rotate the control polygon to produce a new curve, it will be the same as if we rotate the old curve. This is called rotation invariance.
Scaling invariance: if we scale the control polygon to produce a new curve, it will be the same as if we scale the old curve. This is called scaling invariance.
Invariance under translation, rotation and scaling are common examples of affine invariance. Earlier interpolating methods didn't have this property; therefore, interpolants have to be re-calculated whenever they are transformed, which can be a particularly difficult problem.
Convex Hull Property
A B-spline curve lies within the convex hull formed by its control vertices. This is called the convex hull property. One way to alter the shape of the curve is to keep the same set of basis functions but to change the position of the control vertices. Again, at all times, the curve lies within the convex hull formed by the control vertices. Note that each control vertex has a different effect on the shape of the curve, depending solely on its corresponding basis function. So far we have seen only one evaluation interval. Suppose now we have one extra control vertex and effectively have two intervals, such that the first set of control vertices defines the first piece of the curve together with the first set of basis functions, and the first segment lies entirely within the convex hull formed by the first set of control vertices. The next set of control vertices defines the second piece of the curve together with the second set of basis functions, and the second segment lies entirely within the convex hull formed by the second set of control vertices. Each segment is a polynomial curve, so together they form a piecewise polynomial curve, and they are connected in a smooth way because they share control vertices and their knots overlap. The point where segments are pieced together is often called a joint or a junction point. It is a point in R^2 which always corresponds to a knot position in the parameter space. We can have three segments, four segments, five segments, or more. Let us traverse along the curve. On each evaluation interval, there are exactly K basis functions that are non-zero, where K is the order. The basis functions always add up to 1. As a consequence, on each evaluation interval, there are exactly K control vertices that contribute to the curve definition, and the curve segment lies entirely in the convex hull formed by each successive K vertices.
Locality Properties
The next property we would like to demonstrate is the locality property. Given a vertex associated with the basis function which is non-zero over only the first four intervals, moving the vertex will change only the first four segments of the curve. Another vertex is associated with the basis function which is non-zero over the last four intervals; moving this vertex will change only the last four segments of the curve. Another vertex is associated with a basis function whose support is truncated, so it is non-zero over only the last two intervals, and moving this vertex will change only the last two segments of the curve. In conclusion, the locality of the control vertices means that moving a control vertex will change at most K curve segments, where K is the order, because its associated basis function can have, as its support, at most K intervals on which it is non-zero.
Locality of Knots
The next property we would like to demonstrate is another locality property, the locality of knots. Consider a knot sitting on the support of four basis functions, where its influence on one basis function stops before the last interval of that basis function's support; when we move this knot, only the first three segments are changed. Consider another knot which sits on the support of four basis functions, but whose influence on one basis function starts after the first interval of the support of the next basis function; when we move this knot, only the last three segments are changed. A knot that sits on the support of five basis functions, whose union of supports covers the entire domain of the curve, will change almost the entire curve when it is moved. In conclusion, the locality of the knots means that moving a knot will change at most (K-1) intervals to its left and at most (K-1) intervals to its right, where K is the order.
Continuity
Multiple knots may have an impact on the continuity of the curve. A B-spline curve is infinitely differentiable, except possibly at knot positions. C2 continuity means the second derivatives are continuous; since at every point the curve is a linear combination of C2 continuous basis functions, the curve is at least C2 continuous away from the knots. If we move a knot and make it collapse onto its neighbor, the curve becomes only C1 continuous at this point (C1 continuity means the first derivatives are continuous). It can be proven that this point sits on the line connecting the two vertices, and the curvature is no longer continuous there; the curve can't be C2, instead it is C1 continuous. Although it seems that by stacking up two knots the curve was changed from containing five segments to four segments, it can actually be interpreted as still having five segments, except that one of them is a degenerate, zero-length segment which begins and ends at exactly the same knot position. Now, if we move a knot to make a triple knot, only one basis function contributes at this point; therefore the curve interpolates the vertex, and it is C0 continuous there. With a quadruple knot there is a jump discontinuity, which some people call C-1 continuity: traversing from the left side of the parameter, the curve passes through one vertex, and as soon as we move over to the other side, it jumps to the next vertex and moves on. In conclusion, a B-spline curve of order K is in general C(K-2) continuous. For instance, a cubic B-spline curve is C2 continuous. But at a knot position, the continuity is C(K-M-1), where K is the order and M is the multiplicity of that knot.
Variation Diminishing Property
The next property we would like to illustrate is the variation diminishing property. It says that a B-spline curve is no more wiggly than its control polygon. An intuitive illustration is that a straight line will intersect the curve no more times than it intersects the control polygon. For instance, a straight line may intersect the control polygon three times and the curve also three times; another line may intersect the control polygon twice but the curve only once; and another may intersect the control polygon three times but not intersect the curve at all.

Q 8. Explain the Bézier curve with its properties.
Ans: A Bézier curve is a parametric curve frequently used in computer graphics and related fields. Generalizations of Bézier curves to higher dimensions are called Bézier surfaces, of which the Bézier triangle is a special case.
In vector graphics, Bézier curves are used to model smooth curves that can be scaled indefinitely. "Paths," as they are commonly referred to in image manipulation programs, are combinations of linked Bézier curves. Paths are not bound by the limits of rasterized images and are intuitive to modify. Bézier curves are also used in animation as a tool to control motion.
Properties
1. A Bézier curve is defined on n+1 points P0, ..., Pn and is represented as a parametric polynomial curve of degree n.
2. Bézier curves are invariant under affine transformations, but they are not invariant under projective transformations.

5. The vector tangent to the Bezier curve at the start (stop) is parallel to the line connecting the first two (last two) control points. 6. A Bezier curve will always be completely contained inside of the Convex Hull of the control points. For planar curves, imagine that each control point is a nail pounded into a board. The shape a rubber band would take on when snapped around the control points is the convex hull. For Bezier curves whose control points do not all lie in a common plane, imagine the control points are tiny balls in space, and image the shape a balloon will take on if it collapses over the balls. This shape is the convex hull in that case. In any event, a Bezier curve will always lie entirely inside its planar or volumetric convex hull. 7. Bezier curves exhibit a symmetry property: The same Bezier curve shape is obtained if the control points are specified in the opposite order. The only difference will be the parametric direction of the curve. The direction of increasing parameter reverses when the control points are specified in the reverse order. 8. Adjusting the position of a control point changes the shape of the curve in a "predictable manner". Intuitively, the curve "follows" the control point. There isno local control of this shape modification. Every point on the curve (with the exception of the first and last) move whenever any interior control point is moved. In the image below, see how a curve defined in terms of four control points (the magenta curve) changes when one of its control points is moved to the right, yielding the modified (cyan) curve. Q 9. Explain subdivision method . Ans: At each stage in the globally adaptive algorithm a selected subregion must be subdivided. A simple natural subdivision method is to cut at the midpoint of each edge, but this produces pieces. Another problem with this subdivision is that it is not as adaptive as the method we use because this subdivision is done without any analysis of the integrand, even though the error for the integral over a selected subregion is often due to irregularity of the integrand in only a small number of directions. Our subdivision strategy, which uses a division of the largest error subregion into at most four new pieces, and which takes account of differences in integrand behavior in different directions, allows the algorithm to proceed from one stage to the next in a controlled manner. The subdivision procedure that we use, is a modified version of a procedures first described in and further developed (for = 2 only) in .Once a subregion has been selected for subdivision, the globally adaptive algorithm used by CUBPACK will recommend a subdivision into at most 2, 3 or 4 pieces, depending on the current progress of the integration. Our subdivision procedure then divides the subregion by cutting one, two or three edges of the selected subregion to produce a 2-division, 3-division or 4-division of the selected subregion, respectively. An -simplex has our algorithm chooses subdivision directions from these directions. edge directions, and

Q 9. Explain subdivision method.
Ans: At each stage in a globally adaptive integration algorithm, a selected subregion must be subdivided. A simple, natural subdivision method is to cut at the midpoint of each edge, but this produces many pieces, and it is not very adaptive: the cut is made without any analysis of the integrand, even though the error over a selected subregion is often due to irregularity of the integrand in only a small number of directions. The subdivision strategy used here divides the largest-error subregion into at most four new pieces and takes account of differences in integrand behaviour in different directions, allowing the algorithm to proceed from one stage to the next in a controlled manner. Once a subregion has been selected for subdivision, the globally adaptive algorithm used by CUBPACK recommends a subdivision into 2, 3 or 4 pieces, depending on the current progress of the integration; the subdivision procedure then produces a 2-division, 3-division or 4-division by cutting one, two or three edges of the selected simplex, respectively.
In order for the algorithm to be efficient, a method is needed for selecting good edges for subdivision, and therefore some measure of integrand irregularity is needed. A popular measure that has been used successfully with adaptive algorithms for hyper-rectangles is a fourth difference of the integrand. The simplex algorithm follows this approach, computing a fourth difference of the integrand for each edge direction of the current largest-error simplex, centred at its centroid, together with a scaling factor that biases the choice towards dividing very long edges; all of the points used by these differences lie inside the simplex. For vector integrands, the differences of the individual components are combined into a single measure.
The edges selected for subdivision are those with the largest fourth differences. If a 2-division has been recommended, the chosen edge is bisected, giving two new subregions that share the remaining vertices. If a 4-division has been recommended and the two edges with the largest differences have similar values, the simplex is first divided into two pieces using the 2-division rule, and each of the two new pieces is then halved by bisection of an edge. If a 3-division has been recommended, or a 4-division has been recommended but the largest-difference edge is significantly larger than the others, the algorithm considers two possible 3-divisions: either the chosen edge is trisected, producing three new equal-volume subregions directly, or the edge is cut at an interior point to give two subregions, and the subregion that still contains the original edge is then divided into two pieces by cutting that edge at its midpoint. In either case the final result is three new equal-volume subregions.
Q 11. Explain the Beta-spline.

Ans: The Beta-spline introduced recently by Barsky is a generalization of the uniform cubic B-spline: parametric discontinuities are introduced in such a way as to preserve continuity of the unit tangent and curvature vectors at joints (geometric continuity), while providing bias and tension parameters, independent of the position of control vertices, by which the shape of a curve or surface can be manipulated. A practical method allows different values of the bias and tension to be specified at each point along a curve, the actual shape being determined by substituting these values into the equations for a uniformly-shaped Beta-spline. The resulting piecewise polynomial curves and surfaces have several useful properties; an important characteristic is their local response when either the position of a control vertex or the value of a shape parameter is altered. There is also a conceptually simple and obvious way to directly generalize the equations defining the uniformly-shaped Beta-splines so that each shape parameter may have a distinct value at every joint. Unfortunately, the curves which result lack many desirable properties.
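For reference, the Beta-spline reduces to the uniform cubic B-spline when the bias is 1 and the tension is 0. The following is a rough C++ sketch (an illustration, not Barsky's formulation) of evaluating one segment of the uniform cubic B-spline on control vertices P0..P3:

#include <cstdio>

struct Point { double x, y; };

// One segment of a uniform cubic B-spline, t in [0,1].
// Q(t) = 1/6 [ (1-t)^3 P0 + (3t^3-6t^2+4) P1 + (-3t^3+3t^2+3t+1) P2 + t^3 P3 ]
Point bsplineSegment(Point p0, Point p1, Point p2, Point p3, double t) {
    double t2 = t * t, t3 = t2 * t;
    double b0 = (1 - t) * (1 - t) * (1 - t) / 6.0;
    double b1 = (3 * t3 - 6 * t2 + 4) / 6.0;
    double b2 = (-3 * t3 + 3 * t2 + 3 * t + 1) / 6.0;
    double b3 = t3 / 6.0;
    return { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
             b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y };
}

int main() {
    Point p0{0, 0}, p1{1, 2}, p2{3, 2}, p3{4, 0};
    for (int i = 0; i <= 4; ++i) {
        Point q = bsplineSegment(p0, p1, p2, p3, i / 4.0);
        std::printf("t=%.2f  (%.3f, %.3f)\n", i / 4.0, q.x, q.y);
    }
    return 0;
}

The bias and tension parameters of the Beta-spline deform these basis functions while preserving geometric continuity at the joints.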

Unit 3
Q 1. What are the different types of projection?
Ans: Projections are classified as follows:
(1) Parallel projection
- Orthographic projection (multiview and axonometric)
- Oblique projection (Cavalier and Cabinet)
(2) Perspective projection
- One-point, two-point and three-point perspective
Q 2. What do you mean by vanishing point?
Ans: The image, in the plane of a photograph, of the point toward which a system of parallel lines in the object space converges. Since any system of parallel lines in the object space meets at infinity, the image of the meeting point is formed by the ray through the perspective centre that is parallel to the system.
Q 3. What is the major difference between perspective and parallel projection?
Ans: In parallel projection there is no vanishing point, so opposite edges (front and back) of a rectangular box will be the same length. Perspective makes far-away objects smaller than closer ones, but this is not the case in parallel projection.
Q 4. Explain the Phong shading method.
Ans: The basic Phong shading technique tends to be very slow. However, it can look really good, and so it has been the subject of much optimisation. It's only possible to optimise the original algorithm so far: no matter how much fixed-point maths and lookup tables you throw at it, it still runs too slowly to make it a realistic option for realtime graphics. A document has been circulating for some time under the name of OTMPHONG.DOC. It suggests that instead of interpolating the normal vector across the polygon, you should interpolate the angle between the light source and the normal. This is a nice idea except for the fact that it doesn't really work; it is essentially a repeat of Gouraud shading, and of little use to anyone. Except that this gave me an idea which I later discovered people had already suggested. I have heard this explained in several ways, all of which are essentially equivalent. I considered that rather than simply interpolate one angle across the polygon, it might be possible to interpolate two. This would be very similar to interpolating a vector. Now, rather than do some horrid calculations with these two angles, you might as well consider them to be texture coordinates and look up the result of the calculations in a texture map. This is really a big step in the life of realtime graphics. No longer must we put up with dodgy Gouraud shading; we can now enjoy the full benefits of Phong shading from the comfort of our home computers. Phong shading is now just linear texture mapping, which, thanks to the likes of Michael Abrash, can be done very quickly indeed. So how do we actually do this? I haven't done a vast amount of investigation here, and the only way I have come up with so far has been a teeny bit slow, although I am sure there is a faster method. If anyone knows one, I would be happy to know what it is. So, for each vertex of the polygon to be rendered, you will need to calculate the coordinates into the Phong map. You have a polygon. For this polygon, define two vectors which are at right angles to each other and to the surface normal. Call them V and H. These two vectors represent the (u,v) coordinates of the Phong map. Now we'll look at this polygon at an angle so the vector from the light source to the vertex can be seen. This is vector L. The aim is to calculate the coordinates


(u,v) in terms of V,H and L The algorithm is a very simple one. If the phong map is 256x256 and centered then: u = ( V . L ) * 128 + 127 v = ( H . L ) * 128 + 127 do this for each vertex, and then map the Phong map onto it, and there you have one nicely phong shaded polygon. Since this can be slow, there are various ways you can speed it up if you don't mind a little loss of freedom. If you assume that the light source is at the same place as the camera, then you can ignore the V and H vectors altogether. Instead take the X and Y components of the normal vector, multiply by 128 and add 127 (assuming that is that the magnitude of the normal vector is 1). Alternatively, you can take the light source as being like an inverse camera. Transform the object as if the light source were the camera, and calculate the phong as in the previous paragraph. Q 5. Difference between 2D texture mapping and bump mapping? Ans: A texture map is applied (mapped) to the surface of a shape or polygon. This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate (which in the 2d case is also known as a UV coordinate) either via explicit assignment or by procedural definition. Image sampling locations are then interpolated across the face of a polygon to produce a visual result that seems to have more richness than could otherwise be achieved with a limited number of polygons. Multitexturing is the use of more than one texture at a time on a polygon. For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface, such as tree bark or rough concrete, that takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in recent video games as graphics hardware has become powerful enough to accommodate it in real-time. Bump mapping is used to add detail to an image without increasing the number of polygons. Bump mapping relies on light-reflection calculations to create small bumps on the surface of the object in order to give it texture; the surface of the object is not changed. Bumps are applied by matching up a series of grayscale pixels with colored pixels on the rendered, colored object. Lighter grayscale pixels create a sense of maximum relief or maximum indentation; darker pixels have less effect. A computer must contain a supporting 3D graphics card when it runs an application that has been coded to include bump maps. If the graphics card does not support bump mapping, then the bumps won't be seen. In the case of computer games, the programmer usually will code an alternate version that doesn't use bump maps. This version will look flatter and less real. Q 5. Explain painters algorithm. Ans: The painter's algorithm, also known as a priority fill, is one of the simplest solutions to the visibility problem in 3D computer graphics. When projecting a 3D scene onto a 2D plane, it is necessary at some point to decide which polygons are visible, and which are hidden. The name "painter's algorithm" refers to the technique employed by many painters of painting distant parts of a scene before parts which are nearer thereby covering some areas of distant parts. 
The painter's algorithm sorts all the polygons in a scene by their depth and then paints them in this order, farthest to closest. It will paint over the parts that are normally not visible thus solving the visibility problem at the cost of having painted redundant areas of distant objects. Visible Surface Determination: Painter's Algorithm The painter's algorithm is based on depth sorting and is a combined object and image space algorithm. It is as follows: 1. Sort all polygons according to z value (object space); Simplest to use maximum z value 2. Draw polygons from back (maximum z) to front (minimum z) This can be used for wireframe drawings as well by the following:

1. Draw solid polygons using Polyscan (in the background color) followed by Polyline (polygon color).
2. Polyscan erases polygons behind it, then Polyline draws the new polygon.
Problems with the simple Painter's algorithm:
1. Look at cases where it doesn't work correctly. S has a greater depth than S' and so will be drawn first. But S' should be drawn first, since it is obscured by S. We must somehow reorder S and S'.

We will perform a series of tests to determine if two polygons need to be reordered. If the polygons fail a test, then the next test must be performed. If the polygons fail all tests, then they are reordered. The initial tests are computationally cheap, but the later tests are more expensive. So look at the revised algorithm to test for possible reordering:
- Store Zmax and Zmin for each polygon.
- Sort on Zmax.
- Start with the polygon with the greatest depth (S).
- Compare S with all other polygons (P) to see if there is any depth overlap (Test 0): if S.Zmin <= P.Zmax then there is depth overlap.

If there is depth overlap (Test 0 failed), we may need to reorder the polygons. Next (Test 1), check whether the polygons overlap in the xy plane (use bounding rectangles).

Do the above tests for x and y. If we have case 1 or 2 then we are done (Test 1 passed), but for case 3 we need further testing (Test 1 failed). The next test (Test 2) checks whether polygon S is "outside" of polygon P (relative to the view plane). Remember: a point (x, y, z) is "outside" of a plane if we put that point into the plane equation and get Ax + By + Cz + D > 0. So to test for S outside of P, put all vertices of S into the plane equation for P and check that all vertices give a result that is > 0, i.e. Ax' + By' + Cz' + D > 0, where x', y', z' are the vertices of S and A, B, C, D are from the plane equation of P (choose the normal pointing away from the view plane, since "outside" is defined with respect to the view plane).

If the test of S "outside" of P fails, then test whether P is "inside" of S (again with respect to the view plane) (Test 3). Compute the plane equation of S and put in all the vertices of P; if all vertices of P are inside of S, then P is inside. The inside test is Ax' + By' + Cz' + D < 0, where x', y', z' are the coordinates of the P vertices.

Then we do the fourth test and check for overlap of the actual projections in the xy plane, since the bounding rectangles may overlap even though the actual projections do not. For example, look at the projection of two polygons in the xy plane; there are two possible cases.

All four tests have failed, therefore interchange P and S and scan-convert P before S. But before we scan-convert P, we must test P against all other polygons. Look at an example of multiple interchanges:

Test S1 against S2: it fails all tests, so reorder to S2, S1, S3. Test S2 against S3: it fails all tests, so reorder to S3, S2, S1.
Possible problem: polygons that alternately obscure one another will continuously reorder. One solution is to flag a reordered polygon and subdivide it into several smaller polygons.
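A minimal C++ sketch of the basic depth-sort step described above -- sorting on maximum z and painting from back to front -- is shown below; the polygon representation is invented for the example, and the reordering tests 0-4 are omitted.

#include <algorithm>
#include <cstdio>
#include <vector>

struct Polygon {
    const char* name;
    double zMax;            // depth of the farthest vertex (larger z = farther here)
};

int main() {
    std::vector<Polygon> scene = { {"floor", 9.0}, {"table", 5.0}, {"cup", 2.0} };

    // 1. Sort all polygons on maximum z, farthest first.
    std::sort(scene.begin(), scene.end(),
              [](const Polygon& a, const Polygon& b) { return a.zMax > b.zMax; });

    // 2. Paint from back to front; nearer polygons overwrite farther ones.
    //    (The overlap tests above are needed before this order is actually correct.)
    for (const Polygon& p : scene)
        std::printf("paint %s (zMax=%.1f)\n", p.name, p.zMax);
    return 0;
}

In a full implementation the tests above must be run before painting, and cyclically overlapping polygons must be split as noted.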

Q 6. Explain Warnock's algorithm.
Ans: Warnock's Area Subdivision Algorithm: John Warnock proposed an elegant divide-and-conquer hidden surface algorithm. The algorithm relies on the area coherence of polygons to resolve the visibility of many polygons in image space. Depth sorting is simplified and performed only in those cases involving image-space overlap.

Warnock's algorithm classifies polygons with respect to the current viewing window into trivial or nontrivial cases. Trivial cases are easily handled. For nontrivial cases, the current viewing window is recursively divided into four equal subwindows, each of which is then used for reclassifying remaining polygons. This recursive procedure is continued until all polygons are trivially classified or until the current window reaches the pixel resolution of the screen. At that point the algorithm reverts to a simple z-depth sort of the intersected polygons, and the pixel colour becomes that of the polygon closest to the viewing screen. All polygons are readily classified with respect to the current window into four categories: surrounding, intersecting, inside and outside.
The classification scheme is used to identify certain trivial cases that are easily handled. These "easy choice tests" and the resulting actions include:
1. All polygons are outside the window: set the colour/intensity of the window equal to the background colour.
2. There is only one inside or intersecting polygon: fill the window area with the background colour, then render the polygon.
3. There is only one surrounding polygon: fill the window with the polygon's colour.
4. More than one polygon intersects, is inside, or surrounds, and at least one is a surrounding polygon:
a. Is one surrounding polygon, P, in front of all others? If so, paint the window with the colour of P. The test is: calculate the z-depths for each polygon plane at the corners of the current window; if all four z-depths of the P plane are smaller than any z-depths of the other polygons in the window, then P is in front.
If the easy choice tests do not classify the polygon configuration into one of these four trivial action cases, the algorithm recurses by dividing the current window into four equal subwindows. Rather than revert to the complex geometrical tests of the Painter's algorithm, Warnock's algorithm simply makes the easy choices and invokes recursion for non-trivial cases. A noteworthy feature of Warnock's algorithm concerns how the divide-and-conquer area subdivision preserves area coherence: all polygons classified as surrounding and outside retain this classification with respect to all subwindows generated by recursion. This aspect of the algorithm is the basis for its efficiency. The algorithm may be classified as a radix-four quick sort. Windows of 1024 x 1024 pixels may be resolved to the single-pixel level with only ten recursive calls of the algorithm. While the original Warnock algorithm had the advantages of elegance and simplicity, the performance of the area subdivision technique can be improved with alternative subdivision strategies. Some of these include:
1. Divide the area using an enclosed polygon vertex to set the dividing boundary.
2. Sort polygons by minimum z and use the front polygon as the window boundary.
Q 7. Explain perspective projection in detail.
Ans: Perspective projection is a type of projection where three-dimensional objects are not projected along parallel lines, but along lines emerging from a single point. This has the effect that distant objects appear smaller than nearer objects. It also means that lines which are parallel in nature appear to intersect in the projected image; for example, if railway tracks are pictured with perspective projection, they appear to converge towards a single point, called the vanishing point. Photographic lenses and the human eye work in the same way, therefore perspective projection looks most realistic. The perspective projection requires greater definition. A conceptual aid to understanding the mechanics

of this projection involves treating the 2D projection as being viewed through a camera viewfinder. The camera's position, orientation, and field of view control the behavior of the projection transformation. The following variables are defined to describe this transformation:
- a = (ax, ay, az): the point in 3D space that is to be projected.
- c = (cx, cy, cz): the location of the camera.
- theta = (thetax, thetay, thetaz): the rotation (orientation) of the camera. When c = (0, 0, 0) and theta = (0, 0, 0), the 3D vector (1, 2, 0) is projected to the 2D vector (1, 2).
- e = (ex, ey, ez): the viewer's position relative to the display surface.
Which results in:
- b = (bx, by): the 2D projection of a.
First, we define a point d = (dx, dy, dz) as a translation of point a into a coordinate system defined by c. This is achieved by subtracting c from a and then applying a rotation by -theta to the result. This transformation is often called a camera transform, and can be expressed in terms of rotations about the x, y and z axes (these calculations assume that the axes are ordered as a left-handed system of axes):
d = Rx(-thetax) * Ry(-thetay) * Rz(-thetaz) * (a - c)
This representation corresponds to rotating by three Euler angles (more properly, Tait-Bryan angles) using the xyz convention, which can be interpreted either as "rotate about the extrinsic axes (axes of the scene) in the order z, y, x (reading right-to-left)" or "rotate about the intrinsic axes (axes of the camera) in the order x, y, z (reading left-to-right)". Note that if the camera is not rotated (theta = (0, 0, 0)), then the rotation matrices drop out (as identities) and this reduces to simply a shift: d = a - c.
This transformed point can then be projected onto the 2D plane using the formula (here, x/y is used as the projection plane; the literature may also use x/z):
bx = (ez / dz) * dx + ex
by = (ez / dz) * dy + ey
Equivalently, in matrix form using homogeneous coordinates, (dx, dy, dz, 1) is multiplied by a matrix whose last row is (0, 0, 1/ez, 0); an argument using similar triangles then leads to division by the homogeneous coordinate, giving the same bx and by.
The distance of the viewer from the display surface, ez, directly relates to the field of view: alpha = 2 * arctan(1 / ez) is the viewed angle. (Note: this assumes that you map the points (-1, 1) and (1, 1) to the corners of your viewing surface.)
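A minimal C++ sketch of the camera transform and perspective divide outlined above, assuming the same conventions (camera at c, orientation theta applied in z, y, x order, viewer offset e); it is an illustration, not a complete renderer.

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// Rotate v by -theta about the z, then y, then x axis (the camera transform order above).
Vec3 rotate(Vec3 v, Vec3 theta) {
    double cz = std::cos(theta.z), sz = std::sin(theta.z);
    Vec3 a{ cz * v.x + sz * v.y, -sz * v.x + cz * v.y, v.z };          // Rz(-thetaz)
    double cy = std::cos(theta.y), sy = std::sin(theta.y);
    Vec3 b{ cy * a.x - sy * a.z, a.y, sy * a.x + cy * a.z };           // Ry(-thetay)
    double cx = std::cos(theta.x), sx = std::sin(theta.x);
    return { b.x, cx * b.y + sx * b.z, -sx * b.y + cx * b.z };         // Rx(-thetax)
}

// Project point a seen from a camera at c with orientation theta, viewer offset e.
bool project(Vec3 a, Vec3 c, Vec3 theta, Vec3 e, double& bx, double& by) {
    Vec3 d = rotate({a.x - c.x, a.y - c.y, a.z - c.z}, theta);
    if (d.z <= 0.0) return false;          // behind the camera: needs clipping
    bx = (e.z / d.z) * d.x + e.x;          // perspective divide
    by = (e.z / d.z) * d.y + e.y;
    return true;
}

int main() {
    double bx, by;
    // 90-degree field of view: e.z = 1 / tan(alpha / 2) = 1.
    if (project({1, 2, 5}, {0, 0, 0}, {0, 0, 0}, {0, 0, 1}, bx, by))
        std::printf("b = (%.3f, %.3f)\n", bx, by);
    return 0;
}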

Subsequent clipping and scaling operations may be necessary to map the 2D plane onto any particular display medium.
Q 8. Explain the scan-line method for hidden surface removal.

Ans: Scan-Line Algorithm The scan-line algorithm is another image-space algorithm. It processes the image one scan-line at a time rather than one pixel at a time. By using area coherence of the polygon, the processing efficiency is improved over the pixel oriented method. Using an active edge table, the scan-line algorithm keeps track of where the projection beam is at any given time during the scan-line sweep. When it enters the projection of a polygon, an IN flag goes on, and the beam switches from the background colour to the colour of the polygon. After the beam leaves the polygon's edge, the colour switches back to background colour. To this point, no depth information need be calculated at all. However, when the scan-line beam finds itself in two or more polygons, it becomes necessary to perform a z-depth sort and select the colour of the nearest polygon as the painting colour. Accurate bookkeeping is very important for the scan-line algorithm. We assume the scene is defined by at least a polygon table containing the (A, B, C, D) coefficients of the plane of each polygon, intensity/colour information, and pointers to an edge table specifying the bounding lines of the polygon. The edge table contains the coordinates of the two end points, pointers to the polygon table to indicate which polygons the edge bounds, and the inverse slope of the x-y projection of the line for use with scan-line algorithms. In addition to these two standard data structures, the scan-line algorithm requires an active edge list that keeps track of which edges a given scan line intersects during its sweep. The active edge list should be sorted in order of increasing x at the point of intersection with the scan line. The active edge list is dynamic, growing and shrinking as the scan line progresses down the screen. In Figure scan-line S1 must deal only with the left-hand object. S2 must plot both objects, but there is no depth conflict. S3 must resolve the relative z-depth of both objects in the region between edge E5 and E3 . The right-hand object appears closer.

Figure 9.1: Scan-line hidden surface algorithm. The active edge list for scan line S1 contains edges E1 and E2 . From the left edge of the viewport to edge E1 , the beam paints the background colour. At edge E1 , the IN flag goes up for the left-hand polygon, and the beam switches to its colour until it crosses edge E2 , at which point the IN flag goes down and the colour returns to background. For scan-line S2 , the active edge list contains E1 , E3 , E5 , and E6 . The IN flag goes up and down twice in sequence during this scan. Each time it goes up pointers identify the appropriate polygon and look up the colour to use in painting the polygon. For scan line S3 , the active edge list contains the same edges as for S2 , but the order is altered, namely E1 , E5 , E3 , E6 . Now the question of relative z-depth first appears. The IN flag goes up once when we cross E1 and again when we cross E5 , indicating that the projector is piercing two polygons. Now the coefficients of each plane and the (x,y) of the E5 edge are used to compute the depth of both planes. In the example shown the z-depth of the right-hand plane was smaller, indicating it is closer to the screen. Therefore the painting colour switches to the right-hand polygon colour which it keeps until edge E6 . Note that the technique is readily extended to three or more overlapping polygons and that the relative

depths of overlapping polygons must be calculated only when the IN flag goes up for a new polygon. Since this occurrence is far less frequent than the number of pixels per scan line, the scan-line algorithm is more computationally efficient than the z-buffer algorithm. The scan-line hidden surface removal algorithm can be summarized as: 1. Establish the necessary data structures. a. Polygon table with coefficients, colour, and edge pointers. b. Edge table with line end points, inverse slope, and polygon pointers. c. Active edge list, sorted in order of increasing x. d. An IN flag for each polygon. 2. Repeat for all scan lines: a. Update active edge list by sorting edge table against scan line y value. b. Scan across, using background colour, until an IN flag goes on. c. When 1 polygon flag is on for surface P , enter intensity (colour) into refresh buffer. d. When 2 or more surface flags are on, do depth sort and use intensity In for surface n with minimum z-depth. e. Use coherence of planes to repeat for next scan line. The scan-line algorithm for hidden surface removal is well designed to take advantage of the area coherence of polygons. As long as the active edge list remains constant from one scan to the next, the relative structure and orientation of the polygons painted during that scan does not change. This means that we can "remember" the relative position of overlapping polygons and need not recompute the zdepth when two or more IN flags go on. By taking advantage of this coherence we save a great deal of computation. Q 9. Explain Radiosity model in detail. Ans: Radiosity methods accurately simulate diffuse indirect illumination and shadows, and thus are used to generate realistic-looking lighting models for a variety of virtual environments, including building interiors. A difficult challenge for radiosity systems is managing the algorithmic complexity (O(n^2)) and massive data storage requirements (GBs) typical in such computations. We have developed a radiosity system that computes radiosity solutions for very large polygonal models. The first innovation in this system is that it uses visibility oracles and hierarchical methods to: 1) reduce the number of polygon-polygon interactions considered, and 2) partition the computation into a sequence of subcomputations each requiring a relatively small working set. Unlike any other system, the radiosity solver stores the evolving solution in a disk-resident database and loads only the working set for the current subcomputation into memory as the computation proceeds. Subcomputations are ordered so as to minimize the impact of database I/O operations. Using these techniques, the system is able to cull over 99.999999% of the potential interactions and requires only 0.24% of the database (14.5MB) to be stored in memory at any given time during experiments with large architectural models. The second innovation is that it supports execution of multiple hierarchical radiosity solvers working on the same radiosity solution in parallel. The system is based on a group iterative approach that repeatedly: 1) partitions patches into groups, 2) distributes a copy of each group to a slave processor which updates radiosities for all patches in that group, and 3) merges the updates back into a master solution. The primary advantage of this approach is that separate instantiations of a hierarchical radiosity solver can gather radiosity to patches in separate groups in parallel with very little contention or communication overhead. 
This feature, along with automatic partitioning and dynamic load balancing algorithms, enables our implemented system to achieve significant speedups running on moderate numbers of workstations connected by a local area network. This system has been used to compute the radiosity solution for a furnished model of Soda Hall. The model represents five floors of a large building with approximately 250 rooms containing furniture. It was constructed with 14,234 clusters comprising 280,836 patches, 8,542 of which were emitters and served as the only light sources. The total area of all surfaces was 75,946,664 square inches. Three complete iterations were made through all patches using an average of 4.96 slave processors in 168 hours. The entire computation generated 7,649,958 mesh elements and evaluated 374,845,618 element-to-element links.

Ans: The easiest way to achieve hidden-surface removal is to use the depth buffer (sometimes called a z-buffer). A depth buffer works by associating a depth, or distance from the viewpoint, with each pixel on the window. Initially, the depth values for all pixels are set to the largest possible distance, and then the objects in the scene are drawn in any order. Graphical calculations in hardware or software convert each surface that's drawn to a set of pixels on the window where the surface will appear if it isn't obscured by something else. In addition, the distance from the eye is computed. With depth buffering enabled, before each pixel is drawn, a comparison is done with the depth value already stored at the pixel. If the new pixel is closer to the eye than what's there, the new pixel's colour and depth values replace those that are currently written into the pixel. If the new pixel's depth is greater than what's currently there, the new pixel would be obscured, and the colour and depth information for the incoming pixel is discarded. Since information is discarded rather than used for drawing, hidden-surface removal can increase your performance.
Advantages of the z-buffer algorithm: it always works and is simple to implement.
Disadvantages: it may paint the same pixel several times, and computing the colour of a pixel may be expensive, so the colour might be computed only if the pixel passes the z-buffer test. One might also sort the polygons and scan front to back (the reverse of the painter's algorithm); this still tests all the polygons, but avoids the expense of computing the intensity and writing it to the frame buffer. Memory requirements are large: if a real (4-byte) value is used, then for 640 x 480 resolution at 4 bytes/pixel the buffer needs 1,228,800 bytes; a 24-bit z-buffer needs 921,600 bytes and a 16-bit z-buffer 614,400 bytes. Note: for VGA mode 19 (320 x 200, of which only 240 x 200 is used) only 96,000 bytes are needed for a 16-bit z-buffer. However, additional z-buffers may be needed for special effects, e.g. shadows.
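A minimal C++ sketch of the per-pixel depth test described above; the resolution, colour encoding and helper names are arbitrary choices for the example.

#include <cstdio>
#include <limits>
#include <vector>

const int W = 640, H = 480;

struct FrameBuffers {
    std::vector<unsigned> color = std::vector<unsigned>(W * H, 0x000000u);  // background
    std::vector<float> depth = std::vector<float>(W * H, std::numeric_limits<float>::max());
};

// Plot one pixel of a surface: keep it only if it is nearer than what is stored.
void plot(FrameBuffers& fb, int x, int y, float z, unsigned rgb) {
    int i = y * W + x;
    if (z < fb.depth[i]) {      // nearer than anything drawn so far at this pixel
        fb.depth[i] = z;
        fb.color[i] = rgb;
    }                           // otherwise the incoming pixel is obscured: discard it
}

int main() {
    FrameBuffers fb;
    plot(fb, 100, 100, 5.0f, 0xff0000u);   // far red surface
    plot(fb, 100, 100, 2.0f, 0x00ff00u);   // nearer green surface wins
    plot(fb, 100, 100, 7.0f, 0x0000ffu);   // farther blue surface is discarded
    std::printf("pixel colour = 0x%06x\n", fb.color[100 * W + 100]);   // prints 0x00ff00
    return 0;
}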

Unit 4
Q 1. What is the phenomenon of specular reflection?
Ans: Specular reflection is reflection that is stronger in one viewing direction, i.e., there is a bright spot, called a specular highlight. This is readily apparent on shiny surfaces. For an ideal reflector, such as a mirror, the angle of incidence equals the angle of specular reflection.
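As a small illustration (not from the original notes), the mirror-reflection direction used for specular highlights can be computed as R = 2(N . L)N - L, where N is the unit surface normal and L the unit direction towards the light:

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Ideal (mirror) reflection of the light direction L about the unit normal N:
// the angle of incidence equals the angle of reflection.
Vec3 reflect(Vec3 N, Vec3 L) {
    double d = 2.0 * dot(N, L);
    return { d * N.x - L.x, d * N.y - L.y, d * N.z - L.z };
}

int main() {
    Vec3 N{0, 1, 0};                                  // surface facing up
    Vec3 L{std::sqrt(0.5), std::sqrt(0.5), 0};        // light direction at 45 degrees
    Vec3 R = reflect(N, L);
    std::printf("R = (%.3f, %.3f, %.3f)\n", R.x, R.y, R.z);   // (-0.707, 0.707, 0)
    return 0;
}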


Q 2. Explain the term illumination.
Ans: The total illumination of each pixel to be displayed by a real-time computer image generator is determined, for at least one source illuminating a scene to be displayed, by storing in an observer depth buffer data signals representing those portions of object polygons visible to the observer in each pixel of the display, and storing in a source depth buffer, associated with each of the scene-illuminating light sources, data signals representing the illumination intensity received by each polygon pixel viewable from that associated source.
Q 3. Define Hue and Saturation.
Ans: Hue is one of the main properties of a color, defined technically as "the degree to which a stimulus can be described as similar to or different from stimuli that are described as red, green, blue, and yellow". The other main correlatives of color appearance are colorfulness, chroma, saturation, lightness, and brightness. Saturation is one of three coordinates in the HSL and HSV color spaces. Note that virtually all computer software implementing these spaces uses a very rough approximation to calculate the value it calls "saturation", such as the formula described for HSV, and this value has little, if anything, to do with the description shown here.
Q 4. Define Lambert's law.
Ans: The law that the illumination of a surface by a light ray varies as the cosine of the angle of incidence between the normal to the surface and the incident ray. Also, the law that the luminous intensity in a given direction radiated or reflected by a perfectly diffusing plane surface varies as the cosine of the angle between that direction and the normal to the surface.
Q 5. Explain the Phong illumination model.
Ans: An illumination model developed by Bui Tuong Phong in 1975 and still very popular. In particular, it is designed to model specular reflection. A purely specular surface (like a really good mirror -- this should come as no surprise, since "specular" derives from the Latin for "mirror", speculum) abides by the law of reflection: the angle of incidence equals the angle of reflection. Now, imagine that you have a point light source. The only way that you'd be able to see the light source is if the light source is reflected directly into your eye. But real surfaces don't work like that. They "blur" a little. Sure, you get most of the light reflected along the angle of reflection, but say you're 3 degrees from the angle of reflection, then you still get some intensity. How can you model this? Well, the intensity depends on the vector from the surface of interest to your eye -- this is V -- and it depends on the reflected ray R. In particular it depends on the angle between them. Modelling it properly is one thing (a la the Cook-Torrance model), but we're looking for something that can be implemented efficiently. So what do you do? Answer: you hack a solution. The Phong illumination equation is:

I = Il * ks * (V . R)^n
V . R is the dot product of the vector to the viewer and the reflected ray. Really, though, this is a way of computing the cosine of the angle between them. Il is the intensity of the light source. ks is the coefficient of specular reflection -- how much of the specular light is reflected. n is the Phong exponent.
Q 6. What do you mean by shading model? Explain the Gouraud model.
Ans: A shading model is a method of applying a local illumination model to an object, usually an object modelled as a polygon mesh. There are four shading models we will consider: Constant, Faceted, Gouraud, and Phong. They give increasingly good images and are increasingly computationally expensive.
Gouraud shading model: the second shading model, Gouraud shading, computes an intensity for each vertex and then interpolates the computed intensities across the polygons. Gouraud shading performs a bi-linear interpolation of the intensities down and then across scan lines. It thus eliminates the sharp changes at polygon boundaries. The algorithm is as follows:
1. Compute a normal N for each vertex of the polygon.
2. From N compute an intensity I for each vertex of the polygon.
3. By bi-linear interpolation compute an intensity Ii for each pixel.
4. Paint the pixel with the shade corresponding to Ii.

How do we compute N for a vertex? Let N = the average of the normals of the polygons which include the vertex. Note that 4 sided polygons have 4 neighbors and triangles have 6 neighbors. We can find the neighbors for a particular vertex by searching the polygons and including any polygons which include the vertex. Now look at performing the bi-linear intensity interpolation. This is the same as the bi-linear interpolation for the z depth of a pixel (used with the z buffer visible surface algorithm).
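A minimal C++ sketch of the interpolation step across one scan line, assuming the intensities at the left and right span edges have already been obtained from the vertex intensities; the numbers are invented for the example.

#include <cstdio>

// Linear interpolation of intensities across one scan line, as Gouraud shading
// does once per-vertex intensities have been interpolated down the polygon edges.
double lerp(double a, double b, double t) { return a + t * (b - a); }

int main() {
    // Intensities already computed at the left and right edge of the span.
    double iLeft = 0.2, iRight = 0.9;
    int xLeft = 10, xRight = 20;

    for (int x = xLeft; x <= xRight; ++x) {
        double t = double(x - xLeft) / double(xRight - xLeft);
        double intensity = lerp(iLeft, iRight, t);   // one addition per pixel in practice
        std::printf("x=%d  I=%.3f\n", x, intensity);
    }
    return 0;
}

In an incremental implementation the division is done once per span and each pixel only adds a constant step to the previous intensity, which is where the "one floating point addition per pixel" cost mentioned below comes from.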

Advantages of Gouraud shading: Gouraud shading gives a much better image than faceted shading (the facets are no longer visible). It is not too computationally expensive: one floating-point addition (for each color) for each pixel (the mapping to actual display registers requires a floating-point multiplication).
Disadvantages of Gouraud shading: it eliminates creases that you may want to preserve, e.g. in a cube. We can modify the data structure to prevent this by storing each physical vertex three times, i.e. a different logical vertex for each polygon. Here is a data structure for a cube that will keep the edges (see the sketch below):
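One possible layout is sketched below in C++ (an assumption for illustration, not necessarily the exact structure intended): each physical corner of the cube is duplicated once per incident face, so each copy can carry the normal of its own face and the creases at the edges survive.

#include <array>

// 6 faces x 4 logical vertices. Each physical corner of the cube appears in three
// faces, so it is stored three times, each copy carrying the normal of its own face.
// Intensities are then averaged per face only, and the cube keeps its sharp edges.
struct Vertex {
    float position[3];
    float normal[3];     // the face normal, not an averaged vertex normal
};

struct Face {
    std::array<Vertex, 4> corners;
};

struct Cube {
    std::array<Face, 6> faces;   // 24 logical vertices for 8 physical corners
};

int main() {
    Cube cube{};
    (void)cube;          // placeholder: a real program would fill and render it
    return 0;
}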

Q 7. Describe parallel and perspective projection with its type. Ans: TYPES OF PARALLEL PROJECTION Various special cases of parallel projections are given special names because of their frequent occurrence in practice (e.g. in engineering or architectural drawings). The book discusses these, but is somewhat confusing in several respects. For that reason I am expanding on the book with the following discussion. Again, this applies only to parallel projections. Definitions: Orthographic Projections: DOP is perpendicular to the projection plane. If Projection plane is one of xy, yz or zx these are called front, top or side elevation (not necessarily in that order - it depends on which waythe object is oriented). Otherwise (i.e. if projection plane is not perpendicular to a coord axis)they are called axonometric orthographic projections. A special case of axonometric orthographic is isometric projection where projection plane normal (and DOP) makes equal angles with all three axes. Since there are eight octants in 3D, there are 8 possible cases of isometric projections depending on which of the 8 octants the DOP points into. Oblique Projections: DOP is not perpendicular to the projection plane. At page 235 the book says (sentence 1) that oblique projections have projection plane normal to a principal axis. This is not necessary at all and is clearly an error. Special cases of Oblique Projections are defined as follows Cavalier Projection: DOP is at 45 degrees to projection plane normal. Cabinet Projection: DOP is at 63.4 degrees to projection plane normal. The purpose of the special cases is to choose views that show more than just a face, while at the same time preserving some of the length information in a view. Cavalier: In the cavalier case, lengths perpendicular to the projection plane are preserved, and as in all parallel projection, lengths parallel to the projection plane are preserved. Thus in this case lengths in all three directions are preserved. For example the projection of a unit cube using a cavalier projection will be an object whose sides are all of length 1, although angles will not be 90 of course. There are an infinite number of different cavalier projection corresponding to different DOP directions that all make a 45 degree angle with the projection plane. You can think of the DOP as being allowed to rotate around on a cone at a 45 degree vertex angle at the origin. Note that to achieve equal length between parallel and perpendicular lines, the DOP will need to treat these directions equally, or in other words the angle between DOP and each must be equal. This can occur only if the angles are 45. Hence the above definition for cavalier. Cabinet: In the cabinet case, lines perpendicular to the projection plane project at 1/2 their length, while lines parallel to plane project at full length. Thus in this case their is a foreshortening which is more realistic to the eye, while at the same time allowing measurement to be made from a drawing, if one remembers the factor 2 scaling in the one direction. For example the projection of a unit cube using a cabinet projection will be an object 2 of whose sides are of length 1, with one side of length 1/2. Angles will not be 90 between these sides of course. There are an infinite number of different cabinet projections corresponding to different DOP directions that all make a 63.4 degree angle with the projection plane. You can think of the DOP as being allowed to rotate around on a cone at a 63.4 degree vertex angle at the origin. 
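As a small illustration (an assumption about conventions, not text from the notes), the usual oblique projection onto the z = 0 plane maps (x, y, z) to (x + L*z*cos a, y + L*z*sin a), where L = 1 gives a cavalier projection, L = 1/2 a cabinet projection, and a is the angle the receding lines make in the drawing (commonly 30 or 45 degrees):

#include <cmath>
#include <cstdio>

struct Point2 { double x, y; };

// Oblique parallel projection onto the z = 0 plane.
// L = 1.0 -> cavalier (receding edges keep full length)
// L = 0.5 -> cabinet  (receding edges are foreshortened to half length)
Point2 oblique(double x, double y, double z, double L, double alphaDeg) {
    const double PI = std::acos(-1.0);
    double a = alphaDeg * PI / 180.0;
    return { x + L * z * std::cos(a), y + L * z * std::sin(a) };
}

int main() {
    // Project the receding edge (0,0,0)-(0,0,1) of a unit cube.
    Point2 cav = oblique(0, 0, 1, 1.0, 45.0);
    Point2 cab = oblique(0, 0, 1, 0.5, 45.0);
    std::printf("cavalier: (%.3f, %.3f), length %.3f\n", cav.x, cav.y, std::hypot(cav.x, cav.y));
    std::printf("cabinet : (%.3f, %.3f), length %.3f\n", cab.x, cab.y, std::hypot(cab.x, cab.y));
    return 0;
}

The projected lengths (1 for cavalier, 1/2 for cabinet) match the length-preservation and foreshortening behaviour described above.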
TYPES OF PERSPECTIVE PROJECTION
Perspective projections define a major subclass of planar geometric projections. Divisions within perspective projections are consistent in that the center of projection (PRP) is placed at a finite distance

from the viewplane. Because of this finite distance between the camera and the viewplane, projectors are no longer parallel. By placing the camera near the viewplane, as shown for the perspective projection in the figure below, projectors from the PRP to the edges of the projection window, located on the u-, v-plane, define a pyramidal view volume. As shown in the figure below, the projectors from the center of projection to line AB form a much shorter line A'B' in the viewplane. The reduction in length of the projected line is attributed to the decreasing distance between the two projectors as the viewing surface becomes nearer to the center of projection. Perspective projections are typically separated into three classes: one-point, two-point, and three-point projections. In a one-point perspective projection, lines of a three-dimensional object along a major axis converge to a single vanishing point while lines parallel to the other axes remain horizontal or vertical in the viewplane. To create a one-point perspective view, the viewplane is set parallel to one of the principal planes in the world coordinate system. The viewplane normal is set parallel to a major axis and the viewplane normal vector n is initialized such that two of its three components are zero. The figure below shows a one-point perspective view of a cube. In this projection, the viewplane is positioned in front of the cube and parallel to the x- and y-plane.

A two-point perspective projects an object to the viewplane such that lines parallel to two of the major axes converge into two separate vanishing points. To create a two-point perspective, the viewplane is set parallel to a principal axis rather than a plane. In satisfying this condition, the viewplane normal vector n should be set perpendicular to one of the major world coordinate system axes. In this case, two of the components of n = (nx, ny, nz) are nonzero, while the third is zero. Figure below shows a two-point perspective view of a cube. In this figure, lines parallel to the x-axis converge to vanishing point VP1 while lines parallel to the z-axis converge to vanishing point VP2. Two-point perspective views often provide additional realism in comparison to other projection types; therefore, they are commonly used in architectural, engineering, industrial design, and in advertising drawings.

A three-point perspective has three vanishing points. In this case, the viewplane is not parallel to any of the major axes. To position the viewplane, each component of the viewplane normal is set to a non-zero value so that the viewplane intersects the three major axes. Vanishing points are often used by artists for highlighting features or increasing dramatic effects. However, many disagree as to the extent of their utility.

Q 8. Explain Depth buffer algo. Ans: Depth-Buffer Method (Z-Buffer Method) This approach compare surface depths at each pixel position on the projection plane. Object depth is usually measured from the view plane along the z axis of a viewing system. This method requires 2 buffers: one is the image buffer and the other is called the z-buffer (or the depth buffer). Each of these buffers has the same resolution as the image to be captured. As surfaces are processed, the image buffer is used to store the color values of each pixel position and the z-buffer is used to store the depth values for each (x,y) position. Algorithm: 1. Initially each pixel of the z-buffer is set to the maximum depth value (the depth of the back clipping plane). 2. The image buffer is set to the background color. 3. Surfaces are rendered one at a time. 4. For the first surface, the depth value of each pixel is calculated. 5. If this depth value is smaller than the corresponding depth value in the z-buffer (ie. it is closer to the view point), both the depth value in the z-buffer and the color value in the image buffer are replaced by the depth value and the color value of this surface calculated at the pixel position. 6. Repeat step 4 and 5 for the remaining surfaces. 7. After all the surfaces have been processed, each pixel of the image buffer represents the color of a visible surface at that pixel. This method requires an additional buffer (if compared with the Depth-Sort Method) and the overheads involved in updating the buffer. So this method is less attractive in the cases where only a few objects in the scene are to be rendered. Simple and does not require additional data structures. The z-value of a polygon can be calculated incrementally. No pre-sorting of polygons is needed. No object-object comparison is required. Can be applied to non-polygonal objects. Hardware implementations of the algorithm are available in some graphics workstation. For large images, the algorithm could be applied to, eg., the 4 quadrants of the image separately, so as to reduce the requirement of a large additional buffer. Advantages: Simple to use Can be implemented easily in object or image sapce Can be executed quickly, even with many polygons Disadvantages: Takes up a lot of memory Can't do transparent surfaces without additional code Q 9. Explain XYZ color model. Ans: CIE XYZ Color Model The XYZ color space is an international standard developed by the CIE (Commission Internationale de lEclairage). This model is based on three hypothetical primaries, XYZ, and all visible colors can be represented by using only positive values of X, Y, and Z. The CIE XYZ primaries are hypothetical because they do not correspond to any real light wavelengths. The Y primary is intentionally defined to match closely to luminance, while X and Z primaries give color information. The main advantage of the CIE XYZ space (and any color space based on it) is that this space is completely device-independent. The chromaticity diagram in Figure "CIE xyY Chromaticity Diagram and Color Gamut" is in fact a twodimensional projection of the CIE XYZ sub-space. Note that arbitrarily combining X, Y, and Z values within nominal ranges can easily lead to a "color" outside of the visible color spectrum.

The position of the block of RGB-representable colors in the XYZ space is shown in the figure "RGB Colors Cube in the XYZ Color Space".
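For concreteness, converting a linear RGB colour (sRGB primaries, D65 white point) to XYZ is a 3x3 matrix multiply; the sketch below uses the commonly published matrix values, which are not defined anywhere in these notes:

#include <cstdio>

// Convert a linear RGB colour (sRGB primaries, D65 white point) to CIE XYZ.
// Note: gamma-encoded sRGB values must be linearised before this step.
void rgbToXyz(double r, double g, double b, double& X, double& Y, double& Z) {
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b;
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b;   // Y is defined to match luminance
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b;
}

int main() {
    double X, Y, Z;
    rgbToXyz(1.0, 1.0, 1.0, X, Y, Z);           // reference white
    std::printf("white: X=%.4f Y=%.4f Z=%.4f\n", X, Y, Z);   // approx (0.9505, 1.0, 1.089)
    return 0;
}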

Q 10. Difference between Phong and Gouraud shading.
Ans: Gouraud shading is effective for shading surfaces which reflect light diffusely. Specular reflections can be modelled using Gouraud shading, but the shape of the specular highlight produced is dependent on the relative positions of the underlying polygons. The advantage of Gouraud shading is that it is computationally the less expensive of the two models, only requiring the evaluation of the intensity equation at the polygon vertices, and then bilinear interpolation of these values for each pixel. Phong shading produces highlights which are much less dependent on the underlying polygons. However, more calculations are required, involving the interpolation of the surface normal and the evaluation of the intensity function for each pixel.

Compare each object with all other objects to determine the visibility of the object parts. If there are n objects in the scene, the complexity is O(n^2). Calculations are performed at the resolution at which the objects are defined (limited only by the computation hardware). The process is unrelated to the display resolution or to individual pixels in the image, and the result of the process is applicable to different display resolutions. The display is more accurate but computationally more expensive as compared to image-space methods, because step 1 is typically more complex, e.g. due to the possibility of intersection between surfaces. Suitable for scenes with a small number of objects, and objects with simple relationships with each other.
2. Image-space Methods (Mostly used)
Visibility is determined point by point at each pixel position on the projection plane.
For each pixel in the image do
Begin
1. Determine the object closest to the viewer that is pierced by the projector through the pixel.
2. Draw the pixel in the object colour.
End
For each pixel, examine all n objects to determine the one closest to the viewer. If there are p pixels in the image, the complexity depends on n and p (O(np)). Accuracy of the calculation is bounded by the display resolution; a change of display resolution requires re-calculation.
Unit 5
Q1. What do you mean by animation ? Ans: A simulation of movement created by displaying a series of pictures, or frames. Cartoons on television is one example of animation. Animation on computers is one of the chief ingredients of multimedia presentations. There are many software applications that enable you to create animations that you can display on a computer monitor. Q2. Discuss the concept of key framing ? Ans: Keyframing is the simplest form of animating an object. Based on the notion that an object has a beginning state or condition and will be changing over time, in position, form, color, luminosity, or any other property, to some different final form. Keyframing takes the stance that we only need to show the "key" frames, or conditions, that desribe the transformation of this object, and that all other intermediate positions can be figured out from these. Take an object like the one shown at right - a simple box. The condition at the top is the starting position of motion. We might label this keyframe "Box at Beginning". The condition below that shows the final position of the box after it has been moved. This keyframe is "Box at End". All of the intermediate stages of the box's motion from point A to point B can be calculated by breaking the distance traveled into the number of frames, 5 in thie case, that it takes to get there. Each intermediate frame then moves the box by that resultant distance. This process of figuring out the frames in between two keyframes is called "in-betweening" or simply "tweening". The frames played in succession yields a simple, though complete, keyframed animation Q3. What do you mean by morphing in animation ? Ans; Morphing is the process of transforming two images where it seems like the the first melts, dissolves and rearranges itself to become the second.BitMorph is a free morphing program which takes two images and creates a sequence of images showing them morphing. My animation started with five images which had been enhanced with an assortment of filters and saved as separate files. Two at a time were then loaded into BitMorph and the morph sequences generated. These sequences were then moved to Animation Shop (which comes with Paint Shop Pro) to create the animation. Q4. What is Quaternion ? Ans: A quaternion represents two things. It has an x, y, and z component, which represents the axis about which a rotation will occur. It also has a w component, which represents the amount of rotation which will occur about this axis. In short, a vector, and a float. With these four numbers, it is possible to build a matrix which will represent all the rotations perfectly, with no chance of gimbal lock. (I actually managed to encounter gimbal lock with quaternions when I was first coding them, but it was because I did something incorrectly. I'll cover that later). So far, quaternions should seem a lot like the axis angle representation. However, there are some large differences, which start....now. A quaternion is technically four numbers, three of which have an imaginary component. As many of you probably know from math class, i is defined as sqrt(-1). Well, with quaternions, i = j = k = sqrt(-1). The quaternion itself is defined as q = w + xi + yj + zk. w, x, y, and z are all real numbers. The imaginary components are important if you ever have a math class with quaternions, but they aren't particularly important in the programming. Here's why: we'll be storing a quaternion in a class with four member variables: float w, x, y, z;. 
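A minimal C++ sketch of such a class, together with the standard unit-quaternion-to-rotation-matrix conversion (the formula is the usual one, not code from these notes):

#include <cmath>
#include <cstdio>

struct Quaternion {
    float w, x, y, z;

    // Build a quaternion from a unit axis and an angle (radians).
    static Quaternion fromAxisAngle(float ax, float ay, float az, float angle) {
        float s = std::sin(angle * 0.5f);
        return { std::cos(angle * 0.5f), ax * s, ay * s, az * s };
    }

    // Standard unit-quaternion to 3x3 rotation matrix conversion (row-major).
    void toMatrix(float m[9]) const {
        m[0] = 1 - 2 * (y * y + z * z); m[1] = 2 * (x * y - w * z); m[2] = 2 * (x * z + w * y);
        m[3] = 2 * (x * y + w * z); m[4] = 1 - 2 * (x * x + z * z); m[5] = 2 * (y * z - w * x);
        m[6] = 2 * (x * z - w * y); m[7] = 2 * (y * z + w * x); m[8] = 1 - 2 * (x * x + y * y);
    }
};

int main() {
    // 90-degree rotation about the z axis.
    Quaternion q = Quaternion::fromAxisAngle(0, 0, 1, 3.14159265f / 2);
    float m[9];
    q.toMatrix(m);
    for (int r = 0; r < 3; ++r)
        std::printf("%6.3f %6.3f %6.3f\n", m[3 * r], m[3 * r + 1], m[3 * r + 2]);
    return 0;
}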
We'll be ignoring i, j, and k, because we never liked them anyway. Okay, so we're actually just ignoring them because we don't need them. We'll define our quaternions (w, x, y, z). Q5. Write short note : Procedural Animation A procedural animation is a type of computer animation, used to automatically generate animation in real-time to allow for a more diverse series of actions than could otherwise be created using predefined animations. Procedural animation is used to simulate particle systems ,cloth and clothing, rigid body dynamics, and hair and fur dynamics, as well as character animation.


In video games it is often used for simple things like turning a character's head when a player looks around (as in Quake III Arena) and for more complex things, like ragdoll physics, which is usually used for the death of a character, in which the ragdoll will realistically fall to the floor. A ragdoll usually consists of a series of connected rigid bodies that are programmed to have Newtonian physics acting upon them; therefore, very realistic effects can be generated that would hardly be possible with traditional animation. For example, a character can die slumped over a cliff and the weight of its upper body can drag the rest of it over the edge.
Octree
Octrees are a very efficient data structure for storing 3-dimensional data of any form. Their big advantage is that you can search and insert in such an octree very fast. However, deleting and moving entries is a slow process. If you're familiar with binary trees, you'll have no problem understanding the basic concept of octrees: they're just like binary trees but have 8 subnodes instead of two. However, a short refresh of trees can't hurt, so you might read this chapter to refresh your knowledge. If all this is new to you, you should read the entire article, even if you don't understand it; after seeing an example of how octrees are used you'll surely understand them. An octree is a tree structure which contains data. The data is stored in a hierarchical way so that you can search for an element very fast. Every tree has at least a root node which is the anchor of all subnodes, and every node has pointers to 8 subnodes; a C(++) structure for such a node therefore holds eight child pointers alongside the stored data.
Sweep technique
A technique is presented for generating implicit sweep objects that support direct specification and manipulation of the surface with no topological limitations on the 2D sweep template. The novelty of this method is that the underlying scalar field has global properties which are desirable for interactive implicit solid modeling, allowing multiple sweep objects to be composed. A simple method for converting distance fields to bounded fields is described, allowing implicit sweep templates to be generated from any set of closed 2D contours (including "holes"). To avoid blending issues arising from gradient discontinuities, a general distance field approximation technique is presented which preserves sharp creases on the contour but is otherwise C2 smooth. Flat endcaps are introduced into the 3D sweep formulation, which is implemented in the context of an interactive hierarchical implicit volume modeling tool.
Q6. What are fractals? Write characteristics.
Ans: Fractals are complex images of extraordinary beauty which arise out of fairly simple mathematical functions. One feature which distinguishes a fractal image from other types of graphics is its property of self-similarity: an arbitrarily small region of a fractal looks like the entire fractal. Thus, fractals are analogous to DNA: just as all the information for a living organism is contained in its DNA, so does a small region (as small as you'd like!) contain all the information for the "parent" image. There are many different types of fractal images; blowups of the Mandelbrot set are particularly interesting, as are some of the Julia sets. The Mandelbrot set was named after Benoit Mandelbrot, a mathematician who did much of the modern-day pioneering of fractal imaging and applications.
A fractal often has the following features: It has a fine structure at arbitrarily small scales. It is too irregular to be easily described in traditional Euclidean geometric language. It is self-similar (at least approximately or stochastically). It has a Hausdorff dimension which is greater than its topological dimension (although this requirement is not met by space-filling curves such as the Hilbert curve). It has a simple and recursive definition. Self-similarity. Despite its apparent solidity the Universe is a fractal. A fractal is a self-similar pattern or series of patterns with infinite detail. Self-similarity refers to the fact that the patterns repeat themselves within the system but they never repeat exactly. For example, there are many, many galaxies in our Universe but no two are exactly the same. Some look similar to each other (self-similarity) but you could search the whole Universe and definitely not find two with exactly the same details. Our bodies are also self-similar with patterns repeating themselves on different levels. The Chinese and Indian cultures have known this for years, mapping the whole body onto different parts of the body like feet, ears and eyes. Acupuncture and reflexology (to name a few) are based on this theory.

Fractal dimension. Another important fact about fractals is that they exist between dimensions. We are used to describing the universe in 2, 3 and even 4 dimensions (including time), but what appears to be solid matter in the universe (galaxies, stars, planets, trees, animals and so on) actually forms and exists in the space referred to as the fractal dimension. "Fractal" comes from the same root as "fraction", so a fractal dimension is really just a fractional dimension and would look something like 2.34784 or 3.48723 instead of exactly 2-D or 3-D. Each person has a unique fractal dimension and will respond to different fractal images in different ways.

Q7. Describe generation of terrain by random midpoint displacement.
Ans: The diamond-square algorithm, which I will describe later, uses a kind of midpoint-displacement algorithm in two dimensions. To help you get a grip on it, we'll look at it first in one dimension. One-dimensional midpoint displacement is a great algorithm for drawing a ridgeline, as mountains might appear on a distant horizon. Here's how it works:

Start with a single horizontal line segment.
Repeat for a sufficiently large number of times {
    Repeat over each line segment in the scene {
        Find the midpoint of the line segment.
        Displace the midpoint in Y by a random amount.
    }
    Reduce the range for random numbers.
}

(An iterative C++ sketch of this loop is given at the end of this answer.)

How much do you reduce the random number range? That depends on how rough you want your fractal. The more you reduce it each pass through the loop, the smoother the resulting ridgeline will be. If you don't reduce the range very much, the resulting ridgeline will be very jagged. It turns out you can tie roughness to a constant; I'll explain how to do this later on.

Let's look at an example. Here, we start with a line from -1.0 to 1.0 in X, with the Y value at each endpoint being zero. Initially we'll set the random number range to be from -1.0 to 1.0 (arbitrary). So we generate a random number in that range and displace the midpoint by that amount. After doing this, we have:

Now the second time through the outer loop, we have two segments, each half the length of the original segment. Our random number range is reduced by half, so it is now -0.5 to 0.5. We generate a random number in this range for each of the two midpoints. Here's the result:

We shrink the range again; it is now -0.25 to 0.25. After displacing the four midpoints with random numbers in this range, we have:

Two things you should note about this. First, it's recursive. Actually, it can be implemented quite naturally as an iterative routine; for this case, either recursive or iterative would do. It turns out that for the surface generation code there are some advantages to using an iterative implementation over a recursive one, so for consistency the accompanying sample code implements both the line and surface code as iterative. Second, it's a very simple algorithm, yet it creates a very complex result. That is the beauty of fractal algorithms: a few simple instructions can create a very rich and detailed image. Here I go off on a tangent: the realization that a small, simple set of instructions can create a complex image has led to research in a field known as fractal image compression. The idea is to store the simple, recursive instructions for creating the image rather than storing the image itself. This works great for images which are truly fractal in nature, since the instructions take up much less space than the image itself. Chaos and Fractals: New Frontiers of Science has a chapter and an appendix devoted to this topic and is a great read for any fractal nut in general.
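As referenced after the pseudocode above, here is a minimal iterative sketch of one-dimensional midpoint displacement (an illustration under the walk-through's assumptions: endpoints at height 0, an initial random range of -1.0 to 1.0 that is halved each pass; the array size and pass count are arbitrary choices, not from the original answer):

    /* One-dimensional midpoint displacement (iterative), as described in Q7.
       heights[] holds 2^passes + 1 samples spanning x = -1..1, all starting at 0. */
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    /* Random number in [-range, +range]. */
    static double randRange(double range)
    {
        return range * (2.0 * std::rand() / RAND_MAX - 1.0);
    }

    int main()
    {
        const int passes  = 6;                       /* times through the outer loop        */
        const int samples = (1 << passes) + 1;       /* 2^passes segments -> 2^passes+1 pts */
        std::vector<double> heights(samples, 0.0);   /* flat line to start                  */

        double range = 1.0;                          /* initial random range: -1.0 .. 1.0   */
        int step = samples - 1;                      /* current segment length in samples   */
        for (int pass = 0; pass < passes; ++pass) {
            /* Displace the midpoint of every current segment. */
            for (int left = 0; left + step < samples; left += step) {
                int mid = left + step / 2;
                heights[mid] = 0.5 * (heights[left] + heights[left + step])
                               + randRange(range);
            }
            range *= 0.5;   /* reduce the random range each pass (controls roughness) */
            step  /= 2;     /* each segment splits in two                             */
        }

        for (int i = 0; i < samples; ++i)
            std::printf("%d %f\n", i, heights[i]);
        return 0;
    }

To tie roughness to a constant, as the answer mentions, one common choice is to multiply the range by 2^(-H) each pass (H near 1 gives a smooth ridgeline, H near 0 a jagged one) instead of the fixed 0.5 used in this sketch.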
