The brightness of a line depends on its orientation. We can observe that the
effective spacing between pixels for the 45° line is greater than for the vertical and horizontal
lines. This makes the vertical and horizontal lines appear brighter than the 45° line.
Complex calculations are required to provide equal brightness along lines of varying length
and orientation. Therefore, to draw lines rapidly, some compromises are made, such as:
Calculate only an approximate line length.
Reduce the calculations using simple integer arithmetic.
Implement the result in hardware or firmware.
A Line Algorithm:
1. Read the line end points (x1, y1) and (x2, y2) such that they are not equal.
[If equal, then plot that point and exit.]
2. Δx = x2 - x1 and Δy = y2 - y1
3. If |Δx| ≥ |Δy| then
Length = |Δx|
else
Length = |Δy|
end if
4. Δx = (x2 - x1) / Length
Δy = (y2 - y1) / Length
[This makes either Δx or Δy equal to 1, because Length is either
|x2 - x1| or |y2 - y1|. The incremental value for either x or y is then 1.]
5. x = x1 + 0.5 · sign(Δx)
y = y1 + 0.5 · sign(Δy)
[Here the sign function makes the algorithm work in all quadrants. It returns -1, 0, 1 depending
on whether its argument is <0, =0, >0 respectively. The factor 0.5 makes it possible to round
the values in the Integer function rather than truncating them.]
6. i = 1 [Begin the loop; in this loop points are plotted.]
7. while (i ≤ Length)
{
Plot(Integer(x), Integer(y))
x = x + Δx
y = y + Δy
i = i + 1
}
8. Stop.
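The steps above can be sketched in Python. This is an illustrative sketch only; the function and variable names (`dda_line`, `x_inc`, `y_inc`) are mine, not from the text.

```python
def dda_line(x1, y1, x2, y2):
    """Return the list of pixels for a line from (x1, y1) to (x2, y2)."""
    if (x1, y1) == (x2, y2):              # equal end points: plot a single point
        return [(x1, y1)]
    dx, dy = x2 - x1, y2 - y1
    length = max(abs(dx), abs(dy))        # step along the major axis (step 3)
    x_inc, y_inc = dx / length, dy / length   # one of these becomes +/-1 (step 4)

    def sign(v):
        return (v > 0) - (v < 0)          # -1, 0, or 1

    # Start half a pixel in so that int() rounds instead of truncating (step 5).
    x = x1 + 0.5 * sign(x_inc)
    y = y1 + 0.5 * sign(y_inc)
    pixels = []
    for _ in range(length):               # steps 6-7: plot Length points
        pixels.append((int(x), int(y)))
        x += x_inc
        y += y_inc
    return pixels
```

For example, `dda_line(0, 0, 5, 3)` steps along x (the longer axis) and picks the rounded y at each step.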
Bresenham's Line Algorithm:
Bresenham's line algorithm uses only integer addition and subtraction and multiplication by
2, and we know that the computer can perform the operations of integer addition and
subtraction very rapidly. The computer is also time-efficient when performing integer
multiplication by powers of 2. Therefore, it is an efficient method for scan-converting straight
lines.
The basic principle of Bresenham's line algorithm is to select the optimum raster locations to
represent a straight line. To accomplish this, the algorithm always increments either x or y by
one unit depending on the slope of the line. The increment in the other variable is determined by
examining the distance between the actual line location and the nearest pixel. This distance is
called the decision variable, or the error. This is illustrated in Fig. 3.8.
Bresenham's Line Algorithm (as written, for a line with slope between 0 and 1):
1. Read the line end points (x1, y1) and (x2, y2) such that they are not equal.
[If equal, then plot that point and exit.]
2. Δx = |x2 - x1| and Δy = |y2 - y1|
3. [Initialize starting point.]
x = x1
y = y1
4. e = 2Δy - Δx
[Initialize the value of the decision variable or error to compensate for non-zero intercepts.]
5. i = 1 [Initialize counter.]
6. Plot(x, y)
7. while (e ≥ 0)
{
y = y + 1
e = e - 2Δx
}
x = x + 1
e = e + 2Δy
8. i = i + 1
9. if (i ≤ Δx) then go to step 6
10. Stop.
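The algorithm above can be sketched in Python, again restricted to lines with slope between 0 and 1 and x1 < x2 (the names are mine, not from the text):

```python
def bresenham_line(x1, y1, x2, y2):
    """Bresenham scan conversion for the first octant (0 <= slope <= 1)."""
    if (x1, y1) == (x2, y2):      # equal end points: plot a single point
        return [(x1, y1)]
    dx, dy = abs(x2 - x1), abs(y2 - y1)
    x, y = x1, y1
    e = 2 * dy - dx               # initial decision variable (step 4)
    pixels = []
    for _ in range(dx):           # steps 6-9: plot, then update e, x, y
        pixels.append((x, y))
        while e >= 0:             # the error says the line passed the pixel centre
            y += 1
            e -= 2 * dx
        x += 1
        e += 2 * dy
    return pixels
```

Note that only integer additions, subtractions, and doublings occur in the loop, which is the whole point of the method. On the line (0, 0)–(5, 3) it selects the same pixels as the DDA sketch above.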
July 2011
Master of Computer Application (MCA) - Semester 3
MC0071 MC0072 Computer Graphics - 4 Credits
Assignment Set 2
1. Write a short note on the following:
a) Video mixing
b) Frame buffer
c) Color table
Ans 1:
Video Mixing:
The video controller provides the facility of video mixing, in which it accepts information for two
images simultaneously: one from the frame buffer and the other from a television camera, recorder, or
other source. This is illustrated in Fig. 2.7. The video controller merges the two received images
to form a composite image.
Fig. 2.8: Video mixing
There are two types of video mixing. In the first, a graphics image is set into a video image. Here,
mixing is accomplished with hardware that treats a designated pixel value in the frame buffer
as a flag to indicate that the video signal should be shown instead of the signal from the frame
buffer. Normally, the designated pixel value corresponds to the background color of the frame
buffer image.
In the second type of mixing, the video image is placed on top of the frame buffer image.
Here, whenever the background color of the video image appears, the frame buffer is shown;
otherwise the video image is shown.
Frame Buffer:
In raster scan displays, a special area of memory is dedicated to graphics only. This memory
area is called the frame buffer. It holds the set of intensity values for all the screen points. The
stored intensity values are retrieved from the frame buffer and displayed on the screen one row
(scan line) at a time.
Usually, the frame buffer is implemented using random access semiconductor memory.
However, the frame buffer can also be implemented using shift registers. Conceptually, a shift
register is operated in first-in-first-out (FIFO) fashion, i.e. similar to a queue. We know that
when a queue is full and we want to add a new data bit, the first data bit is pushed out from the
bottom and then the new data bit is added at the top. Here, the data ejected out of the queue
can be interpreted as the intensity of a pixel on a scan line.
Fig. 3.2 shows the implementation of a frame buffer using shift registers. As shown in Fig.
3.2, one shift register is required per pixel on a scan line, and the length of each shift register in bits
is equal to the number of scan lines. Here, there are 8 pixels per scan line and there are in all 5
scan lines. The synchronization between the output of the shift registers and the video scan rate
is maintained so that the data corresponding to a particular scan line is displayed correctly.
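The shift-register arrangement just described can be modelled as a toy sketch (8 pixels per scan line and 5 scan lines, as in the figure; this is an illustration of the FIFO idea, not real hardware, and all names are mine):

```python
from collections import deque

SCAN_LINES, PIXELS_PER_LINE = 5, 8

# One shift register per pixel position on a scan line; the register
# length in bits equals the number of scan lines.
registers = [deque([0] * SCAN_LINES) for _ in range(PIXELS_PER_LINE)]

def shift_out_scan_line():
    """Pop one intensity from each register (one displayed scan line)
    and recirculate it to the back, as a refresh cycle would."""
    line = []
    for reg in registers:
        bit = reg.popleft()   # FIFO: the oldest value comes out first
        reg.append(bit)       # recirculate so the frame repeats each refresh
        line.append(bit)
    return line
```

Calling `shift_out_scan_line()` five times in a row plays out one full frame and leaves every register back in its original state.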
Color Tables:
In color displays, 24 bits per pixel are commonly used, where 8 bits represent 256 levels for
each color. Here it is necessary to read 24 bits for each pixel from the frame buffer. This is very
time consuming. To avoid this, the video controller uses a look-up table (LUT) to store many entries
of pixel values in RGB format. With this facility, it is now necessary only to read the index into the
look-up table from the frame buffer for each pixel. This index specifies one of the entries
in the look-up table. The specified entry in the look-up table is then used to control the
intensity or color of the CRT.
Usually, the look-up table has 256 entries. Therefore, the index into the look-up table has 8 bits, and
hence for each pixel the frame buffer has to store 8 bits instead of 24 bits. Fig. 2.
shows the organization of a color (video) look-up table.
There are several advantages in storing color codes in a look-up table. Use of a color table can
provide a "reasonable" number of simultaneous colors without requiring large frame buffers.
For most applications, 256 or 512 different colors are sufficient for a single picture. Also,
table entries can be changed at any time, allowing a user to experiment easily with
different color combinations in a design, scene, or graph without changing the attribute
settings for the graphics data structure. In visualization and image-processing applications,
color tables are a convenient means for setting color thresholds so that all pixel values above or
below a specified threshold can be set to the same color. For these reasons, some systems
provide both capabilities for color-code storage, so that a user can elect either to use color
tables or to store color codes directly in the frame buffer.
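The indexing scheme above can be sketched in a few lines of Python (the table contents and the tiny 2 × 2 frame buffer are made-up sample values):

```python
# 256-entry look-up table of (R, G, B) triples, 8 bits per component.
lut = [(0, 0, 0)] * 256
lut[1] = (255, 0, 0)             # e.g. entry 1 displays as pure red

# The frame buffer stores only 8-bit indices, not 24-bit colors.
frame_buffer = [[0, 1],
                [1, 0]]          # a 2x2 screen of LUT indices

def pixel_color(x, y):
    """Read the index from the frame buffer, then the color from the LUT."""
    return lut[frame_buffer[y][x]]
```

Changing `lut[1]` instantly recolors every pixel whose stored index is 1, which is exactly the "experiment with color combinations without touching the picture data" advantage described above.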
2. Describe the following with respect to methods of generating characters:
a) Stroke method b) Starburst method c) Bitmap method
Ans 2:
Stroke Method:
Fig. 5.19: Stroke method
This method uses small line segments to generate a character. The small series of line
segments are drawn like strokes of a pen to form a character, as shown in Fig. 5.19.
We can build our own stroke-method character generator with calls to a line-drawing
algorithm. Here it is necessary to decide which line segments are needed for each character
and then to draw these segments using the line-drawing algorithm.
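A stroke-method generator can be sketched as a table of segments plus one line-drawing call per stroke. The stroke table below (a letter 'L' on a made-up 5 × 7 grid) is hypothetical, and `draw_line` stands for any line-drawing routine such as a DDA or Bresenham implementation:

```python
# Hypothetical stroke table: each character is a list of line segments
# ((x1, y1), (x2, y2)) on a local 5x7 grid.
STROKES = {
    "L": [((0, 6), (0, 0)),   # vertical stroke, top to bottom
          ((0, 0), (4, 0))],  # horizontal stroke along the base
}

def draw_char(ch, draw_line):
    """Draw a character by issuing one line-drawing call per stroke."""
    for (x1, y1), (x2, y2) in STROKES[ch]:
        draw_line(x1, y1, x2, y2)
```

Adding a character to the generator means deciding its segments and appending them to the table; the drawing code itself does not change.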
Starburst Method:
In this method a fixed pattern of line segments is used to generate characters. As shown in
Fig. 5.20, there are 24 line segments. Out of these 24 line segments, the segments required to
display a particular character are highlighted. This method of character generation is called the
starburst method because of its characteristic appearance.
Fig. 5.20
Fig. 5.20 shows the starburst patterns for the characters A and M. The patterns for particular
characters are stored in the form of a 24-bit code, each bit representing one line segment. A
bit is set to one to highlight the corresponding line segment; otherwise it is set to zero. For example, the
characters A and M each have their own 24-bit code, with the bits for their highlighted
segments set to one.
This method of character generation has some disadvantages:
1. 24 bits are required to represent each character; hence more memory is required.
2. It requires code-conversion software to display a character from its 24-bit code.
3. Character quality is poor. It is worst for curve-shaped characters.
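The 24-bit encoding can be illustrated with two small helpers (the segment numbering 0–23 is my own; the text does not specify which bit maps to which segment):

```python
def segments_on(code):
    """Return the segment numbers (0-23) whose bits are set in the code."""
    return [i for i in range(24) if code & (1 << i)]

def set_segments(segment_numbers):
    """Build the 24-bit starburst code that highlights the given segments."""
    code = 0
    for i in segment_numbers:
        code |= 1 << i
    return code
```

The "code conversion" disadvantage above corresponds to needing routines like `segments_on` at display time to turn the stored code back into line segments.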
Bitmap Method:
The third method for character generation is the bitmap method. It is also called the dot-matrix
method because in this method characters are represented by an array of dots in matrix form. It is
a two-dimensional array having columns and rows. A 5 × 7 array is commonly used to
represent characters, as shown in Fig. 5.21. However, 7 × 9 and 9 × 13 arrays are also used.
Higher-resolution devices such as inkjet printers or laser printers may use character arrays that
are over 100 × 100.
Fig. 5.21: Character A in 5 × 7 dot-matrix format
Each dot in the matrix is a pixel. The character is placed on the screen by copying pixel values
from the character array into some portion of the screen's frame buffer. The value of the pixel
controls the intensity of the pixel.
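The copy-into-the-frame-buffer step can be sketched as follows (the 5 × 7 pattern below is a made-up letter 'T', not the figure's 'A'):

```python
CHAR_T = [                      # 7 rows x 5 columns, 1 = lit pixel
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

def blit_char(frame_buffer, char, top, left):
    """Copy the character's pixel array into the frame buffer at (top, left)."""
    for r, row in enumerate(char):
        for c, value in enumerate(row):
            frame_buffer[top + r][left + c] = value
```

Positioning a character on screen is just choosing a different (top, left); the character array itself is fixed, which is why bitmap fonts need one stored array per character (and per size).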
3. Discuss the homogeneous coordinates for translation, rotation and scaling.
Ans 3:
Homogeneous Coordinates and Matrix Representation of 2D Transformations:
In the design and picture formation process, many times we may require to perform translation, rotations
and scaling to fit the picture components into their proper positions. In the previous section we have
seen that each of the basic transformations can be expressed in the general matrix form

P' = P · M1 + M2    ...(12)

For translation: M1 = identity matrix, M2 = translation vector.
For rotation: M1 = rotation matrix, M2 = 0.
For scaling: M1 = scaling matrix, M2 = 0.

To produce a sequence of transformations with the above equations, such as translation followed by
rotation and then scaling, we must calculate the transformed coordinates one step at a time: first the
coordinates are translated, then the translated coordinates are rotated, and finally the rotated
coordinates are scaled. But this sequential transformation process is not efficient. A more efficient
approach is to combine the sequence of transformations into one transformation, so that the final
coordinate positions are obtained directly from the initial coordinates. This eliminates the calculation of
intermediate coordinate values.
In order to combine a sequence of transformations, we have to eliminate the matrix addition associated
with the translation term in M2 (refer to equation 12). To achieve this, we have to represent matrix M1
as a 3 × 3 matrix instead of 2 × 2, introducing an additional dummy coordinate w. Here, points are
specified by three numbers instead of two. This coordinate system is called the homogeneous coordinate
system, and it allows us to express all transformation equations as matrix multiplications.
The homogeneous coordinate is represented by a triplet (xw, yw, w),
where x = xw / w and y = yw / w.
For two-dimensional transformations, we can choose the homogeneous parameter w to be any non-zero
value, but it is convenient to have w = 1. Therefore, each two-dimensional position can be represented
with homogeneous coordinates as (x, y, 1).
Summing it all up, we can say that homogeneous coordinates allow combined transformations,
eliminating the calculation of intermediate coordinate values, and thus save the time required for
transformation and the memory required to store the intermediate coordinate values. Let us see the
homogeneous coordinates for the three basic transformations.
1. Homogeneous Coordinates for Translation:
The homogeneous coordinates for translation are given as

    T = | 1    0    0 |
        | 0    1    0 |
        | tx   ty   1 |    ...(13)

Therefore, we have

[x'  y'  1] = [x  y  1] · T = [x + tx   y + ty   1]    ...(14)
2. Homogeneous Coordinates for Rotation:
The homogeneous coordinates for rotation are given as

    R = |  cosθ   sinθ   0 |
        | -sinθ   cosθ   0 |
        |   0      0     1 |    ...(15)

Therefore, we have

[x'  y'  1] = [x  y  1] · R = [x cosθ - y sinθ   x sinθ + y cosθ   1]    ...(16)
3. Homogeneous Coordinates for Scaling:
The homogeneous coordinates for scaling are given as

    S = | sx   0    0 |
        | 0    sy   0 |
        | 0    0    1 |    ...(17)

Therefore, we have

[x'  y'  1] = [x  y  1] · S = [x · sx   y · sy   1]    ...(18)
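The payoff of the 3 × 3 representation is that a whole sequence collapses into one matrix. A minimal sketch in Python, using the row-vector convention above ([x y 1] times the matrix); all helper names are mine:

```python
import math

def mat_mul(a, b):
    """Multiply two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, 0], [0, 1, 0], [tx, ty, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s, 0], [-s, c, 0], [0, 0, 1]]

def apply(point, m):
    """Transform (x, y) by m: compute [x y 1] . m, then divide by w."""
    x, y = point
    v = [x * m[0][j] + y * m[1][j] + 1 * m[2][j] for j in range(3)]
    return (v[0] / v[2], v[1] / v[2])

# One combined matrix: translate by (2, 3), then scale by 2.
m = mat_mul(translate(2, 3), scale(2, 2))
```

Applying `m` to a point gives the same result as translating and then scaling step by step, but no intermediate coordinates are ever computed or stored.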
4. Describe the following with respect to Projection:
a) Parallel b) Types of Parallel Projections
Ans 4:
Projection:
After converting the description of objects from world coordinates to viewing coordinates, we
can project the three-dimensional objects onto the two-dimensional view plane. There are two
basic ways of projecting objects onto the view plane: parallel projection and perspective
projection.
1. Parallel Projection:
In parallel projection, the z coordinate is discarded, and parallel lines from each vertex on the
object are extended until they intersect the view plane. The point of intersection is the
projection of the vertex. We connect the projected vertices by line segments which correspond
to connections on the original object.
As shown in Fig. 7.8, a parallel projection preserves relative proportions of objects but
does not produce realistic views.
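For the simple case described here, projecting onto the view plane amounts to dropping the z coordinate of every vertex; the connectivity (edges) is unchanged. A one-function sketch:

```python
def parallel_project(vertices):
    """Project 3D vertices onto the view plane by discarding z."""
    return [(x, y) for (x, y, z) in vertices]

# Edges are stored as index pairs, so they survive projection untouched.
triangle = [(0, 0, 5), (4, 0, 2), (2, 3, 7)]
edges = [(0, 1), (1, 2), (2, 0)]
projected = parallel_project(triangle)
```

Note that the three z values (5, 2, 7) have no effect on the projected image, which is why relative proportions are preserved but depth cues, and hence realism, are lost.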
2. Types of Parallel Projections:
Parallel projections are basically categorized into two types, depending on the relation between
the direction of projection and the normal to the view plane. When the direction of
projection is normal (perpendicular) to the view plane, we have an orthographic parallel
projection.