
Q-1 Explain the following

(a) Image enhancement


(b) Pattern detection and recognition
(c) Scene analysis and computer vision
(d) Visualization
(A) Image enhancement: Image enhancement deals with improving image quality by eliminating noise or by increasing image contrast (a short code sketch illustrating these operations appears after part (C) below).
(B) Pattern detection and recognition: Pattern detection and recognition deal with the detection and classification of standard patterns and with finding deviations from these patterns. Optical character recognition (OCR) technology is a practical example of pattern detection and recognition.
(C) Scene analysis and computer vision: Scene analysis and computer vision deal with the recognition and construction of a 3D model of a scene from several 2D images.
The above three fields of image processing have proved their importance in many areas, such as fingerprint detection and recognition and the modeling of buildings, ships, automobiles, and so on.
In their early stages, computer graphics and image processing (the computer processing of pictures) were quite separate disciplines. Nowadays they share some common features, the overlap between them is growing, and both use raster displays.
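As a concrete illustration of part (A), the following is a minimal sketch of the two enhancement operations mentioned there, written in Python with NumPy; the library choice, the 0-255 intensity range and the 3x3 box filter are illustrative assumptions rather than part of the original answer.

    import numpy as np

    def stretch_contrast(img):
        # Linearly stretch pixel intensities to cover the full 0-255 range.
        lo, hi = int(img.min()), int(img.max())
        if hi == lo:                      # flat image: nothing to stretch
            return img.copy()
        return ((img.astype(np.float64) - lo) * 255.0 / (hi - lo)).astype(np.uint8)

    def smooth_noise(img):
        # Suppress noise with a simple 3x3 mean (box) filter.
        padded = np.pad(img.astype(np.float64), 1, mode='edge')
        out = np.zeros(img.shape, dtype=np.float64)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += padded[1 + dy:1 + dy + img.shape[0],
                              1 + dx:1 + dx + img.shape[1]]
        return (out / 9.0).astype(np.uint8)

Contrast stretching spreads the existing intensities over the displayable range, while the box filter averages each pixel with its neighbours to reduce random noise.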
(D) Visualization: Visualization is a technique for creating images, diagrams, or animations to
communicate a message. Visualization through visual imagery has been an effective way to
communicate both abstract and concrete ideas. Visualization today has ever-expanding
applications in science, education, engineering (e.g., product visualization), interactive
multimedia, medicine, etc. A typical application field of visualization is computer graphics. The
invention of computer graphics may be the most important development in visualization since the
invention of central perspective. The use of visualization to present information is not a new
phenomenon. It has been used in maps, scientific drawings, and data plots for over a thousand
years. Computer graphics has from its beginning been used to study scientific problems.
Most people are familiar with the digital animations produced to present meteorological data
during weather reports on television, though few can distinguish between those models of reality
and the satellite photos that are also shown on such programs. TV also offers scientific
visualizations when it shows computer drawn and animated reconstructions of road or airplane
accidents. Some of the most popular examples of scientific visualization are computer-generated images that show real spacecraft in action, out in the void far beyond Earth, or on
other planets. Dynamic forms of visualization, such as educational animation or timelines, have
the potential to enhance learning about systems that change over time.
Apart from the distinction between interactive visualizations and animation, the most useful
categorization is probably between abstract and model-based scientific visualizations. Data
visualization is a related subcategory of visualization dealing with statistical graphics and
geographic or spatial data (as in thematic cartography) that is abstracted in schematic form.
Scientific visualization: Scientific visualization is the transformation, selection, or representation of
data from simulations or experiments, with an implicit or explicit geometric structure, to allow the
exploration, analysis, and understanding of the data. Traditional areas of scientific visualization
are flow visualization, medical visualization, astrophysical visualization, and chemical
visualization. There are several different techniques to visualize scientific data, with surface
reconstruction and direct volume rendering being the most common.
Q-2 Write a note on digitizers
A digitizer is a device used to convert an image into a series of dots that can be read, stored and manipulated by a computer. It converts analog or physical input into digital images. Digitizers carry out important work in computer-aided design, graphic design and engineering. They also help convert hand-drawn images into textures and animation in video
games and movie CGI. Modern digitizers appear as flat scanning surfaces or tablets that connect
to a computer workstation. The surface is touch-sensitive, sending signals to the software, which
translates them into images on the screen. Digitizers have an input stylus that acts as a pen.
The mode of input varies: earlier models relied on simple pressure and electrical impulses, while more advanced designs offer better accuracy with lasers and even camera pens. Important factors to consider when looking at digitizers are resolution, sensitivity and image recognition. While users can input any image, the tablet and software may not be able to convert it fully. Handwriting recognition and text auto-detect are also popular features of digitizers.
The tablet is the most common locator device. A typical graphics tablet is shown in figure 2.2.
Tablets may be used either in conjunction with a CRT graphics display or stand-alone. In the latter case they are frequently referred to as digitizers. The tablet itself consists of a flat surface and a pen-like stylus which is used to indicate a location on the tablet surface. Usually the proximity of
the stylus to the tablet surface is also sensed. When used in conjunction with a CRT display,
feedback from the CRT face is provided by means of a small tracking symbol called a cursor, which
follows the movement of the stylus on the tablet surface. When used as a standalone digitizer,
feedback is provided by digital readouts.

Figure 2.2: Graphics tablet
Tablets provide either two- or three-dimensional coordinate information. The values returned are in tablet coordinates; software converts the tablet coordinates to user coordinates. Typical resolution and accuracy are 0.01 to 0.001 inch.
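As a small illustration of the tablet-to-user coordinate conversion mentioned above, here is a hedged Python sketch; the tablet resolution of 10000 x 10000 device units and the 8.5 x 11 inch user window are assumed example values, not taken from the text.

    def tablet_to_user(xt, yt, tablet_max=(10000, 10000),
                       user_window=(0.0, 0.0, 8.5, 11.0)):
        # Map raw tablet readings (xt, yt) into user coordinates.
        # tablet_max  : maximum tablet reading along x and y (device units)
        # user_window : (xmin, ymin, xmax, ymax) of the user coordinate area
        xmin, ymin, xmax, ymax = user_window
        xu = xmin + (xt / tablet_max[0]) * (xmax - xmin)
        yu = ymin + (yt / tablet_max[1]) * (ymax - ymin)
        return xu, yu

    # The centre of the tablet maps to the centre of the user window.
    print(tablet_to_user(5000, 5000))   # (4.25, 5.5)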
Q-3 Explain 3D viewing
Viewing a scene in 3D is much more complicated than 2D viewing; in the latter, the viewing plane on which a scene is projected from world coordinates (WCs) is basically the screen itself, apart from its dimensions. In 3D we can choose different viewing planes, directions to view from and positions to view from. We also have a choice in how we project from the WC scene onto the viewing plane. In the process of viewing a 3D scene, we set up a coordinate system for viewing, which holds the viewing or camera parameters: the position and orientation of a viewing or projection plane (analogous to the camera film).
3D Viewing pipeline: Generating a view of a 3D scene on an output device is similar to taking a photograph of it, except that many more possibilities are open to us in the way the camera is positioned, its aperture (view volume) is chosen, the orientation and position of the view plane are selected, etc. The following summarizes the steps involved from the actual construction of a 3D scene to its ultimate depiction on a device:
1. Construct objects in modeling coordinates (MCs).
2. Pass object descriptions through the modeling transformation to a WC scene.
3. Pass the scene description through the viewing transformation to viewing coordinates (VCs).
4. Pass through the projection transformation to projection coordinates (PCs).
5. Pass through the normalizing transformation and clipping algorithms to normalized coordinates (NCs).
6. Pass through the viewport transformation to device coordinates (DCs).
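Read as a whole, the pipeline is simply a chain of coordinate transformations applied to every vertex. The Python/NumPy sketch below shows that chaining; the individual 4x4 homogeneous matrices are placeholders standing in for the real modeling, viewing, projection, normalizing and viewport transforms, so this is an illustration of the flow rather than a full implementation.

    import numpy as np

    def through_pipeline(point_mc, M_model, M_view, M_proj, M_norm, M_viewport):
        # Carry one modeling-coordinate vertex through MC -> WC -> VC -> PC -> NC -> DC.
        p = np.append(np.asarray(point_mc, dtype=float), 1.0)   # homogeneous point
        for M in (M_model, M_view, M_proj, M_norm, M_viewport):
            p = M @ p
        return p[:3] / p[3]    # divide by w to return to 3D coordinates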
3D viewing coordinate system: As in 2D, we choose in WCs an origin P0 = (x0, y0, z0) for it, called the view point or viewing position (also called the eye position or camera position in some packages). Then we choose a view-up vector V, which defines its y-direction yv, and in addition a vector giving the direction along which viewing is done, defining its zv direction. The view plane or projection plane is usually taken as a plane perpendicular to the zv-axis, set at a position zvp from the origin. Its orientation is specified by choosing a view-plane normal vector N, which also specifies the direction of the positive zv axis. Figure 7.14 shows the right-handed systems typically employed in this setup.
The direction of viewing is usually taken as the -N (or -zv) direction for right-handed coordinate systems (or the opposite direction for left-handed coordinate systems).
Choosing the view-plane normal N:
take it pointing out from the object, e.g. N = P0 - Origin_WC,
or from a reference point Pref (the look-at point) in the scene to P0, i.e. N = P0 - Pref,
or define direction cosines for it using its angles with the WC X, Y, Z axes.
Choosing the view-up vector V:
We require it to be perpendicular to N, but since this is not easy to establish in advance, we usually take
V = (0, 1, 0), the WC Y direction, and
adjust it (or let the code/package adjust it).
Forming the viewing coordinate frame: Having chosen N, we form the unit normal vector n for the zv direction, form the unit vector u for the xv direction, and then adjust V to get a new unit vector v for the yv direction, using cross products so that each vector is orthogonal to the plane of the other two:

n = N / |N| = (nx, ny, nz)
u = (V × n) / |V × n| = (ux, uy, uz)
v = n × u = (vx, vy, vz)

We then call this system a uvn viewing coordinate reference frame (a short code sketch of this construction is given at the end of this answer).


Setting up the view plane: Finally, the view plane is chosen as a plane perpendicular to n (or, equivalently, to the zv-axis), at some point on it, i.e. at some distance from the view-frame origin.
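Below is a compact Python/NumPy sketch of the uvn frame construction described above; it assumes N is taken from the look-at point Pref toward the view point P0 and that V defaults to the world Y direction, as in the discussion.

    import numpy as np

    def uvn_frame(p0, pref, view_up=(0.0, 1.0, 0.0)):
        # Build the unit vectors u, v, n of the viewing coordinate frame.
        # p0   : view point (camera position) in world coordinates
        # pref : look-at reference point in world coordinates
        p0, pref, V = (np.asarray(a, dtype=float) for a in (p0, pref, view_up))
        N = p0 - pref                    # view-plane normal, from Pref to P0
        n = N / np.linalg.norm(N)        # zv direction
        u = np.cross(V, n)
        u = u / np.linalg.norm(u)        # xv direction
        v = np.cross(n, u)               # yv direction, already unit length
        return u, v, n

    # Example: camera at (0, 0, 5) looking at the origin gives the usual axes.
    u, v, n = uvn_frame((0, 0, 5), (0, 0, 0))
    print(u, v, n)    # [1. 0. 0.] [0. 1. 0.] [0. 0. 1.]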

Q-4 Explain different types of coherence


Types of coherence:
1. Object Coherence: The visibility of an object can often be decided by examining a circumscribing solid (which may be of simple form, e.g. a sphere or a polyhedron).
2. Face Coherence: Surface properties computed for one part of a face can be applied to adjacent parts after small incremental modifications (e.g., if the face is small, we can sometimes assume that if one part of the face is invisible to the viewer, the entire face is also invisible).
3. Edge Coherence: The visibility of an edge changes only when it crosses another edge, so if one segment of a non-intersecting edge is visible, the entire edge is also visible.
4. Scan line Coherence: Line or surface segments visible in one scan line are also likely to be
visible in adjacent scan lines. Consequently, the image of a scan line is similar to the image of
adjacent scan lines.
5. Area and Span Coherence: A group of adjacent pixels in an image is often covered by the
same visible object. This coherence is based on the assumption that a small enough region of
pixels will most likely lie within a single polygon. This reduces computation effort in searching for
those polygons which contain a given screen area (region of pixels) as in some subdivision
algorithms.
6. Depth Coherence: The depths of adjacent parts of the same surface are similar; a short sketch exploiting this appears after the list.
7. Frame Coherence: Pictures of the same scene at successive points in time are likely to be
similar, despite small changes in objects and viewpoint, except near the edges of moving objects.
Most visible surface detection methods make use of one or more of these coherence properties
of a scene.
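As an illustration of scan-line and depth coherence, consider a planar polygon with plane equation Ax + By + Cz + D = 0: along one scan line, the depth at the next pixel differs from the current one by the constant -A/C, so it can be updated incrementally instead of being recomputed per pixel. The plane coefficients in the Python sketch below are arbitrary example values.

    def scanline_depths(x_start, x_end, y, plane):
        # Depths along one scan line of a planar polygon, exploiting depth coherence.
        # plane = (A, B, C, D) with Ax + By + Cz + D = 0 and C != 0.
        A, B, C, D = plane
        z = -(A * x_start + B * y + D) / C   # depth at the first pixel
        dz = -A / C                          # constant change per unit step in x
        depths = []
        for _ in range(x_start, x_end + 1):
            depths.append(z)
            z += dz
        return depths

    # Example: depth values across pixels x = 3..7 on scan line y = 10.
    print(scanline_depths(3, 7, 10, (1.0, 2.0, 4.0, -40.0)))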
Q-5 What is multimedia? Explain briefly the uses of multimedia
Multimedia:
Multimedia is media that uses multiple forms of information content and information processing (e.g. text, audio, graphics, animation, video, and interactivity) to inform or entertain the user. Multimedia also refers to the use of electronic media to store and experience multimedia content. Multimedia is similar to traditional mixed media in fine art, but with a broader scope. The term "rich media" is synonymous with interactive multimedia.
The terminology begins with the notion of multimedia, followed by a description of media and the important properties of multimedia systems. The word multimedia actually comes from the Latin words multus, meaning numerous, and medium, meaning middle. In current usage the word media conveys the meaning of intermediary; therefore, multimedia means multiple intermediaries or multiple means.
Briefly the uses of multimedia:
Multimedia has found wide application in various areas including, but not limited to, advertisements, art, education, entertainment, engineering, medicine, mathematics, business, scientific research and spatio-temporal applications. Some examples are as follows:
Entertainment: Multimedia is heavily used in the entertainment industry, especially to develop
special effects in movies and animations. Computer games are also one of the main applications
of multimedia because of the high degree of interactivity involved.
Education: In Education, multimedia is used to produce computer-based training courses
(popularly called CBTs) and reference books like encyclopedias and manuals. A CBT lets the user
go through a series of presentations, text about a particular topic, and associated illustrations in
various information formats. Edutainment is an informal term used to describe combining
education with entertainment, especially multimedia entertainment.
Industry: In the industrial sector, multimedia is used as a way to help present information to
shareholders, superiors and co-workers. Multimedia is also helpful for providing employee
training, advertising and selling products all over the world via virtually unlimited web-based
technology. For example, in the case of the tourism and travel industry, travel companies can market packaged tours by showing glimpses of the places customers would like to visit, details on lodging and food, sightseeing, special offers, etc.
Medicine: In medicine, multimedia technologies are used to produce high-quality images of the human body and to practice complicated surgical procedures. Doctors can get trained by watching a virtual surgery, or they can simulate how the human body is affected by diseases spread by viruses and bacteria and then develop techniques to prevent them. Tele-medicine is one example.
Engineering Applications: Multimedia is used widely in designing mechanical, electrical,
electronic and architectural parts through the use of Computer Aided Design (CAD) and
Computer Aided Manufacturing (CAM) applications. These enable engineers to develop models of products from various perspectives and to try out different combinations depending on the requirements before deciding on the final product implementation.
Q-6 Explain the following
(a) Full animation
(b) Limited animation
(c) Rotoscoping

Full animation: Full animation refers to the process of producing high-quality traditionally animated films, which regularly use detailed drawings and plausible movement. Fully animated films can be done in a variety of styles, from more realistically animated works such as those produced by the Walt Disney studio (Beauty and the Beast, Aladdin, The Lion King) to the more 'cartoony' styles of those produced by the Warner Bros. animation studio.
Limited animation: Limited animation is a process of making animated cartoons that does not
redraw entire frames but variably reuses common parts between frames. One of its major
trademarks is the stylized design in all forms and shapes, which in the early days was referred to
as modern design. Pioneered by the artists at the American studio United Productions of America,
limited animation can be used as a method of stylized artistic expression. Its primary use,
however, has been in producing cost-effective animated content for media such as television (the
work of Hanna-Barbera, Filmation, and other TV animation studios) and later the Internet (web
cartoons).
Rotoscoping: Rotoscoping is an animation technique in which animators trace over live-action film movement, frame by frame, for use in animated films. Originally, recorded live-action film
images were projected onto a frosted glass panel and re-drawn by an animator. This projection
equipment is called a rotoscope, although this device has been replaced by computers in recent
years. In the visual effects industry, the term rotoscoping refers to the technique of manually
creating a matte for an element on a live-action plate so that it may be composited over another
background.
