Prepared by:
Mrs. C. Kalpana, Asst.Prof /CSE
Mrs. P. Bhavani, Asst.Professor/CSE
UNIT- I
Graphics Systems and Graphical User Interface: Pixel – Resolution – types of video display
devices – Graphical input devices – output devices – Hard copy devices – Direct screen interaction –
Logical input function – GKS User dialogue – Interactive picture construction techniques.
Programming (OpenGL)
1.2 PIXEL
Definition:
A pixel is generally thought of as the smallest complete sample of an image.
(or)
A pixel is a single point in a graphic image.
Each such point (or) information element is not really a dot, nor a square, but an abstract sample.
Pixels are measured in dpi (dots per inch) (or) ppi (pixels per inch).
The more pixels used to represent an image, the closer the result can resemble the original.
History
The term pixel is an abbreviation of “picture element”.
The word pixel was first published in 1965 by Frederic C. Billingsley to describe the
picture elements of video images from space probes to the Moon & Mars.
The word pix was coined in 1932 in a magazine as an abbreviation for the word
pictures, in reference to movies.
The earliest publication of the term “picture element” was in “Wireless World” magazine
in 1927.
Terminologies
Some of the terminologies related to Pixel are given below:
i) Resolution
ii) Sub pixel
iii) Megapixel
iv) Bits per pixel ( BPP )
Resolution
The number of pixels in an image is called the Resolution.
Eg: a 640 by 480 display
(or)
640 × 480 display
i.e. 640 pixels from side to side & 480 pixels from top to bottom
640 × 480 = 307,200 pixels
(or)
0.3 megapixels.
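The resolution-to-megapixel arithmetic above can be sketched as follows; the helper names are illustrative, not from any particular library:

```python
def pixel_count(width, height):
    """Total number of pixels in a width-by-height display."""
    return width * height

def megapixels(width, height):
    """Pixel count expressed in megapixels (1 megapixel = 1,000,000 pixels)."""
    return pixel_count(width, height) / 1_000_000

# 640 x 480 = 307,200 pixels, i.e. about 0.3 megapixels
print(pixel_count(640, 480))            # 307200
print(round(megapixels(640, 480), 1))   # 0.3
```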
Bits per Pixel
The number of distinct colors that can be represented by a pixel depends on the number
of “bits per pixel” (bpp).
The maximum number of colors a pixel can take can be found by taking two to the power
of the color depth.
Eg: 2^8 = 256 colors, 8 bpp
2^16 = 65,536 colors = 16 bpp (high color or thousands)
2^24 = 16,777,216 colors = 24 bpp (true color or millions)
2^48 colors = 48 bpp (used, for all practical purposes, in flatbed scanners)
256 colors
The color values are stored in the computer's video memory.
Examples: animated startup logos of Windows 95 & Windows 98.
16 bpp
Divided into its RGB components.
i.e: 5 bits for red, 6 bits for green & 5 bits for blue.
24 bpp
Divided into its RGB components with 8 bits each for Red, Blue & Green.
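The bpp arithmetic and the RGB bit splits above can be sketched as follows; the packing helpers are illustrative, not part of any standard API:

```python
def max_colors(bpp):
    """Maximum number of distinct colors for a given color depth."""
    return 2 ** bpp

def pack_rgb565(r, g, b):
    """Pack a 16 bpp pixel: 5 bits red, 6 bits green, 5 bits blue."""
    return (r & 0x1F) << 11 | (g & 0x3F) << 5 | (b & 0x1F)

def pack_rgb888(r, g, b):
    """Pack a 24 bpp pixel: 8 bits each for red, green & blue."""
    return (r & 0xFF) << 16 | (g & 0xFF) << 8 | (b & 0xFF)

print(max_colors(8))    # 256
print(max_colors(24))   # 16777216
```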
Sub Pixel
Many display and image acquisition systems are not capable of displaying (or) sensing
the different color channels at the same site.
The above problem is generally resolved by using multiple sub pixels, each of which
handles a single color channel Red, Green (or) Blue.
Example,
i) LCDs typically divide each pixel horizontally into three sub pixels.
ii) Some devices use four sub pixels per pixel; for example, camera sensors with a
Bayer filter use one red, one blue and two green.
Megapixel
A megapixel is 1 million pixels.
Eg: 2048 * 1536 pixels = 3.1 megapixel (3,145,728 pixels)
Several other types of object, derived from the idea of the pixel, have been created for
computer graphics & image processing uses, namely:
Voxel – volume element
Texel – texture element
Surfel – surface element
1.3 Resolution
Resolution is defined as the total number of pixels per image.
Image resolution
Image resolution describes the detail an image holds. The term applies to raster digital
images, film images, and other types of images. Higher resolution means more image
detail.
Image resolution can be measured in various ways. Basically, resolution quantifies how
close lines can be to each other and still be visibly resolved.
Resolution units can be tied to physical sizes (e.g. lines per mm, lines per inch), to the
overall size of a picture (lines per picture height, also known simply as lines, or TV
lines), or to angular subtense. Line pairs are often used instead of lines; a line pair
comprises a dark line and an adjacent light line.
Pixel resolution
The term resolution is often used for a pixel count in digital imaging.
When pixel counts are referred to as resolution, the convention is to describe the pixel
resolution with a set of two positive integers, where the first number is the
number of pixel columns (width) and the second is the number of pixel rows (height), for
example as 640 by 480.
Other conventions include describing pixels per length unit or pixels per area unit, such
as pixels per inch or per square inch.
Pixel counts are not true resolutions, but they are widely referred to as such; they serve
as upper bounds on image resolution.
Spatial resolution
The measure of how closely lines can be resolved in an image is called spatial resolution,
and it depends on properties of the system creating the image, not just the pixel resolution
in pixels per inch (ppi).
For practical purposes the clarity of the image is decided by its spatial resolution, not the
number of pixels in an image. In effect, spatial resolution refers to the number of
independent pixel values per unit length.
Spectral resolution
Color images distinguish light of different spectra. Multi-spectral images resolve even
finer differences of spectrum or wavelength than is needed to reproduce color. That is, they can
have higher spectral resolution.
Display resolution
The display resolution of a digital television or display device is the number of distinct
pixels in each dimension that can be displayed. It can be an ambiguous term especially as
the displayed resolution is controlled by all different factors in cathode ray tube (CRT)
and flat panel or projection displays using fixed picture-element (pixel) arrays.
One use of the term “display resolution” applies to fixed-pixel-array displays such as
plasma display panels (PDPs), liquid crystal displays (LCDs), Digital Light Processing
(DLP) projectors, or similar technologies, and is simply the physical number of columns
and rows of pixels creating the display (e.g., 1920×1080).
A consequence of having a fixed grid display is that, for multi-format video inputs, all
displays need a "scaling engine" (a digital video processor that includes a memory array)
to match the incoming picture format to the display.
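A minimal sketch of what a scaling engine does, assuming simple nearest-neighbor resampling of a row-major pixel array (real scaling engines use far more sophisticated filtering):

```python
def scale_nearest(src, src_w, src_h, dst_w, dst_h):
    """Resample a row-major pixel list to the display's fixed grid
    using nearest-neighbor selection."""
    dst = []
    for y in range(dst_h):
        sy = y * src_h // dst_h          # nearest source row
        for x in range(dst_w):
            sx = x * src_w // dst_w      # nearest source column
            dst.append(src[sy * src_w + sx])
    return dst

# Upscale a 2x2 image to a 4x4 fixed-pixel display
print(scale_nearest([1, 2, 3, 4], 2, 2, 4, 4))
```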
The term “display resolution” is usually used to mean pixel dimensions, the number of
pixels in each dimension (e.g., 1920×1080), which does not tell anything about the
resolution of the display on which the image is actually formed: resolution properly refers
to the pixel density, the number of pixels per unit distance or area, not total number of
pixels.
In digital measurement, the display resolution would be given in pixels per inch. In
analog measurement, if the screen is 10 inches high, then the horizontal resolution is
measured across a square 10 inches wide. This is typically stated as "lines of horizontal
resolution, per picture height".
Computer Monitors
Computer monitors have higher resolutions than most televisions.
1024×768 (Extended Graphics Array) was the most common display resolution. When a
computer display resolution is set higher than the physical screen resolution, some video
drivers make the virtual screen scrollable over the physical screen, thus realizing a two-
dimensional virtual desktop with its viewport.
Most LCD manufacturers do make note of the panel's native resolution as working in a
non-native resolution on LCDs will result in a poorer image, due to dropping of pixels to
make the image fit (when using DVI) or insufficient sampling of the analog signal (when
using VGA connector).
1.4 Video Display Devices
The primary output device in a graphics system is a video monitor.
The operation of most video monitors is based on the standard cathode-ray tube.
The video display devices discussed here are given below:
Refresh cathode ray tubes
Raster scan displays
Random scan displays
Color CRT monitors
Direct view storage tubes
Flat panel displays &
Three dimensional viewing devices
Refresh Cathode Ray Tubes
A beam of electrons (cathode rays), emitted by an electron gun, passes through focusing and
deflection systems that direct it towards specified positions on the phosphor-coated screen.
When the beam hits the screen, the phosphor emits a small spot of light at each position contacted by
the electron beam.
It redraws the picture by directing the electron beam back over the same screen points quickly.
Deflection of the electron beam can be controlled either with electric fields or with
magnetic fields.
Cathode ray tubes constructed with magnetic deflection coils mounted on the outside of
the CRT envelope.
Two pairs of coils are used: one pair for horizontal deflection & the other for vertical deflection.
Raster Scan Displays
In a raster scan system, the electron beam is swept across the screen, one row at a time
from top to bottom. As the electron beam moves across each row, the beam intensity is
turned on and off to create a pattern of illuminated spots.
Picture definition is stored in memory area called the Refresh Buffer or Frame Buffer.
This memory area holds the set of intensity values for all the screen points. Stored
intensity values are then retrieved from the refresh buffer and “painted” on the screen
one row (scan line) at a time as shown in the following illustration.
Each screen point is referred to as a pixel (picture element) or pel. At the end of each
scan line, the electron beam returns to the left side of the screen to begin displaying the
next scan line.
Fig: A raster-scan system displays an object as a set of discrete points across each
scan line.
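The refresh-buffer idea can be sketched as a small 2D array of intensity values; the names and the tiny screen size are purely illustrative:

```python
WIDTH, HEIGHT = 8, 4  # a tiny screen, for illustration only

# The refresh (frame) buffer holds one intensity value per screen point.
frame_buffer = [[0] * WIDTH for _ in range(HEIGHT)]

def set_pixel(x, y, intensity):
    """Store an intensity value for one screen point (pixel)."""
    frame_buffer[y][x] = intensity

def refresh():
    """'Paint' the stored intensities one scan line at a time,
    top to bottom; returns the number of scan lines painted."""
    lines_painted = 0
    for scan_line in frame_buffer:       # one row = one scan line
        for intensity in scan_line:      # left to right across the line
            pass                         # a real system drives the beam here
        lines_painted += 1
    return lines_painted

set_pixel(3, 1, 255)
print(refresh())  # 4
```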
Random Scan Displays
Random scan monitors draw a picture one line at a time & for this reason they are
referred to as vector displays (or stroke-writing or calligraphic displays).
The component lines of a picture can be drawn and refreshed by a random scan system
in any specified order.
A pen plotter operates in a similar way and is an example of a random scan, hard-copy
device.
Refresh Rate
Refresh rate depends on the number of lines to be displayed.
A refresh buffer or refresh display file or display list or display program is used to store
picture definition. The line drawing commands are stored in the refresh buffer.
To display a specified picture the system processes the set of drawing commands.
Random scan displays are designed to draw all the component lines of a picture 30 to 60
times each second.
Random scan systems are designed for line-drawing applications and cannot display
realistic shaded scenes.
Random scan displays produce smooth line drawings because the CRT beam directly
follows the line path.
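The display-list idea above can be sketched as follows; the command format is a hypothetical illustration, not any real display file format:

```python
# A refresh display file (display list) of line-drawing commands,
# replayed 30-60 times each second on a random scan display.
display_list = [
    ("line", (0, 0), (100, 100)),
    ("line", (100, 100), (200, 0)),
]

def refresh_cycle(commands):
    """Process every drawing command in the display list once;
    returns the line segments the beam would trace."""
    drawn = []
    for cmd, start, end in commands:
        if cmd == "line":
            drawn.append((start, end))  # beam follows the line path directly
    return drawn

print(len(refresh_cycle(display_list)))  # 2
```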
Color CRT Monitors
A shadow mask CRT has three phosphor color dots at each pixel position.
One for red light, one for green light & one for blue light.
This type of CRT has three electron guns, one for each color dot.
This CRT also has a shadow mask grid just behind the phosphor coated screen.
The three electron beams are deflected & focused as a group onto the shadow mask.
The holes in the shadow mask are aligned with the phosphor dot patterns.
When the beams pass through the holes in the shadow mask they activate a dot triangle.
Another configuration for the electron guns is an in-line arrangement, where the 3
electron beams are aligned along a single scan line.
By varying the intensity levels of the three electron beams, various color variations are
obtained.
The color obtained depends on the amount of excitation of the red, green and blue phosphors.
A white area indicates that all three electron beams are activated with the same
intensity.
Color Combinations
Yellow – green & red beams
Magenta - blue & red beams
Cyan – blue & green beams
More sophisticated systems can set intermediate intensity levels for the electron beams.
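The beam-to-color combinations listed above can be sketched as a small lookup; this covers full-intensity beams only and is an illustration, not a physical model:

```python
def beam_color(red_on, green_on, blue_on):
    """Name the color produced when the given electron beams are
    at full intensity (additive color mixing)."""
    names = {
        (True, True, True): "white",      # all three beams, equal intensity
        (True, True, False): "yellow",    # green & red beams
        (True, False, True): "magenta",   # blue & red beams
        (False, True, True): "cyan",      # blue & green beams
    }
    return names.get((red_on, green_on, blue_on), "other")

print(beam_color(True, True, False))  # yellow
```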
Direct view storage tubes
An alternative method for maintaining a screen image is to store the picture information inside
the CRT instead of refreshing the screen.
Two electron guns are used in DVST
DVST stores the picture information as a charge distribution just behind the phosphor-coated
screen.
The primary electron gun stores the picture pattern.
The flood gun maintains the picture display.
Advantages
i) No refreshing is needed.
ii) Very complex pictures can be displayed at very high resolutions without flicker.
Disadvantages
i) They ordinarily do not display color.
ii) Selected parts of a picture cannot be erased; to eliminate a picture section, the
entire screen must be erased & the modified picture redrawn.
Flat Panel Displays
Flat panel displays are classified into two categories: emissive displays & non-emissive
displays.
Emissive displays
Convert electrical energy into light.
Examples
i) Plasma panels
ii) Thin film electroluminescent displays
iii) Light emitting diodes.
Plasma Panels
Plasma panels are also called gas-discharge displays.
Plasma panels are constructed by filling the region between two glass plates with a
mixture of gases that usually include Neon.
Liquid Crystal Displays
Liquid crystal displays use nematic (thread-like) liquid crystal compounds.
The liquid crystal material is sandwiched between two glass plates each containing a light
polarizer at right angles to the plate.
Horizontal conductors are built into one glass plate & vertical conductors are built into
another glass plate.
The intersection of the two conductors defines a pixel position.
Polarized light passing through the material is twisted so that it will pass through the
opposite polarizer.
The light is then reflected back to the viewer.
This type of flat panel device is referred to as a passive matrix LCD.
Another method for constructing LCD’s is to place the transistor at each pixel location
using thin- film transistor technology. These devices are called Active – Matrix LCD.
Three Dimensional Viewing Devices
Graphics monitors for the display of three dimensional scenes have been devised using a
technique that reflects a CRT image from a vibrating, flexible mirror.
As the mirror vibrates it changes focal length.
These vibrations are synchronized with the display of an object on a CRT so that each point on the
object is reflected from the mirror into a spatial position corresponding to the distance of
that point from a specified viewing position.
This allows a person to walk around an object or scene and view it from different sides.
Real-time example
Genisco SpaceGraph system – used in medical applications
for analyzing data from ultrasonography & CAT scan devices, and
in geological applications to analyze topographical & seismic data.
Input Devices
The input devices discussed here are:
i) keyboards
ii) mouse
iii) trackball & space ball
iv) joysticks
v) data glove
vi) digitizers
vii) image scanners
viii) touch panels
ix) light pens
x) voice systems
Keyboards
The keyboard is an efficient device for inputting such nongraphic data as picture labels
associated with a graphic display.
Alphanumeric keyboard
Keyboards are provided with features to facilitate entry of screen co-ordinates, menu
selections or graphics functions.
The common features on general purpose keyboards are
i) cursor-control keys
ii) function keys
(i) Cursor control keys:
Can be used to select displayed objects or co-ordinate positions by positioning the screen cursor.
(ii)Functional keys:
Allow users to enter frequently used operations in a single keystroke. Additionally a numeric
keypad is often included on the keyboard for fast entry of numeric data.
Mouse
A mouse is a small hand held device used to position the screen cursor.
Wheels or rollers on the bottom of the mouse can be used to record the amount &
direction of movement.
Another method for detecting mouse movements is with an optical sensor.
One, two or three buttons are usually included on the top of the mouse for signaling the
execution of some operation.
Z-Mouse
Additional devices can be included in the basic mouse design to increase the number of
allowable input parameters.
The z-mouse includes three buttons, a thumbwheel on the side, a trackball on the top & a
standard mouse ball underneath.
With the z-mouse, one can pick up an object, rotate it and move it in any direction, or
navigate one's viewing position & orientation through a 3D scene.
Applications of the z-mouse are
virtual reality
CAD
Animation
Trackball And Space ball
Trackball
Trackball is a ball that can be rotated with the fingers or palm of the hand to produce
screen cursor movements.
Potentiometers attached to the ball measure the amount & direction of rotation.
They are often mounted on keyboard or z-mouse.
Trackball is a two dimensional Positioning device
Space ball
A space ball provides six degrees of freedom.
A space ball does not actually move.
Strain gauges measure the amount of pressure applied to the space ball to provide input
for spatial positioning and orientation as the ball is pushed or pulled in various directions.
Space balls are used for three-dimensional positioning
Applications where space balls are used are
i) 3D positioning
ii) Virtual reality systems
iii) Modeling
iv) Animation
v) CAD
Joysticks
A joystick consists of a small vertical lever called the stick.
The stick is mounted on a base & is used to steer the screen cursor around.
Digitizers
A digitizer is a device used for drawing, painting or interactively selecting co-ordinate
positions on an object. In a graphics tablet, one common type of digitizer, an
electric signal is induced in a wire coil in an activated stylus or hand cursor to record a
tablet position. Depending on the technology, either signal strength, coded pulses, or
phase shifts can be used to determine the position on the tablet.
Image Scanners
An image scanner is used for storing drawings, graphs, color & black-and-white photos or
text by passing an optical scanning mechanism over the information to be stored.
The gradations of gray scale or color are then recorded and stored in an array.
Transformations can be applied to rotate, scale or crop the picture to a particular screen
area.
Various image processing methods can be applied to modify the array representation of the
picture. Image scanners come in a variety of sizes & capabilities.
Touch Panels
Touch panel allow displayed objects or screen positions to be selected with the touch of a
finger.
A typical application of touch panels is for the selection of processing options that are
represented with graphical icons.
Eg: ATM center touch panel
Touch input can be recorded using optical, electrical or acoustical methods.
Optical Touch Panels
These panels employ a line of infrared light emitting diodes ( LED’s) along one vertical
edge & along one horizontal edge of the frame.
The opposite vertical edge & horizontal edge contain light detectors.
When the panel is touched these light detectors record which beams are interrupted.
The 2 cross beams that are interrupted identify the horizontal & vertical co-ordinates of
the screen position selected.
Positions can be selected with an accuracy of about ¼ inch.
The LED’s operate at infrared frequencies so that the light is not visible to a user.
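The cross-beam coordinate computation can be sketched as follows, assuming the interrupted beams are reported as lists of column and row indices (an illustrative simplification of real panel electronics):

```python
def touch_position(interrupted_cols, interrupted_rows):
    """Return (x, y) from the interrupted LED beams, taking the centre
    of the interrupted beams in each direction; None if no touch."""
    if not interrupted_cols or not interrupted_rows:
        return None
    x = sum(interrupted_cols) / len(interrupted_cols)
    y = sum(interrupted_rows) / len(interrupted_rows)
    return (x, y)

# A finger wide enough to block vertical beams 4 and 5, horizontal beam 9
print(touch_position([4, 5], [9]))  # (4.5, 9.0)
```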
Electrical Touch Panel
These panels are constructed with 2 transparent plates separated by a small distance.
One plate is coated with a conducting material & the other with resistive material.
Touching the outer plate forces it into contact with the inner plate.
This contact creates a voltage drop across the resistive plate that is converted into co-
ordinate values of the selected screen position.
Acoustical Touch Panel
Here high-frequency sound waves are generated in the horizontal & vertical directions
across a glass plate.
Touching the screen causes part of each wave to be reflected from the fingers to the
emitters.
The screen position is calculated from a measurement of the time interval between the
transmission of each wave & its reflection to the emitter.
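The time-interval-to-position computation can be sketched as follows; the wave speed is an assumed illustrative value, not a measured property of any panel:

```python
SPEED_OF_SOUND_IN_GLASS = 4000.0  # m/s — assumed value for illustration

def touch_distance(round_trip_time):
    """Distance of the touch from the emitter, given the time interval (s)
    between transmission of the wave & its reflection back to the emitter.
    The wave travels to the finger and back, hence the division by 2."""
    return SPEED_OF_SOUND_IN_GLASS * round_trip_time / 2

print(touch_distance(0.0001))  # distance in metres for a 100 microsecond echo
```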
Light Pen
Light pens are pencil-shaped devices used to select screen positions by detecting the light
coming from points on the CRT screen.
Light pens are sensitive to short burst of light emitted from the phosphor coating of the
CRT.
Other light sources are not usually detected by a light pen.
An activated light pen pointed at a spot on the screen as the electron beam lights up that
spot generates an electric pulse that causes the co-ordinate position of the electron beam
to be recorded.
The recorded light pen co-ordinates can be used to position an object or to select a
processing option.
Disadvantages
i) Screen image obscured by hand & light pen
ii) Prolonged use causes arm fatigue
iii) Light pens require special implementation for some applications.
iv) Sometimes light pens give false readings due to background lighting in room.
Voice Systems
Speech recognizers are used in some graphics workstations as input devices to accept
voice commands.
The voice system input can be used to initiate graphics operations or to enter data.
These systems operate by matching an input against a predefined dictionary of words and
phrases.
Output Devices
Speakers
The primary method of sound output in most computers today. Sound is translated from bits to
electrical signals in a sound card, which then channels the signals to the speakers.
MIDI
MIDI files are typically created using computer-based sequencing software (or
sometimes a hardware-based MIDI instrument or workstation) that organizes MIDI
messages into one or more parallel "tracks" for independent recording and editing.
Microphone
A microphone is a device used to change sound into electric signals. Microphones are
used in telephones, tape recorders, hearing aids and many other devices.
Printer
A printer is a device that prints text or images, making a physical copy (usually with
some kind of ink on paper). Printers are classified into two types:
(i) Impact printers rely upon a mechanical impact to transfer ink to paper. Early printers
were more like electric typewriters, striking an ink ribbon against the paper with a lever
carrying a raised image of a letter on the end, or with a rotating ball.
(ii) Non-impact printers transfer ink to paper without mechanical impact; laser printers
and inkjet printers are the common examples.
Laser Printer
A laser printer directs a laser beam onto a rotating drum, covered by a photoconductor (a
material that conducts electricity when illuminated), carrying an electrostatic charge.
The laser "erases" the charge in areas it strikes.
Then a powdered ink ("toner"), charged the same polarity as the original surface charge
on the drum, is spread across the surface, and it is repelled from the areas that still carry
the original charge, but it is attracted to the discharged areas where the image was
"written" by the laser.
The drum then rolls the image formed onto paper, which is then heated ("fused") to
make the toner stick to itself and the paper.
Inkjet Printer
An inkjet printer is a type of computer printer that creates a digital image by propelling
variably-sized droplets of liquid material (ink) onto a page. Inkjet printers are the most common
type of printer and range from small inexpensive consumer models to very large and expensive
professional machines.
Hard-Copy Devices
Hard copy output for images can be obtained in several formats
Users can put pictures on paper by directing graphics output to a printer or plotter.
The quality of pictures obtained from a device depends on the dot size & the number of
dots per inch or lines per inch.
Smooth characters can be produced in printed text strings by high quality printers.
These printers shift dot positions so that adjacent dots overlap.
There are 2 major types of printers, namely impact printers & non-impact printers.
GRAPHICS PACKAGES
A set of libraries that provide programmatic access to some kind of 2D graphics
functions.
Types:
GKS-Graphics Kernel System – first graphics package – accepted by ISO & ANSI
PHIGS (Programmer’s Hierarchical Interactive Graphics Standard)-accepted by ISO &
ANSI
PHIGS + (Expanded package)
Silicon Graphics GL (Graphics Library)
OpenGL
Pixar RenderMan interface
PostScript interpreters
Painting, drawing, design packages
1.6 Direct screen interaction Method
Interaction on touch-sensitive screens is literally the most "direct" form of HCI, where information
display and control coincide on a single surface.
The zero displacement between input and output, control and feedback, hand action and eye gaze, makes
touch screens very intuitive to use, particularly for novice users. Not surprisingly, touch screens have been
widely and successfully used in public information kiosks, ticketing machines, bank teller machines and the
like.
Being direct between control and display, touch screens also have special limitations. First, the user's
finger, hand and arm can obscure part of the screen.
Second, the human finger as a pointing device has very low "resolution". It is difficult to point at targets
that are smaller than the finger width. As touch screen technology becomes more available at a lower price
and better quality, we expect its greater use in many different domains.
We set out to explore touch screen interaction techniques that can handle pointing at individual pixel
levels. High precision interaction on touch screens is necessary and important in many situations including
dealing with geographical systems or high precision drawings.
One area in particular is command and control where many characteristics of the touch screen mentioned
earlier are desirable, but where high accuracy techniques have to be developed in order to deal with
geographical information.
For example, computer supported command and control systems used in military vehicles are constrained
by space limitations and rugged environments.
Screen size is therefore limited. To interact with these systems--for example when deploying geographical
orders--users need to maintain an overview of an area of interest (which determines the zoom level) and yet
be able to point at precise locations.
Introduction
In computer music, different strategies are possible to control sound processes. A first one consists in
using the computer's calculation power and flexibility in the design phase of an
instrument. Today, much research is conducted to create powerful digital musical instruments.
To design them, a critical part of the work consists in the mapping between the gestural devices and
the sound processes to control. Those instruments tend to reproduce the “instrumental link” that is
intrinsic to acoustic instruments and that has often disappeared in electronic and digital
systems.
A second strategy consists in using the computer for its powerful interaction through a graphical user
interface (GUI). Today's musical software essentially uses a mouse and a keyboard
with a conventional GUI: all sound parameters are controllable via graphical objects that generally
represent real objects like piano keyboards, faders, etc.
Complete studio equipment and electronic instrument emulators are now integrated in the
computer. The GUIs tend to reproduce on the screen an interaction area close to the real one, like the
front panels of electronic instruments. The aim of such interfaces is to give the user the impression of
real objects in front of him.
Nevertheless, with a single mouse, the interaction process is poor: the gesture space (where
the mouse is) is separated from the interaction space (the screen) and only one object can be
manipulated at a time. This explains why many software programs are configured to use “external”
devices like MIDI controllers, software-specific control surfaces or alternative controllers. In this
case, the full system is similar to those of the first strategy; the graphical objects, which are designed
for interaction, are only used for visual feedback or not used at all.
The system we introduce in this article enables the control of graphical objects in GUIs like real
objects and rather follows the second strategy. This new powerful multimodal system, the Pointing
Fingers, performs direct control on GUIs with a multi-touch, touch-screen-like device, designed for
musical control. The following sections present the interaction principle, describe the gesture device
and the software implementation, and expose musical examples of what is possible with such a system.
The system is based on the combination of two crucial features: the superposition of both gesture
spatial place and visual feedback spatial place and the ability to have multiple simultaneous controls
when using a GUI.
Some systems that have these two features already exist; one of them was developed to control
musical processes: the AudioPad, based on tangible interfaces, in which the objects to manipulate are
real and interact with graphics. Our system is closer to current GUIs because the objects to
manipulate are the virtual graphical objects displayed on screen. This type of system provides the
most direct and intuitive interaction possible: our fingers manipulate graphical objects as if they
were real objects.
There are no material constraints on the objects: they can change in position, size, shape and
function. It is possible to display some information beside the objects to help the user. It is a very
efficient system to control virtual copies of real objects.
Finally, interaction situations that are impossible in the real world can be implemented here, like
manipulating moving objects, as will be demonstrated in section 5. In interaction with a real
object, this object provides some haptic feedback: the contact with the object's shape, the force it
needs to be manipulated, the degrees of freedom it offers and the spatial limits of its displacement.
This feedback is so important that the user could manipulate an object with the eyes closed. With our
system, the haptic feedback is reduced to the contact between fingers and screen. Sight and hearing
are fully used; sight permits locating the position of the objects on the screen, and hearing can
reinforce sight when an object is manipulated, through the effect of manipulation on the sound.
The GUI of our system is close to those using a mouse to control graphical objects; the difference
is that the objects need a bigger size, because a fingertip is bigger than a mouse pointer. The screen
area contains different interaction zones; each zone will have its own interaction mode and
connection to the sound process parameters. Different types of gestures are necessary to act in a zone:
selection gesture to select the chosen zone among several zones, modulation or continuous gesture to
modify the parameters that are associated with the zone, and decision gesture to stop the interaction.
For example, if the user wants to manipulate the graphical object “fader”, he selects this fader with
one of his fingers, manipulates it, and then he lifts his finger off the screen area.
We want a device that follows our requirements: having multi-touch and interacting directly
with the interface. Commercial touch screens fulfill the second point, but solutions have been developed in
different labs, as in the following examples: the SmartSkin system combines a new type of multi-touch
surface with a video projection; vision-based finger tracking determines the fingers' positions through
video analysis.
The device we introduce now is a first prototype we have made to perform multi-touch on a screen.
It consists of 2 semi-gloves (covering the thumb and the index) with two 3D position/orientation
sensors and two switches per hand. (This device is close to Mulder's CyberGloves and Polhemus
system, but is less expensive in hardware and simpler to implement.)
This device can give the position of 4 digits (the thumb and the index of each hand) with
approximately 1 mm accuracy and the on/off values of the switches (an equivalent of the mouse click
button) localized at the extremity of the fingers; those switch buttons indicate if the fingertips are
physically touching the screen or not. All the data of the sensors are processed in the Max/MSP
environment.
The Flock of Birds is a commercial device composed of a transmitter and several receivers, called
birds; the device communicates with the computer through a serial interface and a serial/USB
converter. We use the serial object of Max to receive the data.
The switches are connected to the electronics of a USB joystick and we receive its data using the
insprock object. However, this device has some limitations.
The Flock of Birds device introduces some latency: we have not measured it but we estimate it to be
approximately 30 ms with four sensors; this lag is too large to create really reactive instruments,
but is acceptable for our experiments and applications with modulation-like instruments. Another
problem is the choice of the screen: CRT screens are disturbed by magnetic fields, and some LCD
screens disturb the magnetic field of the sensor.
We have developed a specific C object for Max to transform the data of the birds and find the
fingertip coordinates in the screen base. The sensor gives the absolute position and orientation of the 4 birds
in space, relative to the transmitter; with these data, the object calculates the position of the tips using the
rotation matrix between each bird base and the transmitter base.
These coordinates are then rotated and translated to the screen base and rescaled in order to obtain the
position of the tips in pixels, which is the mouse coordinate unit. (A calibration procedure calculates the
screen position and size in the transmitter base and then determines the screen base.)
[Figure: spatial correspondence between gesture, graphical objects, visual feedback, and sound feedback]
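The transformation chain described above can be sketched as follows. This is a simplified illustration with hypothetical function and parameter names (the real C object for Max is not reproduced in the text): the fingertip offset is rotated from the bird base into the transmitter base, translated by the bird position, re-expressed in the screen base, and rescaled to pixels.

```python
def mat_vec(m, v):
    """Multiply a 3x3 rotation matrix (list of rows) by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def fingertip_to_pixels(bird_pos, bird_rot, tip_offset,
                        screen_origin, screen_rot,
                        screen_size, screen_size_px):
    """Transform a fingertip position into screen pixel coordinates.

    bird_pos       : bird position in the transmitter base (3-vector)
    bird_rot       : rotation matrix, bird base -> transmitter base
    tip_offset     : fingertip offset from the bird, in the bird base
    screen_origin  : screen corner position in the transmitter base
    screen_rot     : rotation matrix, transmitter base -> screen base
    screen_size    : (width, height) of the screen in metres
    screen_size_px : (width, height) of the screen in pixels
    """
    # Fingertip in the transmitter base: rotate the offset, then translate.
    rotated = mat_vec(bird_rot, tip_offset)
    tip = [bird_pos[i] + rotated[i] for i in range(3)]
    # Express the tip in the screen base: translate, then rotate.
    rel = [tip[i] - screen_origin[i] for i in range(3)]
    tip_screen = mat_vec(screen_rot, rel)
    # Rescale the in-plane coordinates to pixels (the mouse coordinate unit).
    px = tip_screen[0] / screen_size[0] * screen_size_px[0]
    py = tip_screen[1] / screen_size[1] * screen_size_px[1]
    return px, py
```

The calibration procedure mentioned in the text would supply `screen_origin`, `screen_rot`, and `screen_size`.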
We now describe how the data of our gesture device, or of any equivalent device, are processed. In
our approach, we try to build modular systems, so the control of the graphical objects is completely
independent of the gesture device: any gesture device that can give us lists containing
a point number, (X, Y) coordinates in the screen base, and the value of an on/off button can be used
instead of the Pointing Fingers.
For this reason, we will call a pointer a point on the screen that is given by the gesture device. Our
gesture device gives 4 pointers simultaneously. We used the Max/MSP environment and created a
specific Max object to manage the data for a given zone of the screen: multipoint interaction
introduces ambiguities that do not exist with a single mouse. The object receives the data lists from
all pointers.
The delimitation of the object's action zone is set by sending specific instructions to it; the object
then manages multiple points for its zone.
The outputs of this Max object can be connected to many graphical objects, taking care to keep
coherence between the visual effect of the interaction on the graphical object and the position of the
pointer on the screen.
This implementation is simple but sufficient to accomplish many things. Firstly, many Max
graphical objects can be used with our Max object and can be manipulated simultaneously. Secondly,
original graphical objects or interaction zones can be created and used with our system, and we
can imagine multipoint interaction zones using several instances of our Max object.
Conclusion - perspectives
The computer often seems to be a powerful creature inside a closed box. Its screen shows us
marvelous worlds, but interacting with a mouse is frustrating, especially when we want to perform
music.
As the two examples show, our system will help to design musical instruments that benefit from the
advantages of the computer's universality and flexibility, through a powerful control of Graphical
User Interfaces. Our work on this system has just established its basis; in the future, we will
develop new objects, implement other synthesis techniques, and improve the system to provide a
complete environment for creating new digital musical instruments.
In event mode, the program and input devices operate concurrently: the devices deliver data to an
input queue, and all input data are saved. When the program requires new data, it goes to the data
queue.
Any number of devices can be operating at the same time in sample and event modes.
Some can be operating in sample mode, while others are operating in event mode. But
only one device at a time can be providing input in request mode.
An input mode within a logical class for a particular physical device operating on a
specified workstation is declared with one of six input-class functions of the form
set...Mode (ws, deviceCode, inputMode, echoFlag)
where deviceCode is a positive integer and inputMode is assigned one of the values request, sample, or event.
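GKS itself is rarely available today, so the semantics of the three input modes can be illustrated with a small simulation. The following Python sketch uses hypothetical class and method names of our own (not a real GKS binding): request mode supplies one value on demand, sample mode reads the most recent value immediately, and event mode saves every value in a queue.

```python
from collections import deque

class LogicalDevice:
    """Hypothetical sketch of a GKS-style logical input device.

    The mode set with set_mode() decides how input is delivered:
    - 'request': the program asks for one value and uses it directly
    - 'sample' : the program reads the most recent value immediately
    - 'event'  : every value is appended to a queue the program drains later
    """
    def __init__(self):
        self.mode = 'request'
        self.current = None       # latest value, read in sample mode
        self.events = deque()     # saved inputs, drained in event mode

    def set_mode(self, input_mode):
        assert input_mode in ('request', 'sample', 'event')
        self.mode = input_mode

    def device_update(self, value):
        # Called whenever the physical device produces a new value.
        self.current = value
        if self.mode == 'event':
            self.events.append(value)   # all input data are saved

    def sample(self):
        return self.current             # immediate, never waits

    def await_event(self):
        # Oldest saved input first; None signals an empty queue.
        return self.events.popleft() if self.events else None
```

In real GKS, request mode would block the program until the input arrives; that waiting is omitted here for simplicity.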
Request Mode
Input commands used in this mode correspond to standard input functions in a high-level
programming language. When we ask for an input in request mode, other processing is
suspended until the input is received.
After a device has been assigned to request mode, as discussed in the preceding section,
input requests can be made to that device using one of the six logical-class functions
represented by the following:
request... (ws, deviceCode, status, ...)
Values input with this function are the workstation code and the device code. Returned
values are assigned to parameter status and to the data parameters corresponding to the
requested logical class.
A value of ok or none is returned in parameter status, according to the validity of the
input data. A value of none indicates that the input device was activated so as to produce
invalid data. For locator input, this could mean that the coordinates were out of range. For
pick input, the device could have been activated while not pointing at a structure, or a
"break" button on the input device could have been pressed. A returned value of none can
be used as an end-of-data signal to terminate a programming sequence.
Valuator Input in Request Mode
A numerical value is input in request mode with
requestValuator (ws, devCode, status, value)
Parameter value can be assigned any real-number value.
Choice Input in Request Mode
We make a menu selection with the following request function:
requestChoice (ws, devCode, status, itemNum)
Parameter itemNum is assigned a positive integer value corresponding to the menu item selected.
Pick Input in Request Mode
For this mode, we obtain a structure identifier number with the function
requestPick (ws, devCode, maxPathDepth, status, pickDepth, pickPath)
Parameter pickPath is a list of information identifying the selected primitive. This list contains the
structure name, the pick identifier for the primitive, and the element sequence number. Parameter
pickDepth is the number of levels returned in pickPath, and maxPathDepth is the specified
maximum path depth that can be included in pickPath.
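The pick path described above can be modelled as a list of (structure name, pick identifier, element number) entries, truncated to the requested maximum depth. The following Python sketch uses our own names, not a real GKS binding, and mimics the ok/none status convention:

```python
from typing import List, NamedTuple

class PickEntry(NamedTuple):
    structure_name: str   # name of the structure containing the primitive
    pick_id: int          # pick identifier assigned to the primitive
    element_num: int      # element sequence number within the structure

def request_pick(full_path: List[PickEntry], max_path_depth: int):
    """Sketch of requestPick: returns (status, pickDepth, pickPath).

    full_path is the complete structure hierarchy of the picked primitive;
    at most max_path_depth levels are returned in pickPath.
    """
    if not full_path:
        # Device activated while not pointing at a structure: invalid input.
        return 'none', 0, []
    path = full_path[:max_path_depth]
    return 'ok', len(path), path
```

A nested structure (a door inside a house, say) yields a two-level path; requesting `max_path_depth=1` returns only the outermost level.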
Sample Mode
Once sample mode has been set for one or more physical devices, data input begins
without waiting for program direction. If a joystick has been designated as a locator
device in sample mode, coordinate values for the current position of the activated joystick
are immediately stored. As the activated stick position changes, the stored values are
continually replaced with the coordinates of the current stick position.
Sampling of the current values from a physical device in this mode begins when a sample
command is encountered in the application program.
A locator device is sampled with one of the six logical-class functions represented by the
following:
sample... (ws, deviceCode, ...)
An example of the simultaneous use of input devices in different modes is given in the
following procedure. An object is dragged around the screen with a mouse. When a final position
has been selected, a button is pressed to terminate any further movement of the object. The
mouse positions are obtained in sample mode, and the button input is sent to the event queue.
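As a rough sketch of that procedure (hypothetical function and data, not actual GKS code): the loop repeatedly samples the mouse position and redraws the object there, while a button press delivered through the event queue ends the drag.

```python
from collections import deque

def drag_object(mouse_positions, button_events):
    """Drag an object until a button event terminates the loop.

    mouse_positions : successive sampled (x, y) mouse positions
    button_events   : event queue; the value 'pressed' ends the drag
    Returns the final object position.
    """
    position = None
    for pos in mouse_positions:
        position = pos                 # sample mode: current mouse position
        # In a real program the object would be redrawn at `position` here.
        if button_events and button_events.popleft() == 'pressed':
            break                      # event mode: button press ends dragging
    return position
```

The key point the sketch illustrates is that the two devices use different modes at the same time: the mouse is polled every iteration, while the button is consulted only through its queue.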
User Dialogue
For a particular application, the user's model serves as the basis for the design of the
dialogue.
The user's model describes what the system is designed to accomplish and what graphics
operations are available. It states the types of objects that can be displayed and how the
objects can be manipulated.
For example, if the graphics system is to be used as a tool for architectural design, the
model describes how the package can be used to construct and display views of buildings
by positioning walls, doors, windows, and other building components.
Similarly, for a facility-layout system, objects could be defined as a set of furniture items
(tables, chairs, etc.), and the available operations would include those for positioning and
removing different pieces of furniture within the facility layout.
A circuit-design program might use electrical or logic elements for objects, with
positioning operations available for adding or deleting elements within the overall circuit design.
All information in the user dialogue is then presented in the language of the application.
In an architectural design package, this means that all interactions are described only in
architectural terms, without reference to particular data structures or other concepts that
may be unfamiliar to an architect.
Windows and Icons
Visual representations are used both for objects to be manipulated in an application and
for the actions to be performed on the application objects.
A window system provides a window-manager interface for the user and functions for
handling the display and manipulation of the windows.
Common functions for the window system are opening and closing windows,
repositioning windows, resizing windows, and display routines that provide interior and
exterior clipping and other graphics functions.
Typically, windows are displayed with sliders, buttons, and menu icons for selecting
various window options.
Some general systems, such as X Windows and NeWS, are capable of supporting multiple
window managers so that different window styles can be accommodated, each with its
own window manager. The window managers can then be designed for particular
applications.
Icons representing objects such as furniture items and circuit elements are often referred
to as application icons. The icons representing actions, such as rotate, magnify, scale,
clip, and paste, are called control icons, or command icons.
Accommodating Multiple Skill Levels
Interactive graphical interfaces provide several methods for selecting actions.
For example, options could be selected by pointing at an icon and clicking different
mouse buttons, or by accessing pull-down or pop-up menus, or by typing keyboard
commands. This allows a package to accommodate users that have different skill levels.
For a less experienced user, an interface with a few easily understood operations and
detailed prompting is more effective than one with a large, comprehensive operation set.
A Simplified set of menus and options is easy to learn and remember and the user can
concentrate on the application instead of on the details of the interface.
Experienced users typically want speed; this means fewer prompts and more input from
the keyboard or with multiple mouse-button clicks.
Actions can also be selected with function keys or with combinations of keyboard keys, since
experienced users will remember these shortcuts for commonly used actions.
Similarly, help facilities can be designed on several levels so that beginners can carry on a
detailed dialogue, while more experienced users can reduce or eliminate prompts and
messages. Help facilities can also include one or more tutorial applications.
Consistency
An interface should be consistent; for example, color coding should be chosen so that the same
color does not have different meanings in different situations.
Generally, a complicated, inconsistent model is difficult for a user to understand and to
work with in an effective way. The objects and operations provided should be designed to
form a minimal, consistent set so that the system is easy to learn.
Minimizing Memorization
Operations in an interface should also be structured so that they are easy to understand and
to remember.
Abbreviated command formats lead to confusion and reduce the effectiveness of the
package. One key or button used for all delete operations, for example, is easier
to remember than a number of different keys for different types of delete operations.
Icons and window systems also aid in minimizing memorization.
Different kinds of information can be separated into different windows so that we do not
have to rely on memorization when different information displays overlap.
We can simply retain multiple pieces of information on the screen in different windows and
switch back and forth between window areas. Icons reduce memorization by
displaying easily recognizable shapes for various objects and actions.
Backup and Error handling
Backup can be provided in many forms. A standard undo key or command is used to
cancel a single operation.
Sometimes a system can be backed up through several operations, allowing us to reset
the system to some specified point.
In a system with extensive backup capabilities, all inputs could be saved so that we can
back up and "replay" any part of a session.
Sometimes operations cannot be undone. Once we have deleted the trash in the desktop
waste basket, for instance, we cannot recover the deleted files. In this case, the interface
would ask us to verify the delete operation before proceeding.
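The backup mechanism described above can be sketched as a history stack of saved states. The following is a minimal illustration with hypothetical class and method names (not the API of any real package), including a confirmation step for an irreversible operation:

```python
class Editor:
    """Minimal sketch of undo via saved states, plus a confirmation
    step for an operation that cannot be undone."""
    def __init__(self):
        self.objects = []
        self.history = []                       # saved states for backup

    def add(self, obj):
        self.history.append(list(self.objects))  # save state before the change
        self.objects.append(obj)

    def undo(self):
        if self.history:
            self.objects = self.history.pop()    # restore the previous state

    def empty_trash(self, confirm):
        # Irreversible: the interface asks the user to verify first.
        if not confirm:
            return False
        self.history.clear()    # deleted files can no longer be recovered
        return True
```

Because every prior state is kept, the system can also be "backed up through several operations" by popping the stack repeatedly, as the text describes.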
Good diagnostics and error messages are designed to help determine the cause of an
error.
Additionally, interfaces attempt to minimize error possibilities by anticipating certain
actions that could lead to an error. Examples of this are not allowing us to transform an
object position or to delete an object when no object has been selected, not allowing us to
select a line attribute if the selected object is not a line, and not allowing us to select the
paste operation if nothing is in the clipboard.
Feedback
Interfaces are designed to carry on a continual interactive dialogue so that we are
informed of actions in progress at each step. This is particularly important when the
response time is high. Without feedback, we might begin to wonder what the system is
doing and whether the input should be given again.
As each input is received, the system normally provides some type of response:
an object is highlighted, an icon appears, or a message is displayed. This not only
informs us that the input has been received, but it also tells us what the system is doing.
If processing cannot be completed within a few seconds, several feedback messages
might be displayed to keep us informed of the progress of the system. In some cases, this
could be a flashing message indicating that the system is still working on the input
request.
With function keys, feedback can be given as an audible click or by lighting up the key
that has been pressed.
Audio feedback has the advantage that it does not use up screen space, and we do not
need to take attention from the work area to receive the message. When messages are
displayed on the screen, a fixed message area can be used so that we always know where
to look for messages.
In some cases, it may be advantageous to place feedback messages in the work area near
the cursor. Feedback can also be displayed in different colors to distinguish it from other
displayed objects.
To speed system response, feedback techniques can be chosen to take advantage
of the operating characteristics of the type of devices in use.
Special symbols can also be designed for different types of feedback.
1.9 Interactive Picture Construction Techniques
Interactive picture construction techniques are methods that let the user build a picture interactively.
Several such techniques are available:
(i) Positioning – In this method, objects are placed according to the requirements of the
application. An input device such as a mouse is used to choose the location of each object
on the screen.
Advantages:
We can easily change the location of an object with the mouse.
We can easily see where the object will appear on the window.
We can easily observe whether one object overlaps another.
Disadvantages
We cannot specify the exact position of an object.
We cannot enter accurate coordinate values for an object.
(ii) Dragging – In this method, we select an object at one location and drag it to another
location. This technique is useful for judging the appearance of an object as it is moved
from one place to another.
Advantages:
We can easily drag an object from one location to another.
We can easily preview the final appearance of the output.
Disadvantages
We are restricted to the mouse (or a similar pointing device).
A precise floating-point position cannot be entered to place an object.
(iii) Constraints – In this method, rules (constraints) govern how a picture can be constructed.
A constraint alters input coordinates to enforce a required orientation or alignment; for example, a
horizontal constraint forces a drawn line to be exactly horizontal even if the selected endpoints are
not perfectly aligned. Constraints are applied according to the requirements of the user and can
include options such as erasers and pen shapes.
Advantages:
We can modify a picture according to our requirements.
We can perform various picture-modification operations.
We can easily enforce rules over picture construction so that the user is bound to follow them.
Disadvantages
Sometimes a rule or constraint cannot be applied to a particular application or object.
(iv) Grids – In this method, the picture area is overlaid with a grid. The grid is constructed
according to the size of the image: a larger image requires more grid lines. Objects are placed at
the intersections of the grid rows and columns. When an object is positioned, its coordinates are
snapped to the nearest intersection: if a coordinate is 4.6 it is automatically shifted to 5, and if it
is 4.3 it is shifted to 4.
Advantages:
Grids provide consistent alignment of objects.
Objects can be moved from one grid location to another.
Objects are automatically snapped to the nearest grid point.
Disadvantages:
We cannot place an object at an arbitrary, off-grid location.
We cannot obtain exact coordinate values between grid points.
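The snapping rule above (4.6 to 5, 4.3 to 4) is simply rounding each coordinate to the nearest grid intersection. A minimal sketch, with a grid-spacing parameter added here for generality:

```python
def snap_to_grid(x, y, spacing=1.0):
    """Snap a point to the nearest grid intersection.

    With the default spacing of 1.0 this reproduces the rule in the text:
    a coordinate of 4.6 snaps to 5, and 4.3 snaps to 4.
    """
    return (round(x / spacing) * spacing,
            round(y / spacing) * spacing)
```

With a coarser grid, e.g. `spacing=2.0`, the point (7.2, 3.9) snaps to (8.0, 4.0).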
(v) Gravity field – In this method, a gravity field is created around a line. Any point selected
within this field is pulled onto the line, so two lines can be made to join exactly even if the
cursor is not positioned precisely on the line.
Advantages:
We can connect various shapes to each other without positioning them exactly.
We can change the appearance of different shapes without changing the actual shapes.
Disadvantages:
We cannot see the exact appearance of the picture while drawing.
We cannot obtain the actual coordinates of an object.
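One common way to implement a gravity field (an assumption on our part; the text does not specify the geometry) is to snap the cursor onto the nearest point of a line segment whenever the cursor comes within a threshold distance of it:

```python
import math

def gravity_snap(p, a, b, radius):
    """Snap point p onto segment a-b if it lies within the gravity field.

    p, a, b : (x, y) tuples; radius : half-width of the gravity field.
    Returns the snapped point, or p unchanged if outside the field.
    """
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    # Parameter of the closest point on the segment, clamped to [0, 1].
    if seg_len2 == 0:
        t = 0.0
    else:
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    closest = (ax + t * dx, ay + t * dy)
    dist = math.hypot(px - closest[0], py - closest[1])
    return closest if dist <= radius else p
```

A cursor 0.5 units above a horizontal line with a field radius of 1.0 snaps onto the line; a cursor 3 units away is left where it is.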
(vi) Rubber-Band Method.
Straight lines can be constructed and positioned using rubber-band methods, which
stretch out a line from a starting position as the screen cursor is moved. Rubber-band
methods are used to construct and position other objects besides straight lines.
Line widths, line styles, and other attribute options are also commonly found in painting
and drawing packages.
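The rubber-band method above can be sketched as follows: the start point stays fixed at the first button press while the end point follows each successive cursor position. In a real package each intermediate line would be erased and redrawn on screen; this minimal sketch just collects the successive segments.

```python
def rubber_band(start, cursor_positions):
    """Return the successive line segments drawn by a rubber-band method.

    start            : anchor point fixed at the initial button press
    cursor_positions : cursor positions reported as the mouse moves
    """
    # Each segment stretches from the fixed anchor to the current cursor.
    return [(start, pos) for pos in cursor_positions]
```

The last segment in the list is the final positioned line; the earlier ones are the transient "stretching" lines the user sees while moving the cursor.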
12. Consider a raster scan system with the resolution of 1024*768 pixels and the color palette
calls for 65,536 colors. What is the minimum amount of video RAM that the computer
must have to support the above mentioned resolution and number of colors?(NOV2016)
13. Consider two raster systems with the resolutions of 640*480 and 1280*1024(NOV2016)
(a).How many pixels could be accessed per second in each of these systems by a display
controller that refreshes the screen at a rate of 60 frames per second?
(b).What is the access time per pixel in each system?
14. Explain and differentiate the functionality of LED and LCD. (NOV2017)
15. Explain Rubber-band method, Zooming, Panning and Dragging. (NOV2017)
16. Discuss input devices in detail. (NOV2018)