SRI RAMAKRISHNA ENGINEERING COLLEGE
(An Autonomous Institution, Affiliated to Anna University, Chennai)
An NBA Accredited Institution
Vattamalaipalayam, N.G.G.O. Colony Post,
Coimbatore - 641022
Phone: 0422 - 2460088, 2461588
PROJECT PRESENTATION

NAME: JAYACHANDRAN. R, KARTHIKPRABHU. S
DEPARTMENT: COMPUTER SCIENCE AND ENGINEERING
REGISTER NO: 1301166, 1301160
SEMESTER: 2nd year, 3rd semester
COLLEGE NAME: SRI RAMAKRISHNA ENGINEERING COLLEGE
NAME OF THE FIGURE

Senseboard
Comparison between Traditional Keyboard and Virtual Keyboard
Structure of the Virtual Keyboard System
Practical Setup
Use Case Diagram
Sequence Diagram for Image Acquisition
Sequence Diagram for Interrupt Detection in Frame Sequence
Sequence Diagram for Finger Extraction using Threshold Algorithm
Sequence Diagram for Finger Tip Analysis using Edge Detection
Sequence Diagram for Key Extraction
Class Diagram for Virtual Keyboard
Activity Diagram for Virtual Keyboard
Block Diagram for Image Acquisition
Snapshot of Image Acquisition
Block Diagram for Interrupt Detection in Frame Sequence
Snapshot of Interrupt Detection
Block Diagram for Finger Extraction using Threshold Algorithm
Snapshot of Finger Extraction using Threshold Algorithm
Snapshot of Finger Tip Analysis by Edge Detection Method
Snapshot of Key Extraction
WinRunner Testing Result
WinRunner Testing Code
System Testing
Conclusion
Screen Shot for Choosing the Environment
LIST OF SYMBOLS

SYMBOL NAME      DESCRIPTION
P(C|x)           Posterior Probability
p(C)             Prior Probability
P(x|C)           Conditional Probability (Class Likelihood)
p(x)             Evidence
x                Feature Vector
max P(C|x)       Maximum Posterior Probability
CHAPTER 1
INTRODUCTION
1.2 PROBLEM STATEMENT
The aim is to design a vision-based Virtual Keyboard that detects an interrupt for key recognition, instead of the mechanical transducer operation of key pressing. Mono-vision video of the hand posture while pressing keys is analyzed. The analyzed hand posture is then processed through various stages to estimate the key pressed. Mechanical transducers perform two operations for key estimation (key press and key release), whereas the Virtual Keyboard requires only the key press operation to estimate the key, not the key release operation.
CHAPTER 2
SYSTEM ANALYSIS
2.1 EXISTING SYSTEM
Current virtual keyboard designs appear in various forms, such as finger-joint wearable sensor gloves, Thumbcode, accelerometer-based inputs, laser-projected keyboards and gyroscope-based sensing. Each virtual keyboard has certain design characteristics. However, the performance parameters for evaluating keyboards are the same: number of discrete keys, response time and failure rate. Other parameters can include the ability to remap, key-symbol mapping and space requirements.
2.1.1 DRAWBACKS
The cost of the most successful Virtual Keyboard design is high due to design and
expensive technology.
2.2
An audible notification signals the recognition of a character after the 3-second significance interval.
2.2.3 THUMBCODE
The Thumbcode method [14] described in figure 2.3 which defines the touch of the
thumb onto the other fingers phalanges of the same hand as key strokes. Consequently there are
12 discrete keys (three for each index, middle, ring finger and pinky). To produce up to 96
different symbols, the role between keys and operators is broken up: The four fingers can touch
7
each other in eight different ways, each basically representing a mode, or modifier key that
affects the mapping for the thumb touch. Tactile user feedback is implicit when touching another
finger with the thumb. A glove implementation was tested by the author.
2.2.5 FINGERING

FingeRing [4] uses accelerometers on each finger to detect surface impacts. In the wireless version depicted in figure 2.5, these rings communicate with a wrist-mounted data-processing unit. The interaction method is designed for one-handed use but could be extended to two hands. In the current version, the finger movements needed to produce one character are extensive: two chording patterns have to be typed within a time interval, each consisting of a combination of fingers hitting the surface. Because of this piano-style typing method, users with prior piano experience fare much better with this device; in fact, the full two-stroke chord mapping proves too difficult for novice users.
2.2.6 TOUCHSTREAM
The TouchStream keyboard stretches our definition of a VK, as it has keys printed on its surface. Yet the underlying technology permits software configuration of the sensed areas, equivalent to a multi-point touchpad. Besides conventional touch-typing, the TouchStream affords a number of chording patterns as alternatives to modifier keys. These patterns are pressed by one hand (anywhere on the pad) while the other touches the key that is to be modified. A picture of the TouchStream keyboard is shown in figure 2.6.

Inherent to this device are the same user feedback methods as for any of the devices employing tabletop units: finger surface impacts.
2.2.8 VKEY
Virtual Devices Inc. recently announced a combined projection and recognition VK [12]. Little is known about this device, but the company's press release suggests that visual sensors (cameras) detect the movement of all ten fingers, as shown in figure 2.8. Like the VKB device, the VKey consists of a tabletop unit, and feedback is the tactile sensation of hitting a surface.
Figure 2.8.VKey
2.2.10 SCURRY
Tiny gyroscopes on each finger are the sensing technology in Samsung's Scurry [10]. The prototype suggests that these finger rings communicate with a wrist-mounted unit where the data is processed. Finger accelerations and relative positions are detected, making it possible to distinguish multiple key targets per finger. A pictorial representation of Samsung's Scurry is shown in figure 2.10. A surface impact is required to register a keystroke, providing the primary sensory feedback to the user. Small LEDs on the rings potentially provide additional feedback.
2.2.11 SENSEBOARD
The Senseboard [11] consists of two rubber pads that slip onto the user's hands. Muscle movements in the palm are sensed (by unspecified, non-invasive means) and translated into key strokes with pattern-recognition methods. The only feedback, other than characters appearing on a screen, comes from the tactile sensation of hitting the typing surface with the finger, as in figure 2.11.
CHAPTER 3
SYSTEM REQUIREMENTS
3.1 HARDWARE REQUIREMENTS

Processor :
RAM : 256 MB
Memory Space :
Webcam

3.2 SOFTWARE REQUIREMENTS

Operating System :
Media Software :
CHAPTER 4
LITERATURE SURVEY
4.1 GESTURE
A gesture may be defined as a physical movement of the hands, arms, face or body with the intent to convey information or a command. Gesture recognition consists of tracking human movement and interpreting that movement as semantically meaningful commands.

There are three types of gesture: mimetic, deictic, and arbitrary. Mimetic gestures derive their motion from an object's main shape or a representative feature, and these gestures are intended to be transparent. Deictic gestures are used to point at important objects, and each gesture is transparent within its given context. These gestures can be specific, general or functional: specific gestures refer to one object, general gestures refer to a class of objects, and functional gestures represent intentions, such as pointing to a chair to ask permission to sit. Arbitrary gestures are those whose interpretation must be learned because of their opacity, although they are not common in a cultural setting.
4.2 GESTURE ARCHITECTURE
Like any other pattern-recognition system, gesture recognition consists of three components:

Gesture acquisition and preprocessing
Gesture feature extraction and representation
Gesture recognition and classification

Vision sensors are installed in mainly two configurations: mono vision and stereo vision. Mono-vision sensors incorporate one sensing camera, namely a CCD (charge-coupled device) or CMOS (complementary metal-oxide semiconductor) sensor, with multiple possible interfaces such as USB 2.0, Camera Link, Ethernet, etc., for video signal transmission. Similar acquisition sensors are used for stereo vision; however, the primary difference lies in the further interpretation of stereo imaging.
P(C|x) = P(x|C) p(C) / p(x)    (1)

where p(C) is the prior probability, which in particular can be read as the prior probability of a gesture. P(x|C) is the class likelihood, the conditional probability that an event belonging to class C has observation x. The likelihood for gesture recognition can be specified as the conditional probability that a gesture belonging to class C produces the feature vector x. p(x) is the evidence, in the sense that a particular feature vector for some gesture appears with this probability. Finally, the posterior probability P(C|x) is calculated by combining the prior, likelihood and evidence. For multiple classes C_1, ..., C_K, the posterior probability can be calculated as

P(C_i|x) = p(C_i) P(x|C_i) / Σ_k p(C_k) P(x|C_k)

Finally, for minimum error the Bayesian classifier selects the class with the highest posterior probability, i.e.

Select C_i if P(C_i|x) = max_k P(C_k|x)    (2)
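As a sketch of this decision rule in Java, the snippet below computes the posterior for each class from priors and likelihoods and selects the maximum, as in equation (2); the numeric values are illustrative assumptions, not data from the system:

```java
public class BayesClassifier {
    // Selects the class with the highest posterior P(C_i|x), as in equation (2).
    // priors[i] = p(C_i), likelihoods[i] = P(x|C_i) for the observed feature vector x.
    static int argmaxPosterior(double[] priors, double[] likelihoods) {
        double evidence = 0;                        // p(x) = sum_i p(C_i) P(x|C_i)
        for (int i = 0; i < priors.length; i++) evidence += priors[i] * likelihoods[i];
        int best = 0;
        double bestPosterior = -1;
        for (int i = 0; i < priors.length; i++) {
            double posterior = priors[i] * likelihoods[i] / evidence;  // P(C_i|x)
            if (posterior > bestPosterior) { bestPosterior = posterior; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] priors = {0.5, 0.3, 0.2};          // assumed example values
        double[] likelihoods = {0.1, 0.6, 0.3};     // P(x|C_i) for one observed gesture x
        System.out.println(argmaxPosterior(priors, likelihoods)); // prints 1
    }
}
```

Note that dividing by the evidence does not change which class wins; it is kept here only so the quantities match the posterior formula above.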
4.3 REAL TIME MONO VISION GESTURE BASED VIRTUAL KEYBOARD
The paper "Real Time Mono Vision Gesture Based Virtual Keyboard System" presents a novel mono-vision virtual keyboard design for consumers of mobile and portable computing devices such as PDAs, mobile phones, etc. Fuzzy approaches to gesture recognition, with one gesture for each symbol (rather than chording methods), are developed to realize a soft keyboard. The key pressed over a printed-sheet keyboard is recognized by analyzing the hand and finger gestures captured in the video sequence. A real-time system is developed by integrating a camera with a PDA in the application environment. Reliable results are obtained by the implementation of the proposed real-time mono-vision gesture-based virtual keyboard system.
In this project a camera is used for video capture, and a novel gesture-recognition-based virtual keyboard system is designed. A gesture may be defined as the physical movement of the hands, arm, face and body with the intent to convey information or a command. Gesture recognition consists of tracking human movement and interpreting that movement as semantically meaningful commands. Gesture recognition has the potential to be a natural and powerful tool for intuitive interaction between human and computer. It has been successfully applied in virtual reality, human-computer interaction, game control, robot interaction, remote control of home and office appliances, sign language, activity recognition, human behavior analysis, training systems, etc. A gesture recognition system is designed in four stages: gesture acquisition, feature extraction, classification, and learning. Gesture acquisition is accomplished by position sensors, motion/rate sensors, or digital imaging. Feature extraction and classification are real-time stages that analyze the acquired gesture, while the learning stage is an off-line activity that learns the relationship between a gesture and the corresponding information or command. The proposed system is a novel gesture-recognition-based virtual keyboard that replicates the transducer-based keyboard system. Gesture acquisition is accomplished by a mono-vision sensor.

Suppose the output of the keyboard system is defined as C = {c1, c2, ..., cL}, where c1 = 'A', c2 = 'B', etc., and L = 63 is the total number of keys on the keyboard.
Traditional keyboard: key-stroke hand movement → transducer action → character emitted
Virtual keyboard: key-stroke hand movement → gesture analysis → character emitted
Hand video is captured continuously; the video is then disassembled into frames. A template for each symbol ('A', 'B', etc.) is pre-stored in a database. The frames are compared with the pre-stored data and, when a match is found, the corresponding symbol is printed.
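The frame-versus-template comparison described above can be sketched with a sum-of-absolute-differences measure; the toy templates and symbols below are assumptions for illustration, not the stored data set:

```java
public class TemplateMatch {
    // Sum of absolute differences between a frame and a stored template (lower = closer).
    static long sad(int[] frame, int[] template) {
        long d = 0;
        for (int i = 0; i < frame.length; i++) d += Math.abs(frame[i] - template[i]);
        return d;
    }

    // Returns the symbol whose pre-stored template best matches the frame.
    static char match(int[] frame, int[][] templates, char[] symbols) {
        int best = 0;
        long bestD = Long.MAX_VALUE;
        for (int i = 0; i < templates.length; i++) {
            long d = sad(frame, templates[i]);
            if (d < bestD) { bestD = d; best = i; }
        }
        return symbols[best];
    }

    public static void main(String[] args) {
        int[][] templates = {{0, 255, 0}, {255, 0, 255}};  // toy templates for 'A' and 'B'
        char[] symbols = {'A', 'B'};
        System.out.println(match(new int[]{10, 240, 5}, templates, symbols)); // prints A
    }
}
```

A real implementation would compare whole binarized frames rather than 3-pixel arrays, but the selection logic is the same.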
CHAPTER 5
PROPOSED SYSTEM
5.1 PROPOSED WORK
In this project, a new perspective on the problem of the Virtual Keyboard is taken by using a simple mono-vision camera. As the project is fully software-centric rather than hardware-centric, the project cost is drastically low. The proposed model of the system is shown in figure 5.1.
Steps involved in the proposed system:

Human hand finger movement is analyzed from a video sequence.
Background objects (other than the finger) are eliminated using the Threshold algorithm.
The finger tip is analyzed and processed using the Edge Detection method.
Finally, the key is evaluated from the result of the Edge Detection method.
[Figure 5.1 Proposed model: the webcam captures key strokes in the form of video; the video is separated into frames; the odd frame (the one varying most) is identified from the frame sequence; the frame is converted into a binary image by thresholding to extract the finger; the edge (x, y) of the finger is found from the obtained binary image.]
5.2 PICTORIAL REPRESENTATION
A camera is mounted at any desired location; the only criterion is that the camera should focus on the entire keyboard layout. A practical representation of the Virtual Keyboard system setup is shown in figure 5.2.

[Figure 5.2 Practical Setup]
5.3 MODULES
5.3.4 FINGER TIP ANALYSIS USING EDGE DETECTION

Edge detection is a fundamental tool in image processing and computer vision, particularly in the areas of feature detection and feature extraction, which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. Using the vector-calculation method, if the estimated finger tip is located in a particular vector region then the corresponding key is estimated.
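As one common way to locate such sharp brightness changes, a Sobel-style gradient can be computed per pixel; the operator choice is an assumption here, since the report does not name the exact edge detector used:

```java
public class SobelEdge {
    // Gradient magnitude |G| = |Gx| + |Gy| at one interior pixel of a grayscale
    // image; a large value marks an edge where brightness changes sharply.
    static int gradient(int[][] img, int x, int y) {
        int gx = -img[y - 1][x - 1] + img[y - 1][x + 1]
               - 2 * img[y][x - 1] + 2 * img[y][x + 1]
               - img[y + 1][x - 1] + img[y + 1][x + 1];
        int gy = -img[y - 1][x - 1] - 2 * img[y - 1][x] - img[y - 1][x + 1]
               + img[y + 1][x - 1] + 2 * img[y + 1][x] + img[y + 1][x + 1];
        return Math.abs(gx) + Math.abs(gy);
    }

    public static void main(String[] args) {
        int[][] img = {              // vertical edge: dark left half, bright right half
            {0, 0, 255, 255},
            {0, 0, 255, 255},
            {0, 0, 255, 255},
        };
        System.out.println(gradient(img, 1, 1)); // prints 1020: strong edge response
    }
}
```

Thresholding this magnitude over the binarized finger image yields the edge pixels whose (x, y) coordinates feed the vector-region test described above.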
5.3.5 KEY EXTRACTION

When a key is evaluated, it should be made readily available to all text editors and other applications. This is done by overriding the evaluated key through hardware. The Robot package in Java does this easily: after importing the package, the Robot class is instantiated as an object, which exposes the two methods keyPress and keyRelease, each taking a single integer argument.
5.4 ADVANTAGES

Games can make maximum use of the keyboard by displaying only those keys that are used in the game.

A touch screen is similar to this implementation, but it requires additional effort and is not ergonomically comfortable; with the proposed system, the user doesn't have to raise his arm to the monitor every time to use it.
5.5 APPLICATIONS

As an alternative keypad for mobile phones or smart devices which have a frontal camera; this is possible if the software is converted to J2ME.

At places where a computer or device has multi-lingual users, such as internet cafes.

Highly comfortable usage in areas like ATMs, hospital bill checking, railway reservation centers, etc.
5.6
5.6.1 THRESHOLDING
Thresholding is the simplest method of image segmentation. From a grayscale image, thresholding can be used to create binary images. During the thresholding process, individual pixels in an image are marked as "object" pixels if their value is greater than some threshold value (assuming the object is brighter than the background) and as "background" pixels otherwise. This convention is known as threshold above. Variants include threshold below, which is the opposite of threshold above; threshold inside, where a pixel is labeled "object" if its value is between two thresholds; and threshold outside, which is the opposite of threshold inside. Typically, an object pixel is given a value of 1 while a background pixel is given a value of 0. Finally, a binary image is created by coloring each pixel white or black, depending on the pixel's label.
Choosing the threshold is the key difficulty: a single fixed value assumes that object and background intensities are cleanly separated, which will generally not be the case. A more sophisticated approach might be to create a histogram of the image pixel intensities and use the valley point as the threshold. The histogram approach assumes that there is some average value for the background and object pixels, but that the actual pixel values have some variation around these average values. However, this may be computationally expensive, and image histograms may not have clearly defined valley points, often making the selection of an accurate threshold difficult. One method that is relatively simple, does not require much specific knowledge of the image, and is robust against image noise, is the following iterative method:
1. An initial threshold (T) is chosen; this can be done randomly or according to any other method desired.
2. The image is segmented into object and background pixels as described above, creating two sets:
   G1 = {f(m,n) : f(m,n) > T} (object pixels)
   G2 = {f(m,n) : f(m,n) ≤ T} (background pixels)
   (note: f(m,n) is the value of the pixel located in the mth column, nth row)
3. The average of each set is computed:
   m1 = average value of G1
   m2 = average value of G2
4. A new threshold is created that is the average of m1 and m2:
   T = (m1 + m2)/2
5. Go back to step two, now using the new threshold computed in step four; keep repeating until the new threshold matches the one before it (i.e., until convergence is reached).
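The five steps above translate directly into Java; this is a minimal sketch operating on a flat array of grayscale values:

```java
public class IterativeThreshold {
    // Iterative threshold selection, as in steps 1-5 above: returns a global
    // threshold T such that T = (mean(G1) + mean(G2)) / 2 at convergence.
    static int computeThreshold(int[] gray) {
        long sum = 0;
        for (int v : gray) sum += v;
        int t = (int) (sum / gray.length);           // step 1: initial T = global mean
        while (true) {
            long s1 = 0, s2 = 0;
            int n1 = 0, n2 = 0;
            for (int v : gray) {                     // step 2: split into G1 / G2
                if (v > t) { s1 += v; n1++; } else { s2 += v; n2++; }
            }
            int m1 = n1 > 0 ? (int) (s1 / n1) : t;   // step 3: average of each set
            int m2 = n2 > 0 ? (int) (s2 / n2) : t;
            int next = (m1 + m2) / 2;                // step 4: new threshold
            if (next == t) return t;                 // step 5: stop at convergence
            t = next;
        }
    }

    public static void main(String[] args) {
        int[] gray = {10, 12, 8, 200, 205, 198};     // bright object on dark background
        System.out.println(computeThreshold(gray));  // prints 105
    }
}
```

The mean is a convenient choice of initial threshold; any starting value converges to the same fixed point for well-separated data like this.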
5.6.1.1 ADAPTIVE THRESHOLDING
In the ideal case, the result of applying an edge detector to an image may lead to a set of connected curves that indicate the boundaries of objects, the boundaries of surface markings, as well as curves that correspond to discontinuities in surface orientation. Thus, applying an edge detection algorithm to an image may significantly reduce the amount of data to be processed and may therefore filter out information that may be regarded as less relevant, while preserving the important structural properties of an image. If the edge detection step is successful, the subsequent task of interpreting the information contents in the original image may therefore be substantially simplified. However, it is not always possible to obtain such ideal edges from real-life images of moderate complexity. Edges extracted from non-trivial images are often hampered by fragmentation, meaning that the edge curves are not connected, missing edge segments, as well as false edges not corresponding to interesting phenomena in the image, thus complicating the subsequent task of interpreting the image data.
CHAPTER 6
PROPOSED SYSTEM DESIGN
6.1 UML DIAGRAMS
A set of fundamental design concepts has evolved over the past three decades. Each one provides the software designer with a foundation from which more sophisticated design methods can be applied. The choice of which models and diagrams to create has a great influence on how a problem is approached and how a corresponding solution is shaped. Design begins with a model of the system and continues by converting this model into an implementation of the new system. Every complex system is best approached through a small set of nearly independent views of the model. Every model can be expressed at different levels of abstraction, and the best models are connected to reality.

The following UML diagrams are used in the design process.
The use case model describes the uses of the system and shows the courses of events that can be performed. In other words, it shows a system in terms of its users and how it is being used from the user's point of view. Furthermore, it defines what happens in the system when a use case is performed. In essence, the use case model tries to systematically identify the uses of the system and therefore the system's responsibilities.

A use case model can also help discover classes and relationships among the subsystems of a system. A use case model can be developed by talking to typical users and discussing the various things they might want to do with the application being prepared. Each use, or scenario, represents something the user wants to do.

The use case model expresses what the business or application will do, not how; hence it is called the "what" model.

A use case is a sequence of transactions in a system whose task is to yield results of measurable value to an individual actor of the system. Since the use case model provides an external view of the system or application, it is directed primarily toward the users or "actors" of the system, not its implementers. An actor is a user playing a role with respect to the system. The use case diagram of the proposed Virtual Keyboard system is shown in figure 6.1.
28
V I DEO CAPTURE
FINGER OCCURANCE
USER
BACKGROUND ELIMINATION
SYSTEM
FINGER ESTIMATION
KEY EXTRACTION
29
box at the top of a dashed vertical line. Each message is represented by an arrow between the lifelines of two objects. The order in which these messages occur is shown top to bottom on the page. Thus a sequence diagram is very simple and has immediate visual appeal.

Lifelines start at the top of the sequence diagram and descend vertically to indicate the passage of time. Interactions between objects (messages and reply messages) are drawn as horizontal arrows connecting lifelines. In addition, boxes known as combined fragments are drawn around sets of arrows to mark alternative actions, loops and other control structures. The sequence diagram for the Image Acquisition module is shown in figure 6.2.
[Figure 6.2 Sequence Diagram for Image Acquisition: participants USER, CAMERA, SYSTEM and FRAME.]

[Figure 6.3 Sequence Diagram for Interrupt Detection in Frame Sequence: participants SYSTEM, FRAME and INTERRUPT DETECTION.]

[Figure 6.4 Sequence Diagram for Finger Extraction using Threshold Algorithm: the separated frame passes from FRAME SEPARATION to THRESHOLD.]

The sequence diagram for the Finger Tip Analysis using Edge Detection module is shown in figure 6.5.
[Figure 6.5 Sequence Diagram for Finger Tip Analysis using Edge Detection: participants SYSTEM, EDGE DETECTION and VECTOR VALUE; the binary image is passed in and the edge is detected.]

The sequence diagram for the Key Extraction module is shown in figure 6.6.

[Figure 6.6 Sequence Diagram for Key Extraction: participants SYSTEM, KEY EXTRACTION, ROBOT CLASS and SCREEN.]

[Figure 6.7 Class Diagram for Virtual Keyboard: operations include video-to-frames conversion (frame extract(), save(), buffered image, frame saving), thresholding (get RGB(), avg RGB(), binary image) and finger-tip estimation (edge detection, get estimate finger tip(), vector matching).]
The purpose of an activity diagram is to provide a view of flows and of what is going on inside a use case or among several classes. However, an activity diagram can also be used to represent a class's method implementation.

An activity diagram is used mostly to show the internal state of an object, but external events may appear in it. An external event appears when the object is in a wait state, a state during which there is no internal activity and the object is waiting for some external event to occur as the result of an activity by another object. The two states are thus the wait state and the activity state. The activity diagram of the proposed system is shown in figure 6.8.
[Figure 6.8 Activity Diagram for Virtual Keyboard: video capture → divide into frames → finger occurrence → background elimination → finger estimation → finger tip analysis → key extracted.]
CHAPTER 7
PROJECT DESCRIPTION
7.1 IMAGE ACQUISITION
Image acquisition is the first step of this project; the acquisition process is done using a simple mono-vision camera. A simple mono-vision camera is not a special one, just an ordinary camera such as an external camera or an integrated webcam. The camera should provide at least 320x240 pixels per image, and its resolution should be 1.3 megapixels or above; the more pixels, the greater the accuracy and the lower the error rate.

The camera's default device driver is required for hardware overriding. When multiple cameras are connected, a window is opened to select the required camera; vfw://0 is the keyword used to address the connected cameras.

Acquisition is in the form of video, and a video is a sequence of multiple frames. When the video is split into frames, each frame serves as an image; 10-15 frames are created every second. During the process of image acquisition, all the obtained images can be stored in any desired location if necessary. The block diagram of Image Acquisition is shown in figure 7.1 and snapshots in figure 7.2.
[Figure 7.1 Block Diagram for Image Acquisition: surface → camera → video capture → converting video to frames.]

[Figure 7.2 Snapshot of Image Acquisition: frames combine together to form a video; the video is split back into frames.]
7.2 INTERRUPT DETECTION IN FRAME SEQUENCE

Each incoming frame is compared with the previous images, and the image which has the maximum variation is grabbed. The block diagram of Interrupt Detection in Frame Sequence is shown in figure 7.3 and snapshots in figure 7.4.
[Figure 7.3 Block Diagram for Interrupt Detection in Frame Sequence: check for maximum variation in 5 frames; if yes, grab the frame and proceed to the next level; if no, continue with the next frames.]

[Figure 7.4 Snapshot of Interrupt Detection: the interrupt-detected frame within the sequence of frames.]
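A minimal Java sketch of this interrupt-detection idea, measuring frame-to-frame variation as a sum of absolute pixel differences over a window of frames; the tiny 4-pixel frames are toy data for illustration:

```java
public class InterruptDetect {
    // Sum of absolute pixel differences between two grayscale frames.
    static long variation(int[] prev, int[] cur) {
        long sum = 0;
        for (int i = 0; i < cur.length; i++) sum += Math.abs(cur[i] - prev[i]);
        return sum;
    }

    // Index of the frame, within a window, that differs most from its predecessor.
    static int maxVariationFrame(int[][] frames) {
        int best = 1;
        long bestVar = -1;
        for (int i = 1; i < frames.length; i++) {
            long v = variation(frames[i - 1], frames[i]);
            if (v > bestVar) { bestVar = v; best = i; }
        }
        return best;
    }

    public static void main(String[] args) {
        int[][] frames = {          // five 4-pixel frames; frame 3 is the key-press interrupt
            {10, 10, 10, 10},
            {10, 11, 10, 10},
            {10, 10, 11, 10},
            {200, 200, 10, 10},
            {200, 200, 11, 10},
        };
        System.out.println(maxVariationFrame(frames)); // prints 3
    }
}
```

The grabbed frame then proceeds to the thresholding stage described in section 7.3.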
7.3 FINGER EXTRACTION USING THRESHOLD ALGORITHM

The RGB color model is removed from each image and only the black and white color model remains. These two colors are called binary colors, and finally a binary image is created. Converting a colored image into a binary image helps in calculating and analyzing the foreground objects; ultimately, background object details are quite often reduced to a great extent.

Combining the red, green and blue channel tests for each pixel produces the binary image. The block diagram of Finger Extraction using the Threshold Algorithm is shown in figure 7.5 and snapshots in figure 7.6.
7.3.1 IMAGE PIXEL

[Figure 7.5 Block Diagram for Finger Extraction using Threshold Algorithm: each image pixel's red, green and blue values are tested in turn; pixels passing all three tests are marked white and the rest black, turning the RGB image into a binary image.]
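In Java, the per-channel tests of figure 7.5 correspond to unpacking a packed RGB pixel with bit shifts (>> 16 for red, >> 8 for green, low byte for blue), which is likely what the report's channel notation denotes; the 128 cut-off below is an assumed threshold value, not one stated in the report:

```java
import java.awt.image.BufferedImage;

public class Binarize {
    // Converts an RGB frame to a binary image: pixels brighter than the
    // cut-off in every channel become white (object), the rest black.
    static BufferedImage toBinary(BufferedImage src, int cutoff) {
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(),
                BufferedImage.TYPE_BYTE_BINARY);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int rgb = src.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF;   // red channel (bits 16-23)
                int g = (rgb >> 8) & 0xFF;    // green channel (bits 8-15)
                int b = rgb & 0xFF;           // blue channel (bits 0-7)
                boolean white = r > cutoff && g > cutoff && b > cutoff;
                out.setRGB(x, y, white ? 0xFFFFFF : 0x000000);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 1, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0xFFFFFF);   // bright pixel -> should become white
        img.setRGB(1, 0, 0x101010);   // dark pixel   -> should become black
        BufferedImage bin = toBinary(img, 128);
        System.out.println((bin.getRGB(0, 0) & 0xFFFFFF) + " " + (bin.getRGB(1, 0) & 0xFFFFFF));
    }
}
```

In practice, a threshold computed by the iterative method of section 5.6.1 would replace the fixed cut-off.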
7.4 FINGER TIP ANALYSIS BY EDGE DETECTION METHOD

[Figure 7.7 Snapshot of Finger Tip Analysis: binary image.]
7.5 KEY EXTRACTION
Once the Virtual Keyboard model is implemented, the evaluated key should act just like a traditional keyboard's: when a key is evaluated, it should be made readily available to all text editors and other applications. This is done by overriding the evaluated key through hardware. The Robot package in Java does this easily: after importing the package, the Robot class is instantiated as an object, which exposes the two methods keyPress and keyRelease, each taking a single integer argument. In the case of this Virtual Keyboard, the keyPress function is sufficient for hardware overriding. Figure 7.8 represents the method of Key Extraction.
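A minimal sketch of this step with java.awt.Robot; the charToKeyCode helper is introduced here for illustration, and note that keyRelease is still called so the injected key is not left held down by the operating system, even though only the press event drives recognition:

```java
import java.awt.AWTException;
import java.awt.GraphicsEnvironment;
import java.awt.Robot;
import java.awt.event.KeyEvent;

public class TypeKey {
    // Maps a recognised character to the virtual key code that Robot expects.
    static int charToKeyCode(char c) {
        return KeyEvent.getExtendedKeyCodeForChar(c);
    }

    // Injects the evaluated key into whatever application currently has focus.
    static void press(char c) throws AWTException {
        Robot robot = new Robot();
        int code = charToKeyCode(c);
        robot.keyPress(code);     // the press event is what the virtual keyboard needs
        robot.keyRelease(code);   // release so the OS does not treat the key as held
    }

    public static void main(String[] args) throws AWTException {
        System.out.println(charToKeyCode('a') == KeyEvent.VK_A); // prints true
        if (!GraphicsEnvironment.isHeadless()) {
            press('a');           // types 'a' into the focused window
        }
    }
}
```

Robot requires a graphical environment, hence the headless guard; on a desktop, calling press('a') while a text editor has focus makes the character appear there.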
CHAPTER 8
SYSTEM TESTING
8.1 SYSTEM TESTING
System testing is the stage of implementation that aims at ensuring the system works accurately and efficiently before live operation commences. The logical and physical designs should be thoroughly and continually examined on paper to ensure that they will work when implemented. The implementation should thus confirm that the whole system works. The testing phase includes entering sample data to verify whether the system works according to the stated requirements. This phase is important because it actually deals with real data.

Software testing is an important element of software quality assurance and the ultimate review of specification, design and coding. In testing, the engineer creates a series of test cases that are intended to demolish the software that has been built.
8.2 OBJECTIVE OF TESTING

The rules that serve as testing objectives are:

Testing is a process of executing a program with the intent of finding an error.
A good test case is one that has a high probability of finding an as-yet-undiscovered error.
A successful test is one that uncovers an as-yet-undiscovered error.

If testing is conducted successfully according to the above objectives, it will uncover the errors in the software.
8.3 UNIT TESTING

Unit testing is a procedure used to validate individual units of code. A unit is the smallest testable part of an application. The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. A unit test provides a strict, written contract.
8.4 INTEGRATION TESTING

Integration testing (sometimes called Integration and Testing, abbreviated I&T) is the phase of software testing in which individual software modules are combined and tested as a group. Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.
8.5 VALIDATION TESTING

Validation testing provides the final assurance that the software meets all functional behaviour and performance requirements. The software, once validated, must be combined with the other system elements. After each validation test case has been conducted, one of two possible conditions exists.
8.6 INTRODUCTION TO WINRUNNER

WinRunner facilitates easy test creation by recording how you work on applications as you point and click GUI objects. It generates a test script in the C-like Test Script Language (TSL), which you can further enhance with manual programming. WinRunner includes a function generator which helps you quickly and easily add functions to your recorded test.
CHAPTER 9
CONCLUSION AND FUTURE ENHANCEMENT
9.1 CONCLUSION
The results showed a very reliable and practical system. The proposed system is low-cost due to its software-centric rather than hardware-centric mechanism. The performance of the system has been tested on a personal computer. The data set involved in the development of the system can easily be altered as the user requests, and a standard data-set style is implemented. The response time for Key Extraction is quite low compared to systems like the Finger-Joint Gesture Wearable Keypad, Thumbcode, etc.
9.2 FUTURE ENHANCEMENT
The failure rate of the system depends entirely on the light intensity. The system also works in dim light, but the failure rate is high due to shadowing effects. Future work aims at high efficiency of the system independent of light intensity.
[Appendix screenshots: INPUT and OUTPUT]