
Research Article

A Comparative Study on Paper Currency

Recognition and Identification Using Image
Processing Techniques
Shivani Joshi1, Kavleen Kaur Banga2, Sameer Prabhu3
Almost all countries across the world today have their own individual currency systems with different denominations. Various features that accompany these currencies, such as images, logos, watermarks, serial numbers, textures and font styles, tell them apart from each other. At the same time, these features can be exploited in counterfeit currency production, which, if left unchecked, can become a threat to any nation's economy. In this paper we review the currencies of various countries, their markers and features, and the algorithms and techniques used to recognize and authenticate them.

Keywords: Image correlation, Gaussian radial basis function, Image segmentation, Back propagation neural network, Gaussian blurring, Image colour planes, Region of Interest (ROI)

Various currency recognition and authentication systems are in place today, but many are not conveniently available and are too costly for small and medium-sized businesses to install at their work locations. Such systems are also found mostly at banks, not where the actual money flow takes place. Therefore, in this paper we compare algorithms from a number of portable and accessible currency authentication and recognition systems for various countries, all based on image processing techniques.

We first discuss the image acquisition methods and the datasets used for authentication or recognition in Section 2, followed by the variety of pre-processing techniques applied to the acquired images in Section 3. Section 4 of this paper deals with the various features that are identified and then extracted to achieve the desired results. Section 5 compares the classification algorithms used for recognition or authentication, their accuracies and a few of their drawbacks, leading finally to our conclusion in Section 6, which compares all the papers side by side in a comparison table.

Dataset Collection
Image acquisition is the first step in detecting paper currency. There are two methods of collecting a dataset, namely scanner-based and camera-based. Scanner-based systems are more complicated and bulky, so camera-based systems are preferred.

Paper [1] focuses on identification and recognition of Indian paper currency of all denominations (Rs.10, Rs.20, Rs.50, Rs.100, Rs.500 and Rs.1000). However, the old Rs.500 and Rs.1000 denominations were withdrawn with effect from 8 November 2016 and have been replaced by new Rs.500 and Rs.2000 notes. The dataset of [1] comprises 10 samples of each denomination, taken in both obverse and reverse orientations. To prove the system's accuracy, they included 10 counterfeit notes along with 60 real notes.

1,2Student, 3Asst. Professor, Electronics and Telecommunications Dept., NMIMS's MPSTME, Mumbai, India.


How to cite this article: Joshi S, Banga KK, Prabhu S. A Comparative Study on Paper Currency Recognition and Identification Using Image Processing Techniques. J Adv Res Image Proc Appl 2017; 4(3&4): 7-11.

© ADR Journals 2017. All Rights Reserved.


Whereas [2] deals with Saudi Arabian currency (Saudi riyals) of denominations 1, 5, 10, 20, 50, 100, 200 and 500 riyals. It only recognizes the Saudi Arabian currency denomination and is not concerned with its validity. There were a total of 110 samples in the database, of which 10 were taken with a tilt angle of less than 15°, 50 were noisy and the remaining 50 were normal.

In [3] Egyptian currency has been studied, with denominations of 5, 10, 20, 50, 100 and 200. Its dataset consists of 120 images, with 20 samples per denomination, captured using ordinary mobile phone cameras. After denomination recognition, the system speaks out the face value of the currency in Arabic.

In [4] banknote authentication and recognition is studied, with a sample dataset of 167 banknote images acquired as 10 samples from each side for all the denominations of USA and New Zealand currency. The denominations used were 100 and 10 dollar bills.

In [5] 256-colour banknote images are obtained, processed at various angles and for both the front and back sides. It deals with European currency notes of various denominations, namely 5, 10, 20, 50, 100, 200 and 500 euro.

In [6] image acquisition is in real time, via a camera or through a scanner, to check the authenticity of real and fake currency notes. The currency under consideration is the Indian Rs.2000 note.

Image Pre-Processing

Image pre-processing is one of the vital steps in any image-based system, because it enhances the input image in a way that favours feature extraction. Applying image enhancement techniques such as conversion to grayscale, conversion to a binary or HSV image, or histogram modelling makes the image features more prominent and easier to extract. The pre-processing techniques used in the reviewed papers are listed below.

RGB to Grey Scale

A grayscale image is an image in which every pixel carries only intensity information. It differs from a binary image in that a binary image has only two values, zero or one, whereas a grayscale image has varying shades of grey. Converting the colour image of a paper currency to grayscale is important because it reduces computation: in an RGB image all calculations have to be done on all three planes (red, green and blue), while a grayscale image has only one plane, so the conversion considerably reduces the computational time. In [1] and [2] this is one of the pre-processing techniques used. Equation (i) represents the conversion of an RGB image to a grayscale image [9].

Y = 0.21R + 0.72G + 0.07B ----------(i)

RGB to HSV

An RGB image consists of three planes: red, green and blue. The pixel values are simply the intensities of each primary light colour in the image. The same colour can also be represented as hue, saturation and value (HSV); when each RGB pixel value is converted to its respective HSV value, the image is said to be an HSV image. Figure (1) shows the RGB colour cube and the HSV cylinder. [1] applies this technique for hue calculation, which leads to a major feature distinction between a real and a counterfeit note.

Fig 1. RGB colour space and HSV colour space [8]

For conversion, the pixel values are first divided by 255 so that the 0-255 range is converted to 0-1; all the pixel values will then range from 0 to 1. Equations (ii) and (iii) represent this normalization and define Cmax, Cmin and Δ, whereas equations (iv), (v) and (vi) show the RGB to HSV conversion [7].

R' = R/255; G' = G/255; B' = B/255 ----------(ii)

Cmax = max(R', G', B')
Cmin = min(R', G', B')
Δ = Cmax - Cmin ----------(iii)

H = 60° × ((G' - B')/Δ mod 6), if Cmax = R'
H = 60° × ((B' - R')/Δ + 2), if Cmax = G'
H = 60° × ((R' - G')/Δ + 4), if Cmax = B' ----------(iv)

S = 0 if Cmax = 0, otherwise S = Δ/Cmax ----------(v)

V = Cmax ----------(vi)
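To make the conversions concrete, the RGB-to-grayscale and RGB-to-HSV steps can be sketched in Python. The per-pixel helper functions below are our own illustration (function names and structure are assumed, not taken from any of the reviewed papers), using the standard hue formula for the final step.

```python
def rgb_to_grey(pixel):
    """Equation (i): weighted sum of the R, G, B channels."""
    r, g, b = pixel
    return 0.21 * r + 0.72 * g + 0.07 * b

def rgb_to_hsv(pixel):
    """Equations (ii)-(vi): normalize to [0, 1], then derive H, S, V."""
    r, g, b = (c / 255.0 for c in pixel)       # (ii) normalization
    cmax, cmin = max(r, g, b), min(r, g, b)
    delta = cmax - cmin                        # (iii)
    if delta == 0:                             # achromatic pixel: hue undefined, use 0
        h = 0.0
    elif cmax == r:                            # (iv) hue depends on the dominant channel
        h = 60 * (((g - b) / delta) % 6)
    elif cmax == g:
        h = 60 * (((b - r) / delta) + 2)
    else:
        h = 60 * (((r - g) / delta) + 4)
    s = 0.0 if cmax == 0 else delta / cmax     # (v) saturation
    v = cmax                                   # (vi) value
    return h, s, v

print(rgb_to_grey((255, 0, 0)))   # ≈ 53.55
print(rgb_to_hsv((0, 0, 255)))    # pure blue: (240.0, 1.0, 1.0)
```

Working per pixel keeps the mapping to the equations obvious; a real system would vectorize the same arithmetic over the whole image array.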


In [6] the image acquired is converted to HSV and is further decomposed to find the decomposition difference between the real and fake currency notes in all three planes.

Gaussian Blurring

When an image is convolved with a Gaussian function, the operation is known as Gaussian smoothing or Gaussian blurring. Paper [3] effectively uses this technique to filter noise from the image. The Gaussian function, however, does not filter salt and pepper noise adequately. Equation (vii) below is the general Gaussian function [3].

G(x,y) = (1/(2πσ²)) e^(-(x²+y²)/(2σ²)) ----------(vii)

Grey to Binary

A binary image has only two pixel values: zero and one. Converting from grayscale to binary is an application of thresholding: by specifying a particular threshold, all values above it are replaced by 1 and all values below it are replaced by zero. In [2] and [3] this is used because it makes edge detection and ROI extraction easier. Equation (viii) shows the condition for converting a grey image to a binary image [9].

B(x,y) = 1 if G(x,y) > T; B(x,y) = 0 otherwise ----------(viii)

Histogram Equalization

The histogram of an image is a plot of the pixel intensities of the image against the number of pixels having each particular intensity. By looking at the histogram of an image, a general overview of it (low contrast, high contrast, etc.) can be obtained. To improve an image, a technique called histogram equalization is applied, spreading the pixels almost equally among all the intensity levels. [3] implements histogram equalization to enhance the contrast of the image, hence making it clearer. In [4] the resized image is further processed using histogram equalization, adjusting the image intensities to enhance contrast.

Otsu thresholding

This algorithm calculates the optimum threshold value which separates the foreground and the background pixels, treating the two groups as separate classes. It searches for the threshold value which minimizes the intra-class variance (equivalently, maximizes the between-class variance). Based on this threshold it then performs thresholding: pixels above the optimum threshold are set to maximum intensity (white) and pixels below it are set to least intensity (black). Algorithm: 1) Plot the histogram and calculate the probability of all the pixel values. 2) Set the initial weights of both classes to zero. 3) Step through values of the threshold t from one to the maximum intensity. 4) Update the weights and variances of the two classes. 5) The threshold giving the maximum between-class variance is the desired one. This technique is used in [3] to separate the currency note from the background.

Edge detection

Edge detection techniques mainly aim at identifying points where the brightness of the image changes sharply or has discontinuities. These points are integrated into a set of curved line segments termed edges, which enhances the object against the background. The work in [6] applies edge detection to find the various objects present against the background of the currency note, for easier recognition and extraction of the currency's features.

Feature Extraction

Every set of currencies has various features which distinguish them. Some features are very apparent, such as the colour, emblem, bank name, numeral printing etc., by which anyone can recognize the denomination and state the origin of the currency. However, there are also hidden security features which are obscure and not very apparent. Such features therefore need to be extracted and tested to prove the authenticity of the paper currency.

First, the areas of a single paper note which contain a specific feature are separated from the other parts of the note. Many such areas may be extracted, because a single paper currency has many security features to look for. Such extraction is called region of interest (ROI) extraction. It is the most common technique and is used in [1] and [3].
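The five Otsu steps listed above can be sketched as a minimal NumPy implementation. This is a generic illustration assuming 8-bit grayscale input, not code from [3]; it exhaustively scores every candidate threshold by between-class variance.

```python
import numpy as np

def otsu_threshold(grey):
    """Exhaustive Otsu's method on an 8-bit grayscale image.
    Picks the threshold t that maximizes the between-class
    variance (equivalently, minimizes the intra-class variance)."""
    # Step 1: histogram and probability of each pixel value.
    hist = np.bincount(grey.ravel(), minlength=256)
    prob = hist / hist.sum()
    levels = np.arange(256)
    # Step 2: start with no best candidate recorded.
    best_t, best_var = 0, -1.0
    # Step 3: step through every candidate threshold t.
    for t in range(1, 256):
        # Step 4: weights and means of the two classes.
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class empty: skip this t
        mu0 = (levels[:t] * prob[:t]).sum() / w0
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
        # Step 5: keep the t with maximum between-class variance.
        if between > best_var:
            best_t, best_var = t, between
    return best_t

# Two well-separated intensity clusters: t falls between them.
img = np.array([[10, 12, 11], [200, 205, 198]], dtype=np.uint8)
t = otsu_threshold(img)
binary = (img > t).astype(np.uint8)  # equation (viii)-style binarization
```

The final line is the grey-to-binary step from equation (viii), with Otsu supplying the threshold T automatically instead of it being chosen by hand.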


In [3] there is only one ROI, which is extracted, and the cross-correlation between the database and the input image is calculated; this cross-correlation function forms its feature. In [1], by contrast, there are many ROIs, one of them being the watermarked area. Many such areas are extracted and various mathematical functions are calculated for them. For example, for the watermark area the mean pixel intensity and standard deviation are calculated, while for micro-lettering validation advanced Optical Character Recognition (OCR) techniques are applied to the enhanced ROI.

In [2] there is no ROI extraction; features are instead formed from the image as a whole. It calculates the image height and width in pixels, the image area without a mask and with mask 1 (Prewitt mask) and mask 2 (Canny mask). It also calculates the Euler number of the image and the correlation between the database image and the input image. All of these together form the feature of a single currency denomination.

In [4] two subsets of features, namely colour and texture features, are extracted from the grey levels calculated to form a 640x312 pixel image. To obtain the shape descriptors, a histogram of the grey levels is calculated for frequencies ranging from dark to light. The six shape descriptor metrics obtained are kurtosis, central moment, mean, variance, standard deviation and skew. The texture descriptors are extracted along with four further features from the gray-level co-occurrence matrix (GLCM): correlation, contrast, energy and homogeneity. These colour and texture features, linked together, form the feature vector that is the input to the FNN classifier.

In [5] the feature extracted was an 8-pixel Same-Coloured Area, the darkest area on the special block of the banknote. Such areas have to be as dark as possible, because black colour features are robust to noise. The continuous same-coloured area was found using a search algorithm that locates the next 7 pixels identical to the base pixel on the special block; the area is 1 pixel long and 8 pixels wide. Distinctive data with a starting point is then obtained, where the starting point is the distinctive point, thus reducing the amount of distinctive data. The origin of the distinctive point is the upper left corner point of the banknote image.

[6] follows edge detection with image segmentation. The HSV-decomposed image is segmented in such a way that the ROI that remains is only the security thread, which is required for the further feature extraction that authenticates the currency note in question.

Classification

For any system, an input image of any currency needs to be matched with the set database. Based on the level of similarity between the input image and the database images, the input image is then identified as a particular currency of a specified denomination. This process is called classification.

There are various methods of classifying the input currency image. Simple template matching and correlation matching techniques have been used in [3] and [1] respectively. In [1] various features are considered: for the input image all these features are extracted and matched against the database image features, and only if all the features match is the currency declared authentic; otherwise it is declared fake. Similarly, in [3] various parameters have been calculated for all the denomination types. The same parameters are calculated for the input image, and this set of input image parameters is matched against all the parameter sets of the database images. The class corresponding to the highest degree of matching is declared as the output.

In [2] the classification technique is more advanced than in [1] and [3]. It uses a radial basis function neural network for classification. Such systems learn progressively to classify things through prior training. In the training stage the classes are manually defined: the database is divided into various classes, each corresponding to one particular denomination of currency. For example, denominations of Rs.10, Rs.50 and Rs.100 will form three different classes, stated as class one through class three. When an image is given as input, its features are extracted and weighed against the defined classes. If the input image is of a Rs.10 note, its features will resemble class one; the system therefore reports that the input image belongs to class one, and since class one corresponds to the Rs.10 note, the note placed is a Rs.10 note. A radial basis neural network is a neural network in which the weights are calculated using a radial basis function during the training procedure.

In [4] the sample banknote is classified according to its respective denomination, for either the back or the front side, using algorithms such as AdaBoost, a pattern-recognition-trained Feedforward Neural Network (PRFNN), a Cascade-forward Neural Network (CNN) and an FNN. Of these, the FNN yielded the highest accuracy and was trained using Bayesian regularization backpropagation.

In [5] the input vectors of the neural network were the starting points of the detected Same-Coloured Areas. With this target vector at hand and input vectors consisting of distinctive points, the back-propagation neural network was trained.

In [6] the image is concluded to be either fake or authentic on the basis of the black pixels extracted from the HSV-decomposed security thread.
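The correlation-based matching used in [1] and [3] can be illustrated with a short sketch. The normalized cross-correlation score is a standard formulation, and the function names and toy templates below are our own assumed example, not data or code from those papers.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size
    grayscale patches; 1.0 means a perfect linear match."""
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

def classify(roi, templates):
    """Return the denomination whose reference ROI correlates
    best with the input ROI (highest-degree-of-match rule)."""
    scores = {label: ncc(roi, tmpl) for label, tmpl in templates.items()}
    return max(scores, key=scores.get)

# Toy 2x2 reference ROIs for two hypothetical denominations.
templates = {
    "100": np.array([[0, 255], [255, 0]], dtype=np.uint8),
    "50": np.array([[255, 255], [0, 0]], dtype=np.uint8),
}
noisy_input = np.array([[10, 240], [250, 5]], dtype=np.uint8)
print(classify(noisy_input, templates))  # prints 100
```

Because the score is normalized, the noisy input still matches its template strongly; this robustness to brightness shifts is why correlation matching works on casually photographed notes.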


Table 1. Comparison of the six papers

Ref. [1]
- Dataset: 10 samples of each denomination, with the obverse and reverse faces; 60 real notes and 10 counterfeit notes.
- Features extracted: Various features using ROI, e.g. watermark, micro-lettering, latent image etc.
- Classification technique: All the features are checked and compared against the reference values.
- Result: Recognition: 100%; identification of counterfeit: 90%.

Ref. [2]
- Dataset: 110 samples; 10 images with tilt angle <15°; the rest 50-50 noisy and normal.
- Features extracted: Image dimensions, image areas, Euler number, and image correlation.
- Classification technique: Gaussian radial basis function with 25 neurons in the hidden layer.
- Result: Normal non-tilted images: 95.37%; noisy tilted images: 91.65%; tilted images: 87.5%.

Ref. [3]
- Dataset: 120 images, 20 images per denomination.
- Features extracted: ROI extraction and cross-correlation.
- Classification technique: Template matching of the input image and database images.
- Result: Recognition: 89%.

Ref. [4]
- Dataset: 167 sample banknote images; 10 samples from each side of each denomination were used.
- Features extracted: Colour features, central moment, mean, variance, standard deviation, skew and texture.
- Classification technique: AdaBoost, PRFNN, CNN and FNN.
- Result: FNN shows 98.6% accuracy; AdaBoost achieves 53%, PRFNN 95.7% and CNN 94.3%.

Ref. [5]
- Dataset: 256-coloured banknote images.
- Features extracted: Distinctive point extraction and Same-Coloured Area as distinctive data.
- Classification technique: Back-propagation neural network.
- Result: 5 euro: 100%; 10 euro: 100%; 20 euro: 100%; 50 euro: 100%; 100 euro: 95%; 200 euro: 95%; 500 euro: 100%.

Ref. [6]
- Dataset: Real-time image acquisition.
- Features extracted: Image segmentation of the HSV-decomposed image to obtain the security thread strip as ROI.
- Classification technique: Black pixel discontinuities from the HSV-decomposed security thread.
- Result: 100% accuracy.

The currency is termed authentic when the strip of black pixels is continuous, without any prominent discontinuities; when discontinuities occur in the black pixel strip, it is termed fake.

Conclusion

There are various ways in which a currency can be recognized or identified. Each section above describes how each objective can be achieved using many different algorithms, and the accuracy achieved differs from method to method. Table 1 provides a crisp and quick comparison of all six papers: it states each paper's dataset, the features extracted, the classifiers used and the accuracy achieved.

References

1. Sahana Murthy et al., "Design and Implementation of Paper Currency Recognition with Counterfeit Detection", IEEE, 2016.
2. Muhammad Sarfraz, "An intelligent paper currency recognition system", Elsevier, 2015.
3. Noura A. Semary, "Currency Recognition System for Visually Impaired: Egyptian Banknote as a Study Case", IEEE, 2016.
4. W. Q. Yan, J. Chambers and A. Garhwal, "An empirical approach for currency identification", Springer Science+Business Media New York, 2014.
5. Jae-Kang Lee, "New Recognition Algorithm for Various Kinds of Euro Banknotes", IEEE, 2003.
6. Snehlata, "Identification of Fake Currency: A Case Study of Indian Scenario", International Journal of Advanced Research in Computer Science, 2017.
7. Martin Loesdau et al., "Hue and Saturation in the RGB Color Space", 6th International Conference, ICISP 2014, Lecture Notes in Computer Science, Vol. 8509, Cherbourg, France, pp. 203-212.
8. Jack Wu, "Image Processing in iOS Part 1", raywenderlich.com, July 2014.
9. Rafael C. Gonzalez, Richard E. Woods, "Digital Image Processing", Third Edition.