
OpenCV C# Wrapper Based Video Enhancement Using Different Optical Flow Methods in the Super-Resolution

Nagy A. and Vámossy Z.
Budapest Tech/John von Neumann Faculty of Informatics, Budapest, Hungary
vamossy.zoltan@nik.bmf.hu

Abstract: This article presents a simple method for implementing a super-resolution based video enhancement technique in .NET using the functions of the OpenCV library. First, we outline the goal of this project; after that, a short review of the steps of the super-resolution technique is given. As part of the discussion of the concrete implementation, the general design aspects are detailed first. Then, the OpenCV C# wrapper and the different optical flow algorithms are analyzed. Finally, the achieved results are presented, followed by a short general conclusion.

I. INTRODUCTION

Our purpose is to develop an application that can process an input video file and create from it a new stream with a higher resolution than the original one. The goal is that more detail should be visible in the processed video than in the original. A program like this would be useful for many tasks. For example, it could improve the poor image quality and resolution of videos created by security cameras or mobile phones. In general, the program is capable of quality enhancement when the input has low resolution and/or noticeable noise on the frames. These capabilities come from the features of the super-resolution technique presented in Section II. To get an impression of the quality enhancement that can be achieved using our system, see Fig. 1. During the development of the system, we relied on the functions of the quite effective OpenCV image processing library [1]. Hereby, the comparability of the implemented methods is ensured, as the next paragraph outlines.

The reconstruction-based super-resolution technique consists of several steps. One of these is the optical flow calculation. The purpose of this step is to find out how the pixels of the frames moved compared to the previous frame of the video. In the OpenCV library, several optical flow methods can be found. Our program makes it possible to choose any of them for the video processing. This way it is possible to compare which optical flow algorithm produces the best result in a given situation.

II. SUPER-RESOLUTION

Two main approaches exist for super-resolution. One is the reconstruction-based approach [2] and the second is the so-called learning-algorithm-based super-resolution [3]. These approaches are substantially different. In the case of the learning-algorithm-based technique, the problem is generally solved using neural networks. It only requires an input image to work, so there is no need for video, but a database of sample images is necessary. From the database, the algorithm can learn how to enlarge certain parts of the input image. In a nutshell, this method works as follows: a database of images is used to find out how a certain part looks in low and in high resolution. After the learning step, the algorithm is executed on the image that needs to be enhanced, and every portion of the input image is replaced with its high-resolution version. Hereinafter, the reconstruction-based approach is reviewed and the term super-resolution is applied only to it.

The point of reconstruction-based super-resolution is that an input video stream is given and the algorithm processes every frame of it sequentially. For every frame, an optical flow vector field has to be calculated. This vector field describes the displacement of each pixel of the frames. Since each frame is given and information about the displacement of pixels is known, it is possible to use this information to approximate a frame from its neighbours. The pixels of any frame i can be transformed using their offset vectors to approximate an arbitrary frame j. For example, if the pixels of frame n are translated using the optical flow vectors, a new image can be obtained that approximates frame n+1. Using these new images, extra information can be gained that makes it possible to enhance the resolution and the richness of detail of the frames. Of course, all this holds only in theory, because it is possible, for example, that a car appears on frame 20 but not on frame 5000 because of the movement of the camera. This is one of the reasons why this super-resolution method can work only within certain limits. However, we assume that consecutive frames are almost the same, namely that the movement speed of the camera is below a particular limit.

Figure 1. Enlarged frame of the original video (as later, always on the left) using bicubic interpolation. The result image (as later, always on the right) generated by our program. The quality improves mainly at the engine hood and at the roof.

Optical flow calculation has several other pitfalls, such as the appearance of transparent and reflective surfaces,

1-4244-2407-8/08/$20.00 ©2008 IEEE

Authorized licensed use limited to: National Taipei University of Technology. Downloaded on October 13, 2009 at 03:31 from IEEE Xplore. Restrictions apply.
problems related to the characteristics of the surface sample (like repetition, homogeneous color fragments, etc.). Unfortunately, because of these pitfalls, this super-resolution method is far from perfect, but under certain conditions it can produce quite good results. In the next subsections, the main steps of the super-resolution technique are briefly reviewed, to make it easier to demonstrate the working of our system later. The main steps of super-resolution are considered as in [2]. A schematic figure of
these steps can be seen on Fig. 2.
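The frame-approximation idea behind these steps, translating each pixel of frame n by its offset vector to estimate frame n+1, can be sketched in a few lines. The following is a minimal, illustrative Python version (the function name and the toy data are ours; the actual implementation is in C# and uses OpenCV):

```python
def warp_by_flow(frame, flow):
    """Translate every pixel of `frame` by its integer offset vector.

    frame: 2D list of intensities; flow: 2D list of (dx, dy) offsets.
    Pixels mapped outside the image are dropped; holes keep value 0.
    """
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = flow[y][x]
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                out[ny][nx] = frame[y][x]
    return out

# A 2x3 frame whose pixels all move one pixel to the right:
frame = [[1, 2, 0],
         [3, 4, 0]]
flow = [[(1, 0)] * 3 for _ in range(2)]
print(warp_by_flow(frame, flow))  # [[0, 1, 2], [0, 3, 4]]
```

A real implementation would also have to interpolate sub-pixel offsets and fill the holes left by occluded pixels; this sketch only shows the per-pixel translation itself.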
A. Registration

Figure 2. Schematic figure of the main steps of super-resolution [2].

The purpose of this first step is to enlarge the resolution of the input video and to execute the optical flow calculation on it. For the enlargement, conventional subpixelling techniques can be used, such as the nearest neighbour algorithm or bilinear and bicubic interpolation. In practice, only bicubic interpolation is appropriate, since it gives quite a precise outcome. The other two algorithms do not produce such fine results, so they are suitable only for testing and comparison. Moreover, several special subpixelling algorithms were designed specifically for super-resolution: BLAZE, SIAD [4] and LAZA [5].

When the source frame is enlarged, the optical flow calculation has to be performed on it. This is necessary because, using the calculated offset vector of each pixel, the given frame can be transformed into any other. For instance, to retrieve extra information for frame n, several other frames need to be transformed to approximate it. OpenCV implements the Lucas-Kanade, pyramid-based Lucas-Kanade, Horn-Schunck, and block matching optical flow algorithms. In our system, any of these can be chosen for the optical flow calculation. The quality of the outcome is highly affected by the selected method and the passed parameters.

B. Warping

When the enlarged version of the input frame has been created and the offset vector of each pixel has been calculated, these can be used to perform a per-pixel geometrical transformation (warping) on the subpixelled image. To do this, the position of each pixel needs to be translated by the appropriate optical flow vector or by its inverse. Thus, a new image m is generated which is almost the same as frame n+1, although small differences can occur. The extra information is provided by these differences, and eventually they ensure that new details become visible. Naturally, there are no constraints on how many consecutive frames can be used for warping. For example, by taking the frames n-2, n-1, n+1, n+2 and performing the transformation on them, multiple images can be mapped to approximate frame n. The number of frames used in this transformation also highly affects the quality of the result image created by this super-resolution algorithm. If the camera or the objects in the video move slowly, more frames can be used, since this can improve the overall quality of the generated video. Otherwise, it is much better to use a small number of frames for this operation to avoid blurring and distortion. Those side effects come from the imprecision and the errors of the optical flow calculation. The farther a frame is from frame n, the larger the errors become, because the errors of the optical flow calculation accumulate. Whatever number of input frames we use for this step, the output will be k images that illustrate mostly the same instant, with small differences.

C. Fusion

The last major step of super-resolution is the fusion. Its goal is to produce a single output frame from the k warped input images. The fused (and enhanced) output frame is the outcome of our system. Numerous methods exist to perform this kind of merging on a collection of input images. The most trivial one is simple averaging. In that method, the pixel intensities at a given coordinate of the warped frames are averaged and the result is written into the output image at the exact same coordinate. Besides this, many other techniques exist for fusion, such as the median and other more sophisticated algorithms, as summarized in [2]. Since averaging produces relatively good results considering its simplicity, we use it in our implementation.

D. Deblurring

In fact, the super-resolution algorithm implemented by our program ends with the fusion step. In [2] a post-processing/deblurring step is defined as an option. Its purpose is to achieve additional quality enhancement on the output image of the fusion. To obtain good enough results with deblurring, the authors propose a deconvolution technique. That is based on the idea of reversing the image distortion that led to the blurry image. For deconvolution, every parameter of the camera needs to be modelled. This includes the modelling of the aperture, the focus distance, the lenses, the geometrical features, and the refraction characteristics. These parameters were not available to us, because we wanted to create a general-purpose solution and not one that works only with the images of a specific camera. Therefore, for deblurring we tried other, more common techniques like unsharp masking and Laplacian sharpening. However, the application of these did not produce a sensible enhancement in quality, so in future developments we will rather try to prevent the formation of blur in the subpixelling or fusion stage.

III. THEORETICAL BACKGROUND OF IMPLEMENTATION

The basic concept of the design and implementation was to create a relatively efficient program with an easy to read and clear source structure.

For the implementation, Microsoft Visual C# 2005 and .NET 2.0 were used, but the use of these inhibits the utilization of the Intel OpenCV library, because in a .NET
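The simple averaging used in the fusion step can be illustrated with a short sketch. The snippet below is a minimal pure-Python illustration (the frame data are made up and the actual implementation is in C#):

```python
def fuse_by_averaging(frames):
    """Average the warped frames pixel by pixel into one output frame."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

# Three warped 1x3 "frames" approximating the same instant:
warped = [[[10, 20, 30]], [[12, 18, 30]], [[11, 19, 30]]]
print(fuse_by_averaging(warped))  # [[11.0, 19.0, 30.0]]
```

Because the small per-frame differences are averaged out, noise shrinks while detail that is consistent across the warped frames survives, which is exactly why this trivial method already helps.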

environment it is not possible to directly use an unmanaged library like OpenCV. However, if an intermediate layer (a wrapper) is placed between the .NET application and the unmanaged library, it is possible to use OpenCV indirectly. A wrapper is therefore necessary to bridge the differences between the working of managed and unmanaged code. Some wrappers already exist for OpenCV, e.g. [6, 7], but these were not suitable for us. Some of them are obsolete or buggy, poorly documented, lack examples, or simply do not make it possible to use certain functions that we needed. Because of this, we decided to develop a new .NET wrapper for OpenCV.

During the design of our program, the object oriented paradigm was followed, and we aspired to create a program that can be easily extended with new algorithms. Extensibility is present in each step of super-resolution; new techniques can be implemented and tested easily.

To achieve the requested extensibility, abstract base classes were defined for each step of super-resolution. Specific algorithms for subpixelling, optical flow calculation, warping, and fusion can be chosen arbitrarily. Each base class determines the inputs and outputs of a given step, and the concrete implementation is completely irrelevant outside of a specific class. Besides the base classes, there is a main control class too. It coordinates the steps of super-resolution and the working of the entire system. The specific classes that implement a given step of super-resolution can get the values of general-purpose parameters from this control class to do their job.

First, the subpixelling class does its task and enlarges the input frame. After this, the optical flow calculator class does its job. Optical flow is calculated between the current and the previous frame, and the result is stored in a queue. This storage is useful because otherwise the algorithm would have to calculate the vectors for the same frames several times, which is obviously unnecessary. If in iteration i the flow vectors are calculated between frames n and n+1, then in the next iteration the same optical flow vectors would be calculated again. This comes from the sequential processing: in iteration i+1, frame n is replaced with n+1 and frame n-1 is replaced with n, so eventually vectors would need to be calculated between n and n+1 again, since the algorithm uses several frames around n (i.e. n-2..n+2). A similar optimization can be performed for the storing of warped frames, but in that case only the frames before n need to be stored. After the calculation of the flow vectors, warping is executed on the subpixelled frames to approximate frame n. As a last step, the fusion class merges the previously warped frames. Our implementation uses the simple averaging technique to do this.

IV. THE DIRECTCV WRAPPER

A. Introduction to wrapper development

As could be seen, it is vitally important for the cooperation to place an intermediate layer between the unmanaged OpenCV and .NET. This section is about a wrapper for this task. Its main concepts are reviewed, and some basic techniques of .NET wrapper development are introduced. Before the detailed discussion of our wrapper, a short review of the .NET environment and of the features of .NET applications is given. The working of these applications is very different from that of programs written e.g. in the C language. The main difference is that applications written in C are compiled into platform specific machine code. Contrarily, .NET applications are compiled to an intermediate language called the Common Intermediate Language (CIL). At the start of the program, these CIL instructions are transformed into well optimized, platform specific machine code by a module called the Common Language Runtime (CLR). Other features of the CLR are, for example, interoperability with older applications, language independence, automatic memory and exception management, security, etc. [8]. Another difference is that in .NET, operating system services are available only through a virtual machine. This enables platform independence, which is one of the biggest benefits of .NET. To bridge the fundamental differences between the two concepts, a wrapper module is required.

As the developers of .NET were farsighted, they implemented many features to enable the use of services of earlier libraries written in C or C++. For our wrapper, the marshalling, P/Invoke and interop services features of .NET were used [9]. These features allow linking with a non-.NET library. The goal is to declare, for each necessary native OpenCV function, how to interpret the function parameters and return values. This includes the declaration of the parameter types and of their passing type, which can be pass-by-value or pass-by-reference. In some cases, the importation of certain functions can be more complicated. This occurs mainly where the type of a parameter or return value is a structure pointer, an indirect pointer, or perhaps a pointer that is handled as the base memory address of a multi-dimensional array. Dealing with these parameters is more difficult, since .NET does not support their automatic marshalling (marshalling is the technique that describes how to send data from one memory address to an unmanaged module), as it highly depends on the memory managing features of the system. Instead of pointers, the IntPtr type was used in the wrapper, because pointers can only be used in so-called unsafe code (unsafe code is a C#-only feature; if the wrapper used unsafe code, it would not be appropriate for other .NET languages, like Visual Basic). That type is specifically designed to store a memory address. It is appropriate where knowledge of the memory address alone is sufficient and there is no need to access the data at the address. However, this solution is not suitable, for example, for passing structure pointers, since the fields of the structure located at the memory pointer cannot be accessed easily. The situation is similar if it is necessary to handle the pointer as an array. Fortunately, .NET can handle one-dimensional arrays implicitly through automatic marshalling. To use that, the function parameters need to be declared using the array declaration syntax of C#. After that, the CLR does everything else automatically, and the array parameter can be used as usual. Similarly, automatic marshalling also supports the basic .NET types such as integers, characters, or strings. The logical type is also available and can be used where the native function requires a 32-bit integer. Understandably, automatic marshalling does not work, for example, for custom types and structure pointers, so the marshalling of those needs to be done manually using the marshalling services of .NET. For example, to load an image, the cvLoadImage function needs to be called, and it returns
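The core task of such a wrapper, declaring how a native function's parameters and return value are to be marshalled, exists in other managed environments as well. As an illustrative analogy (ours, not part of the C# wrapper), the following Python ctypes snippet declares the signature of a native C function, with the standard math library standing in for OpenCV:

```python
import ctypes
import ctypes.util

# Load a native (unmanaged) shared library; libm stands in for OpenCV here.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare how to marshal the parameters and the return value,
# just as a P/Invoke declaration does in C#.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

Without the `restype` declaration, the runtime would misinterpret the returned bits as an integer; the same kind of mistake in a P/Invoke declaration corrupts values silently, which is why each imported function must be declared carefully.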

a memory address to a native IplImage structure. To access the data fields of that, a structure with the same fields needs to be declared in C#. It is important to keep the order and size of the fields the same as they are in the OpenCV implementation. Otherwise, the marshalling of the field values would be incorrect. After the proper declaration of the structure, the data can be copied from the returned memory address into an instance of the new structure using the Marshal.PtrToStructure function. This way, the data fields of the native IplImage structure can be accessed easily through the C# structure. However, it is important to know that with this, only a new C# structure is allocated and initialized using the data located at the given address. The effect of this is that if OpenCV changed the value of a data field, it would not be altered in the C# structure, as one might expect, since the OpenCV and the C# structures are located at different places in memory. A similar thing happens when a field of the C# structure is changed. For correct working, it is useful to take care of the synchronization between the two versions of the same structure. Synchronization can be performed using the PtrToStructure and StructureToPtr methods of the Marshal class. Now that some technical details have been discussed, some implementation features can be read about in the next subsection.

B. Implementation of the DirectCV wrapper

The purpose of this module is to provide a simple managed interface for the Intel OpenCV library. This wrapper is heavily used by our application, since the basic image processing functions can only be accessed through this layer. Therefore, it is planned to be easy-to-use, efficient, and robust.

The word direct in the name of the project refers to the fact that direct calling of OpenCV functions is possible with this wrapper. This means that the names of the methods in the wrapper are almost the same as in OpenCV, but there is no cv prefix in the method names. It would be unnecessary, since most of the OpenCV functions are available as members of a static class called CV. The function parameter lists are the same as in the original OpenCV but, since in C# only unsafe functions can take pointers as a parameter or a return value, there are some differences. As written in the previous subsection, pointers can be replaced with the IntPtr type, arrays, or out/ref parameters.

It is important to emphasize that the only purpose of DirectCV is to provide a simple interface for calling OpenCV functions. This means that the wrapper tries to be as similar to OpenCV as possible, and for that, it sometimes does not follow object oriented design patterns. One of the most important goals was that the names, the parameters and the working of these methods be almost the same as in the native OpenCV. A great benefit of this is that the use of DirectCV can be learned in no time if a developer already knows the image processing library of Intel. Another advantage is that the OpenCV function reference can be used for DirectCV too; the available C samples can be transformed and tried out in C#, as almost everything is the same. The methods of the wrapper are well documented based on the function reference, raising the usability of the module. The documentation is converted to a form that IntelliSense can handle, so useful information about the method, the parameters or the result value can be seen at the time of typing.

Many of the OpenCV functions have integer parameters to which only certain predefined constants can be passed. For these parameters, the enumeration type of C# was used in the wrapper instead, because it has numerous benefits over integer constants. It prevents the passing of wrong values, and the code becomes much easier to read, as enumerated constants can have expressive names.

The developers at Intel supplied the functions with default parameters in many cases. These are quite useful where many parameters can be passed. C# does not support default parameters; however, the problem can be solved with method overloading. Different versions of the same function with different parameter lists can be defined easily, and this way the wrapper can provide full support for default parameters too.

In summary, the developed DirectCV is a very usable and simple wrapper for OpenCV. DirectCV imports most of the OpenCV functions, and it is suitable for solving many tasks in computer vision and image processing. Since the wrapper is very simple, robust and lightweight, it can be used as the base of a more complex, fully object oriented .NET wrapper for OpenCV. (DirectCV is under the GNU General Public License V3 and can be used freely by anyone. The latest version, with source code, can be downloaded from Google Code [10].)

V. REVIEW OF THE OPTICAL FLOW METHODS

In OpenCV, four optical flow algorithms are implemented. This section gives a short review of their general features and of how they affect the output of the super-resolution algorithm.

Each method is a differential optical flow estimation that takes two sequential frames as input. The X and Y components of the estimated offset vectors are calculated into separate floating-point images (images with a single channel of 32-bit floating point values). Unfortunately, the optical flow calculation is a problem that cannot be solved directly, since the equations contain too many unknowns. To solve the calculation problem, some constraints and conditions need to be defined to obtain more equations. The various optical flow algorithms use different constraints and conditions. Because of this, no perfect algorithm exists. The task and the input determine which technique gives the best result in a certain situation.

A. Lucas-Kanade optical flow method

The simplest optical flow calculator function is cvCalcOpticalFlowLK, which implements the Lucas-Kanade method. The initial constraint of this method is that the optical flow has to be constant within a small window. First, the partial derivatives in both the X and Y directions need to be calculated, and then a Gaussian filter is applied to the input images to reduce the noise and obtain finer vectors. One of the features of this method is that the offset vectors are accurate mainly at the borders of the moving regions. The vectors are calculated locally and, because of this, they change quickly between consecutive frames. A big advantage is that noise does not affect the computed vectors so much, and the offset vectors are estimated for each pixel. In super-resolution this is necessary, since later, in the warping step, every pixel has to be transformed
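The idea of mirroring a native structure field by field and copying its data from a raw memory address has direct counterparts outside .NET as well. As an illustrative analogy (ours, not taken from the paper's C# code), Python's ctypes can declare a matching structure and copy it from an address, much like Marshal.PtrToStructure:

```python
import ctypes

# Managed-side mirror of a native struct: field order and sizes must
# match the native declaration, as with IplImage in C#.
class Point(ctypes.Structure):
    _fields_ = [("x", ctypes.c_int), ("y", ctypes.c_int)]

native = Point(3, 4)                # stands in for a struct a native call returned
address = ctypes.addressof(native)  # the raw pointer we would get back

# Equivalent of Marshal.PtrToStructure: copy the bytes at the address
# into a new, independent instance.
copy = Point()
ctypes.memmove(ctypes.addressof(copy), address, ctypes.sizeof(Point))

native.x = 99          # changing the "native" side...
print(copy.x, copy.y)  # 3 4  ...does not affect the independent copy
```

The last two lines show the synchronization pitfall described above: the copy lives at a different memory location, so changes on either side are invisible to the other until the data is copied back.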

using its offset vector. As we shall see later, not all optical flow algorithms calculate the vectors per pixel. For those algorithms, we have to ensure that the vectors are known for every pixel. The simplest way to do this is to enlarge the floating-point images of the offsets to the same size as the resolution of the subpixelled frames.

According to our tests, this function is not the best for calculating optical flow for super-resolution, since the vectors are accurate only at the borders of homogeneous regions and only small offset vectors can be calculated. These can lead to the blurring of the enhanced image.

Figure 4. Motion blur is generated when fast movement happens. This was caused by the warping step, as the length of the offset vectors of the optical flow is too small. Motion blur is present with each technique, more or less.

B. Lucas-Kanade method with Gaussian pyramids

The cvCalcOpticalFlowPyrLK function implements the Gaussian pyramid based Lucas-Kanade algorithm. In this case, optical flow is calculated on the levels of the Gaussian pyramids that are built from the input images. The advantage of this is that the function first works only on the small versions of the images and estimates the offsets from those. If the error of the result is too large, the algorithm processes finer levels of the Gaussian pyramid. When the result of processing the smaller levels is accurate enough, it is not necessary to continue the algorithm on the more detailed levels. This speeds up the processing, since offsets can generally be estimated quite accurately at the small levels, so the technique needs to process far fewer pixels. Another feature is that larger offset vectors can appear. This eliminates one of the disadvantages of the basic Lucas-Kanade technique.

The larger offset vectors can have bigger errors, however. Big errors in the optical flow can cause raindrop-like distortions in the result of the super-resolution, as Fig. 3 shows. The reason for this is that a very distant, random image part is mapped to a small portion of the image because of the large, wrong offset vectors.

Figure 3. Raindrop-like effect on the output of super-resolution when using the pyramid based Lucas-Kanade technique. In this sample, bad parameters were passed to the function, so the effect is more noticeable.

In contrast to the previously presented method, this one does not estimate the vectors for each pixel, only for given feature points. There are several algorithms for finding feature points (like the cvGoodFeaturesToTrack corner detector function of OpenCV), but as these points are placed randomly across the image, it would be hard to find the offsets of all pixels from them. To solve this, the coordinates of a regular grid are used to estimate the offsets. For example, every fifth or tenth pixel coordinate is appropriate as a feature point. In this way, later only the resizing of the X and Y offset images needs to be performed.

This optical flow technique is useful where following certain feature points is sufficient and large pixel offsets can occur between sequential frames. For super-resolution, this method does not provide good enough results, because of the possible large errors of the offset vectors and hence the raindrop-like effect.

C. Horn-Schunck optical flow method

Generally, our super-resolution system provides the best quality when the Horn-Schunck optical flow method is used. It works similarly to the basic Lucas-Kanade algorithm, but it defines a global smoothness constraint to solve the optical flow calculation. This means that adjacent offset vectors are required to be similar to each other. By this, the technique is able to estimate the inside of homogeneous regions accurately. Therefore, while the Lucas-Kanade algorithm can only estimate the vectors accurately at the borders of moving regions, the Horn-Schunck algorithm generates accurate vectors inside the homogeneous areas too. For this, it propagates the vectors calculated at the borders of the moving areas to the rest of the homogeneous region. Since the vectors at these borders influence the offset values of a relatively big region, it is crucial for the border vectors to be accurate. As this cannot be guaranteed, the technique is more sensitive to noise. Even so, this algorithm generates the best results for super-resolution, because of its handling of homogeneous regions. As a result, the vector field is more accurate and less blurring evolves in the homogeneous regions, since the warping can use more correct optical flow vectors.

This technique is implemented by the cvCalcOpticalFlowHS function, which is well scalable through its parameters. Besides the common parameters, termination criteria can be passed to describe how accurately the vectors should be estimated.

D. Block matching optical flow method

The only technique that has not been discussed yet is the block matching algorithm. It works by finding similar blocks in consecutive frames where the intensities of the pixels are almost identical. The shifts of these blocks give the offset vectors of the optical flow. This kind of block matching technique was originally used in video compression, to minimize the redundancy of the video frames and hence make the size of the video smaller.

For the cvCalcOpticalFlowBM function, the block size and other parameters can be passed. Similarly to cvCalcOpticalFlowPyrLK, this function does not produce
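Enlarging a sparse offset field to per-pixel resolution can be done with a simple nearest-neighbour scaling. A small illustrative Python sketch follows (the function and data are ours; the real implementation resizes OpenCV's floating-point offset images):

```python
def upscale_flow(field, factor):
    """Nearest-neighbour upscaling of a sparse offset field so that
    every pixel of the enlarged frame gets an offset value."""
    return [[field[y // factor][x // factor]
             for x in range(len(field[0]) * factor)]
            for y in range(len(field) * factor)]

# A 2x2 field of X offsets enlarged to 4x4: each coarse value is
# repeated over the 2x2 block of pixels it covers.
coarse = [[1.0, 2.0],
          [3.0, 4.0]]
print(upscale_flow(coarse, 2))
```

Bilinear interpolation of the offset images would give smoother vector fields; nearest-neighbour is shown only because it makes the idea obvious.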
optical flow vectors for each pixel of the input images.
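The block matching idea, finding where a block moved by minimizing the intensity difference, can be sketched with an exhaustive sum-of-squared-differences search. This toy Python version is ours (parameter names and data are illustrative, not OpenCV's):

```python
def match_block(prev, curr, top, left, size, radius):
    """Find the (dy, dx) shift of the size x size block at (top, left)
    in `prev` that best matches `curr`, by exhaustive SSD search."""
    best, best_shift = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ssd = 0
            for y in range(size):
                for x in range(size):
                    ny, nx = top + dy + y, left + dx + x
                    if not (0 <= ny < len(curr) and 0 <= nx < len(curr[0])):
                        ssd = None  # shifted block falls outside the frame
                        break
                    ssd += (prev[top + y][left + x] - curr[ny][nx]) ** 2
                if ssd is None:
                    break
            if ssd is not None and (best is None or ssd < best):
                best, best_shift = ssd, (dy, dx)
    return best_shift  # this shift is the block's optical flow vector

# A bright 2x2 block moves one pixel to the right between two frames:
prev = [[9, 9, 0, 0], [9, 9, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
curr = [[0, 9, 9, 0], [0, 9, 9, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
print(match_block(prev, curr, 0, 0, 2, 1))  # (0, 1)
```

One such vector is produced per block rather than per pixel, which is why the resulting field has to be enlarged before warping, as discussed above for the pyramid-based method.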

Authorized licensed use limited to: National Taipei University of Technology. Downloaded on October 13, 2009 at 03:31 from IEEE Xplore. Restrictions apply.
Figure 5. Wrong block sizes can cause the false estimation of flow vectors. This can lead to the creation of blur and distortions.

The size of the vector field is determined by the block size, and after calling the function, resizing of the vector field is crucial. If the size of the original input image is O and the block size is B, then the resolution of the vector field is O / B.

Block matching optical flow is the second best algorithm for our super-resolution program. The offset vectors change smoothly between the frames, and the vectors are quite accurate in homogeneous regions too. Thus, blurring is not significant; however, sometimes false offset vectors are estimated at the edges of the images when a wrong block size is used, as Fig. 5 shows.

VI. ACHIEVED RESULTS

As can be read in the previous section, the Horn-Schunck optical flow method produces the best results for our super-resolution implementation and for the test videos. Besides this, bicubic interpolation is used for subpixelling, and the cvRemap function for warping. cvRemap maps each pixel to a given coordinate, which is exactly what is needed. For the fusion, only the simple average of the warped images is used. The fused image is the image with enhanced quality, and it is the output of our system. Figs. 6 and 7 show some successful outputs of the program.

Generally, the outputs of the program have much less noise than the inputs, because many frames are averaged during the fusion step. Furthermore, the result is much smoother and aliasing is less noticeable, as can be seen in Fig. 7. Motion blur and other distortions can occur when there is a lot of change between the frames, as in the case of Fig. 4.

Figure 6. Bicubic interpolation (left) and the enhanced frame (right). Compare the images at the shoulder of the man. The right image is much more realistic, since it is smoother and there is less noise.

Figure 7. Bicubic interpolation of a video frame (left) and the enhanced frame (right) created by the program. For the enhancement, the Horn-Schunck optical flow algorithm was used.

VII. CONCLUSION

In summary, we developed a well-extensible, simple-to-handle system in which different reconstruction based super-resolution methods can be tested and compared with each other. Despite its simplicity, quite good results can sometimes be achieved. During the comparison of the different optical flow methods for super-resolution, the generated output image sometimes has intense blur, mainly because of the imprecision of the given optical flow method.

Furthermore, a .NET wrapper was developed for OpenCV, which is freely available to anyone, easy-to-use, well-documented, and has extensive IntelliSense support.

REFERENCES

[1] Intel, Open Source Computer Vision Library, http://www.intel.com/technology/computing/opencv, visited on 2008-08-12.
[2] S. Baker and T. Kanade, Super-Resolution Optical Flow, CMU-RI-TR-99-36, 1999.
[3] L. Zhouchen, H. Junfeng, T. Xiaoou, T. Chi-Keung, Limits of Learning-Based Superresolution Algorithms, Technical Report MSR-TR-2007-92.
[4] S. Battiato, G. Gallo, F. Stanco, Smart Interpolation by Anisotropic Diffusion, IEEE 12th Int. Conf. on Image Analysis and Processing, 2003, pp. 572-577.
[5] S. Battiato, G. Gallo, F. Stanco, A Locally Adaptive Zooming Algorithm for Digital Images, Elsevier Image and Vision Computing, 20/11, 2002, pp. 805-812.
[6] OpenCVDotNet, .NET Framework Wrapper for Intel's OpenCV Package, http://code.google.com/p/opencvdotnet/, visited on 2008-08-12.
[7] Rhodes University, SharperCV project, http://www.cs.ru.ac.za/research/groups/SharperCV/, visited on 2008-08-12.
[8] J. Richter, Applied Microsoft .NET Framework Programming, Microsoft Press, 2002.
[9] C. Nagel, B. Evjen, J. Glynn, M. Skinner, K. Watson, A. Jones, Professional C# 2005, Wiley Publishing, Inc., 2006.
[10] A. Nagy, DirectCV, A lightweight .NET wrapper for Intel's OpenCV library, http://code.google.com/p/directcv/, visited on 2008-08-13.
