
Introduction

ViSP [21] is a modular C++ library that allows fast development of visual servoing applications.
ViSP is developed and maintained by the Inria Lagadic team located at Inria Rennes.
The ViSP official site is http://team.inria.fr/lagadic/visp
If you have any problems or find any bugs, please report them at
http://gforge.inria.fr/tracker/?group_id=397. If you need help, please use the available
forums http://gforge.inria.fr/forum/?group_id=397 or mailing lists
http://gforge.inria.fr/mail/?group_id=397.
It is also possible to contact the main ViSP developers at visp@inria.fr
Download
From http://team.inria.fr/lagadic/visp/download.html you can either download the latest
stable release or check out the current development distribution using Subversion.
Installation
Because ViSP is a multi-platform library that works under Linux, OSX and Windows, you need
the CMake configuration tool, available from http://www.cmake.org, to install it.
Furthermore, depending on your operating system and the capabilities you need (framegrabber,
display, simulation, ...), you may have to install third-party libraries
(http://team.inria.fr/lagadic/visp/libraries.html) prior to installing ViSP.
The full ViSP installation procedure using CMake is detailed in the introduction tutorials. Getting
started documents in PDF are also available from
http://team.inria.fr/lagadic/visp/publication.html.
Tutorials
Hereafter you will find a list of tutorials that show the basic use of ViSP classes through a small
first program.
Introduction

Tutorial: Installation from prebuilt packages on Linux Ubuntu


In this first tutorial you will learn how to install the prebuilt ViSP library from Ubuntu
packages.
Tutorial: Installation from source on Linux Ubuntu
In this tutorial you will learn how to install ViSP from source on Linux Ubuntu.
Tutorial: Installation from source on Linux Fedora
In this tutorial you will learn how to install ViSP from source on Linux Fedora.
Tutorial: Installation from source on Raspberry Pi
In this tutorial you will learn how to install ViSP from source on Raspberry Pi.
Tutorial: Installation from source on Windows
In this tutorial you will learn how to install ViSP from source on Windows.

Tutorial: Installation from source on OSX for iOS devices


In this tutorial you will learn how to install ViSP from source on OSX for iOS projects.
Tutorial: Getting started
This tutorial shows how to build a project that uses ViSP to read and display an image.
Tutorial: Getting started for iOS
This tutorial shows how to build a project that uses ViSP on iOS devices.

Image manipulation

Tutorial: Image frame grabbing


This tutorial shows how to acquire images from a camera.
Tutorial: Image filtering
This tutorial shows how to filter an image with ViSP.
Tutorial: Planar image projection
This tutorial shows how to project the image of a planar scene to a given camera position.

Camera calibration

Tutorial: Camera calibration


This tutorial explains how to calibrate a camera.

Tracking

Tutorial: Blob tracking


This tutorial introduces blob tracking and detection.
Tutorial: Keypoint tracking
This tutorial focuses on keypoint tracking using the Kanade-Lucas-Tomasi feature tracker.
Tutorial: Moving-edges tracking
This tutorial focuses on line and ellipse tracking using moving-edges.
Tutorial: Model-based tracking
This tutorial focuses on model-based trackers using either edges, keypoints, or a hybrid
scheme that combines edges and keypoints.
Tutorial: Template tracking
This tutorial focuses on template trackers based on image registration approaches.

Keypoint

Tutorial: Keypoint matching


This tutorial shows how to detect and match SURF keypoints.

Computer vision

Tutorial: Pose estimation from points


This tutorial focuses on pose estimation from planar or non-planar points.
Tutorial: Homography estimation from points
Here we explain how to estimate a homography from pairs of matched points.

Visual servoing

Tutorial: Image-based visual servo


This tutorial explains how to simulate an IBVS.

Tutorial: Visual servo simulation on a pioneer-like unicycle robot


This tutorial focuses on visual servoing simulation on a unicycle robot. The study case is
a Pioneer P3-DX mobile robot equipped with a camera.
Tutorial: How to boost your visual servo control law
This tutorial explains how to speed up the time to convergence of a visual servo.

Other tools

Tutorial: Real-time curves plotter tool


This tutorial explains how to plot curves in real-time during a visual servo.
Tutorial: Debug and trace printings
This tutorial explains how to introduce traces in the code that can be enabled for
debugging or disabled.

Using ViSP
ViSP C++ classes are organized in modules that may help the user during project
implementation.
On the example page, you will also find examples showing how to use the library to acquire
and display an image, compute a camera pose, estimate a homography, and servo a real or
simulated robot using a 2D, 2 1/2 D, or 3D visual servoing scheme.

Image frame grabbing


Images from firewire cameras
The next example shows how to use a framegrabber to acquire gray-level images from a firewire
camera under Unix. The example supposes that the libX11 and libdc1394-2 third-party
libraries are available.
#include <iostream>

#include <visp/vp1394TwoGrabber.h>
#include <visp/vpDisplayX.h>
#include <visp/vpImage.h>

int main()
{
#ifdef VISP_HAVE_DC1394_2
  try {
    vpImage<unsigned char> I; // Create a gray level image container
    bool reset = true;        // Enable bus reset during construction (default)
    vp1394TwoGrabber g(reset); // Create a grabber based on libdc1394-2.x third party lib
    g.setVideoMode(vp1394TwoGrabber::vpVIDEO_MODE_640x480_MONO8);
    g.setFramerate(vp1394TwoGrabber::vpFRAMERATE_60);
    g.open(I);
    std::cout << "Image size: " << I.getWidth() << " " << I.getHeight() << std::endl;
#ifdef VISP_HAVE_X11
    vpDisplayX d(I);
#else
    std::cout << "No image viewer is available..." << std::endl;
#endif
    while(1) {
      g.acquire(I);
      vpDisplay::display(I);
      vpDisplay::flush(I);
      if (vpDisplay::getClick(I, false))
        break;
    }
  }
  catch(vpException &e) {
    std::cout << "Catch an exception: " << e << std::endl;
  }
#endif
}
Hereafter we explain the new lines that are introduced.
First an instance of the frame grabber is created. During construction a bus reset is sent. If you
don't want to reset the firewire bus, just set reset to false.
vp1394TwoGrabber g(reset);
Once the grabber is created, we set the camera image size, color coding, and framerate.
g.setVideoMode(vp1394TwoGrabber::vpVIDEO_MODE_640x480_MONO8);

g.setFramerate(vp1394TwoGrabber::vpFRAMERATE_60);
Note that here you can specify some other settings such as the firewire transmission speed. For a
more complete list of settings see vp1394TwoGrabber class.
g.setIsoTransmissionSpeed(vp1394TwoGrabber::vpISO_SPEED_800);
Then the grabber is initialized using:
g.open(I);
From now on, the image I is initialized with the size corresponding to the grabber
settings.
Then we enter a while loop where image acquisition is simply done by:
g.acquire(I);
We wait for a non-blocking mouse click to break the while loop before ending the
program.
if (vpDisplay::getClick(I, false)) break;
In the previous example we used the vp1394TwoGrabber class that works with firewire cameras
under Unix. Under Windows, you may use the vp1394CMUGrabber class instead. A similar
example is provided in tutorial-grabber-CMU1394.cpp.
Images from other cameras
If you want to grab images from a USB camera under Unix, you may use the vpV4l2Grabber class.
To this end libv4l should be installed. An example is provided in tutorial-grabber-v4l2.cpp.
It is also possible to grab images using OpenCV. You may find examples in tutorial-grabber-opencv.cpp and tutorial-grabber-opencv-bis.cpp.
Images from a video stream
With ViSP it is also possible to get images from an input video stream. Supported formats are
*.avi, *.mp4, *.mov, *.ogv, *.flv and many others. To this end we exploit the ffmpeg or OpenCV
third-party libraries.
If ViSP was built with ffmpeg third-party support (cmake -DUSE_FFMPEG=ON ...), we use
ffmpeg capabilities to decode the video stream. If ffmpeg is not found or not used (cmake
-DUSE_FFMPEG=OFF ...) and if OpenCV is available (cmake -DUSE_OPENCV=ON ...), we
rather use OpenCV capabilities. This feature was introduced in ViSP 2.10.0 and is
especially useful under Windows, where installing ffmpeg is quite complex.
The example below, available in tutorial-grabber-video.cpp, shows how to read an mpeg video
stream.

Warning
We recall that this example works only if ViSP was built with ffmpeg or OpenCV
support.
#include <visp/vpDisplayGDI.h>
#include <visp/vpDisplayOpenCV.h>
#include <visp/vpDisplayX.h>
#include <visp/vpTime.h>
#include <visp/vpVideoReader.h>

int main()
{
  try {
    vpImage<unsigned char> I;
    vpVideoReader g;
    g.setFileName("./video.mpg");
    g.open(I);
    std::cout << "video framerate: " << g.getFramerate() << "Hz" << std::endl;
    std::cout << "video dimension: " << I.getWidth() << " " << I.getHeight() << std::endl;
#ifdef VISP_HAVE_X11
    vpDisplayX d(I);
#elif defined(VISP_HAVE_GDI)
    vpDisplayGDI d(I);
#elif defined(VISP_HAVE_OPENCV)
    vpDisplayOpenCV d(I);
#else
    std::cout << "No image viewer is available..." << std::endl;
#endif
    vpDisplay::setTitle(I, "Video grabber");
    while (! g.end() ) {
      double t = vpTime::measureTimeMs();
      g.acquire(I);
      vpDisplay::display(I);
      vpDisplay::flush(I);
      if (vpDisplay::getClick(I, false)) break;
      vpTime::wait(t, 1000. / g.getFramerate());
    }
  }
  catch(vpException &e) {
    std::cout << e.getMessage() << std::endl;
  }
}
We now explain the new lines that were introduced.
#include <visp/vpTime.h>
#include <visp/vpVideoReader.h>
Include the header of the vpTime class that provides time measurement, and of the vpVideoReader
class that allows reading a video stream.
vpVideoReader g;
Create an instance of a video reader.
g.setFileName("./video.mpg");
Set the name of the video stream. Here video.mpg refers to an mpeg file located in the same
folder as the executable.
The vpVideoReader class can also handle a sequence of images. For example, to read the
following images:
% ls *.png
image0000.png

image0001.png
image0002.png
image0003.png
image0004.png
...
you may use the following
g.setFileName("./image%04d.png");
where you specify that each image number is coded with 4 digits. Here ffmpeg is not
mandatory; rather libpng or OpenCV should be available to read PNG images.
Supported image formats are PPM, PGM, PNG and JPEG.
Then, as for any other grabber, you have to initialize the frame grabber using:
g.open(I);
Then we enter the while loop that runs until the last image is reached:
while (! g.end() ) {
To get the next image in the stream, we just use:
g.acquire(I);
To synchronize the video decoding with the video framerate, we measure the time at the
beginning of each loop iteration:
double t = vpTime::measureTimeMs();
The synchronization is then achieved by waiting, from the beginning of the iteration, until the
frame period expressed in milliseconds has elapsed:
vpTime::wait(t, 1000. / g.getFramerate());
You are now ready to see the next Tutorial: Blob tracking.
Image filtering

Introduction
In this tutorial you will learn how to use ViSP filtering functions implemented in vpImageFilter
class.
Let us consider the following source code that comes from tutorial-image-filter.cpp.

#include <visp/vpDisplayD3D.h>
#include <visp/vpDisplayGDI.h>
#include <visp/vpDisplayGTK.h>
#include <visp/vpDisplayX.h>
#include <visp/vpDisplayOpenCV.h>
#include <visp/vpImageIo.h>
#include <visp/vpImageFilter.h>

void display(vpImage<unsigned char> &I, const std::string &title);
void display(vpImage<double> &D, const std::string &title);

void display(vpImage<unsigned char> &I, const std::string &title)
{
#if defined(VISP_HAVE_X11)
  vpDisplayX d(I);
#elif defined(VISP_HAVE_OPENCV)
  vpDisplayOpenCV d(I);
#elif defined(VISP_HAVE_GTK)
  vpDisplayGTK d(I);
#elif defined(VISP_HAVE_GDI)
  vpDisplayGDI d(I);
#elif defined(VISP_HAVE_D3D9)
  vpDisplayD3D d(I);
#else
  std::cout << "No image viewer is available..." << std::endl;
#endif
  vpDisplay::setTitle(I, title.c_str());
  vpDisplay::display(I);
  vpDisplay::displayCharString(I, 15, 15, "Click to continue...", vpColor::red);
  vpDisplay::flush(I);
  vpDisplay::getClick(I);
}

void display(vpImage<double> &D, const std::string &title)
{
  vpImage<unsigned char> I; // Image to display
  vpImageConvert::convert(D, I);
  display(I, title);
}

int main(int argc, char** argv)
{
  try {
    if(argc != 2) {
      printf("Usage: %s <image name.[pgm,ppm,jpeg,png,bmp]>\n", argv[0]);
      return -1;
    }
    vpImage<unsigned char> I;
    try {
      vpImageIo::read(I, argv[1]);
    }
    catch(...) {
      std::cout << "Cannot read image \"" << argv[1] << "\"" << std::endl;
      return -1;
    }
    display(I, "Original image");

    vpImage<double> F;
    vpImageFilter::gaussianBlur(I, F);
    display(F, "Blur (default)");
    vpImageFilter::gaussianBlur(I, F, 7, 2);
    display(F, "Blur (var=2)");

    vpImage<double> dIx;
    vpImageFilter::getGradX(I, dIx);
    display(dIx, "Gradient dIx");
    vpImage<double> dIy;
    vpImageFilter::getGradY(I, dIy);
    display(dIy, "Gradient dIy");

#if (VISP_HAVE_OPENCV_VERSION >= 0x020100)
    vpImage<unsigned char> C;
    vpImageFilter::canny(I, C, 5, 15, 3);
    display(C, "Canny");
#endif

    vpMatrix K(3,3); // Sobel kernel along x
    K[0][0] = 1; K[0][1] = 0; K[0][2] = -1;
    K[1][0] = 2; K[1][1] = 0; K[1][2] = -2;
    K[2][0] = 1; K[2][1] = 0; K[2][2] = -1;
    vpImage<double> Gx;
    vpImageFilter::filter(I, Gx, K);
    display(Gx, "Sobel x");

    size_t nlevel = 3;
    std::vector< vpImage<unsigned char> > pyr(nlevel);
    pyr[0] = I;
    for (size_t i=1; i < nlevel; i++) {
      vpImageFilter::getGaussPyramidal(pyr[i-1], pyr[i]);
      display(pyr[i], "Pyramid");
    }
    return 0;
  }
  catch(vpException &e) {
    std::cout << "Catch an exception: " << e << std::endl;
    return 1;
  }
}
Once built, you should have the tutorial-image-filter binary. It shows how to apply different
filters to an input image. Here we will consider lena.pgm as the input image.

To see the resulting filtered images, just run:

./tutorial-image-filter lena.pgm
The following sections give a line by line explanation of the source code dedicated to image
filtering capabilities.

Gaussian blur
The Lena input image is read from disk and stored in I, which is a gray-level image declared as
vpImage<unsigned char> I;
To apply a Gaussian blur to this image, we first declare a resulting floating-point image F.
The blurred image is then obtained using the default Gaussian filter:
vpImage<double> F;
vpImageFilter::gaussianBlur(I, F);
The resulting image is the following:

It is also possible to specify the Gaussian filter kernel size and the Gaussian standard deviation
(sigma) using:
vpImageFilter::gaussianBlur(I, F, 7, 2); // Kernel size: 7, sigma: 2
We thus obtain the following image:

Gradients computation

To compute the gradients or the spatial derivative along X use:


vpImage<double> dIx;
vpImageFilter::getGradX(I, dIx);
Gradients along Y can be obtained using:
vpImage<double> dIy;
vpImageFilter::getGradY(I, dIy);
The resulting floating-point images dIx, dIy are the following:

Canny edge detector


The Canny edge detector function is only available if ViSP was built with OpenCV 2.1 or higher.
After the declaration of a new image container C, the Canny edge detector is applied using:
#if (VISP_HAVE_OPENCV_VERSION >= 0x020100)
vpImage<unsigned char> C;
vpImageFilter::canny(I, C, 5, 15, 3);
#endif
Where:

5 is the low threshold;

15 is the high threshold, set in the program to three times the low threshold (following
Canny's recommendation);
3 is the size of the Sobel kernel used internally.

The resulting image C is the following:

Convolution
To apply a convolution to an image, we first have to define a kernel. For example, let us consider
the 3x3 Sobel kernel defined in K.

vpMatrix K(3,3); // Sobel kernel along x


K[0][0] = 1; K[0][1] = 0; K[0][2] = -1;
K[1][0] = 2; K[1][1] = 0; K[1][2] = -2;
K[2][0] = 1; K[2][1] = 0; K[2][2] = -1;
After the declaration of a new floating-point image Gx, the convolution is applied using:
vpImage<double> Gx;
vpImageFilter::filter(I, Gx, K);
The content of the filtered image Gx is the following.

Gaussian image pyramid


To construct a pyramid of Gaussian-filtered images stored in the vector of images pyr[],
you may use:
size_t nlevel = 3;
std::vector< vpImage<unsigned char> > pyr(nlevel);
pyr[0] = I;
for (size_t i=1; i < nlevel; i++) {
vpImageFilter::getGaussPyramidal(pyr[i-1], pyr[i]);
display(pyr[i], "Pyramid");
}
The content of pyr[0], pyr[1], pyr[2] is the following:

Planar image projection

Introduction
The aim of this tutorial is to explain how to use the vpImageSimulator class to project an image of a
planar scene at a given camera position. This capability can, for example, be used during the
simulation of a visual servo, as described in Tutorial: Image-based visual servo, to introduce
image processing.

Image projection
Given the image of a planar 20cm by 20cm square target such as the one presented in the next image,
we show hereafter how to project this image at a given camera position, and how to get the
resulting image.

Image of a planar 20cm by 20cm square target.

This is done by the following code also available in tutorial-image-simulator.cpp:


#include <visp/vpDisplayX.h>
#include <visp/vpDisplayGDI.h>
#include <visp/vpImageIo.h>
#include <visp/vpImageSimulator.h>

int main()
{
  try {
    vpImage<unsigned char> target;
    vpImageIo::read(target, "./target_square.pgm");

    vpColVector X[4];
    for (int i = 0; i < 4; i++) X[i].resize(3);
    // Top left      Top right      Bottom right   Bottom left
    X[0][0] = -0.1;  X[1][0] = 0.1;  X[2][0] = 0.1; X[3][0] = -0.1;
    X[0][1] = -0.1;  X[1][1] = -0.1; X[2][1] = 0.1; X[3][1] = 0.1;
    X[0][2] = 0;     X[1][2] = 0;    X[2][2] = 0;   X[3][2] = 0;

    vpImage<unsigned char> I(480, 640);
    vpCameraParameters cam(840, 840, I.getWidth()/2, I.getHeight()/2);
    vpHomogeneousMatrix cMo(0, 0, 0.35, 0, vpMath::rad(30), vpMath::rad(15));

    vpImageSimulator sim;
    sim.setInterpolationType(vpImageSimulator::BILINEAR_INTERPOLATION);
    sim.init(target, X);

    // Get the new image of the projected planar image target
    sim.setCleanPreviousImage(true);
    sim.setCameraPosition(cMo);
    sim.getImage(I, cam);

    try {
      vpImageIo::write(I, "./rendered_image.jpg");
    }
    catch(...) {
      std::cout << "Unsupported image format" << std::endl;
    }

#if defined(VISP_HAVE_X11)
    vpDisplayX d(I);
#elif defined(VISP_HAVE_GDI)
    vpDisplayGDI d(I);
#else
    std::cout << "No image viewer is available..." << std::endl;
#endif
    vpDisplay::setTitle(I, "Planar image projection");
    vpDisplay::display(I);
    vpDisplay::flush(I);
    std::cout << "A click to quit..." << std::endl;
    vpDisplay::getClick(I);
  }
  catch(vpException &e) {
    std::cout << "Catch an exception: " << e << std::endl;
  }
}

The result of this program is shown in the next image.

Resulting projection of the planar image at a given camera position.

We provide hereafter the explanation of the new lines that were introduced.
#include <visp/vpImageSimulator.h>

Include the header of the vpImageSimulator class that allows projecting an image at a given
camera position.
Then in the main() function we create an instance of a gray level image that corresponds to the
image of the planar target, and then we read the image from the disk.
vpImage<unsigned char> target;
vpImageIo::read(target, "./target_square.pgm");

Since the previous image corresponds to a 20cm by 20cm target, we initialize the 3D coordinates
of each corner in the plane Z=0:
vpColVector X[4];
for (int i = 0; i < 4; i++) X[i].resize(3);
// Top left Top right Bottom right Bottom left
X[0][0] = -0.1; X[1][0] = 0.1; X[2][0] = 0.1; X[3][0] = -0.1;
X[0][1] = -0.1; X[1][1] = -0.1; X[2][1] = 0.1; X[3][1] = 0.1;
X[0][2] = 0; X[1][2] = 0; X[2][2] = 0; X[3][2] = 0;

Then we create an instance of the image I that will contain the rendered image from a given
camera position.
vpImage<unsigned char> I(480, 640);

Since the projection depends on the camera, we set its intrinsic parameters.
vpCameraParameters cam(840, 840, I.getWidth()/2, I.getHeight()/2);

We also set the rendering position of the camera as a homogeneous transformation between the
camera frame and the target frame.
vpHomogeneousMatrix cMo(0, 0, 0.35, 0, vpMath::rad(30), vpMath::rad(15));

We create here an instance of the planar image projector, set the interpolation to bilinear and
initialize the projector with the image of the target and the coordinates of its corners.
vpImageSimulator sim;
sim.setInterpolationType(vpImageSimulator::BILINEAR_INTERPOLATION);
sim.init(target, X);

Now to retrieve the rendered image we first clean the content of the image to render, set the
camera position, and finally get the image using the camera parameters.
sim.setCleanPreviousImage(true);
sim.setCameraPosition(cMo);
sim.getImage(I, cam);

Then, if libjpeg is available, the rendered image is saved in the same directory as the
executable.
#ifdef VISP_HAVE_JPEG

vpImageIo::write(I, "./rendered_image.jpg");
#endif

Finally, as in Tutorial: Getting started, we open a window to display the rendered image.
Note that this planar image projection capability has also been introduced in the vpVirtualGrabber
class exploited in tutorial-ibvs-4pts-image-tracking.cpp. The next Tutorial: Image-based
visual servo shows how to use it to introduce image processing that tracks
the target during a visual-servo simulation.

Camera calibration
This tutorial focuses on pinhole camera calibration. The goal of the calibration is to estimate
camera parameters that relate the camera's natural units (pixel positions in the image) to
real-world units (normalized positions in meters in the image plane).

Introduction
If we denote (u, v) the position of a pixel in the digitized image, this position is related to the
corresponding coordinates (x, y) in the normalized space.
In ViSP we consider two unit conversions:

From meters to pixels we consider the following formula:

u = u0 + x px (1 + kud r^2),  v = v0 + y py (1 + kud r^2)

with r^2 = x^2 + y^2.

From pixels to meters we consider the following formula:

x = (u - u0)/px (1 + kdu r^2),  y = (v - v0)/py (1 + kdu r^2)

with r^2 = ((u - u0)/px)^2 + ((v - v0)/py)^2.

In this model we consider the parameters (u0, v0, px, py, kud, kdu) where:

(u0, v0) are the coordinates of the principal point in pixels;

(px, py) are the ratios between the focal length and the size of a pixel;
(kud, kdu) are the parameters used to correct the distortion.

Note that the container dedicated to camera parameters is implemented in the
vpCameraParameters class. It allows considering two kinds of models: with or without distortion.
The calibration process allows estimating the values of these parameters. To this end, one of the
following calibration grids can be used:

a black and white chessboard [OpenCV_Chessboard.pdf];


a symmetrical circle pattern [grid2d.pdf].

To calibrate your camera you need to take snapshots of one of these two patterns with your
camera. At least 5 good snapshots of the input pattern acquired at different positions are
required for good results.

Source code
The source code of the calibration tool is available in camera_calibration.cpp, located in the
example/calibration folder.
We will not describe the source in detail, but just mention that:

the image processing is performed using OpenCV;

the estimation of the parameters is done using a virtual visual servoing scheme;
the calibration tool takes as input a configuration file that specifies the kind of
pattern used in the images (chessboard or circles grid) and the location of the input
images. If the libjpeg and libpng third-party libraries are installed and detected during
ViSP configuration, you may consider .pgm, .ppm, .jpg and .png images. Default
configuration files are provided in the example/calibration folder;
the resulting parameters are saved in the camera.xml file.

Calibration from a chessboard


In this section we consider the OpenCV chessboard pattern that has a size of 9 by 6. Each square
of the chessboard is 0.025 meters wide. We took 5 images called chessboard-01.png,
chessboard-02.png, ..., chessboard-05.png. Hereafter we give an example of one of these
images.

Snapshot example of the chessboard used to calibrate the camera.


Before starting the calibration we need to create a configuration file. We create default-chessboard.cfg with the following content:
# Number of inner corners per a item row and column. (square, circle)
BoardSize_Width: 9
BoardSize_Height: 6
# The size of a square in meters
Square_Size: 0.025
# The type of pattern used for camera calibration.
# One of: CHESSBOARD or CIRCLES_GRID
Calibrate_Pattern: CHESSBOARD

# The input image sequence to use for calibration


Input: chessboard-%02d.png
# Tempo in seconds between two images. If > 10 wait a click to continue
Tempo: 1
Note
The images and the configuration file used in this tutorial are available in the ViSP source
code and copied in the same folder as the camera_calibration binary.
To estimate the camera parameters, go to ViSP <binary_dir>/examples/calibration
and run:
./camera_calibration default-chessboard.cfg
This command will produce the following output:
grid width : 9
grid height: 6
square size: 0.025
pattern : CHESSBOARD
input seq : chessboard-%02d.png
tempo : 1
frame: 1, status: 1, image used as input data
frame: 2, status: 1, image used as input data
frame: 3, status: 1, image used as input data
frame: 4, status: 1, image used as input data
frame: 5, status: 1, image used as input data
Calibration without distorsion in progress on 5 images...
Camera parameters for perspective projection without distortion:
px = 278.5184659 py = 273.9720502
u0 = 162.1161106 v0 = 113.1789595
Global reprojection error: 0.2784261067
Camera parameters without distortion successfully saved in "camera.xml"
Calibration with distorsion in progress on 5 images...
Camera parameters for perspective projection with distortion:
px = 276.3370556 py = 271.9804892
u0 = 162.3656808 v0 = 113.4484506
kud = 0.02739893948
kdu = -0.02719442967
Global reprojection error: 0.2602153289
Camera parameters without distortion successfully saved in "camera.xml"
Estimated pose on input data 0: 0.1004079988 0.07228624926 0.2759094615 0.1622201484 0.04594748279 -3.067523182
Estimated pose on input data 1: 0.1126235389 0.09590025615 0.2967542475 0.5743609549 0.1960511892 -2.915893698
Estimated pose on input data 2: 0.09983133876 0.08044014071 0.2920209765 -0.02917708148 0.6751719307 3.046437745
Estimated pose on input data 3: 0.07481330068 0.0832284992 0.2825482261 -0.09487329058 0.220597075 -2.747906623
Estimated pose on input data 4: 0.08061439562 0.08765353523 0.2837166409 0.1009190234 0.09325252997 -2.906079819
The resulting parameters are also saved in the ./camera.xml file.

Calibration from a circles grid


In this section we consider the ViSP symmetric circles grid pattern that has a size of 6 by 6. Each
circle's center of gravity is 0.034 meters distant from its horizontal or vertical neighbor. We took
5 images called circles-01.pgm, circles-02.pgm, ..., circles-05.pgm. Hereafter we give an
example of such an image.

Snapshot example of the symmetric circles grid used to calibrate the camera.
Before starting the calibration we need to create a configuration file. We create circles-grid.cfg with the following content:
# Number of inner corners per a item row and column. (square, circle)
BoardSize_Width: 6
BoardSize_Height: 6
# The size of a square in meters
Square_Size: 0.034
# The type of pattern used for camera calibration.
# One of: CHESSBOARD or CIRCLES_GRID
Calibrate_Pattern: CIRCLES_GRID
# The input image sequence to use for calibration
Input: circles-%02d.pgm
# Tempo in seconds between two images. If > 10 wait a click to continue
Tempo: 1
Note
The images and the configuration file used in this tutorial are available in the ViSP source
code and copied in the same folder as the camera_calibration binary.
To estimate the camera parameters, go to ViSP <binary_dir>/examples/calibration
and run:
./camera_calibration circles-grid.cfg
This command will produce the following output:
grid width : 6
grid height: 6
square size: 0.034
pattern : CIRCLES_GRID
input seq : circles-%02d.pgm
tempo : 1
frame: 1, status: 1, image used as input data
frame: 2, status: 1, image used as input data
frame: 3, status: 1, image used as input data

frame: 4, status: 1, image used as input data


frame: 5, status: 1, image used as input data
Calibration without distorsion in progress on 5 images...
Camera parameters for perspective projection without distortion:
px = 276.7844987 py = 273.2284128
u0 = 164.029061 v0 = 113.2926414
Global reprojection error: 0.3245572722
Camera parameters without distortion successfully saved in "camera.xml"
Calibration with distorsion in progress on 5 images...
Camera parameters for perspective projection with distortion:
px = 272.6576029 py = 268.9209423
u0 = 163.3267494 v0 = 112.9548567
kud = 0.03132515383
kdu = -0.03098719022
Global reprojection error: 0.2985458516
Camera parameters without distortion successfully saved in "camera.xml"
Estimated pose on input data 0: -0.08883802146 -0.07573082723 0.254649414 0.009277810667 -0.1162730223 -0.06217958144
Estimated pose on input data 1: -0.03031929668 -0.07792577124 0.226777101 0.04390110018 0.474640394 0.09584680839
Estimated pose on input data 2: 0.02757364367 -0.08075508106 0.2416734821 0.2541005213 0.469141926 0.5746851856
Estimated pose on input data 3: -0.08528071 -0.0552184701 0.216359278 0.433944401 0.01692119727 -0.01151973247
Estimated pose on input data 4: -0.1104723502 -0.0854285443 0.2684946566 0.4130829919 0.1926077657 0.2736623762
The resulting parameters are also saved in the ./camera.xml file.

Distortion removal
Once the camera is calibrated, you can remove the distortion in the images. The following
example available in tutorial-undistort.cpp shows how to do it.
#include <visp/vpImageIo.h>
#include <visp/vpImageTools.h>
#include <visp/vpXmlParserCamera.h>

int main()
{
  try {
    vpImage<unsigned char> I;
    vpImageIo::read(I, "chessboard.pgm");

    vpCameraParameters cam;
#ifdef VISP_HAVE_XML2
    vpXmlParserCamera p;
    vpCameraParameters::vpCameraParametersProjType projModel;
    projModel = vpCameraParameters::perspectiveProjWithDistortion;
    if (p.parse(cam, "camera.xml", "Camera", projModel, I.getWidth(), I.getHeight()) !=
        vpXmlParserCamera::SEQUENCE_OK) {
      std::cout << "Cannot find parameters for camera named \"Camera\"" << std::endl;
    }
#else
    cam.initPersProjWithDistortion(582.7, 580.6, 326.6, 215.0, -0.3372, 0.4021);
#endif
    std::cout << cam << std::endl;

    vpImage<unsigned char> Iud;
    vpImageTools::undistort(I, cam, Iud);
    vpImageIo::write(Iud, "chessboard-undistort.pgm");
  }
  catch(vpException &e) {
    std::cout << "Catch an exception: " << e << std::endl;
  }
  return 0;
}
In this example we first load the image chessboard.pgm
vpImage<unsigned char> I;
vpImageIo::read(I, "chessboard.pgm");
Then we read the camera parameters with distortion of a camera named "Camera" from the
./camera.xml file. This is only possible if ViSP was built with libxml2 third-party support.
vpCameraParameters cam;
#ifdef VISP_HAVE_XML2
vpXmlParserCamera p;
vpCameraParameters::vpCameraParametersProjType projModel;
projModel = vpCameraParameters::perspectiveProjWithDistortion;
if (p.parse(cam, "camera.xml", "Camera", projModel, I.getWidth(), I.getHeight()) !=
vpXmlParserCamera::SEQUENCE_OK) {
std::cout << "Cannot found parameters for camera named \"Camera\"" << std::endl;
}
If vpXmlParserCamera is not available (this may occur if ViSP was not built with libxml2), we
initialize the camera parameters "by hand" using the following code:
#else
cam.initPersProjWithDistortion(582.7, 580.6, 326.6, 215.0, -0.3372, 0.4021);
#endif
Finally, we create a new image Iud where distortion is removed. This image is saved in
chessboard-undistort.pgm.
vpImage<unsigned char> Iud;
vpImageTools::undistort(I, cam, Iud);
vpImageIo::write(Iud, "chessboard-undistort.pgm");
The resulting chessboard-undistort.pgm image is the following.

chessboard-undistort.pgm: image where the distortion was removed.
