VISION BASED SHAPE IDENTIFICATION AND POSITIONING (Robotic Arm) ADVANCED ROBOTICS ASSIGNMENT

Table of Contents

TITLE
OBJECTIVES
INTRODUCTION
REQUIREMENTS
METHOD
PROGRAM
RESULTS
IMPROVEMENTS
CONCLUSION
REFERENCE

TITLE: Vision-based shape identification and positioning

OBJECTIVES: Using the RoboRealm software, the shapes must be identified and classified. The Lynxmotion arm robot must then perform a pick-and-place operation using this information. (Pick and place could not be performed because the software crashed repeatedly; the arm could only be moved to a particular point.)

Identifying the shapes: i) The first objective is to identify the shapes and classify them using different values, all within a single program. ii) The shape identification algorithm should not be sensitive to orientation or position. iii) The shape identification algorithm should also not be sensitive to variation in size.

Shape position: i) Once a shape has been identified, its coordinates must be transferred to the robot to perform the pick-and-place operation. (Pick and place could not be performed because RoboRealm crashed whenever more than one Lynxmotion command was added.)

The student must devise one application method for image classification and give a detailed explanation of this method, with proper calculations where necessary.

INTRODUCTION (Detailed): 1. RoboRealm:

RoboRealm is a powerful robotic vision software application for use in computer vision, image processing, and robot vision tasks. Through an easy-to-use point-and-click interface, complex image analysis and robot control become straightforward.

Theory

The main purpose of RoboRealm is to translate what a camera sees into meaningful numbers in order to cause a reaction to what the image contains. This requires reducing an image to a few numbers, or a single number, that is meaningful in the context of whatever project you have in mind. As each project has vastly different requirements, with different attributes that need to be detected, RoboRealm takes a flexible approach: the basic tools are provided as modules that can be combined into a relevant pipeline. For example, if your task is to track a red ball, then the color red can be picked out by a color filter module as the first step. But if you wanted to track any type of ball, you would not use the red color as an attribute but instead a round-shape detector. The reason you might still use a color tracker instead of a shape tracker is that the color tracker may be more reliable and repeatable in some environments. Some features of RoboRealm are:

Inexpensive vision application
Links vision to motion
Interactive GUI interface
Socket-based server API
Multiple interfaces (disk, web, FTP, email, etc.)
Plug-in framework for custom modules

Compatibility

RoboRealm is compatible with the PCTx, Servo Controller, and Analog Readers for servo and motor control and sensor acquisition, as well as many other devices from various manufacturers.

2. Vision System:

A vision system is a computer-based device for interpreting visual signals from a video camera. Computer vision is important in robotics, where sensory abilities considerably increase the flexibility and usefulness of a robot. Vision systems very often use pattern recognition and are used in a variety of applications, from automated stock control robots to quality control in automated manufacturing systems.

It is the science and technology of machines that see. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner.

As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems. Examples of applications of computer vision include systems for:

Controlling processes (e.g. an industrial robot or an autonomous vehicle)
Detecting events (e.g. for visual surveillance or people counting)
Organizing information (e.g. for indexing databases of images and image sequences)
Modeling objects or environments (e.g. industrial inspection, medical image analysis or topographical modeling)
Interaction (e.g. as the input to a device for computer-human interaction)

Computer vision is closely related to the study of biological vision. The field of biological vision studies and models the physiological processes behind visual perception in humans and other animals. Computer vision, on the other hand, studies and describes the processes implemented in software and hardware behind artificial vision systems. Interdisciplinary exchange between biological and computer vision has proven fruitful for both fields.

Computer vision is, in some ways, the inverse of computer graphics. While computer graphics produces image data from 3D models, computer vision often produces 3D models from image data. There is also a trend towards a combination of the two disciplines, e.g., as explored in augmented reality. Sub-domains of computer vision include scene reconstruction, event detection, video tracking, object recognition, learning, indexing, motion estimation, and image restoration.

The fields most closely related to computer vision are image processing, image analysis and machine vision. There is a significant overlap in the range of techniques and applications that these cover. This implies that the basic techniques used and developed in these fields are more or less identical, which can be interpreted as if there were only one field with different names. On the other hand, research groups, scientific journals, conferences and companies find it necessary to present or market themselves as belonging specifically to one of these fields, and hence various characterizations which distinguish each of the fields from the others have been presented.

The following characterizations appear relevant but should not be taken as universally accepted: Image processing and image analysis tend to focus on 2D images, how to transform one image to another, e.g., by pixel-wise operations such as contrast enhancement, local operations such as edge extraction or noise removal, or geometrical transformations such as rotating the image. This characterization implies that image processing/analysis neither require assumptions nor produce interpretations about the image content.

Computer vision tends to focus on the 3D scene projected onto one or several images, e.g. how to reconstruct structure or other information about the 3D scene from one or several images. Computer vision often relies on more or less complex assumptions about the scene depicted in an image.

Machine vision tends to focus on applications, mainly in manufacturing, e.g. vision based autonomous robots and systems for vision based inspection or measurement. This implies that image sensor technologies and control theory often are integrated with the processing of image data to control a robot and that real-time processing is emphasized by means of efficient implementations in hardware and software. It also implies that the external conditions such as lighting can be and are often more controlled in machine vision than they are in general computer vision, which can enable the use of different algorithms.

There is also a field called imaging, which primarily focuses on the process of producing images but sometimes also deals with the processing and analysis of images. For example, medical imaging involves substantial work on the analysis of image data in medical applications.

Finally, pattern recognition is a field which uses various methods to extract information from signals in general, mainly based on statistical approaches. A significant part of this field is devoted to applying these methods to image data.

Typical tasks of computer vision

Each of the application areas employs a range of computer vision tasks; more or less well-defined measurement problems or processing problems which can be solved using a variety of methods. Some examples of typical computer vision tasks are presented below.

Recognition: The classical problem in computer vision, image processing, and machine vision is that of determining whether or not the image data contains some specific object, feature, or activity. This task can normally be solved robustly and without effort by a human, but is still not satisfactorily solved in computer vision for the general case: arbitrary objects in arbitrary situations. The existing methods for dealing with this problem can at best solve it only for specific objects, such as simple geometric objects (e.g. polyhedrons), human faces, printed or hand-written characters, or vehicles, and in specific situations, typically described in terms of well-defined illumination, background, and pose of the object relative to the camera.

Different varieties of the recognition problem are:

Object recognition: One or several pre-specified or learned objects or object classes can be recognized, usually together with their 2D positions in the image or 3D poses in the scene.

Identification: An individual instance of an object is recognized. Examples: identification of a specific person's face or fingerprint, or identification of a specific vehicle.

Detection: The image data is scanned for a specific condition. Examples: detection of possible abnormal cells or tissues in medical images or detection of a vehicle in an automatic road toll system. Detection based on relatively simple and fast computations is sometimes used for finding smaller regions of interesting image data which can be further analyzed by more computationally demanding techniques to produce a correct interpretation.

Motion analysis: Several tasks relate to motion estimation, where an image sequence is processed to produce an estimate of the velocity either at each point in the image or in the 3D scene, or even of the camera that produces the images. Examples of such tasks are:

Ego motion: Determining the 3D rigid motion (rotation and translation) of the camera from an image sequence produced by the camera.

Tracking: Following the movements of a (usually) smaller set of interest points or objects (e.g., vehicles or humans) in the image sequence.

Optical flow: To determine, for each point in the image, how that point is moving relative to the image plane, i.e. its apparent motion. This motion is a result both of how the corresponding 3D point is moving in the scene and how the camera is moving relative to the scene.

Scene reconstruction: Given one or (typically) more images of a scene, or a video, scene reconstruction aims at computing a 3D model of the scene. In the simplest case the model can be a set of 3D points. More sophisticated methods produce a complete 3D surface model.

Image restoration: The aim of image restoration is the removal of noise (sensor noise, motion blur, etc.) from images. The simplest approach to noise removal is to apply various types of filters, such as low-pass filters or median filters. More sophisticated methods assume a model of what the local image structures look like, a model which distinguishes them from the noise. By first analyzing the image data in terms of local image structures, such as lines or edges, and then controlling the filtering based on local information from the analysis step, a better level of noise removal is usually obtained compared to the simpler approaches.

Computer vision systems: The organization of a computer vision system is highly application dependent. Some systems are standalone applications which solve a specific measurement or detection problem, while others constitute a sub-system of a larger design which, for example, also contains sub-systems for control of mechanical actuators, planning, information databases, man-machine interfaces, etc. The specific implementation of a computer vision system also depends on whether its functionality is pre-specified or whether some part of it can be learned or modified during operation. There are, however, typical functions which are found in many computer vision systems.

Image acquisition: A digital image is produced by one or several image sensors, which, besides various types of light-sensitive cameras, include range sensors, tomography devices, radar, ultra-sonic cameras, etc. Depending on the type of sensor, the resulting image data is an ordinary 2D image, a 3D volume, or an image sequence. The pixel values typically correspond to light intensity in one or several spectral bands (gray images or color images), but can also be related to various physical measures, such as depth, absorption or reflectance of sonic or electromagnetic waves, or nuclear magnetic resonance.
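To make the acquisition step concrete, here is a minimal Python sketch of grabbing one frame from a USB webcam with OpenCV. This is illustrative only and is not part of the RoboRealm setup used later in this report; the output file name is arbitrary.

```python
# Minimal sketch of image acquisition from a USB webcam using OpenCV.
# RoboRealm performs this step through its own camera dialog; this is
# just an illustration of the acquisition stage described above.
import cv2

cap = cv2.VideoCapture(0)                 # 0 = first attached webcam
if not cap.isOpened():
    raise RuntimeError("Webcam not found; check the camera connection")

ok, frame = cap.read()                    # grab one BGR color frame
if ok:
    cv2.imwrite("captured_frame.png", frame)   # arbitrary output file name
cap.release()
```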

Pre-processing: Before a computer vision method can be applied to image data in order to extract some specific piece of information, it is usually necessary to process the data in order to assure that it satisfies certain assumptions implied by the method. Examples are

Re-sampling in order to assure that the image coordinate system is correct.
Noise reduction in order to assure that sensor noise does not introduce false information.
Contrast enhancement to assure that relevant information can be detected.
Scale-space representation to enhance image structures at locally appropriate scales.

Feature extraction: Image features at various levels of complexity are extracted from the image data. Typical examples of such features are:

Lines, edges and ridges.
Localized interest points such as corners, blobs or points.

Detection/segmentation: At some point in the processing a decision is made about which image points or regions of the image are relevant for further processing. Examples are

Selection of a specific set of interest points.
Segmentation of one or multiple image regions which contain a specific object of interest.

High-level processing: At this step the input is typically a small set of data, for example a set of points or an image region which is assumed to contain a specific object. The remaining processing deals with, for example:

Verification that the data satisfy model-based and application-specific assumptions.
Estimation of application-specific parameters, such as object pose or object size.
Classification of a detected object into different categories.

3. Image Processing:

Image processing is a physical process used to convert an image signal into a physical image. The image signal can be either digital or analog. The actual output itself can be an actual physical image or the characteristics of an image.

The most common type of image processing is photography. In this process, an image is captured using a camera to create a digital or analog image. In order to produce a physical picture, the image is processed using the appropriate technology based on the input source type.

In digital photography, the image is stored as a computer file. This file is translated using photographic software to generate an actual image. The colors, shading, and nuances are all captured at the time the photograph is taken, and the software translates this information into an image.

When creating images using analog photography, the image is burned into film by a chemical reaction triggered by controlled exposure to light. The image is then processed in a darkroom, using special chemicals to create the actual picture. This process is decreasing in popularity due to the advent of digital photography, which requires less effort and less special training to produce images.

In addition to photography, there are a wide range of other image processing operations. The field of digital imaging has created a whole range of new applications and tools that were previously impossible. Face recognition software, medical image processing and remote sensing are all possible due to the development of digital image processing. Specialized computer programs are used to enhance and correct images. These programs apply algorithms to the actual data and are able to reduce signal distortion, clarify fuzzy images and add light to an underexposed image.

Image processing techniques were first developed in the 1960s through the collaboration of a wide range of scientists and academics. The main focus of their work was to develop medical imaging, character recognition and the creation of high-quality images at the microscopic level. During this period, equipment and processing costs were prohibitively high.

The financial constraints had a serious impact on the depth and breadth of technology development that could be done. By the 1970s, computing equipment costs had dropped substantially making digital image processing more realistic. Film and software companies invested significant funds into the development and enhancement of image processing, creating a new industry.

There are three major benefits to digital image processing. The consistent high quality of the image, the low cost of processing and the ability to manipulate all aspects of the process are all great benefits. As long as computer processing speed continues to increase while the cost of storage memory continues to drop, the field of image processing will grow.

Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subfield of digital signal processing, digital image processing has many advantages over analog image processing; it allows a much wider range of algorithms to be applied to the input data, and can avoid problems such as the build-up of noise and signal distortion during processing.

Digital image processing allows the use of much more complex algorithms for image processing, and hence can offer both more sophisticated performance at simple tasks, and the implementation of methods which would be impossible by analog means. In particular, digital image processing is the only practical technology for:

Classification
Feature extraction
Pattern recognition
Projection
Multi-scale signal analysis

Some techniques which are used in digital image processing include:

Pixelization
Linear filtering
Principal components analysis
Independent component analysis
Hidden Markov models
Partial differential equations
Self-organizing maps
Neural networks
Wavelets

Finally, summing it up: image processing converts a target image captured by a Charge-Coupled Device (CCD) camera into a digital signal and then performs various arithmetic operations on the signal to extract the characteristics of the target, such as area, length, quantity and position. Finally, a good/bad judgement is output based on preset tolerance limits.

4. Some applications of vision systems and image processing:


Computer vision (I)
Optical sorting (I)
Augmented Reality (I)
Face detection (I)
Feature detection (I)
Lane departure warning system (I)
Non-photorealistic rendering (I)
Medical image processing (I)
Microscope image processing (I)
Morphological image processing (I)
Remote sensing (I)
Vision System Measures Scallops (V)
Inspecting turbine blades in aircraft engines (V)
Vision automates parking surveillance (V)
Vision helps delta robot sort biscuits (V)
Laser Marking and Image-based Industrial ID Reader Save Hundreds of Thousands of Dollars (V)
Automotive Supplier Achieves Perfect Quality with Low-Cost Machine Vision (V)
Vision System Prevents Injection Molding Tool Damage and Improves Part Quality (V)
Inspection of band saw blades made easy (V)

5. Robot Vision (an optional summary for robots):

The field of robot vision guidance is developing rapidly. The benefits of sophisticated vision technology include savings, improved quality, reliability, safety and productivity. Robot vision is used for part identification and navigation. Vision applications generally deal with finding a part and orienting it for robotic handling or inspection before an application is performed. Sometimes vision guided robots can replace multiple mechanical tools with a single robot station.

Creating Sight: A combination of vision algorithms, calibration, temperature software, and cameras provides the vision capability. Calibration of a robot vision system is very application dependent. Applications can range from simple guidance to more complex tasks that use data from multiple sensors.

Algorithms are consistently improving, allowing for sophisticated detection. Many robots are now available with collision detection, allowing them to work alongside other robots without the fear of a major collision. They simply stop moving momentarily if they detect another object in their motion path.

Seeing Savings: Robotic vision makes processes simpler and more straightforward, thus cutting costs:

Fixtures: Robot vision eliminates any need for hard tooling or fixtures. Now, products can be identified and applications performed without any need for securing.

Labor: There are labor and machinery savings that come with robotic vision. There is no need for sorts, feeders or upstream actuators anymore. Nor is there any need for labor to load or orient parts.

Finding the Right Vision: When deciding on the right robot vision guidance, work with an integrator you can trust and consider the following:

Communication: Robot vision must work and connect with the robot system and application. A disconnect could harm the robot or the product and cause loss of production and quality.

Environment: The workplace must be controlled so that robot vision remains sharp. Every contributing element in the environment, including lighting, product color changes and airborne chemicals, must be considered and tested.

REQUIREMENTS: 1. RoboRealm software (for image processing) 2. Five test images (square, circle, triangle, rectangle, and all together) 3. Camera (for vision) 4. Lynxmotion SSC-32 (robotic arm)

METHOD: First, install RoboRealm on the PC and connect the USB webcam. Start RoboRealm, go to Options, select Video, and enter the camera's name in the camera text box. Then return to the main screen and press the Camera tab; the camera starts running and its image appears on the RoboRealm work screen. Now place the all-in-one image (containing the square, circle, triangle and rectangle together) under the camera so that it is visible on the work screen. This confirms that the vision system works, i.e. that the camera is connected to the software. Next comes image processing. With the image already on the work screen, we proceed with the image processing steps (filtering). As the first step of image processing we: 1. Invert the image using the Negative module under Adjust.

A positive image is a normal image. A negative image is a tonal inversion of a positive image, in which light areas appear dark and vice versa. A negative color image is additionally color reversed, with red areas appearing cyan, greens appearing magenta and blues appearing yellow.

Film negatives usually also have much less contrast than the final images. This is compensated for by the higher-contrast reproduction of photographic paper, or by increasing the contrast when scanning and post-processing the scanned images.

The Negative (Solarize) module inverts all pixel values. For example, if a pixel is white it is changed to black, if it is black it is changed to white.

R' = 255 - R,  G' = 255 - G,  B' = 255 - B

We use this module to clean up the image so that each object stands out and can be separated from the background; it helps us distinguish the objects from the background.
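As a sketch of what the Negative module computes, the inversion can be written in a few lines of Python with NumPy (an illustration, not RoboRealm's own code):

```python
# Pixel-wise inversion of an 8-bit image: R' = 255 - R, G' = 255 - G, B' = 255 - B.
import numpy as np

def negative(image: np.ndarray) -> np.ndarray:
    """Return the tonal inverse of an 8-bit (uint8) image."""
    return 255 - image

# Example: a single white pixel becomes black.
pixel = np.array([[[255, 255, 255]]], dtype=np.uint8)
print(negative(pixel))        # -> [[[0 0 0]]]
```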

(The original image placed under the camera)

(The image obtained after applying the Negative module to the original image)

Looking at this image, we can clearly separate the objects from the background; the inversion makes the objects easier to identify.

2. After this we use the Auto Threshold module under Threshold

Thresholding is the simplest method of image segmentation. From a grayscale image, thresholding can be used to create binary images.

The Auto Threshold module will automatically threshold the current image into a binary black-and-white image. The threshold level is determined automatically based on the method selected. The appropriate method to use will depend on your application. Select Cluster (Otsu) if you are looking for a standard technique that is most often referenced in the current machine vision literature.

This module is useful when working with blob analysis or shape recognition where the background of the image can change and a manual threshold is not reliable enough.

The following briefly outlines the algorithms used by the thresholding methods to allow you to choose the most appropriate for your application. Note that they all operate on the image's histogram.

Two Peaks - Detects the two highest peaks in the histogram separated by the distance specified. The distance will ensure that peaks close to each other are not selected. The threshold is then found by finding the deepest valley between these two peaks.

Mean Level - the average pixel value is determined using the image histogram. All pixel intensities below that value are set to black with all pixel intensities above the mean set to white.

Black Percent - Also known as P-Tile. The threshold level is set based on the specified percentage of dark pixels (or background) in the image. The histogram is used to indicate how much of the image would be set to black at a given threshold. Once this amount exceeds the specified percentage, the current histogram index (0-255) is used as the threshold.

Edge Percent - Similar to Black Percent, the edge percent threshold is determined by the specified percentage of edge pixels that exist below the threshold. Instead of simply counting every pixel, the edge percent method bases its measurement on how much a pixel is part of an edge, by applying a Laplacian filter prior to threshold determination.

Entropy (Kapur) - Utilizes Kapur's entropy formula to find the threshold that minimizes the entropy between the two halves of the histogram created by a threshold.

Cluster (Otsu) - One of the most popular threshold techniques that creates two clusters (white and black) around a threshold T and successively tests the within-class variance of the clusters for a minimum. This algorithm can also be thought of as maximizing the between-class variance.

Symmetry - Assumes that the largest peak in the histogram is somewhat symmetrical and uses that symmetry to create a threshold just before the beginning of the largest peak. This technique is particularly useful to segment objects from large background planes.

Triangle - Works well with histograms that do not have well-defined peaks. This technique finds the maximum distance between a suggested threshold value and a line that connects the first non-zero pixel intensity with the highest peak. Inherent in this technique is the point-to-line distance equation.

For our image we use the Otsu algorithm of the Auto Threshold module. In computer vision and image processing, Otsu's method is used to automatically perform histogram shape-based image thresholding, i.e. the reduction of a gray-level image to a binary image. The algorithm assumes that the image to be thresholded contains two classes of pixels (e.g. foreground and background) and then calculates the optimum threshold separating those two classes so that their combined spread (intra-class variance) is minimal. The extension of the original method to multi-level thresholding is referred to as the multi-Otsu method. In Otsu's method we exhaustively search for the threshold that minimizes the intra-class variance, defined as a weighted sum of the variances of the two classes:

σ²_w(t) = ω₁(t) σ₁²(t) + ω₂(t) σ₂²(t)

where the weights ω₁(t) and ω₂(t) are the probabilities of the two classes separated by the threshold t, and σ₁²(t) and σ₂²(t) are the variances of these classes.

Otsu shows that minimizing the intra-class variance is the same as maximizing the inter-class variance:

σ²_b(t) = σ² - σ²_w(t) = ω₁(t) ω₂(t) [μ₁(t) - μ₂(t)]²

which is expressed in terms of the class probabilities ωᵢ(t) and class means μᵢ(t), which in turn can be updated iteratively. This idea yields an effective algorithm.

Algorithm:
Compute the histogram and probabilities of each intensity level.
Set up initial ωᵢ(0) and μᵢ(0).
Step through all possible thresholds t = 1 ... 255.
Update ωᵢ(t) and μᵢ(t) and compute σ²_b(t).
The desired threshold corresponds to the maximum of σ²_b(t).
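The following Python sketch shows Otsu's threshold selection on an 8-bit grayscale image, following the steps above. It is an illustration of the algorithm, not the Auto Threshold module's actual implementation.

```python
# Exhaustive Otsu threshold search: pick the threshold that maximizes the
# between-class variance of the two classes in the grayscale histogram.
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the Otsu threshold for an 8-bit grayscale image (uint8 array)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()                      # probability of each level
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):                       # step through all thresholds
        w1, w2 = prob[:t].sum(), prob[t:].sum()   # class probabilities
        if w1 == 0 or w2 == 0:
            continue
        mu1 = (levels[:t] * prob[:t]).sum() / w1  # mean of the dark class
        mu2 = (levels[t:] * prob[t:]).sum() / w2  # mean of the bright class
        var_between = w1 * w2 * (mu1 - mu2) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Binarise: pixels below the threshold become black, the rest white.
# binary = np.where(gray >= otsu_threshold(gray), 255, 0).astype(np.uint8)
```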

(The image obtained after applying the Auto Threshold module to the previous image)

3. After this we use the Blob filter under blobs

The blob filter module (also known as Particle Filter or Analysis) provides a way to identify a particular blob based on features that describe aspects of the blob itself and in relation to other blobs. The purpose of the blob filter is to provide enough feature descriptions of the desired blob to ensure that the identification of a particular blob is reliable and dependable despite a noisy background.

The blob filter must be used after a blob segmentation module such as the RGB Filter, Segment Colors, Flood Fill, or Threshold modules, which group pixels in some meaningful way into blobs of a single color, with black pixels as the background. The module you use to perform this segmentation will depend on your particular project task. Once the image has been grouped into blobs, the Blob Filter module is used to remove or filter out those blobs remaining in the image that are not wanted. For example, if you have an image that was filtered for the red color using the RGB Filter module and the image included a red or orange cone, the blob filter can be used to remove all blobs that are too small and not triangular in shape. Thus any red dirt or tree bark present after the red color detection would be removed by the blob filter, as it would fail a triangular shape test (assuming this is one of the attributes filtered on).

Once you have your images segmented into various blobs you then add in each blob attribute seen below and specify a weight threshold or count threshold to remove those unwanted blobs. Keep in mind that you can add multiple attributes one after the other that will remove blobs along the way in order to finish with the final desired blob. Look for attributes that create a wide distinction between your desired blob and other unwanted blobs (see the Show Weights checkbox to see all weights given the current attribute). Using the checkbox Create Blob Array will create the final BLOBS array variable that will contain the COG (center of gravity) of the blobs left after the filtering process. This variable can then be used to react to the presence of a particular object.

This allows us to filter out the unwanted shapes. First we start with the triangle. We need the work screen to show only the triangle and filter out the rest of the shapes. To do that we use Triangle Deviation (under Shapes) with a weight threshold of >= 0.65, so that when we put the four shapes under the camera with the blob filter active we see only the triangle, and any other shape is not visible. In effect we are constraining the filter: only a blob with a triangle deviation weight of >= 0.65 is shown as a triangle; otherwise nothing is shown.

Triangle Deviation: Estimates a perfect triangle from the blob outline and then determines how much the blob's outline deviates from the ideal extracted triangle.
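RoboRealm computes the triangle deviation weight internally. As a rough stand-in for the same idea, the sketch below approximates each blob's outline with a polygon using OpenCV and keeps only blobs whose approximation has three vertices; the minimum-area cutoff and the 0.04 approximation factor are illustrative assumptions, not values taken from this assignment.

```python
# Rough stand-in for a triangle test on a binary (thresholded) image.
# Not RoboRealm's deviation measure: it simply counts the vertices of a
# polygon approximation of each blob outline. Assumes the OpenCV 4.x API.
import cv2
import numpy as np

def keep_triangles(binary: np.ndarray) -> list:
    """Return contours of blobs whose outline approximates a triangle."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    triangles = []
    for c in contours:
        if cv2.contourArea(c) < 100:                     # ignore small noise blobs
            continue
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.04 * peri, True)  # simplify the outline
        if len(approx) == 3:                             # three corners -> triangle
            triangles.append(c)
    return triangles
```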

(We are just able to see the triangle as other shapes are filtered out)

Secondly we filter for the circle. We need the work screen to show only the circle and filter out the rest of the shapes. To do that we use Circular Deviation (under Shapes) with a weight threshold of >= 0.89 and Circular with a weight threshold of >= 0.78, so that when we put the four shapes under the camera with the blob filter active we see only the circle, and any other shape is not visible. In effect we are constraining the filter: only a blob with a circular deviation weight of >= 0.89 and a circular weight of >= 0.78 is shown as a circle; otherwise nothing is shown.

Circular: Circular blobs get higher weights.

Circular deviation or variance: The circular variance provides a measure of the spread of a set of dihedral angles. It is applied, for example, to each residue's distribution of phi, psi, chi-1, chi-2 and omega angles across all the members of an NMR ensemble. So, for example, it can provide a measure of how tightly or loosely a given residue's torsion angles cluster together across the entire ensemble of models.

It is defined as

S = 1 - R/n

where n is the number of members in the ensemble and R is the length of the resultant vector of the n angle values:

R = sqrt[ (Σ cos θᵢ)² + (Σ sin θᵢ)² ]

The value of the circular variance ranges from 0 to 1; the lower the value, the tighter the clustering of the values about a single mean value.

For two-dimensional distributions, such as the distributions of the phi-psi values on the residue-by-residue Ramachandran plots, the expression for R above is modified to combine the two angle distributions.
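A small numerical sketch of the circular variance formula above (illustrative only; the angles are supplied in radians):

```python
# Circular variance of a set of angles: 1 - R/n, where R is the length of the
# resultant vector. Values near 0 mean tight clustering, values near 1 mean
# the angles are widely spread.
import numpy as np

def circular_variance(angles_rad: np.ndarray) -> float:
    n = len(angles_rad)
    R = np.hypot(np.cos(angles_rad).sum(), np.sin(angles_rad).sum())
    return 1.0 - R / n

print(circular_variance(np.deg2rad([10, 12, 11, 9])))    # close to 0 (tight cluster)
print(circular_variance(np.deg2rad([0, 90, 180, 270])))  # 1.0 (evenly spread)
```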

(We are just able to see the circle as other shapes are filtered out)

Thirdly we filter for the rectangle. We need the work screen to show only the rectangle and filter out the rest of the shapes. To do that we use Circular Deviation (under Shapes) with a weight threshold of >= 0.88 (inverted), Quadrilateral Sides with a weight threshold of >= 0.75 and Quadrilateral Area with a weight threshold of >= 0.8, so that when we put the four shapes under the camera with the blob filter active we see only the rectangle, and any other shape is not visible. In effect we are constraining the filter: only a blob that passes the inverted circular deviation threshold of >= 0.88, the quadrilateral sides threshold of >= 0.75 and the quadrilateral area threshold of >= 0.8 is shown as a rectangle; otherwise nothing is shown.

Quadrilateral Sides: Estimates a perfect rectangle from the blob outline and then determines the squareness of the blob by comparing how well two sides of the ideal extracted rectangle compare in length. The more rectangular the blob, the more closely the opposing sides will match in length.

Quadrilateral Area: Estimates a perfect rectangle from the blob outline and then compares how well the blob's area matches the area determined by the ideal four sides.

(We are just able to see the rectangle as other shapes are filtered out)

Fourthly we filter for the square. We need the work screen to show only the square and filter out the rest of the shapes. To do that we use Circular Deviation (under Shapes) with a weight threshold of >= 0.88, Quadrilateral Sides with a weight threshold of >= 0.75 and Quadrilateral Area with a weight threshold of >= 0.8, so that when we put the four shapes under the camera with the blob filter active we see only the square, and any other shape is not visible. In effect we are constraining the filter: only a blob that passes the circular deviation threshold of >= 0.88, the quadrilateral sides threshold of >= 0.75 and the quadrilateral area threshold of >= 0.8 is shown as a square; otherwise nothing is shown.

(We are just able to see the square as other shapes are filtered out)

4. After this we use the Watch Variables module under Statements

The Watch Variables module allows you to peek into the current variable state maintained by RoboRealm. This module lists all current variables that are accessible by other modules and plugins.

5. After this we use the If Statement module under Statements

The If Statement module allows you to create a condition on which the enclosed modules will or will not be executed. This is similar to the conditional statements seen in VBScript and other scripting languages. Using the interface you can compare RoboRealm variables against other variables or values to determine whether the following modules should be executed.

This module is very important for executing our program. It ties together all the blob filter conditions for a particular shape. For the square, for example, the quadrilateral area (A2), quadrilateral sides and circular deviation values (under Shapes) are constrained together, using the values shown in the blob filter, so any shape fulfilling those requirements is classified as a square.

The If command is always followed by a Then and an End If command. Inside the Then branch we use the Speak function (under Audio) and the Lynxmotion SSC-32 module (under Servos, under Control). The Speak command lets us hear when a particular shape has been detected, and the Lynxmotion SSC-32 command moves the arm to a particular point when that happens.
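As an illustration of the decision logic that these If Statement modules implement, the sketch below combines the weight thresholds used in this report into one classification function. The dictionary keys are hypothetical variable names; in RoboRealm the actual weights appear under Watch Variables.

```python
# Plain-Python sketch of the shape decision implemented with If Statement
# modules. Variable names are hypothetical; the numeric thresholds are the
# ones chosen in the blob filters above.
def classify(weights: dict) -> str:
    if weights.get("TRIANGLE_DEVIATION", 0) >= 0.65:
        return "triangle"
    if (weights.get("CIRCULAR_DEVIATION", 0) >= 0.89
            and weights.get("CIRCULAR", 0) >= 0.78):
        return "circle"
    if (weights.get("QUAD_SIDES", 0) >= 0.75
            and weights.get("QUAD_AREA", 0) >= 0.8):
        # circular deviation >= 0.88 separates the square from the rectangle
        # (the rectangle test uses the inverted threshold)
        if weights.get("CIRCULAR_DEVIATION", 0) >= 0.88:
            return "square"
        return "rectangle"
    return "unknown"

print(classify({"TRIANGLE_DEVIATION": 0.7}))              # -> triangle
print(classify({"QUAD_SIDES": 0.8, "QUAD_AREA": 0.85}))   # -> rectangle
```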

(For triangle)

(For circle)

(For rectangle)

(For square)

6. After this we use the Speak module under audio

The Speak module uses the Microsoft Speech 5.0 download to speak written text. This module is useful for indicating certain states within the execution of the program when no screen is available on the robot. There are two ways to use the speech module. You can either specify a variable that will contain text that will be spoken or you can type in the text directly. Using a variable is useful if you wish to change the spoken text based on a VBScript module or other plugins.
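Outside RoboRealm, the same spoken feedback could be produced with a text-to-speech library such as pyttsx3, shown below as an assumed substitute for the Microsoft Speech engine that the Speak module uses.

```python
# Stand-in for the Speak module using the pyttsx3 text-to-speech library.
import pyttsx3

def announce(shape_name: str) -> None:
    engine = pyttsx3.init()
    engine.say(f"{shape_name} detected")   # e.g. "triangle detected"
    engine.runAndWait()                    # block until the speech has finished

# announce("triangle")  # would be called when the triangle condition is true
```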

So when a particular shape is detected, the If command triggers the voice, which speaks the name of the detected shape.

7. After this we use the Lynxmotion SSC-32 module under Servos, which is under Control

The Lynxmotion SSC-32 module allows you to interface RoboRealm to servos using a controller made by Lynxmotion called the SSC-32 Servo Controller. The servo controller supports up to 32 channels with 1 µs resolution. In addition, the board provides synchronized movement so that all servos update at the same time. The board also supports 4 digital or analog inputs for adding additional sensors to your robotic projects.

This module is used together with the If command. Once a shape is detected, the Speak module (optional) runs automatically; after that the Lynxmotion SSC-32 command comes into play and the robotic arm is moved to particular coordinates according to the shape detected.
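For reference, the SSC-32 is driven over a serial port with plain ASCII commands of the form "#<channel> P<pulse width> ... T<time>". The sketch below shows how such a group move could be sent from Python with pyserial; the COM port, baud rate, channel numbers and pulse widths are illustrative assumptions, and in this assignment the move is actually performed by RoboRealm's Lynxmotion SSC-32 module.

```python
# Minimal sketch of sending a synchronized group move to the SSC-32 with
# pyserial. Port, baud rate and servo values below are assumptions only.
import serial

def move_arm(positions: dict, move_time_ms: int = 1500,
             port: str = "COM3", baud: int = 115200) -> None:
    """positions maps servo channel -> pulse width in microseconds (500-2500)."""
    cmd = "".join(f"#{ch} P{pw} " for ch, pw in positions.items())
    cmd += f"T{move_time_ms}\r"            # T = total time for the whole move
    with serial.Serial(port, baud, timeout=1) as ssc32:
        ssc32.write(cmd.encode("ascii"))

# Example pose, e.g. the position used when a circle is detected:
# move_arm({0: 1500, 1: 1700, 2: 1300})
```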

(For triangle)

(For circle)

(For rectangle)

(For square)

So, using this command, the robotic arm moves to the specified position when a particular shape is detected by the camera. (The default position is an adjustable starting position.)

PROGRAM:

Only one Lynxmotion SSC-32 command is shown, because adding more than one Lynxmotion SSC-32 command to the program caused RoboRealm to crash and the computer to hang. For presentation purposes the program can be used without the Speak command.

RESULTS: After building the program we tested it with the various shapes at different orientations and obtained very good results. The program was able to detect the shapes at different orientations, spoke the name of each shape when detected, and moved the robotic arm to the position entered for it. The tables below show five test runs, followed by the overall detection rate per shape and orientation.

Test 1:
            Orientation 1   Orientation 2   Orientation 3   Orientation 4
Circle      Good            Good            Good            Good
Triangle    Good            Good            Good            Good
Rectangle   Bad             Bad             Good            Good
Square      Good            Good            Bad             Good

Test 2:
            Orientation 1   Orientation 2   Orientation 3   Orientation 4
Circle      Good            Good            Good            Good
Triangle    Good            Good            Good            Good
Rectangle   Bad             Good            Good            Good
Square      Good            Bad             Good            Good

Test 3:
            Orientation 1   Orientation 2   Orientation 3   Orientation 4
Circle      Good            Good            Good            Good
Triangle    Good            Good            Good            Good
Rectangle   Good            Good            Good            Good
Square      Bad             Good            Good            Good

Test 4:
            Orientation 1   Orientation 2   Orientation 3   Orientation 4
Circle      Good            Good            Good            Good
Triangle    Good            Good            Good            Good
Rectangle   Good            Good            Good            Good
Square      Good            Good            Good            Good

Test 5:
            Orientation 1   Orientation 2   Orientation 3   Orientation 4
Circle      Good            Good            Good            Good
Triangle    Good            Good            Good            Good
Rectangle   Good            Good            Good            Good
Square      Good            Good            Good            Good

Detection rate over the five tests:
            Orientation 1   Orientation 2   Orientation 3   Orientation 4
Circle      100%            100%            100%            100%
Triangle    100%            100%            100%            100%
Rectangle   60%             80%             100%            100%
Square      80%             80%             80%             100%

We see that the tests for the circle and triangle were perfect, which means the camera could detect these two shapes easily and proceed with the commands decided. This was not initially the case for the rectangle and square, the reason being that it is difficult to separate a square from a rectangle, since both are quadrilaterals. In the end, after changing the values in the blob filters, we got the correct output, which means the camera was able to detect the square and rectangle separately and proceed with the commands decided.

IMPROVEMENTS: We could use Bitmap matching (under the blob filter) alongside the present program to obtain a more accurate output. Alternatively, we could use Shape Matching (under Matching), with COG (under Analysis), alongside the present program and constrain its variables, through the If command, together with the present conditions. Doing so would give a higher rate of accuracy, and would also tell us how far the shape is from the centre of the screen. This program could be applied to picking up bottles, boxes, etc., since from the top such objects have a 2D shape: a bottle top looks like a circle, and a carton box from the top looks like a square or rectangle. To achieve a higher rate of accuracy and a higher speed of work, the camera should be mounted just above the gripper of the robotic arm, so as to be precise about the location of the object. The camera must also be placed in an environment where there is no noise in the image; otherwise the program will not be able to detect a particular shape, as there would be numerous blobs in the form of noise.

CONCLUSION: In my opinion, this program is better than a program that just uses bitmap matching, shape matching, etc., because this program applies many constraints when deciding the shape of an object. The object has to satisfy several constraints before it is finally classified as a particular shape, whereas with bitmap or shape matching success isn't guaranteed: the bitmap or template shape given to the program can differ from the object under the camera, and the object or shape has to resemble the stored shape exactly or it won't be detected. Doing this assignment I have learnt a lot about robotics. I have come to know how a robot sees and analyzes information, something we humans do with very little effort. I have also come to understand much more about image processing and robot motion in practice than I knew before. This assignment may help me a lot when I do my final-year project, as the topic has a wide scope.

REFERENCE:

Online:
http://www.wisegeek.com/what-is-image-processing.htm, accessed 21 April 2010
http://www.gisdevelopment.net/tutorials/tuman005p.htm, accessed 24 April 2010
http://www.fiserlab.org/manuals/procheck/manual/man_cv.html, accessed 24 April 2010
http://www.endurance-rc.com/roborealm.html, accessed 26 April 2010
http://www.roborealm.com, accessed 27 April 2010
http://www.ukiva.org/pages/applications.html, accessed 27 April 2010
http://www.cognex.com/ApplicationsIndustries/IndustryApps/default.aspx?id=72, accessed 28 April 2010
http://www.matrox.com/imaging/en/press/feature/packaging/robot/, accessed 29 April 2010
http://www.spacetechhalloffame.org/inductees_1994_Digital_Image_Processing.htm, accessed 29 April 2010

Books:
Tinku Acharya and Ajoy K. Ray, Image Processing - Principles and Applications
Wilhelm Burger and Mark J. Burge (2007), Digital Image Processing: An Algorithmic Approach Using Java
R. Fisher, K. Dawson-Howe, A. Fitzgibbon, C. Robertson, E. Trucco (2005), Dictionary of Computer Vision and Image Processing
Bernd Jahne (2002), Digital Image Processing
Tim Morris (2004), Computer Vision and Image Processing
