
6.141 - RSS - Spring 2012

Team 7 Grand Challenge Proposal

Christopher S. Chin, Mike Salvato, Alec Poitzsch, Eugene Y. Sun

April 30, 2012

Contents
1 Introduction
  1.1 Background & Inspiration
  1.2 Problem Specification
  1.3 Solution Overview
  1.4 Technical Focus
  1.5 Assumptions of Solution
2 Hardware System
  2.1 Body
  2.2 Motor & Wheels
  2.3 Bump Sensor
  2.4 Sonar Sensor
  2.5 Kinect
  2.6 Block Collector
    2.6.1 Robot Arm
    2.6.2 Block Storage
  2.7 Block Dispenser
3 Software System
  3.1 System Overview
  3.2 Motor Control
  3.3 Vision
  3.4 Local Navigation
  3.5 Global Navigation
  3.6 Goal Planner
  3.7 Arm Control
4 Milestones & Implementation Plan
  4.1 Tasks, Dates & Durations
  4.2 Foreseeable Difficulties
5 Conclusion
  5.1 Accomplishments
  5.2 Possible Future Enhancements

Chapter 1

Introduction
Excitement could be seen in people's faces when the challenge was revealed; "Build a Shelter on Mars" is an exciting yet difficult mission to accomplish. In this chapter, the following topics will be discussed:

- The background of the challenge, and the inspiration we drew on in order to understand the magnitude and requirements of the challenge.
- The specification of the problem, what tasks we need to accomplish, and what implementation steps we need to take.
- An introduction to our approach and solution.
- The assumptions we made in our solution.

1.1

Background & Inspiration

The use of robots in space exploration is essential because they can operate in the vacuum of space and endure both high levels of radiation and extreme temperatures, allowing them to explore and perform in places not currently accessible to human beings. In this paper we detail a robotics design project inspired by the Mars rover missions led by the National Aeronautics and Space Administration (NASA). Rover missions to Mars date as far back as 1971, when the USSR managed to send two rovers to the Martian surface but was unable to collect any data or command the robots to move. Successful attempts were first accomplished in 1997 by NASA, which currently operates a rover (Opportunity) on the surface of Mars. These Mars rover missions all encountered significant technological challenges involving the physics of space travel as well as communications networks, but the motivation for this project lies in the tasks that these rovers are challenged to perform on a day-to-day basis. Rovers on the surface of Mars navigate the planet for mapping purposes but also perform manipulation tasks in order to collect samples for geological analysis. We wish to emulate and expand upon the tasks performed by a robot on unfamiliar terrain and develop functional and creative navigation, object detection, and object manipulation systems.

1.2

Problem Specication

The Grand Challenge for 6.141 Spring 2012 is to design and construct a robot with a navigation, perception, and manipulation framework designed to build a shelter on Mars. Given a map of the surface of Mars (modeled by a maze) and a set of known block locations, the robot needs to be able to successfully navigate the terrain, using its global knowledge of the map as well as local sensors (camera, sonar, and bump sensors) for obstacle detection. Construction materials in the form of blocks are scattered across the terrain and can be differentiated by their color. The robot must be able to autonomously sense the type of each material through color detection as well as its location through image analysis. It must be able to visually servo to the construction materials and gather them using a grasping robotic arm. Finally, the finished robot must be able to deposit these construction materials and reassemble them into a structure resembling a shelter.

1.3

Solution Overview

In this paper we detail all the hardware and software components that comprise our robot design, a system designed to meet the specifications of the Grand Challenge. In particular, the sensors and modules of our robot are heavily inspired by the labs we undertook throughout the semester of 6.141. The physical design and layout of the robot is similar to that from the labs. An example image of the robot we used in our labs throughout the course can be seen below in Figure 1.1.

Figure 1.1: Prototype of robot from Labs 1-6. The design used in the Grand Challenge will contain all the pictured components as well as a robotic arm, block storage unit, and block dispenser.

We employ a robot similar to the one pictured in Figure 1.1. It utilizes the same chassis and motors for locomotion. A netbook running a ported version of Willow Garage's Robot Operating System (ROS) powers the brain of the robot, which interfaces digitally with the sensors on the exterior of the robot via an Orcboard. To analyze the details of its environment, the robot employs several sensors: bump sensors for detection of immediate obstacles, sonars for mapping terrain, and a camera for imaging, detecting blocks, and visual servoing. We plan to replace the camera shown in the image with a Microsoft Kinect, however. Additionally, we have an array of components for manipulating the blocks: a robotic arm for grasping, a tower on the back of the robot for storing blocks, and a dispenser for placing sets of blocks on the ground.

The software systems employed by our robot draw from the modules implemented during the lab exercises in the first half of the course. The robot controls its rotational and translational movement through its MotionControl system. It processes input from the camera and issues navigation commands via its VisualServo system. It aggregates sensor (bump and sonar) information to build a map of the environment through its LocalNavigation system, and employs this map to run path planning and waypoint navigation algorithms to provide point-to-point navigation capabilities within its GlobalNavigation system. Finally, the robot manipulates its grasping arm and picks up blocks as they are identified through its ArmControl system, and dispenses them in an appropriate configuration via its BlockDispense system.

Our final implementation performs the tasks of the Grand Challenge. Under the conditions of the challenge, the information initially available to the robot when it is deposited in the challenge environment is a partial map of the terrain and a set of known block locations. It travels from one known block to another, gathering them one by one. Any blocks that are encountered en route from one block to another are detected by the camera and collected. Any unknown obstacles it encounters, either physically through collision with a bump sensor or through detection by sonar or the camera, are mapped and added to the robot's known set of obstacles. The robot is able to navigate around obstacles in the form of walls and visually servo to blocks, mechanically picking them up with its arm and adding them to the block storage tower on its chassis. When the robot cannot hold any more blocks or the set of known blocks has been collected, the robot pauses and dispenses sets of blocks in a wall configuration, representing a shelter. Thus it is possible for the robot to build several shelters if enough blocks are found. Finally, the robot terminates when time has run out.

1.4

Technical Focus

Our approach to solving this problem focuses on two priorities: the integration of a Kinect sensor and the use of vision processing for navigation and localization. We wish to replace the standard camera with a Kinect for several reasons. First, the built-in camera of the Kinect has a higher resolution than the simple web camera, which will allow smoother and more robust object detection from the hardware end. Second, the use of depth information from the Kinect will allow us to expand the vision functionality of our robot. In the labs, we only used vision to detect objects. This allowed the robot to classify an object based on color and to estimate its distance from a block based on an expected size and position in the frame of view of the camera. We firmly believe that the vast wealth of vision data the camera acquires can be put to additional uses to enhance the functionality of our robot. The camera sees the walls in front of the robot, and the depth sensors from the Kinect will tell us exactly how far away they are. This gives us more accurate local obstacle detection as well as secondary information so that the robot can localize itself in the world of the map. Additionally, it is easier to immediately classify newly discovered obstacles using this depth information. Edge detection using the images and depth grids processed by the Kinect should allow us to spot multiple obstacles and their boundaries on the fly while knowing exactly where they are relative to the robot. Should the resolution of the Kinect's depth sensor prove to be high enough, we may use it to detect the blocks for grasping and visual servoing purposes. Finally, we will extend the functionality of the basic vision to detect known fiducials placed around the perimeter of the map in order to localize the robot.

1.5

Assumptions of Solution

Our approach to solving the Grand Challenge contains several assumptions which limit the complexity of the system and make the design, construction, and execution process feasible within the allotted time frame. Initially we assume that the entire environment is flat, so we can navigate and classify obstacles in a 2-dimensional plane. The environment is unchanging with the exception of the shelters we build, so obstacles, once discovered, are assumed to be permanent. All walls are assumed to be straight and thus can easily be characterized by line segments with a single start and end point.

We assume that all of the blocks are reasonably simple to identify. We assume that they are of a fixed shape, size, robustness, and weight, with the possibility of encountering 1x1x1 blocks or 1x1x2 rectangular prisms. We assume that all blocks on the terrain belong to a fixed set of colors, which vastly simplifies our ability to detect them using a camera. These colors are assumed to differ significantly from the walls or floor. Blocks are assumed to be flat on the ground when they are located or discovered, which simplifies the process required to visually servo to the block and pick it up with the arm. The environment we will be travelling in will be entirely traversable with wheels, and will require no movement to areas of different heights that would demand different movement mechanisms. The lighting in the room should be roughly uniform, and bright enough to view the walls, field, and blocks. The field is static, in that no pieces will move (except the robot and the blocks it moves), and no new pieces will be introduced while the robot is operating.

Finally, we make additional assumptions about the general execution of the robot. We assume that the map will be sufficiently small as to allow the robot to travel across the map multiple times within the time allotted. Conditions will also be within reasonable operating ranges for the robot and wheels (heat, humidity, E&M fields, etc.). We assume that the number of known blocks is enough to build a single structure and that some unknown blocks can be found en route to the known blocks.

Chapter 2

Hardware System
The physical components, or the hardware system that we choose to use, define the functionality of the Mars rover and hence shape our approach to the Grand Challenge problem. In this chapter, the following physical components of our Mars rover will be discussed in detail:

- The basic components of the vehicle (the chassis, motor, and wheels).
- The sensors we use to collect data from the environment.
- The block collector and dispenser we use to collect blocks and build structures.

2.1

Body

The frame of the robot consists of a chassis upon which a netbook, the Orcboard, and the sensors of the robot are mounted. The bulk of the chassis consists of a large piece of pegboard reinforced by aluminum L-brackets and angle brackets around its edges. The chassis is roughly level with the surface of the ground and is rectangular in shape, and is held up by two wheels and two casters. The two wheels control the movement of the device and are in the front half of the robot, while the casters are in the rear half. Figure 2.1 displays the coordinate framework of the robot as it is defined on the planar surface of the chassis. The origin of the robot is defined at the center of the robot, directly between the two wheels. In the odometry calculations this will be where the robot perceives its center in the context of the world. Sensors we add, such as sonars and cameras, which could potentially derive distance information, will return coordinates that have been appropriately geometrically transformed in order to be defined relative to this coordinate frame.

Figure 2.1: Structure of robot chassis. To the left is the 3D model of the frame and to the right is a bird's-eye view of the chassis with inscribed coordinate frame.
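As a concrete illustration of this convention, the sketch below (plain Python, with hypothetical sensor mounting parameters) shows how a range/bearing measurement from a sensor mounted anywhere on the chassis would be rotated and translated into the robot coordinate frame described above.

```python
import math

def sensor_to_robot_frame(mount_x, mount_y, mount_theta, range_m, bearing_rad):
    """Convert a range/bearing reading from a sensor mounted at (mount_x, mount_y)
    with heading mount_theta (all expressed in the robot frame) into a point in
    the robot's coordinate frame, whose origin lies between the two wheels."""
    # Point in the sensor's own frame.
    px = range_m * math.cos(bearing_rad)
    py = range_m * math.sin(bearing_rad)
    # Rotate by the sensor's mounting angle, then translate by its offset.
    rx = mount_x + px * math.cos(mount_theta) - py * math.sin(mount_theta)
    ry = mount_y + px * math.sin(mount_theta) + py * math.cos(mount_theta)
    return rx, ry
```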

2.2

Motor & Wheels

The wheels of the robot are actuated by gearmotors. These gearmotors are connected to shaft encoders that communicate with the netbook on the robot through the Orcboard. This setup allows the robot to receive feedback from its wheels in addition to commanding their movement. This allows us to build a functional wheel control class that can correct for slippage of the wheels as well as any torque imbalances. The gearmotor we are using is the GM9236S025 Lo-Cog DC Servo gearmotor. The maximum angular velocity of our motors is roughly 10 radians per second, and the maximum Pulse Width Modulation (PWM) value is 255. The wheels possess the following physical attributes:

- Wheel diameter: 12.4 cm (right), 12.3 cm (left)
- Wheel circumference: 77.9 cm (right), 77.3 cm (left)
- Wheelbase: 43.3 cm
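These measurements feed directly into the dead-reckoning odometry. A minimal sketch of that calculation for our differential-drive base is shown below; the encoder resolution (TICKS_PER_REV) is an assumed placeholder that would be replaced by the measured value for the GM9236S025 encoders.

```python
import math

# Assumed encoder resolution; the real ticks-per-revolution depends on the
# gearbox and encoder and should be measured on the actual robot.
TICKS_PER_REV = 2000.0
WHEEL_CIRC_LEFT = 0.773   # m, from the measured circumferences listed above
WHEEL_CIRC_RIGHT = 0.779  # m
WHEELBASE = 0.433         # m

def update_odometry(x, y, theta, d_ticks_left, d_ticks_right):
    """Dead-reckoning update for a differential-drive robot, given the change
    in encoder ticks on each wheel since the last time step."""
    d_left = d_ticks_left / TICKS_PER_REV * WHEEL_CIRC_LEFT
    d_right = d_ticks_right / TICKS_PER_REV * WHEEL_CIRC_RIGHT
    d_center = (d_left + d_right) / 2.0        # forward travel of the origin
    d_theta = (d_right - d_left) / WHEELBASE   # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2 * math.pi) - math.pi
    return x, y, theta
```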

2.3

Bump Sensor

One formidable part of the Grand Challenge is the implementation of an ability to detect new obstacles. We accomplish this in the simplest way possible by allowing the robot to bump into foreign objects. We detect collisions with two bump sensors placed on either side of the front of the robot; this placement allows the robot to align its front surface flush with the surface of an obstacle. These sensors are simple normally-open (NO) switches, each with a whisker mounted on it to increase the chance that a collision will hit the switch and not the rest of the robot.

Figure 2.2: Bump sensor schematic

In the schematic for the basic bump sensor we can observe that by default, ground (0V) is connected to the signal port. When the switch is actuated by its whisker's collision with a foreign object, a connection is formed between the signal terminal and the +5V power supply. Thus we can detect very easily whether the bump sensor sees a collision or not simply by looking at the Signal port. The resistor in the diagram guarantees that the +5V supply is not connected directly to ground when the switch is closed, preventing it from drawing excess power.

2.4

Sonar Sensor

We will use small sonar sensors placed on the front and sides of our robot for a variety of navigational applications. These sensors give us simple distance information, allowing us to navigate based on distances and to assemble a rudimentary point cloud in order to classify newly discovered obstacles. The sonars that we are using are the SRF02 ultrasonic range finders, small single-transducer rangefinders embedded in a PCB. These sensors use a single transducer for both transmission and reception of sound waves. Figure 2.3 shows a collection of range values from a sample run. The sensor's minimum measurement range varies from 15 to 18 cm.

Figure 2.3: Range data from a sample run, employing a simple distance threshold to detect obstacles.
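The thresholding mentioned in the caption can be expressed in a few lines. The sketch below uses placeholder threshold values and a hypothetical dictionary of named sonar readings; the real thresholds would be tuned on the robot.

```python
# Assumed values for illustration.
MIN_RANGE = 0.18      # m, below the SRF02's reliable minimum range
OBSTACLE_RANGE = 0.5  # m, distance at which a reading counts as an obstacle

def detect_obstacles(sonar_ranges):
    """Flag sonar readings that indicate a nearby obstacle.

    sonar_ranges maps a sensor name (e.g. 'front_left') to its latest range
    reading in meters; readings below the minimum range are discarded."""
    obstacles = {}
    for name, r in sonar_ranges.items():
        if r is None or r < MIN_RANGE:
            continue  # ignore invalid or unreliably close readings
        obstacles[name] = r < OBSTACLE_RANGE
    return obstacles
```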

2.5

Kinect

On the front of the robot we mount a Kinect to collect vision and depth information. The specific device we are using is the Asus Xtion Pro, a depth sensor which is modelled after the Xbox Kinect. This sensor contains a camera, depth sensors, an infrared sensor, and a microphone. The camera will serve to perform visual servoing to blocks based on color detection. It will also be used to recognize color-coded fiducials placed around the maze. The two depth sensors on the Kinect provide depth information regarding objects within the field of view of the camera and will be very useful for obstacle detection as well as for localization by detecting walls.

2.6
2.6.1

Block Collector
Robot Arm

We manually construct a robotic arm out of laser-cut plastic components and screws. The model of robotic arm we are using is the AEG2000B Robotic Arm. The arm breaks down into three major components, each of which consists of dozens of pieces and screws: the shoulder/upper arm, the wrist, and the gripper. Each of these parts has a servo in order to actuate movement. The shoulder servo allows the arm to rotate outward with a very large sweeping range of motion. The servo at the base of the gripper allows the hand to move radially about the forearm of the robot; this sort of movement is very useful for precise, short-distance movement to get the hand of the robot around an object it is trying to grasp. Finally, the servo in the hand allows the robot to grip, opening and closing the clasp of its hand. Combined, these three parts allow the arm to move to any point protruding radially outward in front of the very center of the robot. The completed arm can be seen in Figure 2.4.

Figure 2.4: Assembled arm in dedicated hands.

The range of movement of the arm is limited in that it cannot reach to either side of the robot. In order to grasp any object, the robot must rotate until it is directly facing it. In addition to the plastic parts and servos, the arm contains a mechanism to detect whether or not objects are in the gripper. A breakbeam sensor consists of an infrared light emitting diode (LED) and an infrared light detector on either side of the gripping prongs of the robotic arm. When there is an obstruction between the infrared LED and the detector, the robot assumes there is a graspable object within the immediate range of the gripper. However, when the infrared light is able to reach the detector uninterrupted, the robot knows that there is nothing in its hand, and it does not need to drive the servo controlling the gripper. We employ a Sharp IS471F and a 940nm infrared LED in our sensor; a circuit schematic of our sensor is shown in Figure 2.5.

Figure 2.5: Breakbeam sensor schematic (source: arm assembly instructions)

2.6.2

Block Storage

As blocks are found and gathered by the arm, the robot needs a mechanism by which it can store blocks for later use in constructing shelters. Our design for a block storage unit is a simple tower at the rear of the robot. It has a cross-sectional area large enough to hold one column of blocks. The top of the tower resembles a funnel, as shown in Figure 2.6. This wider area makes it easier for the robot to deposit blocks into the tower given the limited precision with which it can control its arm. If a block is dropped in off center, it will slide down the angled walls into the interior column. We estimate that this block storage unit will be able to hold roughly 6 blocks, enough to build a simple shelter. This tower will be machined out of aluminum.


Figure 2.6: Block storage unit model

2.7

Block Dispenser

Because our blocks are stored in a vertical column, our robot can exploit gravity to dispense blocks on the terrain surface. Since the size of the blocks is known, our tower will contain a series of gates spaced at multiples of the block height. When blocks need to be deposited, the gates open in order to allow blocks to drop down from the tower to the surface. We will have multiple gates for multiple potential heights of structures, and the gates will be controlled by plates attached to servos. A sketch of this implementation can be seen below in Figure 2.7. The blocks are inserted and stored in the tower as detailed in the previous section. When blocks are being stored, Gate 1 is open and Gate 2 is closed. These two gates consist of plates which are actuated by the servos. Blocks are dispensed one at a time; when a block is to be deposited from the bottom of the tower, Gate 1 closes and Gate 2 opens. Gate 1 prevents more than one block from being dispensed at a time. Finally, when a sufficient amount of time (on the order of seconds) has elapsed for the dispensed block to fall through, the gates revert back to their original setup, with Gate 2 closing and Gate 1 opening.

Figure 2.7: Block depositor via gates
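The gate sequencing above maps naturally onto a small control routine. The following sketch assumes a hypothetical set_gate helper standing in for the servo commands that would actually be sent through the Orcboard, and an assumed drop time of a couple of seconds.

```python
import time

def set_gate(gate_id, is_open):
    # Placeholder for the servo command driving the corresponding gate plate.
    print(f"gate {gate_id} -> {'open' if is_open else 'closed'}")

DROP_TIME_S = 2.0  # assumed time for one block to fall clear of the tower

def dispense_one_block():
    """Dispense a single block from the bottom of the storage tower.

    Resting state: Gate 1 open (blocks rest on Gate 2), Gate 2 closed."""
    set_gate(1, False)       # close Gate 1 so only one block can drop
    set_gate(2, True)        # open Gate 2 to release the bottom block
    time.sleep(DROP_TIME_S)  # wait for the block to fall through
    set_gate(2, False)       # close Gate 2 again
    set_gate(1, True)        # reopen Gate 1 so the column settles down
```

Driving the robot forward roughly one block length between calls to dispense_one_block would lay the blocks out in the wall configuration described in Chapter 1.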

Chapter 3

Software System
The software system is the brain of the Mars rover, the central part of our solution. It offers the robot the intelligence to see, navigate, obtain scattered blocks, and build structures. In this chapter, the details of how we implement our software system will be given, mainly on:

- The system state diagram, a high-level description of how we model the Mars rover's brain.
- The local and global navigation systems, detailing the robot's localization and map-building process.
- The arm control system: how the robot picks up blocks, stores them, and puts them down strategically to build structures.
- The vision system: how the robot perceives the environment through its sensors.
- The search and goal planner: how the robot determines where to go and what to do at each time step.
- The input/output links among the different modules, connecting the subsystems together.

3.1

System Overview

The following will be a description of all of the necessary software systems used to control and navigate our robot. Some systems, such as motor control, are largely just means of controlling our hardware directly. However, on top of these basic control systems we will need to have larger planning systems, such as navigation and vision, which come together as demonstrated in Figure 3.1. A basic state machine outlining the general operation of the robot is shown in Figure 3.2.


Figure 3.1: A diagram of the inputs and outputs of our robot.

Figure 3.2: A state machine diagram describing the operation of our robot.

One thing to note about the state machine diagram is its error-correction mechanism. In addition to the robot's recovery from dropping blocks, during which it re-enters the Looking-for-blocks state, the robot also recovers from unexpected collisions with walls by entering a Relocalization state (all related transitions are colored blue). Though relocalization and map correction are done periodically and constantly by the robot in all states, in the explicit Relocalization state the robot backs off from the wall, rotates 360 degrees, and recalculates its odometry in the maze with its depth sensors. After obtaining a corrected odometry, the robot goes back to its previous state, whether that is Looking-for-blocks or Constructing. The robot also enters an end state after the time limit of the challenge is reached; in that state the robot stops any motion and stands its ground in a spirit of either failure or triumph.
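A minimal sketch of this high-level state machine is given below. The state and event names are our own shorthand for the behavior just described, not the exact labels used in Figure 3.2.

```python
from enum import Enum, auto

class State(Enum):
    LOOKING_FOR_BLOCKS = auto()
    VISUAL_SERVO = auto()
    PICKING_UP = auto()
    CONSTRUCTING = auto()
    RELOCALIZING = auto()
    DONE = auto()

def next_state(state, events, previous_state):
    """One step of the high-level state machine.

    `events` is a set of strings produced by the sensing and planning modules;
    `previous_state` lets the robot resume what it was doing after relocalizing."""
    if "time_expired" in events:
        return State.DONE
    if "unexpected_collision" in events and state != State.RELOCALIZING:
        return State.RELOCALIZING
    if state == State.RELOCALIZING and "relocalized" in events:
        return previous_state
    if state == State.LOOKING_FOR_BLOCKS and "storage_full" in events:
        return State.CONSTRUCTING
    if state == State.LOOKING_FOR_BLOCKS and "block_seen" in events:
        return State.VISUAL_SERVO
    if state == State.VISUAL_SERVO and "block_in_gripper_range" in events:
        return State.PICKING_UP
    if state == State.PICKING_UP and ("block_stored" in events
                                      or "block_dropped" in events):
        return State.LOOKING_FOR_BLOCKS   # recovery from a drop is to search again
    if state == State.CONSTRUCTING and "shelter_built" in events:
        return State.LOOKING_FOR_BLOCKS
    return state
```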

3.2

Motor Control

The inputs to the motor control system are the number of ticks that each wheel has turned (which is influenced by the load on the wheel) and the desired velocities dictated by the local navigation system. This system uses a PWM signal to directly control the velocity of the wheels. In order to determine the PWM value to send to each wheel at a given time step, it runs a PID controller that computes, at each step and for each wheel, what this PWM signal should be, ensuring that there is not too much slippage and that the wheels are moving at the commanded velocities.
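A sketch of this per-wheel loop is shown below. The gains, and the idea of adding a feedforward term proportional to the desired velocity, are illustrative placeholders to be tuned on the actual motors; only the 255 PWM limit and the rough 10 rad/s top speed come from the hardware description above.

```python
class WheelPID:
    """Per-wheel controller mapping a velocity error to a PWM command."""
    MAX_PWM = 255    # maximum PWM value accepted by the motor driver
    MAX_VEL = 10.0   # rad/s, roughly the motors' top angular velocity

    def __init__(self, kp=20.0, ki=5.0, kd=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, desired_vel, measured_vel, dt):
        """Velocities in rad/s; measured_vel comes from the encoder ticks."""
        error = desired_vel - measured_vel
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        # Feedforward guess plus PID correction on the velocity error.
        pwm = (desired_vel / self.MAX_VEL) * self.MAX_PWM
        pwm += self.kp * error + self.ki * self.integral + self.kd * derivative
        return int(max(-self.MAX_PWM, min(self.MAX_PWM, pwm)))
```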

3.3

Vision

We hope to spend a fair amount of time implementing robust vision software. In our current system, we simply find all pixels with saturation values above a threshold, then find the center of mass of those within a range of RGB values that we consider to be red. This system is less robust than desirable, and currently it is only applicable to one color. In order to have it work for more than one color, we could simply adapt the system to check which color range contains the highest number of pixels, and assume there is a ball of that color in range. However, we instead intend to implement a more robust solution. As it stands we do no preprocessing on incoming images. In order to have fewer invalid discrete boundaries, we will apply processing, such as a Gaussian blur, to our images in order to smooth them. We can then more easily use connected-components analysis to determine where an object actually lies, instead of simply using all of the pixels in the frame. Additionally, in order to better distinguish objects from one another, we may apply additional edge detection algorithms, similar to those used in the human eye. These algorithms decrease the brightness of adjacent pixels which share similar hues, in order to make edges between colors more prominent. If expanded enough, this could also act as an additional means of distinguishing the floor and walls, separate from our sonar and bump sensors. Additionally, in order to supplement our bump sensors and sonars, we can use the depth information of depth-sensing cameras like the Microsoft Kinect or the Asus Xtion. With this we will know how far away we are from walls and fiducials, which we can pass up to the local navigation system. We can then parse the distance to nearby walls and better localize ourselves.
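The current saturation-threshold-plus-centroid approach can be summarized in a few lines. The sketch below is a simplified stand-in (using NumPy and an HSV image with a hue range rather than the exact RGB ranges of the lab code), and the threshold values are placeholders.

```python
import numpy as np

SATURATION_MIN = 0.4  # placeholder; the real threshold was tuned empirically

def find_colored_blob(hsv, hue_lo, hue_hi):
    """Return the pixel centroid and size of the region whose hue falls in
    [hue_lo, hue_hi] and whose saturation exceeds SATURATION_MIN.

    hsv: HxWx3 float image with hue and saturation scaled to [0, 1]."""
    hue, sat = hsv[..., 0], hsv[..., 1]
    mask = (sat > SATURATION_MIN) & (hue >= hue_lo) & (hue <= hue_hi)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    # Center of mass of the matching pixels; a connected-components pass
    # (as discussed above) would make this robust to scattered noise.
    return float(xs.mean()), float(ys.mean()), int(mask.sum())
```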


3.4

Local Navigation

The local navigation system needs to decide where to go based on objects around it. Its inputs will be maps of objects given to it by the vision system, the current location and goal given to it by the global navigation system, and bump sensor and sonar readings. This system will determine what movement is viable along the path given to it by the global navigation system, picking the set of movements that it can accomplish while staying closest to the global navigation path. It can do this by making sure that it stays a fixed distance from the walls using the sonars and Kinect. Essentially, it will figure out the valid locations given these pieces of information, then minimize the distance from those to its goal line. It will then determine what distance and at what velocity the robot wants to move in a given time step. It will output this information to the motors, which will then move the robot. When the robot detects a new wall, it must classify it in order to maintain the map of the world. We will build upon the simple implementation we first used in lab. When the robot bumped into a wall, it sought to follow the entire length of the wall in order to determine its coordinates and place it into the map of the world, as shown below in Figure 3.3. We believe that this will add too much time to the robot's performance travelling around the maze. Instead, when the robot bumps into a wall with the bump sensor, it will only follow the wall in the direction of its current waypoint in path planning. It will take the newly discovered portion of the wall and add it to the set of obstacles. With the Kinect, we intend to begin classifying newly discovered portions of wall without having to bump into the wall, align with it, and rotate to follow it at a parallel angle. Instead, when a new obstacle is found to be directly ahead of the robot, it will turn parallel to the wall at a set distance and follow it toward the waypoint, again determining the coordinates of the wall and creating the obstacle object. We believe that this will further cut down the time of operation. Additionally, to keep the robot a fixed distance from walls as it moves around, we have developed software for line following. The simplest way to accomplish this task, using just the sonar, Kinect, and bump sensors, is to have the robot minimize the distance from its centerpoint to the line while meeting the necessary conditions on the sonar distance and bump sensor response.
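A sketch of that line-following control law is below: it steers to reduce both the cross-track distance to the goal line and the heading error with respect to it. The gains and forward velocity are placeholder values, and the checks against the sonar and bump conditions described above are left out for brevity.

```python
import math

def line_following_command(pose, line_start, line_end,
                           forward_vel=0.2, k_cross=2.0, k_heading=1.0):
    """Velocity command (v, omega) that keeps the robot near the line from
    line_start to line_end while driving forward along it.
    pose = (x, y, theta) in the world frame; omega > 0 turns counterclockwise."""
    x, y, theta = pose
    lx = line_end[0] - line_start[0]
    ly = line_end[1] - line_start[1]
    length = math.hypot(lx, ly)
    line_heading = math.atan2(ly, lx)
    # Signed distance from the robot's centerpoint to the line
    # (positive when the robot sits to the left of the line's direction).
    cross_track = (lx * (y - line_start[1]) - ly * (x - line_start[0])) / length
    heading_error = (line_heading - theta + math.pi) % (2 * math.pi) - math.pi
    # Turn back toward the line and toward its heading.
    omega = k_heading * heading_error - k_cross * cross_track
    return forward_vel, omega
```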


Figure 3.3: Rudimentary wall following and classification based on coordinates.

Using sensor information, especially that from the Kinect, we can update our perceived location in the global map by simply shifting our distance to a given wall using the average of the depth data points while the robot is facing that wall. Our system of localization uses dead reckoning as the basis for its understanding of where it is in the world. This information is corroborated with visual information the robot perceives on its trip around the map. When it reaches a known block it is heading toward, when it sees a previously known obstacle, and when it sees a known fiducial around the edge of the map, it checks this value against that produced by dead reckoning. If the dead reckoning estimate is off, it updates its perceived location. Additionally, since we are using a Kinect rather than a simple camera, we are continuously receiving depth information about obstacles spotted in front of the robot. We hope to capitalize upon this information to implement a real-time localization strategy. If the robot does this constantly, it should have an approximately correct map of the world throughout the run.
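The simplest form of this correction, shifting the dead-reckoned pose along the direction of a known wall the robot is facing, might look like the sketch below; the blending gain is an assumed parameter, and a full implementation would also correct heading.

```python
import math

def correct_pose_with_wall(pose, wall_distance_measured, wall_distance_expected,
                           wall_direction, gain=0.5):
    """Nudge the dead-reckoned pose toward agreement with a depth measurement
    of a known wall the robot is currently facing.

    wall_direction is the world-frame heading from the robot toward the wall;
    wall_distance_expected is what the map predicts given the current pose.
    This is a simple proportional blend, not a full filter."""
    x, y, theta = pose
    error = wall_distance_measured - wall_distance_expected
    # If the wall appears farther than the map predicts, the robot is actually
    # farther back along the wall direction than it believes, so shift that way.
    x -= gain * error * math.cos(wall_direction)
    y -= gain * error * math.sin(wall_direction)
    return x, y, theta
```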

3.5

Global Navigation

Our map consists of the set of known obstacles in the world as well as their coordinates. Initially, obstacles and blocks are fed into the robot with this information, and the obstacle objects are created. As new obstacles are detected, their coordinates are calculated using the robot's best known estimate, taking the distance from the object and using the robot's own idea of where it is in the world. Thus a robust localization strategy is required for maintaining an accurate set of obstacles. Discovered obstacles are then added to the set of known obstacles to update the robot's perceived map of the world.

The global navigation system will have to store a map of what the world looks like, by modifying the map we are originally given, and make decisions about the path the robot must take in order to accomplish its next goal. The inputs to this system will be maps from the visual system, object detections from local navigation, and goals from the planning system. The global navigation uses these to create a configuration space (C-Space) which is then passed to the planner. It will also keep track of where known objects, such as blocks, previous structures, and unidentified objects, exist. After it receives a desired location, it then calculates a path there from its current location. Currently our robot has code for a 2-dimensional C-Space representation, in which the robot is represented as one object the size of the robot spun around its center through a full rotation (360 degrees). This allows us to make sure our robot fits into any valid C-Space location in any rotation. We then construct a visibility graph of edges connecting the C-Space vertices and repeatedly run Dijkstra's algorithm on this graph, which returns the optimal path the robot should follow. We also have the ability to create 3-dimensional C-Space representations, which should allow us to create a larger number of correct movement paths. To this end, we will implement a potential field controller, so that we can update the planned path of the robot at each step in real time, requiring less time precomputing a path.
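The shortest-path search itself is standard Dijkstra over the visibility graph. A self-contained sketch is given below; representing the graph as an adjacency map from a C-Space vertex to (neighbor, edge length) pairs is one reasonable encoding rather than the exact structure of our implementation.

```python
import heapq
import itertools

def dijkstra_path(graph, start, goal):
    """Shortest path on a visibility graph.

    graph: dict mapping a vertex to a list of (neighbor, edge_length) pairs.
    Returns the list of vertices from start to goal, or None if unreachable."""
    counter = itertools.count()  # tie-breaker so heap entries always compare
    frontier = [(0.0, next(counter), start, None)]
    came_from = {}
    best_cost = {start: 0.0}
    while frontier:
        cost, _, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue  # already finalized via a shorter path
        came_from[node] = parent
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return list(reversed(path))
        for neighbor, length in graph.get(node, []):
            new_cost = cost + length
            if new_cost < best_cost.get(neighbor, float("inf")):
                best_cost[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, next(counter), neighbor, node))
    return None
```

For example, dijkstra_path({"A": [("B", 1.0)], "B": [("C", 2.0)], "C": []}, "A", "C") returns ["A", "B", "C"].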

3.6

Goal Planner

Goal planning uses as inputs the general map created by global navigation and the vision system. It will also take as input information from the arm, for when it detects that it has picked up a block, and will output when the arm should pick a block up. It will then keep track of how many blocks it has picked up, so that it knows when to release the blocks from the chassis to create a structure. This is done by keeping a counter of how many blocks it has picked up. Additionally, while the global navigation will create paths for the robot, the goal planner will decide what the goal itself is. We plan to simply have the path be the path to the nearest known block. However, if the vision system detects another block nearby, this information will be passed to the goal planner, which will then change the goal to be this nearby block. Therefore the only outputs of this system are the current end goal, tasking the arm to pick a block up, and releasing the block container.
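In its simplest form, the goal selection logic reduces to the sketch below. The function name and interfaces are assumptions for illustration; the block capacity of 6 comes from the storage-tower estimate in Chapter 2.

```python
import math

def choose_goal(pose, known_blocks, seen_blocks, blocks_stored, capacity=6):
    """Pick the robot's next goal.

    known_blocks: block positions given at the start of the challenge.
    seen_blocks: blocks reported by the vision system during the run.
    Returns ('construct', None) when it is time to build, otherwise
    ('collect', (x, y)) for the nearest block to pursue."""
    if blocks_stored >= capacity or (not known_blocks and not seen_blocks):
        return ("construct", None)
    # Blocks spotted by the camera take priority, since they are nearby.
    candidates = seen_blocks if seen_blocks else known_blocks
    x, y, _ = pose
    nearest = min(candidates, key=lambda b: math.hypot(b[0] - x, b[1] - y))
    return ("collect", nearest)
```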

3.7

Arm Control

The last system to be discussed is the arm control system. The arm control system takes information from the general planning system to determine when it needs to pick up a block. Additionally, the arm has a break-beam sensor that will detect when the hand has closed on an object. After the robot reaches the location where the object is supposed to be and the vision system detects an object, goal planning will fine-tune the path to make sure that the robot is the appropriate distance from the block. The arm will then act to pick the block up. To this end, it will first move the arm in front of the robot on the ground, as close to the robot as possible, in a fixed motion. After this the arm will slowly slide forward, while staying on the ground, until it detects an object with the gripper of the arm, using the break-beam sensor. In order to prevent false positives, there will be a timer on how long the break-beam sensor must be broken before the arm will close. It will then close by a predetermined angle; since we know how large the blocks are in advance, it can execute identical actions every time. It will then perform a fixed set of rotations of its joints to lift the block behind the robot and drop it into our block container. It is particularly useful to us that the action of picking up a block will be essentially identical every time, so once we have defined the set of motions required to pick up the blocks, we will not have to do much computation in real time to move the arm successfully.
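The fixed pickup routine described above could be scripted roughly as follows. The servo and drive helpers here are hypothetical stand-ins (the real arm and motors are commanded through the Orcboard), and the angles, speeds, and debounce time are placeholder values.

```python
import time

# Hypothetical hardware stubs; real commands would go through the Orcboard.
def set_arm_pose(shoulder_deg, wrist_deg): print("arm ->", shoulder_deg, wrist_deg)
def set_gripper(angle_deg): print("gripper ->", angle_deg)
def creep_forward(speed): print("creep at", speed)
def stop(): print("stop")
def breakbeam_blocked(): return True   # stub so the example terminates

BEAM_CONFIRM_S = 0.3   # beam must stay broken this long to reject false positives
GRIP_OPEN_DEG = 90
GRIP_CLOSE_DEG = 35    # predetermined closing angle, since the block size is known

def pick_up_block():
    """Fixed pickup sequence: lower the open gripper to the ground close to the
    robot, creep forward until the break-beam stays broken long enough, close by
    a fixed angle, then swing the block back and drop it into the storage tower."""
    set_gripper(GRIP_OPEN_DEG)
    set_arm_pose(shoulder_deg=-80, wrist_deg=10)    # hand on the ground, close in
    creep_forward(speed=0.05)
    blocked_since = None
    while True:
        if breakbeam_blocked():
            blocked_since = blocked_since or time.time()
            if time.time() - blocked_since >= BEAM_CONFIRM_S:
                break                               # object confirmed in gripper
        else:
            blocked_since = None                    # beam cleared; reset the timer
        time.sleep(0.02)
    stop()
    set_gripper(GRIP_CLOSE_DEG)                     # grasp the block
    set_arm_pose(shoulder_deg=95, wrist_deg=-60)    # lift over the back of the robot
    set_gripper(GRIP_OPEN_DEG)                      # release into the funnel
```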


Chapter 4

Milestones & Implementation Plan


Roughly a month is given to solve the Grand Challenge. The anticipated work is extensive and requires careful planning. This section outlines:

- The logistics of the project and the time we expect to spend implementing each subsystem.
- The difficulties we may encounter during implementation.

4.1

Tasks, Dates & Durations

The implementation plan is summarized in the following Gantt chart:


Figure 4.1: The completed and remaining portions of tasks as of today.

As can be seen from the chart's y-axis labels, tasks are of three different types:

- [H] - Hardware: the physical construction of the robot.
- [S] - Software: the programs running inside the robot.
- [H/S] - Related to both the hardware and software parts of the robot.

One thing to note is that the integration processes for hardware and software are made explicit; these integration processes are long and span the completion of their corresponding individual components. Another thing to note is that the only task belonging to [H/S] is testing and bug fixing. Technically this task should be an ongoing process done alongside the completion of each subsystem; however, we list it here as a separate task and allocate a significant bulk of time to it to show our commitment to building a reliable and error-free Mars rover.

4.2

Foreseeable Diculties

The implementation of a complex project like this is likely to incur numerous challenges and difficulties along the way. Since our team members have neither a mechanical engineering background nor machine shop experience, the physical construction of the robot will likely be a significant challenge that we need to tackle. The construction of the block storage and dispenser, and even the aesthetics of the robot, may be time-consuming ordeals. The time allocated to these tasks is generous, which will help us accomplish them on schedule.

Reliable arm control in the software is essential to the success of our robot. The inability to grasp and put down the blocks with precision would be a nightmare. With three servos and numerous parameters for each one of them, achieving an overall desirable behavior will take much analysis as well as trial and error. Two weeks are assigned to its completion to ensure proper attention to detail in the control of the arm.

Goal planning is another significant part of the software system, because it links all the subsystems together and is the deciding factor of our robot's final behavior. We need to make sure that the robot can extract the necessary information from the sensors, integrate all the data with the constructed map, decide on the next goal at every time step, and pass the correct commands to the motors. Utilizing Team Bevco's usual tactic when faced with difficult problems, we allocate two weeks to search and goal planning to ensure its completion.


Chapter 5

Conclusion
As the last chapter in the proposal, the conclusion will serve as a platform for us to discuss how our rover will solve the problem of the Grand Challenge. In detail, we shall discuss:

- The accomplishments of our Mars rover, the unique features and characteristics that make it stand out as the robot of choice.
- Possible future enhancements that could be made to the robot to make it even better.

5.1

Accomplishments

Once completed, our robot will successfully perform the tasks of the Grand Challenge. As the robot is placed in the new world, it will immediately process its known list of obstacles and blocks. Based on known coordinates, these obstacles are placed into a map as our robot's symbolic representation of the known world. The locations of the blocks will be established as waypoints for the robot to traverse in the most efficient way possible. The robot will begin on its path, using the shaft encoders on the motors to keep track of where it is in the world. The camera, sonars, and bump sensors will be utilized to detect new obstacles should they obstruct the robot's path. As a new obstacle is discovered, the robot enters a complex sequence of operations in which it follows the wall, building a point cloud and categorizing it to add to its list of known obstacles. Once the robot reaches a block target, or if a block is discovered en route, the camera will detect the object based on color and the robot will visually servo to the block in order to pick it up. The arm will servo to grasp the block and will deposit it over the back of the robot into our block storage unit, a funnel-shaped tower. Once enough blocks have been stored or the robot has fully traversed its path of known blocks, it pauses its movement in order to open the gates in the block storage tower, allowing blocks to be deposited down the chute to the ground surface. The construction of structures can occur multiple times depending on the number of blocks in our world.

Our robot is unique because it accomplishes the task of the Grand Challenge through what we believe is the most efficient means possible. All of our components, whether software or hardware, are designed to be the simplest possible while covering all potential functions necessary. This, we hope, will greatly aid in debugging once we proceed to build the robot and run it through trials, testing performance. We are very excited about the block storage and depositing mechanisms because these are components that we are not basing on a lab from this semester of 6.141, and thus we are looking to see how our unique implementation fares and compares with the designs of our peers. Additionally, the functionality of our design will hopefully lend us time to experiment with several potential enhancements we are looking to implement.

5.2

Possible Future Enhancements

We are very interested in applying several potential enhancements to the robot to both extend its functionality and solve existing problems in our proposed solution. One such problem is the robot's reliance on physical collision to detect new obstacles in our environment. This is time consuming and introduces unneeded wear and tear on the robot. Therefore, one set of possible enhancements is the addition of further sensors to our robot, like a LIDAR system or a Microsoft Kinect. Utilized properly, these sensors would provide sufficient information to replace our existing bump and sonar sensors as our primary means of mapping the environment, saving the time used for mapping individual obstacles and preventing unnecessary damage to the robot.

Another problem lies with the robot's grasping arm. Currently, it has three degrees of freedom, which enable it to reach for an object if it is directly in front of the robot. However, it must undergo a tedious alignment process if the object does not meet this condition, which is most of the time. An improvement, then, would be to increase the degrees of freedom available to the robot arm by enabling it to rotate on a base. This would enable the arm to reach any point within a fixed radius in front of the robot without requiring the robot itself to rotate. This would increase the versatility of the arm in cramped situations where the robot is unable to rotate freely, like narrow hallways and dead ends.

One final problem is our limited ability to build a structure. If we wish instead to be able to construct an arbitrary structure, like an arch or an open box, we would need a more sophisticated block storage and dispensing mechanism. We can improve these mechanisms by abandoning, or at least repurposing, our original block dispenser and using in its place the grasping arm as our construction tool of choice. With the higher degree of precision afforded by the arm, it would be possible to construct many more structures than a simple wall, a valuable ability to have if one structure is not enough to cover the needs of our robot.

