
Obstacle Avoiding Car

Made by- Shubham Kumar Tiwari (IT Engineer)


ABSTRACT

Voice control and road-lane detection can be considered central issues in designing
mobile robots. A collision costs not only the vehicle but also the objects in its surroundings. This
technology gives a robot the senses it needs to travel through unfamiliar
environments without damaging itself. In this paper a voice-controlled vehicle is designed
which can detect the lane in its path and move freely around without making any collision. It is a
robot vehicle that runs on a Raspberry Pi and employs three ultrasonic
distance sensors to detect obstacles. The Raspberry Pi board was selected as the
controller platform, and its software counterpart, the Python 3 platform, was used to carry
out the programming. One ultrasonic distance sensor and one camera are integrated for
detecting the road lane. Being a fully autonomous robot, it can successfully survive in unknown
environments without any collision. The hardware used in this project is widely available and
inexpensive, which makes the robot easily replicable.
TABLE OF CONTENTS

S. No Contents

1 Introduction
  1.1 Introduction
  1.2 Need
  1.3 Advantages and Disadvantages

2 Software Requirement Specification
  2.1 Purpose
  2.2 Scope
  2.3 Hardware Requirement / Software Requirement
  2.4 Software Process Model Used

3 System Documentation
  3.1 Data Flow Diagram
  3.2 System Flow Chart

4 Testing and Test Cases

5 Limitations

6 Conclusion and Future Work

7 References
INTRODUCTION

From its initiation in the 1950s, modern robotics has come a long way and rooted itself as an
indispensable aid in the advancement of humankind. Over time, robots have taken many
forms depending on their application, and their size has varied from a giant 51 feet down to the
microscopic level. Throughout the technological development of robots, one aspect has remained
instrumental to their function, and that is mobility. The term "voice controlled" is now used in
modern robotics to denote the capability of a robot to navigate an unknown environment without
colliding with surrounding objects. Obstacle avoidance brings more flexibility when
maneuvering in varying environments and is far more efficient, since
continuous human monitoring is not required.

This project developed a voice-controlled robot which can move without any collision by
sensing the road lane on its course with the help of three ultrasonic distance sensors and a camera.
Robots guided by this technology can be put to diversified uses, e.g., surveying
landscapes, driverless vehicles, autonomous cleaning, automated lawn mowing and
supervising robots in industries. The robot developed in this project is expected to fulfill the
following objectives:

• The robot would have the capacity to detect the road lane in its path.

• After detecting the lane, the robot would steer itself toward the relative center of the road
lane by making autonomous decisions.

• It would require no external control during its operation.

• It can measure the distance between itself and the road lane in real time.

• It would be able to operate effectively in unknown environments.

This lets the vehicle keep moving even if it loses connection with its source, completing
its task without any outside dependency.
NEED:
When we drive, we use our eyes to decide where to go. The lines on the road that show us
where the lanes are act as our constant reference for where to steer the vehicle. Naturally, one
of the first things we would like to do in developing a self-driving car is to automatically
detect lane lines using an algorithm. This model is especially useful in the field of
surveillance, where it can spy on places without any support from the environment,
i.e., it can run in dark places as well.

ADVANTAGES:
 It needs very little human support.
 It is fully automatic, hence it is not easily affected by its surroundings.
 It is small, so it can reach very congested places.
 It can run in light and dark places alike.

DISADVANTAGES:
 It is fully dependent on its surroundings.
 Its calculations may vary across different types of roads.
SOFTWARE REQUIREMENT SPECIFICATION

 PURPOSE
The purpose of building this car is to bring more advanced technology to the transportation
field, where collisions cause so much loss; this technology provides reliability and
helps keep all surrounding objects safe.

 SCOPE
Especially military applications: because of the small size of the model, it can be efficient for
military purposes. It can move by itself, so no prior knowledge of the path is
needed.

It can be used for city wars: this technology can also be used in the transportation
field in cities.

 HARDWARE AND SOFTWARE REQUIREMENT

HARDWARE:

1. Raspberry Pi
The Raspberry Pi 3 Model B+ is a tiny credit-card-sized computer. Just add a keyboard, mouse,
display, power supply and a micro SD card with an installed Linux distribution, and you'll have a
fully fledged computer that can run applications from word processors and spreadsheets to
games.

As the Raspberry Pi 3 supports HD video, you can even create a media centre with it. The
Raspberry Pi 3 was the first Raspberry Pi to be open-source from the get-go; expect it
to be the de facto embedded Linux board in all the forums.

The B+ is a modified form of its predecessor, the Raspberry Pi 3 B, which was introduced in 2016 and
came with a CPU, GPU, USB ports and I/O pins. Both versions are almost the same in terms of
functionality and technical specifications; however, there are some exceptions in the B+
model, as it comes with USB boot, network boot and a Power over Ethernet option that are not
present in the B model.

Technology has evolved over time with the purpose of making lives easy and
convenient. This device was a major development that made learning about computers so
easy that anyone can get their feet wet with little effort.

Features of the B+ version are almost the same as the B model; however, USB and network boot
and the Power over Ethernet facility only come with the B+ model. Also, two extra USB ports were
added to this device.
The SoC (system on chip) combines both CPU and GPU in a single package and turns out to be
faster than the Pi 2 and Pi 3 models.

 Raspberry Pi 3
 Raspberry Pi 3 B+ Pin-out

Hardware Specifications

CPU: The CPU is the brain of this tiny computer and carries out instructions based on
mathematical and logical operations. It is a 64-bit processor.

Clock Speed and RAM: It comes with a Broadcom BCM2837B0 clocked at 1.4 GHz, containing a
quad-core ARM Cortex-A53, and around 1 GB of RAM (identical to the previous version).

GPU: The graphics processing unit is used for carrying out image calculations. A
Broadcom VideoCore is included in the device and is mainly useful for playing video
games.

USB Ports: Two more USB ports were introduced in this version, setting you free from
the hassle of using an external USB hub when you aim to join a number of peripherals to
the device.

Micro USB Power Source Connector: This connector is used for providing 5 V power to the
board. The B+ draws 170 to 200 mA more power than the B model.

HDMI and Composite Connection: The audio output and composite video now share a single
4-pole 3.5 mm socket which resides near the HDMI port. The power connector was also
repositioned in the B+ model and lives next to the HDMI socket. The power and audio/video
composite sockets are now all placed on one side of the PCB, giving it a clean and
precise look.

USB Hard Drive: The board can boot from a USB hard drive, much like a regular computer
where Windows boots from the computer's hard drive.

PoE: The B+ model comes with a Power over Ethernet (PoE) facility, a new feature which
allows the board to draw the necessary electrical current over the data cables.

Other Changes: The B+ version comes with small improvements in features and a
slightly different layout in terms of component locations. The SD memory slot is
replaced by a micro SD memory card slot (which works like the previous version). The status
LEDs now only come in red and green and were relocated to the opposite end of the PCB.

2. ULTRASONIC SENSOR HC-SR04

The HC-SR04 ultrasonic sensor uses sonar to determine the distance to an object, like bats do. It
offers excellent non-contact range detection with high accuracy and stable readings in an
easy-to-use package, complete with ultrasonic transmitter and receiver modules.

It measures distance by sending out a sound wave at a specific frequency and listening for
that sound wave to bounce back. By recording the elapsed time between the sound wave
being generated and the sound wave bouncing back, it is possible to calculate the distance
between the sonar sensor and the object.

Since sound travels through air at about 344 m/s (1129 ft/s), you can take the
time for the sound wave to return and multiply it by 344 m/s (or 1129 ft/s) to find the total
round-trip distance of the sound wave. Round-trip means that the sound wave traveled twice
the distance to the object before it was detected by the sensor; it includes the 'trip' from
the sonar sensor to the object AND the 'trip' from the object back to the sensor (after the
sound wave bounced off the object). To find the distance to the object, simply divide the
round-trip distance in half.
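
As a minimal sketch of this arithmetic (it assumes the 344 m/s figure above; the function name and the example echo time are illustrative, not from the project):

def echo_time_to_distance(elapsed_s, speed_of_sound=344.0):
    """Convert an echo round-trip time (seconds) to one-way distance (metres)."""
    round_trip = elapsed_s * speed_of_sound  # sensor -> object -> sensor
    return round_trip / 2.0                  # one-way distance

# Example: a 2.9 ms echo corresponds to roughly half a metre.
print(echo_time_to_distance(0.0029))  # ~0.4988 m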

Features:
Here's a list of some of the HC-SR04 ultrasonic sensor's features and specs:
Power Supply: +5 V DC
Quiescent Current: <2 mA
Working Current: 15 mA
Effectual Angle: <15°
Ranging Distance: 2 cm – 400 cm / 1″ – 13 ft
Resolution: 0.3 cm
Measuring Angle: 30 degrees
Trigger Input Pulse Width: 10 µs
Dimensions: 45 mm x 20 mm x 15 mm

Working:

The ultrasonic sensor uses sonar to determine the distance to an object. Here's what happens:

1. The transmitter (trig pin) sends a signal: a high-frequency sound.

2. When the signal finds an object, it is reflected, and

3. the receiver (echo pin) picks it up.

Sensor Pin-out
1. VCC: +5 V DC
2. Trig: Trigger (INPUT)
3. Echo: Echo (OUTPUT)
4. GND: Ground
3. DC MOTOR (DUAL SHAFT BO MOTOR)

A dual-shaft DC motor with a gearbox which gives good torque and rpm at lower
voltages. This motor can run at approximately 200 rpm when driven by a dual
Li-Ion cell battery at 6 V, and at approximately 300 rpm when driven by a 9 V
Li-Ion cell.

It is most suitable for a lightweight robot running on a small voltage. Of its two
shafts, one can be connected to a wheel and the other to a position encoder.

Features:

 Working voltage: 3 V to 9 V
 Weight: 30 g
 Ability to operate with minimum or no lubrication, due to inherent lubricity
 Torque: 1.9 kgf·cm
 No-load current = 60 mA, stall current = 700 mA

4. BREADBOARD

A breadboard is a solderless device for temporarily prototyping electronics and testing circuit
designs. Most electronic components in electronic circuits can be interconnected by inserting
their leads or terminals into the holes and then making connections through wires where
appropriate. The breadboard has strips of metal underneath the board which connect the holes
on the top of the board. The metal strips are laid out as shown below. Note that the top and
bottom rows of holes are connected horizontally and split in the middle, while the remaining
holes are connected vertically.

An electronics breadboard (as opposed to the type on which sandwiches are made) actually
refers to a solderless breadboard. These are great units for making temporary circuits and
prototyping, and they require absolutely no soldering.
Power Rails
Aside from horizontal rows, breadboards usually have what are called power rails that run
vertically along the sides.

Terminal Strips
Horizontal rows of metal strips on the bottom of the breadboard. Once inserted, any component
in a row will be electrically connected to anything else placed in that row, because the
metal strips are conductive and allow current to flow from any point in the strip.

5. L293D MOTOR DRIVER IC

The L293 and L293D devices are quadruple high-current half-H drivers. The L293 is
designed to provide bidirectional drive currents of up to 1 A at voltages from 4.5 V to 36 V.
The L293D is designed to provide bidirectional drive currents of up to 600 mA at voltages
from 4.5 V to 36 V. Both devices are designed to drive inductive loads such as relays,
solenoids, DC and bipolar stepping motors, as well as other high-current/high-voltage loads
in positive-supply applications.

Each output is a complete totem-pole drive circuit, with a Darlington transistor sink and a
pseudo-Darlington source. Drivers are enabled in pairs, with drivers 1 and 2 enabled by
1,2EN and drivers 3 and 4 enabled by 3,4EN.

PIN DIAGRAM:

6. WEB CAMERA
A webcam is a video camera that feeds or streams an image or video in real time to or
through a computer to a computer network, such as the Internet. Webcams are typically small
cameras that sit on a desk, attach to a user's monitor, or are built into the hardware. Webcams
can be used during a video chat session involving two or more people, with conversations
that include live audio and video. For example, Apple's iSight camera, which is built into
Apple laptops, iMacs and a number of iPhones, can be used for video chat sessions, using the
iChat instant messaging program (now called Messages). Webcam software enables users to
record a video or stream the video on the Internet. As video streaming over the Internet
requires a lot of bandwidth, such streams usually use compressed formats. The maximum
resolution of a webcam is also lower than most handheld video cameras, as higher resolutions
would be reduced during transmission. The lower resolution makes webcams relatively
inexpensive compared to most video cameras, but the effect is adequate for video chat
sessions.

Working and programming logic

The L293D has four half-H-bridge drivers, which can be used to drive two DC motors
bidirectionally. Here we demonstrate how to drive a single DC motor using half-bridges
1 and 2. The DC motor is connected between the OUT1 and OUT2 pins, pin IN1 is
connected to the microcontroller's PWM output, and pin IN2 is connected to a microcontroller
I/O port.

Clockwise rotation: To rotate the motor in the clockwise direction, the IN2 pin is made LOW
and a PWM signal is generated on the IN1 pin.

Anti-clockwise rotation: To rotate the motor in the anti-clockwise direction, the IN2 pin is made
HIGH and a PWM signal is generated on the IN1 pin.
An H-bridge is fabricated with four switches: S1, S2, S3 and S4. When the S1 and S4
switches are closed, a positive voltage is applied across the motor. By opening
switches S1 and S4 and closing switches S2 and S3, this voltage is inverted, allowing
reverse operation of the motor.

Generally, the H-bridge motor driver circuit is used to reverse the direction of the motor and
also to brake it. The motor comes to a sudden stop when its terminals are shorted, or runs
free to a stop when it is detached from the circuit. The table below gives the different
operations of the four switches corresponding to the above circuit.
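
Since the original table did not survive extraction, this is the standard switch table for the circuit described above (reconstructed from the usual H-bridge behavior, not the original figure):

S1 and S4 closed: motor runs forward
S2 and S3 closed: motor runs in reverse
S1 and S3 closed (or S2 and S4 closed): terminals shorted, motor brakes
All switches open: motor runs free to a stop

A minimal RPi.GPIO sketch of the IN1/IN2 drive scheme described above; the BCM pin numbers are illustrative assumptions, not the project's final wiring:

import RPi.GPIO as GPIO
from time import sleep

IN1, IN2 = 12, 16            # assumed BCM pins for L293D inputs 1 and 2
GPIO.setmode(GPIO.BCM)
GPIO.setup(IN1, GPIO.OUT)
GPIO.setup(IN2, GPIO.OUT)

pwm = GPIO.PWM(IN1, 1000)    # 1 kHz PWM on IN1
pwm.start(0)

def clockwise(duty):
    GPIO.output(IN2, GPIO.LOW)       # IN2 LOW + PWM on IN1 -> clockwise
    pwm.ChangeDutyCycle(duty)

def anticlockwise(duty):
    GPIO.output(IN2, GPIO.HIGH)      # IN2 HIGH: the motor is driven during the
    pwm.ChangeDutyCycle(100 - duty)  # LOW part of the PWM, so invert the duty

clockwise(60)
sleep(2)
anticlockwise(60)
sleep(2)
pwm.stop()
GPIO.cleanup()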

7. LED
A light-emitting diode (LED) is a semiconductor device that emits light when an electric
current is passed through it. Light is produced when the particles that carry the current
(known as electrons and holes) combine within the semiconductor material.

Since light is generated within the solid semiconductor material, LEDs are described as
solid-state devices. The term solid-state lighting, which also encompasses organic LEDs
(OLEDs), distinguishes this lighting technology from other sources that use heated filaments
(incandescent and tungsten halogen lamps) or gas discharge (fluorescent lamps).

The photon energy determines the wavelength of the emitted light, and hence its color.
Different semiconductor materials with different bandgaps produce different colors of light.
The precise wavelength (color) can be tuned by altering the composition of the light-emitting,
or active, region.

8. Piezo Buzzer
A piezo buzzer is a type of electronic device that’s used to produce a tone, alarm or sound.
It’s lightweight with a simple construction, and it’s typically a low-cost product. Yet at the
same time, depending on the piezo buzzer specifications, it’s also reliable and can be
constructed in a wide range of sizes that work across varying frequencies to produce different
sound outputs.
WORKING PRINCIPLE:

When an alternating voltage is applied to the piezoceramic element, the element extends and
shrinks diametrically. This characteristic of piezoelectric material is utilized to make the
ceramic plate vibrate rapidly to generate sound waves.

SOFTWARE:

OpenCV-Python
Python is a general-purpose programming language started by Guido van Rossum, which
became very popular in a short time mainly because of its simplicity and code readability. It
enables the programmer to express ideas in fewer lines of code without reducing
readability.

Compared to languages like C/C++, Python is slower. But an important feature of
Python is that it can be easily extended with C/C++. This feature lets us write
computationally intensive code in C/C++ and create Python wrappers for it, so that we can
use those wrappers as Python modules. This gives us two advantages: first, our code is as fast
as the original C/C++ code (since it is the actual C++ code working in the background), and second,
it is very easy to code in Python. This is how OpenCV-Python works: it is a Python wrapper
around the original C++ implementation.

And the support of NumPy makes the task even easier. NumPy is a highly optimized library
for numerical operations with a MATLAB-style syntax. All the OpenCV array structures
are converted to and from NumPy arrays. So whatever operations you can do in NumPy, you
can combine with OpenCV, which increases the number of weapons in your arsenal. Besides
that, several other libraries like SciPy and Matplotlib which support NumPy can be used
alongside it. This makes OpenCV-Python an appropriate tool for fast prototyping of computer
vision problems.
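
As a tiny illustration of this interop (the file name is an assumption): an OpenCV image is a plain NumPy array, so NumPy operations apply to it directly.

import cv2
import numpy as np

img = cv2.imread("frame.jpg")   # assumed input file; loads as a NumPy array (BGR)
print(type(img))                # <class 'numpy.ndarray'>

# Brighten the image with ordinary NumPy arithmetic, clamped to [0, 255].
brighter = np.clip(img.astype(np.int16) + 40, 0, 255).astype(np.uint8)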

THE APPROACH
The first step in working with our images will be to convert them to grayscale. This is a
critical step for using the Canny Edge Detector inside OpenCV. I'll talk more about what
canny() does in a minute, but right now it's important to realize that we are collapsing 3
channels of pixel values (Red, Green, and Blue) into a single channel with a pixel value range
of [0, 255].

gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

Before we can detect our edges, we need to make clear exactly what we're looking for.
Lane lines are always yellow and white. Yellow can be a tricky color to isolate in RGB space,
so let's convert instead to the Hue, Saturation, Value (HSV) color space. You can find a target
range for yellow values with a quick search. Next, we will apply a mask to the original RGB
image to return only the pixels we're interested in.
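
A hedged sketch of this masking step. The HSV ranges below are common illustrative values for yellow and white, not necessarily the exact numbers used in this project, and the input file name is an assumption:

import cv2
import numpy as np

image = cv2.imread("road.jpg")                # assumed input frame (BGR)
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Illustrative HSV ranges; tune for your own footage.
yellow = cv2.inRange(hsv, np.array([20, 100, 100]), np.array([30, 255, 255]))
white = cv2.inRange(hsv, np.array([0, 0, 200]), np.array([255, 30, 255]))

mask = cv2.bitwise_or(yellow, white)
masked = cv2.bitwise_and(image, image, mask=mask)  # keep only lane-colored pixels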
We are almost to the good stuff! We’ve certainly processed quite a bit since our original
image, but the magic has yet to happen. Let’s apply a quick Gaussian blur. This filter will
help to suppress noise in our Canny Edge Detection by averaging out the pixel values in a
neighborhood.

Canny Edge Detection


We're ready! Let's compute our Canny Edge Detection. A quick refresher on your calculus
will really help you understand exactly what's going on here. Basically, canny() parses the
pixel values according to their directional derivative (i.e., gradient). What's left over are the
edges: places where there is a steep derivative in at least one direction. We will need to supply
thresholds for canny() as it computes the gradient. John Canny himself recommended a
low-to-high threshold ratio of 1:2 or 1:3.
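
Continuing from the masked image of the previous sketch, a minimal blur-plus-Canny step using the 1:3 low-to-high threshold ratio mentioned above (the kernel size and thresholds are illustrative choices, not the author's exact parameters):

import cv2

gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)  # 'masked' from the HSV sketch
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # average out neighborhood noise
canny_edges = cv2.Canny(blurred, 50, 150)        # low:high = 1:3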
We've come a long way, but we're not there yet. We don't want our car paying
attention to anything on the horizon, or even in the other lane. Our lane detection pipeline
should focus on what's in front of the car. To do that, we are going to create another mask
called our region of interest (ROI). Everything outside the ROI will be set to black/zero, so
we are only working with the relevant edges. I'll spare you the details of how I made this
polygon; take a look in the GitHub repo to see my implementation.

roi_image = region_of_interest(canny_edges, vertices)

Hough Space
Prepare to have your mind blown. Your favorite equation y = mx + b is about to reveal its alter
ego: the Hough transform. Udacity provides some amazing video content about Hough
space, but it's currently for students only. However, there is an excellent paper that will get
you acquainted with the subject. If academic research publications aren't your thing, don't
fret. The big takeaway is that lines in XY space correspond to points in Hough space, and
points in XY space correspond to lines in Hough space.
This is what our pipeline will look like:

1) Pixels are considered points in XY space

2) hough_lines() transforms these points into lines inside of Hough space

3) Wherever these lines intersect, there is a point of intersection in Hough space

4) The point of intersection corresponds to a line in XY space

Let’s see what it looks like in action:
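
A hedged sketch of this step using OpenCV's probabilistic Hough transform; the rho/theta/threshold and length/gap parameters are illustrative, not the project's exact values:

import cv2
import numpy as np

lines = cv2.HoughLinesP(
    roi_image,            # ROI-masked edge image from the previous step
    rho=2,                # distance resolution in pixels
    theta=np.pi / 180,    # angular resolution in radians
    threshold=50,         # minimum votes (intersections in Hough space)
    minLineLength=40,
    maxLineGap=100,
)

# Draw the detected segments on a black canvas of the same size.
line_image = np.zeros((*roi_image.shape, 3), dtype=np.uint8)
if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 5)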


The key observation about the resulting image is that it contains zero pixel data from any of the
photos we processed to create it. It is strictly black/zeros plus the drawn lines. Also, what
looks like simply two lines can actually be a multitude. In Hough space, there could have
been many, many points of intersection that represented lines in XY. We will want to combine
all of these lines into two master averages. The solution I built to iterate over the lines is in
the repo.
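
The author's exact implementation lives in the repo; a common approach (a sketch, not necessarily that implementation) is to split the segments by slope sign and average slope and intercept for each side:

import numpy as np

def average_lane_lines(lines):
    # In image coordinates the left lane slopes negative, the right positive.
    left, right = [], []
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        if x2 == x1:
            continue                  # skip vertical segments
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        (left if slope < 0 else right).append((slope, intercept))
    left_avg = np.mean(left, axis=0) if left else None
    right_avg = np.mean(right, axis=0) if right else None
    return left_avg, right_avg        # each is (slope, intercept) or None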
SPEECH RECOGNITION:
Speech recognition is the process of converting spoken words to text. Python supports many
speech recognition engines and APIs, including Google Speech Engine, Google Cloud
Speech API, Microsoft Bing Voice Recognition and IBM Speech to Text.

 SOFTWARE PROCESS MODEL USED


A software engineering model is defined as a standard way of developing a software product;
it specifies the activities, actions, tasks, milestones and work products that are required to
produce high-quality engineering. The activities may be linear, incremental or
evolutionary.

There are a number of software engineering models used for the development of various products.
Our project neither requires a prototype nor involves gathering information period by period;
moreover, it is our first project. Therefore, we are using the Waterfall Model.

WATERFALL MODEL
The Waterfall Model was the first process model to be introduced. It is easy and simple to
understand and use. The Waterfall Model is sometimes called the Linear Sequential or
Classic Lifecycle Model. It suggests a systematic, linear approach to software development
that begins at the system level and progresses through analysis, design, coding and testing. Each
phase starts only after the previous phase is complete. In the first phase, requirement gathering and
analysis is done, taking a month or two. After requirement gathering come design, coding
and testing, and then deployment. The Waterfall Model is an appropriate model
for new developers, or when the product has four to five months to be developed.

Applications:
1. Requirements must be clear and well documented.
2. The goal of the product should be stable.
3. There should not be any ambiguous requirements.
4. The technology to be used must be understood.

Advantages:
1. The Waterfall Model is easy to explain to the user.
2. All stages and activities are well defined.
3. It ensures that required information is obtained as and when it needs to be used.
4. It helps to plan and schedule the project.

Disadvantages:
1. Real projects rarely follow the sequential flow.
2. It is difficult for the customer to state all requirements explicitly.
3. It carries high amounts of risk and uncertainty.
4. The customer needs to have patience; a working version is available very late in the
project.

SYSTEM DOCUMENTATION
Source Code:

Speech Recognition:

import speech_recognition as sr

r = sr.Recognizer()
for _ in range(10):
    # Capture one voice command per iteration
    with sr.Microphone() as source:
        print("Give Command : ")
        audio = r.listen(source)

    try:
        text = r.recognize_google(audio)
        print("You said : {}".format(text))
        # Dispatch to the matching motor routine of the main program
        if "left" in text:
            print(left())
        elif "right" in text:
            print(right())
        elif "forward" in text:
            print(forward())
        elif "backward" in text or "reverse" in text:
            print(backward())
        elif "stop" in text:
            break
        else:
            print("No command is given")
    except Exception:
        print("Sorry, could not recognize your voice")

-------------------------------------------------------------------------------------------------------

import paho.mqtt.client as mqtt

import numpy as np
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
#-------------------------------------------------------------------
import RPi.GPIO as GPIO  # required below; was commented out in the draft
from time import sleep, time
import sys
import random

GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)

def initialize_pins():
    # Pin numbers are shared as globals so GPIOsetup() and the motor
    # routines below can use them.
    global mode
    global ir1, ir2, ir3, ir4, ir5
    global ust_1, use_1, ust_2, use_2
    global en1, en2, m11, m12, m21, m22
    global l1, l2, l3, l4, l5, l6, l7, l8, l9, l10
    mode = 1
    ir1 = 2     # IR sensor inputs
    ir2 = 3
    ir3 = 4
    ir4 = 17
    ir5 = 27
    ust_1 = 22  # ultrasonic trigger (output)
    use_1 = 10  # ultrasonic echo (input)
    ust_2 = 9   # ultrasonic trigger (output)
    use_2 = 11  # ultrasonic echo (input)
    en1 = 24    # motor driver enables (output)
    en2 = 23
    m11 = 12    # motor control pins (output)
    m12 = 16
    m21 = 20
    m22 = 21
    #-------------------# LED allotment
    l1 = 0
    l2 = 5
    l3 = 6
    l4 = 13
    l5 = 26
    l6 = 19
    l7 = 25
    l8 = 8
    l9 = 7
    l10 = 1
def GPIOsetup():
    GPIO.setup(ir1, GPIO.IN)
    GPIO.setup(ir2, GPIO.IN)
    GPIO.setup(ir3, GPIO.IN)
    GPIO.setup(ir4, GPIO.IN)
    GPIO.setup(ir5, GPIO.IN)
    GPIO.setup(ust_1, GPIO.OUT)
    GPIO.setup(use_1, GPIO.IN)
    GPIO.setup(ust_2, GPIO.OUT)
    GPIO.setup(use_2, GPIO.IN)
    GPIO.setup(en1, GPIO.OUT)
    GPIO.setup(en2, GPIO.OUT)
    GPIO.setup(m11, GPIO.OUT)
    GPIO.setup(m12, GPIO.OUT)
    GPIO.setup(m21, GPIO.OUT)
    GPIO.setup(m22, GPIO.OUT)
    GPIO.setup(l1, GPIO.OUT)
    GPIO.setup(l2, GPIO.OUT)
    GPIO.setup(l3, GPIO.OUT)
    GPIO.setup(l4, GPIO.OUT)
    GPIO.setup(l5, GPIO.OUT)
    GPIO.setup(l6, GPIO.OUT)
    GPIO.setup(l7, GPIO.OUT)
    GPIO.setup(l8, GPIO.OUT)
    GPIO.setup(l9, GPIO.OUT)
    GPIO.setup(l10, GPIO.OUT)

def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))
    client.subscribe("skt1/one")
    print(" subscribed ........")
def on_message(client, userdata, msg):
    data = str(msg.payload.decode())
    temp = data.split()
    data = temp[0]
    global mode
    if len(temp) == 1:
        # Single-word message: switch between manual (1) and autonomous (0) mode
        print(" got single as message")
        if temp[0] == "1":
            print(" changing mode ", 1)
            mode = 1
        elif temp[0] == "0":
            print(" changing mode ", 0)
            mode = 0
    if len(temp) != 2:
        return
    if mode == 0:
        return
    # Two-word message: "<command> <duration>"
    if data == "right" or data == "Right":
        right_mqtt(float(temp[1]))
    elif data == "left" or data == "Left":
        left_mqtt(float(temp[1]))
    elif data == "forward" or data == "Forward":
        forward_mqtt(float(temp[1]))
    elif data == "back" or data == "Back":
        back_mqtt(float(temp[1]))
    elif data == "stop" or data == "Stop":
        stop_mqtt(float(temp[1]))
    else:
        client.publish("skt1/two", "undefined command")

client = mqtt.Client()
client.on_connect = on_connect   # register callbacks before connecting
client.on_message = on_message
client.connect("127.0.0.1", 1883, 60)
client.loop_start()

def distance1():
    # Send a 10 us trigger pulse, then time the echo
    GPIO.output(ust_1, 0)
    sleep(0.5)
    GPIO.output(ust_1, 1)
    sleep(0.00001)
    GPIO.output(ust_1, 0)
    start_time = time()
    stop_time = time()
    while GPIO.input(use_1) == 0:
        start_time = time()
    while GPIO.input(use_1) == 1:
        stop_time = time()
    time_ela = stop_time - start_time
    dist1 = (time_ela * 34300) / 2  # speed of sound ~343 m/s, result in cm
    print(dist1)
    return dist1

def forward_mqtt(a):
    GPIO.output(m11, 1)
    GPIO.output(m12, 0)
    GPIO.output(m21, 1)
    GPIO.output(m22, 0)
    sleep(a)
    print("forward by mqtt")
    stop()

def forward():
    GPIO.output(m11, 1)
    GPIO.output(m12, 0)
    GPIO.output(m21, 1)
    GPIO.output(m22, 0)
    sleep(0.2)
    print("forward")

def back_mqtt(a):
    GPIO.output(m11, 0)
    GPIO.output(m12, 1)
    GPIO.output(m21, 0)
    GPIO.output(m22, 1)
    sleep(a)
    print("backward_mqtt")
    stop()

def back(a):
    GPIO.output(m11, 0)
    GPIO.output(m12, 1)
    GPIO.output(m21, 0)
    GPIO.output(m22, 1)
    sleep(a)
    print("backward")

def right_mqtt(a):
    print("moving right by mqtt")
    GPIO.output(m11, 0)
    GPIO.output(m12, 1)
    GPIO.output(m21, 1)
    GPIO.output(m22, 0)
    sleep(a)
    stop()

def right(a):
    print("moving right")
    for i in range(1, 4):
        GPIO.output(m11, 0)
        GPIO.output(m12, 1)
        GPIO.output(m21, 1)
        GPIO.output(m22, 0)
        sleep(0.17)
        i1 = GPIO.input(ir1)
        if i1 == 0:   # back off if the IR sensor sees an obstacle
            back(0.3)
    stop()

def left_mqtt(a):
    print("moving left by mqtt")
    GPIO.output(m11, 1)
    GPIO.output(m12, 0)
    GPIO.output(m21, 0)
    GPIO.output(m22, 1)
    sleep(a)
    stop()

def left(a):
    print("moving left")
    for i in range(1, 4):
        GPIO.output(m11, 1)
        GPIO.output(m12, 0)
        GPIO.output(m21, 0)
        GPIO.output(m22, 1)
        sleep(0.17)
        i3 = GPIO.input(ir3)
        if i3 == 0:   # back off if the IR sensor sees an obstacle
            back(0.3)
    stop()

def stop_mqtt(a):
    GPIO.output(m11, 0)
    GPIO.output(m12, 0)
    GPIO.output(m21, 0)
    GPIO.output(m22, 0)
    print("stop by mqtt")
    sleep(a)

def stop():
    GPIO.output(m11, 0)
    GPIO.output(m12, 0)
    GPIO.output(m21, 0)
    GPIO.output(m22, 0)
    sleep(0.2)
    print("stop")
def region_of_interest(img, vertices):
    # Black out everything outside the polygon defined by 'vertices'
    mask = np.zeros_like(img)
    channel_count = img.shape[2]
    match_mask_color = (255,) * channel_count
    cv2.fillPoly(mask, vertices, match_mask_color)
    masked_image = cv2.bitwise_and(img, mask)
    return masked_image

def lane_detection(us1):
    # Only run the camera pipeline when the ultrasonic sensor flags something
    if not us1:
        return
    cap = cv2.VideoCapture(0)
    _, image = cap.read()
    shape = image.shape
    print(shape)
    center_point = (shape[1] / 2, shape[0] / 2)
    plt.imshow(image)
    region_of_interest_vertices = [
        (0, shape[0]),
        (0, shape[0] / 2),
        (shape[1], shape[0] / 2),
        (shape[1], shape[0]),
    ]
    cropped_image = region_of_interest(
        image,
        np.array([region_of_interest_vertices], np.int32),
    )
    plt.figure()
    plt.imshow(cropped_image)
    gray_image = cv2.cvtColor(cropped_image, cv2.COLOR_RGB2GRAY)
    cannyed_image = cv2.Canny(gray_image, 150, 100)
    plt.figure()
    plt.imshow(cannyed_image)
    plt.show()

initialize_pins()
GPIOsetup()

# Switch every output pin off initially
pins = [22, 9, 0, 5, 6, 13, 19, 26, 25, 8, 7, 1, 24, 23, 12, 16, 20, 21]
for i in pins:
    GPIO.output(i, 0)

while True:
    print("value of mode ", mode)
    sleep(1)
    if mode == 1:
        continue               # mode 1: manual MQTT control, skip autonomy
    GPIO.output(en1, 1)        # enable the motor driver
    GPIO.output(en2, 1)
    us1 = distance1()
    us1 = 0 if us1 > 50 else 1  # flag an obstacle closer than 50 cm
    lane_detection(us1)
    sleep(0.2)

FLOW CHARTS:
TESTING AND TEST CASES
Case 1: Object at US2, i.e., in the right direction.
The car should stop and move to the left.
Case 2: Object at IR2 & US2, i.e., in the right and middle directions.
The car should stop and move to the left.
Case 3: Object at US1, i.e., in the left direction.
The car should stop and move to the right.
Case 4: Object at IR2 & US1, i.e., in the left and middle directions.
The car should stop and move to the right.
Case 5: Object at IR2, i.e., in the middle direction.
The car should stop and move to the left or right.
Case 6: Object at US1, IR2 & US2, i.e., in all directions.
The car should stop, move back, and then move to the left or right.
Case 7: Object at US1 & US2, i.e., in the left and right directions.
The car should stop, move back, and then move to the left or right.
Case 8: No object present in any direction.
The car should move forward.
Case 9: Object at IR4.
The car should stop and move forward.
Case 10: Object at IR5.
The car should stop and move forward.
Note: Cases 9 & 10 can only occur when the car is moving backward.

If our model follows the above test cases, the test is successful; otherwise it is not.

LIMITATIONS
1. The lane detection region of interest (ROI), must be flexible. When driving up or
down a steep incline, the horizon will change and no longer be a product of the
proportions of the frame. This is also something to consider for tight turns and
bumper to bumper traffic.

2. Driving at night. The color identification and selection process works very well in day
light. Introducing shadows will create some noisy, but it will not provide as rigorous a
test as driving in night, or in limited visibility conditions (e.g. heavy fog)
CONCLUSION AND FUTURE SCOPE
This project developed a voice-controlled robot which can move without any collision by
sensing the road lane on its course with the help of three ultrasonic distance sensors and a web
camera. Robots guided by this technology can be put to diversified uses, e.g., surveying
landscapes, driverless vehicles, autonomous cleaning, automated lawn mowing and
supervising robots in industries. It is a robot vehicle that runs on a Raspberry Pi and employs
ultrasonic distance sensors to detect obstacles. The Raspberry Pi board was selected as the
controller platform, and its software counterpart, the Python 3 platform, was used to carry out
the programming. The integration of an ultrasonic distance sensor and a web camera provides
higher accuracy in detecting surrounding obstacles. Being a fully autonomous robot, it can
successfully survive in unknown environments without any collision. The hardware used in
this project is widely available and inexpensive, which makes the robot easily replicable.

FUTURE SCOPE:
1. Adding a camera:

If the current project is interfaced with a camera (webcam), the car can be driven beyond line of
sight, and the range becomes practically unlimited since networks have a very large range.

2. Use as a fire-fighting vehicle:

By adding a temperature sensor and a water tank, and making some changes to the programming,
we can use this car as a fire-fighting vehicle.
REFERENCES

1. https://www.google.com/search?q=raspberry+pi+3+b%2B+pin+diagram&client=ms-android-uaweiv1&prmd=isnv&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjBrsy4idrhAhXO8MBHbnLCjIQ_AUoAXoECAsQAQ&biw=360&bih=600

2. https://www.google.com/search?q=raspberry+pi+3+b%2B+diagram&client=ms-android-huawei-rev1&prmd=nisv&source=lnms&tbm=isch&sa=X&ved=2ahUKEwiA3pqDidrhAhWl4nMBHdUaASwQ_AUoAnoECAkQAg&biw=360&bih=600

3. https://www.google.com/amp/s/www.theengineeringprojects.com/2018/07/introduction-to-raspberry-pi-3-b-plus.html/amp

4. https://winscp.net/eng/docs/introduction
