Depth: Depth corresponds to the number of filters we use for the convolution
operation. In the network shown in Figure 7, we perform convolution of the
original boat image using three distinct filters, producing three different
feature maps as shown. You can think of these three feature maps as stacked 2D
matrices, so the ‘depth’ of the feature map would be three.
Stride: Stride is the number of pixels by which we slide our filter matrix over the input
matrix. When the stride is 1, we move the filter one pixel at a time. When the
stride is 2, the filter jumps two pixels at a time as we slide it around. A
larger stride produces smaller feature maps.
Zero-padding: Sometimes it is convenient to pad the input matrix with zeros around
the border, so that we can apply the filter to the bordering elements of our input image
matrix. A nice feature of zero-padding is that it lets us control the size of the
feature maps. Convolution with zero-padding is also called wide convolution, and
convolution without it is called narrow convolution.
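The three notions above combine into the standard output-size formula for a convolution layer: out = (W − F + 2P) / S + 1, where W is the input width, F the filter size, P the zero-padding, and S the stride. A minimal sketch (the helper name and the example sizes are illustrative, not from the notes):

```python
def conv_output_size(w, f, p, s):
    """Spatial size of the feature map after one convolution:
    w = input width, f = filter size, p = zero-padding, s = stride."""
    return (w - f + 2 * p) // s + 1

# 32x32 input, 5x5 filter, no padding, stride 1 -> 28x28 feature map
print(conv_output_size(32, 5, 0, 1))  # 28
# Same input with stride 2 -> a smaller feature map, as noted above
print(conv_output_size(32, 5, 0, 2))  # 14
# Padding of 2 preserves the input size (a "wide" convolution)
print(conv_output_size(32, 5, 2, 1))  # 32
```

The depth of the output is simply the number of filters applied, independent of this spatial formula.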
A CNN PRODUCES STACKED FEATURE MAPS BY CONVOLVING THE IMAGE WITH MULTIPLE FILTERS
Trolley problem:
A moving trolley is on a track with five people on it, and a side track onto which it can be
diverted has one person (the trolley can't be stopped). Which track would the AI choose?
Extra research:
The self-driving car requires many sensors on the vehicle that enable the complex software
stack to do its job of replicating the human control function safely.
At the same time that these sensors accurately perceive and understand the vehicle’s
surroundings in real time, they must also localize the position and direction
(also referred to as heading) of the vehicle with far greater precision and reliability than
traditional car navigation systems (like GPS) can achieve.
Consider for a moment a truck travelling down an interstate freeway at 80 mph. Staying
safely centred in its lane requires lateral positional control on the order of 30 centimetres. The
truck is also travelling more than 3,500 centimetres every second. An error as small as 0.2
degrees in heading or direction will cause the vehicle to drift left or right by those 30
centimetres in just a few seconds.
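As a sanity check on these numbers, a few lines of arithmetic (the 30 cm tolerance and 0.2° heading error come from the text; the mph-to-cm/s conversion is standard):

```python
import math

mph_to_cm_s = 160934.4 / 3600   # 1 mph in cm/s (1 mile = 160,934.4 cm)
speed = 80 * mph_to_cm_s        # truck speed, cm/s

heading_error_deg = 0.2
# Lateral drift rate induced by a constant heading error:
drift_rate = speed * math.sin(math.radians(heading_error_deg))  # cm/s
time_to_30cm = 30.0 / drift_rate  # seconds until 30 cm of lateral drift

print(round(speed))            # 3576 cm/s
print(round(time_to_30cm, 1))  # 2.4 s
```

So at highway speed, a 0.2° heading error consumes the entire 30 cm lane tolerance in roughly two and a half seconds.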
To continuously maintain these tight tolerances, the navigation function in automated vehicle
control systems relies on a wide range of sensors and data sources.
These sources include the vehicle’s vision sensors (such as LIDAR and cameras), lower-
resolution ranging sensors (like radar and ultrasound), and classic navigation techniques (like
GPS and maps). However, each of these sensors depends on the external environment, and
hence can experience data loss or degradation with little warning. For example, in snowy
conditions a LIDAR’s effective range and resolution are reduced. In downtown urban areas,
GPS can suffer from severe multi-path errors and frequent outages.
The function of one or more inertial measurement unit (IMU) sensors on the vehicle is to
provide a source of accurate short-term position and heading information to mitigate these
environmental challenges, ensuring safe control of the vehicle at all times.
Figure 1: Inertial Measurement Unit (IMU) technology helps autonomous vehicles achieve
this precision localization. ACEINNA IMU381ZA 9-Axis Precision IMU.
Dead-reckoning places tremendous demands on IMU accuracy and requires careful design
choices related to both algorithm design and sensor selection. Traditionally, systems capable
of dead-reckoning have been called Inertial Navigation Systems (INS), and also go by the names
Inertial Reference System (IRS), GPS/INS, or Enhanced GPS/INS (EGI).
Today, these systems are commonly found in aerospace and defense applications. Not only
do these systems typically cost $10,000 or more per unit, but they generally work exclusively
with GPS for navigation, as opposed to the broader set of sensors in today's autonomous
vehicle and ADAS architectures. Hence, many designs require direct use of IMU data in a
sensor fusion algorithm that blends LIDAR, camera, and radar as well as GPS data into a
navigation state estimate.
An Automated Left-Turn
One common use case for the IMU is to help reliably navigate intersections. Street
intersections, with their frequent lack of lane markers and wide-open spaces, can challenge
vision systems. Furthermore, in urban environments there may not be good GPS data
available, yet crossing or turning through an intersection safely is fundamental to automated
driving. Using this practical use case as motivation, the remainder of this article summarizes
error modeling, simulation, and empirical testing techniques to validate IMU accuracy and
performance for this application.
Figure 3: How IMU is used to Dead-Reckon - Free Integration Algorithm. (Image Source:
Researchgate.net)
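The free-integration dead-reckoning in Figure 3 can be sketched, under strong simplifying assumptions (planar motion, bias-free sensors, no gravity compensation), as a loop that integrates the gyro yaw rate into heading, rotates body-frame acceleration into the navigation frame, and integrates twice into position. The function name and sample data below are illustrative, not taken from the figure:

```python
import math

def dead_reckon(samples, dt, x=0.0, y=0.0, vx=0.0, vy=0.0, heading=0.0):
    """Planar free integration: each sample is (ax, ay, yaw_rate) in the
    body frame (m/s^2, rad/s); returns the final (x, y, heading) estimate."""
    for ax_b, ay_b, yaw_rate in samples:
        heading += yaw_rate * dt                 # integrate gyro -> heading
        c, s = math.cos(heading), math.sin(heading)
        ax = c * ax_b - s * ay_b                 # rotate accel into nav frame
        ay = s * ax_b + c * ay_b
        vx += ax * dt; vy += ay * dt             # integrate accel -> velocity
        x += vx * dt;  y += vy * dt              # integrate velocity -> position
    return x, y, heading

# One second of constant 1 m/s^2 forward acceleration, no turning,
# sampled at 100 Hz: the vehicle ends up roughly 0.5 m ahead.
samples = [(1.0, 0.0, 0.0)] * 100
x, y, h = dead_reckon(samples, dt=0.01)
print(round(x, 3), round(y, 3))
```

Because each integration step compounds sensor error, this open-loop scheme drifts quickly; that is exactly why the article emphasizes IMU accuracy and blending the dead-reckoned estimate with GPS, LIDAR, camera, and radar data.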