
CarSim, TruckSim, BikeSim

Simulation of the vehicle dynamics for SIL, HIL, and Driving Simulators

Highlights
 High-fidelity vehicle dynamics models for virtual testing
 Includes extensive portfolios of example vehicles and test maneuvers
 Easily extended using Simulink models or the integrated VS Commands toolbox
 Fast execution for HIL and driving simulator applications
 Proven compatibility with over 10 HIL systems
 Integrated plotting and animation capabilities
Description
CarSim, BikeSim, and TruckSim software tools simulate and animate dynamic tests of
cars, motorcycles, scooters, racecars, and trucks, using standard Windows PCs. The
math models, based on 30 years of university research in vehicle dynamics, simulate
braking, handling, ride, stability, and acceleration with high fidelity. Core models can be
extended with other software such as Simulink. The built-in VS Command
language adds a powerful run-time programming capability: users can write
code that runs inside the solver to define new variables, import and export
variables, perform basic math operations (including branching), or even add new
differential equations. Real-time versions support hardware-in-the-loop (HIL)
testing. Engineers use driving simulators with CarSim to feel subtle aspects of
performance.
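VS Commands has its own proprietary syntax, so the following Python sketch only illustrates the concept of a run-time solver extension; every name here is a hypothetical illustration, not actual VS Commands or CarSim API code.

```python
# Conceptual sketch (in Python, NOT VS Commands syntax) of what a
# run-time solver extension does: at each time step, user code reads
# solver variables, defines new ones, branches, and integrates an
# extra differential equation. All names are hypothetical.

def user_extension(state, dt):
    """Add derived variables and one extra ODE to a solver step."""
    # New variable derived from an existing solver output
    state["speed_kph"] = state["speed_ms"] * 3.6

    # Simple branching, as user code might do
    state["braking"] = state["decel_ms2"] > 0.5

    # Extra first-order ODE: d(brake_temp)/dt = k * speed (Euler step)
    k = 0.01
    state["brake_temp"] += k * state["speed_ms"] * dt
    return state

# One simulated solver step
state = {"speed_ms": 20.0, "decel_ms2": 1.0, "brake_temp": 50.0}
state = user_extension(state, dt=0.001)
```

In the real product this logic would be expressed in VS Commands and executed by the solver itself rather than in an external script.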

Engineers at OEMs use our software to prove initial concepts, select components,
accelerate physical testing programs, and analyze proposed and existing vehicles.
Suppliers test components on vehicles that are not yet available. Easy sharing of data
sets and animations via e-mail facilitates joint development programs between OEMs and
suppliers. The software is easy to use, so a growing number of engineers rely on
Mechanical Simulation products to make better decisions about vehicle behavior
more quickly.

Ideal for use with Computer Vision Toolbox and Image Processing Toolbox for
advanced driver assistance system (ADAS) and autonomous vehicle development and
testing.
Image Processing Toolbox
Perform image processing, analysis, and
algorithm development
Image Processing Toolbox™ provides a comprehensive set of reference-
standard algorithms and workflow apps for image processing, analysis,
visualization, and algorithm development. You can perform image
segmentation, image enhancement, noise reduction, geometric
transformations, image registration, and 3D image processing.

Image Processing Toolbox apps let you automate common image processing
workflows. You can interactively segment image data, compare image
registration techniques, and batch-process large data sets. Visualization
functions and apps let you explore images, 3D volumes, and videos; adjust
contrast; create histograms; and manipulate regions of interest (ROIs).
You can accelerate your algorithms by running them on multicore processors
and GPUs. Many toolbox functions support C/C++ code generation for desktop
prototyping and embedded vision system deployment.
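To make the kind of operation described above concrete, here is a minimal 3x3 mean filter written in plain Python. It is only a conceptual analogue of the toolbox's optimized noise-reduction functions, not toolbox code.

```python
# Minimal 3x3 mean (box) filter: each output pixel is the average of
# its neighborhood, which spreads out isolated noise spikes.
def mean_filter(img):
    """Smooth a 2D grayscale image (list of lists of numbers)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Neighborhood clipped at the image borders
            vals = [img[yy][xx]
                    for yy in range(max(0, y - 1), min(h, y + 2))
                    for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

noisy = [[0, 0, 0],
         [0, 9, 0],
         [0, 0, 0]]
smooth = mean_filter(noisy)  # the isolated spike is averaged away
```

The toolbox provides far more sophisticated filters, but the principle of replacing each pixel by a statistic of its neighborhood is the same.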

Computer Vision Toolbox


Design and test computer vision, 3D
vision, and video processing systems
Computer Vision Toolbox™ provides algorithms, functions, and apps for
designing and testing computer vision, 3D vision, and video processing
systems. You can perform object detection and tracking, as well as feature
detection, extraction, and matching. For 3D vision, the toolbox supports single,
stereo, and fisheye camera calibration; stereo vision; 3D reconstruction; and
lidar and 3D point cloud processing. Computer vision apps automate ground
truth labeling and camera calibration workflows.

You can train custom object detectors using deep learning and machine
learning algorithms such as YOLO v2, Faster R-CNN, and ACF. For semantic
segmentation you can use deep learning algorithms such as SegNet, U-Net,
and DeepLab. Pretrained models let you detect faces, pedestrians, and other
common objects.

You can accelerate your algorithms by running them on multicore processors


and GPUs. Most toolbox algorithms support C/C++ code generation for
integrating with existing code, desktop prototyping, and embedded vision
system deployment.
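When evaluating object detectors like those mentioned above, the standard overlap measure between a predicted and a ground-truth bounding box is intersection-over-union (IoU). The sketch below is a generic Python implementation of that metric, not toolbox code.

```python
# Intersection-over-union (IoU) between two axis-aligned boxes,
# the usual score for judging object-detection accuracy.
def iou(a, b):
    """Boxes given as (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# Two 10x10 boxes overlapping by half
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

A detection is typically counted as correct when its IoU with a labeled box exceeds a threshold such as 0.5.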

Moving Objects and ADAS Sensors


BikeSim, CarSim, and TruckSim include up to 200 objects whose locations and motions are
independent of the simulated vehicle. These objects can represent other vehicles, fixed objects (trees,
buildings), pedestrians, animals, paint markings, and other objects of interest for
applications involving advanced driver assistance systems (ADAS).

VS Math Models also support ADAS sensors (camera, ultrasound, etc.), each of which
detects the target objects.
Moving objects can mimic traffic vehicles for use with ADAS sensors.

Traffic and Target Objects


A VS Moving Object is something with a location and orientation that might be of interest when
simulating a vehicle in a VehicleSim product. As a minimum, the object is represented by a set of
variables that define a location and orientation for animation or communication with other software.
When combined with ranging sensors, the object becomes a target that can be detected.

The objects can be recycled: when they pass beyond a specified distance range they
disappear and then reappear in a new location. This capability raises the apparent
number of objects in some scenarios well above 200.
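The recycling idea described above can be sketched as follows. This is an illustrative Python model, not product code, and the parameter names and respawn rule are assumptions.

```python
import random

# Sketch of object recycling: when an object drifts outside a distance
# window around the ego vehicle, respawn it ahead so a small pool of
# objects appears much larger. All names and limits are illustrative.
def recycle_objects(objects, ego_s, max_range, spawn_ahead=(100.0, 300.0)):
    """objects: list of station coordinates along the road (meters)."""
    for i, s in enumerate(objects):
        if abs(s - ego_s) > max_range:
            # Respawn at a new station ahead of the ego vehicle
            objects[i] = ego_s + random.uniform(*spawn_ahead)
    return objects

traffic = [0.0, 450.0, 900.0]
traffic = recycle_objects(traffic, ego_s=500.0, max_range=400.0)
```

Here only the object that fell 500 m behind is respawned; the two within range keep their positions.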

Two screens in the VS Browser provide settings for common forms of object behavior.

Advanced users can control the motions with external models (e.g., Simulink) or VS Commands. Each
object can be located in the XY plane using either X and Y global coordinates, or using a station
coordinate S along with a specified Reference Path ID. Vertical information can be set directly, or
based on S and L coordinates for a specified VS Road.
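The station-coordinate placement described above amounts to mapping a distance S along a reference path, plus a lateral offset L, to global X and Y. The sketch below does this for a polyline path; the function name and path representation are illustrative, not the VS Road format.

```python
import math

# Convert a station coordinate S plus lateral offset L along a
# reference path (here a simple polyline) into global X, Y.
def station_to_xy(path, s, l=0.0):
    """path: list of (x, y) waypoints; s: distance along the path;
    l: offset to the left of the travel direction."""
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if s <= seg:
            t = s / seg
            x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            # Unit normal pointing left of the travel direction
            nx, ny = -(y1 - y0) / seg, (x1 - x0) / seg
            return (x + l * nx, y + l * ny)
        s -= seg
    return path[-1]  # past the end of the path: clamp to last waypoint

# 150 m along an L-shaped path, offset 2 m to the left
pos = station_to_xy([(0, 0), (100, 0), (100, 100)], s=150.0, l=2.0)
```

The same S value can place many objects consistently along one shared path, which is why path-relative placement is convenient for traffic scenarios.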
The sensing of object points takes occlusion (blocking) into account using rectangular or circular shapes.
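For the circular case, the occlusion test reduces to checking whether the line of sight from sensor to target passes through another object's circle. The following Python sketch shows one standard way to do that (segment-to-circle distance); it is an assumption about the geometry, not the product's actual algorithm.

```python
import math

# Occlusion test for circular object shapes: a target point is blocked
# if the sensor-to-target segment passes through a blocking circle.
def occluded(sensor, target, blocker_center, blocker_r):
    """All points are (x, y) tuples; returns True if blocked."""
    sx, sy = sensor
    tx, ty = target
    cx, cy = blocker_center
    dx, dy = tx - sx, ty - sy
    length2 = dx * dx + dy * dy
    # Project the circle center onto the segment, clamped to [0, 1]
    t = max(0.0, min(1.0, ((cx - sx) * dx + (cy - sy) * dy) / length2))
    px, py = sx + t * dx, sy + t * dy
    return math.hypot(cx - px, cy - py) < blocker_r

hidden = occluded((0, 0), (10, 0), blocker_center=(5, 0.5), blocker_r=1.0)
visible = occluded((0, 0), (10, 0), blocker_center=(5, 3.0), blocker_r=1.0)
```

Rectangular shapes need a segment-versus-rectangle test instead, but the principle of intersecting the line of sight with the blocker's outline is the same.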

ADAS Sensors
BikeSim, CarSim, and TruckSim support up to 99 ADAS sensors that interact with the moving objects.
An extended license is needed in order to use ADAS sensors. However, moving objects are available
with any license.

Each sensor is fixed to a part of the simulated vehicle, with a designated aiming
direction, radiation (sensitivity) pattern, and detection range. The sprung mass is the
default mounting location in all
VehicleSim products. Sensors in CarSim and TruckSim vehicles can alternatively be mounted in trailer
sprung masses if they exist; sensors in BikeSim can alternatively be mounted on the steering head.

The main outputs of interest are variables that link a sensor to a detected object. Each possible
combination of sensor and target object has an associated set of output variables that can be used in
user-defined models to simulate advanced intervention controls. These include bearing angles to the
left edge, right edge, and closest point; distances to the left edge, right edge, and closest point; ID of
object; relative speed to the closest point; width; coordinates in the sensor X-Y-Z coordinate system;
relative speeds in the sensor X and Z directions; etc.
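Two of the outputs listed above, range and bearing to a target point, can be sketched as follows. This is an illustrative Python computation in a 2D sensor frame; the function and variable names are assumptions, not the product's output variable names.

```python
import math

# Sketch of two sensor outputs: range and bearing from a sensor to a
# target point, expressed relative to the sensor's aiming direction.
def sensor_outputs(sensor_xy, sensor_heading, target_xy):
    """Return (range_m, bearing_rad) of the target in the sensor frame.
    sensor_heading is the aiming direction in radians, global frame."""
    dx = target_xy[0] - sensor_xy[0]
    dy = target_xy[1] - sensor_xy[1]
    rng = math.hypot(dx, dy)
    # Bearing measured from the sensor's aiming direction
    bearing = math.atan2(dy, dx) - sensor_heading
    return rng, bearing

# Target 10 m ahead and 10 m to the left of a forward-aimed sensor
rng, bearing = sensor_outputs((0.0, 0.0), 0.0, (10.0, 10.0))
```

Evaluating this for each sensor-target pair at every time step yields exactly the kind of per-pair variable set the text describes, which user-defined models can then consume.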

Moving objects (e.g., red and green spheres) are located using Reference Paths and LTARG lane definitions.
A group of moving objects can be set up with built-in options. They are automatically linked to animation shapes,
initial positions, and motion equations.
