Trimble
eCognition® Essentials
Version 1.3.1
July 2016
www.eCognition.com
Trimble Documentation
eCognition Essentials 1.3
User Guide
Imprint and Version
Document Version 1.3.1
Copyright © 2016 Trimble Germany GmbH. All rights reserved. This document may be copied and
printed only in accordance with the terms of the Frame License Agreement for End Users of the
related eCognition software.
Published by:
Trimble Germany GmbH, Arnulfstrasse 126, D-80636 Munich, Germany
Phone: +49–89–8905–710
Fax: +49–89–8905–71411
Web: www.eCognition.com
Dear User,
Thank you for using eCognition software. We appreciate being of service to you with image analysis
solutions. At Trimble we constantly strive to improve our products. We therefore appreciate all
comments and suggestions for improvements concerning our software, training, and
documentation. Feel free to contact us via the web form on www.eCognition.com/support. Thank
you.
Legal Notes
Trimble® and eCognition® are registered trademarks of Trimble Germany GmbH in Germany and
other countries. All other product names, company names, and brand names mentioned in this
document may be trademark properties of their respective holders.
Protected by patents EP0858051; WO0145033; WO2004036337; US 6,832,002; US 7,437,004; US
7,574,053 B2; US 7,146,380; US 7,467,159 B; US 7,873,223; US 7,801,361 B2.
Acknowledgments
Portions of this product are based in part on third-party software components:
eCognition Developer © 2016 Trimble Germany GmbH, Arnulfstrasse 126, 80636 Munich, Germany.
All rights reserved.
The Visualization Toolkit (VTK) © 1993–2006 Ken Martin, Will Schroeder, Bill Lorensen. All rights
reserved.
Insight Segmentation and Registration Toolkit (ITK) © 1999–2003 Insight Software Consortium. All
rights reserved.
1 Overview 4
1.1 What is eCognition Essentials? 4
1.2 Image Analysis with eCognition Essentials 4
1.3 Key Features 5
2 Glossary 6
3 User Interface Components 7
3.1 Default View 7
3.1.1 Analysis Builder 7
3.1.2 Analysis Builder Toolbar 8
3.1.3 Legend 12
3.1.4 Thematic Layer Attribute Table 12
3.1.5 Results Panel 12
3.1.6 Sample Information 12
3.1.7 Report 13
3.1.8 Change Detection: Image vs. Image 13
3.1.9 Context menu 14
3.2 Menus 14
3.2.1 File Menu 14
3.2.2 View Menu 15
3.2.3 Help Menu 16
4 Workflow and Actions 19
4.1 Overview 19
4.2 Workflow 20
4.2.1 Configuration on subregion 20
4.2.2 Incremental improvement of classifier 21
4.3 Actions 22
4.3.1 Create / Modify Project 22
4.3.2 Multiresolution Segmentation 24
4.3.3 Threshold Segmentation | Classification 25
4.3.4 Vector Based Segmentation 26
4.3.5 Change Detection: Image vs. Image 26
4.3.6 Change Detection: Object vs. Vectors 28
4.3.7 Create Vector Layer 29
4.3.8 Supervised Classification 29
4.3.9 Manual Editing 32
4.3.10 Object Merge 33
4.3.11 Minimum Mapping Unit 33
4.3.12 Smooth Objects 34
4.3.13 Accuracy Assessment 34
4.3.14 Create Report 35
4.3.15 Export 35
1 Overview
1.1 What is eCognition Essentials?
eCognition Essentials is a software product for remote sensing users working with satellite imagery,
designed to transform image data into intelligence in a timely and affordable manner.
It solves satellite image analysis tasks without requiring the user to get involved with the
sophisticated rule set development of the eCognition development platform. The complex and
sophisticated image analysis routines implemented in eCognition are wrapped into an
easy-to-use tool that guides the user through semi-automated analysis workflows.
1.2 Image Analysis with eCognition Essentials
l Create custom layers such as NDVI, NDSI, NDWI, and NDSM - Create / Modify Project, page 22
l Choose the resolution to work on and if necessary a region of interest - Create / Modify Project,
page 22
l Create objects by segmenting image data into regions of similar spectral properties -
Multiresolution Segmentation, page 24
l Create objects by segmenting image data into regions according to a vector layer - Vector
Based Segmentation, page 26
l Classify changes by comparing a pair of images - Change Detection: Image vs. Image, page 26
l Classify changes by comparing classified objects and a vector layer - Change Detection: Object
vs. Vectors, page 28
l Create temporary vector layers from the classified image - Create Vector Layer, page 29
l Train a classifier using manual sample selection, samples based on thematic data, or a sample
statistics table - Supervised Classification, page 29
l Apply your classifier to the complete scene - Supervised Classification, page 29
l Classify objects based on thresholds - Threshold Segmentation | Classification, page 25
l Manually edit the result of the automatic classification - Manual Editing, page 32
l Merge objects of the same class - Object Merge, page 33
l Remove small objects - Minimum Mapping Unit, page 33
l Smooth objects - Smooth Objects, page 34
l Verify the accuracy of your classification using thematic validation data - Accuracy Assessment,
page 34
l Export final classification results and temporary vectors to a GIS layer or export temporary
raster images - Export, page 35
For each of these steps, one or several actions are available that can be configured depending on the
image analysis task. By dividing the task of creating a GIS layer into clearly defined subtasks (each
reflected in a distinct action), it is possible to optimize the configuration of one small step at a time
and arrive at satisfying results without any deeper knowledge of image analysis algorithms.
1.3 Key Features
l Easy-to-use image analysis software covering the main steps in analyzing satellite images:
Creating objects, classifying objects, smart object-based refinement of results, information on
accuracy and easy export of results into GIS formats.
l Use of eCognition's unique Multiresolution Segmentation method in an easy-to-use
environment. It is the first and most successful segmentation technique in the
geographic object-based image analysis (GEOBIA) framework.
l Availability of various common classification methods including SVM, Nearest Neighbor, CART,
Random Trees and Bayes in a single software solution.
2 Glossary
Action: Actions are the building blocks of an eCognition Essentials image analysis. Each action
achieves a clearly defined result, which the user can optimize by configuring the software.
Classification: A basic principle of the eCognition suite is the classification of image objects. In
eCognition Essentials threshold based classification as well as supervised classification are available.
Image Layer: In eCognition an image layer is the most basic level of information contained in a raster
image. All images contain at least one image layer.
Image Object: An image object is a defined group of neighboring pixels created by a segmentation
(see Multiresolution Segmentation, page 24).
Index Layer: In eCognition Essentials you can generate indices such as NDVI, NDSI and NDWI (see
Create / Modify Project, page 22).
Project: An eCognition Essentials project consists of image layers, thematic layers if available and all
actions. It reflects the current state of the configuration.
Raster Layer: Raster layers generally refer to Image Layers (see above) and include, for example,
panchromatic or multispectral information.
Resolution: To speed up processing, the resolution of image data can be reduced before analyzing
(see Create / Modify Project, page 22).
ROI: In eCognition Essentials the analysis can be limited to a certain region of interest. The region of
interest must be defined as a set of polygons on a thematic layer (see Create / Modify Project, page
22).
Segmentation: During segmentation, image pixels are grouped into objects. eCognition Essentials
uses powerful Multiresolution Segmentation (see Multiresolution Segmentation, page 24).
Subsets: To reduce the time for configuration of actions, eCognition Essentials offers the option to
set up the whole workflow on subsets and then apply it to the whole scene afterwards (see
Workflow and Actions, page 19).
Thematic Layer: Thematic layers are raster or vector files that have associated attribute tables.
Vector Layer: Thematic vector layer containing polygons, lines or points.
If you press the Send current workflow to server button, the Submit to server dialog opens:
Monitor: Opens a web-based monitoring overview that contains information on the submitted user
jobs, used engines, log files and configuration settings. The same dialog can be opened using a
double-click in the status bar (see figure below).
Manage local server section:
Number of Engines: Insert the number of engines to be used.
Start / Stop: Starts or stops the server.
Output folder: The default output folder can be changed in the Create / Modify Project action.
Choose either Normal cursor, Panning, Zoom in, Zoom out, Area zoom or Zoom to window. Note
that you can also adjust the zoom using the context menu (see Context menu, page 14).
Switch between visualization of image object outlines, classification or classification with outlines:
l Outlines display: Available after first segmentation when objects were created. Image object
outlines are shown in blue. Selected image objects have a red outline color.
l Classification display: Available after first classification with the class color shown in a semi-
transparent overlay on the selected image data.
l Outlines and classification display: Available for classified image objects, with black object
outlines and classification overlay.
Previous layer: Switch to previous image layer display of the image layer drop-down menu.
Next layer: Switch to the following image layer of the image layer drop-down menu.
Drop-down menu: Select the image layer of the active view. After assignment of image layers to their
respective bands, RGB and CIR visualization is possible. In addition, custom layers can be calculated
and visualized (NDVI, NDSI, NDWI, and NDSM). Image layer mixing can also be customized: if you
select Customize layer mixing and equalizing, the following Edit Image Layer Mixing dialog opens:
Figure 3.6. Edit Image Layer Mixing dialog box. Changing the layer mixing and equalizing
options affects the display of the image only
You can define the color composition for the visualization of image layers for display. In addition,
you can choose from different equalizing options. This enables you to better visualize the image and
to recognize the visual structures without actually changing them. You can also choose to hide
layers, which can be very helpful when investigating image data and results.
NOTE: Changing the image layer mixing only changes the visual display of the image but not the
underlying image data – it has no impact on the process of image analysis.
l Define the display color of each image layer. For each image layer you can set the weighting of
the red, green and blue channels. Your choices can be displayed together as additive colors in
the map view. Any layer without a dot or a value in at least one column will not display.
l Choose a layer mixing preset:
l (Clear): All assignments and weighting are removed from the Image Layer table
l One Layer Gray displays one image layer in grayscale mode with the red, green and blue
together
l False Color (Hot Metal) is recommended for single image layers with large intensity ranges. It
displays the image in a color range from black over red to white
l False Color (Rainbow) is recommended for single image layers. Here, the regular color range is
converted to a color range between blue for darker pixel intensity values and red for brighter
pixel intensity values
l Three Layer Mix displays layer one in the red channel, layer two in green and layer three in
blue
l Six Layer Mix displays additional layers
l Change these settings to your preferred options with the Shift button or by clicking in the
respective R, G or B cell. One layer can be displayed in more than one color, and more than one
layer can be displayed in the same color.
l Individual weights can be assigned to each layer. Clear the No Layer Weights check-box and
click a color for each layer. Left-clicking increases the layer’s color weight while right-clicking
decreases it. The Auto update checkbox refreshes the view with each change of the layer mixing
settings. Clear this check box to show the new settings after clicking OK. With the Auto update
check box cleared, the Preview button becomes active.
l Compare the available image equalization methods and choose one that gives you the best
visualization of the objects of interest.
l Click the Parameter button to change the equalizing parameters, if available.
Figure 3.7. Layer Mixing presets (from left to right): One-Layer Gray, Three-Layer Mix, Six-Layer
Mix
Center at previous or next ROI (only active if more than one region of interest is defined).
3.1.3 Legend
This window shows all active classes.
To edit the color and/or name of a class, double-click it.
3.1.6 Sample Information
l insert samples using Load / Create samples > Sample selection mode > Add manually
l load samples using Load / Create samples > Sample selection mode > Add from statistics table
This dialog shows the applied feature space. Furthermore, it gives information on the number of
objects per class selected manually/local samples (column No. of objects) and those loaded via the
statistics table (column No. of table entries).
3.1.7 Report
This window appears after executing the Create Report action and shows the object statistics that
have been calculated and exported.
3.2 Menus
3.2.1 File Menu
Data Marketplace
Trimble Data Marketplace is a service that allows you to search for and download geospatial
datasets (maps) in TIF (.tif) or Shapefile (.shp) format that you can import into eCognition. Different
types of data sets are available from a variety of sources, including United States Geological Survey
(USGS), DigitalGlobe's Precision Aerial Imagery, Open Street Map, TIGER/Line GIS Data, Intermap,
and more.
Please refer to the Data Marketplace online help for detailed instructions.
Select File > Data Marketplace to display the Data Marketplace window allowing you to select the
data set you want to order for download.
To select an area do either of the following:
User Information
Display of user name and option to insert company and copyright information.
Exit
Select Exit to close eCognition Essentials.
Analysis Builder
Open and close the Analysis Builder window (see Analysis Builder, page 7).
Legend
Open and close the Legend window ( see Legend, page 12).
System Info
This dialog provides information about the software version and build, loaded portals and libraries,
configuration settings, memory usage, drivers and licensing settings.
Release Notes
Opens the eCognition Essentials Release Notes.
Help Keyboard
This window provides information about available keyboard commands.
Options
Opens the Options window, which allows the definition of user preferences.
General
Check for updates at startup: Yes (default) checks for software updates at startup; No disables the
update check.
Participate in customer feedback program: Yes joins the customer feedback program (for details see
the chapter below); No means no participation.
Output Format
CSV
The CFP collects detailed information only about used buttons, algorithms, features and
dialogs/windows, and summary information about the computer it is running on (i.e. OS, RAM,
screen size, etc.). No information is collected about other applications, whether running or merely
installed.
The data gathered from your participation in the CFP is only accessed by the Trimble eCognition
development team and its affiliated employees. Data is used solely by Trimble eCognition Software.
You can change your decision at any time by selecting Customer Feedback Options from within
eCognition. If you opt out, data stops being submitted within seconds.
The CFP data does not include your name, address, phone number, or other contact information.
The CFP generates a globally unique identifier on your computer to uniquely identify it. This is
randomly assigned and does not contain any personal information. This allows us to see continuity
of issues on a single system without requiring other identifiers.
The host name of your computer and the Windows user name of the user running the affected
application are recorded and sent. To the extent that these individual identifiers are received, we do
not use them to identify or contact you. If information gathered from the CFP is ever published
beyond the authorized users, it is as highly derived, summary data that cannot be related to a
specific user or company.
About
Opens the About information window.
To add an action click on the plus symbol located on the left side of the action bar.
Figure 4.1. Action with plus and minus symbol and small arrows
Remove an action by clicking on the minus symbol on the right side of the action bar.
Move actions by clicking on the small arrows on the right side of the action bar or by dragging the
action to the desired position. Note that the action Create / Modify Project always has to remain in
the first position.
4.2 Workflow
The approach in eCognition Essentials is to go step-by-step through the actions of your solution.
You should always start with the mandatory action Create / Modify Project.
As soon as the necessary steps of an action are completed, a green checkmark appears on the right
side of the action bar.
To change some of the settings you need to click the undo button first or alternatively click the
Revert selected action button in the Analysis Builder Toolbar. All subsequent actions are reverted,
too.
Overview
When you want to process large images with high resolution, Trimble strongly recommends
following a workflow in which you configure your analysis steps first on one or more small
subregions. Only when you have optimized all analysis steps on those subregions should you select
the Run until selected action button, which then executes all actions on the complete scene without
the need for any further user interaction.
A) Configuration on subsets
on the number of samples you selected and opens as soon as you select a class for sample
selection.
B) Incremental improvement of classifier
Then save your configuration (Analysis Builder Toolbar), close the project and create a new project
with a new but comparable scene.
On this new scene, reload the configuration via the Analysis Builder Toolbar, select the Supervised
Classification action and press the Run until selected action toolbar button.
As you can see in the sample info window, the samples that you selected on the previous project
are now treated as external samples (not represented by objects in your project). To improve the
classification result achieved with the classifier trained on the initial scene, you can manually add
local samples on the new scene, and improve your classifier iteratively. Then save your
configuration again and proceed to the next scenes.
Please note that the incremental improvement of the classifier is only supported for object-based
classifiers, but not layer-based classifiers.
4.3 Actions
4.3.1 Create / Modify Project
Create / Modify Project is a mandatory action. The main purpose is to insert image and thematic
layers into a project and to define specific project settings. File operations such as saving, loading or
closing a project are also possible within this action as is selecting the resolution for the analysis.
This action is completed as soon as image layers have been imported. Depending on the availability
of spectral bands, index generation is recommended.
Project data
New project: Select the image and thematic files to be loaded to your project. This is the first step
when working with eCognition Essentials. Visualize the layers using the Analysis Builder Toolbar,
page 8.
Add layer: Select additional image or thematic layers. Note that adding image files reverts your
already configured and completed analysis steps (for example, adding image layers reverts the
segmentation step). This is not the case for thematic layers. However, Trimble recommends loading
thematic layers containing sample information or reference data during this first step, even though
they will be used only in subsequent actions.
Load project: Reloads a previously saved project, including solution and configuration status. Note
that projects generated with other eCognition products cannot be loaded into eCognition
Essentials.
Save project: Saves the current project, actions and configuration status.
Close project: Closes the current project. Note that before creating a new project the current
project must be closed.
Results Folder
Folder: Browse to select the folder for results. Default name and location is
C:\Users\UserName\Desktop\eCognitionResults.
Resolution
Resolution for analysis: Analysis speed is considerably increased if you work on a reduced image
resolution.
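The effect of a reduced analysis resolution can be illustrated with a simple block-averaging downsample. This is a sketch in Python/NumPy; the averaging approach is an assumption for illustration, as eCognition's actual resampling method is not documented here:

```python
import numpy as np

def reduce_resolution(layer, factor):
    """Downsample an image layer by averaging factor x factor pixel blocks."""
    h, w = layer.shape
    h, w = h - h % factor, w - w % factor   # crop to a multiple of the factor
    blocks = layer[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

full = np.arange(16, dtype=float).reshape(4, 4)
half = reduce_resolution(full, 2)   # 4x4 layer -> 2x2 layer
```

Working at half resolution reduces the pixel count by a factor of four, which is where the speed-up comes from.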
NDSM generation
DTM: Select the image layer corresponding to the DTM.
DSM: Select the image layer corresponding to the DSM.
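An NDSM (normalized digital surface model) is the per-pixel difference between the surface model and the terrain model, i.e. object heights above ground. A minimal sketch follows; clipping negative values to zero is an illustration choice, not documented eCognition behavior:

```python
import numpy as np

dsm = np.array([[10.0, 12.0], [11.0, 10.5]])  # surface model (terrain + objects)
dtm = np.array([[10.0, 10.0], [10.0, 10.0]])  # bare-earth terrain model

# NDSM: height of objects above ground; negative noise clipped to zero (assumption)
ndsm = np.clip(dsm - dtm, 0.0, None)
```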
Spectral bands
The benefit of spectral band assignment is two-fold:
l Possibility to visualize image data in RGB and CIR display (see Analysis Builder Toolbar, page 8).
l Generation of index layers for improved visualization, spectral information extraction and
classification results.
Use spectral bands: Trimble recommends using For display and index layer creation. This enforces
the index generation as soon as the respective spectral bands are assigned (otherwise, the action is
not configured and does not get a green checkmark). However, for very large image data index
generation can be time consuming. To skip this step for prototyping purposes select For display
only.
Blue: Assigns the image layer corresponding to the blue band.
Green: Assigns the image layer corresponding to the green band.
Red: Assigns the image layer corresponding to the red band.
NIR: Assigns the image layer corresponding to the NIR band.
Run
Display RGB: RGB visualization of image layers, activated after assignment of the three layers in
section Spectral Bands.
Create Layers: Click to generate the NDVI, NDSI, NDWI and NDSM layers. Once generated, these
layers can be displayed using the Analysis Builder Toolbar. They are also used internally by the
software to achieve an optimal classification.
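The generated index layers follow the standard normalized-difference pattern, for example NDVI = (NIR − Red) / (NIR + Red) and NDWI = (Green − NIR) / (Green + NIR). A minimal sketch is shown below; the epsilon guard against division by zero is an illustration choice, and NDSI classically also requires a SWIR band not listed in the band assignment above:

```python
import numpy as np

def normalized_difference(a, b, eps=1e-12):
    """Generic normalized-difference index: (a - b) / (a + b)."""
    return (a - b) / (a + b + eps)

red   = np.array([[0.2, 0.4]])
nir   = np.array([[0.6, 0.4]])
green = np.array([[0.3, 0.3]])

ndvi = normalized_difference(nir, red)    # high for vegetation
ndwi = normalized_difference(green, nir)  # high for open water (McFeeters)
```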
Multiresolution Segmentation is an action that can be inserted multiple times. Its main purpose is
to combine regions of similar spectral information into meaningful image objects.
Selecting Execute creates image objects based on the current settings. Thematic layers are not used
in the segmentation. The action is completed after the generation of image objects.
(Starting with version 1.1, Multiresolution Segmentation uses up to 4 cores for parallel processing,
thus increasing performance significantly.)
Working domain
Select working domain: Select the input class(es) for classification. Other classes are not affected by
this action. If no objects have been created yet, only Pixel level is available.
Settings
Algorithm: Select the segmentation algorithm to generate objects:
Threshold Segmentation | Classification is an action that can be inserted multiple times. Its purpose
is to classify objects based on a threshold.
The action is completed if the classification has been applied based on the current settings.
Working domain
Select working domain: Select the input class(es) for classification. Other classes are not affected by
this action. If no objects have been created yet, only Pixel level is available.
Threshold settings
Use layer: Select the layer for the classification threshold.
Threshold: Pixels with values less than or equal to this value are assigned to one class, pixels above
this value to another class.
Split objects: Activate to split objects which contain both pixels above and pixels below the
threshold. If not activated, the mean value of the object is compared against the threshold to
decide which class it is assigned to.
Classification settings
Create class: Inserts a new class.
Edit classes: Change name or color of existing classes.
Class for dark: Pixel values less or equal to the threshold are assigned to this class.
Class for bright: Pixel values larger than the threshold are assigned to this class.
Execute: Apply classification based on current settings.
Undo: Reverts the threshold classification.
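The behavior of the Split objects option can be sketched as follows: with splitting disabled, the object's mean value decides the class; with splitting enabled, each pixel is labeled individually. The class names "dark" and "bright" mirror the settings above, but the function itself is a hypothetical illustration, not eCognition code:

```python
import numpy as np

def threshold_classify(obj_pixels, threshold, split=False):
    """Assign 'dark' (<= threshold) or 'bright' (> threshold) to an object.
    split=False: the object's mean decides; split=True: each pixel is labeled."""
    px = np.asarray(obj_pixels, dtype=float)
    if split:
        return np.where(px <= threshold, "dark", "bright")
    return "dark" if px.mean() <= threshold else "bright"

obj = [10, 20, 200]                                # one bright outlier pixel
whole = threshold_classify(obj, 100)               # mean ~76.7 -> 'dark'
parts = threshold_classify(obj, 100, split=True)   # per-pixel labels
```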
Vector Based Segmentation is an action that can be inserted multiple times. Its purpose is to create
image objects based on a point, line or polygon vector file.
Working domain
Select working domain: Select the input class(es) for classification. Other classes are not affected by
this action. If no objects have been created yet, only Pixel level is available.
Vector selection
Select vector layer: Select the vector layer for the segmentation.
Select attribute: Either use all vectors, or select a subset of vectors based on an attribute.
Select attribute value(s): Select vectors based on attribute values.
Show vectors: Button to visualize the selected vectors in the current view.
Attribute table: Button to visualize the thematic layer attribute table.
Classification
Classification mode:
Execution
Execute: Click to create objects based on selected settings.
Undo: Click to undo the segmentation.
Change Detection: Image vs. Image is an action that can be inserted only once. Its purpose is to
create difference layers for images from two different acquisition times which can be used in other
actions. Optionally you can classify changes.
Difference layers
Image pair assignment
l Automatic: The first half of the images are assigned to time 1, the second half to time 2.
l Manual: Assign image files to time 1 and to time 2 manually to create image pairs. Both files in a
pair must contain the same number of layers. Multiple file pairs can be defined in subsequent
steps.
Time 1: Select image layers of time 1.
Time 2: Select image layers of time 2.
Create: Create difference layers according to selection.
Diff. layers to be created: Number of difference layers to be created according to the previous or
loaded configuration.
Remove: Removes all difference layers and clears the configuration of manually assigned image
pairs.
Info window: Opens the information window.
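Conceptually, each difference layer is a per-pixel subtraction between the matched bands of the two acquisition times. A minimal sketch, under the assumption that the difference is computed as time 2 minus time 1 (the sign convention is not stated above):

```python
import numpy as np

def difference_layers(layers_t1, layers_t2):
    """One difference layer per matched band pair (assumed: time 2 minus time 1)."""
    assert len(layers_t1) == len(layers_t2), "pairs need the same number of layers"
    return [t2 - t1 for t1, t2 in zip(layers_t1, layers_t2)]

t1 = [np.array([[0.1, 0.5]]), np.array([[0.4, 0.4]])]   # bands at time 1
t2 = [np.array([[0.3, 0.5]]), np.array([[0.2, 0.4]])]   # same bands at time 2
diffs = difference_layers(t1, t2)   # positive = increase, negative = decrease
```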
Classification
Classify changes: Select whether you want to classify changes directly in this action. Alternatively,
you can use the created difference layers in subsequent actions such as Supervised Classification
or Threshold Segmentation | Classification.
Positive Class: Select the class to be assigned to the selected pixels according to the "Positive
threshold" condition.
Negative Class: Select the class to be assigned to the selected pixels according to the "Negative
threshold" condition.
Change Detection: Object vs. Vectors is an action that can be inserted multiple times. Its purpose is
to classify differences between classified objects and a vector layer.
The user can decide what type of objects to classify using the Classify objects parameter. For
classification, the overlap between selected objects and selected vectors is calculated.
Object selection
Select class(es): Select the classes you want to compare to vectors.
Vector selection
Select vector layer: Choose the point, line or polygon vector layer for change detection.
Select attribute: Select a subset of vectors based on an attribute.
Select attribute value(s): Select vectors based on attribute values.
Display vector: Button to visualize the vector selection in the current view.
Classification settings
Create class: Button to insert classes.
Edit classes: Button to edit class names and colors.
Classify objects: Select what type of change you want to classify.
Min. overlap (%): Minimum overlap between vector and image object necessary to classify image
objects. (An object counts as overlapping with a vector if the percentage of its area that overlaps
with the vector is greater than the minimum overlap defined.)
Max. overlap (%): Maximum overlap between vector and image object necessary to classify image
objects. (An object counts as non-overlapping with a vector if the percentage of its area that
overlaps with the vector is below the maximum overlap defined here.)
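The two overlap conditions above can be expressed directly as percentage checks on an object's area. This is a hypothetical sketch; the function names and area inputs are illustrative, not part of the eCognition API:

```python
def overlap_percent(object_area, overlap_area):
    """Share of an image object's area covered by the selected vectors."""
    return 100.0 * overlap_area / object_area

def is_overlapping(pct, min_overlap):
    """Min. overlap rule: overlapping if the share exceeds the minimum."""
    return pct > min_overlap

def is_non_overlapping(pct, max_overlap):
    """Max. overlap rule: non-overlapping if the share is below the maximum."""
    return pct < max_overlap

pct = overlap_percent(object_area=200, overlap_area=50)   # 25 % overlap
```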
Run
Classify changes: Button to apply vector-object comparison.
Undo: Click to undo the classification.
Create Vector Layer is an action that can be inserted multiple times. Its purpose is to create polygon
vector layers based on existing image objects.
Settings
Output name: Choose a name for the vector layer. Default name is convertedFromObjects.
Execute: Generates vectors based on image objects.
Undo: Removes created vector layer from project.
Supervised Classification is an action that can be inserted only once. Its purpose is to train and apply
a classifier using samples.
The following classification algorithms are available: Bayes, KNN, SVM, Decision Tree and Random
Trees.
The action is completed if the classification has been applied based on the current settings and
samples.
Working domain
Input class: Select the class which serves as an input to the classification. Other classes will not be
modified by this action. If Supervised Classification is the first classification action, only unclassified
is available.
Add manually
Create class: Inserts a new class.
Edit classes: Change the name or color of existing classes.
Manual sample selection: To insert samples, select a class and then single click on image objects in
the view. They are now samples for the respective class and displayed in their class color. To select
more than one sample at a time, hold down the left mouse button and move/brush over the image
objects.
For a description of the Sample Information dialog, see Sample Information, page 12.
l Export name: Change the name of the sample export file. Default name is Samples. The
extension .shp is added for Export as .shp, and the extension .csv for Export table. (The results
folder can be specified in the Create / Modify Project action.)
l Class column name: Change the name of the attribute table column where the class name will
be stored. Default name is ClassName.
Classifier Parameters
Classifier algorithm: Select one of the available classifiers (see also Supervised Classification, page
29). The default is the KNN classifier.
Source: Choose one of the following sources:
l object based: Applies samples to image objects (faster than layer based).
l layer based: Applies samples to image layer pixel values.
Select object features: Select the object features to define the applied feature space. Default setting
includes mean layer values of spectral bands and index layers of sample image objects.
Use layer(s): Select the image layers used in classification. Default setting includes all layers (spectral
bands and available index layers).
Display advanced settings: Activate to display and change advanced settings for the different
classifier algorithms.
Classify: Applies the selected classifier based on current samples.
Undo: Removes the classification (all samples remain).
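The default KNN classifier can be sketched as a majority vote among the k nearest training samples in the selected feature space. This is a simplified illustration; the feature names and sample values are hypothetical, and eCognition's actual KNN implementation details are not shown here:

```python
import math
from collections import Counter

def knn_classify(samples, features, k=3):
    """samples: list of (feature_vector, class_name) pairs.
    Returns the majority class among the k nearest samples (Euclidean)."""
    nearest = sorted(samples, key=lambda s: math.dist(s[0], features))[:k]
    votes = Counter(cls for _, cls in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical object features: (mean NDVI, mean NIR)
samples = [((0.70, 0.50), "vegetation"), ((0.65, 0.45), "vegetation"),
           ((0.10, 0.20), "water"),      ((0.05, 0.15), "water")]
label = knn_classify(samples, (0.60, 0.50), k=3)   # -> 'vegetation'
```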
Bayes
A Bayes classifier is a simple probabilistic classifier based on applying Bayes’ theorem with strong
independence assumptions. An advantage of the naive Bayes classifier is that it only requires a small
amount of training data to estimate the parameters (means and variances of the variables)
necessary for classification.
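As a rough illustration of that principle, a one-feature Gaussian naive Bayes classifier can be written in a few lines. This is a sketch only; the class names and sample values are hypothetical, and this is not the eCognition implementation:

```python
import math

def train_gaussian_nb(samples):
    """samples: {class_name: [feature values]} -> {class: (mean, var, prior)}."""
    total = sum(len(v) for v in samples.values())
    model = {}
    for cls, vals in samples.items():
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals) or 1e-9
        model[cls] = (mean, var, len(vals) / total)
    return model

def classify(model, x):
    """Pick the class with the highest log posterior for feature value x."""
    def log_post(mean, var, prior):
        return (math.log(prior) - 0.5 * math.log(2 * math.pi * var)
                - (x - mean) ** 2 / (2 * var))
    return max(model, key=lambda cls: log_post(*model[cls]))

model = train_gaussian_nb({"water": [0.05, 0.08, 0.06],
                           "vegetation": [0.55, 0.60, 0.62]})
label = classify(model, 0.07)   # -> 'water'
```

The small amount of training data (three samples per class) is enough to estimate the per-class means and variances, which is exactly the advantage noted above.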
Random Trees
The random trees classifier is more a framework than a specific model. It takes an input feature
vector and classifies it with every tree in the forest. Each tree outputs the class label of the training
samples in the terminal node where the vector ends up, and the forest assigns the label that
obtained the majority of these "votes" across all trees. All trees are trained with the same features
but on different training sets, which are generated from the original training set.
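The majority-vote step can be sketched with decision stumps standing in for fully grown trees. Everything here (thresholds, class names) is hypothetical, and real random trees are trained on bootstrapped training sets rather than hand-written rules:

```python
from collections import Counter

def forest_predict(trees, x):
    """Each tree votes on x; the forest returns the majority class label."""
    votes = Counter(tree(x) for tree in trees)
    return votes.most_common(1)[0][0]

# Hypothetical decision stumps on a single NDVI-like feature:
trees = [
    lambda x: "vegetation" if x > 0.30 else "other",
    lambda x: "vegetation" if x > 0.35 else "other",
    lambda x: "vegetation" if x > 0.50 else "other",
]
label = forest_predict(trees, 0.4)   # two of three stumps vote 'vegetation'
```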
Object reshaping
Cut objects: Activates the object cutting mode. In this mode, you first select an image object and
then click again to start drawing a polygon (which will be cut out of the object) or a splitting line.
Perform the object cut by double-clicking, or by using the context menu available on right-click
after a cutting line has been started.
Merge all: Merges all objects of the same class. (In some cases this facilitates cutting objects).
Object classification
Create class: Inserts a new class.
Edit classes: Change name or color of existing classes.
Object annotation
Start annotation: Activates the manual annotation mode. Double-click an image object to open the
Edit annotation dialog, where you can insert a value for the selected object. Alternatively, right-click
in the view and select Annotate. Multiple selection and annotation are possible using the
Shift key.
Undo
Undo: Reverts all manual edits.
Object Merge is an action that fuses image objects that belong to the same class.
The action is completed after selecting the Apply button.
Minimum mapping unit is an action that can be inserted multiple times. It can be applied to selected
classes or all classes. It removes objects below a user-defined minimum mapping unit and assigns
them to the surrounding image object.
The action is completed after the user has selected the Apply button.
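Conceptually, the action does something like the following sketch, where the image objects are represented by a hypothetical id-to-attributes mapping rather than any eCognition API:

```python
def apply_minimum_mapping_unit(objects, min_area):
    """Reassign every object smaller than `min_area` to the class of
    its largest neighbor -- a simplified stand-in for merging it into
    the surrounding image object."""
    for obj in objects.values():
        if obj["area"] < min_area and obj["neighbors"]:
            biggest = max(obj["neighbors"], key=lambda n: objects[n]["area"])
            obj["class"] = objects[biggest]["class"]
    return objects

# Hypothetical object table: a small "soil" speck inside vegetation
objects = {
    1: {"class": "vegetation", "area": 500, "neighbors": [2]},
    2: {"class": "soil", "area": 3, "neighbors": [1]},
}
apply_minimum_mapping_unit(objects, min_area=10)
print(objects[2]["class"])  # vegetation
```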
Smooth objects is an action that can be inserted multiple times. It can be applied to selected classes
or all classes. Image object outlines are smoothed based on a generalization value and adjacent
image objects of the same class are merged.
The action is completed after the user has selected the Apply button.
Group
All classes/Ignore selected classes: Select if the action is applied to all classes or if selected classes
are excluded from smoothing.
Ignore classes: Select the classes that are excluded from smoothing (only available for Ignore
selected classes). Selected classes are not modified. At least two classes need to remain for
smoothing.
Scale: Defines the smoothing scale (in pixels). Higher values lead to higher generalization of objects.
Apply: Applies action with current settings.
Undo: Reverts this action.
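eCognition's generalization algorithm is internal; as one illustration of what outline smoothing does, Chaikin corner cutting (a different, well-known technique, loosely analogous to increasing the Scale value) progressively generalizes a closed outline:

```python
def chaikin_smooth(points, iterations=1):
    """Chaikin corner cutting: replace each vertex of a closed outline
    with two points at 1/4 and 3/4 of its adjacent edges. More
    iterations give stronger generalization of the outline."""
    for _ in range(iterations):
        smoothed = []
        for i, (x0, y0) in enumerate(points):
            x1, y1 = points[(i + 1) % len(points)]
            smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        points = smoothed
    return points

# A square outline; one iteration cuts each of its four corners
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(chaikin_smooth(square)[:2])  # [(1.0, 0.0), (3.0, 0.0)]
```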
Accuracy assessment is an action that can be inserted multiple times. It validates the classification to
ground truth data provided as a thematic layer. It generates statistical output to describe the
quality of classification results. The statistical assessments can be saved as .csv and .html files.
If a thematic layer is selected, the action is completed as soon as you click Execute with the current
settings. If the Thematic Layer is set to None, the action is considered complete as soon as it is
selected, provided that the previous action is also complete.
Validation data
Thematic layer: Select a thematic layer for data validation. If not already loaded, the thematic layer
can be added using the Add layer button in the action Create / Modify Project, page 22.
Class column: Select the column that contains the class names. Note that class names have to
match those used for the samples.
Execute: Displays and saves the specified result files to the results folder specified in the action
Create / Modify project, and displays the result panel that shows user’s and producer’s accuracies
for the involved classes, as well as an overall accuracy estimate.
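The reported measures can all be derived from a confusion matrix of predicted versus reference labels. A sketch with hypothetical labels for five validated objects:

```python
from collections import Counter

def accuracy_assessment(predicted, reference):
    """Cross-tabulate predicted vs. reference labels and derive the
    measures shown in the result panel."""
    confusion = Counter(zip(predicted, reference))
    classes = sorted({c for pair in confusion for c in pair})
    total = sum(confusion.values())
    overall = sum(confusion[(c, c)] for c in classes) / total
    # user's accuracy: of all objects mapped as c, how many really are c
    users = {c: confusion[(c, c)] /
                max(1, sum(confusion[(c, r)] for r in classes))
             for c in classes}
    # producer's accuracy: of all reference objects of c, how many were mapped as c
    producers = {c: confusion[(c, c)] /
                    max(1, sum(confusion[(p, c)] for p in classes))
                 for c in classes}
    return overall, users, producers

# Hypothetical labels for five validated objects
predicted = ["veg", "veg", "soil", "soil", "veg"]
reference = ["veg", "soil", "soil", "soil", "veg"]
overall, users, producers = accuracy_assessment(predicted, reference)
print(overall)  # 0.8
```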
Create Report is an action that computes statistics based on the classified objects, shows them in
the Report window, and exports them to a .csv and/or .html file. It is an action that can be inserted
multiple times.
Settings
Export format: Choose between:
l .csv and .html - exports the report to *.csv and *.html format.
l .csv - exports the report to *.csv format.
File name: Choose a file name for the report. The default name is {:Project.Name}\Report
Area unit: Select a unit for image object area calculation. Choose between:
l m2 - square meters
l ha - hectare
l ft2 - square feet
l ac - acres
l pxl - pixels
Create: Exports the classification report.
Undo: Removes the exported classification report.
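The conversion behind the Area unit setting is straightforward once the pixel size is known. The sketch below assumes a hypothetical `pixel_size_m` parameter (the pixel edge length in meters, which in practice comes from the project's georeferencing):

```python
def object_area(pixel_count, pixel_size_m, unit="m2"):
    """Convert an object's pixel count to the selected report unit,
    given the pixel edge length in meters (hypothetical parameter)."""
    if unit == "pxl":
        return pixel_count
    m2_per_unit = {
        "m2": 1.0,
        "ha": 10_000.0,        # 1 ha = 10,000 m2
        "ft2": 0.09290304,     # 1 ft2 = 0.09290304 m2 (exact)
        "ac": 4046.8564224,    # 1 ac = 4,046.8564224 m2 (exact)
    }
    return pixel_count * pixel_size_m ** 2 / m2_per_unit[unit]

print(object_area(10_000, 1.0, "ha"))  # 1.0
```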
4.3.15 Export
The export action generates and saves a thematic layer of the classification results or exports a
temporary raster layer of the project. This action can be inserted multiple times.
The action is completed after selection of the Export button with the current settings.
Export type: Select the export type. Choose between:
l Objects -> Vector - exports all existing image objects to a vector file (with their classification if
available).
l Existing Vector - exports a vector layer that was created or used in the current project.
l Existing Raster - exports a temporary raster layer of the project.
Class filter: Choose between All Classes for export or Selected Classes (only available for Export type
- Objects -> Vector).
Class selection: Select the classes for export (only available for Export type - Objects -> Vector, Class
filter > Selected Classes).
Attributes: Select the attributes for export (only available for Export type - Objects -> Vector).
Vector layer: Select the vector layer for export (only available for Export type - Existing Vector).
Export name: Insert filename for export (extension will be added automatically).
Export format: Depending on the selection above, choose .tif for raster, .shp for shapefile, or .gdb for FileGDB.
Export: Saves the specified export file to the results folder specified in action Create / Modify
project.
5 Acknowledgments
Portions of this product are based in part on third-party software components. Trimble is
required to include the following text with the software and its distributions.
l Redistributions of source code must retain the above copyright notice, this list of conditions
and the following disclaimer.
l Redistributions in binary form must reproduce the above copyright notice, this list of
conditions and the following disclaimer in the documentation and/or other materials provided
with the distribution.
l Neither name of Ken Martin, Will Schroeder, or Bill Lorensen nor the names of any contributors
may be used to endorse or promote products derived from this software without specific prior
written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
l Redistributions of source code must retain the above copyright notice, this list of conditions
and the following disclaimer.
l Redistributions in binary form must reproduce the above copyright notice, this list of
conditions and the following disclaimer in the documentation and/or other materials provided
with the distribution.
l Neither the name of the Insight Software Consortium nor the names of its contributors may be
used to endorse or promote products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
5.3.2 frmts/gtiff/gt_wkt_srs.cpp
Copyright © 1999, Frank Warmerdam, warmerdam@pobox.com
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
associated documentation files (the “Software”), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.