USER GUIDE

Trimble
eCognition® Essentials

Version 1.3.1
July 2016
www.eCognition.com
Trimble Documentation
eCognition Essentials 1.3
User Guide
Imprint and Version
Document Version 1.3.1
Copyright © 2016 Trimble Germany GmbH. All rights reserved. This document may be copied and
printed only in accordance with the terms of the Frame License Agreement for End Users of the
related eCognition software.
Published by:
Trimble Germany GmbH, Arnulfstrasse 126, D-80636 Munich, Germany
Phone: +49–89–8905–710
Fax: +49–89–8905–71411
Web: www.eCognition.com
Dear User,
Thank you for using eCognition software. We appreciate being of service to you with image analysis
solutions. At Trimble we constantly strive to improve our products. We therefore appreciate all
comments and suggestions for improvements concerning our software, training, and
documentation. Feel free to contact us via the web form on www.eCognition.com/support. Thank
you.
Legal Notes
Trimble® and eCognition® are registered trademarks of Trimble Germany GmbH in Germany and
other countries. All other product names, company names, and brand names mentioned in this
document may be trademark properties of their respective holders.
Protected by patents EP0858051; WO0145033; WO2004036337; US 6,832,002; US 7,437,004; US
7,574,053 B2; US 7,146,380; US 7,467,159 B; US 7,873,223; US 7,801,361 B2.
Acknowledgments
Portions of this product are based in part on third-party software components:
eCognition Developer © 2016 Trimble Germany GmbH, Arnulfstrasse 126, 80636 Munich, Germany.
All rights reserved.
The Visualisation Toolkit (VTK) © 1993–2006 Ken Martin, Will Schroeder, Bill Lorensen. All rights
reserved.
Insight Segmentation and Registration Toolkit (ITK) © 1999-2003 Insight Software Consortium. All
rights reserved.

All rights reserved. © 2016 Trimble Documentation, Munich, Germany.


Day of print: July 27th, 2016

eCognition Essentials Documentation_ __1


Contents

1 Overview 4
1.1 What is eCognition Essentials? 4
1.2 Image Analysis with eCognition Essentials 4
1.3 Key Features 5
2 Glossary 6
3 User Interface Components 7
3.1 Default View 7
3.1.1 Analysis Builder 7
3.1.2 Analysis Builder Toolbar 8
3.1.3 Legend 12
3.1.4 Thematic Layer Attribute Table 12
3.1.5 Results Panel 12
3.1.6 Sample Information 12
3.1.7 Report 13
3.1.8 Change Detection: Image vs. Image 13
3.1.9 Context menu 14
3.2 Menus 14
3.2.1 File Menu 14
3.2.2 View Menu 15
3.2.3 Help Menu 16
4 Workflow and Actions 19
4.1 Overview 19
4.2 Workflow 20
4.2.1 Configuration on subregion 20
4.2.2 Incremental improvement of classifier 21
4.3 Actions 22
4.3.1 Create / Modify Project 22
4.3.2 Multiresolution Segmentation 24
4.3.3 Threshold Segmentation | Classification 25
4.3.4 Vector Based Segmentation 26
4.3.5 Change Detection: Image vs. Image 26
4.3.6 Change Detection: Object vs. Vectors 28
4.3.7 Create Vector Layer 29
4.3.8 Supervised Classification 29
4.3.9 Manual Editing 32
4.3.10 Object Merge 33
4.3.11 Minimum Mapping Unit 33
4.3.12 Smooth Objects 34
4.3.13 Accuracy Assessment 34
4.3.14 Create Report 35
4.3.15 Export 35

5 Acknowledgments 37
5.1 The Visualization Toolkit (VTK) Copyright 37
5.2 ITK Copyright 38
5.3 Geospatial Data Abstraction Library (GDAL) Copyright 38
5.3.1 gcore/Verson.rc 38
5.3.2 frmts/gtiff/gt_wkt_srs.cpp 39

1 Overview
1.1 What is eCognition Essentials?
eCognition Essentials is software for remote sensing users working with satellite imagery who need
to transform image data into intelligence in a timely and affordable manner.
It is designed to solve satellite imagery analysis tasks without requiring involvement in the
sophisticated rule set development of the eCognition development platform. The complex image
analysis routines implemented in eCognition are wrapped into an easy-to-use tool that guides the
user through semi-automated analysis workflows.

1.2 Image Analysis with eCognition Essentials


A typical application of eCognition Essentials is the generation of landcover maps to be exported
into a GIS database.
After loading your raster image data and your thematic data (shapefiles or file GDB) into the
software, eCognition offers the following image analysis steps:

• Create custom layers such as NDVI, NDSI, NDWI, and NDSM - Create / Modify Project, page 22
• Choose the resolution to work on and, if necessary, a region of interest - Create / Modify Project,
  page 22
• Create objects by segmenting image data into regions of similar spectral properties -
  Multiresolution Segmentation, page 24
• Create objects by segmenting image data into regions according to a vector layer - Vector
  Based Segmentation, page 26
• Classify changes by comparing a pair of images - Change Detection: Image vs. Image, page 26
• Classify changes by comparing classified objects and a vector layer - Change Detection: Object
  vs. Vectors, page 28
• Create temporary vector layers from the classified image - Create Vector Layer, page 29
• Train a classifier using manual sample selection, samples based on thematic data, or a sample
  statistics table - Supervised Classification, page 29
• Apply your classifier to the complete scene - Supervised Classification, page 29
• Classify objects based on thresholds - Threshold Segmentation | Classification, page 25
• Manually edit the result of the automatic classification - Manual Editing, page 32
• Merge objects of the same class - Object Merge, page 33
• Remove small objects - Minimum Mapping Unit, page 33
• Smooth objects - Smooth Objects, page 34

• Verify the accuracy of your classification using thematic validation data - Accuracy Assessment,
  page 34
• Export final classification results and temporary vectors to a GIS layer, or export temporary
  raster images - Export, page 35
For each of these steps, one or several actions are available that can be configured depending on the
image analysis task. By dividing the task of creating a GIS layer into clearly defined subtasks (each
reflected in a distinct action), it is possible to optimize the configuration of one small step at a time
and arrive at satisfying results without any deeper knowledge of image analysis algorithms.

1.3 Key Features

• Easy-to-use image analysis software covering the main steps in analyzing satellite images:
  creating objects, classifying objects, smart object-based refinement of results, information on
  accuracy and easy export of results into GIS formats.
• Usage of the unique Multiresolution Segmentation method of the eCognition technology in an
  easy-to-use environment. It is the first and most successful segmentation technique in the
  geographic object-based image analysis (GEOBIA) framework.
• Availability of various common classification methods, including SVM, Nearest Neighbor, CART,
  Random Trees and Bayes, in a single software solution.

2 Glossary
Action: Actions are the building blocks of an eCognition Essentials image analysis. Each action
achieves a clearly defined result, which the user can optimize by configuring the software.
Classification: A basic principle of the eCognition suite is the classification of image objects. In
eCognition Essentials threshold based classification as well as supervised classification are available.
Image Layer: In eCognition an image layer is the most basic level of information contained in a raster
image. All images contain at least one image layer.
Image Object: An image object is a defined group of neighboring pixels created by a segmentation
(see Multiresolution Segmentation, page 24).
Index Layer: In eCognition Essentials you can generate indices such as NDVI, NDSI and NDWI (see
Create / Modify Project, page 22).
Project: An eCognition Essentials project consists of image layers, thematic layers (if available) and
all actions. It reflects the current state of the configuration.
Raster Layer: Raster layers generally refer to image layers (see above) and include, for example,
panchromatic or multispectral information.
Resolution: To speed up processing, the resolution of image data can be reduced before analyzing
(see Create / Modify Project, page 22).
ROI: In eCognition Essentials the analysis can be limited to a certain region of interest. The region of
interest must be defined as a set of polygons on a thematic layer (see Create / Modify Project, page
22).
Segmentation: During segmentation, image pixels are grouped into objects. eCognition Essentials
uses powerful Multiresolution Segmentation (see Multiresolution Segmentation, page 24).
Subsets: To reduce the time for configuration of actions, eCognition Essentials offers the option to
set up the whole workflow on subsets and then apply it to the whole scene afterwards (see
Workflow and Actions, page 19).
Thematic Layer: Thematic layers are raster or vector files that have associated attribute tables.
Vector Layer: Thematic vector layer containing polygons, lines or points.

3 User Interface Components


3.1 Default View
The eCognition Essentials view consists of several default windows. In addition to the central image
view, where the image data is displayed, the Analysis Builder (left side), the Analysis Builder Toolbar
(at the top), the Legend (top right) and the Information window (bottom right) are shown.

Figure 3.1. eCognition Essentials View

3.1.1 Analysis Builder


The Analysis Builder contains two distinct sections. The upper section displays the currently loaded
actions. The lower section allows you to configure the currently selected action. For details on the
configuration of individual actions, see Workflow and Actions, page 19.

Figure 3.2. Analysis Builder window - upper section

3.1.2 Analysis Builder Toolbar

Figure 3.3. Analysis Builder Toolbar


The Analysis Builder Toolbar provides controls for running the analysis and determines how image
and thematic data are visualized. It contains the following items:

Run until selected action on complete scene (ignore ROI)


Run the analysis until a selected action (selected action included) and apply it to the complete
project (ignores ROIs if selected).

Revert a selected action. All subsequent actions are reverted, too.

Send current workflow to server:


Similar to Run until selected action, but always processes the full solution in the background on a
defined server.
Samples cannot be edited or added after execution; therefore, an iterative workflow is not possible.
Distinctive sample information is not maintained.

If you press the Send current workflow to server button, the Submit to server dialog opens:

Figure 3.4. Submit to server dialog


In the Job Scheduler section you can insert the server address for processing. The server can be
local or another machine in the network. The default setting is http://localhost:8187.
Submit: Submits the solution to the specified job scheduler server.

Monitor: Opens a web-based monitoring overview that contains information on the submitted user
jobs, used engines, log files and configuration settings. The same dialog can be opened using a
double-click in the status bar (see figure below).
Manage local server section:
Number of Engines: Insert the number of engines to be used.
Start / Stop: Starts or stops the server.
Output folder: The default output folder can be changed in the Create / Modify Project action.

Figure 3.5. Status bar - double-click Processing to open Job Scheduler

Save the current configuration.

Load an existing configuration.

Choose either Normal cursor, Panning, Zoom in, Zoom out, Area zoom or Zoom to window. Note
that you can also adjust the zoom using the context menu (see Context menu, page 14).

Switch between visualization of image object outlines, classification or classification with outlines:

• Outlines display: Available after the first segmentation, when objects have been created. Image
  object outlines are shown in blue; selected image objects have a red outline.
• Classification display: Available after the first classification, with the class color shown as a
  semi-transparent overlay on the selected image data.
• Outlines and classification display: Available for classified image objects, with black object
  outlines and the classification overlay.

Previous layer: Switch to the previous image layer in the image layer drop-down menu.
Next layer: Switch to the next image layer in the image layer drop-down menu.
Drop-down menu: Select the image layer of the active view. After assignment of image layers to their
respective bands, RGB and CIR visualization is possible. In addition, custom layers can be calculated
and visualized (NDVI, NDSI, NDWI, and NDSM). Image layer mixing can also be customized: if you
select Customize layer mixing and equalizing, the Edit Image Layer Mixing dialog opens:

Figure 3.6. Edit Image Layer Mixing dialog box. Changing the layer mixing and equalizing
options affects the display of the image only
You can define the color composition for the visualization of image layers for display. In addition,
you can choose from different equalizing options. This enables you to better visualize the image and
to recognize the visual structures without actually changing them. You can also choose to hide
layers, which can be very helpful when investigating image data and results.
NOTE: Changing the image layer mixing only changes the visual display of the image but not the
underlying image data – it has no impact on the process of image analysis.

• Define the display color of each image layer. For each image layer you can set the weighting of
  the red, green and blue channels. Your choices can be displayed together as additive colors in
  the map view. Any layer without a dot or a value in at least one column will not display.
• Choose a layer mixing preset:
  • (Clear): All assignments and weightings are removed from the Image Layer table.
  • One Layer Gray displays one image layer in grayscale mode with the red, green and blue
    channels together.
  • False Color (Hot Metal) is recommended for single image layers with large intensity ranges,
    displaying a color range from black over red to white.
  • False Color (Rainbow) is recommended for single image layers to display a visualization in
    rainbow colors. Here, the regular color range is converted to a color range between blue
    for darker pixel intensity values and red for brighter pixel intensity values.
  • Three Layer Mix displays layer one in the red channel, layer two in green and layer three in
    blue.
  • Six Layer Mix displays additional layers.
• Change these settings to your preferred options with the Shift button or by clicking in the
  respective R, G or B cell. One layer can be displayed in more than one color, and more than one
  layer can be displayed in the same color.
• Individual weights can be assigned to each layer. Clear the No Layer Weights check box and
  click a color for each layer. Left-clicking increases the layer's color weight while right-clicking
  decreases it. The Auto update check box refreshes the view with each change of the layer
  mixing settings. Clear this check box to show the new settings only after clicking OK. With the
  Auto update check box cleared, the Preview button becomes active.
• Compare the available image equalization methods and choose the one that gives you the best
  visualization of the objects of interest.
• Click the Parameter button to change the equalizing parameters, if available.
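
Equalization options like the linear method typically map a chosen input intensity range onto the full display range, without touching the underlying data. A minimal pure-Python sketch of a linear display stretch, given as an illustration of the general principle only (the function name and value ranges are hypothetical, not eCognition's exact implementation):

```python
def linear_stretch(values, lo, hi):
    """Map pixel values in [lo, hi] to display values in [0, 255].

    Values outside the input range are clipped. Like layer mixing, this
    only changes how pixels are displayed, not the underlying image data.
    """
    scale = 255.0 / (hi - lo)
    return [max(0, min(255, round((v - lo) * scale))) for v in values]

# A layer with intensities between 10 and 90, stretched for display:
linear_stretch([10, 50, 90], 10, 90)  # [0, 128, 255]
```

A stronger stretch (a narrower [lo, hi] window) brightens mid-range structures at the cost of clipping the extremes, which is why comparing equalization methods per dataset is worthwhile.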

Figure 3.7. Layer Mixing presets (from left to right): One-Layer Gray, Three-Layer Mix, Six-Layer
Mix

If thematic layers were loaded, they can be selected in the drop-down menu and are visualized in
green.

Saves the current image and vector display settings.


Restores the last saved image and vector display settings.

Activates and deactivates a horizontal or vertical split of the view.

Center at previous or next ROI (only active if more than one region of interest is defined).

3.1.3 Legend
This window shows all active classes.
To edit the color and/or name of a class, double-click it.

3.1.4 Thematic Layer Attribute Table


The thematic layer attribute table is not visible by default. Select View > Thematic Layer Attribute
Table to open this window.
If you select an item in the table, it is highlighted with a red outline in your view, and vice versa. To
modify the thematic layer attribute table, use the context menu to sort, add, edit or delete table
columns or rows.

3.1.5 Results Panel


The results panel gives additional information for selected actions (supervised classification,
accuracy assessment) and is described further in Workflow and Actions, page 19.

3.1.6 Sample Information


During supervised classification, the Sample Information dialog opens if you:

• insert samples using Load / Create samples > Sample selection mode > Add manually
• load samples using Load / Create samples > Sample selection mode > Add from statistics table

This dialog shows the applied feature space. Furthermore, it gives information on the number of
objects per class selected manually as local samples (column No. of objects) and on those loaded
via the statistics table (column No. of table entries).

Figure 3.8. Sample Information Window

3.1.7 Report
This window appears after executing the Create Report action and shows the object statistics that
have been calculated and exported.

Figure 3.9. Report Window

3.1.8 Change Detection: Image vs. Image


This window appears while working with the action Change Detection: Image vs. Image and it
shows the names of the difference layer(s) and their status (created: yes/no).
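
Conceptually, a difference layer for image-to-image change detection is the per-pixel difference between two co-registered acquisitions; change candidates are pixels whose difference exceeds a threshold. A minimal pure-Python sketch of this general idea (the image values and the threshold are made up for illustration; this is not eCognition's internal algorithm):

```python
def difference_layer(t1, t2):
    """Per-pixel difference between two co-registered image layers (row-major lists)."""
    return [[b - a for a, b in zip(row1, row2)] for row1, row2 in zip(t1, t2)]

def change_mask(diff, threshold):
    """Flag pixels whose absolute difference exceeds the threshold."""
    return [[abs(d) > threshold for d in row] for row in diff]

before = [[100, 120], [90, 200]]
after = [[102, 121], [95, 140]]
diff = difference_layer(before, after)  # [[2, 1], [5, -60]]
mask = change_mask(diff, 20)            # only the lower-right pixel changed
```

In practice the threshold separates genuine change from sensor noise and illumination differences, which is why the action exposes it for configuration.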

Figure 3.10. Change Detection Information Window

3.1.9 Context menu


Right-click in your image view to select Normal Cursor, Zoom In, Zoom Out, Area Zoom or Panning
for the mouse pointer.
Show Scale Bar: Displays a scale bar in the view.
Copy View to Clipboard: Copies a screenshot of the view to the clipboard.

3.2 Menus
3.2.1 File Menu

Figure 3.11. File Menu

Data Marketplace
Trimble Data Marketplace is a service that allows you to search for and download geospatial
datasets (maps) in TIF (.tif) or Shapefile (.shp) format that you can import into eCognition. Different
types of data sets are available from a variety of sources, including United States Geological Survey
(USGS), DigitalGlobe's Precision Aerial Imagery, Open Street Map, TIGER/Line GIS Data, Intermap,
and more.
Please refer to the Data Marketplace online help for detailed instructions.
Select File > Data Marketplace to display the Data Marketplace window allowing you to select the
data set you want to order for download.
To select an area, do either of the following:

• Enter an address, state or zip code to select an area of interest.
• Zoom in to select an area.

Data Marketplace Orders


To view all of your Data Marketplace orders, including those that have been downloaded and
imported and those that have not, select File > Data Marketplace Orders.
All orders are displayed on the Data Marketplace Order window containing information on the
Order number, creation date and status.

User Information
Displays the user name and provides the option to insert company and copyright information.

Exit
Select Exit to close eCognition Essentials.

3.2.2 View Menu

Figure 3.12. View Menu

Analysis Builder
Open and close the Analysis Builder window (see Analysis Builder, page 7).

Analysis Builder Toolbar


Open and close the Analysis Builder Toolbar window (see Analysis Builder Toolbar, page 8).

Thematic Layer Attribute Table


Open and close the Thematic Layer Attribute Table window (see Thematic Layer Attribute Table,
page 12).

Legend
Open and close the Legend window (see Legend, page 12).

3.2.3 Help Menu

Figure 3.13. Help Menu

System Info
This dialog provides information about the software version and build, loaded portals and libraries,
configuration settings, memory usage, drivers and licensing settings.

eCognition Essentials User Guide


Opens this user guide.

Release Notes
Opens the eCognition Essentials Release Notes.

Help Keyboard
This window provides information about available keyboard commands.

Options
Opens the Options window, which allows the definition of user preferences.

General

Check for updates at startup
  Yes (default): Check for software updates at startup.
  No: No update check.

Check for maintenance at startup
  Yes (default): Check the maintenance state at startup.
  No: No maintenance check.

Participate in customer feedback program
  Yes: Join the customer feedback program. For details see the chapter below.
  No: No participation in the customer feedback program.

Output Format

CSV

Decimal separator for CSV file export: Use period (.) as decimal separator.

Column delimiter for CSV file export: Use semicolon (;) as column delimiter.
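
With these defaults, an exported CSV uses a semicolon as the column delimiter and a period as the decimal separator. A short sketch of reading such a file with Python's standard csv module (the file content shown is a hypothetical example, not actual eCognition output):

```python
import csv
import io

# Hypothetical export using the default settings described above:
# ';' as column delimiter, '.' as decimal separator.
content = "Class;Area\nWater;12.5\nForest;307.25\n"

rows = list(csv.reader(io.StringIO(content), delimiter=";"))
header = rows[0]                                  # ['Class', 'Area']
areas = {name: float(a) for name, a in rows[1:]}  # period parses as decimal
```

Keeping the period as decimal separator avoids locale-dependent parsing problems when the export is read back by other tools.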
Reports

Date format for reports: DD.MM.YYYY or MM/DD/YYYY. Select or edit the notation of dates used in
reports exported by export actions.

Customer Feedback Program


We are passionate about providing reliable and useful tools that just work for you in the real
world. We use our Customer Feedback Program (CFP), along with our own internal testing and
direct customer feedback, to make sure we're achieving that goal.
If you elect to participate in the CFP, the session data recorded will be sent to us securely and in the
background. Participation is voluntary and your choice will not affect your ability to get support
from us. We encourage you to participate so that everyone can benefit from what we can learn by
seeing the widest set of user experience data possible.

What Information is collected for the CFP?

The CFP collects detailed information only about used buttons, algorithms, features and
dialogs/windows, plus summary information about the computer it is running on (e.g. OS, RAM,
screen size). No information about other applications running or installed is collected.

Who can access the data?

The data gathered from your participation in the CFP is only accessed by the Trimble eCognition
development team and its affiliated employees. Data is used solely by Trimble eCognition Software.
It is not shared, traded, or sold to third parties.

Can I change my Opt In or Opt Out decision?

Yes. At any time you can select Customer Feedback Options from within eCognition and change
your decision. If you opt out, data will stop being submitted within seconds.

How is my privacy protected if I participate?

The CFP data does not include your name, address, phone number, or other contact information.
The CFP generates a globally unique identifier on your computer to uniquely identify it. This is
randomly assigned and does not contain any personal information. This allows us to see continuity
of issues on a single system without requiring other identifiers.
The host name of your computer and the Windows user name of the user running the affected
application are recorded and sent. To the extent that these individual identifiers are received, we do
not use them to identify or contact you. If information gathered from the CFP is ever published
beyond the authorized users, it is published as highly derived summary data that cannot be related
to a specific user or company.

About
Opens the About information window.

Contact eCognition Support


Opens the eCognition support portal.

4 Workflow and Actions


4.1 Overview
In eCognition Essentials the workflow is represented by individual actions. These actions can be
combined and customized to some degree to adapt them to specific image analysis questions. The
following actions are available:

Action Name | Included in default workflow | Multiple insertion possible

Create/Modify Project ✓
Multiresolution Segmentation ✓ ✓
Threshold Segmentation / Classification ✓
Vector Based Segmentation ✓
Supervised Classification ✓
Change Detection: Image vs. Image
Change Detection: Objects vs. Vectors ✓
Create Vector Layer ✓
Manual Editing ✓
Object Merge ✓ ✓
Minimum Mapping Unit ✓
Smooth Objects ✓
Accuracy Assessment ✓ ✓
Create Report ✓
Export ✓ ✓

To add an action, click the plus symbol on the left side of the action bar.

Figure 4.1. Action with plus and minus symbol and small arrows

Remove an action by clicking the minus symbol on the right side of the action bar.
Move actions by clicking the small arrows on the right side of the action bar or by dragging the
action to the desired position. Note that the action Create / Modify Project always has to remain in
the first position.

4.2 Workflow
The approach in eCognition Essentials is to go step by step through the actions of your solution.
You should always start with the mandatory action Create / Modify Project.
As soon as the necessary steps of an action are completed, a green checkmark appears on the right
side of the action bar.
To change settings, you first need to click the undo button or, alternatively, click the Revert
selected action button in the Analysis Builder Toolbar. All subsequent actions are reverted, too.

Figure 4.2. Completed segmentation action with green checkmark

4.2.1 Configuration on subregion

Overview
When you want to process large images with high resolution, Trimble strongly recommends a
workflow in which you configure your analysis steps first on one or more small subregions.
Only when you have optimized all analysis steps on those subregions should you select the Run
until selected action button, which then executes all actions on the complete scene without the
need for any further user interaction.

Step by step workflow example

A) Configuration on subsets

Action Create / Modify Project:

• Project data > create New project
• Region of Interest (ROI) > Select analysis area User defined and press the button Add rectangle
  to select several representative rectangles where the classification should be trained.
• Press the button Finish editing.
• Assign spectral bands and press the button Create layers.

Action Multiresolution Segmentation:

Insert appropriate settings and select Execute.

Action Supervised Classification:

• Select one of the Sample Selection Modes, e.g. Add manually.
• Create classes and insert samples for each class.
• To delete samples, select the button in the section Delete / Export samples.
• Select the classifier parameters and classify the subsets.
• Check the classification results. You can step through your ROIs by clicking the blue arrow
  buttons Center at next/previous ROI in the Analysis Builder Toolbar.
• If necessary, improve the results by iteratively refining the classification. (Selecting Undo will
  keep your samples; add new ones or modify existing ones using the respective functionality.)

B) Apply configuration to the complete image/other images


• By clicking the green arrow button Run until selected action in the Analysis Builder Toolbar,
  you can now apply the selected samples to the whole image loaded in your current project
  (keep Supervised Classification selected).
• In the section Delete / Export samples, select Export table to export all samples to a sample
  statistics file that can be applied to other images.
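
Conceptually, the supervised classification step compares each unclassified object's features with the trained samples. As an illustration of the general principle only (the feature values and class names are made up, and this is not eCognition's implementation), a minimal nearest-neighbor classifier, one of the classifier families the software offers:

```python
def classify_nearest_neighbor(samples, features):
    """Assign the class of the training sample closest in feature space.

    samples:  list of (feature_vector, class_name) pairs
    features: feature vector of the object to classify
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    _, best_class = min(samples, key=lambda s: sq_dist(s[0], features))
    return best_class

# Hypothetical per-object features: (mean NDVI, mean brightness)
samples = [((0.70, 0.30), "Vegetation"),
           ((0.05, 0.10), "Water"),
           ((0.15, 0.80), "Built-up")]

classify_nearest_neighbor(samples, (0.65, 0.25))  # "Vegetation"
```

This also illustrates why adding representative samples per class (step A above) directly improves the result: each new sample refines the decision boundaries in feature space.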

4.2.2 Incremental improvement of classifier


One of the most critical aspects of a successful application of eCognition Essentials is the quality of
the classifier. eCognition Essentials allows you to optimize your classifier by combining samples from
different scenes. The most straightforward way to do this is to incrementally improve your classifier
by collecting additional samples on new scenes.
To do this, follow all necessary steps to configure the actions Create / Modify Project, Multiresolution
Segmentation and Supervised Classification. The Sample Information window gives you information
on the number of samples you selected and opens as soon as you select a class for sample
selection.
Then save your configuration (Analysis Builder Toolbar), close the project and create a new project
with a new but comparable scene.
On this new scene, reload the configuration via the Analysis Builder Toolbar, select the action
Supervised Classification and press the toolbar button to run until the selected action.
As you can see in the sample info window, the samples that you selected on the previous project
are now treated as external samples (not represented by objects in your project). To improve the
classification result achieved with the classifier trained on the initial scene, you can manually add
local samples on the new scene, and improve your classifier iteratively. Then save your
configuration again and proceed to the next scenes.
Please note that the incremental improvement of the classifier is only supported for object-based
classifiers, not for layer-based classifiers.

4.3 Actions
4.3.1 Create / Modify Project

Create / Modify Project is a mandatory action. The main purpose is to insert image and thematic
layers into a project and to define specific project settings. File operations such as saving, loading or
closing a project are also possible within this action as is selecting the resolution for the analysis.
This action is completed as soon as image layers have been imported. Depending on the availability
of spectral bands, index generation is recommended.

Project data
New project: Select the image and thematic files to be loaded to your project. This is the first step
when working with eCognition Essentials. Visualize the layers using the Analysis Builder Toolbar,
page 8.
Add layer: Select additional image or thematic layers. Note that adding image files reverts your
already configured and completed analysis steps (for example, adding image layers reverts the
segmentation step). This is not the case for thematic layers. However, Trimble recommends loading
thematic layers containing sample information or reference data during this first step, even though
they will be used only in subsequent actions.
Load project: Reloads a previously saved project, including solution and configuration status. Note
that projects generated with other eCognition products cannot be loaded into eCognition
Essentials.
Save project: Saves the current project, actions and configuration status.
Close project: Closes the current project. Note that before creating a new project the current
project must be closed.

Results Folder
Folder: Browse to select the folder for results. Default name and location is
C:\Users\UserName\Desktop\eCognitionResults.

Resolution
Resolution for analysis: Analysis speed is considerably increased if you work on a reduced image
resolution.

Region of Interest (ROI)


Select analysis area: Option to limit the analysis to a region of interest defined in a thematic layer
(polygon file format) or to insert the subregion manually. Processing a subregion instead of the
complete image accelerates the analysis.
Complete scene: Processes the complete image loaded into the project.
User defined: Add rectangle(s) to limit the analysis to rectangular subregions or add polygon(s)
manually to define the analysis area. To delete single user-defined regions, activate the Delete
button and select the region to be removed.
Note that you have to select Finish editing to apply all user-defined changes.
Add layer from file: Select or add a thematic layer to your project to limit the analysis to a defined
area. If a thematic layer is chosen, all regions outside the polygons of that layer are classified as class
_IGNORE by subsequent actions.

NDSM generation
DTM: Select the image layer corresponding to the DTM.
DSM: Select the image layer corresponding to the DSM.

Spectral bands
The benefit of spectral band assignment is two-fold:

l Possibility to visualize image data in RGB and CIR display (see Analysis Builder Toolbar, page 8).
l Generation of index layers for improved visualization, spectral information extraction and
classification results.
Use spectral bands: Trimble recommends using For display and index layer creation. This enforces index generation as soon as the respective spectral bands are assigned (otherwise, the action is not configured and does not get a green checkmark). However, for very large images index generation can be time-consuming. To skip this step for prototyping purposes, select For display only.
Blue: Assigns the image layer corresponding to the blue band.
Green: Assigns the image layer corresponding to the green band.
Red: Assigns the image layer corresponding to the red band.
NIR: Assigns the image layer corresponding to the NIR band.


Run
Display RGB: RGB visualization of image layers, activated after assignment of the three layers in
section Spectral Bands.
Create Layers: Click to generate the NDVI, NDSI, NDWI and NDSM layers. Once generated, these layers can be displayed using the Analysis Builder toolbar. They are also used internally by the software to achieve an optimal classification.

l NDSI: (red-blue)/(red+blue+0.0001); requires the red and blue bands.
l NDVI: (NIR-red)/(red+NIR+0.0001); requires the NIR and red bands.
l NDWI: (green-NIR)/(green+NIR+0.0001); requires the green and NIR bands.
l NDSM: DSM - DTM; requires the DSM and DTM.
Undo: Removes the generated layers from the project and re-enables selection of, for example, the analysis resolution and ROIs.

4.3.2 Multiresolution Segmentation

Multiresolution Segmentation is an action that can be inserted multiple times. Its main purpose is
to combine regions of similar spectral information into meaningful image objects.
Selecting Execute creates image objects based on the current settings. Thematic layers are not used
in the segmentation. The action is completed after the generation of image objects.
(Starting with version 1.1, Multiresolution Segmentation uses up to four cores for parallel processing, increasing performance significantly.)

Working domain
Select working domain: Select the input class(es) for classification. Other classes are not affected by
this action. If no objects have been created yet, only Pixel level is available.

Settings
Algorithm: Select the segmentation algorithm to generate objects:

l Original multiresolution: eCognition's multiresolution segmentation.
l Region grow on objects: edge-based multiresolution region grow (a faster alternative to the original multiresolution segmentation).
Scale: Modify the scale parameter to increase or decrease the size of the generated objects.
Execute: Generates image objects based on the current settings.
Undo: Removes the segmentation results and allows you to adjust the settings.
Advanced settings: Activate this check-box to obtain the following advanced options:


l Ignore layers: Select image layers to be excluded from segmentation.
l Scale slider min. value: Changes the minimum value of the scale slider.
l Scale slider max. value: Changes the maximum value of the scale slider.
l Color/Shape: Allows you to decide how much influence the color criterion has (pixels with
similar spectral information) versus the influence of the shape criterion (optimizes image
objects in regard to smooth or compact object shape, see below).
l Smoothness/Compactness: Use this slider to change the weights of the shape criterion (see
above) towards smoothness or compactness. A higher value for smoothness optimizes the
image objects in regard to smooth borders whereas a higher value for compactness results in
compact objects.
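The interaction of the Color/Shape and Smoothness/Compactness sliders can be sketched as a nested weighted sum (an illustration of the weighting described above, not eCognition's internal API; the function name and the use of normalized weights in [0, 1] are assumptions):

```python
def combined_heterogeneity(h_color, h_smooth, h_compact,
                           color_weight, compactness_weight):
    """Blend the color and shape criteria with the two slider weights.

    color_weight:       influence of the color criterion (0..1);
                        the remainder goes to the shape criterion.
    compactness_weight: within the shape criterion, the influence of
                        compactness versus smoothness (0..1).
    """
    h_shape = (compactness_weight * h_compact
               + (1.0 - compactness_weight) * h_smooth)
    return color_weight * h_color + (1.0 - color_weight) * h_shape
```

Moving a slider toward one end simply shifts weight between the two components of the corresponding sum.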

4.3.3 Threshold Segmentation | Classification

Threshold Segmentation | Classification is an action that can be inserted multiple times. Its purpose
is to classify objects based on a threshold.
The action is completed if the classification has been applied based on the current settings.

Working domain
Select working domain: Select the input class(es) for classification. Other classes are not affected by
this action. If no objects have been created yet, only Pixel level is available.

Threshold settings
Use layer: Select the layer for the classification threshold.
Threshold: Pixels with values less than or equal to this value are assigned to one class, pixels above
this value to another class.
Split objects: Activate to split objects which contain both pixels above and pixels below the
threshold. If not activated, the mean value of the object is compared against the threshold to
decide which class it is assigned to.

Classification settings
Create class: Inserts a new class.
Edit classes: Change name or color of existing classes.
Class for dark: Pixel values less than or equal to the threshold are assigned to this class.
Class for bright: Pixel values larger than the threshold are assigned to this class.
Execute: Apply classification based on current settings.
Undo: Reverts the threshold classification.
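The two threshold modes described above can be sketched as follows (an illustration of the documented behavior only, assuming objects are given as an ID raster; the 'dark'/'bright' labels stand in for the two classes):

```python
import numpy as np

def threshold_classify(layer, objects, threshold, split_objects):
    """Label objects as 'dark' (<= threshold) or 'bright' (> threshold).

    layer:   2-D array of pixel values
    objects: 2-D integer array of image-object IDs
    With split_objects, every pixel is labeled individually, so objects
    containing pixels on both sides of the threshold are split; without
    it, each object's mean value decides its class.
    """
    if split_objects:
        return np.where(layer <= threshold, "dark", "bright")
    labels = {}
    for oid in np.unique(objects):
        mean_val = layer[objects == oid].mean()
        labels[int(oid)] = "dark" if mean_val <= threshold else "bright"
    return labels
```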


4.3.4 Vector Based Segmentation

Vector Based Segmentation is an action that can be inserted multiple times. Its purpose is to create
image objects based on a point, line or polygon vector file.

Working domain
Select working domain: Select the input class(es) for classification. Other classes are not affected by
this action. If no objects have been created yet, only Pixel level is available.

Vector selection
Select vector layer: Select the vector layer for the segmentation.
Select attribute: Either use all vectors, or select a subset of vectors based on an attribute.
Select attribute value(s): Select vectors based on attribute values.
Show vectors: Button to visualize the selected vectors in the current view.
Attribute table: Button to visualize the thematic layer attribute table.

Classification
Classification mode:

l do not classify - no class assignment
l classify using attribute from attribute table - classify according to an attribute in the attribute table
l Select attribute - the attribute whose values are used as class names
l classify using a specific class - classify the new segmentation objects with a selected class
l Select class - select an existing class or create a new class to be used for classifying

Execution
Execute: Click to create objects based on selected settings.
Undo: Click to undo the segmentation.

4.3.5 Change Detection: Image vs. Image

Change Detection: Image vs. Image is an action that can be inserted only once. Its purpose is to
create difference layers for images from two different acquisition times which can be used in other
actions. Optionally you can classify changes.


Difference layers
Image pair assignment

l Automatic: The first half of the image files is assigned to time 1, the second half to time 2.
l Manual: Assign image files to time 1 and to time 2 manually to create image pairs. Both files in a
pair must contain the same number of layers. Multiple file pairs can be defined in subsequent
steps.
Time 1: Select image layers of time 1.
Time 2: Select image layers of time 2.
Create: Create difference layers according to selection.
Diff. layers to be created: Number of difference layers to be created according to a previous or loaded configuration.
Remove: Removes all difference layers and clears the configuration of manually assigned image
pairs.
Info window: Opens the information window.

Classification
Classify changes: Select whether you want to classify changes directly in this action. Alternatively, you can use the created difference layers in subsequent actions such as Supervised Classification or Threshold Segmentation | Classification.

l no - changes are not classified
l yes - changes are classified

Only available for Classify changes - yes:


Select working domain: Apply the change detection to all objects or to objects of selected classes. Other classes are not affected by this action. If no objects have been created yet, only Pixel level is available.
Layers: Select the difference layers used for classification. The average of these layers is used for evaluating changes.
Threshold type

l percentage (area): The threshold defines an area percentage of differences to be classified.
l pixel value: The threshold defines the pixel value necessary for a change to be classified.

For Threshold type – percentage (area):


Positive Threshold: This threshold defines the percentage area of the image to be classified; pixels
with greater values will be the first to be classified (positive changes).
Negative Threshold: This threshold defines the percentage area of the image to be classified; pixels
with lower values will be the first to be classified (negative changes).
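One plausible reading of the percentage (area) thresholds is a percentile cut on the distribution of difference values; a NumPy sketch (the function name and the two-sided parameters are assumptions, not the product's implementation):

```python
import numpy as np

def area_percentage_cutoffs(diff, pos_pct, neg_pct):
    """Translate area-percentage thresholds into pixel-value cutoffs.

    pos_pct percent of the pixels with the greatest differences are
    classified as positive change, neg_pct percent with the lowest
    differences as negative change.
    """
    pos_cut = np.percentile(diff, 100.0 - pos_pct)  # upper tail
    neg_cut = np.percentile(diff, neg_pct)          # lower tail
    return pos_cut, neg_cut
```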


For Threshold type – pixel value:


Positive Threshold: Pixels with a value greater than the selected threshold value will be classified.
Negative Threshold: Pixels with a value lower than the selected threshold value will be classified.

Positive Class: Select the class to be assigned to the selected pixels according to the "Positive
threshold" condition.
Negative Class: Select the class to be assigned to the selected pixels according to the "Negative
threshold" condition.
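For the pixel value type, the classification rule can be sketched as follows (an illustration of the described behavior, assuming the selected difference layers are stacked in a NumPy array; the label strings are placeholders for the Positive/Negative classes):

```python
import numpy as np

def classify_change(diff_layers, pos_threshold, neg_threshold):
    """Label changes in the per-pixel mean of the difference layers.

    Pixels above pos_threshold become 'positive', pixels below
    neg_threshold become 'negative', everything else 'no change'.
    """
    mean_diff = np.mean(diff_layers, axis=0)  # average the selected layers
    labels = np.full(mean_diff.shape, "no change", dtype=object)
    labels[mean_diff > pos_threshold] = "positive"
    labels[mean_diff < neg_threshold] = "negative"
    return labels
```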

4.3.6 Change Detection: Object vs. Vectors

Change Detection: Object vs. Vectors is an action that can be inserted multiple times. Its purpose is to classify differences between classified objects and a vector layer.
The user can decide which type of objects to classify using the Classify objects parameter. For classification, the overlap between the selected objects and the selected vectors is calculated.

Object selection
Select class(es): Select the classes you want to compare to vectors.

Vector selection
Select vector layer: Choose the point, line or polygon vector layer for change detection.
Select attribute: Select a subset of vectors based on an attribute.
Select attribute value(s): Select vectors based on attribute values.
Display vector: Button to visualize the vector selection in the current view.

Classification settings
Create class: Button to insert classes.
Edit classes: Button to edit class names and colors.
Classify objects: Select what type of change you want to classify.

l sel. classes that overlap with sel. vectors (reflecting no change)
l sel. classes that do not overlap with sel. vectors (objects reflect "new appearance" if vector data is older than image data)
l non-sel. classes that overlap with sel. vectors (objects reflect "disappearance" if vector data is older than image data)
Class: Select the class used for classification.


Min. overlap (%): Minimum overlap between vector and image object necessary to classify image
objects. (An object counts as overlapping with a vector if the percentage of its area that overlaps
with the vector is greater than the minimum overlap defined.)
Max. overlap (%): Maximum overlap between vector and image object necessary to classify image objects. (An object counts as non-overlapping with a vector if the percentage of its area that overlaps with the vector is below the maximum overlap defined here.)
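The interplay of the two overlap limits can be sketched as follows (an interpretation of the parenthetical definitions above; the function and its return labels are illustrative only):

```python
def overlap_status(object_area, shared_area, min_overlap_pct, max_overlap_pct):
    """Relate an image object to a vector by its overlapping area share.

    An object counts as overlapping if its overlap percentage exceeds
    min_overlap_pct, and as non-overlapping if the percentage is below
    max_overlap_pct; in between, neither rule applies.
    """
    pct = 100.0 * shared_area / object_area
    if pct > min_overlap_pct:
        return "overlap"
    if pct < max_overlap_pct:
        return "no overlap"
    return "neither"
```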

Run
Classify changes: Button to apply vector-object comparison.
Undo: Click to undo the classification.

4.3.7 Create Vector Layer

Create Vector Layer is an action that can be inserted multiple times. Its purpose is to create polygon vector layers based on existing image objects.

Settings
Output name: Choose a name for the vector layer. Default name is convertedFromObjects.
Execute: Generates vectors based on image objects.
Undo: Removes created vector layer from project.

4.3.8 Supervised Classification

Supervised Classification is an action that can be inserted only once. Its purpose is to train and apply
a classifier using samples.
The following classification algorithms are available: Bayes, KNN, SVM, Decision Tree and Random
Trees.
The action is completed if the classification has been applied based on the current settings and
samples.

Working domain
Input class: Select the class which serves as an input to the classification. Other classes will not be
modified by this action. If Supervised Classification is the first classification action, only unclassified
is available.


Load / Create Samples

Add manually
Create class: Inserts a new class.
Edit classes: Change the name or color of existing classes.
Manual sample selection: To insert samples, select a class and then single-click image objects in the view. They become samples for the respective class and are displayed in their class color. To select more than one sample at a time, hold down the left mouse button and move/brush over the image objects.
For a description of the Sample Information dialog, see Sample Information, page 12.

Add from statistics table
Select table (.csv): Browse to select a sample statistics table that can be used for training.
Load statistic: Click to load the selected sample statistics table.

Add from shape file
Select thematic layer: Select the layer with samples in point or polygon file format. If no thematic layer is available, samples can be selected manually. If not already loaded, the thematic layer can be added using the Add layer button in the action Create / Modify Project, page 22.
Class column: Select the column that contains the class name.
Create samples: Loads samples and creates image objects based on the selected thematic layer.
Classes with appropriate names are automatically generated. It is possible to load samples
iteratively from different thematic layers because samples from deselected thematic layers are not
removed.

Delete / Export samples
Delete samples: To delete samples, activate the Delete sample button and click or brush over the
samples you want to delete.
Delete local: Removes all samples of thematic layers and manual insertion.
Delete table: Removes samples loaded from a statistics table.
Export as .shp: Exports sample polygons to .shp file (including those loaded from a thematic layer).
Export table: Exports a sample statistics table (.csv).
Display sample export options: Activate to change the following export names:

l Export name: Change the name for the sample export file. Default name is Samples. The extension .shp is added for Export as .shp and the extension .csv for Export table. (The results folder can be specified in the action Create / Modify Project.)
l Class column name: Change the name of the attribute table column where the class name will
be stored. Default name is ClassName.


Classifier Parameters
Classifier algorithm: Select one of the available classifiers (see also Supervised Classification, page
29). Default classifier is the KNN classifier.
Source: Choose one of the following sources:

l object based: Applies samples to image objects (faster than layer based).
l layer based: Applies samples to image layer pixel values.
Select object features: Select the object features to define the applied feature space. Default setting
includes mean layer values of spectral bands and index layers of sample image objects.
Use layer(s): Select the image layers used in classification. Default setting includes all layers (spectral
bands and available index layers).
Display advanced settings: Activate to display and change advanced settings for the different
classifier algorithms.
Classify: Applies the selected classifier based on current samples.
Undo: Removes the classification (all samples remain).

Description of Supervised Classification Algorithms

Bayes
A Bayes classifier is a simple probabilistic classifier based on applying Bayes’ theorem with strong
independence assumptions. An advantage of the naive Bayes classifier is that it only requires a small
amount of training data to estimate the parameters (means and variances of the variables)
necessary for classification.

KNN (K Nearest Neighbor)


The k-nearest neighbor algorithm (k-NN) is a method for classifying objects based on closest training
examples in the feature space. k-NN is a type of instance-based learning where the function is only
approximated locally and all computation is deferred until classification. The k-nearest neighbor
algorithm is amongst the simplest of all machine learning algorithms: an object is classified by a
majority vote of its neighbors, with the object being assigned to the class most common amongst
its k nearest neighbors.
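The majority vote described above can be sketched in a few lines (a generic k-NN illustration, not eCognition's implementation; Euclidean distance in feature space is assumed):

```python
import math
from collections import Counter

def knn_classify(samples, query, k):
    """Classify a feature vector by majority vote of its k nearest samples.

    samples: list of (feature_vector, class_label) pairs
    query:   feature vector to classify
    """
    # Sort the training samples by Euclidean distance to the query.
    nearest = sorted(samples, key=lambda s: math.dist(s[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]  # label with the most votes
```

Larger k smooths the decision but can let distant samples outvote a genuinely close neighbor.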

SVM (Support Vector Machine)


A support vector machine (SVM) is a set of related supervised learning methods that analyze data and recognize patterns, used for classification. The standard SVM takes a set of input data and predicts, for each given input, which of two possible classes the input is a member of. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other. Support vector machines are based on the concept of decision planes that define decision boundaries. A decision plane separates a set of objects having different class memberships.


Decision Tree (CART, classification and regression tree)
Decision tree learning is a method commonly used in data mining in which a series of decisions is made to segment the data into homogeneous subgroups. The model takes the form of a tree with branches; the tree can be complex, involving a large number of splits and nodes. The goal is to create a model that predicts the value of a target variable based on several input variables. A tree can be
"learned" by splitting the source set into subsets based on an attribute value test. This process is repeated on each derived subset in a recursive manner called recursive partitioning. The recursion is complete when all items in the subset at a node have the same value of the target variable, or when splitting no longer adds value to the predictions. The purpose of tree-building algorithms is to determine a set of if-then logical (split) conditions.

Random Trees
The random trees classifier is more a framework than a specific model. It passes the input feature vector through every tree in the forest; each tree votes for the class label of the terminal node in which the vector ends up, and the label that obtains the majority of votes is the random forest prediction. All trees are trained with the same features but on different training sets, which are generated from the original training set.

4.3.9 Manual Editing

Manual Editing is an action that can be inserted multiple times.
It is considered complete as soon as a manual edit has been performed.

Object reshaping
Cut objects: Activates the object cutting mode. In this mode, first select an image object, then click again to start drawing a polygon (which will be cut out of the object) or a splitting line. Perform the cut by double-clicking, or use the context menu available on right-click once a cutting line has been started.
Merge all: Merges all objects of the same class. (In some cases this facilitates cutting objects.)

Object classification
Create class: Inserts a new class.
Edit classes: Change name or color of existing classes.

Object annotation
Start annotation: Activates the manual annotation mode. Double click an image object to open the
Edit annotation dialog where you can insert a value for the selected object. Alternatively you can
right click in the view and select Annotate. Multiple-selection and annotation is possible using the
Shift key.


When activating the annotation mode the cursor changes to a pen.


Annotation list: Opens the Image Object table containing the attribute Annotation. Right click an
object in the table and select Edit Object Annotation to enter a value.

Undo
Undo: Reverts all manual edits.

4.3.10 Object Merge

Object Merge is an action that fuses image objects that belong to the same class.
The action is completed after selecting the Apply button.

Merge if same class


All classes/Select classes: Select whether the action is applied to all classes or to selected classes only.
Classes: Select the classes for object merge (only available for Select classes).
Apply: Merges objects of the same class.
Undo: Reverts this action.

4.3.11 Minimum Mapping Unit

Minimum mapping unit is an action that can be inserted multiple times. It can be applied to selected
classes or all classes. It removes objects below a user-defined minimum mapping unit and assigns
them to the surrounding image object.
The action is completed after the user has selected the Apply button.

Remove objects with an area


All classes/Select classes: Select whether the action is applied to all classes or to selected classes only.
Classes: Select the classes for object removal (only available for Select classes).
Minimum mapping unit (pxls): Defines the minimum mapping unit in pixels.
Update on object selection: Starts a mode in which selecting an image object sets the minimum
mapping unit to the size of that object.
Apply: Removes objects with a size less than or equal to the minimum mapping unit.
Undo: Reverts this action.
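A simplified sketch of the merge step (not the product's implementation; it assumes objects are given as an ID raster and reassigns each undersized object to the 4-connected neighbor with the longest shared border):

```python
import numpy as np

def apply_minimum_mapping_unit(objects, mmu):
    """Merge objects with an area <= mmu pixels into an adjacent object.

    Each undersized object is assigned to the 4-connected neighbor with
    which it shares the longest border. (np.roll wraps at the image
    edges, which is acceptable for this small illustration.)
    """
    result = objects.copy()
    ids, areas = np.unique(result, return_counts=True)
    for oid, area in zip(ids, areas):
        if area > mmu:
            continue  # object is large enough, keep it
        mask = result == oid
        border_ids = []
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rolled = np.roll(result, shift, axis=(0, 1))
            border_ids.extend(rolled[mask & (rolled != oid)].tolist())
        if border_ids:
            vals, freq = np.unique(border_ids, return_counts=True)
            result[mask] = vals[freq.argmax()]  # longest shared border wins
    return result
```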


4.3.12 Smooth Objects

Smooth objects is an action that can be inserted multiple times. It can be applied to selected classes
or all classes. Image object outlines are smoothed based on a generalization value and adjacent
image objects of the same class are merged.
The action is completed after the user has selected the Apply button.

Group
All classes/Ignore selected classes: Select whether the action is applied to all classes or whether selected classes are excluded from smoothing.
Ignore classes: Select the classes that are excluded from smoothing (only available for Ignore selected classes). Selected classes are not modified. At least two classes need to remain for smoothing.
Scale: Defines the smoothing scale (in pixels). Higher values lead to higher generalization of objects.
Apply: Applies action with current settings.
Undo: Reverts this action.

4.3.13 Accuracy Assessment

Accuracy assessment is an action that can be inserted multiple times. It validates the classification against ground truth data provided as a thematic layer and generates statistical output describing the quality of the classification results. The statistical assessments can be saved as .csv and .html files.
If a thematic layer is selected, the action is completed as soon as you click Execute with the current settings. If the Thematic layer is set to None, the action is considered complete as soon as it is selected, provided that the previous action is also complete.

Validation data
Thematic layer: Select a thematic layer for data validation. If not already loaded, the thematic layer can be added using the Add layer button in the action Create / Modify Project, page 22.
Class column: Select the column that contains the class names. Note that class names have to
match those used for the samples.

Export classification accuracy values


Export format: Select between export in *.csv and *.html format or *.csv format only.
Export name: Insert filename for export (extension will be added automatically).


Execute: Saves the specified result files to the results folder specified in the action Create / Modify Project, and displays the result panel showing user's and producer's accuracies for the involved classes, as well as an overall accuracy estimate.
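User's, producer's and overall accuracy can be derived from a confusion matrix with the standard formulas (a sketch; the row/column orientation chosen here is an assumption, not necessarily the panel's layout):

```python
import numpy as np

def accuracy_report(confusion):
    """Derive user's, producer's and overall accuracy from a confusion matrix.

    confusion[i, j]: pixels of reference class j mapped to class i
    (rows = classification, columns = ground truth).
    """
    confusion = np.asarray(confusion, dtype=float)
    correct = np.diag(confusion)
    users = correct / confusion.sum(axis=1)      # reliability of the map
    producers = correct / confusion.sum(axis=0)  # completeness per class
    overall = correct.sum() / confusion.sum()
    return users, producers, overall
```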

4.3.14 Create Report

Create Report is an action that computes statistics based on the classified objects, shows them in
the Report window, and exports them to a .csv and/or .html file. It is an action that can be inserted
multiple times.

Settings
Export format: Choose between:

l .csv and .html - exports the report to *.csv and *.html format.
l .csv - exports the report to *.csv format
File name: Choose a file name for the report. Default name is {:Project.Name}\Report.
Area unit: Select a unit for image object area calculation. Choose between:

l m2 - square meters
l ha - hectare
l ft2 - square feet
l ac - acres
l pxl - pixels
Create: Exports the classification report.
Undo: Removes the exported classification report.
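The pixel-to-area conversion behind these units can be sketched as follows (the pixel-size parameter and the function itself are illustrative; the unit factors are the standard metric conversions):

```python
def object_area(pixel_count, pixel_size_m, unit):
    """Convert an object's pixel count into the report's area unit.

    pixel_size_m: ground edge length of one pixel in meters.
    """
    if unit == "pxl":
        return pixel_count
    square_meters = pixel_count * pixel_size_m ** 2
    per_unit_m2 = {
        "m2": 1.0,
        "ha": 10_000.0,        # 1 hectare = 10,000 m2
        "ft2": 0.09290304,     # 1 square foot = 0.09290304 m2
        "ac": 4046.8564224,    # 1 acre = 4,046.8564224 m2
    }
    return square_meters / per_unit_m2[unit]
```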

4.3.15 Export

The export action generates and saves a thematic layer of the classification results or exports a
temporary raster layer of the project. This action can be inserted multiple times.
The action is completed after selection of the Export button with the current settings.
Export type: Select the export type. Choose between:

l Objects -> Vector - exports all existing image objects to a vector file (with their classification if available).
l Existing Vector - exports a vector layer that was created or used in the current project.
l Existing Raster - exports a temporary raster layer of the project.


Class filter: Choose between All Classes for export or Selected Classes (only available for Export type
- Objects -> Vector).
Class selection: Select the classes for export (only available for Export type - Objects -> Vector, Class
filter > Selected Classes).
Attributes: Select the attributes for export (only available for Export type - Objects -> Vector).
Vector layer: Select the vector layer for export (only available for Export type - Existing Vector).
Export name: Insert filename for export (extension will be added automatically).
Export format: Depending on the selection above, choose .tif, .shp (shapefile) or .gdb (FileGDB).
Export: Saves the specified export file to the results folder specified in action Create / Modify
project.


5 Acknowledgments
Portions of this product are based in part on third-party software components. Trimble is required to include the following text with software and distributions.

5.1 The Visualization Toolkit (VTK) Copyright


This is an open-source copyright as follows:
Copyright © 1993–2006 Ken Martin, Will Schroeder and Bill Lorensen.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted
provided that the following conditions are met:

l Redistributions of source code must retain the above copyright notice, this list of conditions
and the following disclaimer.
l Redistributions in binary form must reproduce the above copyright notice, this list of
conditions and the following disclaimer in the documentation and/or other materials provided
with the distribution.
l Neither name of Ken Martin, Will Schroeder, or Bill Lorensen nor the names of any contributors
may be used to endorse or promote products derived from this software without specific prior
written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


5.2 ITK Copyright


Copyright © 1999–2003 Insight Software Consortium
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted
provided that the following conditions are met:

l Redistributions of source code must retain the above copyright notice, this list of conditions
and the following disclaimer.
l Redistributions in binary form must reproduce the above copyright notice, this list of
conditions and the following disclaimer in the documentation and/or other materials provided
with the distribution.
l Neither the name of the Insight Software Consortium nor the names of its contributors may be
used to endorse or promote products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

5.3 Geospatial Data Abstraction Library (GDAL) Copyright
5.3.1 gcore/Verson.rc
Copyright © 2005, Frank Warmerdam, warmerdam@pobox.com
All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
associated documentation files (the “Software”), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.


THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

5.3.2 frmts/gtiff/gt_wkt_srs.cpp
Copyright © 1999, Frank Warmerdam, warmerdam@pobox.com
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
associated documentation files (the “Software”), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial
portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
