
Definiens Developer 7.0
Essentials Training Module 1:
Complete Basic Workflow
Operation Principles and Tools; the Main Features and Functions

Imprint and Version


Document Version
Copyright 2008 Definiens AG. All rights reserved.
Published by
Definiens AG
Trappentreustr. 1
D-80339 München
Germany
Phone +49-89-231180-0
Fax +49-89-231180-90
Web http://www.definiens.com

Legal Notes
Definiens, Definiens Cellenger and Definiens Cognition Network Technology are
registered trademarks of Definiens AG in Germany and other countries. Cognition
Network Technology, Definiens eCognition, Enterprise Image Intelligence, and
Understanding Images are trademarks of Definiens AG in Germany and other
countries.
All other product names, company names, and brand names mentioned in this
document may be trademark properties of their respective holders.
Protected by patents US 7146380, US 7117131, US 6832002, US 6738513, US 6229920,
US 6091852, EP 0863485, WO 00/54176, WO 00/60497, WO 00/63788, WO 01/45033,
WO 01/71577, WO 01/75574, and WO 02/05198. Further patents pending.


Table of Contents

Essentials Training Module 1: Complete Basic Workflow
Imprint and Version
Legal Notes
Table of Contents
Introduction to this Module
    Symbols at the side of the document
Lesson 1  Introduction to the Essential Tools for Rule Set Development
    1.1 The Definiens Workspace and Project File
        1.1.1 Open the Workspace
        1.1.2 Open the Project file
    1.2 The predefined Viewing Modes
    1.3 The Visualization Tools
        1.3.1 Visualize Image Objects
        1.3.2 Display Classification results
        1.3.3 Panning and zooming functions
    1.4 The essential windows for Rule Set development
        1.4.1 The Feature View and Image Object Information
        1.4.2 The Class Hierarchy
        1.4.3 The Process Tree
    1.5 Execute a Process sequence
Lesson 2  Load and View Data
    2.1 Create a Project in a Workspace
        2.1.1 Supported image formats
        2.1.2 Define the data to be loaded
        2.1.3 Overview: Create Project dialog box
        2.1.4 Define Layer Alias
        2.1.5 Confirm the settings to create the Project
    2.2 How to open the created Project
    2.3 How to save a Project
    2.4 Adjust the View settings
        2.4.1 Changing the Layer Mixing
        2.4.2 Navigating through Image Layers
Lesson 3  Introduction to Processes
    3.1 Introduction to Cognition Network Language (CNL)
        The Concept of the Image Object Domain
    3.2 Overview of available algorithms
    3.3 Working with Processes
        3.3.1 Create a Process
        3.3.2 Arrange Processes
        3.3.3 Save a Rule Set or single Process
        3.3.4 Execute Processes
        3.3.5 Delete a Rule Set or single Process
Lesson 4  Segmentation: How to Create Image Objects
    4.1 Theory: Segmentation and Image Objects
        4.1.1 Image Object Primitives and Objects of Interest
        4.1.2 The Image Object Hierarchy
        4.1.3 Generating Suitable Image Objects
    4.2 The Multiresolution Segmentation
        4.2.1 The Multiresolution Segmentation algorithm
        4.2.2 Segment with Multiresolution Segmentation
        4.2.3 Effect of different Image Layer Weights
        4.2.4 Effect of different Homogeneity Criteria
    4.3 Creating multiple Image Object Levels
        4.3.1 Theory: Image Object Hierarchy
        4.3.2 Creating several Object Levels in one Project
        4.3.3 Navigate within the Image Object Hierarchy
Lesson 5  Image Objects - the Information Carriers
    5.1 Overview of the available Features for Classification
        5.1.1 Object Features
        5.1.2 Class-Related Features
        5.1.3 Scene Features
        5.1.4 Process-Related Features
        5.1.5 Meta Data
        5.1.6 Feature Variables
    5.2 How to use the Feature View
        5.2.1 Open the Feature View tool
        5.2.2 Navigate in the Feature View
        5.2.3 Visualize the Feature value range
Lesson 6  Basic Classification
    6.1 Create a Class
    6.2 Define the first Classification Process
        6.2.1 Preparation: Prepare the Process structure
        6.2.2 Append the Classification Process
        6.2.3 Define the condition for Classification
        6.2.4 Define the Class and execute the Process
        6.2.5 Review the Classification result
    6.3 Define the second Classification Process
        6.3.1 Define the correct Image Object Domain
        6.3.2 Define the condition for Classification
        6.3.3 Define the target Class
        6.3.4 Review the Classification
    6.4 Alternative Classification method: Insert conditions in the Class Description
        6.4.1 Insert both conditions in the Class Description of Water
        6.4.2 Insert a Classification Process
        6.4.3 Evaluate the Membership values of Objects
Lesson 7  Exercise: Recap Classification and Segmentation
    7.1 Create the Project and assign No Data values
    7.2 Segmentation
        7.2.1 Examine which Image Layers contain significant information for the class Water
        7.2.2 Set up the Segmentation Process
    7.3 Find Features and classify Water and Road
Lesson 8  Classify Using Context Information: Relative Border to Class
    8.1 Create the Class-Related Feature Relative Border to Water Body
    8.2 Find the appropriate threshold
    8.3 Add, edit and execute the Process for classifying with the Class-Related Feature
Lesson 9  Merge Objects
    9.1 Overview of algorithms to reshape Objects
    9.2 Merge Water, Road and all unclassified Objects
Lesson 10  Export Results
    10.1 Export the current view
    10.2 Export a Project statistic
    10.3 Export a vector layer (.shp)
        10.3.1 Define the name and vector type
        10.3.2 Add Features and configure the attribute table
Lesson 11  Sample-Based Classification with the Nearest Neighbor Classifier
    11.1 Nearest Neighbor (NN) theory
        Classic workflow
    11.2 Nearest Neighbor configurations
    11.3 Declare Sample Objects for the NN Classification (manual step!)
    11.4 Add, edit and execute a Process to classify
    11.5 Refine the Classification
Lesson 12  Batch Processing with eCognition Server
    12.1 Import data using an existing template
        12.1.1 Load the import template
        12.1.2 Load the data
    12.2 Submitting data for analysis
    12.3 View Job Scheduler status in a browser
        12.3.1 Review user jobs
        12.3.2 Review job overview
        12.3.3 View job details
        12.3.4 Review engine status
        12.3.5 Review engine usage
    12.4 Roll back to initial status


Introduction to this Module


After participating in the complete Essentials training, the trainee will be able to solve
image analysis tasks comparable to the following examples. The essential concepts of
object-based image analysis will be taught and the main strategies for Rule Set
development will be introduced.
Module 1 is the first module of the Essentials Training course and gives an
introduction to the main Features and functions needed for Rule Set development. The
trainee will learn the complete workflow: from loading and managing data, to creating
Objects and classifying them, up to the final export.
To achieve this, different kinds of data will be used and different segmentation and
Classification tasks will be discussed.

Classification of water bodies in Landsat data.

Classification of water and roads in Quickbird data.


Symbols at the side of the document


The symbols at the side of the document guide you through the exercises. They help
you identify whether you should read something, whether an action is needed, or
whether a screenshot is meant to be compared with your settings in the software.

Introduction: If the margin is hatched and Introduction is shown, the text gives a
general introduction or methodology for the following Lesson, method or exercise.

Information: If the margin is hatched and Information is shown, the text gives
information about the following exercise.

Action!: If this symbol is shown, follow the numbered items in the text. If you just
want to work through the exercises without reading the theory parts, follow only this
sign.

Settings Check: If this symbol is shown, compare the settings shown in the screenshot
with the settings in the corresponding dialog box in Developer.

Rule Set Check: If this symbol is shown, check the screenshot of the Process Tree
against the content of the Process Tree in Developer.

Result Check: If this symbol is shown, check the screenshot alongside against the
result in Developer. It should look similar.


Lesson 1 Introduction to the Essential Tools for Rule Set Development
In this Lesson you will get an introduction to:

 The Definiens Workspace and Project File
 The predefined Viewing Modes
 The Visualization Tools
 The essential windows for Rule Set development
 Execute a Process sequence

In this first Lesson you will get an introduction to the most important tools for Rule Set
development and get a feeling of the goal the whole training module is about.
This example Project contains an already classified subset of a Quickbird scene (Data
courtesy of Digital Globe).

Figure 1: Definiens Developer 7.0 GUI with one viewer open. At the top are the menus and
toolbars. At the right, the Process Tree and Class Hierarchy windows as well as the Feature
View and Image Object Information windows are shown.


1.1 The Definiens Workspace and Project File
This Chapter covers the following content:

 Open the Workspace
 Open the Project file
Introduction

All necessary data and Rule Set components are stored in the Definiens Project file
(.dpr). The Project files are managed in the Workspace. During this training you will
learn how to create and save those Projects, how to manage them in the Workspace
and how to execute batch processing from the Workspace.
A Workspace file (.dpj) contains:

 image data references
 projects
 exported result parameters
 references to the used Rule Set


Furthermore, it comprises the import and export templates, result states, and metadata.
To get a feeling for the structure of a Workspace and contents of a Project file, you will
open an example Workspace and Project.

1.1.1 Open the Workspace


Information

A Workspace file contains image data references, Projects, exported result parameters
and references to the used Rule Set. Furthermore, it comprises the import and export
templates, result states, and metadata.
In the Workspace window you administer the Workspace files. Here you manage all
relevant data of your image analysis tasks.
1. From the main menu select File>Open Workspace or press the Open Workspace button.

Action!

2. Browse to the \01_Definiens_ESSENTIALS_TRAINING\WS_Developer_7
folder and select the WS_Developer_7.dpj file.

Figure 2: Developer with Workspace WS_Developer_7 opened.


1.1.2 Open the Project file


Information

The Workspace already contains the Project QB_Yokosuka.
The Yokosuka Project file represents a subset of a whole Quickbird scene (Image data
courtesy of DigitalGlobe). This Project has already been processed using the Processes
stored in the Process Tree.
The Project contains one Image Object Level. In addition, the Image Objects have been
classified.
1. To open a Project file from the Workspace window, do one of the following:

 In the Workspace window, double-click the QB_Yokosuka Project file.
 Right-click the Project in the Workspace window and select Open from the menu.

Action!

The Project file will open and the image with Rule Set and Classes will be loaded.

NOTE:

The currently opened Project is marked with an asterisk in the Workspace window.

Chapter 1.1 covered the following content:

 Open the Workspace
 Open the Project file

1.2 The predefined Viewing Modes


In the View Settings toolbar there are 4 predefined View Settings available, specific to
the different phases of a Rule Set development workflow.

Figure 3: View Settings toolbar with the 4 predefined View Setting buttons: Load and Manage
Data, Configure Analysis, Review Results, Develop Rule Sets.

2. Select the predefined View Setting number 4, Develop Rulesets, from the View
Settings toolbar.

Action!

By default one viewer window for the image data is open, as well as the Process Tree
and the Image Object Information window.
3. Check whether the following tools are open:


 Snippets: go to the main menu and select Process>Snippets.
 Process Tree: go to the main menu and select Process>Process Tree or press the
Process Tree button.
 Class Hierarchy: go to the main menu and select Classification>Class Hierarchy or
press the Class Hierarchy button.
 Image Object Information: go to the main menu and select Image Objects>Image
Object Information or press the Image Object Information button.
 Feature View: go to the main menu and select Tools>Feature View or press the
Feature View button.

Figure 4: Definiens GUI with main windows: image data viewer in the left window, menus and
toolbars at the top, Process Tree, Class Hierarchy, Feature View and Image Object Information
windows at the right side.


1.3 The Visualization Tools


This Chapter covers the following content:

 Visualize Image Objects
 Display Classification results
 Panning and zooming functions

Introduction

It is essential during Rule Set development to display the image data, segmentation
outlines, Feature values in a color range, and Classification results, and to verify your
results.
Several options for displaying the content of the Image Object Level can be chosen from
the View Settings toolbar.

1.3.1 Visualize Image Objects


Introduction

Image Objects are the building blocks for any further image analysis and also the final
result of any analysis.
There are several ways to display Image Objects, their contained information or their
Classification. In the View Settings toolbar below, the setting to display the Object
outlines is shown.

Figure 5: View Settings toolbar.

1. Make sure that View Layer is selected by activating the button.

Action!

2. Select the Show or Hide Outlines button to show all outlines of all Image Objects
of the Level. See how they fit with the image content.

3. To show the Object Mean View, select the Pixel View or Object Mean View button.

Display Check

Figure 6: Image View, Object outlines with Pixel View and Object Outlines with Object Mean View.


1.3.2 Display Classification results


In this Project the Image Objects have already been classified according to the
conditions defined in the Rule Set.

1. Select the View Classification button and make sure that the Show or Hide
Outlines button is deselected.

Action!

2. Move the cursor over the Classification and the assigned Class will appear in a
tooltip.

3. Select the Pixel View or Object Mean View button to switch the transparency
view on and off.

4. Select the Show or Hide Outlines button and the outlines will be displayed in
the Classification colors.

Display Check

Figure 7: Classification View with Pixel View, Classification View with Object Mean View and
Classification View with Object Outlines and Pixel View.

1.3.3 Panning and zooming functions


Introduction

Depending on the level of detail you are working on, or for switching between detail
and overview, you need the zooming functions of Definiens Developer. They are all
grouped in the Zoom toolbar, in the menu View>Cursor Mode or View>Display Mode.

Figure 8: Zoom Toolbar

Use the different buttons to get familiar with their behavior.

1. Switch from zoom mode or panning mode to normal cursor.

Action! 2. To pan, drag the hand-shaped cursor around the Project window to move to
other areas of the image. (Alternatively Ctrl + P)

3. Area Zoom (Alternatively Ctrl + U)

4. Zoom Out Center

5. Zoom In Center

6. Select or enter a zoom value to change the display in the Project view.


7. Zoom Scene to Window


These functions are also grouped in the menu View>Cursor Mode, or available by
right-clicking in the Viewer window.

1. Zoom In

2. Zoom Out

3. Area Zoom (Alternatively Ctrl + U)


4. Panning

5. The Pan Window enables you to move around the image. Drag the red
rectangle to move to a different region of the image.

Chapter 1.3 covered the following content:

 Visualize Image Objects
 Display Classification results
 Panning and zooming functions


1.4 The essential windows for Rule Set development
This Chapter covers the following content:

 The Feature View and Image Object Information
 The Class Hierarchy
 The Process Tree

1.4.1 The Feature View and Image Object Information
Introduction

The Feature View tool and the Image Object Information window help you decide
which Features and values to use for Classification. Every Image Object is an
information carrier for Classification and has associated Features. These Features are
used to assign the Image Objects to the corresponding Classes.

 To get information about several Feature values for one Image Object, use the
Image Object Information window.
 To get the values of one Feature for all Image Objects, use the Feature View.

1. Select a single Image Object by clicking on it and see the associated Features and
values in the Image Object Information window.

Action!

2. Select another Image Object and the values will change.
3. Double-click on one Feature listed in the Feature View window.

All Objects now appear in gray values representing the values of the selected Feature.
Objects with low values are shown in dark gray, Objects with high values in bright gray.

Display Check

Figure 9: Image Object Information window with Feature values for a selected Image Object.


Figure 10: Feature View window with values displayed in the viewer in gray values.
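The gray-value display of a Feature can be pictured as a simple min-max normalization of the per-object Feature values: the lowest value maps to dark, the highest to bright. A minimal NumPy sketch of that idea (illustrative only, not Definiens code; the function name is made up):

```python
import numpy as np

def feature_to_gray(values):
    """Map per-object feature values to 8-bit gray levels:
    the lowest value becomes dark (0), the highest bright (255)."""
    v = np.asarray(values, dtype=float)
    lo, hi = v.min(), v.max()
    if hi == lo:                       # all objects identical: mid-gray
        return np.full(v.shape, 128, dtype=np.uint8)
    return ((v - lo) / (hi - lo) * 255).astype(np.uint8)

# e.g. a "mean brightness" feature for four image objects
gray = feature_to_gray([10.0, 55.0, 100.0, 32.5])
```

The object with the lowest feature value is rendered black, the highest white, and everything in between is interpolated linearly.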

1.4.2 The Class Hierarchy


Introduction

During this training you will learn how to create Classes and how to assign Image
Objects to certain Classes. The rules for the Classification can be stored either in the
Class Description or in the Processes listed in the Process Tree.
All Classes and their content are stored in the Class Hierarchy. They can be structured
in a semantic way in the Groups Hierarchy, and they inherit their conditions when
structured in the Inheritance Hierarchy.

Display Check

Figure 11: Class Hierarchy window, the Groups Hierarchy tab is selected.


1.4.3 The Process Tree


Introduction

In this training you will learn how to create and edit single Processes and sequences of
Processes, and how to structure them in a meaningful way, similar to the Process Tree
contained in the QB_Yokosuka Project.

1. Collapse and expand the Parent Processes in the Process Tree to see the single
Processes and the structure in which they are grouped.

Action!

2. Double-click on a single Process and examine its content and settings.

Display Check

Figure 12: Process Tree window showing the Process for analyzing the subset.

Chapter 1.4 covered the following content:

 The Feature View and Image Object Information
 The Class Hierarchy
 The Process Tree


1.5 Execute a Process sequence


Introduction

In the currently open Project the inserted Processes have already been executed: an
Image Object Level and a Classification exist.
To re-execute the analysis, the existing Image Object Level first has to be deleted. After
that, the processing steps for segmentation and Classification can be performed once
more.

1. Delete the existing Image Object Level by clicking the Delete Level button in the
View Navigate toolbar.

Action!

2. Select Level1 and confirm with OK.
3. Select the topmost Process Take The Plunge and right-click it.
4. Select Execute from the menu.

The subsequent processing steps are now executed one after the other:
Segmentation -> Classification Water -> Classification Vegetation -> Merge Image
Objects

In Lesson 1 you got an introduction to:

 The Definiens Workspace and Project File
 The predefined Viewing Modes
 The Visualization Tools
 The essential windows for Rule Set development
 Execute a Process sequence


Lesson 2 Load and View Data


This Lesson has the following content:

 Create a Project in a Workspace
 How to open the created Project
 How to save a Project
 Adjust the View settings

Introduction

In this Lesson we create a new Project and examine the loaded data.
After importing the data, and at different steps of the image analysis workflow, you
investigate your Definiens Projects visually. Different visualization methods enable you
to focus on what you are searching for.
At the beginning, you explore the single image layers. You can define the color
composition for the display of image layers and set equalizing options.
In this Lesson a subset of a Quickbird scene is used (Data courtesy of Digital Globe).

Figure 13: Different view settings for the same data set.

2.1 Create a Project in a Workspace


This Chapter covers the following content:

 Supported image formats
 Define the data to be loaded
 Overview: Create Project dialog box
 Define Layer Alias
 Confirm the settings to create the Project


Introduction

There are several ways to import image data into a Workspace:

 Create single Projects manually
 Import existing Projects
 Use predefined import templates
 Use customized import settings


The option to create single Projects, or to import already existing Projects, is used for
testing and evaluation purposes and will be used in this Lesson.
The easiest way to import image data into a Workspace is to use the import templates.
These templates provide all import settings for standard situations.
The Customized Import tool also allows importing data from complex file structures.

2.1.1 Supported image formats


Information

An image scene can be stored within a Project in a .dpr file. Image analysis extracts
information from a scene and adds it to the Project. This information is expressed in
classified Image Objects. When viewing Projects, the user can investigate the input
scene, the segmentation information and the Classification results.
Definiens is capable of importing both raster and vector data. All vector data is
converted into a raster layer during import. The software distinguishes between two
basic types of data:

 Image layers
 Thematic layers
While image layers contain continuous information, the information of thematic layers is
discrete. The two types of layers have to be treated differently in both segmentation and
Classification. Thematic layers can be imported in addition to image layers.
Information

Definiens Developer 7.0 supports the import of a variety of raster file formats.


2.1.2 Define the data to be loaded


1. Switch back to the Load and Manage Data view by clicking the button in the View
Settings toolbar.
2. To create a new Project, right-click in the Workspace window and select Add
Project from the menu.

Action!

3. Navigate to the folder
\01_Definiens_ESSENTIALS_TRAINING\Module1\QB_Maricopa.
4. Mark the following image files and click Open:

 04mar_pan.img
 04mar_multi.img

The Create Project dialog box opens.
5. In the Create Project dialog box, enter a meaningful name for the Project in the
Name field, e.g. QB_Maricopa.

2.1.3 Overview: Create Project dialog box


Information

The Create Project dialog box has four main sections.

The General Settings section (1):

 The geocoding information is displayed if the Use geocoding check box is selected,
and the resolution is automatically detected and displayed in the Resolution field.
 The unit is detected automatically if auto is selected from the drop-down list.
 The unit is automatically set to meters, but can be changed by selecting another
one from the drop-down list.
 A subset of the loaded images can be selected by clicking the Subset Selection
button. The Create Subset dialog box opens.


Information

The Image Layer Options section (2):

 All preloaded image layers are displayed along with their properties. To select an
image layer, click it. To select multiple image layers, press the Ctrl or Shift key and
click on the image layers.
 To edit a layer, double-click or right-click an image layer and choose Edit; the
Layer Properties dialog box will open. Alternatively you can click the Edit button.
 To insert an additional image layer, click the Insert button or right-click inside
the image layer display window and choose Insert from the context menu.
 To remove one or more image layers, select the desired layer(s) and click Remove.
 To change the order of the layers, select an image layer and use the up and down
arrows.
 To set No Data values for those pixels that are not to be analyzed, click No Data.
The Assign No Data Values dialog box opens.

The Thematic Layer Options section (3):

 To insert a thematic layer, click the Insert button or right-click inside the
thematic layer display window and choose Insert from the context menu.
 Editing a thematic layer works like editing image layers, described above.

The Meta Data Options section (4):

 Here you can load additional metadata as an .ini file, if available.

Figure 14: Create New Project dialog box.

2.1.4 Define Layer Alias


Information

In order to generate Rule Sets that are transferable between different datasets, the
loaded channels must have aliases assigned to them.


1. To assign a layer alias, select the layer in the Create Project dialog box and
double-click it.
The Layer Properties dialog box opens.

Action!

Figure 15: The Layer Properties dialog box.

2. Assign the following aliases to the layers:

 Layer1: blue
 Layer2: green
 Layer3: red
 Layer4: nir
 Layer5: pan
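The point of aliases is that a Rule Set refers to layers by their role rather than by their position, so it keeps working when a new dataset delivers the bands in a different order. A rough analogy in plain Python (not CNL; the datasets and names are made up for illustration):

```python
# A rule written against raw layer positions breaks when the band order changes;
# one written against aliases does not. Hypothetical illustration, not Definiens code.
dataset_a = {"Layer1": "blue", "Layer2": "green", "Layer3": "red", "Layer4": "nir"}
dataset_b = {"Layer1": "nir", "Layer2": "red", "Layer3": "green", "Layer4": "blue"}

def alias_map(layers):
    """Invert the layer->alias table so rules can look layers up by alias."""
    return {alias: layer for layer, alias in layers.items()}

# The same "use the nir layer" rule resolves correctly on both datasets:
assert alias_map(dataset_a)["nir"] == "Layer4"
assert alias_map(dataset_b)["nir"] == "Layer1"
```

A Rule Set that asks for "nir" therefore finds the right physical layer in either dataset, which is exactly what the alias assignment above achieves inside Developer.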

2.1.5 Confirm the settings to create the Project
3. Click OK at the bottom of the Create Project dialog box.
The new Project is now added to the Workspace.

Chapter 2.1 covered the following content:

 Supported image formats
 Define the data to be loaded
 Overview: Create Project dialog box
 Define Layer Alias
 Confirm the settings to create the Project

2.2 How to open the created Project


There are several methods to open a Project from the Workspace:

 Double-click the Project in the Workspace window.
 Right-click the Project and select Open from the context menu.

Note:

The currently opened Project is marked in the Workspace window with an asterisk.

2.3 How to save a Project


Save the changes of a Project by any of these methods:

 Click the Save Project button.
 Press Ctrl + S on your keyboard.

NOTE:

As there is no undo command, it is recommended that you save a Project prior to any
operation that could lead to unwanted loss of information, such as deleting an Image
Object Level or splitting Objects. To retrieve the last saved state of the Project, close
the Project without saving and reopen it.

2.4 Adjust the View settings


This Chapter covers the following content:

 Changing the Layer Mixing
 Navigating through Image Layers

Introduction

Displaying the image data in an appropriate way is crucial for Rule Set development. It
helps you to explore the content of the image data. Good display settings also enable
you to get a feeling for the information that is important to you, layer by layer.
When you create a new Project, the first three bands of the data are displayed in red,
green and blue by default. To achieve more contrast in the view, the layer mixing can
be changed.

NOTE:

The layer mixing changes the display only and does not affect image processing.

2.4.1 Changing the Layer Mixing


1. Open the Edit Image Layer Mixing dialog box by one of the following:

Action!

 From the View menu, select Image Layer Mixing.
 Click the Edit Image Layer Mixing button in the View Settings toolbar.

Figure 16: Layer mixing buttons in the View Settings toolbar

2. To view the image in true color, set the blue, green and red layers to the respective
color slots by clicking in the corresponding fields.
3. To additionally display the nir layer, select its box as well.
4. To weight the display of the image layers, uncheck the No Layer Weights field
and click inside the desired R, G or B box. Increase a weight with a left mouse
click, decrease it with a right mouse click.
5. To confirm the settings, click OK at the bottom of the Edit Image Layer Mixing
dialog box.
The image will now be displayed using the view settings you specified.

Settings Check

Figure 17: True color layer mix.

Figure 18: False color mix with additional nir layer displayed green.

Figure 19: False color mix settings with weighted image layers and resulting image display in the viewer.
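Conceptually, the weighted layer mixing shown above is a per-display-channel weighted average of the selected image layers. A minimal NumPy sketch of that idea (illustrative only; this is not how Definiens implements it, and the names are made up):

```python
import numpy as np

def mix_channel(layers, weights):
    """Weighted average of image layers for one display channel (R, G or B).
    `layers` is a list of 2-D arrays, `weights` the per-layer display weights."""
    w = np.asarray(weights, dtype=float)
    stack = np.stack([np.asarray(l, dtype=float) for l in layers])
    # normalize the weights, then contract them against the layer stack
    return np.tensordot(w / w.sum(), stack, axes=1)

# e.g. a green display channel mixing the green and nir layers with weights 1:2
green_layer = np.full((2, 2), 60.0)
nir_layer = np.full((2, 2), 120.0)
g = mix_channel([green_layer, nir_layer], [1, 2])   # every pixel -> 100.0
```

Raising a layer's weight (left mouse click in the dialog) pulls the displayed channel toward that layer; the pixel data themselves are untouched, exactly as the NOTE above states.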


2.4.2 Navigating through Image Layers


Overview
Go to the View Settings toolbar and switch between the different display modes of the
image layers.

• The Single Layer Grayscale button displays the first image layer in gray values.
• The Mix Three Layers RGB button shows the first three layers as true color.
• The Show previous image layer button shifts the weight arrangement down.
• The Show next image layer button shifts the weight arrangement up.
• The Edit Image Layer Mixing button opens the related window.

Chapter 2.4 covered the following content:

• Changing the Layer Mixing
• Navigating through Image Layers

Lesson 2 had the following content:

• Create a Project in a Workspace
• How to open the created Project
• How to save a Project
• Adjust the View settings

28
Introduction to Processes

Lesson 3 Introduction to Processes


This Lesson has the following content:

• Overview of available algorithms
• Working with Processes

Introduction
This Module gives you an introduction to the Cognition Network Language, which is
the unique computing language for developing advanced image analysis algorithms.

3.1 Introduction to the Cognition Network Language (CNL)

The Cognition Network Language is a programming language used to translate the human
recognition process into a series of rules (Rule Sets), combining algorithms and Features.
A single Process is the elementary module of a Rule Set, providing a solution to a
specific image analysis problem. Processes are the main working tools for developing
Rule Sets.
The main functional parts of a single Process are:

• the Image Object domain
• the algorithm and Features

A single Process enables the application of a specific algorithm to a specific region of
interest in the image.


The Concept of the Image Object Domain


The Image Object Domain describes the region of interest in the Image Object
Hierarchy where the algorithm of the Process will be executed.


• First, segment the entire image.
• Second, classify all Image Objects as class A or B.
• Third, segment only Objects of Class B.
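The three-step sequence above can be sketched as a tiny process tree. The following Python is purely illustrative — Definiens Rule Sets are built in the GUI, not written as Python — and every class name, algorithm string and domain string here is invented for the sketch:

```python
# Illustrative sketch only: a Process pairs an algorithm with an Image Object
# domain, and Child Processes form a tree that is executed top-down.
from dataclasses import dataclass, field

@dataclass
class Process:
    name: str
    algorithm: str = "execute child processes"
    domain: str = "no image object"      # region of interest for the algorithm
    children: list = field(default_factory=list)

    def execute(self, log):
        # Run this Process, then its children in order (flow control).
        log.append(f"{self.algorithm} on domain '{self.domain}'")
        for child in self.children:
            child.execute(log)

# The example sequence from the text: segment all, classify A/B, re-segment B.
rule_set = Process("main", children=[
    Process("seg",   "multiresolution segmentation", "pixel level"),
    Process("cls",   "classification",               "image object level: Level1"),
    Process("seg B", "multiresolution segmentation", "class B objects"),
])

log = []
rule_set.execute(log)
print("\n".join(log))
```

The parent acts purely as a container; restricting each child's domain is what narrows the region of interest step by step.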

3.2 Overview of available algorithms


Overview
Process Related Algorithms:
Use the execute Child Processes algorithm in conjunction with the no Image
Object domain in a Parent Process to structure your Process Tree. A Process with these
settings serves as a container for a sequence of functionally related Processes.
With the algorithm Set Rule Set options you can, for example, define general
vectorization settings and settings for distance calculation.
Segmentation Algorithms:
Segmentation algorithms are used to create or modify Image Objects.
Basic Classification Algorithms:
Classification algorithms assign an Image Object to one or more classes based on
certain criteria.
Advanced Classification Algorithms:
Advanced Classification algorithms classify Image Objects that fulfill special criteria, like
being enclosed by another Image Object or being the smallest or the largest Object in a
whole set of Objects.
Variables Operation Algorithms:
Variable operation algorithms are used to modify the values of variables. They provide
different methods to perform computations based on existing variables and Image
Object Features and store the result within a variable.


Reshaping Operation Algorithms:
Reshaping algorithms modify the shape of existing Image Objects. They execute
operations like merging Image Objects or splitting them into their sub-Objects, and also
include sophisticated algorithms supporting a variety of complex Object shape
transformations.
Level Operation Algorithms:
Level operation algorithms allow you to copy, remove or rename entire Image Object
Levels within the Image Object Hierarchy.
Interactive Operations Algorithms (formerly: Training Operation Algorithms):
With the Interactive Operation algorithms you can configure the user interaction
necessary to use predefined actions in Definiens Architect for processing.
Sample Operation Algorithms:
Use sample operation algorithms to handle samples for Nearest Neighbor Classification
and to configure the Nearest Neighbor settings.
Image Layer Operation Algorithms:
Image layer operation algorithms are used to create or delete image layers. Special
filter algorithms are available to generate additional layers, such as edge filter layers or
smoothed image layers.
Thematic Layer Operation Algorithms:
Thematic layer operation algorithms are used to transfer data from thematic layers to
Image Objects and vice versa.
Export Algorithms:
Export algorithms are used to export table data, vector data and images derived from
the image analysis results.
Workspace Automation Algorithms:
For executing such Rule Sets, a Definiens eCognition Server must be installed. Workspace
automation algorithms are used for working with subroutines of Rule Sets. They allow
you to automate and accelerate the processing of especially large images. Workspace
automation algorithms enable multi-scale workflows, which integrate analysis of
images at different scales, magnifications, or resolutions.
Customized Algorithms:
Customized algorithms enable you to create your own algorithms. Once created, they
will be available at the bottom of the algorithm drop-down list box in the Edit Process
dialog box as any other algorithm.


3.3 Working with Processes


This Chapter covers the following content:

• Create a Process
• Arrange Processes
• Save a Rule Set or single Process
• Execute Processes
• Delete a Rule Set or single Process

Introduction
All Processes are created, saved and stored in the Process Tree window.

In the Process Tree you can:

• add
• arrange
• delete
• load
• save

single Processes and Process sequences.
Processes may have any number of Child Processes. The hierarchy formed this way defines
the structure and flow control of the image analysis. Arranging Processes containing
different types of algorithms allows the user to build a sequential image analysis
routine.

3.3.1 Create a Process


Information
When a Process is stored in the Process Tree, a small icon is attached according to
the algorithm type. This makes it easier to identify the purpose of the Process.

Action!
1. If not already open, the Process Tree can be opened by either

• clicking the Process Tree button in the Tools toolbar
• or selecting Processes > Process Tree in the menu
2. Right-click the Process Tree window.
3. Select Append New from the context menu.
The Edit Process dialog box opens.
4. Keep the default settings.
5. To confirm the default settings click OK.
The Process is now appended to the Process Tree. As the default settings are kept it
appears as for all in the Process List.
Process
Check


Figure 20: Process Tree window with one Process inserted. As this Process has the default settings it is
listed as for all.

3.3.2 Arrange Processes


Information
In the Process Tree, Processes can be arranged to form a hierarchical structure. Each
Process can contain any number of Child Processes. This allows you to group Processes
into functional modules. Furthermore, this functionality enables the generation of complex
workflows by restricting Child Processes to certain domains or tying Child Processes to
conditions.
To see how the grouping of Processes works:

Action!
1. Right-click on the existing Process and select Insert Child. Confirm with OK and
the new Process will be added as a sub-Process of the existing Process.
2. Alternatively, a Process can be dragged beneath another Process while pressing
the right mouse button.
3. Change the order of a Child Process: drag it below all Processes while pressing the
left mouse button.

Process
Check

Figure 21: Parent Process with inserted Child Process.

3.3.3 Save a Rule Set or single Process


Action!
1. To save the complete Rule Set, right-click in the Process Tree and choose Save
Rule Set from the menu.
2. Insert an appropriate name and confirm with OK.
Now the Rule Set is saved with all the algorithm settings and according classes.
3. To save a single Process or a specific Process sequence, right-click on it and select
Save as.

3.3.4 Execute Processes


To execute a Process do one of the following:


Action!
1. Right-click on the Process or Process sequence you want to execute and select
Execute.
2. Alternatively, select the Process and press F5.

3.3.5 Delete Rule Set or single Process


Information
To delete a Process, select it and do one of the following:

• Right-click and select Delete Rule Set to delete the complete Rule Set.
• Right-click and select Delete to delete a single Process or Process sequence, or
• press Delete on your keyboard.

Chapter 3.3 covered the following content:

• Create a Process
• Arrange Processes
• Save a Rule Set or single Process
• Execute Processes
• Delete a Rule Set or single Process

Lesson 3 had the following content:

• Overview of available algorithms
• Working with Processes


36
Segmentation: How to Create Image Objects

Lesson 4 Segmentation: How to


Create Image Objects
This Lesson has the following content:

• Theory: Segmentation and Image Objects
• The Multiresolution Segmentation
• Creating multiple Image Object Levels

Introduction
In this Lesson you will learn how to create Objects with the Multiresolution
Segmentation. Segmentation is always the first step when starting an image
analysis.
All subsequent analysis steps require initial Image Objects before any other Process
can be executed. Depending on the level of detail of your task, several Segmentation and
Classification Processes may follow.
Segmentation algorithms are used to subdivide the entire image (represented by the
pixel Level domain) or specific Image Objects from other domains into smaller Image
Objects, or to merge small Objects into larger ones.
Definiens provides several different approaches ranging from very simple algorithms like
chessboard and quad tree based segmentation to highly sophisticated methods like
multiresolution segmentation or the contrast filter segmentation.
Segmentation algorithms are required whenever you want to create new Image
Objects Levels based on the image layer information. But they are also a very valuable
tool to refine existing Image Objects by subdividing them into smaller pieces for a
more detailed analysis. Some are used for merging, e.g. the Spectral Difference
Segmentation.

NOTE:

For more detailed information on the segmentation algorithm, please refer to the
Reference Book, which you will find in the folder User Guide of your Definiens
Developer installation.


4.1 Theory: Segmentation and Image Objects
This Chapter covers the following content:

• Image Object Primitives and Objects of Interest
• The Image Object Hierarchy
• Generating Suitable Image Objects

4.1.1 Image Object Primitives and Objects of Interest
Information

• Object Primitives are the information carriers and building blocks for later
Classification.
• Object Primitives are the starting point for every processing step in Definiens.
• Ideally, Object Primitives are fragments of the Objects of Interest (but not
necessarily).
• The internal representation and data structure are the same for Object Primitives
and Objects of Interest.

In most applications there are no generic procedures that are able to reliably extract
Objects of interest. Objects of interest can be heterogeneous, variable, noisy, or structured.
Semantics and expert knowledge, as used in Definiens, are needed to accurately identify
and shape the right Objects of interest.


• Objects of Interest represent the desired final structures of interest.

4.1.2 The Image Object Hierarchy


Information
Based on the ability to generate Image Object primitives at any chosen scale,
Definiens Developer enables the production of more than one Object Level and the
connection of these Levels in a hierarchical manner.
The different techniques for segmentation in Definiens Developer can be used to
construct a hierarchical network of Image Objects which represents the image
information at different spatial resolutions simultaneously. The Image Objects are
networked so that each Image Object knows its context (neighborhood), its super-
Object and its sub-Objects. This hierarchical network is topologically definite, in other
words, the border of a Super-Object is consistent with the borders of its Sub-Objects.
The area represented by a specific Image Object is defined by the sum of its Sub-Object's
areas.
Each Level is constructed based on its direct sub-Objects, in other words, the sub-
Objects are merged into larger Image Objects on the next higher Level. This merge is
limited by the borders of existing super-Objects; adjacent Image Objects cannot be
merged if they have different super-Objects.

Figure 22: 2 schemes of hierarchical network of Image Objects in abstract illustration.


4.1.3 Generating Suitable Image Objects


Information
In order to produce a satisfying Classification result, the Image Objects must
represent the classes that are to be discriminated in the subsequent Classification.
Therefore, avoid merging Objects that belong to different classes. Often it is not
possible to produce an Image Object Level in which all Image Objects explicitly
represent the classes to be extracted. In such cases, consider using different Object
Levels for the Classification of structures of different scales.
Two main principles exist for segmentation:

• Always produce Image Objects representing the class Features accurately: as large
as possible and as fine as necessary.
• Especially for the Multiresolution Segmentation, use as much color criterion as
possible and as much shape criterion as necessary to produce Image Objects with
the best border smoothness and compactness. The reason for this rule is that the
spectral information is ultimately the primary information contained in image data.
Using too much shape criterion can therefore reduce the quality of the
segmentation result.
A well-suited approach when segmenting new data is to simply play with it, running
different segmentations with different parameters until the result is satisfying. If the
datasets are too large for easy handling, try to work with a representative subset to
speed up the process. Once suitable segmentation parameters have been found, they
can be applied to the whole dataset.

Chapter 4.1 covered the following content:

• Image Object Primitives and Objects of Interest
• The Image Object Hierarchy
• Generating Suitable Image Objects


4.2 The Multiresolution Segmentation


This Chapter covers the following content:

• The Multiresolution Segmentation algorithm
• Segment with Multiresolution Segmentation
• Effect of different Image Layer Weights
• Effect of different Homogeneity Criterion

4.2.1 The Multiresolution Segmentation algorithm
Theory
Multiresolution segmentation is an algorithm developed to extract Image Objects which
are homogeneous both in pixel values and in Object shape.
It allows the extraction of homogeneous Image Object primitives in any chosen resolution,
especially taking local contrasts into consideration.

• Multiresolution Segmentation is a pairwise region merging technique (i.e.
bottom-up).
• The procedure starts with single-pixel-sized Objects and merges them iteratively,
in pairs, over several loops into larger Objects, as long as an upper threshold of
heterogeneity defined through the scale parameter is not locally exceeded.
• In each loop, every Object in the Object Level is handled once.
• The loops continue until no further merge is possible.
• Therefore, higher values for the scale parameter result in larger Objects, smaller
values in smaller ones.
• Multiresolution Segmentation is an optimization procedure that, for a given number
of Image Objects, minimizes the average heterogeneity and thereby maximizes
homogeneity.
• Heterogeneity is defined as a mixture of spectral (standard deviation) and shape
(deviation from a compact or a smooth shape) heterogeneity.
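The bullet points above can be illustrated with a toy one-dimensional sketch of pairwise region merging. This is not the actual Multiresolution Segmentation implementation — the real algorithm works on 2-D neighborhoods and also weighs shape and layer weights — so the heterogeneity measure and threshold handling below are simplified assumptions:

```python
# Toy sketch of the pairwise region-merging idea (illustrative only).

def heterogeneity(pixels):
    """Spectral heterogeneity: size-weighted standard deviation."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return n * var ** 0.5

def merge_pass(segments, scale):
    """One loop: each adjacent pair is considered once; merges that would
    increase heterogeneity by less than the scale threshold are accepted."""
    merged, i = [], 0
    while i < len(segments):
        if i + 1 < len(segments):
            a, b = segments[i], segments[i + 1]
            cost = heterogeneity(a + b) - heterogeneity(a) - heterogeneity(b)
            if cost < scale:          # scale parameter = max allowed increase
                merged.append(a + b)
                i += 2
                continue
        merged.append(segments[i])
        i += 1
    return merged

def segment(pixels, scale):
    segments = [[p] for p in pixels]  # start with single-pixel Objects
    while True:
        nxt = merge_pass(segments, scale)
        if len(nxt) == len(segments):  # no further merge possible -> stop
            return segments
        segments = nxt

row = [10, 11, 10, 50, 52, 51, 90, 91]
print(segment(row, scale=5))    # larger scale -> fewer, larger Objects
print(segment(row, scale=0.1))  # smaller scale -> more, smaller Objects
```

Raising the scale value lets more candidate merges pass the threshold, so fewer and larger Objects remain — mirroring the behavior of the scale parameter described above.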


General setting possibilities for the Multiresolution Segmentation algorithm
Information
In order to segment an image with the Multiresolution Segmentation, the following
parameters have to be set:
Level Name:
Here you can give your Image Object Levels meaningful names, either according to the
hierarchy (like Level1, Level2) or according to the content (like Level Vegetation, Level
Buildings).
Image Layer weights:
Image layers can be weighted differently depending on their importance or suitability for
the segmentation result.
The higher the weight assigned to a layer, the more of its information will be used
during the segmentation Process. Consequently, image layers that do not contain the
information intended for representation by the Image Objects should be given little
or no weight. Nonetheless, they can still be used for Classification.

Figure 23: With these settings only the panchromatic layer is weighted.

Note that the sum of all chosen weights for image layers is internally normalized to 1.
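As a small illustration of this normalization — the layer names and weight values here are purely hypothetical:

```python
# Sketch: chosen layer weights are normalized so they sum to 1 before use.
weights = {"blue": 1, "green": 1, "red": 1, "nir": 1, "pan": 2}

total = sum(weights.values())
normalized = {layer: w / total for layer, w in weights.items()}

print(normalized)  # the pan layer contributes twice as much as each other layer
```

Only the ratios between the weights matter, not their absolute values: weighting all layers with 1 gives the same result as weighting all with 10.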
Thematic Layer usage:
If Thematic Layers are used, the segmentation will not merge across thematic
Objects. Consequently, wherever there is an Object boundary in the thematic layer,
there will also be an Object boundary in the segmentation result.

Figure 24: Left: Thematic Layer; Right: Object outlines on basis of Thematic Layer.

Scale parameter:
The scale parameter is an abstract term.
It determines the maximum allowed heterogeneity for the resulting Image Objects.
In heterogeneous data the resulting Objects for a given scale parameter are smaller than
in homogeneous data.
By modifying the value of the scale parameter, you can vary the size of the resulting
Image Objects. A high scale parameter results in large Objects and vice versa.


Figure 25: The higher the scale parameter is set, the coarser the Objects are allowed to grow.

Composition of homogeneity criterion:

To define the homogeneity of an Object, two criteria can be set. It is important to
understand that each criterion has an opponent with which it sums up to 1:

• Shape has the opponent Color.
• Compactness has the opponent Smoothness.

It is also important to understand that

• Compactness and Smoothness together represent the Shape criterion.
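A minimal sketch of this opponent relationship (the 0.9 upper limit for shape is explained in Chapter 4.2.4; the function and dictionary names here are invented for illustration):

```python
# Sketch of how the homogeneity-criterion weights relate (weights only; the
# actual heterogeneity measures are computed per candidate merge).
def homogeneity_weights(shape, compactness):
    """Each slider has an opponent with which it sums to 1."""
    assert 0 <= shape <= 0.9, "shape must not exceed 0.9"
    assert 0 <= compactness <= 1
    color = 1 - shape                 # opponent of shape
    smoothness = 1 - compactness      # opponent of compactness
    return {"color": color, "shape": shape,
            "compactness": compactness, "smoothness": smoothness}

print(homogeneity_weights(shape=0.1, compactness=0.5))
```

So setting shape to 0.1 automatically means color is weighted 0.9, and within the shape part, compactness 0.5 implies smoothness 0.5.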

4.2.2 Segment with Multiresolution Segmentation
Information
In the following exercise we will create an Image Object Level Level1 using the
Multiresolution Segmentation, with the scale parameter set to 60 and all image layers
weighted equally with 1.
Before we append and execute the Process, the existing Image Object Level has to be
deleted.

Preparation
Clear the existing Rule Set and append a new Process.

Action!
1. Delete the Rule Set in the QB_Maricopa Project created earlier in this training.
2. Select the predefined view setting number 4 Develop Rulesets from the View
Settings toolbar, if not already selected.

3. Check whether the following tools are open:


• Process Tree: go to the main menu and select Process > Process Tree or press
the Process Tree button.
• Class Hierarchy: go to the main menu and select Classification > Class
Hierarchy or press the Class Hierarchy button.
• Image Object Information: go to the main menu and select Image
Objects > Image Object Information or press the Image Object Information
button.
• Feature View: go to the main menu and select Tools > Feature View or press
the Feature View button.
4. Append a new Process by right-clicking in the Process Tree and selecting Append
New.

Define the algorithm

5. In the field Algorithm, choose multiresolution segmentation from the drop-down
list.

Define the Image Object Domain

6. Keep the default settings for the Image Object Domain; the segmentation shall be
performed on the basis of the pixel Level with no specific condition.

Algorithm parameters:
7. Set the Scale parameter to 60.
8. Set the Level Name to Level1.
9. Keep the Image Layer weights and all remaining settings as they are per default.
10. Execute the Process.

Settings
Check

Figure 26: Edit Process dialog box with sample settings for multiresolution segmentation.


Result
Check
Examine the result and see that in homogeneous areas (e.g. water areas) the Objects
are bigger, and in more heterogeneous areas the Objects are smaller.
If you want to see the Image Object outlines, press the Show/Hide Outlines button.

Figure 27: Result of the Multiresolution Segmentation.

4.2.3 Effect of different Image Layer Weights


Information
In the Multiresolution parameter settings there is the possibility to weight individual
image layers according to their importance for the segmentation result.
In the following exercise the effect of weighting different image layers is examined.

Create a new Project


1. Switch back to the Load and Manage Data view by clicking in the View
Settings toolbar.

Action!
2. To create a new Project, right-click in the Workspace window and select Add
Project from the menu.
The Create Project dialog box opens.
3. Navigate to the folder QB_Maricopa.
4. Mark the following image files and click Open.

04MAR17_MS_Image_Layer_Weights.TIF

04MAR17_PAN_Image_Layer_Weights.TIF
Information about these files is listed in the Create Project dialog box.
5. Define the according alias for the image layers.
6. In the Create Project dialog box in the field Name enter a meaningful name for
the Project, e.g. Layer Weights.
7. Click OK at the bottom of the Create Project dialog box.

8. Switch back to the Develop Ruleset view by clicking in the View Settings
tool bar.

Insert Multiresolution Process using all image layers


1. Append a new Process in the Process Tree.


2. Choose multiresolution segmentation from the algorithm list.


3. Insert Level1 in the field Level Name.
4. Insert 30 in the field Scale parameter.
5. Keep all the layers weighted with 1.
6. Execute the Process.

Settings
Check

Figure 28: Edit Process dialog box with settings for Multiresolution Segmentation with all image layers
weighted.

Result
Check

Figure 29: An Image Object Level is created where all image layers have the same influence on the
Object shapes.

Append a Process to delete the existing Image Object Level

Information
As the panchromatic layer in this example has a higher resolution, in the next step a
segmentation will be performed on the basis of this layer only.

Action!
1. Append a new Process and choose the algorithm delete Image Object Level.
2. Specify the currently existing Image Object Level in the Level domain.
3. Execute the Process.
Level1 is deleted; no Image Object Level exists anymore.


Settings
Check

Figure 30: Edit Process dialog box with settings for deleting Image Object Level1.

Rule Set
Check

Figure 31: Process Tree with Segmentation Process and Process to delete the Image Object Level.

Insert Multiresolution Process using only the panchromatic image layer

Information
As the panchromatic layer in this example has a higher resolution, in the next step a
segmentation will be performed on the basis of this layer only.
In the Image Layer Weights dialog box you can give the individual layers a weighting
value and calculate the standard deviation of the layers.

Action!
4. Append a new Process and choose again multiresolution segmentation as
algorithm.
5. Choose the Level name used before from the drop-down at the very right corner
of the Level Name field.
6. In the field Scale parameter insert 30.
7. Click on the button next to the Image Layer weights field.
The Image Layer Weights dialog box opens.


Figure 32: Only the panchromatic layer is weighted to be used for the Object creation.

8. Select the layers blue, green, red and nir, insert the weighting 0 in the
New value text box and click Apply.
9. Keep the weighting of 1 for the panchromatic layer.
10. Confirm the Process settings with OK.
11. Execute the Process.

Settings
Check

Figure 33: Edit Process dialog box with settings for Multiresolution Segmentation with only
panchromatic layer weighted.


Compare the two different segmentation results

Result
Check

Figure 34: Left: Image with only the multispectral image layers displayed; Right: only the
panchromatic layer displayed.

Figure 35: Image Objects created with the same scale parameter but different layer weightings.
Left: Image Objects created on the basis of all image layers; Right: Image Objects created on the basis
of only the higher-resolution panchromatic layer.


4.2.4 Effect of different Homogeneity Criterion

Theory: Homogeneity Criterion

Figure 36: Schematic diagram of Composition of Homogeneity Criterion.

Shape Criterion:
In the Shape field you define to what percentage the shape of the Objects (in
terms of the parameters smoothness and compactness) contributes to the overall
homogeneity criterion, as opposed to the percentage of the color.
For most cases the color is most important for creating meaningful Objects. However, a
certain degree of shape homogeneity often improves the quality of Object extraction.
Increasing the weight for the shape criterion results in Objects optimized more for
spatial homogeneity. The shape criterion cannot have a value higher than 0.9, due to
the obvious fact that without the spectral information of the image, the resulting
Objects would not be related to the spectral information at all.

Compactness:
In addition to spectral information, the Object homogeneity is optimized with regard to
the Object shape. The shape criterion is composed of two parameters:

• The compactness criterion is used to optimize Image Objects with regard to
compactness.
• The smoothness criterion is used to optimize Image Objects with regard to smooth
borders.

Although they do not share an antagonistic relationship, they sum up to 1. When the
compactness value is set to 1, the shapes of the Objects will only be optimized for
compactness, whereas when the value is set to 0, Objects will be optimized for
smoothness.


Compare results of different Homogeneity Criterion settings
Information
In the following example, you will see the effect of changing the composition of the
homogeneity criteria.
The goal in this example is to create Objects that represent the river.
Decide which combination of parameters you think is best for distinguishing the river.
Depending on your application, the data you are using, and the types of Features in your
image, you can get very different results by changing the homogeneity criterion
parameters. You need to decide which combination of parameters gives you the best
representation of the Image Object primitives of your Features of interest.

Create a new Project

1. Switch back to the Load and Manage Data view by clicking in the View
Settings toolbar.

Action!
2. To create a new Project, right-click in the Workspace window and select Add
Project from the menu.
The Create Project dialog box opens.
3. Navigate to the folder SAR_Indonesia.
4. Mark the Indonesia_aug_sep.bmp file and click Open.
Information about these files is listed in the Create Project dialog box.
5. In the Create Project dialog box in the field Name enter a meaningful name for
the Project, e.g. Homogeneity Criterion.
6. Click OK at the bottom of the Create Project dialog box.

7. Switch back to the Develop Ruleset view by clicking in the View Settings
tool bar.
8. Delete the existing Rule Set.

Insert the Processes


The scale parameter will be kept but the shape and compactness criteria will be changed
according to the values in table 1.
1. Right-click in the Process Tree and select Append New.
In the Edit Process dialog insert the following settings:
2. Algorithm: choose multiresolution segmentation.
3. Keep the default settings of the Image Object Domain:
Algorithm parameters:
4. Insert a Level name.
5. Set the Scale Parameter to 30
6. Keep the image layer weights.
7. Set the Shape and Compactness values as listed in the first row of the table
below: 0 for Shape and 0 for Compactness.
8. Execute the Process for every setting.


9. Examine the outlines of the Image Objects with regard to how well they
represent the river.
10. After you have examined the result, delete the Image Object Level, change the
parameters in the Process according to the table below, and re-segment.
Shape  Compactness
0      0
0.9    0.5
0.6    0.5
0.6    0.1
0.6    0.9
0.2    0.9

Table 1: Parameters for different compositions of the Homogeneity Criterion.

11. After each segmentation, show the Object outlines and zoom in to compare
single Image Objects with the information at pixel Level.

Compare the different Object shapes

Result
Check

Figure: Segmentation results for the parameter combinations from Table 1: Shape=0;
Shape=0.9 with Compactness=0.5; Shape=0.6 with Compactness=0.5; Shape=0.6 with
Compactness=0.1; Shape=0.6 with Compactness=0.9; Shape=0.2 with Compactness=0.9.

Chapter 4.2 covered the following content:

• The Multiresolution Segmentation algorithm
• Segment with Multiresolution Segmentation
• Effect of different Image Layer Weights
• Effect of different Homogeneity Criterion


4.3 Creating multiple Image Object Levels


This Chapter covers the following content:

• Theory: Image Object Hierarchy
• Creating several Object Levels in one Project
• Navigate within the Image Object Hierarchy

4.3.1 Theory: Image Object Hierarchy


Information
Definiens Developer allows you to insert new Image Object Levels above, below and
between existing ones.
Since Definiens Developer uses a pair-wise merging algorithm, take the following into
consideration:

• Every segmentation uses the Image Objects of the next lower Image Object
Level as building blocks, which are subsequently merged into new segments.
• At the same time, the Object borders of the next higher Level are stringently
obeyed.

For this reason, it is not possible to build a Level containing larger Objects (i.e., using a
larger scale parameter) than its super-Objects. Consequently, it is also not possible to
build a Level containing Objects smaller than its sub-Objects.
The image data used in this chapter is a subset of an IKONOS scene (Data courtesy of
GeoEye).

Figure 37: Image Object Hierarchy


Segmentation Principles: Top-Down and Bottom-Up

Note

When creating the first Object Level, the lower limit is represented by the pixels, the
upper limit by the scene size.

4.3.2 Creating several Object Levels in one Project

Information
In this example several Levels will be created using the algorithm Multiresolution
Segmentation. Therefore, several subsequent Processes with different scale parameter
values have to be added in the Process Tree. They will be grouped under one Parent
Process.
For this exercise an already existing Project will be imported into the Workspace.

Create the first Level

Figure 38: The first Level is created on basis of the Pixel Level.

Import an existing Project


1. Right-click in the Workspace window and select Import existing Project from the
menu.

Action!


2. Browse to the folder


\01_Definiens_ESSENTIALS_TRAINING\Module1\IKONOS_Munich and select
the MultipleLevels.dpr file.
The Project is now added to the Workspace.
3. Open the Project by double-clicking on it.

Append a Parent Process


Use this Process as a Parent Process for the Child Processes to be added beneath it.
1. Right-click in the Process Tree and choose Append New.
2. Enter Create multiple Levels as the name for the Process.
3. Keep all other settings as default and confirm with OK.

Insert Process for creating the first Image Object Level


1. Right-click in the Process Tree and choose Insert Child from the list.

In the Edit Process dialog insert the following settings:


2. Algorithm: choose Multiresolution Segmentation.
3. Image Object domain: keep the default settings.
Algorithm Parameters:
4. As Level Name insert Level1.
5. For the Image Layer Weights set a value of 0 for all layers except the pan layer.
Only the panchromatic layer will be used for segmentation.
6. Use the default Scale Parameter of 10.
7. For the homogeneity criterion keep the default settings.
8. Click on the Execute button to perform the Segmentation.
Level1 is created with very small Objects.

Settings
Check

Figure 39: Process settings for creating the first Level.


Process
Check

Figure 40: Process Tree with Parent Process and Child Process added.

Create the second Level above


Information The already existing Level1 is the basis for the next Level, which will be created above it.

1. Right-click in the Process Tree on the already existing Process and choose
Append New from the list.

Action! 2. Algorithm: choose Multiresolution Segmentation.


Choose the existing Level 1 as domain:
3. Image Object domain: change the Level domain from pixel Level to Level1.

NOTE:

Selecting the correct Level in the Level domain:


In the drop down menu of the Level domain only the Level you are currently
displaying in the viewer is listed. To select a different one click on the Parameter
button and select the required Level.

Algorithm parameters:
4. In the Level Settings change use current (merge only) to create above
5. As Level Name insert Level2.
6. For the Image Layer Weights set a value of 0 for all layers except the pan layer.
Only the panchromatic layer will be used for segmentation.
7. Change the Scale Parameter to 35.
8. Keep the default settings for the homogeneity criterion.
9. Click on the Execute button to perform the Segmentation.
A second Object Level is created by merging Objects from Level 1 into larger Objects in
Level 2.


Settings
Check

Figure 41: Process settings for creating the second Level.

Process
Check

Figure 42: Process Tree with Process for creating the second Level added.

Create the third Level above


Information

The third Object Level will be created by merging Objects from Level 2 into much
larger Objects in Level3.

1. Right-click in the Process Tree on the already existing Process and choose
Append New from the list.
2. Algorithm: choose Multiresolution Segmentation. Action!
3. In the Image Object domain change the Level domain from pixel Level to Level2.
Algorithm Parameters:
4. In the Level Settings change use current (merge only) to create above.
5. As Level Name insert Level3.


6. For the Image Layer Weights set a value of 0 for all layers except the pan layer.
Only the panchromatic layer will be used for segmentation.
7. Change the Scale Parameter to 110.
8. Keep the default settings for the homogeneity criterion.
9. Click on the Execute button to perform the Segmentation.

Settings
Check

Figure 43: Process settings for creating the third Level.

Result
Check

Figure 44: Object outline view of all three Image Object Levels: Level1, Level2, Level3.

4.3.3 Navigate within the Image Object Hierarchy

Information Now that you have created three Levels in your Image Object hierarchy, you can
examine the Objects created in each Level using the navigation tools.

1. First view the outlines by clicking on the Show or Hide Outlines button .

Action! 2. To navigate through the Levels one by one, click for one Level down or for
one Level up on the Navigate toolbar.
3. To navigate to a specific Level, select the desired Level from the drop down list in
the Navigate toolbar.

Figure 45: Navigate toolbar with Level 1 selected for display in the viewer.


Chapter 4.3 covered the following content:

 Theory: Image Object Hierarchy


 Creating several Object Levels in one Project
 Navigate within the Image Object Hierarchy

Lesson 4 had the following content:

 Theory: Segmentation and Image Objects


 The Multiresolution Segmentation
 Creating multiple Image Object Levels


Lesson 5 Image Objects - the Information Carriers
This Lesson has the following content:

 Overview of the available Features for Classification


 How to use the Feature View

Information

No matter with which algorithm the Objects have been created, they all contain a lot of
information. This information, called Features, is the basis for formulating conditions
for Classification or further segmentation steps.
It is crucial to find the right Feature and the right threshold for conditions to be used in
processing.
This Lesson gives you an introduction to the Features available and how to visualize
Feature values using the Feature View.

5.1 Overview of the available Features for Classification

Information

Definiens offers a variety of Features that can be used for Classification. This lesson will
give you an overview of the available Features.

Figure 46: Features for Classification.


5.1.1 Object Features


Information Object Features are obtained by evaluating Image Objects themselves as well as their
embedding in the Image Object hierarchy.
Layer values
These are Features concerning the pixel channel values of an Image Object (spectral
Features). Examples: Mean values, Brightness, Standard deviation
Shape
With these Features, the shape of an Image Object can be described using the Object
itself or its sub-Objects. Examples: Area, Length, Elliptic Fit.
Texture:
Texture Features evaluate the texture of an Image Object based on its Sub-Objects.
Hierarchy
These Features provide information about an embedded Image Object within the Image
Object Hierarchy.
Thematic attributes
These are attributes of the thematic layer Objects. This type of Feature is only available if
such a thematic layer has been imported into the Project and was used for the
segmentation.
Customized Features
All Features created in the customized Feature dialog which do not refer to other
classes are displayed here.

5.1.2 Class-Related Features


Information Class-related Features refer to the Classification of other Image Objects that are taken
into account for the Classification of the Image Object in question.
Relations to neighbor Objects
These Features refer to existing class assignments of Image Objects on the same Level in
the Image Object Hierarchy. Examples: Relative border to neighbors, Distance to a
certain class, Number of neighbors.
Relations to sub-Objects
These Features refer to existing class assignments of Image Objects on a lower Level in
the Image Object Hierarchy. Example: Number of sub Objects of a specific class.
Relations to super-Objects
These Features refer to existing class assignments of Image Objects on a higher Level in
the Image Object Hierarchy.
Relations to Classification
These Features refer to the current Classification of an Object. Examples: the class an
Object is currently classified as (Class name), Membership to a class.
Customized Features
All Features created in the customized Feature dialog which do refer to other classes are
displayed here.


5.1.3 Scene Features


Information

Scene Features refer to information that is referenced on the scale of the whole scene or
image within the view.
Class-Related
These Features refer to number, area, mean and standard deviation of Objects within
certain classes.
Scene-Related
These Features refer to data specific to the whole scene, such as total number of pixels,
total number of Objects, pixel resolution, etc.

5.1.4 Process-related Features


Information

Process-related Features are Image Object dependent Features. They involve the
relationship of a Child Process Image Object to the Parent Process. They are used in
looping Processes.

5.1.5 Meta Data


Information

Metadata items can be used as Features in Rule Set development. To make
external metadata available to the Feature tree, you have to convert it within the data
import procedures to get an internal metadata definition.

5.1.6 Feature Variables


Feature Variables are variables that have a Feature assigned to them. In a Rule Set, a
Feature Variable can be used like the assigned Feature: it returns the same value as the
Feature to which it points. It is possible to create a Feature Variable without a Feature
assigned, but the calculated value would be invalid.


5.2 How to use the Feature View


This Chapter covers the following content:

 Open the Feature View tool


 Navigate in the Feature View
 Visualize the Feature value range
Introduction The most crucial part in Rule Set development is to find the optimal Features and values
for classifying Image Objects into one class or another.
The Feature View is the tool used for finding the optimal Features and to help
determine threshold values for Classification.
With the Feature View tool the values for all Objects are displayed as gray values, and
you also have the possibility to show value ranges in color.
Of course, you first need an existing Image Object Level.
In this chapter we will use the Feature View tool to find the Features and values
for classifying the water bodies in a Landsat image.

5.2.1 Open the Feature View tool

Preparation
1. From the folder LANDSAT_Dessau import the already existing Project
LANDSAT_Dessau_Segmented.dpr.

Action! An Image Object Level already exists.


Per default the Feature View tool is open when changing to the Develop Ruleset View.
2. If the Feature View tool is not open, you have two possibilities to open the Feature
View:

In the menu Tools choose Feature View

Select the Feature View button from the Tools toolbar.

5.2.2 Navigate in the Feature View


3. To expand and collapse the Feature groups click on the + or -.


5.2.3 Visualize the Feature value range


Information

The Feature we want to use in this lesson is the Brightness Feature. The
Brightness is listed in the category Mean. In the category Mean all mean values for the
image layers are listed, as well as the Features Brightness and Max. Difference.

Figure 47: The Feature View window with the Features of the category Mean expanded.

1. Browse to Object Features>Layer Values>Mean.


2. Double-click on the Feature Brightness.
Action!
3. Move your cursor over the Objects in the viewer and the exact Feature value for
the Object appears.
All Objects now appear in gray values representing the respective Brightness value:
Objects with low values are shown dark, Objects with high values bright.
Result
Check

Figure 48: Left: False color image with Object outlines; Right: Feature values for Brightness in a
gray range.

NOTE:

Per default the Feature Brightness is calculated using all image layers. But there is
the possibility to choose dedicated image layers. To do so, go to menu Classification
>Select Image Layers for Brightness and select those which should be used for
brightness calculation.
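As an aside, the relationship between the layer means and the Brightness Feature can be sketched in plain Python. This is an assumption-based model (Brightness is taken here as the mean of the Object's mean layer values over the selected layers, following the description above; the function names and example values are invented):

```python
# Sketch of the Brightness Feature (assumed definition: mean of the Object's
# mean layer values, computed over the selected image layers).

def mean_layer_value(pixels):
    return sum(pixels) / len(pixels)

def brightness(obj_layer_pixels, selected_layers=None):
    layers = selected_layers or list(obj_layer_pixels)
    means = [mean_layer_value(obj_layer_pixels[l]) for l in layers]
    return sum(means) / len(means)

# Hypothetical Object: pixel values per image layer
obj = {"red": [10, 20, 30], "green": [40, 50, 60], "nir": [70, 80, 90]}

assert brightness(obj) == 50.0                    # mean of 20, 50, 80
assert brightness(obj, ["red", "green"]) == 35.0  # only the selected layers
```

Restricting `selected_layers` corresponds to the Select Image Layers for Brightness setting mentioned in the note above.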


Visualize a Feature range in color


Information The water areas appear quite dark; this means they have low values for the Feature
Brightness.
But how do you find the correct threshold?
Besides moving the mouse over the Objects and guessing a threshold value, you can use
the colored range display to find the threshold value range.
1. Select the Feature Brightness in the Feature View and right-click.
2. From the menu select Update Range.
Action!
3. Click the check box at the bottom of the Feature View window.
Information When selecting the Update Range command, the value range for the Feature
Brightness is calculated internally. The basis for the calculation of this range is a
representative number of Image Objects, not all Objects. This can have the effect that
extremely high or low values are not within this range.
Switching on the check box activates the display of the Feature range: the
calculated minimum value (here 26.26) is displayed in the left box and the maximum
value (here 106.58) in the right box.
All Image Objects are colored in a smooth transition from blue (low values) to green
(high values).
If you select a new Feature, update the range for that Feature to display the
range of its values in the Feature View.
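The range display logic described above can be modeled with a small Python sketch (illustrative only, not the actual rendering code of the software; the linear blue-to-green interpolation is an assumption based on the description):

```python
# Sketch of the Feature-range display: values inside [lo, hi] get a smooth
# blue-to-green color, values outside the range are shown in gray.

def range_color(value, lo, hi):
    if value < lo or value > hi:
        return "gray"
    t = (value - lo) / (hi - lo)    # 0.0 at lo (blue) .. 1.0 at hi (green)
    return (0, int(round(255 * t)), int(round(255 * (1 - t))))  # (R, G, B)

# Using the Brightness range from the example above (26.26 .. 106.58):
assert range_color(20.0, 26.26, 106.58) == "gray"         # below the range
assert range_color(26.26, 26.26, 106.58) == (0, 0, 255)   # minimum: blue
assert range_color(106.58, 26.26, 106.58) == (0, 255, 0)  # maximum: green
```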

Result
Check

Figure 49: The Feature View with the updated Feature Brightness and switched on Feature range
checkbox.


Visualize a certain area of the whole feature range


Information

With the arrows beside the value boxes you can raise or lower the ends of the range.
All Objects which are not within this range will be displayed in gray again.

Figure 50: The Feature Brightness with Feature range from minimum value to 40 as it is displayed.

4. Isolate the low values (water areas) by clicking the down arrow to the right of the
maximum value. Continue until you reach the value 40 or type it in.

Action!
The upper end of the range is decreased. Only Objects within this new range are now
displayed in color.
Result
Check

Visualize a second Feature


Information

Some Objects which are not water bodies are also within this range. Therefore choose a
second Feature and examine whether water bodies have a more significant range there.
With the assumption that water bodies have a very low value in the near infrared,
choose the Feature Mean nir.
Now find the threshold for the Feature Mean nir using the Feature View and the
Feature range.

Figure 51: Final Feature range for the Mean of nir for the water bodies.

5. In the Feature View select the Feature Mean nir, right-click and select Update
Range.

Action! 6. Use the up arrow of the minimum value until all water body Objects are out of the
range.

NOTE:

Be sure to update the range of Feature values each time you select a different Feature.
Otherwise, the range of the previously selected Feature is used.

Everything displayed in color now has too high a value in nir to belong to a water body.

Result
Check

Chapter 5.2 covered the following content:

 Open the Feature View tool


 Navigate in the Feature View
 Visualize the Feature value range

Lesson 5 has the following content:

 Overview of the available Features for Classification


 How to use the Feature View


Lesson 6 Basic Classification


This Lesson has the following content:

 Create a Class
 Define the first Classification Process
 Define the second Classification Process
 Alternative Classification method: Insert conditions in the Class Description

Introduction

The most basic algorithm for Classification is the algorithm assign class. One fixed
threshold for a Feature is defined directly in the Process condition. All Objects selected
in the Image Object Domain which meet the condition are assigned to the given
class.
The most crucial part of Classification is to translate knowledge into Processes and
conditions.
In the chapter before we found that Brightness and the Mean of nir describe the class
Water.

We will use the basic Classification algorithm assign class. With this algorithm you can
set one condition as the basis for Classification.
Therefore we will need two Classification Processes: one to classify Objects with a
Brightness lower than 40 to the class Water, and a second which un-classifies Water
Objects with too high a value for near infrared.
Before we start defining the Processes and the conditions we first have to create a class
Water.
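The two Processes planned above can be modeled in plain Python (an illustrative sketch, not the Definiens Rule Set engine; the dictionary keys, the `assign_class` helper, and the example values are invented for illustration):

```python
# Sketch of the two assign class Processes: each Object is a dict of Feature
# values, and assign_class reclassifies the Objects that match the domain
# class (if given) and fulfill the condition.

def assign_class(objects, target, condition, domain_class=None):
    for obj in objects:
        in_domain = domain_class is None or obj["class"] == domain_class
        if in_domain and condition(obj):
            obj["class"] = target

objects = [
    {"class": "unclassified", "brightness": 28.0, "mean_nir": 16.0},   # water
    {"class": "unclassified", "brightness": 35.0, "mean_nir": 90.0},   # dark road
    {"class": "unclassified", "brightness": 80.0, "mean_nir": 120.0},  # bright land
]

# Process 1: Brightness < 40 -> Water
assign_class(objects, "Water", lambda o: o["brightness"] < 40)
# Process 2: Water Objects with Mean nir > 44 -> unclassified again
assign_class(objects, "unclassified", lambda o: o["mean_nir"] > 44,
             domain_class="Water")

assert [o["class"] for o in objects] == ["Water", "unclassified", "unclassified"]
```

The dark road Object is first misclassified as Water by the Brightness condition and then removed again by the near-infrared condition, which is exactly the behavior of the two Processes built in this Lesson.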

6.1 Create a Class


A class describes the semantic meaning of Image Objects. Introduction

All Classes are stored in the so-called Class Hierarchy. To create a class you have two
possibilities:

For some algorithms you can create a class directly in the Process (e.g. assign
class)

Create a class directly in the Class Hierarchy.


In the following example we will create the class in the Class Hierarchy.

You will learn more about class descriptions in the subsequent chapters. For the
Classification using the algorithm assign class, only the name and the color need to be set.
1. Right-click in the Class Hierarchy window and select Insert Class.
The Class Description dialog box opens.
Action!
2. In the field Name enter Water.
3. From the drop-down list next to the field choose an appropriate color, e.g. blue.
4. Confirm the settings with OK.

Result
Check

Figure 52: The class is inserted in the Class Hierarchy.

6.2 Define the first Classification Process
This Chapter covers the following content:

 Preparation: Prepare Process structure


 Append the classification Process
 Define the condition for Classification
 Define the Class and execute the Process
 Review the Classification result

6.2.1 Preparation: Prepare Process structure


Information In the currently open Project the segmentation Process sequence has already been
written.
A new Parent Process named Basic Classification has to be added in the same hierarchy
Level as the multiresolution segmentation Process. The subsequent Processes for


classifying the water bodies are then Child Processes to the Basic Classification
Parent Process.
1. Open the Process Tree and examine the existing Process structure.
2. Select the multiresolution segmentation Process, right-click on it and select
Append New from the menu. Action!
3. Name the Process Basic Classification and confirm with OK.
Now the Parent Process Basic Classification is added in the Process Tree in the same
hierarchical Level as the multiresolution segmentation.
Rule Set
Check

Figure 53: The Administrative Process Classification is added.

6.2.2 Append the classification Process


Information

In this lesson the Feature Brightness is used to classify Image Objects to the class Water.
The condition for Classification is a value for the Feature Brightness lower than 40.
In the next step the first Classification Process is inserted as a Child Process.
1. Right-click on the Parent Process Basic Classification.
Action!
2. Select from the menu Insert Child.
The Edit Process window opens.
1. Algorithm: open the drop down list and select assign class.
Now the according settings appear in the right window of the Edit Process dialog.
2. Keep all Objects in the Object domain.

6.2.3 Define the condition for Classification


Introduction

The Feature for Classification is set via the Select Single Feature dialog box. The
threshold and operator for the condition are defined via the Edit threshold condition
dialog box.
You can choose between the operators:

< smaller than, <= smaller or equal than

= equal, <> not equal

>= higher or equal than, > higher than


1. Click on the no condition button.
The Select Single Feature dialog box opens.
Action!
2. Browse to Object Features>Layer Values>Mean.
3. Select Brightness by double-clicking on it.
The Edit threshold condition dialog box opens.

4. Choose the smaller than operator and insert the value 40 as shown in the figure
below.
5. Confirm with OK.

Settings
Check

Figure 54: Threshold conditions for the first Classification Process.

The condition is also displayed in the Edit Process window.

6.2.4 Define the Class and execute the Process


Information The next step is now to define the class the Image Objects should be assigned to. This is
defined in the right window of the Edit Process dialog, in the Algorithm Parameters
window.
Per default the class unclassified is selected.
1. Click on unclassified and click on the drop down arrow.
2. Select Water from the list.
Action!
The class is now selected.
3. Confirm the settings with OK.
4. There are two possibilities to execute a Process once it is added to the Process Tree:

Select the Process in the Process Tree and right-click on it, then select
Execute Process.

Select the Process in the Process Tree and press F5.

Settings
Check

Figure 55: Process settings to assign all Objects of Level1 which have a value lower than 40 for
the Feature Brightness to the class Water.


Rule Set
Check

Figure 56: Process Tree with first Process for water Classification added.

6.2.5 Review the Classification result


Information

All Objects which are classified are displayed in the appropriate class color. If you hover
your mouse over a classified Object, a tool tip pops up indicating the class to which the
Object belongs.
Objects that are unclassified appear transparent. If you hover your mouse over an
unclassified Object, a tool tip pops up indicating that no Classification has been applied
to this Object.
To visualize your Classification results there are several possibilities as learned in chapter
1.3.2 Display Classification results.

Figure 57: View Settings toolbar.

View the image or the Classification result by selecting either or .

Display the image/Classification in a transparent view or in the pixel mean


view by switching on and off .

Display the image/Classification result as outlines by switching on or off .

Result
Check

Figure 58: Classification result for Water, with transparent view and only the nir layer displayed.


Chapter 6.2 covered the following content:

 Preparation: Prepare Process structure


 Append the classification Process
 Define the condition for Classification
 Define the Class and execute the Process
 Review the Classification result


6.3 Define the second Classification Process
This Chapter covers the following content:

 Define the correct Image Object Domain


 Define the condition for Classification
 Define the target Class
 Review the Classification

Information

The first Classification Process is inserted and all Objects with Brightness lower than 40
are classified as Water.
But there are some misclassifications. To get rid of the misclassified Objects, we will
insert a second Process with an additional condition.
We will assign all Water Objects with a Mean of nir larger than 44 to unclassified again.
To express that only the Water Objects shall be treated, the class now has to be selected
in the Image Object Domain of the Process.


6.3.1 Define the correct Image Object Domain

Introduction The Image Object Domain is an essential concept in Definiens software. It enables you
to Process only specific Objects. In the Image Object Domain, you can define:

In which Level the Processing shall take place

Objects of which class shall be taken for Processing

Which condition the Objects must fulfill to be Processed.

Figure 59: Schema of the Image Object Domain concept.
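The three filter criteria of the Image Object Domain can be sketched in plain Python (illustrative only; the function and dictionary names are invented for illustration):

```python
# Sketch of the Image Object Domain: select only the Objects on a given
# Level, of a given class, that fulfill a condition.

def image_object_domain(objects, level=None, cls=None, condition=None):
    for obj in objects:
        if level is not None and obj["level"] != level:
            continue
        if cls is not None and obj["class"] != cls:
            continue
        if condition is not None and not condition(obj):
            continue
        yield obj

objects = [
    {"level": "Level1", "class": "Water", "mean_nir": 90.0},         # misclassified
    {"level": "Level1", "class": "Water", "mean_nir": 16.0},         # real water
    {"level": "Level1", "class": "unclassified", "mean_nir": 90.0},  # not in domain
]

# Domain of the correction Process: Level1, class Water, Mean nir > 44
selected = list(image_object_domain(objects, "Level1", "Water",
                                    lambda o: o["mean_nir"] > 44))
assert selected == [objects[0]]   # only the misclassified Water Object
```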

In the following example we want to correct misclassified Water Objects in Level1


which have a larger Mean nir than 44. This will be specified in the Image Object Domain
of the Process.

Preparation
1. Append a new Process in the same hierarchy Level as the Classification Process
before.

Action! 2. Choose again assign class as algorithm.

Define the Image Object Domain


3. Check that Level1 is set. Per default, the Level currently displayed in the viewer is set.
4. Click on the all Objects button and select Water from the Edit Classification
filter list.

Settings
Check

Figure 60: Water is selected as Class of the Image Object Domain.


6.3.2 Define the condition for Classification


Information

The condition for processing is also part of the Image Object Domain area in the Edit
Process dialog box.

Action!

1. Click on the no condition button and choose the Feature Mean nir.
2. Choose the operator > larger than.
3. Insert the value 44.

6.3.3 Define the target Class


1. In the field Active Class keep unclassified.
2. Execute the Process.

Rule Set
Check

Figure 61: Process Tree with both Classification Processes added.

6.3.4 Review the Classification

Result
Check

Figure 62: After the first Classification step there were misclassifications; the second Classification
Process eliminates the wrongly classified Objects.

Chapter 6.3 covered the following content:

 Define the correct Image Object Domain


 Define the condition for Classification
 Define the target Class
 Review the Classification


6.4 Alternative Classification method: Insert conditions in the Class Description
This Chapter covers the following content:

 Insert both conditions in the Class Description of Water


 Insert a Classification Process
 Evaluate the Membership values of Objects
Introduction In the chapters before we created two separate Processes to describe two conditions an
Object must fulfill to be classified as Water.

Figure 63: Using the algorithm assign class two Processes were needed to come to a correct
Classification.

Another method to achieve the same result is to insert both conditions in the Class
Description of the class itself and then add only one Process pointing to the content of
the Class Description.
It is important to understand that whenever you want to classify using the information
contained in the Class Description, you have to use the algorithm Classification.
Using Class Descriptions you can retrace why an Object was classified to a certain class.
This can be seen in the Image Object Information window and in the Feature View using
the Relations to Classification Features.

Figure 64: The Objects carry the Classification information, if the conditions are inserted in the
Class Description.

Other advantages of using Class Descriptions as containers for conditions are:

That the conditions can be combined using different operators like and/or.

That the conditions of the classes can be inherited by Child classes, if they are
arranged in the Inheritance Hierarchy.

That in Class Descriptions you can use so-called Membership Functions describing
smooth transitions between classes in a fuzzy way.
How to use these additional advantages is described in later chapters.


6.4.1 Insert both conditions in the Class Description of Water

Introduction

The conditions are all sorted under an operator. Per default the and (min) operator is
defined. The and (min) operator indicates that the minimum value of the inserted
conditions wins. In other words, if one of the conditions is 0, the whole Classification
value is 0 and the Object will not be classified.
1. Double-click on the Class in the Class Hierarchy to open it.
2. Double-click the and(min) operator and select Insert New Expression.
Alternatively right-click the and(min) operator. Action!
The Insert Expression window opens.
3. Browse to the Layer Values>Mean>Brightness.
4. Right-click on it and choose Insert Threshold....
The Edit Threshold Conditions window opens.
5. Insert the threshold value 40 and choose an operator <= smaller or equal than.
6. Confirm with OK.
The first condition is inserted in the Class Description.
7. Browse to the Layer Values>Mean>nir.
8. Right-click on it and choose Insert Threshold....
9. Insert the threshold value 44 and choose an operator <= smaller or equal than.
10. Confirm with OK.
The second condition is inserted in the Class Description.
11. Close the Insert Expression window.
12. Click on the OK button in the Class Description window to confirm.

Settings
Check

Figure 65: Class Description for the Class Water containing two threshold conditions.


6.4.2 Insert a Classification Process

Preparation: Delete the existing Classification and existing Processes
Information Deleting an existing Classification manually is sometimes necessary during Rule Set
development. In our case we have already classified and now want all Objects to be
unclassified again.
1. In the Class Hierarchy window right-click on the class Water
2. Select Delete Classification.
Action! All Objects are now unclassified again.
3. In the Process Tree select the first Process and press delete on your keyboard.
4. Repeat for the second Process.

Insert the Process to classify according to the conditions in the Class Description
1. Right-click on the Parent Process and select Insert Child.
2. As algorithm choose Classification.
3. Keep all Objects and no condition. The conditions for Classification are set in the
Class Description.
4. Click at the very right of the field Active classes to choose the class Water.
5. Keep all other default settings and confirm with OK.
6. Execute the Process.

Rule Set
Check

Figure 66: Process Tree with Process for Classification according to the Class Description added.

6.4.3 Evaluate the Membership values of Objects
Information As the conditions were inserted in the Class Description, the information about the
membership to the class, as well as the Feature values of the conditions, is stored in the
Object. You can check this using the Image Object Information window.
1. Select the class Water in the Class Hierarchy.
2. In the Image Object Information window change to the tab Class Evaluation
Action! 3. Select several Water Objects and unclassified Objects. The values will change.


Result
Check

Figure 67: The selected Object has a total membership value of 1. Both conditions have been
fulfilled. For Mean nir the Object has the value 16.08, for Brightness 28.28.

Chapter 6.4 covered the following content:

 Insert both conditions in the Class Description of Water


 Insert a Classification Process
 Evaluate the Membership values of Objects

Lesson 6 had the following content:

 Create a Class
 Define the first Classification Process
 Define the second Classification Process
 Alternative Classification method: Insert conditions in the Class Description


Lesson 7 Exercise: Recap Classification and Segmentation
This Lesson has the following content:

 Create a Class
 Define the first Classification Process
 Define the second Classification Process
 Alternative Classification method: Insert conditions in the Class Description

Introduction

The aim of this exercise is that you recap the lessons learned on your own. The trainer
will assist you when questions occur. At the end, the different results of the group are
discussed together.
The image data used in this and the following lessons is a subset of a Quickbird scene
(Data courtesy of Digital Globe).
Task: Classify water and roads in a Quickbird subset.
During this exercise, you will realize that water and roads have quite similar spectral
values in this image. For this exercise, try to classify water Objects with one Process
without any misclassifications. Simply said, it is better to miss some water areas than to
classify some roads as water. We will solve misclassified roads later using context
information.
Guide Line:

Load a subset of a Quickbird scene, multispectral and panchromatic layers

Define No Data values

Insert an overall Parent Process

Examine which Image Layers contain significant information for the class
water.


With a Child Process, create Objects using Multiresolution Segmentation and
weight only the image layers with information about water

 Find Features and thresholds representing the class Water

Create the class Water

Add another Parent Process to carry all following Classification Processes

Insert a Child Process to classify Water Objects

Repeat for the class Road

7.1 Create the Project and assign No Data values
Information It is important for the upcoming lessons that the Project is created correctly. In this case,
Quickbird data with different resolutions in multispectral and panchromatic is used. This
has the effect that the Project has a No Data area at its border. To avoid
misclassifications, we will define those areas with 0 values as so-called No Data areas.
This means that in these areas no Objects will be created.

Figure 68: Left: higher-resolution panchromatic layer; Right: coarser-resolution multispectral layer.

Action!
1. Switch back to the Load and Manage Data View, right-click in the Workspace
window and select Add Project.
2. Browse to the folder
\01_Definiens_ESSENTIALS_TRAINING\Module1\QB_Maricopa and select
04mar_multi.img and 04mar_pan.img.
3. Give the Project the name QB_Maricopa_Exercise and define the aliases
 blue, green, red, nir for the multispectral layers
 pan for the panchromatic layer

Define the No Data values


4. Click on the No Data button.
5. In the field Global No Data Value switch on the check-box and enter the value 0.
This indicates that an area is defined as No Data if any of the layers contains a 0 value
pixel.
6. Confirm the settings with OK.
7. Confirm the settings for creating the Project with OK.
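The Global No Data rule of step 5 can be pictured outside the software with a small sketch. The arrays below are invented toy values; Definiens applies this logic internally when creating the Project.

```python
import numpy as np

# Hypothetical stand-ins for two co-registered image layers.
pan = np.array([[10, 0, 12],
                [14, 15, 0]])
nir = np.array([[20, 21, 0],
                [0, 25, 26]])

# Global No Data value 0: a pixel counts as No Data
# if ANY of the layers holds a 0 value there.
no_data = (pan == 0) | (nir == 0)

# No Objects are created where the mask is True.
print(no_data)
```

Only the pixel at position (0, 0) of the second row and column survives in both layers; every pixel where at least one layer is 0 is masked out.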


Settings Check

Figure 69: Settings for the Create Project and the Assign No Data Values dialog box.

7.2 Segmentation
This Chapter covers the following content:

 Examine which Image Layers contain significant information for the class water
 Set up the Segmentation Process

Information
The task of this lesson is to find water and road Objects. The first step is to define which
segmentation algorithm to use and which layers contain the relevant information.
As it is a rather simple task to identify two very obvious classes in this subset,
multiresolution segmentation will do a good job. It will produce quite realistic Objects,
outlining the homogeneous water and road areas with larger Objects. The next crucial
step is to select those image layers which contain the most relevant information
for segmentation.

7.2.1 Examine which Image Layers contain significant information for the class water

Action!
1. Switch to Single Layer Gray Scale.
2. Browse through the individual layers.


Result Check

Figure 70: Left Layer nir; Right: Layer pan.

In the nir and the pan layer, the water and the roads are clearly visible; therefore
these two layers should be used for segmentation. Using the pan layer has also the
advantage that its higher resolution yields fine outlines.

7.2.2 Set up the Segmentation Process

Preparation:

Action!
1. Enter an overall Parent Process QB_Maricopa.
2. Enter a Child Process and name it Segmentation.
3. Enter again a Child Process and choose as algorithm multiresolution segmentation.
4. As Level Name enter Level1.
5. Weight only the pan and the nir layer with 1.

Find a scale parameter

Information
The scale parameter influences the size of Objects. Try to find a scale parameter which
produces Objects that are neither too small nor too coarse for the water areas and the
road areas.

Action!
6. Enter different scale parameters and evaluate the different results. Finally decide
which one to use.

Result Check

Figure 71: Left: scale parameter 300: a too coarse scale parameter results in mixed Objects; Right:
scale parameter 150: the Objects are smaller and reproduce the layer values more accurately.

Composition of the homogeneity criterion

Information
Water areas are rather compact, and the roads should not be too fractal either. A
higher weighting of compactness makes sense here. The overall influence of
shape should not be too high, as the spectral significance of water and road Objects is
quite clear.


Action!
7. Enter different compactness values and evaluate the different results. Finally decide
which one to use.

Result Check

Figure 72: Left: Shape 0.2, Compactness 0.2; Right; Shape 0.2, Compactness 0.8.

Action!
8. Enter the scale parameter, shape and compactness you decided to be optimal
and execute the Process.
Chapter 7.2 covered the following content:

 Examine which Image Layers contain significant information for the class water
 Set up the Segmentation Process

7.3 Find Features and classify Water and Road

Information
Water and road Objects appear quite dark and they have a certain size compared to
other dark Objects like shadows. Good Features to describe water spectrally are
either the mean of pan or its standard deviation; the same is true for nir. As an additional
condition to distinguish small shadow Objects from water, you can use the Area Feature
in the category Shape. You can add two Processes using the assign class algorithm, or
you can insert both conditions in the Class Description of the class Water.
Action!
1. Evaluate with the Feature View the mean of pan and nir and find a threshold for
water and for roads.
2. Evaluate with the Feature View the minimum area water or road Objects must have.
3. Create the classes Water and Road.
4. Define your conditions either directly in the Process using the assign class algorithm or
insert them in the Class Description. Take care about the order of the Processes and
the Image Object Domain.
5. Execute the Classification Processes and adapt your threshold values if necessary.
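The classification logic described above — a spectral threshold combined with a minimum Object area — can be sketched outside the software. All arrays, labels and thresholds below are invented toy values, not the actual thresholds you should find in the exercise.

```python
import numpy as np

# Toy object map: labels 1..3 stand in for segmented Image Objects.
labels = np.array([[1, 1, 2],
                   [1, 3, 2],
                   [3, 3, 2]])
# Toy nir layer: water is dark (low values).
nir = np.array([[5, 5, 40],
                [5, 90, 42],
                [88, 92, 41]])

classes = {}
for obj in (1, 2, 3):
    mask = labels == obj
    mean_nir = nir[mask].mean()   # spectral condition: mean of nir
    area = mask.sum()             # shape condition: area in pixels
    # Hypothetical thresholds: dark in nir AND large enough -> Water.
    classes[obj] = "Water" if mean_nir < 20 and area >= 3 else "unclassified"

print(classes)
```

Object 1 is both dark and large enough, so only it receives the class Water; the area condition is what keeps small dark shadow Objects out.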

Rule Set Check

Figure 73: Example solution: Water is classified using the assign class algorithm.


Figure 74: Example solution: Road is classified using the Classification algorithm and the content
of the Class Description.

Result Check

Figure 75: Water and road are classified in this subset. Some Road Objects are still misclassified;
this will be corrected in the next lesson.

Lesson 7 had the following content:

 Create a Class
 Define the first Classification Process
 Define the second Classification Process
 Alternative Classification method: Insert conditions in the Class Description

Lesson 8 Classify Using Context Information: Relative border to class
This Lesson has the following content:

 Create the Class-Related Feature Relative border to Water Body
 Find the appropriate threshold
 Add, edit and execute the Process for classifying with the Class-Related Feature

Introduction
In this lesson we will use context information to correct the Classification of Road
Objects which are actually Water. We will simply formulate the rule that Road Objects
adjacent to Water Objects shall also be classified as Water Objects.

Context information is expressed in so-called Class-Related Features. These Features
express relationships of Objects within one Level, or between super- and sub-Objects.
Relations to neighbors within a Level can be: existence or number of neighbors,
common border to a class, relative area of a class, spatial distances, or the spectral
difference to a class.

The Feature we will use in this lesson is Relative border to. It expresses the
amount of border to a certain class compared to the overall border of the Object.

8.1 Create the Class-Related Feature Relative border to Water Body
Information
Some misclassified Road Objects have a common border with the Water Objects. The
Feature Border to or Relative border to is useful for re-classifying such Objects.

Action!
1. In the Feature View browse to Class-Related Features>Relations to neighbor
Objects>Relative border to.
By default this Feature list is empty.
2. To create the Feature for the class Water do one of the following:
 Right-click on the entry Create new Rel. border to and select Create.
 Double-click on the entry Create new Rel. border to.

NOTE:
To create the Feature for all classes at once, right-click and select Create all.

3. Select Water from the Value drop-down list and confirm with OK.
The Feature is now available in the Feature View.

Result Check

Figure 76: The Feature Rel. border to Water is created in the Feature View.

8.2 Find the appropriate threshold


Information
High values mean that the Object has a long border to Water Objects, low values
that the Object has a short border to Water Objects.

Action!
4. Update the range of the Feature, switch on the check box to activate the colored
display of the Feature range, and double-click to display the Feature values in the
Viewer window.
5. Now find an appropriate threshold, e.g. greater than 0.2 (20% of the border is
shared with water).

Result Check

Figure 77: Classification, Classification in outlines view, compared with the Feature View for Relative
Border to Water Objects

90
Classify Using Context Information: Relative border to class

8.3 Add, edit and execute the Process for classifying with the Class-Related Feature

Information
Now add a Process that expresses: assign all Road Objects with a Rel. border to Water
Objects greater than 0.2 to the class Water.
Action!
1. Append a new Process at the end of the Process list.
2. Algorithm: assign class.
3. Level domain: Level1.
4. Class Filter: Road.
5. Condition: Relative border to Water Body greater than 0.2.
6. Active Class: Water.
7. Execute the Process.

Settings Check

Figure 78: Process settings for classifying Road Objects with border to Water Objects.

Rule Set Check

Figure 79: Content of QB_Maricopa Process up to this lesson.


Result Check

Figure 80: Left: previous Classification result; Right: current Classification result. All misclassified
Road Objects now belong to the class Water Body.

Lesson 8 had the following content:

 Create the Class-Related Feature Relative border to Water Body


 Find the appropriate threshold
 Add, edit and execute the Process for classifying with the Class-Related Feature


Lesson 9 Merge Objects


This Lesson has the following content:

 Overview over algorithms to reshape Objects


 Merge Water, Road and all unclassified Objects

Introduction
The final goal of an Image Analysis is to have the outlines of the Objects of interest.
Within a more complex Rule Set, there are always several segmentation and
Classification steps to arrive at the final Objects. In our first simple example we will now
merge all adjacent Objects of the same class.
Figure 81: Objects before and after the merging.

9.1 Overview over algorithms to reshape Objects

Introduction
Definiens offers several algorithms to merge Objects; they are all grouped in the
category Reshaping.
The simplest one is the algorithm merge region. With this algorithm, Objects are
merged without any condition. The other reshaping algorithms (Grow Region, Image
Object Fusion) allow formulating conditions for merging or use special growing
algorithms (Morphology, Watershed Transformation). A very special algorithm is the
remove Objects algorithm; it merges an Object with the neighboring Object which has
the largest common border.


9.2 Merge Water, Road and all unclassified Objects

Information
The algorithm merge region allows the user to combine Image Objects. Usually, Objects
are merged after a Classification was applied. This way, multiple Objects of identical
Classification are merged into large Objects. With the Image Object Domain you define
Objects of which class are merged. In the following example we will create one separate
merging Process for every class and for the unclassified Objects.
Action!
1. Append a new Parent Process and enter the name Merge.
2. Insert a Child Process and select as algorithm merge region.
3. Choose Water in the class filter.
4. Confirm the settings with OK.
5. Copy and paste the Process in the Process Tree and simply change the class in the
Image Object Domain to merge Road and unclassified.
6. Execute all three Processes.
All Water, Road and unclassified Objects are merged.
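A per-class merge region can be pictured as connected-component labelling: after the merge, each 4-connected group of same-class pixels forms a single Object. The class map below is a toy example with hypothetical class codes, not data from the exercise.

```python
import numpy as np

# Toy class map after classification: 1 = Water, 2 = Road, 0 = unclassified.
class_map = np.array([[1, 1, 0],
                      [2, 1, 0],
                      [2, 2, 1]])

def count_merged_objects(class_map, cls):
    """Number of Objects of one class after 'merge region':
    each 4-connected component of the class becomes one Object."""
    rows, cols = class_map.shape
    seen = set()
    count = 0
    for r in range(rows):
        for c in range(cols):
            if class_map[r, c] != cls or (r, c) in seen:
                continue
            count += 1                     # found a new merged Object
            stack = [(r, c)]
            while stack:                   # flood-fill its whole component
                y, x = stack.pop()
                if (y, x) in seen:
                    continue
                seen.add((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and class_map[ny, nx] == cls):
                        stack.append((ny, nx))
    return count

print({cls: count_merged_objects(class_map, cls) for cls in (1, 2, 0)})
```

The isolated Water pixel in the lower-right corner stays a separate Object, which is why the Water class ends up with two merged Objects while Road and unclassified each collapse into one.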

Rule Set Check

Figure 82: Process Tree with merging Processes added.

Result Check

Figure 83: Objects before and after the merging.

Lesson 9 had the following content:

 Overview over algorithms to reshape Objects


 Merge Water, Road and all unclassified Objects


Lesson 10 Export Results


This Lesson has the following content:

 Export the current view


 Export a Project statistic
 Export a vector layer (.shp)

10.1 Export the current view


Information
To export a screenshot of the result, use the algorithm export current view. You can set
the view settings you want the screenshot to be exported with, e.g. the Classification
outlines or the transparent view.
Action!
1. Set the viewing modes you want the exported screenshot to appear in, e.g.
Classification outlines.
2. Append a new Parent Process for Export.
3. Insert a Child Process and select as algorithm export current view.
4. Keep the domain setting no Image Object. This algorithm simply generates a
screenshot and does not deal with any Objects.
5. In the field Export item name, give the export file a meaningful name, e.g.
according to the view settings: Classification outlines.
6. In the field Save current View Settings, click the button next to Click to capture
view settings.
A message confirms that the view settings were captured.
7. Define the image format to be exported by clicking in the drop-down list in the
field Default Export Driver.
8. In the field Scale, keep the default resolution of the original scale.
9. Execute the Process.
10. Browse to \01_Definiens_ESSENTIALS_TRAINING\Module1\QB_Maricopa. Open
the file to check the exported result.


Note:
In the field Desktop Export Folder, the default Scene Dir means that the image is stored at
the same place the original images come from. There is also the possibility to point
to a selected place. If you run the Rule Set in batch processing, the default path
changes to the Workspace folder. This can of course be changed too; it must be edited
in the Analysis dialog box.

Settings Check

Figure 84: Process settings to export a screenshot of the Classification Result.

Rule Set Check

Figure 85: Process Tree with Process added to export the current view.

Result Check

Figure 86: A screenshot of the Classification outlines is exported.


10.2 Export a Project statistic

Information
With the algorithm Export Project statistics you can export scene Features, like the
overall number of Water Objects, to a csv file. Such scene Features can be found in the
Feature View under the category Scene Features.

Action!
1. Append a new Process and select Export Project statistics as algorithm.
2. In the field Export item name enter Number of Water Objects.
3. In the field Features, click the button and browse to Scene Features>Class-
Related>Number of classified Objects; create and select Number of Water Objects.
4. Keep all other default settings.
5. Execute the Process.
6. Browse to \01_Definiens_ESSENTIALS_TRAINING\Module1\QB_Maricopa. Open
the file to check the exported result.

Settings Check

Figure 87: Process settings to export the number of Water Objects.

Rule Set Check

Figure 88: Process Tree with export Process added.

Result Check
Figure 89: The content of the exported csv file.


10.3 Export a vector layer (.shp)


This Chapter covers the following content:

 Define the name and vector type


 Add Features and configure the attribute table
Information
With the algorithm export vector layer you can export the Image Objects into a shape
file and also attach attributes, i.e. Features, to the Objects. You can choose
between points, lines and polygons to be exported. In our exercise we will export
polygons.

Note:
You also have the possibility to influence the vectorization by defining a smoothing
of the polygons. If you want to smooth the outlines, you have to additionally add the
algorithm Set Rule Set Options before the actual export algorithm.

10.3.1 Define the name and vector type

Action!
1. Append a new Process.
2. Select the algorithm export vector layers.
3. For the Image Object Domain keep the default settings, but make sure that the
correct Image Object Level is selected.

Set the Name and Export mode

4. Enter a name in the Export item name field, like ExportVectorLayer or
Classification.


Note:
If you run the analysis in batch mode, the scene name will automatically be added to
the defined name. This separates the shape files from each other and keeps the
reference to the source file name.

Settings Check

Figure 90: Parameter settings for exporting a Classification to a vector layer

Choose the Shape Type and Export Type

Action!
5. In the field Shape Type select Polygons from the drop-down list. The other
available types are points and lines.
6. In the field Export Type keep Raster. This indicates that no smoothing of the
vectors is done.

Note:
If you choose Smoothed, the vectors will be generalized according to the settings
made via the algorithm Set Rule Set Options.

10.3.2 Add Features and configure the attribute table

Information
In the field Attribute table, you can choose and configure the Features to be
exported with the shape file. We will add the Feature Class name to ensure that the
Classification of the Object is part of the shape file. The other Feature we will choose is
the area of the Objects.

Action!
1. Click in the field Attribute table and click on the button at the right side.
The Select Multiple Features dialog box opens.
2. Double-click on the Features Area and Class name to move them to the
Selected window.
3. Confirm with OK.


The Edit attribute table columns dialog box now opens.

The Edit attribute table dialog box

Information
In this dialog box you can
 define whether a column is written as string, integer or double
 define the length and the number of decimal places (scale) of the output, if needed
 assign aliases to default Feature names

The column type

Information
By default this field is set to <auto>, and we recommend keeping that if possible. All
Features are assigned to one of the types: double, integer or string. This means, for
example, that for Features exporting text, like the Feature Class name, the type is set
automatically to string. If you want to export string values from a thematic layer, you
have to define string separately. Double allows exporting decimal values.

The Precision/Length and the Scale of the column

Information
In the field Precision/Length you define the number of digits that can be stored
in a number field. For example, the number 12.34 has a precision of 4.
For a string column it specifies the length of the text field in characters.
In the field Scale you define the number of digits to the right of the decimal point
in a field of type double. For example, the number 12.34 has a scale of 2.
Scale is only used for Double field types.
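The precision/scale rules can be made concrete with a small, hypothetical formatting helper. `format_field` is invented for illustration and is not part of Definiens or the dbf format; it only mimics the "total digits" vs. "digits after the decimal point" bookkeeping described above.

```python
def format_field(value, precision, scale):
    """Format a double value for a dbf-style column.

    precision = maximum total number of digits,
    scale     = digits to the right of the decimal point."""
    text = f"{value:.{scale}f}"                    # round to 'scale' decimals
    digits = sum(ch.isdigit() for ch in text)      # count digits only
    if digits > precision:
        raise ValueError("value does not fit the column definition")
    return text

print(format_field(12.34, precision=4, scale=2))   # '12.34'
```

With precision 4 and scale 2, the number 12.34 fits exactly; a value with more digits than the declared precision would be rejected instead of silently truncated.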

Figure 91: The Edit Attribute Table Columns dialog box with settings to export.

Add or Remove Features

Information
With the Add/Remove button at the lower left, you can add new Features to the list
or remove existing ones.


Note:

It is a known bug that aliases get lost when loading new Features!

Change the default name of the Feature

Action!
1. Expand the Feature Class_Name.
2. In the field Name you can define a column name other than the default Feature
name. Enter Class as the new name.

Note:

The name length is limited to 10 characters, due to dbf restrictions!

3. Confirm the settings with OK as well as the Process.


4. Execute the Process.
5. Browse to \01_Definiens_ESSENTIALS_TRAINING\Module1\QB_Maricopa. Open
the file to check the exported shape file.

Result Check

Figure 92: Content of the exported .dbf file, which belongs to the .shp file.

Chapter 10.3 covered the following content:

 Define the name and vector type


 Add Features and configure the attribute table


Lesson 10 covered the following content:

 Export the current view


 Export a Project statistic
 Export a vector layer (.shp)


Lesson 11 Sample Based Classification with Nearest Neighbor Classifier
This Lesson has the following content:

 Nearest Neighbor (NN) theory


 Nearest Neighbor configurations
 Declare Sample Objects for the NN Classification, (manual step!)
 Add, edit and execute a Process to classify
 Refine the Classification

Introduction
The Nearest Neighbor (NN) classifier is the Definiens solution for a quick and simple
Classification of Image Objects based on given sample Image Objects within a defined
Feature space.
After a representative set of sample Objects has been declared for each class, each
Image Object is assigned to the class of the nearest sample Object in the Feature space.
Starting with a few samples, it produces fast results that can quickly be improved by
adding or editing samples.

11.1 Nearest Neighbor (NN) theory

Information
The use of NN as a classifier is advisable if you intend to use several Object Features for a
class description. There are several reasons for the use of NN: NN evaluates the
correlation between Object Features favorably. Overlaps in the Feature space increase as
its dimension increases and can be handled much more easily with NN. NN allows very fast
and easy handling of the class hierarchy.
Definiens distinguishes between two types of nearest neighbor expressions:
 Nearest Neighbor: The Feature space can be defined independently for each
individual class.
 Standard Nearest Neighbor: The Feature space of the standard nearest
neighbor is valid for the whole project and all classes to which the standard
nearest neighbor expression is assigned.
The standard NN is useful because in many cases the separation of classes only makes
sense when operating in the same Feature space.
In this Module you will work with an existing project, collect samples of four different
land cover types and classify the image using the nearest neighbor classifier.
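The core of the nearest neighbor assignment can be sketched in a few lines of Python. The two-dimensional Feature space and the sample coordinates below are toy assumptions; the real classifier additionally converts the distance into a membership value via the function slope, which this sketch omits.

```python
import numpy as np

# Toy samples in a 2-D Feature space (e.g. mean nir, mean pan).
samples = {
    "Water":    np.array([[5.0, 8.0], [6.0, 9.0]]),
    "Woodland": np.array([[40.0, 30.0], [42.0, 33.0]]),
}

def nearest_neighbor(obj_features):
    """Assign the class of the closest sample in Feature space."""
    best_class, best_dist = None, float("inf")
    for cls, pts in samples.items():
        # Euclidean distance to the nearest sample of this class.
        dist = np.linalg.norm(pts - obj_features, axis=1).min()
        if dist < best_dist:
            best_class, best_dist = cls, dist
    return best_class

print(nearest_neighbor(np.array([7.0, 10.0])))   # closest to the Water samples
```

Because only distances to samples matter, adding one well-placed sample immediately changes the decision boundary — which is exactly why the iterative sample refinement later in this lesson works.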


Classic workflow

Information
To classify Image Objects using the Nearest Neighbor classifier, follow the
recommended workflow:
1. Choose the Features you want to use for the Feature space. The default Feature
space consists of the Mean Features of the layers.
2. Load or create classes.
3. Append a new Process.
4. Choose the algorithm nearest neighbor configuration.
5. Set the algorithm parameters:
 Select the classes to apply the NN to.
 Define the Feature space.
 Define the Function Slope.
6. Execute the Process.
The Feature space to evaluate the Classification is applied to the Class Description of the
classes.
7. Select the samples for the classes.
8. Append a new Process to classify using the algorithm classification.


Figure 93: Membership Classification vs. Nearest Neighbor Classification.


11.2 Nearest Neighbor configurations

Preparation

Action!
1. Import the project file Dessau_NearestNeighbor.dpr in the LANDSAT_Dessau
folder.
2. Open the project. An Image Object Level already exists and the following classes
have been inserted:
 Woodland General
 Grassland General
 Impervious General
 Water Bodies

Result Check

Figure 94: Class Hierarchy of Dessau project.

Figure 95: Landsat satellite image subset of Dessau, outlines View.

Action!
3. In the Process Tree append a new Parent Process Classification Nearest Neighbor.

Rule Set Check

Figure 96: Process Tree with inserted Parent Process for NN Classification.

106
Sample Based Classification with Nearest Neighbor Classifier
Append a Process using the algorithm nearest neighbor configuration

Information
With this algorithm the Feature space can be defined and applied to the selected
classes. The Feature space is an n-dimensional combination of Features used for
calculating membership values. Before defining the Feature space, decide which
Features you intend to use. The Feature View will help you define your Feature space.

Action!
1. Insert a Child Process using the algorithm nearest neighbor configuration.
Algorithm parameters:
2. Active classes: select all classes.

NOTE:
This algorithm is not listed in the default algorithm list. You first have to make it
available: go to the end of the algorithm list and select more.

Settings Check

Figure 97: Process settings for Nearest Neighbor configuration.

Define the Feature Space

Action!
3. Click in the value field of NN Feature space.
The Select Multiple Features window opens. By default, the layer Mean Features and
Standard Deviations are listed in the Selected box.
4. To add a Feature to the Feature space, navigate to it in the tree in the left window
and double-click it.
The selected Feature will appear in the Selected window.
5. To remove a Feature from the Feature space, double-click it in the Selected
window.
It will be removed and reappear in the Available window.
6. Confirm with OK after all Features you need are listed in the Selected window.

107
Sample Based Classification with Nearest Neighbor Classifier

Settings Check

Figure 98: Select Multiple Features window with Mean and Standard Deviation Features selected for
the Feature Space definition.

7. Keep the standard value for the Function Slope.
This indicates which membership function value an Object must reach to be
classified to the corresponding class.
8. Execute the Process.
The standard NN Feature space is now defined for the entire project.
Check by opening the Class Description. If the standard NN Feature space is
changed in one class description, these changes affect all classes that contain the
standard NN expression.

Settings Check

Figure 99: Class Description containing Nearest Neighbor features.

NOTE:

The Feature space for both the nearest neighbor and the standard nearest neighbor
classifier can be edited by double-clicking them in the class description.

11.3 Declare Sample Objects for the NN Classification (manual step!)

Information
After assigning the Nearest Neighbor classifier to all classes, the samples for all classes
have to be collected.

Action!
1. To open the tools for selecting and evaluating samples do one of the following:
 Go to menu Classification>Samples and choose Select Samples, Sample
Editor and Sample Selection Information.
 Open the Sample Navigation tool bar and choose the tools there.

Settings Check

Figure 100: Toolbar Sample Navigation.

Sample Editor and Sample Selection Information window

Information
While in sample selection mode, two dialog boxes help with the selection of the
samples.
Sample Editor: While collecting samples, this dialog box displays histograms for each of
the listed Features. Right-click in the dialog box to change which Features are visualized.
Sample information is shown for individual classes, but this dialog box can also be used
to compare sample Feature values to those of another class.

Figure 101: Sample Editor window

Sample Selection Information: Once a class has at least one sample, the quality of a
new sample can be assessed in this dialog box. It can help to decide if an Object contains
new information for a class, or if it should belong to another class. Three values are
shown in the dialog box:
 Membership: shows the potential degree of membership according to the
adjusted function slope of the nearest neighbor classifier.
 Minimum Distance: shows the distance in Feature space to the closest
sample of the respective class.
 Mean Distance: shows the mean distance to all samples of the respective
class.

Figure 102: Sample Selection Information window.
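Minimum and mean distance are plain Feature-space distances to a class's samples and can be sketched directly; the sample coordinates and the candidate Object below are invented toy values (the membership value additionally depends on the function slope and is left out here).

```python
import numpy as np

# Toy samples of one class in a 2-D Feature space, plus a candidate Object.
class_samples = np.array([[5.0, 8.0],
                          [6.0, 9.0],
                          [9.0, 12.0]])
candidate = np.array([6.0, 8.0])

# Euclidean distance from the candidate to every sample of the class.
dists = np.linalg.norm(class_samples - candidate, axis=1)

minimum_distance = dists.min()   # distance to the closest sample
mean_distance = dists.mean()     # mean distance to all samples

print(minimum_distance, round(mean_distance, 3))
```

A small minimum distance with a large mean distance would indicate the candidate sits near one sample but far from the bulk of the class — the kind of pattern the dialog box helps you interpret when deciding whether to accept a sample.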

The Sample Editor

Information
In the Sample Editor, the Features are displayed which are selected in the Standard
Nearest Neighbor Feature space.
First make a class, e.g. Woodland General, the active class, so any samples
selected will be assigned to this class.
When you click an Object once, its Feature value in each of the listed Features is
highlighted with a red pointer in the Sample Editor. This enables you to compare
different Objects with regard to their Feature values.
The Feature values for an accepted sample Object are displayed as black lines in the
histograms of the Sample Editor.
In the Sample Selection Information dialog box, the membership value of this Object
to the Woodland General class is 1.0 and the distance is 0.0, indicating that it has full
membership to the Woodland General class.
Action!
2. In the Sample Editor select Woodland General from the Active Class list, or select
it in the Class Hierarchy.
3. Click once on a sample Object for the Woodland General class.
4. Double-click to accept this Object as a sample for the Woodland General class.
5. Click another potential sample Object for the Woodland General class.

110
Sample Based Classification with Nearest Neighbor Classifier
The Sample Selection Information

Information
Once a sample is assigned to a class, the quality of a new sample can be assessed in the
Sample Selection Information dialog box.
Analyze its membership value and its distance to the Woodland General class and to all
other classes within the Feature space.
Decide
 if the sample includes new information to describe the selected class (low
membership value to the selected class, low membership value to other classes),
 if it is in fact a sample of another class (low membership value to the selected class,
high membership value to other classes),
 or if it is a sample needed to distinguish the selected class from other classes
(high membership value to the selected class, high membership value to other classes).
Action!
6. Repeat the declaration of samples for the remaining classes Grassland General,
Impervious General and Water Bodies.

11.4 Add, edit and execute a Process to classify

The next step is to classify the Image Objects in the scene.

Action!
1. Add a new Process using the algorithm classification.
2. As Classification Filter choose unclassified.
3. Select all classes as Active Class.
4. Execute the Process.
The result of the Classification is now displayed. In the View Settings dialog box, the
View Mode setting has changed from Samples to Classification.

Rule Set Check

Figure 103: Process Tree with Process for NN configuration and Classification.

11.5 Refine the Classification

Information
Notice that not only are some Objects unclassified, but many Objects are classified
incorrectly. The Classification results will now be refined by iteratively
assigning non-classified and misclassified Image Objects as sample Objects of the
correct class.

Action!
1. While still in Select Samples mode, click another Object.
2. Assign one or two unclassified Objects to the class to which they belong. Do this as
necessary for each class.
3. Assign one or two incorrectly classified Objects to the correct class. Do this as
necessary for each class.
Use a helper Process

Information
Append a helper Process before the actual Classification Process which un-classifies all
Objects.

Action!
1. Append a Process outside the overall Process structure.
2. Use the algorithm assign class and select all classes as Classification Filter. Leave
unclassified as Active class and execute.
3. Re-execute the Classification Process which classifies all Objects using the Nearest
Neighbor classifier.
The refined Classification results are displayed.

NOTE:

When you are finished collecting samples, be sure to click on Select Samples to turn
off sample selection from the Samples menu.

Rule Set Check

Figure 104: Process Tree with helper Process for refining NN Classification.

Lesson 11 had the following content:

 Nearest Neighbor (NN) theory
 Nearest Neighbor configurations
 Declare Sample Objects for the NN Classification (manual step!)
 Add, edit and execute a Process to classify
 Refine the Classification


Lesson 12 Batch-Processing with eCognition Server
This Lesson has the following content:

 Import data using an existing template
 Submitting data for analysis
 View Job Scheduler status in a browser
 Roll-back to initial status

Definiens in combination with a Definiens eCognition Server allows batch processing of entire sets of image data. The following Lesson will walk you through the different steps involved in setting up, running and monitoring a batch process.
We will use a Rule Set for batch-processing which classifies impervious surface from aerial RGB data and calculates categories compared to a GIS layer.

Note

Without a Definiens Server, you will not be able to submit data for batch processing.
However, it still may be useful to go through the steps of this tutorial to see how to
set up a Workspace. The Workspace can be quite useful for managing data and results
even without batch processing.

12.1 Import data using an existing template
This Chapter covers the following content

 Load the import template
 Load the data

We will first load the data to be processed into the existing Workspace using an import template. Multiple projects will be created automatically. Standardized import templates are used to load the image data according to the necessary file structure.
The import via template guarantees that the names of the image layers correspond to those used in the Rule Set (red, green and blue). This also applies to the names of thematic layers (parcels).

Aerial images together with the corresponding subset of a shape file will be loaded.
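The layer-name matching an import template performs can be pictured as a mapping from file-name patterns to the layer names the Rule Set expects. The patterns below are hypothetical; the real template (DSS applanix with parcel vectors.xml) encodes this in Definiens' own XML format:

```python
import re

# Hypothetical file-name patterns for the layer names the Rule Set expects
# (red, green, blue, parcels); the actual template rules may differ.
LAYER_PATTERNS = {
    "red":     re.compile(r".*_r\.tif$", re.IGNORECASE),
    "green":   re.compile(r".*_g\.tif$", re.IGNORECASE),
    "blue":    re.compile(r".*_b\.tif$", re.IGNORECASE),
    "parcels": re.compile(r".*\.shp$", re.IGNORECASE),
}

def map_layers(file_names):
    """Assign each file to the first layer pattern it matches."""
    layers = {}
    for name in file_names:
        for layer, pattern in LAYER_PATTERNS.items():
            if layer not in layers and pattern.match(name):
                layers[layer] = name
                break
    return layers

print(map_layers(["scene1_r.tif", "scene1_g.tif", "scene1_b.tif", "parcels.shp"]))
# → {'red': 'scene1_r.tif', 'green': 'scene1_g.tif', 'blue': 'scene1_b.tif', 'parcels': 'parcels.shp'}
```

The benefit of such a fixed mapping is that the Rule Set can refer to layers by stable names (red, green, blue, parcels) no matter how the input files are named on disk.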

12.1.1 Load the import template


1. Switch your view to the Load and Manage Data View.
2. Copy the file DSS applanix with parcel vectors.xml, which is located in the \01_Definiens_ESSENTIALS_TRAINING\Data\Module1\Aerial_Thematic folder, and paste it into the \bin\drivers\import folder of your Definiens installation.
3. Close Definiens Developer and open it again so that the new import template is loaded.

12.1.2 Load the data


Now you can add files to the Workspace using the newly added import template. The data import may take some time, because the thematic datasets must be scanned to make sure they fit the corresponding image data.
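One plausible part of that scan is a consistency check between each thematic dataset's extent and the corresponding image extent. The sketch below illustrates such a bounding-box check; the extent values are made up, and the actual checks Definiens performs are not documented here:

```python
def fits_within(thematic_bbox, image_bbox):
    """Check whether a thematic layer's bounding box (xmin, ymin, xmax, ymax)
    lies inside the image extent - the kind of per-dataset consistency check
    that can make an import scan take some time."""
    txmin, tymin, txmax, tymax = thematic_bbox
    ixmin, iymin, ixmax, iymax = image_bbox
    return txmin >= ixmin and tymin >= iymin and txmax <= ixmax and tymax <= iymax

image = (0.0, 0.0, 2000.0, 2000.0)        # hypothetical aerial tile extent
parcels = (100.0, 150.0, 1800.0, 1900.0)  # hypothetical parcel subset extent
print(fits_within(parcels, image))  # → True
```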
To keep the Workspace organized, first create a folder into which the data is loaded.
4. Right-click in the left Workspace folder and select Add Folder from the context
menu. Name it ImperviousSurface.

5. Select the created folder, right-click it and select Predefined Import.
6. From the drop-down Import Template list select DSS applanix with parcels.
7. Next to the Root Folder of Image Data field, click the folder icon to open the
Browse For Folder dialog box.
8. Make sure the Search in Subfolders checkbox is activated, because the data to be
imported is stored in different sub folders.
9. Select the folder containing the data to be analyzed, here:
\01_Definiens_ESSENTIALS_TRAINING\Module1\Aerial_Thematic.
10. Once the data is loaded in the Import Scenes dialog box, expand folders in the
Preview window by clicking the + sign or collapse them by clicking on the -
button to evaluate the data to be loaded.
11. Finally, click OK to import the data into the Workspace.
The Workspace now shows all datasets imported as well as additional information
available on the different Workspace items.

Figure 105: Left: Import Scenes dialog; Right: Workspace with automatically created Projects.

Chapter 12.1 covered the following content

 Load the import template
 Load the data

12.2 Submitting data for analysis


Once the data is loaded, the Workspace items can be opened, edited or sent to a server for analysis.
To submit data for analysis, you can select individual scenes, folders or the entire Workspace.
In this example, you will analyze only the recently loaded data.
1. Right-click the ImperviousSurface folder and select Analyze from the context menu. The Start Analysis Job dialog box opens.
2. In the Job Scheduler field, enter the network address of the computer delegating the analysis job to all participating computers.
3. Click the Browse button to select the Rule Set to be used in the batch processing. Browse to \01_Definiens_ESSENTIALS_TRAINING\Module1\Aerial_Thematic and select ImperviousSurface.dcp.
4. Leave the remaining settings at their defaults and click Start to run the analysis.


Figure 106: Starting the Analysis Job.

After submitting data for analysis, the Workspace entries display the current processing
status. If statistical information is exported, it will be added to the details of each
Workspace entry.



Figure 107: Viewing analysis status in the Workspace window.

Workspace entries can be opened by double-clicking. The result project will be opened
in the current Definiens application. If the project is modified and saved, the status will
be set to Edited. To reset a Workspace entry, select the entry and right-click, then select
History.

12.3 View Job Scheduler status in a browser
This Chapter covers the following content

 Review user jobs
 Review job overview
 View job details
 Review engine status
 Review engine usage
You can also examine the status of jobs submitted in the Workspace via your web browser (Microsoft Internet Explorer and so on). The address is identical to the Job Scheduler entry of the Start Analysis Job dialog.

 Open a web browser.
 Enter the Job Scheduler address. If a local Job Scheduler is used, enter http://localhost:8184.
The HTML page is split into four parts: User Jobs and Engines (on the left side of the screen), Engine Usage (lower part) and the Job Overview on the right, which is empty by default. You can resize the panes by clicking on the dividers and dragging them.
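Since the status page is plain HTML, it can also be read programmatically. The sketch below extracts table cells from a hypothetical snippet of such a page using only the Python standard library; the real markup served at http://localhost:8184 may be structured differently:

```python
from html.parser import HTMLParser

class JobStatusParser(HTMLParser):
    """Collect the text of table cells from a (hypothetical) User Jobs table."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.cells.append(data.strip())

# Hypothetical fragment of a status page: job index followed by its status.
snippet = ("<table><tr><td>1</td><td>done</td></tr>"
           "<tr><td>2</td><td>processing</td></tr></table>")
parser = JobStatusParser()
parser.feed(snippet)
print(parser.cells)  # → ['1', 'done', '2', 'processing']
```

In practice the snippet would come from fetching the Job Scheduler address shown above rather than from a string literal.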


Figure 108: The Job Scheduler status page.

12.3.1 Review user jobs


Look in the User Jobs section (upper left) to see all jobs on schedule. There are four options you can use to filter this list.
 All is the default.
 Active jobs are those currently being processed.
 Inactive jobs encompass both successfully completed jobs and those that failed or were cancelled.
 Failed lists only those that did not successfully finish.
Any filter in use is surrounded by asterisks (this applies to all filters on the page).
Look at some of the available data in this pane:
1. Click Active to display only jobs currently running.
2. Push the Refresh button to reload the site.
3. Click Log to see additional information about how the job was processed. The log lists the dates of events, followed by machine and engine number and the type of event (an engine either connecting or shutting down).
4. Click the index number of a job in the User Jobs pane to view its details in the Job Overview section.

12.3.2 Review job overview


If you wish to stop the current job, click the Cancel link in the upper right corner.
Click on the number in front of a job to switch to the Job Details pane and review details of the results.
The status of the result is one of the following:

 done
 failed
 waiting
 cancelled
 processing
Information displayed about a specific job includes the start and end times, the version number of your Definiens software, the (local) path of the utilized Rule Set, a list of the image layers submitted for processing and the path of all the output files you specified in the Configure Exported Results dialog. In case of errors, a Remarks section is also displayed, providing information about the origin of the error.
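The five result statuses can be grouped by whether a job is still subject to change, which is useful when polling the status page. The grouping below is an interpretation of the list above, not taken from the manual:

```python
# Terminal statuses will not change anymore; pending ones still may.
# This split is an assumption based on the status names.
TERMINAL = {"done", "failed", "cancelled"}
PENDING = {"waiting", "processing"}

def is_finished(status):
    """Return True if a job with this status will not change anymore."""
    if status not in TERMINAL | PENDING:
        raise ValueError(f"unknown job status: {status}")
    return status in TERMINAL

print(is_finished("done"), is_finished("waiting"))  # → True False
```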

12.3.3 View job details


In the User Jobs section, you can review processed jobs by monitoring the result status. If a submitted job failed, look in the Remarks section of the Job Details pane for further information.

12.3.4 Review engine status


In the Engines section, the participating cluster nodes are listed. Filter them by selecting either only the Active or only the Inactive nodes. The status of an active node is idle. The status of nodes whose analysis could not be completed is set to timeout. If an error occurred during processing, check the Remarks section for details.

12.3.5 Review engine usage


The Engine Usage pane displays two graphs representing the capacity utilization of the cluster nodes. The left graph represents the workload of the last 60 seconds, while the right one displays data for the last 24 hours.

Chapter 12.3 covered the following content

 Review user jobs
 Review job overview
 View job details
 Review engine status
 Review engine usage

12.4 Roll-back to initial status


If you need to repeat an automated image analysis, you can roll back the corresponding scenes. This means they will regain the status created.
1. Return to the Workspace window.
2. Select the folder ImperviousSurface.
3. Right-click and select Rollback from the context menu to reset their state.
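The rollback can be pictured as a state reset over the Workspace entries in the selected folder. The sketch below uses status names mentioned in this Lesson (created, edited) plus a hypothetical processed status; it only illustrates the idea, not Definiens internals:

```python
def rollback(entries):
    """Reset every scene in a folder to its initial 'created' state,
    regardless of whether it was processed or edited since import."""
    return {name: "created" for name in entries}

# Hypothetical folder contents after analysis and manual edits.
folder = {"scene_01": "processed", "scene_02": "edited"}
print(rollback(folder))  # → {'scene_01': 'created', 'scene_02': 'created'}
```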

Lesson 12 had the following content:

 Import data using an existing template
 Submitting data for analysis
 View Job Scheduler status in a browser
 Roll-back to initial status
