Release 7.1.0.35
Thinkbox Software
CONTENTS

1 Introduction
   1.1 Overview
   1.2 Feature Set
   1.3 Supported Software
   1.4 Render Farm Considerations

2 Installation
   2.1 System Requirements
   2.2 Licensing
   2.3 Database and Repository Installation
   2.4 Client Installation
   2.5 Submitter Installation
   2.6 Upgrading or Downgrading Deadline
   2.7 Relocating the Database or Repository
   2.8 Importing Repository Settings

3 Getting Started
   3.1 Application Configuration
   3.2 Submitting Jobs
   3.3 Monitoring Jobs
   3.4 Controlling Jobs
   3.5 Archiving Jobs
   3.6 Monitor and User Settings
   3.7 Local Slave Controls

4 Client Applications
   4.1 Launcher
   4.2 Monitor
   4.3 Slave
   4.4 Pulse
   4.5 Balancer
   4.6 Command
   4.7 Web Service
   4.8 Mobile

5 Administrative Features
   5.1 Repository Configuration
   5.2 User Management
   5.3 Slave Configuration
   5.4 Pulse Configuration
   5.5 Balancer Configuration
   5.6 Job Scheduling
   5.7 Pools and Groups
   5.8 Limits and Machine Limits
   5.9 Job Failure Detection
   5.10 Notifications
   5.11 Remote Control
   5.12 Network Performance
   5.13 Cross Platform Rendering

6 Advanced Features
   6.1 Manual Job Submission
   6.2 Power Management
   6.3 Slave Scheduling
   6.4 Farm Statistics
   6.5 Client Configuration
   6.6 Auto Configuration
   6.7 Render Environment
   6.8 Multiple Slaves On One Machine
   6.9 Cloud Controls
   6.10 Job Transferring

7 Scripting
   7.1 Scripting Overview
   7.2 Application Plugins
   7.3 Event Plugins
   7.4 Cloud Plugins
   7.5 Balancer Plugins
   7.6 Monitor Scripts
   7.7 Job Scripts
   7.8 Web Service Scripts
   7.9 Standalone Python API

8 REST API
   8.1 REST Overview
   8.2 Jobs
   8.3 Job Reports
   8.4 Tasks
   8.5 Task Reports
   8.6 Slaves
   8.7 Pulse
   8.8 Balancer
   8.9 Limits
   8.10 Users
   8.11 Repository
   8.12 Pools
   8.13 Groups

9 Application Plugins
   9.1 3ds Command
   9.2 3ds Max
   9.3 After Effects
   9.4 Anime Studio
   9.5 Arion Standalone
   9.6 Arnold Standalone
   9.7 AutoCAD
   9.8 Blender
   9.9 Cinema 4D
   9.10 Cinema 4D Team Render
   9.11 Clarisse iFX
   9.12 Combustion
   9.13 Command Line
   9.14 Command Script
   9.15 Composite
   9.16 Corona Standalone
   9.17 Corona Distributed Rendering
   9.18 CSiBridge
   9.19 CSiETABS
   9.20 CSiSAFE
   9.21 CSiSAP2000
   9.22 DJV
   9.23 Draft
   9.24 Draft Tile Assembler
   9.25 EnergyPlus
   9.26 FFmpeg
   9.27 Fusion
   9.28 Fusion Quicktime
   9.29 Generation
   9.30 Hiero
   9.31 Houdini
   9.32 Lightwave
   9.33 LuxRender
   9.34 LuxSlave
   9.35 Mantra Standalone
   9.36 Maxwell
   9.37 Maya
   9.38 Media Encoder
   9.39 Mental Ray Standalone
   9.40 Messiah
   9.41 MetaFuze
   9.42 MetaRender
   9.43 MicroStation
   9.44 modo
   9.45 Naiad
   9.46 Natron
   9.47 Nuke
   9.48 Nuke Frame Server
   9.49 Octane Standalone
   9.50 PRMan (Renderman Pro Server)
   9.51 Puppet
   9.52 Python
   9.53 Quicktime Generation
   9.54 Realflow
   9.55 REDLine
   9.56 Renderman (RIB)
   9.57 Rendition
   9.58 Rhino
   9.59 RVIO
   9.60 Salt
   9.61 Shake
   9.62 SketchUp
   9.63 Softimage
   9.64 Terragen
   9.65 Tile Assembler
   9.66 V-Ray Distributed Rendering
   9.67 VRay Standalone
   9.68 VRay Ply2Vrmesh
   9.69 VRay Vrimg2Exr
   9.70 VRED
   9.71 VRED Cluster
   9.72 Vue
   9.73 xNormal

10 Event Plugins
   10.1 Draft
   10.2 FontSync
   10.3 ftrack
   10.4 Puppet
   10.5 Salt
   10.6 Shotgun

11 Cloud Plugins
   11.1 Amazon EC2
   11.2 Google Cloud
   11.3 Microsoft Azure
   11.4 OpenStack
   11.5 vCenter

12 Release Notes
   12.1 Deadline 7.0.0.54 Release Notes
   12.2 Deadline 7.0.1.3 Release Notes
   12.3 Deadline 7.0.2.3 Release Notes
   12.4 Deadline 7.0.3.0 Release Notes
   12.5 Deadline 7.1.0.35 Release Notes
CHAPTER ONE: INTRODUCTION
1.1 Overview
Deadline is a hassle-free administration and rendering toolkit for Windows, Linux, and Mac OS X based render farms.
It offers a world of flexibility and a wide range of management options for render farms of all sizes, and supports over
60 different rendering packages out of the box.
Deadline 7 is the latest version of Thinkbox Software's scalable, high-volume compute management solution. It features built-in VMX (Virtual Machine Extension) capabilities, which allow artists, architects, and engineers to harness
resources in both public and private clouds.
In addition to enhanced cloud support, Deadline 7 expands support for the Jigsaw multi-region rendering feature,
which can now be accessed in 3ds Max, Maya, modo, and Rhino. Deadline 7 also includes an updated version of
Draft, Thinkbox's lightweight compositing and video processing plug-in, designed to automate typical post-render
tasks such as image format conversion and the creation of animated videos and QuickTimes, contact sheets, and
watermark elements on exported images. Finally, Deadline 7 introduces a wealth of new features, enhancements, and
bug fixes.
Deadline 7.1 adds many new features to Deadline 7.0, including new slave metrics, better font synchronization, and
new application support. It also fixes some bugs that were discovered after Deadline 7.0 was released.
Note that a new 7.1 license is required to run this version. If you have a license for Deadline 7.0 or earlier, you will
need an updated license. In addition, the version of Draft that ships with Deadline 7.1 needs a new 1.3 license. If you
have a license for Draft 1.2 or earlier, you will need an updated license.
1.1.1 Components
The Deadline Render Farm Management System is made up of three components:

- A single Deadline Database
- A single Deadline Repository
- One or more Deadline Clients
The Database and Repository together act as a global system where all of Deadline's data is stored. The Clients
(workstations and render nodes) then connect to this system to submit, render, and monitor jobs. It is important to
note that while the Database and Repository work together, they are still separate components, and therefore can be
installed on separate machines if desired.
1.1.2 Database
The Database is the global database component of the Deadline Render Farm Management System. It stores the jobs,
settings, and slave configurations. The Clients access the Database via a direct socket connection over the network. It
only needs to be installed on one machine (preferably a server), and does not require a license.
1.1.3 Repository
The Repository is the global file system component of the Deadline Render Farm Management System. It stores the
plugins, scripts, logs, and any auxiliary files (like scene files) that are submitted with the jobs. The Clients access the
Repository via a shared network path. It only needs to be installed on one machine (preferably a server), and does not
require a license.
1.1.4 Client
The Client should be installed on your render nodes, workstations, and any other machines you wish to participate in
submitting, rendering, or monitoring jobs. The Client consists of the following applications:
- Launcher: Acts as a launch point for the Deadline applications on workstations, and facilitates remote communication on render nodes.
- Monitor: An all-in-one application that artists can use to monitor their jobs and administrators can use to monitor the farm.
- Slave: Controls the rendering applications on the render nodes.
- Command: A command line tool that can submit jobs to the farm and query for information about the farm.
- Pulse: An optional mini server application that performs maintenance operations on the farm, and manages more advanced features like Auto Configuration, Power Management, Slave Throttling, Statistics Gathering, and the Web Service. If you choose to run Pulse, it only needs to be running on one machine.
- Balancer: An optional Cloud-controller application that can create and terminate Cloud instances based on things like available jobs and budget settings.

Note that the Slave and Balancer applications are the only Client applications that require a license.
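As a small illustration of how the Command application is used, a job can be submitted from the command line with a pair of plain-text info files. This is only a sketch: the exact keys depend on the plug-in being used, and the file names and values below are hypothetical — see the Manual Job Submission chapter for the full list of supported keys.

```ini
; job_info.job -- general job settings (illustrative values)
Plugin=MayaBatch
Name=Example Render
Frames=1-100
ChunkSize=5
Pool=none
Priority=50

; plugin_info.job -- plug-in specific settings (MayaBatch shown as an example)
SceneFile=\\server\projects\shot01\scene.mb
Version=2015
```

The two files would then be passed to the Command application, e.g. `deadlinecommand job_info.job plugin_info.job`.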
1.1.5 Jobs
A Deadline job typically represents one of the following:

- The rendering of an animation sequence from a 3D scene.
- The rendering of a frame sequence from a composition. It could represent a single write node, or multiple write nodes with the same frame range.
- The generation of a Quicktime movie from an existing image sequence.
- A simulation.
These are just some common cases. Since a job simply represents some form of processing, a plug-in can be created
for Deadline to do almost anything you can think of.
Job Breakdown
A job can be broken down into one or more tasks, where each task is an individual unit that can be rendered by the
Slave application. Each task can then consist of a single frame or a sequence of frames. Here are some examples:
- When rendering an animation with 3ds Max where each frame can take hours to render, each frame can be rendered as a separate task.
- When rendering a compositing job with After Effects where each frame can take seconds to render, each task could consist of 20 frames.
- When rendering a Quicktime job to create a movie from an existing sequence of images, the job would consist of a single task, and that task would consist of the entire image sequence.
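The breakdown above amounts to splitting a frame range into fixed-size chunks, which can be sketched in a few lines (an illustration of the idea only, not Deadline's actual implementation):

```python
def split_into_tasks(start, end, chunk_size):
    """Split an inclusive frame range into tasks of at most chunk_size frames.

    Mirrors the breakdown described above: a chunk_size of 1 yields one
    task per frame, while a chunk_size covering the whole range yields a
    single task for the entire sequence.
    """
    tasks = []
    frame = start
    while frame <= end:
        last = min(frame + chunk_size - 1, end)
        tasks.append((frame, last))
        frame = last + 1
    return tasks

# One task per frame (e.g. a slow 3ds Max render):
print(split_into_tasks(1, 5, 1))     # [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)]
# 20-frame tasks (e.g. a fast After Effects comp):
print(split_into_tasks(1, 100, 20))  # [(1, 20), (21, 40), (41, 60), (61, 80), (81, 100)]
```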
Job Scheduling
Use numeric job priorities, machine groups and pools, and job-specific machine lists to explicitly control the distribution
of rendering resources among multiple departments. Limits allow you to handle plug-ins and
render packages with a limited number of licenses, while job dependencies and scheduling allow you to control when your jobs will begin rendering.
The Slave applications are fully responsible for figuring out which job they should render next, and they do this by
connecting directly to the Database. In other words, there is no central server application that controls which jobs the
Slaves are working on. The benefit to this is that as long as your Database and Repository are online, Deadline will be
fully operational.
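The decentralized scheduling described above can be illustrated with a short sketch: each Slave ranks the candidate jobs itself, here by pool order first, then numeric priority, then submission time. This is an illustration only — Deadline's real ordering rules are configurable and richer than this.

```python
from datetime import datetime

def pick_next_job(jobs, slave_pools):
    """Return the job this slave would render next, or None.

    Illustrative ordering: earlier pools in the slave's pool list win first,
    then higher numeric priority, then earlier submission time.
    """
    candidates = [j for j in jobs if j["pool"] in slave_pools]
    if not candidates:
        return None
    return min(candidates, key=lambda j: (slave_pools.index(j["pool"]),
                                          -j["priority"],
                                          j["submitted"]))

jobs = [
    {"name": "comp",  "pool": "nuke", "priority": 90, "submitted": datetime(2015, 5, 1)},
    {"name": "anim",  "pool": "maya", "priority": 50, "submitted": datetime(2015, 5, 2)},
    {"name": "anim2", "pool": "maya", "priority": 80, "submitted": datetime(2015, 5, 3)},
]
# A slave whose pool list prefers "maya" picks the highest-priority maya job:
print(pick_next_job(jobs, ["maya", "nuke"])["name"])  # anim2
```

Because every Slave evaluates this ordering against the Database directly, no central dispatcher is needed.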
1.2.6 Notifications
Deadline can be configured to notify users of job completion or failure through an automatic e-mail notification or a
popup message on the user's machine.
Administrators can also configure Deadline to notify them with information about Power Management, stalled Slaves,
licensing issues, and other issues that may arise on the farm.
doing so, Deadline provides a seamless transition from Job Submission to Review process, without artists needing to
monitor their renders.
1.2.9 Draft
Draft is a tool that provides simple compositing functionality. It is implemented as a Python library, which exposes
functionality for use in Python scripts. Draft is designed to be tightly integrated with Deadline, but it can also be used
as a standalone tool.
Using the Draft plugin for Deadline, artists can automatically perform simple compositing operations on rendered
frames after a render job finishes. They can also convert them to a different image format, or generate Quicktimes for
dailies.
Active Deadline subscribers are entitled to Draft licenses at no additional cost, and can request a Draft license by
emailing sales@thinkboxsoftware.com.
Supported Renderers
Brazil r/s
Corona
finalRender
finalToon
Krakatoa
Maxwell
NVIDIA iray
NVIDIA Mental Ray
Quicksilver
RenderPipe
Scanline
VRay
Supports Versions 8 to 11
Shotgun Support
ftrack Support
Draft Support
Path Mapping Of Scene File Path
Path Mapping Of Output Path
1.3.6 AutoCAD
Highlighted Features
1.3.7 Blender
Highlighted Features
Supported Renderers
All
1.3.8 Cinema 4D
Highlighted Features
Supports Versions 12 to 16
Integrated Submission
Local Rendering
Automatic Scene Exporting
Team Render Support
Custom Sanity Check
Shotgun Support
ftrack Support
Draft Support
Path Mapping Of Scene File Path
Path Mapping Of Output Path
Supported Renderers
All
Integrated Submission
Automatic Render Archiving
Path Mapping Of Scene File Path
Path Mapping Of Config File Path
Path Mapping Of Module Paths
Path Mapping Of Search Paths
1.3.10 Combustion
Highlighted Features
1.3.12 Composite
Highlighted Features
Supported Applications
1.3.15 CSiBridge
Highlighted Features
1.3.16 CSiETABS
Highlighted Features
Submit Solver, Analysis and Reporting jobs
Cleanup Options to Optimize Data Size
Optional Automatic Compression of Output
Documentation: CSiETABS Documentation
1.3.17 CSiSAFE
Highlighted Features
1.3.18 CSiSAP2000
Highlighted Features
1.3.19 DJV
Highlighted Features
1.3.20 Draft
Highlighted Features
1.3.21 EnergyPlus
Highlighted Features
1.3.22 FFmpeg
Highlighted Features
1.3.23 ftrack
Highlighted Features
1.3.24 Fusion
Highlighted Features
Supports Versions 5 to 7
Integrated Submission
Keeps Scene In Memory
Custom Sanity Check
Quicktime Generation
Shotgun Support
ftrack Support
Draft Support
1.3.25 Generation
Highlighted Features
Integrated Submission
Submit Comp Jobs To Fusion
Documentation: Generation Documentation
1.3.26 Hiero
Highlighted Features
Integrated Submission
Submit Transcoding Jobs To Nuke
Documentation: Hiero Documentation
1.3.27 Houdini
Highlighted Features
Supports Versions 9 to 14
Integrated Submission
Submit ROPs as Separate Jobs
Submit Wedge ROPs as Separate Jobs
IFD Export Jobs
Custom Sanity Check
Shotgun Support
ftrack Support
Draft Support
Path Mapping Of Scene File Path
Path Mapping Of Output File Path
Path Mapping Of Scene File Contents
Path Mapping Of IFD File Path
Supported Renderers
All
1.3.28 Lightwave
Highlighted Features
Supported Renderers
All
1.3.29 LuxRender
Highlighted Features
Path Mapping Of Scene File Path
Documentation: LuxRender Documentation
1.3.30 LuxSlave
Highlighted Features
Supports Versions 7 to 13
Shotgun Support
ftrack Support
Path Mapping Of IFD File Path
Path Mapping Of Output File Path
Path Mapping Of IFD File Contents
1.3.32 Maxwell
Highlighted Features
1.3.33 Maya
Highlighted Features
Supported Renderers
3Delight
Arnold
Caustic Visualizer
Final Render
Gelato
Krakatoa
Maxwell
MayaSoftware
MayaHardware
MayaVector
Mental Ray
Octane
Redshift
Renderman
Renderman RIS
Turtle
VRay
Local Rendering
Shotgun Support
ftrack Support
Path Mapping Of Input File Path
Path Mapping Of Output File Path
Local Rendering
Shotgun Support
ftrack Support
Path Mapping Of Input File Path
Path Mapping Of Output File Path
1.3.36 Messiah
Highlighted Features
Integrated Submission
Shotgun Support
ftrack Support
Path Mapping Of Scene File Path
Path Mapping Of Output Folder Path
Path Mapping Of Content Folder Path
1.3.37 MetaFuze
Highlighted Features
Batch Folder Submission
Path Mapping Of Scene File Path
Documentation: MetaFuze Documentation
1.3.38 MetaRender
Highlighted Features
Path Mapping Of Input File Path
Path Mapping Of Output File Path
Documentation: MetaRender Documentation
1.3.39 MicroStation
Highlighted Features
Supported Renderers
Luxology (modo)
All built-in renderers
1.3.40 modo
Highlighted Features
Supported Renderers
modo's default renderer
VRay
1.3.41 Naiad
Highlighted Features
Simulation Jobs
EMP to PRT Conversion Jobs
Shotgun Support
ftrack Support
Path Mapping Of Scene File Path
Path Mapping Of EMP File Path
1.3.42 Natron
Highlighted Features
1.3.43 Nuke
Highlighted Features
Supports Versions 6 to 9
Integrated Submission
Keeps Scene In Memory
Submit Write Nodes As Separate Jobs
Submit Write Nodes in Precomp Nodes
Specify Views to Render
Render Using Proxy Mode
Nuke Studio Support
Studio Frame Server distributed rendering
Studio Sequence Submission
Custom Sanity Check
Shotgun Support
ftrack Support
Draft Support
Path Mapping Of Scene File Path
Path Mapping Of Scene File Contents
Shotgun Support
ftrack Support
Path Mapping Of Scene File Path
Path Mapping Of Output File Path
Shotgun Support
ftrack Support
Path Mapping Of Input File Path
Path Mapping Of Working Directory Path
1.3.46 Puppet
Highlighted Features
Sync applications and plugins across render nodes
Automatically sync when render nodes are idle
Documentation: Puppet Event Documentation
1.3.47 Python
Highlighted Features
1.3.48 Quicktime
Highlighted Features
1.3.49 RealFlow
Highlighted Features
1.3.50 REDLine
Highlighted Features
Path Mapping Of Scene File Path
Path Mapping Of Output Folder Path
Path Mapping Of RSX File Path
Documentation: REDLine Documentation
Highlighted Features
Shotgun Support
ftrack Support
Draft Support
Path Mapping Of Input File Path
Supported Renderers
3Delight
AIR
Aqsis
BMRT
Entropy
PRMan
Pixie
RenderDotC
RenderPipe
1.3.52 Rendition
Highlighted Features
Tile Rendering
Shotgun Support
ftrack Support
Path Mapping Of Scene File Path
Path Mapping Of Output File Path
1.3.53 Rhino
Highlighted Features
Supported Renderers
Brazil r/s
Flamingo Raytrace
Flamingo Photometric
Maxwell
Penguin
Rhino
TreeFrog
VRay
1.3.54 RVIO
Highlighted Features
Shotgun Support
ftrack Support
Path Mapping Of Input File Paths
Path Mapping Of Audio File Paths
Path Mapping Of Output File Path
1.3.55 Salt
Highlighted Features
Sync applications and plugins across render nodes
Automatically sync when render nodes are idle
Documentation: Salt Event Documentation
1.3.56 Shake
Highlighted Features
Shotgun Support
ftrack Support
Path Mapping Of Scene File Path
Documentation: Shake Documentation
1.3.57 Shotgun
Highlighted Features
1.3.58 SketchUp
Highlighted Features
Supported Renderers
All
1.3.59 Softimage
Highlighted Features
Supported Renderers
All
1.3.60 Terragen
Highlighted Features
Supports Versions 2 to 3
Local Rendering
Path Mapping Of Scene File Path
Path Mapping Of Output File Path
Supported Applications
1.3.63 VRED
Highlighted Features
1.3.65 Vue
Highlighted Features
1.3.66 xNormal
Highlighted Features
Path Mapping Of Scene File Path
Documentation: xNormal Documentation
When rendering in a mixed OS environment, you can configure Deadline to swap paths based on the operating system it is running on. The way this works is often specific to the rendering application that you are using, so please refer to the Cross-Platform Rendering Considerations section for the plug-in that you are using for more information. You can access plug-in specific documentation in the Plug-ins documentation.
4. The next time you restart the machine, it should log in automatically as the specified user.
By default, the Slaves are set to start automatically when the machine logs in. This setting, as well as others, can be
modified from the Launcher on each machine.
For more information about the possible settings, see the MSDN article on WER Settings.
It's also possible to just default to sending them if you like, or to store the crash dumps in a safe place if you're a developer.
Protocol   Port Number   Service                     Comment
UDP        17061         Pulse auto-configuration    Default UDP port - Pulse listens for broadcasts on the UDP port
TCP        17061         Pulse auto-configuration
TCP        17062         Pulse
TCP        27017         MongoDB
TCP        28017         MongoDB Web API
TCP        8080          Pulse WebService
UDP                      WoL (Wake-On-Lan)
UDP        123           NTP
TCP        25            SMTP
TCP        587           SMTP (submission)
TCP        465           SMTP SSL
License Server
If necessary, ensure that the Thinkbox FlexLM license file has been configured to run over an exact TCP port, and that this port has also been allowed through any required firewall or network switch. Please refer to the FLEXnet Licensing Documentation.
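For example, a license file pinned to exact TCP ports typically uses the SERVER and VENDOR lines shown below. The host name, host ID, and port numbers here are placeholders; check your actual Thinkbox license file and the FLEXnet documentation for the correct values:

```
SERVER lic-server 001122334455 27000
VENDOR thinkbox PORT=27001
```

With both ports fixed like this, only 27000 and 27001 need to be opened in the firewall, rather than the whole dynamic range FlexLM would otherwise use for the vendor daemon.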
External Web Service Access & Deadline Mobile
If external network access is required, please see the Web Service and Deadline Mobile documentation.
CHAPTER TWO
INSTALLATION
2.1.1 Database
Deadline uses MongoDB for the Database, and requires MongoDB 2.6.1 or later. The Repository installer can install the MongoDB database for you, or you can use an existing MongoDB installation providing that it is running
MongoDB 2.6.1 or later.
The following operating systems are supported for the Database:
Windows Server 2003 and later (64-bit)
Linux (64-bit)
Mac OS X 10.7 and later (64-bit)
These are the minimum recommended hardware requirements for a production Database:
64-bit Architecture
8 GB RAM
4 Cores
RAID or SSD disks
20 GB disk space
Note that MongoDB performs best if all the data fits into RAM, and it has fast disk write speeds. In addition, larger
farms may have to scale up on RAM and Cores as necessary, or even look at Sharding their database. Finally, while
you can install MongoDB to a 32-bit system for testing, it has limitations and is not recommended for production. For
example, the database size will be limited to 2 gigabytes, and Journaling will be disabled. Without Journaling, it will
not be possible to repair the database if a crash corrupts the data. See the MongoDB FAQ for more information.
Windows
If you choose a non-Server Windows operating system (Vista, 7, or 8) to host the database, you should be aware that these operating systems have a TCP/IP connection limitation of 10 new connections per second. If your render farm consists of more than 10 machines, it is very likely that you'll hit this limitation every now and then (and the odds continue to increase as the number of machines increases). This is a limitation of the operating systems, and isn't something that we can work around, so we recommend using a Server edition of Windows, or a different operating system like Linux.
Linux
If you choose a Linux system to host the database, you will need to make sure the system resource limits are configured
properly to avoid connection issues. More details can be found in the Database and Repository Installation Guide.
Other Linux recommendations include:
Do not run MongoDB on systems with Non-Uniform Memory Access (NUMA). It can cause a number of operational problems, including slow performance or high system process usage.
Install on a system with a minimum Linux kernel version of 2.6.36.
Install on a system with Ext4 or XFS file systems.
Turn off atime or relatime for the storage volume containing the database files, as it can impact performance.
Do not use hugepages virtual memory pages, as MongoDB performs better with normal virtual memory pages.
Mac OS X
If you choose a Mac OS X system to host the database, you will need to make sure the system resource limits are
configured properly to avoid connection issues. More details can be found in the Database and Repository Installation
Guide.
2.1.2 Repository
The Repository is just a collection of files and folders, so it can be installed to any type of share on any type of
operating system. Common Repository choices include:
Windows Server
Linux
FreeBSD
While the Repository can be installed on any operating system, the Repository installer is only supported on the
following operating systems. To install on a different operating system, first create the network share on that
system, and then run the Repository installer on one of the systems below and choose the network share as the
installation location.
Windows (32 and 64-bit)
Windows XP and later (32 and 64-bit)
Windows Server 2003 and later (32 and 64-bit)
Linux (64-bit only)
Ubuntu 12.04 and later
Debian 7 and later
Fedora 16 and later
CentOS 6 and later
RHEL 6 and later
Chapter 2. Installation
2.1.3 Client
The Client can be installed on Windows, Linux, or Mac OS X. The requirements for today's rendering applications go
far beyond the requirements of the Client, so if a machine is powerful enough to be used for rendering, it is more than
capable of running the Client applications.
If you choose to run Pulse or Balancer, and you wish to run it on the same machine as the Database and/or Repository,
you will have to install the Client on that machine as well.
The following operating systems are supported for the Client:
Windows (32 and 64-bit)
Windows XP and later (32 and 64-bit)
Windows Server 2003 and later (32 and 64-bit)
Linux (64-bit only)
Ubuntu 12.04 and later
Debian 7 and later
Fedora 16 and later
CentOS 6 and later
RHEL 6 and later
Mac OS X (64-bit only)
10.7 (OS X Lion) and later
Note that on Linux, the Deadline applications have dependencies on some libraries that are installed with the lsb
(Linux Standard Base) package. To ensure you have all the dependencies you need, we recommend installing the full
lsb package. In addition, the libX11 and libXext libraries must be installed on Linux for the Deadline applications to run, even if running them with the -nogui flag. They're required for the Idle Detection feature, among other things. To check if
libX11 and libXext are installed, open a Terminal and run the following commands. If they are installed, then the path
to the libraries will be printed out by these commands.
ldconfig -p | grep libX11
ldconfig -p | grep libXext
If any of these libraries are missing, then please contact your local system administrator to resolve this issue. Here is
an example assuming you have root access, using YUM to install them on your system:
sudo -s
yum install redhat-lsb
yum install libX11
yum install libXext
Note that if you are choosing a machine to run Pulse, you should be aware that non-Server editions of Windows have a TCP/IP connection limitation of 10 new connections per second. If your render farm consists of more than 10 render nodes, it is very likely that you'll hit this limitation every now and then (and the odds continue to increase as the number of machines increases). This is a limitation of the operating systems, and isn't something that we can work around, so we recommend using a Server edition of Windows, or a different operating system like Linux.
2.2 Licensing
See the License Server Documentation for more information on installing and configuring the License Server.
2.3.2 Installation
While the Repository can be installed on any operating system, the Repository installer is only available for Windows,
Linux, and Mac OS X. However, the machine that you run the Repository installer on doesn't have to be the same machine you're installing the Repository to. For example, if you have an existing share on a FreeBSD server or a NAS
system, you can run the Repository installer on Windows, Linux, or Mac OS X and choose that share as the install
location.
To install the Repository, simply run the appropriate installer for your operating system and follow the steps. This
procedure is identical for all operating systems. The Repository installer also supports silent installations.
When choosing the Installation Directory, you can choose either a local path on the current machine, or the path to an
existing network share. Note that if you choose a local path, you must ensure that path is shared on the network so that
the Clients can access it. Do not install over an existing installation unless it's the same major version, or there could be unexpected results.
If you're installing over an existing Repository installation, all previous binaries, plug-ins, and scripts will be backed
up prior to being overwritten. After the installation is complete, you can find these backed up files in the Backup folder
in the Repository installation root. Note that installing over an existing repository is only supported for repairing a
damaged repository, or for performing a minor upgrade. Major upgrades require a fresh repository installation. See
the Upgrading or Downgrading Deadline Documentation for more information.
After choosing the installation directory, you will be asked to install the MongoDB Database, or connect to an existing
one. If you choose to install the MongoDB Database, you will be asked to choose an installation location and a port
number. It is highly recommended that you choose a local directory to install the Database.
Note that Deadline 7 requires a newer version of the MongoDB database application than the one shipped with
Deadline 6. However, this newer version is backward compatible with Deadline 6. So if you are installing the
MongoDB database application to a machine that already has a Deadline 6 database installed, you can just
install it over top of the existing Deadline 6 database installation.
Next, you need to specify the Database Settings so that the installer can set up the Database. These settings will also
be used by the Clients to connect to the database. The following are required:
Database Server: The host name or the IP address of the machine that the MongoDB database is running on.
If desired, you can specify multiple entries and separate them with semicolons. There are a couple of reasons to specify multiple entries:
You have machines on different subnets that need to access the database differently (i.e. machines in the
cloud might use a different host name than machines on the local network).
Some machines need to resolve the database machine by its host name, and others need to use its IP
address.
Note that if there are IP addresses listed that cannot be resolved, the Deadline Command application can run slower on Linux and OS X Clients because it won't exit until the connection attempts for those IP addresses time out.
Database Port: The port that the MongoDB database is listening on.
Database Name: The name of the Database. If you are setting up a new Database, you can leave this as the
default. If you are connecting to an existing Database, make sure to enter the same name you used when you
initially set up the Database.
Replica Set: If you set up your MongoDB database manually and it is part of a Replica Set, specify the Replica
Set Name here. If you don't have a Replica Set, just leave this blank.
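To illustrate the multi-entry format, a client could try each semicolon-separated entry in order until one resolves. This is a hypothetical sketch, not Deadline's actual connection code:

```python
def candidate_hosts(server_entry: str):
    # The Database Server setting accepts multiple entries separated by
    # semicolons, e.g. a host name for local machines and an IP address
    # for machines in the cloud. Empty entries and whitespace are ignored.
    return [host.strip() for host in server_entry.split(";") if host.strip()]

print(candidate_hosts("render-db;192.168.0.10"))  # ['render-db', '192.168.0.10']
```

A client would then attempt each entry in turn, which is why unresolvable entries can slow startup: every dead entry must time out before the next one is tried.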
When you press Next, the installer will try to connect to the database using these settings to configure it. This can take
a minute or two. If an error occurs, you will be prompted with the error message. If the setup succeeds, you can then
proceed with the installation of the Repository.
To run in silent mode, pass the --mode unattended command line option to the installer. For example, on Windows:
DeadlineRepository-X.X.X.X-windows-installer.exe --mode unattended
To get a list of all available command line options, pass the --help command line option to the installer. For example, on Mac OS X:
/DeadlineRepository-X.X.X.X-osx-installer.app/Contents/MacOS/installbuilder.sh --help
Note that there are a few Repository installer options that are only available from the command line, which you can view when running with --help. These options include:
--backuprepo: If enabled, many folders in the Repository will be backed up before overwriting them (this is enabled by default).
--dbauth: If enabled, Deadline will use the given user and password to connect to MongoDB (if authentication is enabled on your database).
--dbuser: The user name to connect to MongoDB if authentication is enabled.
--dbpassword: The password to connect to MongoDB if authentication is enabled.
--dbsplit: If enabled, the database collections will be split into separate databases to improve performance (this is enabled by default).
Database Config File
A file called config.conf is installed to the data directory in the database installation folder. This file is used to configure
the MongoDB database, and can be modified to add or change functionality. This is what you will typically see by
default:
#MongoDB config file

#where to log
systemLog:
  destination: file
  path: C:/DeadlineDatabase7/data/logs/log.txt
  quiet: true
  #verbosity: <integer>

#port for mongoDB to listen on
#uncomment the ipv6 and REST options below to enable them
net:
  port: 27070
  #ipv6: true
  #http:
  #  RESTInterfaceEnabled: true

#where to store the data
storage:
  dbPath: C:/DeadlineDatabase7/data

#enable sharding
#sharding:
#  clusterRole:
#  configDB:

#set up a replica set with the given replica set name
#replication:
#  replSetName:

#enable authentication
#security:
#  authorization: enabled
After making changes to this file, simply restart the mongod process for the changes to take effect. See the MongoDB
Configuration File Options for more information on the available options.
Manual Database Installation
The Repository installer installs MongoDB with the bare minimum settings required for Deadline to operate. Manually
installing the Database might be preferable for some because it gives you greater control over things like authentication,
and allows you to create sharded clusters or replica sets for backup.
If you wish to install MongoDB manually, you can download MongoDB from the MongoDB Downloads Page. Once
MongoDB is running, you can then run the Repository installer, and choose to connect to an existing MongoDB
Database. Here are some helpful links for manually installing the MongoDB database:
Installing MongoDB
Enabling Authentication
Replication
Sharding
MongoDB also has a management system called MMS. It's a cloud service that makes it easy to provision, monitor, back up, and scale your MongoDB database. Here are some helpful links for setting up and using MMS:
Getting Started
Add MongoDB Servers to MMS
Install the Automation Agent
The Automation Agent mentioned above makes it possible to set up your MongoDB database from a web interface, and
easily configure which MongoDB servers are replica sets or shards. It also allows you to easily upgrade the version of
your MongoDB database. Here are some additional links for how you can use the Automation Agent:
Deploy a Replica Set
Deploy a Sharded Cluster
Deploy a Standalone MongoDB Instance
Change the MongoDB Version
Note though that as of this writing, the Automation Agent is only available for Linux and Mac OS X.
Database Resource Limits
Linux and Mac OS X systems impose a limit on the number of resources a process can use, and these limits can
affect the number of open connections to the database. It is important to be aware of these limits, and make sure they
are set appropriately to avoid unexpected behaviour. Note that MongoDB will allocate 80% of the system limit for
connections, so if the system limit is 1024, the maximum number of connections will be 819.
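The 80% rule above can be sketched as a quick calculation (an illustration, not Deadline or MongoDB code):

```python
def max_mongo_connections(system_limit: int) -> int:
    # MongoDB allocates 80% of the system's file descriptor limit
    # for incoming connections; the remainder is kept for its own use.
    return int(system_limit * 0.8)

print(max_mongo_connections(1024))  # 819
```

So raising the system limit (ulimit -n) directly raises the number of Clients that can connect concurrently.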
If you choose a Linux system to host the database, make sure the system limits are configured properly to avoid connection issues. See MongoDB's Linux ulimit Settings documentation for more information, as well as the recommended
system limits to use.
You can check your current Linux/OSX ulimit settings in a terminal shell:
#overall ulimit settings on the machine
ulimit -a
#number of open files allowed
ulimit -n
MongoDB provides these Recommended ulimit Settings for optimal performance of your database. Note, you must
restart the Deadline Database daemon after changing these ulimit settings.
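On most Linux distributions, persistent per-user limits are raised in /etc/security/limits.conf. The account name and values below are placeholders; substitute the account that runs the mongod process and the limits MongoDB recommends:

```
# /etc/security/limits.conf (illustrative values)
mongod  soft  nofile  64000
mongod  hard  nofile  64000
mongod  soft  nproc   32000
mongod  hard  nproc   32000
```

Changes to this file take effect on the next login session for that account, so restart the database daemon afterwards.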
If you choose a Mac OS X system to host the database, and you use the Repository installer to install the database,
the resource limits will be set to 1024. These limits can be adjusted later by manually editing the HardResourceLimits
and SoftResourceLimits values in /Library/LaunchDaemons/org.mongodb.mongod.plist after the Repository installer
has finished.
On the Protocol and Ports page, choose TCP, specify the port that you chose for the database during the install, and then press Next. Then on the Action page, choose Allow The Connection and press Next.
On the Profile page, choose the networks that this rule applies to, and then press Next. Then on the Name page, specify a name for the rule (for example, MongoDB Connection), and then press Finish.
Linux
On RedHat and CentOS, the following commands should allow incoming connections to the Mongo database if iptables are being used. Just make sure to specify the port that you chose for the database during the install.
sudo iptables -I INPUT 1 -p tcp --dport 27070 -j ACCEPT
sudo ip6tables -I INPUT 1 -p tcp --dport 27070 -j ACCEPT
Ubuntu has no firewall installed by default, and we have not yet tested Fedora Core's FirewallD.
Mac OS X
Mac OS X has its firewall disabled by default, but if enabled, it is possible to open ports for specific applications. Open up System Preferences, choose the Security & Privacy option, and click on the Firewall tab.
Press the Firewall Options button to open the firewall options. Press the [+] button and choose the path to the mongod
application, which can be found in the database installation folder in mongo/application/bin (for example, /Applications/Thinkbox/DeadlineDatabase7/mongo/application/bin/mongod). Then click OK to save your settings.
Right-click on the Repository folder and select Properties from the menu.
Select the Security tab.
If there is already an Everyone item under Group or user names, you can skip the next two steps.
Click on the Add button.
In the resulting dialog, type Everyone and click OK.
Second, you need to share the Repository folder. Note that the images shown here are from Windows XP, but the
procedure is basically the same for any version of Windows.
On the machine where the Repository is installed, navigate to the folder where it is installed using
Windows Explorer.
Right-click on the Repository folder and select Properties from the menu. If you're unable to see the Sharing tab, you may need to disable Simple File Sharing in the Explorer Folder Options.
Select the option to Share This Folder, then specify the share name.
Click the Permissions button.
Give Full Control to the Everyone user.
Press OK on the Permissions dialog and then the Properties dialog.
Linux
Since the Clients expect full read and write access to the repository, it's recommended to use a single user account to mount shares across all machines. It is possible to add particular users to a deadline group, but you will need to experiment with that on your own.
So for both of the sharing mechanisms we explain below, you'll need to create a user and a group named deadline. They don't need a login or credentials; we just need to be able to set files to be owned by them and for their account to show up in /etc/passwd. To do this, use the useradd command:
sudo useradd -d /dev/null -c "Deadline Repository User" -M deadline
This should create a user named deadline with no home folder, and a fancy comment. The account login should also be disabled, meaning your standard users can't ssh or ftp into your file server using this account. Set a password using sudo passwd deadline if you need your users to log in as deadline using ftp or ssh.
And finally, have the Repository owned by this new user and group:
sudo chown -R deadline:deadline /path/to/repository
sudo chmod -R 777 /path/to/repository
Now you're ready to set up your network sharing protocol. There are many ways this can be done, and this just covers a few of them.
Samba Share
This is an example entry in the /etc/samba/smb.conf file:
[DeadlineRepository]
path = /path/to/repository
writeable = Yes
guest ok = Yes
create mask = 0777
force create mode = 0777
force directory mode = 0777
unix extensions = No
NFS Share
The simplest thing that could possibly work. Note that this is not the most secure thing that could possibly work:
For Linux and BSD, open up /etc/exports as an administrator, and make one new export:
/path/to/repository    192.168.2.0/24(rw,all_squash,insecure)
Any time you change the exports file, you'll need to issue the same command, but replace start with reload.
There is an excellent tutorial here as well: https://help.ubuntu.com/community/SettingUpNFSHowTo
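For reference, starting and reloading the NFS service usually looks something like the following. The service name varies by distribution, so treat these as illustrative commands rather than exact ones:

```
# service name is distribution-specific (e.g. nfs-kernel-server on Ubuntu/Debian, nfs on RHEL)
sudo service nfs-kernel-server start    # first-time start
sudo service nfs-kernel-server reload   # after editing /etc/exports

# alternatively, re-export all entries without touching the service:
sudo exportfs -ra
```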
Mac OS X
First, you need to configure the Repository folder permissions. Note that the images shown here are from Leopard
(10.5), but the procedure is basically the same for any version of Mac OS X.
On the machine where the Repository is installed, navigate to the folder where it is installed using
Finder.
Right-click on the Repository folder and select Get Info from the menu.
Expand the Sharing & Permissions section, and unlock the settings if necessary.
Give everyone Read & Write privileges.
While probably not necessary, also give admin Read & Write privileges.
If you prefer to set the permissions from the Terminal, run the following commands:
$ chown -R nobody:nogroup /path/to/repository
$ chmod -R 777 /path/to/repository
Now you can share the folder. There are many ways this can be done, and this just covers a few of them.
Using System Preferences
Note that the images shown here are from Leopard (10.5), but the procedure is basically the same for any version of
Mac OS X.
Open System Preferences, and select the Sharing option.
Make sure File Sharing is enabled, and then add the Repository folder to the list of shared folders.
Under Users, give everyone Read & Write privileges.
If sharing with Windows machines, press the Options button and make sure the Share files and folders using SMB (Windows) option is enabled.
Samba Share
Interestingly, Mac OS X uses samba as well. Apple just does a good job of hiding it. To create a samba share in Mac OS X, paste this at the bottom of /etc/smb.conf:
[DeadlineRepository]
path = /path/to/repository
writeable = Yes
guest ok = Yes
create mask = 0777
force create mode = 0777
force directory mode = 0777
unix extensions = No
2.3.5 Uninstallation
The Repository installer creates an uninstaller in the folder that you installed the Repository to. To uninstall the
Repository, simply run the uninstaller and confirm that you want to proceed with the uninstallation.
Note that if you installed the Database with the Repository installer, it will be uninstalled as well. If you chose to
connect to a Database that you manually installed, the Database will be unaffected.
Command Line or Silent Uninstallation
The Repository uninstaller can be run in command line mode or unattended mode on each operating system.
To run in command line mode, pass the --mode text command line option to the installer. For example, on Linux:
./uninstall --mode text
To run in silent mode, pass the --mode unattended command line option to the installer. For example, on Windows:
uninstall.exe --mode unattended
To get a list of all available command line options, pass the --help command line option to the installer. For example, on Mac OS X:
./uninstall --help
Pulse: An optional mini server application that performs maintenance operations on the farm, and manages
more advanced features like Auto Configuration, Power Management, Slave Throttling, Statistics Gathering,
and the Web Service. If you choose to run Pulse, it only needs to be running on one machine.
Balancer: An optional Cloud-controller application that can create and terminate Cloud instances based on
things like available jobs and budget settings. If you choose to run Balancer, it only needs to be running on one
machine.
Note that the Slaves and the Balancer applications are the only Client applications that require a license.
Configure the necessary Client Setup and Launcher Setup settings. The following Client settings are available:
Repository Directory: This is the shared path to the Repository. Note that if you are unable to browse to your Repository shared path via your drive mapping in the install wizard, this is most likely due to a problem with Windows UAC elevation. Essentially, even if the currently logged in user has the network drive configured, that configuration is not available in the elevated scope, as you are technically another user there. This is handled by the OS, so we cannot do anything about it on our side. However, possible workarounds are to simply select the UNC path that the drive is mapped to instead, OR to log on to the system as the user account with elevated permissions (local administrator, for example) and then run the Client install wizard.
License Server: The license server entry should be in the format @SERVER, where SERVER is the host name
or IP address of the machine that the license server is running on. If you configured your license server to use a
specific port, you can use the format PORT@SERVER. For example, @lic-server or 27000@lic-server. If you
are running Deadline in LICENSE-FREE MODE, or you have not set up your license server yet, you can leave
this blank for now.
The following Launcher settings are available:
Launch Slave When Launcher Starts: If enabled, the Slave will launch whenever the Launcher starts.
Install Launcher As A Service: Enable this if you wish to install the Launcher as a service. The service must run under an account that has network access. See the Windows Service documentation below for more information.
After configuring the Client and Launcher settings, press Next to continue with the installation.
Linux
Note that on Linux, the Deadline applications have dependencies on some libraries that are installed with the lsb
(Linux Standard Base) package. To ensure you have all the dependencies you need, we recommend installing the full
lsb package. In addition, the libX11 and libXext libraries must be installed on Linux for the Deadline applications to run, even if running them with the -nogui flag. They're required for the Idle Detection feature, among other things. To check if
libX11 and libXext are installed, open a Terminal and run the following commands. If they are installed, then the path
to the libraries will be printed out by these commands.
ldconfig -p | grep libX11
ldconfig -p | grep libXext
If any of these libraries are missing, then please contact your local system administrator to resolve this issue. Here is
an example assuming you have root access, using YUM to install them on your system:
sudo -s
yum install redhat-lsb
yum install libX11
yum install libXext
Start the installation process by double-clicking on the Linux Client Installer. The Linux Client installer also supports
silent installations with additional options.
Configure the necessary Client Setup and Launcher Setup settings. The following Client settings are available:
Repository Directory: This is the shared path to the Repository.
License Server: The license server entry should be in the format @SERVER, where SERVER is the host name
or IP address of the machine that the license server is running on. If you configured your license server to use a
specific port, you can use the format PORT@SERVER. For example, @lic-server or 27000@lic-server. If you
are running Deadline in LICENSE-FREE MODE, or you have not set up your license server yet, you can leave
this blank for now.
The following Launcher settings are available:
Launch Slave When Launcher Starts: If enabled, the Slave will launch whenever the Launcher launches.
Install Launcher As A Daemon: Enable this if you wish to install the Launcher as a daemon. You can also choose to run the daemon as a specific user. If you leave the user blank, it will run as root instead. See the Linux Daemon documentation below for more information.
After configuring the Client and Launcher settings, press Next to continue with the installation.
Mac OSX
Start the installation process by double-clicking on the Mac Client Installer. The Mac Client installer also supports
silent installations with additional options.
Configure the necessary Client Setup and Launcher Setup settings. The following Client settings are available:
Repository Directory: This is the shared path to the Repository. Deadline isn't able to understand paths starting with afp:// or smb://, so point the installer to the Repository path mounted under /Volumes.
License Server: The license server entry should be in the format @SERVER, where SERVER is the host name
or IP address of the machine that the license server is running on. If you configured your license server to use a
specific port, you can use the format PORT@SERVER. For example, @lic-server or 27000@lic-server. If you
are running Deadline in LICENSE-FREE MODE, or you have not set up your license server yet, you can leave
this blank for now.
The following Launcher settings are available:
Launch Slave When Launcher Starts: If enabled, the Slave will launch whenever the Launcher launches.
Install Launcher As A Daemon: Enable this if you wish to install the Launcher as a daemon. You can also choose to run the daemon as a specific user. If you leave the user blank, it will run as root instead. See the Mac OSX Daemon documentation below for more information.
After configuring the Client and Launcher settings, press Next to continue with the installation.
To run in command line mode, pass the --mode text command line option to the installer. For example, on Linux:
./DeadlineClient-X.X.X.X-linux-x64-installer.run --mode text
To run in silent mode, pass the mode unattended command line option to the installer. For example, on Windows:
DeadlineClient-X.X.X.X-windows-installer.exe --mode unattended
To get a list of all available command line options, pass the help command line option to the installer. For example,
on OSX:
/DeadlineClient-X.X.X.X-osx-installer.app/Contents/MacOS/installbuilder.sh --help
Note that there are quite a few Client installer options that are only available from the command line, which you can
view when running the --help command. These options include:
--configport: The port that the Client uses for Auto Configuration.
--slavestartupport: The port that the Slaves use to ensure that only one slave is initializing at a time.
--slavedatadir: The local path where the Slave temporarily stores plugin and job data from the Repository during
rendering (if not specified, the default location is used).
--noguimode: If enabled, the Launcher, Slave, and Pulse will run without a user interface on this machine.
--killprocesses: If enabled, the installer will kill any running Deadline processes before proceeding with the
installation (Windows only).
--launcherport: The Launcher uses this port for Remote Administration, and it should be the same on all Client
machines.
--launcherstartup: If enabled, the Launcher will automatically launch when the system logs in (non-service
mode on Windows only).
--restartstalled: If enabled, the Launcher will try to restart the Slave application on this machine if it stalls.
--autoupdateoverride: Overrides the Auto Update setting for this client installation (leave blank to use the value
specified in the Repository Options).
--launcherservicedelay: If the Launcher is running as a service or daemon, this is the number of seconds it waits
after starting up before launching other Deadline applications.
First, the default user for a service has no access to network resources, so while the Launcher service will run without
any issues, neither the Slave nor Pulse applications will be able to access the Repository. To avoid network access
issues, you must configure the service to run as a user with network privileges. Typical desktop users have this
permission, but check with your system administrator to find out which account is best for this application.
Another issue presented by the service context is that there is no access to the default set of mapped drives. Applications
will either need to map drives for themselves, or make use of UNC paths. While Deadline supports Automatic
Drive Mapping, the SMB protocol does not allow sharing a resource between two users on the same machine. This
means that mapping drives or accessing a resource with different credentials may fail when running as a service on
a machine which already requires access to the Repository.
There is also an issue with hardware-based renderers. Starting with Windows Vista, services now run in a virtualized
environment which prevents them from accessing hardware resources. Because the renderer will run in the context of
a service, hardware-based renderers will typically fail to work.
Linux Daemon
When installing the daemon, the Client installer creates the appropriate deadlinelauncherservice script in /etc/init.d.
When running as a daemon on Linux, the Launcher will run without displaying its system tray icon. If the Slave or
Pulse application is started through the Launcher while it is in this mode, it will also run without a user interface.
This is useful when running Deadline on a Linux machine that doesn't have a desktop environment.
Mac OSX Daemon
When installing the daemon, the Client installer creates the appropriate com.thinkboxsoftware.deadlinelauncher.plist
file in /Library/LaunchDaemons.
When running as a daemon on Mac OSX, the Launcher will run without displaying its system tray icon. If the Slave
or Pulse application is started through the Launcher while it is in this mode, it will also run without a user interface.
The other option is to set up Auto Configuration so that the Client automatically pulls the license server information.
2.4.6 Uninstallation
The Client installer creates an uninstaller in the folder that you installed the Client to. To uninstall the Client, simply
run the uninstaller and confirm that you want to proceed with the uninstallation.
To run in silent mode, pass the --mode unattended command line option to the uninstaller. For example, on Windows:
uninstall.exe --mode unattended
To get a list of all available command line options, pass the --help command line option to the uninstaller. For example,
on Mac OS X:
./uninstall --help
The Deadline Client Bin Directory page shows what DEADLINE_PATH is currently set to. This value is originally
set by the Client installer, and is used by the submission scripts to find the Client's bin directory so that it can find the
Repository and submit jobs. You can change the DEADLINE_PATH value here if it's incorrect or if it doesn't exist,
and the submitter installer will give you the option to make the change permanent.
The next page will show the Repository directory that the Client is currently connected to, which is where the
submission scripts are installed from. If this path is incorrect, you can change it here.
Select the components you wish to install (the installer will try to auto select the versions it detects), and then verify
the install location for each one.
After configuring these, press Next to continue with the installation.
To run in silent mode, pass the --mode unattended command line option to the installer. For example, on Windows:
Maya-submitter-windows-installer.exe --mode unattended
To get a list of all available command line options, pass the --help command line option to the installer. For example,
on OSX:
./Maya-submitter-osx-installer.app/Contents/MacOS/installbuilder.sh --help
Note that there are quite a few Submitter installer options that are only available from the command line, which you
can view when running the --help command. These options include:
--enable-components: Select the components which you would like to enable (programs installed in default
locations will be auto-selected).
--disable-components: Select the components which you would like to disable (programs installed in default
locations will be auto-selected).
--destDir###: The destination directories for the components (defaults to the detected install locations).
An example batch script that puts these all together:
@echo off
.\Maya-submitter-windows-installer.exe --mode unattended --disable-components Maya2014
.\3dsMax-submitter-windows-installer.exe --mode unattended
--enable-components 3dsMax2011,3dsMax2015
--disable-components 3dsMax2012,3dsMax2013,3dsMax2014
--destDir2011 "C:\3dsMax2011_64"
.\Nuke-submitter-windows-installer.exe --mode unattended
This script installs the submitters for Maya (ignoring Maya 2014), 3ds Max (2011 and 2015 only, with 2011 in an
unusual directory), and Nuke (default settings).
Important Notice When Upgrading From 7.0 to 7.1: Due to a change in the Slave Scheduling settings in the
database, you should avoid editing the Slave Scheduling settings from a machine running version 7.1 until all machines
have upgraded to 7.1. Otherwise, you will get the following error when the Launcher tries to auto-upgrade. The
workaround is to delete all Slave Scheduling groups in the Slave Scheduling settings, and then recreate them once all
machines have upgraded to 7.1.
An error occurred while deserializing the SlaveSchedulingGroups property of class
Deadline.Configuration.DeadlineNetworkSettings: Element 'AllSlaves' does not match
any field or property of class Deadline.Slaves.SlaveSchedulingGroup.
(System.IO.FileFormatException)
If the slaves are currently rendering and you don't want to disrupt them, you can choose the option to Restart Slaves
After Current Task instead. This option will allow the Slaves to upgrade or downgrade after they finish rendering
their current task, to prevent the loss of any render time. See the Remote Control documentation for more information.
After restarting the Slaves, several Slaves may appear offline, or a message may pop up saying that certain Slaves did
not respond. This can occur because all the Slaves are trying to upgrade or downgrade at once. Wait a little bit, and
eventually all the Slaves should come back online.
Because the Clients use the dbConnect.xml file in the Repository to determine the database connection settings, you
don't have to reconfigure the Clients to find the new database.
2. Shut down all the Slave applications running on your render nodes. You don't want them making changes during
the move.
3. Copy the Repository folder from the original location to the new location.
4. Redirect all your Client machines to point to the new Repository location.
5. Start up the Slaves and ensure that they can connect to the new Repository location.
6. Delete the original Repository (optional).
As an alternative to step (4), you can configure your share name (if the new Repository is on the same machine) or
your DNS settings (if the new Repository is on a different machine) so that the new Repository location has the same
path as the original. This saves you the hassle of having to reconfigure all of your Client machines.
Specify the path to the old Repository that you want to import the settings from, and then choose which settings you
want to import and press the Import Settings button. Note that all passwords in Repository Options (Super User, SMTP,
Mapped Drives) and Users (Web Service, Windows Login) will not be transferred, so these must be set manually after
the transfer is complete.
Also note that this feature only allows you to import settings from Deadline 6 or later. An unsupported Python
script, DeadlineV5Migration.py, attempts to migrate Deadline v5.x customers over to Deadline v6.x. It can be found
together with other useful example scripts on our GitHub site. Please note the disclaimer before executing this script
in your Deadline queue.
CHAPTER
THREE
GETTING STARTED
RenderExecutable2016_0=C:\\Program Files\\Autodesk\\Maya2016\\bin\\MayaBatch.exe;/usr/autodesk/maya2
RenderExecutable2016_5=C:\\Program Files\\Autodesk\\Maya2016.5\\bin\\MayaBatch.exe;/usr/autodesk/may
The MayaBatch.dlinit file is automatically written to as you commit UI changes in the Plugin Configuration dialog
in the Monitor. There is no need to manually edit these text files, although it is possible. The ../<DeadlineRepository>/plugins/MayaBatch/MayaBatch.param file is an optional file that is used by the Plugin Configuration dialog in
the Monitor. It declares properties that the Monitor uses to generate a user interface for modifying custom settings in
the MayaBatch.dlinit file.
Typically, there are 3 functions in our scripting API which help identify the correct application executable to
return as the Render Executable, depending on which Build option is selected in your in-app or Monitor
submission UI (see above for an example): None (default), 32bit, or 64bit. These functions check the actual
bitness of the application binary executable to ensure a 32bit or 64bit application is used if applicable:
FileUtils.SearchFileList( string fileList ): Searches a semicolon-separated list of files (fileList) for the first
one that exists. For relative file paths in the list, the current directory and the PATH environment variable
will be searched. Returns the first file that exists, or an empty string if no file is found.
FileUtils.SearchFileListFor32Bit( string fileList ): Searches a semicolon-separated list of files (fileList) for
the first 32bit file that exists. For relative file paths in the list, the current directory and the PATH environment
variable will be searched. Returns the first file that exists, or an empty string if no file is found.
FileUtils.SearchFileListFor64Bit( string fileList ): Searches a semicolon-separated list of files (fileList) for
the first 64bit file that exists. For relative file paths in the list, the current directory and the PATH environment
variable will be searched. Returns the first file that exists, or an empty string if no file is found.
The pool and group that the job belongs to. See the Job Scheduling documentation for more information
on how these options affect job scheduling.
Priority
A job can have a numeric priority ranging from 0 to 100, where 0 is the lowest priority and 100 is the
highest priority. See the Job Scheduling documentation for more information on how this option affects
job scheduling.
Task Timeout and Auto Task Timeout
The number of minutes a slave has to render a task for this job before an error is reported and the task is
requeued. Specify 0 for no limit. If the Auto Task Timeout is properly configured in the Repository Options, then enabling the Auto Task Timeout option will allow a task timeout to be automatically calculated
based on the render times of previous frames for the job.
Concurrent Tasks and Limiting Tasks To A Slaves Task Limit
The number of tasks that can render concurrently on a single slave. This is useful if the rendering application only uses one thread to render and your slaves have multiple CPUs. Caution should be used when
using this feature though if your renders require a large amount of RAM.
If you limit the tasks to a slaves task limit, then by default, the slave wont dequeue more tasks then it has
CPUs. This task limit can be overridden for individual slaves by an administrator. See the Slave Settings
documentation for more information.
Machine Limit and Machine Whitelists/Blacklists
Use the Machine Limit to specify the maximum number of slaves that can render your job at one time.
Specify 0 for no limit. You can also force the job to render on specific slaves by using a whitelist, or you
can avoid specific slaves by using a blacklist. See the Limit Documentation for more information.
Limits
The limits that your job must adhere to. See the Limit Documentation for more information.
Dependencies
Specify existing jobs that this job will be dependent on. This job will not start until the specified dependencies finish rendering.
On Job Complete
If desired, you can automatically archive or delete the job when it completes.
Submit Job As Suspended
If enabled, the job will submit in the suspended state. This is useful if you don't want the job to start
rendering right away. Just resume it from the Monitor when you want it to render.
Scene/Project/Data File (if applicable)
The file path to the Scene/Project/Data File to be processed/rendered as the job. The file needs to be in
a shared location so that the slave machines can find it when they go to render it directly. See Submit
Scene/Project File With Job below for a further option. Note, all external asset/file paths referenced by
the Scene/Project/Data File should be resolvable by your slave machines on your network.
Frame List
The list of frames to render. See the Frame List Formatting Options below for valid frame lists.
Frames Per Task
Also known as Chunk Size. This is the number of frames that will be rendered at a time for each job task.
Increasing the Frames Per Task can help alleviate some of the inherent overhead that comes with network
rendering, but if your frames take longer than a couple of minutes to render, it is recommended that you
leave the Frames Per Task at 1.
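The chunking this setting performs can be sketched in Python; this is an illustrative helper, not Deadline's internal code, and chunk_frames is a hypothetical name:

```python
def chunk_frames(frames, frames_per_task=1):
    """Split a flat list of frame numbers into task-sized chunks.

    With frames_per_task=1 (the recommended setting for long-running
    frames), every frame becomes its own task, maximizing parallelism.
    Larger values trade parallelism for less per-task startup overhead.
    """
    return [frames[i:i + frames_per_task]
            for i in range(0, len(frames), frames_per_task)]
```

For example, chunk_frames(list(range(1, 11)), 4) yields three tasks: frames 1-4, 5-8, and the remainder 9-10.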
Submit Scene/Project File With Job
If this option is enabled, the scene or project file you want to render will be submitted with the job, and
then copied locally to the slave machine during rendering. The benefit to this is that you have a copy of
the file in the state that it was in when it was submitted. However, if your scene or project file uses relative
asset paths, enabling this option can cause the render to fail when the asset paths can't be resolved.
Note, only the Scene/Project File is submitted with the job and ALL external/asset files referenced by the
Scene/Project File are still required by the slave machines.
If this option is disabled, the file needs to be in a shared location so that the slave machines can find
it when they go to render it directly. Leaving this option disabled is required if the file has references
(footage, textures, caches, etc) that exist in a relative location. Note though that if you modify the original
file, it will affect the render job.
3.2.6 Jigsaw
Jigsaw is a flexible multi-region rendering system for Deadline, and is available for 3ds Max, Maya, modo, and Rhino.
It can be used to render regions of various sizes for a single frame, and in 3ds Max and Maya, it can be used to track
and render specific objects over an animation.
Draft can then be used to automatically assemble the regions into the final frame or frames. It can also be used to
automatically composite re-rendered regions onto the original frame.
Jigsaw is built into the 3ds Max, Maya, modo, and Rhino submitters, and with the exception of 3ds Max, the Jigsaw
viewport will be displayed in a separate window.
The viewport can be used to create and manipulate regions, which will then be submitted to Deadline to render. The
available options are listed below.
General Options
These options are always available:
Add Region: Adds a new region.
Delete All: Deletes all the current regions.
Create From Grid: Creates a grid of regions to cover the full viewport. The X value controls the number of
columns and the Y value controls the number of rows.
Fill Regions: Automatically creates new regions to fill the parts of the viewport that are not currently covered
by a region.
Clean Regions: Deletes any regions that are fully contained within another region.
Undo: Undo the last change made to the regions.
Redo: Redo the last change that was previously undone.
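As a sketch of what Create From Grid computes, the following hypothetical Python helper tiles a viewport into X columns by Y rows; this is illustrative only, not Jigsaw's actual implementation:

```python
def create_from_grid(width, height, x, y):
    """Return (left, top, w, h) regions covering a width-by-height
    viewport with x columns and y rows.

    Integer division is balanced so the regions tile the viewport
    exactly even when the dimensions don't divide evenly.
    """
    regions = []
    for row in range(y):
        top = row * height // y
        h = (row + 1) * height // y - top
        for col in range(x):
            left = col * width // x
            w = (col + 1) * width // x - left
            regions.append((left, top, w, h))
    return regions
```

A 100x50 viewport split with X=4, Y=2 produces eight 25x25 regions whose areas sum to the full viewport.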
Selected Regions Options
These options are only available when one or more regions are selected.
Delete: Deletes the selected regions.
Split: Splits the selected regions into sub-regions based on the Tiles In X and Tiles In Y settings.
These options are only available when a single region is selected:
Clone: Creates a duplicate region parallel to the selected region in the specified direction.
Lock Position: If enabled, the region will be locked to its current position.
Enable Region: If disabled, the region will be ignored when submitting the job.
X Position: The horizontal position of the selected region, taken from the left.
Y Position: The vertical position of the selected region, taken from the top.
Width: The width of the selected region.
Height: The height of the selected region.
These options are only available when multiple regions are selected.
Merge: Combines the selected regions into a single region that covers the full area of the selected regions.
Zoom Options
These zoom options are always available:
Zoom Slider: Use the slider to zoom the viewport in and out. You can also use the mouse wheel to zoom in and
out, and you can click the mouse wheel down to pan the image if it doesn't fit in the viewport.
Reset Zoom: Resets the zoom within the viewport.
Fit Viewport: Zoom to see everything in the viewport.
Keep Fit: Zoom to see everything in the viewport, and force the viewport to not change. This allows the
viewport to scale when resizing the Jigsaw window.
Maya Options
These options are currently only available for Maya:
Reset Background: Gets the current viewport image from Maya.
Fit Selection: Create regions surrounding the selected items in the Maya scene.
Mode: The type of regions to be used when fitting the selected items. The options are Tight (fitting the minimum
2D bounding box of the points) and Loose (fitting the minimum 2D bounding box of the bounding box of the
object).
Padding: The amount of padding to add when fitting the selection (this is a percentage value that is added in
each direction).
Save Regions: Saves the region information directly into the Maya scene.
Load Regions: Loads the saved regions information from the Maya scene.
You can specify individual frames by separating each frame with a comma or a space:
5,10,15,20
5 10 15 20
You can specify a frame range by separating the start and end frame with a dash:
1-100
You can also render every Nth frame of a range by appending xN to it. For example, 1-100x5 will render every 5th
frame between 1 and 100 (1, 6, 11, 16, etc).
Specifying a Reverse Frame Sequence
You can specify a reverse frame range by separating the end frame and start frame with a dash:
100-1
To render every 5th frame between 1 to 100, then fill in the rest, you can specify one of the following:
1-100x5,1-100
1-100x5 1-100
To render every 10th frame between 1 to 100, then every 5th frame, then every 2nd frame, then fill in the rest, you can
specify one of the following:
1-100x10,1-100x5,1-100x2,1-100
1-100x10 1-100x5 1-100x2 1-100
To render in a mix of forward and reverse by different Nth frames, then fill in the rest in reverse, you can specify one
of the following:
100-1x10,0-100x5,100-1
100-1x10 0-100x5 100-1
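A minimal Python sketch of a parser for these frame list formats may help make the expansion rules concrete; this is illustrative only, not Deadline's parser, and edge cases such as negative frame numbers are ignored:

```python
def parse_frame_list(frame_list):
    """Expand a Deadline-style frame list string into frame numbers.

    Supports comma- or space-separated tokens: single frames ("5"),
    ranges ("1-100"), reverse ranges ("100-1"), and Nth-frame ranges
    ("1-100x5"). Duplicates from "fill in the rest" patterns are
    dropped, keeping first-seen order.
    """
    frames = []
    seen = set()
    for token in frame_list.replace(",", " ").split():
        step = 1
        if "x" in token:
            token, step_str = token.split("x")
            step = int(step_str)
        if "-" in token:
            start_str, end_str = token.split("-", 1)
            start, end = int(start_str), int(end_str)
            direction = 1 if end >= start else -1
            rng = range(start, end + direction, direction * step)
        else:
            rng = [int(token)]
        for frame in rng:
            if frame not in seen:
                seen.add(frame)
                frames.append(frame)
    return frames
```

For example, "1-100x5,1-100" first yields 1, 6, 11, ... and then fills in the remaining frames of 1-100, for 100 unique frames in total.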
NOTE: a job's frame range can be modified after a job has been submitted to Deadline by right-clicking on a job and
selecting Modify Frame Range....
If you're launching the Monitor for the first time on your machine, you will be prompted with a Login dialog. Simply
choose your user name or create a new one before continuing. Once the Monitor is running, you'll see your user name
in the bottom right corner. If this is the wrong user, you can log in as another user by selecting File -> Change User.
Note that if your administrator set up Deadline to lock the user to the system's login account, you will have to log off
of your system and log back in as the correct user.
Job Panel: This panel shows all the jobs in the farm.
Task Panel: When a job is selected, this will show all the tasks for the job.
Job Reports Panel: When a job is selected, this will show all reports (logs and errors) for the job.
These panels, and others, can be created from the View menu, or from the main toolbar. They can be re-sized, docked,
or floated as desired. This allows for a highly customized viewing experience which is adaptable to the needs of
different users. See the Panel Features documentation for instructions on how to create new panels in the Monitor.
The easiest way to find your jobs is to enable Ego-Centric Sorting in the job panel's drop-down menu, which can be
found in the upper-right corner of the panel. This keeps all of your jobs at the top of the job list, regardless of which
column the job list is sorted on. Then sort on the Submit Date/Time column to show your jobs in the order they were
submitted.
For more advanced filtering, use the Edit Filter option in the drop-down menu to filter on any column in the job list. If
you would like to save a filter for later use, use the Pinned Filters option in the drop-down menu to pin your filter. You
will then be able to select it later from the Pinned Filters sub-menu.
Finally, you can use the search box above the job list to filter your results even further.
If you prefer not to have the jobs grouped together in the job list, you can disable the Group Jobs By Batch Name
option in the Monitor and User Settings.
To modify the properties of your job, you can double-click on the job, or right-click on it and select Modify Properties.
Here you can change scheduling options such as priority and pool, as well as other general properties like the job
name. If you wish to limit which render nodes your job runs on, as well as the number of nodes that can render it
concurrently, you can do so on the Machine Limit page. Depending on the application you're rendering with, you
may see an extra page at the bottom of the properties list (with the name of the plug-in) that allows you to modify
properties which are specific to that application. More information on job properties can be found in the Job Properties
documentation.
If the Job or Task panels are not visible, see the Panel Features documentation for instructions on how to create new
panels in the Monitor.
When suspending a job, a confirmation message will appear that gives you the option to suspend the tasks for the
job that are currently rendering. If you disable this option, any tasks that are currently rendering will be allowed to
complete.
These are the states that a job can be in. They are color coded to make it clear which state the job is in.
Queued (white): No tasks for the job are currently being rendered.
Rendering (green): At least one task for the job is being rendered.
Completed (blue): All tasks for the job have finished rendering.
Suspended (gray): The job will not be rendered until it is resumed.
Pending (orange): The job is waiting on dependencies to finish, or is scheduled to start at a later time.
Failed (red): The job has failed due to errors. It must be resumed before it can be rendered again.
You may notice Queued or Rendering jobs turn slightly red or brown as they sit in the farm. This is an indication that
the job is reporting errors. See the Job Reports section further down for more information.
The Job panel's right-click menu also gives the option to delete or archive jobs. Both options will remove the jobs
from the farm, but archived jobs can be imported again for later use. You can import archived jobs from the File menu
in the Monitor. See the Job Archiving documentation for more information.
Note that you can resubmit it as a normal job or a maintenance job. Maintenance jobs are special jobs where each task
for the job will render the same frame(s) on a different machine in your farm. This is useful for performing benchmark
tests on your machines. When a maintenance job is submitted, a task will automatically be created for each slave, and
once a slave has finished a task, it will no longer pick up the job.
It's even possible to resubmit specific tasks as a new job, which can be done from the Task panel's right-click menu.
Note though that a Maintenance job can only be resubmitted from the Job panel.
Note that Tile jobs will have their own resubmission dialog, and only the Tile frame can be changed.
On Job Complete: When a job completes, you can auto-archive or auto-delete it. You can also choose to do
nothing when the job completes.
Job Is Protected: If enabled, the job can only be deleted by the job's user, a super user, or a user that belongs
to a user group that has permissions to handle protected jobs. Other users will not be able to delete the job, and
the job will also not be cleaned up by Deadline's automatic house cleaning.
Re-synchronize Auxiliary Files Between Tasks: If checked, all job files will be synchronized by the Slave
between tasks for this job. This can add significant network overhead, and should only be used if you are
manually editing any of the files that were submitted with the job.
Reload Plugin Between Tasks: If checked, the slave reloads all the plug-in files between tasks for the same
job.
Enforce Sequential Rendering: Sequential rendering forces a slave to render the tasks of a job in order. If
an earlier task is ever requeued, the slave won't go back to that task until it has finished the remaining tasks in
order.
Suppress Event Plugins: If enabled, this job will not trigger any event plugins while in the queue.
Job Is Interruptible: If enabled, tasks for this job can be interrupted during rendering by a job with a higher
priority.
Interruptible %: A task for this job will only be interrupted if the task progress is less than or equal to this
value.
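The interruption rule described by these two options can be sketched as a small Python check; this is a hypothetical helper of our own, not Deadline code:

```python
def task_can_be_interrupted(job_interruptible, interruptible_pct,
                            task_progress_pct, waiting_job_priority,
                            job_priority):
    """A rendering task may be interrupted only if the job allows it,
    a strictly higher-priority job is waiting, and the task's progress
    has not yet passed the Interruptible % cutoff."""
    return (job_interruptible
            and waiting_job_priority > job_priority
            and task_progress_pct <= interruptible_pct)
```

In other words, a task at 60% progress on a job with Interruptible % set to 50 will be allowed to finish even if a higher-priority job arrives.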
Timeouts
These properties affect how a job will time out. It is important to note that the Auto Task Timeout feature is based on
the Auto Job Timeout Settings in the Repository Options. The timeout is based on the render times of the tasks that
have already finished for this job, so this option should only be used if the frames for the job have consistent render
times.
You can modify the following options for the machine limit:
Slaves that can render this job simultaneously: The number of slaves that can render this job at the same
time.
Return Limit Stub When Task Progress % Reaches: If enabled, you can have a slave release its limit stub
when the current task it is rendering reaches the specified progress. Note that not all plug-ins report task progress,
in which case the machine limit stub will not be released until the task finishes rendering.
Whitelisted/Blacklisted Slaves: If slaves are on a blacklist, they will never try to render this job. If slaves are
on a whitelist, only those slaves will try to render this job. Note that an empty blacklist and an empty whitelist
are functionally equivalent, and have no impact on which machines the job renders on.
Load Machine List: Opens a file dialog to load a list of slaves to be used in the white/blacklist. The file (.txt)
should contain one machine name per line.
Save Machine List: Opens a file dialog to save the current white/blacklist. Each machine name will be written
on its own line.
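The white/blacklist semantics above, including the equivalence of an empty blacklist and an empty whitelist, can be summarized in a small Python sketch; this is a hypothetical helper, not Deadline code:

```python
def slave_may_render(slave, listed_slaves, is_whitelist):
    """Apply machine list semantics as described above (sketch).

    An empty list, whether white or black, allows every slave.
    A non-empty whitelist allows only the listed slaves; a non-empty
    blacklist allows every slave except the listed ones.
    """
    if not listed_slaves:
        return True
    if is_whitelist:
        return slave in listed_slaves
    return slave not in listed_slaves
```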
Limits
Here you can add or remove the limits that will affect your job. Limits are used to ensure floating licenses are used
correctly on your farm. To add a limit to your job, select the limit(s) you require from the limit list and
press the right arrow between the Limit List and the Required Limits. You can also drag and drop your selected
limits into or from the required limits, or just double-click a limit to move it from one list to the other.
Dependencies
Dependencies can be used to control when a job should start rendering. See the Job Dependency Options below for
more information.
Failure Detection
Here you can set how your job handles errors and determine when to fail a job.
You may attach the following scripts which will be executed at different times:
Pre Job Script: Executed before a job is run.
Post Job Script: Executed after a job has completed.
Pre Task Script: Executed before a task starts.
Post Task Script: Executed after a task has completed.
For more details on these script properties, see the Job Scripting section of the documentation.
Environment
When running a job, you are able to attach environment variables through the Environment tab. The environment
variables are specified as key-value pairs and are set on the slave machine running the job. You are able to specify
whether your job-specific environment variables will only be set while your job is rendering. All job-specific
environment variables will be removed when the job has finished running.
You are also able to set a custom plugin directory on this panel. This acts as an alternative directory to load your job's
plugin from. It is useful while creating and testing custom job plugins, or when you need one or more jobs to specifically
use a custom job plugin which is not stored in the Deadline Repository.
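The set-then-restore behavior of job-specific environment variables can be sketched with a Python context manager; this is illustrative only, as Deadline's actual mechanism is internal to the Slave:

```python
import os
from contextlib import contextmanager

@contextmanager
def job_environment(variables):
    """Set job-specific environment variables for the duration of a
    job, then restore the previous environment afterwards (sketch)."""
    saved = {key: os.environ.get(key) for key in variables}
    os.environ.update(variables)
    try:
        yield
    finally:
        # Remove variables that didn't exist before; restore the rest.
        for key, old_value in saved.items():
            if old_value is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = old_value
```

Inside the with block the variables are visible to the rendering process; once the job finishes, the environment is exactly as it was before.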
The Extra Info 0-9 properties can be renamed from the Jobs section of the Repository Options, and have corresponding
columns in the Job list that can be sorted on. The additional key/value pairs in the list at the bottom do not have
corresponding columns, and can be used to contain internal data that doesn't need to be displayed in the job list.
Submission Params
Here you can view and export the job info and plugin info parameters that were specified when the job was submitted.
The exported files can be passed to the Command application to manually re-submit the job. See the Manual Job
Submission documentation for more information.
To get a description of specific plug-in properties, just hover your mouse cursor over them in the properties dialog and
a tooltip will pop up with a description.
In the Asset tab, you can make this job dependent on asset files (textures, particle caches, etc). This job won't be able
to render on a slave unless it can access all the files listed here.
In the Script tab, you can make this job dependent on the results of the specified scripts.
Note that drag & drop dependencies will not work if you are holding down a modifier key (SHIFT, CTRL, etc). This
is to help avoid accidental drag & drops when selecting multiple jobs in the list.
If you would like to disable drag & drop dependencies, you can do so from the Monitor Options, which can be accessed
from the main toolbar. Note that if you change this setting, you will have to restart the Monitor for the changes to take
effect.
Dependency View
The Job Dependency View is used to visualize and modify your jobs and their dependencies. You can open
the Job Dependency View panel from the View menu in the Monitor.
The view will show your currently selected job and all nodes that are linked to it by dependencies. The job node colors
indicate the state of the job, while the asset nodes are yellow and the script nodes are purple.
Jobs are dependent on everything that has a connection to the Square Socket on their left side. Connections can be
made by dragging from the sockets on the nodes (square/circle) to the socket/main body of the other node. Connections
can be broken by either dragging the connection off of the node or by selecting the connection and pressing the delete
key. Note that changes made in the dependency view do not take effect until saved. If you have made changes and go
to close the dependency view, you will be notified that you have unsaved changes.
Additional job nodes can be added to the view by dragging them in from the job list (after locking the dependency
view first), or through the right-click menu. Asset and script nodes can also be added by dragging the file in from your
explorer/finder window, or through the right-click menu.
Dependencies can be tested by pressing the Test Dependency button in the toolbar. The results are represented by the
following colors:
Green: The dependency test has passed.
Red: The dependency test has failed.
Yellow: The job is frame dependent, and the dependency test for some of the frames has passed.
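The three result colours above can be thought of as a simple mapping from per-frame test results. The sketch below is an illustration of that logic, not Deadline's actual implementation:

```python
# Sketch: mapping per-frame dependency test results to the three colours
# described above. This is an illustration, not Deadline's implementation.

def dependency_colour(frame_results):
    # frame_results: dict of frame number -> bool (dependency satisfied).
    passed = sum(1 for ok in frame_results.values() if ok)
    if passed == len(frame_results):
        return "green"   # all frames passed
    if passed == 0:
        return "red"     # no frames passed
    return "yellow"      # frame dependent, partially passed
```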
All the available dependency view options can be found across the toolbar at the top of the view, and/or from the
view's right-click menu.
Save View: Saves the changes made to the dependency view for the selected job.
Selection Style: If off, all nodes and connections touched by the selection area will be selected. If on, only
nodes and connections that are fully contained by the selection area will be selected.
Minimap: Controls if the minimap is visible and if so, in which corner.
Elide Titles: Controls whether or not the titles of nodes should be elided and if so, in which direction.
Zoom All: Zoom the view to the point where the entire view (area that has been used) is visible.
Zoom Extents: Zoom the view to the point where all nodes currently in the view are visible.
Options in toolbar only:
Modify Job Details: This allows you to set which properties are visible in the nodes.
Test Dependencies: This allows you to test your dependencies.
Zoom Level: The current zoom level.
Options in right-click menu only:
Job Menu: If one or more jobs are selected, you can use the same job menu that is available in the job list.
Add Job: Choose a job to add to the dependency view.
Add Asset: Choose an asset file to add to the dependency view.
Add Script: Choose a script file to add to the dependency view.
Expand/Collapse: Expand or collapse the details in all nodes.
See the Frame List Formatting Options documentation for more information on options for formatting frame lists.
The following reports can be viewed from the Job Report panel:
Render Logs: These are the reports from tasks that rendered successfully.
Render Errors: These are the reports from tasks that failed to render.
Event Logs: These are the reports from Events that were handled successfully.
Event Errors: These are the reports from Events that raised errors.
Requeues: These are reports explaining why tasks were requeued.
You can use the Job Report panel's right-click menu to save reports as files to send to Deadline Support. You can also
delete reports from this menu. Finally, if a particular Slave is reporting lots of errors, you can blacklist it from
this menu (or remove it from the job's whitelist).
In addition to viewing job reports, you can also view the job's history. The History window can be brought up from
the Job panel's right-click menu by selecting the Job History option.
When viewing the output for a job, the Monitor will typically open the image file in the default application on the
machine. You can configure the Monitor to use specific image viewer applications in the Monitor Options, which can
be accessed from the main toolbar.
Finally, some jobs will support the ability to scan completed tasks for a job to see if any output is missing or below an
expected file size. The Scan For Missing Output window can be opened by right-clicking on a job and selecting Job
Output -> Scan For Missing Output. If any missing output is detected, or the output file is smaller than the Minimum
File Size specified, you are given the option to requeue those tasks (simply place a check mark beside the tasks to
requeue).
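The check itself is straightforward: a task is flagged when its expected output file is missing or smaller than the minimum size. The sketch below illustrates the idea; the file paths and threshold are made up:

```python
# Sketch of the "scan for missing output" check: flag a task when its
# expected output file is absent or under a minimum size. The paths and
# threshold here are illustrative, not tied to any real job.
import os

def tasks_to_requeue(expected_files, min_size_bytes):
    flagged = []
    for task_id, path in expected_files.items():
        if not os.path.isfile(path) or os.path.getsize(path) < min_size_bytes:
            flagged.append(task_id)
    return flagged
```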
Typically, this zip file is placed in the jobsArchived folder in the Repository. However, when manually archiving a
job, you have the option to choose an alternative archive location.
By default, it will save the archive to the jobsArchived folder in the Repository. However, you can choose a different
folder to archive the job. You can also choose whether or not to delete the job from the database after archiving it.
One case where you might not want to delete it is if you are archiving a job to send to Deadline Support for testing
purposes.
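Archiving a job essentially packs the job's files into a single zip. The following sketch shows the general idea of zipping a folder while preserving relative paths; the folder layout is illustrative and this is not Deadline's actual archiving code:

```python
# Sketch: packing a job folder into a zip archive, similar in spirit to
# what archiving a job does. The folder layout is illustrative; this is
# not Deadline's actual archiving code.
import os
import zipfile

def archive_job(job_dir, zip_path):
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(job_dir):
            for name in files:
                full = os.path.join(root, name)
                # Store paths relative to the job folder.
                zf.write(full, os.path.relpath(full, job_dir))
    return zip_path
```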
If the Job panel is not visible, see the Panel Features documentation for instructions on how to create new panels in
the Monitor.
Administrators can also configure Deadline to automatically archive all jobs after they have finished rendering and
place them in the jobsArchived folder in the Repository. This can be done in the Job Settings section of the Repository
Options.
Job List
Enable Drag & Drop Dependencies: If enabled, you can drag jobs and drop them on other jobs to set dependencies. Note that you must restart the Monitor for this setting to take effect. See the Controlling Jobs
documentation for more information on setting dependencies this way.
Show Task States In Job Progress Bar: If enabled, the job progress bars will show the states of all the tasks
for the job.
Group Jobs By Batch Name: If enabled, jobs that have the same Batch Name will be grouped together in the
job list. Note that you must restart the Monitor for this setting to take effect.
Change Color Of Jobs That Accumulate Errors: If enabled, jobs will change color from the Rendering color
to the Failed color as they accumulate errors. See the Styles section further down for more on the colors.
Task List
Task Double-click Behavior: Customize the double-click behavior of rendering, completed, and failed tasks in
the task list. Double-clicking on tasks in other states will bring up the task reports panel. These are the available
options:
View Reports: This will bring up the task reports panel for the selected task.
Connect To Slave Log: This will connect to the Slave that is rendering or has rendered the selected task.
View Image: This will open the output image for the selected task in the default viewer.
Change Color Of Tasks That Accumulate Errors: If enabled, tasks will change color from the Rendering color
to the Failed color as they accumulate errors. See the Styles section further down for more on the colors.
Miscellaneous
Start In Super User Mode: If enabled, the Monitor will start with Super User mode enabled. If Super User
mode is password protected, you will be prompted for the password when you start the Monitor.
Stream Job Logs from Pulse: If enabled, the Monitor will stream the job logs from Pulse instead of reading
them directly from the Repository. While streaming the logs this way is typically slower, it can be useful if the
connection to the Repository server is slow.
Show House Cleaning Updates In Status Bar: If enabled, the Monitor status bar will show when the last
House Cleaning was performed.
Show Repository Repair Updates In Status Bar: If enabled, the Monitor status bar will show when the last
Repository Repair was performed.
Show Pending Job Scan Updates In Status Bar: If enabled, the Monitor status bar will show when the last
Pending Job Scan was performed.
Enable Slave Pinging: If enabled, the Slave List will show if slave machines can be pinged or not.
You can specify up to three image view applications with the following options:
Executable: The path to the image viewer executable you want to use.
Arguments: The arguments to pass to the image viewer executable. The default is {FRAME}, which represents a path to a single image file for a task. More information about the supported argument tags can be found
below.
Name: The viewer name, which is used in the menu item created for this image viewer (defaults to the executable
name if left blank).
Viewer Supports Chunked Tasks: If enabled, the task's image viewer dialog will not be shown when viewing
the output for jobs with Frames Per Task greater than 1.
The following tags are supported in the custom viewer arguments, and can be combined with other arguments that the
image viewer accepts:
{FRAME}: This represents the task's frame file. For example: /path/to/image0002.png
{SEQ#}: This represents the task's frame sequence files, using # as the padding. For example: /path/to/image####.png
{SEQ@}: This represents the task's frame sequence files, using @ as the padding. For example: /path/to/image@@@@.png
{SEQ%}: This represents the task's frame sequence files, using %d as the padding. For example: /path/to/image%04d.png
You can also specify the Preferred Image Viewer, which is the default image viewer to use when viewing output files.
If set to DefaultViewer, the system's default application for the output file type will be used.
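To make the tag behaviour concrete, the sketch below expands the four tags for a single frame path. This is an illustration of the substitution described above, not Deadline's own implementation, and the path is made up:

```python
# Sketch: expanding the viewer argument tags for a task's frame file.
# The tag names come from the list above; the path is illustrative, and
# this is not Deadline's own tag-expansion code.
import re

def expand_tags(arguments, frame_path):
    # Split the frame number off the file name, e.g. image0002.png -> 4 digits.
    m = re.match(r"(.*?)(\d+)(\.\w+)$", frame_path)
    prefix, digits, ext = m.groups()
    pad = len(digits)
    return (arguments
            .replace("{FRAME}", frame_path)
            .replace("{SEQ#}", prefix + "#" * pad + ext)
            .replace("{SEQ@}", prefix + "@" * pad + ext)
            .replace("{SEQ%}", prefix + "%%0%dd" % pad + ext))

print(expand_tags("{SEQ%}", "/path/to/image0002.png"))
# /path/to/image%04d.png
```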
Notification Settings
If you would like to receive email notifications for your job, you can specify your email address in the Notification
Settings and enable the option to receive them. Note that this requires your administrator to configure the email settings
in the Repository Options.
If you would like to receive popup message notifications for your job, you can specify your machine name in the
Notification Settings and enable the option to receive them. Note that this requires the Launcher to be running on the
machine that you specify here.
Render Job As User Settings
If the Render Job As User option is enabled in the job settings in the Repository Options, these options will be used to
launch the rendering process as the specified user. For Linux and OSX, only the User Name is required. For Windows,
the Domain and Password must be provided for authentication. See the Render Jobs As Job's User documentation
for more information.
Web Service Authentication Settings
You can also specify a Web Service password, which is typically used for the Mobile application. A password is
required to authenticate with the Deadline web service if authentication has been enabled and empty passwords are
not allowed.
Region
A user's region is used for cross-platform rendering. All the paths a user sees in the Monitor will be replaced based on
the path mappings for their region. Example: viewing the output of a completed job. See Region Settings and Regions
for more information.
3.6.5 Styles
The Styles panel can be used to customize the color palette and the fonts that the Deadline Applications use. Custom
styles can be saved and imported as well.
By default, the current style will be Default Style, which is the style shipped with Deadline and cannot be modified in
any way. Previously saved styles will be available in the Saved Styles list. Custom styles can be created and deleted
by clicking the Create New Style and Delete Style buttons, respectively.
Once a custom style has been selected, the style's color palette can be modified:
The General Palette color is used to generate the colors for the various controls and text in the Deadline applications. Note that dark palettes will result in light text, and light palettes will result in dark text.
The Selection color is used to highlight selected items or text.
The remaining colors are used to color the text for jobs, tasks, slaves, etc, based on their current state. It is
recommended to choose colors that contrast well with the General Palette and Selection colors to ensure the text
is readable.
The style's fonts can be modified as well:
Primary Font: This is the font used for almost all the text in the Deadline applications.
Console Font: This is the font used in console and log windows. By default, a monospace font is used for these
windows.
Any style changes made are not saved until the Monitor Options dialog is accepted by clicking OK. Once the dialog
has been accepted, the Monitor must be restarted in order to apply the style changes. In order to facilitate testing out
new styles, there is a Preview Style button which opens a dialog that displays an approximation of the current style
settings.
Note that the Deadline applications will always load with the style that was last selected in the Styles panel in the
Monitor Options.
Styles may also be saved and loaded using the View menu in the Monitor. Note that when saving styles, all of the
custom styles are saved, and when loading saved styles from disk, the loaded styles will be appended to the list of styles
currently present, overwriting any styles with a shared name.
Note that it is possible for Administrators to disable the Local Slave Controls. If that's the case, you will see this
message when trying to open them.
More information about the available controls can be found in the Remote Control documentation.
before stopping when the machine is no longer idle. If disabled, the slave will requeue its current task before
stopping so that another slave can render it.
There are some limitations with Idle Detection depending on the operating system:
On Windows, Idle Detection will not work if the Launcher is running as a service. This is because the service
runs in an environment that is separate from the Desktop, and has no knowledge of any mouse or keyboard
activity.
On Linux, the Launcher uses X11 to determine if there has been any mouse or keyboard activity. If X11 is not
available, Idle Detection will not work.
the specified users. This is another way of ensuring that your slave will only render your jobs. However, it can
also be used to make your slave render jobs from other specific users, which is useful if you're waiting on the
results of those jobs.
CHAPTER
FOUR
CLIENT APPLICATIONS
4.1 Launcher
4.1.1 Overview
The Launcher's main use is to provide a means of remote communication between the Monitor and the Slave or Pulse
applications, and therefore it should always be left running on your render nodes and workstations. It can also detect if
the Slave running on the machine has stalled, and restart it if it has.
Unless the Launcher is running as a service or daemon, you should see the Launcher icon in your system tray or notification
area. You can right-click on the icon to access the Launcher menu, or double-click it to launch the Monitor.
Remote Administration
If you have enabled Remote Administration under the Client Setup section of the Repository Options, you will be able
to control the Slave or Pulse applications remotely, and remotely execute arbitrary commands. Note that it may be a
potential security risk to leave it running if you are connected to the internet and are not behind a firewall. In this case,
you should leave Remote Administration disabled.
Launch Monitor
Launches the Monitor application. If the Repository has been upgraded recently, and Automatic Updates
is enabled, this will automatically upgrade the client machine.
Launch Slave(s)
Launches the Slave application. If this machine has been configured to run more than one Slave instance,
this will launch all of them. If the Repository has been upgraded recently, and Automatic Updates is
enabled, this will automatically upgrade the client machine.
Launch Slave By Name
Launch a specific Slave instance, or add/remove Slave instances from this machine (if enabled for the
current user). Note that new Slave instances must have names that only contain alphanumeric characters,
underscores, or hyphens. See the documentation on running Multiple Slaves On One Machine for more
information.
Local Slave Controls
Opens the Local Slave Controls window, which allows you to control and configure the Slave that runs on
your machine.
Launch Slave at Startup
If enabled, the Slave will launch when the Launcher starts up.
Restart Slave If It Stalls
If enabled, the Launcher will try to restart the Slave on the machine if it stalls.
Scripts
Allows you to run general scripts that you can create. Note that these are the same scripts that you can
access from the Scripts menu in the Monitor. Check out the Monitor Scripts documentation for more
information.
Submit
Allows you to submit jobs for different rendering plug-ins. Note that these are the same submission scripts
that you can access from the Submit menu in the Monitor. More information regarding the Monitor submission scripts for each plug-in can be found in the Plug-Ins section of the documentation. You can also
add your own submission scripts to the submission menu. Check out the Monitor Scripts documentation
for more information.
Change Repository
Change the Repository that the client connects to.
Change User
Change the current user on the client.
Change License Server
Change the license server that the Slave connects to.
Explore Log Folder
Opens the Deadline log folder on the machine.
Available Options
To start the Monitor with the Launcher, use the -monitor option. If another Launcher is already running, this will tell
the existing Launcher to start the Monitor. If an upgrade is available, this will trigger an automatic upgrade:
deadlinelauncher -monitor
To start the Slave with the Launcher, use the -slave option. If another Launcher is already running, this will tell the
existing Launcher to start the Slave. If an upgrade is available, this will trigger an automatic upgrade:
deadlinelauncher -slave
To start Pulse with the Launcher, use the -pulse option. If another Launcher is already running, this will tell the existing
Launcher to start Pulse. If an upgrade is available, this will trigger an automatic upgrade:
deadlinelauncher -pulse
To start the Balancer with the Launcher, use the -balancer option. If another Launcher is already running, this will tell
the existing Launcher to start the Balancer. If an upgrade is available, this will trigger an automatic upgrade:
deadlinelauncher -balancer
To run the Launcher without a user interface, use the -nogui option. Note that if the Launcher is running in this mode,
any Slave or Pulse instance launched through it will also run without a user interface:
deadlinelauncher -nogui
deadlinelauncher -nogui -slave
To shut down the Launcher if it's already running, use the -shutdown option:
deadlinelauncher -shutdown
To shut down the Slaves, Pulse, and Balancer on the machine before shutting down the Launcher, use the -shutdownall
option:
deadlinelauncher -shutdownall
InstallLauncherServiceLogOn [User Name] [Password] [true/false]
UninstallLauncherService
StartLauncherService
StopLauncherService
4.1.7 FAQ
Why should the Launcher application be left running on the client machines?
Its main purpose is to provide a means of remote communication between the Monitor and the Slave
applications. If it's not running, the Slave will have to be stopped and started manually.
In addition, whenever you launch the Monitor or Slave using the Launcher, it will check the Repository
for updates and upgrade itself automatically if necessary before starting the selected application. If the
Launcher is not running, updates will not be detected.
Finally, the Launcher can detect if the Slave running on the machine has stalled, and restart it.
Can I run the Launcher without a user interface?
Yes, you can do this by passing the -nogui command line argument to the Launcher application:
deadlinelauncher -nogui
I have Idle Detection enabled, but the Launcher doesn't start the Slave on Linux when it's been idle long enough.
The libX11 and libXext libraries must be installed on Linux for Idle Detection to work. To check if libX11
and libXext are installed, open a Terminal and run the following commands. If they are installed, then the
path to the libraries will be printed out by these commands.
ldconfig -p | grep libX11
ldconfig -p | grep libXext
If any of these libraries are missing, then please contact your local system administrator to resolve this
issue. Here is an example assuming you have root access, using YUM to install them on your system:
sudo -s
yum install redhat-lsb
yum install libX11
yum install libXext
4.2 Monitor
4.2.1 Overview
The Monitor application offers detailed information and control options for each job and Slave in your farm. It provides
normal users a means of monitoring and controlling their jobs, and it gives administrators options for configuring and
controlling the entire render farm.
If you're launching the Monitor for the first time on your machine, you will be prompted with a Login dialog. Simply
choose your user name or create a new one before continuing. Once the Monitor is running, you'll see your user name
in the bottom right corner. If this is the wrong user, you can log in as another user by selecting File -> Change User.
Note that if your administrator set up Deadline to lock the user to the system's login account, you will have to log off
of your system and log back in as the correct user.
The current layout can be pinned to the Pinned Layouts menu so that it can be restored at a later time. This can be
done from the View menu, or from the main toolbar. The current layout can also be saved to a file from the View
menu, and then loaded from that file later.
When you pin a layout, you can choose to save the location and size of the Monitor by checking the Save Location and
Size box when pinning the layout.
To prevent accidental modifications to the current layout, you can lock the layout from the View menu, by pressing
Alt-, or from the main toolbar. When locked, panels cannot be moved, but they can still be docked and undocked.
To dock a floating panel while the layout is locked, simply double-click on the panel's title. It will be docked to the
same location it was originally undocked from.
The columns in Monitor panels are customizable. Columns can be resized by clicking on the column separator line
and dragging it, and can be reordered by clicking on a column and dragging it. Right-clicking on the column
headers in a panel allows you to toggle the visibility of each column.
In this menu, you can modify the visibility and ordering of the columns by clicking the Customize... menu item.
Moving columns to the left side list hides them, and the order that columns are listed in the right list corresponds to
the order they will appear in the panel (top->bottom corresponds to left->right). You move the columns around by
clicking the arrow buttons.
Once you have configured your column layout you can pin it.
You can also set the current list layout as the list layout to load by default when opening new panels of the same type
by clicking Save Current List Layout As Default. If you want to restore the original default list layout, click
Reset Default List Layout.
Data Filtering
Almost every panel has a search box that you can use to filter the information you're interested in. You can simply
type in the word(s) you are looking for, or use regular expressions for more advanced searching.
In addition, every panel that has a search box also supports a more advanced filtering system. To add a filter to a
panel, select the Edit Filter option in the panel's drop-down menu, which can be found in the upper-right corner of the
panel. A window will appear allowing you to specify the name of the filter being created. You can select to match all of
the filters added or any of the filters added. If all must match, only records where the data matches every filter will be
shown, while if any can match, a record will be shown if it matches one or more filters.
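The all-versus-any rule is a direct application of boolean matching. The sketch below illustrates it with a made-up job record and filter predicates; it is not the Monitor's actual filter code:

```python
# Sketch of the all-vs-any matching rule: each filter is a (column,
# predicate) pair, and a record passes when all (or any) predicates match.
# The job record and filters are made up for illustration.

def record_matches(record, filters, match_all=True):
    results = [pred(record.get(column)) for column, pred in filters]
    return all(results) if match_all else any(results)

job = {"Status": "Failed", "User": "alice"}
filters = [("Status", lambda v: v == "Failed"),
           ("User", lambda v: v == "bob")]
print(record_matches(job, filters, match_all=True))   # False
print(record_matches(job, filters, match_all=False))  # True
```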
Clicking the add filter button generates a new filter. The filter requires a column to be selected, an operation to perform,
and a value to use in the operation. Filters can also be removed by clicking the minus button to the right of each filter.
After all filters are entered, press OK to apply the filter to the current panel.
A filter can be cloned and opened in a new tab within the panel through the Clone Filter option in the panel's drop-down
menu. The Clear Filter option can be used to clear all filters from the current panel.
Finally, you can pin the current filters so that they can be restored at a later time using the Pinned Filters submenu in
the panel's drop-down menu. Note that the Pin Current Filter option is only available if a filter is currently being applied.
If there are no filters, the Pin Current Filter option will be hidden.
Automatic Sorting and Filtering
Almost every panel has an option to do automatic sorting and filtering when data changes in the panel. When this
option is disabled, sorting and filters must manually be re-applied to ensure that the data is sorted and filtered properly.
Note that automatic sorting and filtering can affect the Monitor's performance if there are lots of jobs (10,000+) or lots
of slaves (1,000+) in the farm. To improve Monitor performance in this case, it is recommended to disable automatic
sorting and filtering. There is an option in the Monitor Settings in the Repository Configuration to disable it by default.
Saving and Loading Panel Layouts
Every list-based panel (Jobs, Slaves, Tasks, etc) has an option to save and load the list layout, which you can find in
the panel's drop-down menu. This allows you to save out a list's filters, column order and visibility, etc, and load them
again later or share them with another user.
Note that when loading a list layout, you must choose a layout that was saved from the same type of list. For example,
you cannot save a layout from the Job list and then load it into the Slave list.
Graph Views
Almost every panel supports showing a graphical representation of the data. The graph can be shown by selecting
the Graph View option in the panel's drop-down menu, which can be found in the upper-left corner of the panel. The
graph view can be saved as an image file by right-clicking anywhere in the view and selecting Save Graph As Image.
If the graph is a pie chart, you can filter the data from the graph view by holding down the SHIFT key and clicking
on one of the pie slices. The data will be filtered to only show records that are represented by the pie slice that was
clicked on.
Scripts
Almost every panel has the option to run custom scripts from the panel's right-click menu. Many scripts are already
shipped with Deadline, and additional custom scripts can be written. See the Monitor Scripts documentation for more
information.
information.
These script menus can also be customized from the Repository Options.
The Jobs panel supports standard filtering, but it also has a Quick Filter option in the panel's drop-down menu to make
it easier to filter out unwanted jobs. By toggling the options within the Status, User, Pool, Group, and Plugin sections,
you can quickly drill down to the jobs you are interested in. There is also an Ego-Centric Sorting option in the panel's
drop-down menu which can be used to keep all of your jobs at the top of the job list.
The Jobs panel also supports the ability to group jobs together based on their Batch Name property. All of the job
submitters that are included with Deadline will automatically set the Batch Name if they are submitting multiple jobs
that are related to each other. The Batch Name for a job can be modified in the Job Properties. If you prefer to not
have the jobs grouped together in the job list, you can disable the Group Jobs By Batch Name option in the Monitor
and User Settings.
Finally, the Jobs panel allows jobs to be controlled and modified using the right-click menu. You can also bring up the
Job Properties window by double-clicking on a job. See the Controlling Jobs documentation for more information.
Tasks
The Task panel shows all the tasks for the job that is currently selected. It displays useful information about each task
such as its frame list, status, and if applicable, the Slave that is rendering it.
The Task panel also allows you to control tasks from the right-click menu. See the Controlling Jobs documentation
for more information. In addition, the double-click behavior in the Task panel can be set in the Monitor and User
Settings, which can be accessed from the main toolbar.
Job Details
The Job Details panel shows all available information about the job that is currently selected. The information is split
up into different sections that can be expanded or collapsed as desired.
Job Report
All reports for a job can be viewed in the Job Reports panel. This includes error reports, logs, and task requeue
reports. This panel can also be opened by right-clicking on a job in the Job List and selecting View Job Reports. More
information can be found in the Controlling Jobs documentation.
Slaves
The Slave panel shows all the Slaves that are in your farm. It shows system information about each Slave, as well as
information about the job the slave is currently rendering.
If you see a slave that is colored orange in the list, this means that the slave is unable to get a license or that the license
is about to expire. When the slave cannot get a license, it could be because there is a network issue, the license has
expired, or the license limit has been reached.
If a slave isn't rendering a job that you think it should be, you can use the Job Candidate Filter option in the panel's drop-down
menu to try and figure out why. See the Job Candidate Filter section in the Slave Configuration documentation
for more information.
Pulses
The Pulse panel shows which machine Pulse is running on, as well as previous machines that Pulse has run on. It also
shows system information about each machine.
Balancers
The Balancer panel shows which machines the Balancer is running on. It also shows system information about each
machine.
The Balancer panels right-click menu allows you to modify Balancer settings and control the Balancer remotely. See
the Balancer Configuration documentation for more information.
Limits
The Limit panel shows all the Limits that are in your farm. You can access many options for the Limits by right-clicking on them. See the Limits and Machine Limits documentation for more information.
Console
The Console panel shows all the text that is written to the Monitor's log.
Remote Commands
The Remote Command panel shows all pending and completed remote commands that were sent from the Monitor.
When sending a remote command, if this panel is not already displayed, it will be displayed automatically (assuming
you have permissions to see the Remote Command panel). See the Remote Control documentation for more information.
Cloud
The Cloud panel shows all the instances from the cloud providers that the Monitor is connected to. This panel allows
you to control and close your existing instances. See the Cloud Controls documentation for more information.
Configure Repository Options
Configure a wide range of global settings. See the Repository Configuration documentation for more information.
Configure Slave Scheduling
Configure the slave scheduling options. See the Slave Scheduling documentation for more information.
Configure Power Management Options
Configure the Power Management settings. See the Power Management documentation for more information.
Configure Cloud Providers
Set up and enable cloud service providers. See the Cloud Controls documentation for more information.
Configure Plugins
Configure the available render plugins, such as 3ds Max, After Effects, Maya, and Nuke. See the plugin
documentation for more information on the configurable settings for each plugin.
Configure Event Plugins
Configure the available event plugins such as Draft and Shotgun. See the event plugin documentation for
more information on the configurable settings for each plugin.
Connect to Pulse Log
Use this to remotely connect to the Pulse log. See the Remote Control documentation for more information.
Perform Pending Jobs Scan
Performs a scan of pending jobs and determines if any should be released. This operation is normally performed automatically, but you can force an immediate scan with this option if desired.
Perform House Cleaning
Clean up files for deleted jobs, check for stalled slaves, etc. This operation is normally performed automatically, but you can force an immediate clean-up with this option if desired.
Undelete Jobs
Use this to recover any deleted jobs that haven't been purged from the database yet.
Explore Repository Root
View the root directory of the current Repository.
Import Settings
Import settings from another Repository. See the Importing Repository Settings documentation for more
information.
Synchronize Scripts and Plugin Icons
Rebuilds the script-specific menus, and updates your local plugin icon cache with the icons that are currently in the Repository. Note that if any new icons are copied over, you will have to restart the Monitor before the jobs in the list show the new icons.
Local Slave Controls
Opens the Local Slave Controls window, which allows you to control and configure the Slave that runs on
your machine.
Options
Modify the Monitor and User Settings. There is also a toolbar button for this option.
Available Options
To start a new Monitor if there is already another Monitor running, use the -new option:
deadlinemonitor -new
To start the Monitor connected to a different repository, use the -repository option. You can combine this with the
-new option to have different Monitors connected to different repositories:
deadlinemonitor -repository "\\repository\path"
deadlinemonitor -new -repository "\\repository\path"
To start the Monitor without the splash screen, use the -nosplash option:
deadlinemonitor -nosplash
To shut down the Monitor if it's already running, use the -shutdown option:
deadlinemonitor -shutdown
You can also set all of the Monitor Options using command line options. For example:
deadlinemonitor -draganddropdep True -groupjobbatches False
4.2.7 FAQ
I'm unable to move panels in the Monitor, or dock floating panels.
You need to unlock the Monitor layout. This can be done from the View menu or from the toolbar.
Can I dock a floating panel when the Monitor layout is locked?
Yes, you can dock the floating panel by double-clicking on its title bar. It will be docked to its previous
location, or to the bottom of the Monitor if it wasn't docked previously.
What does it mean when a Slave is orange in the Slave list?
This means that the Slave is currently unable to get a license.
4.3 Slave
4.3.1 Overview
The Slave is the application that controls the rendering applications, and should be running on any machine you want to include in the rendering process.
On Windows, you can start the Slave from the Start Menu, or from the Launcher's right-click menu.
On Linux, you can start the Slave from a terminal window by running the deadlineslave script in the bin folder, or from the Launcher's right-click menu.
On Mac OS X, you can start the Slave from Finder by running the DeadlineSlave application in Applications/Thinkbox/Deadline, or from the Launcher's right-click menu.
You can also configure the Slave to launch automatically when the Launcher starts up. To do this, enable the Launch Slave At Startup option in the Launcher menu.
The Slave can also be started from a command prompt or terminal window. For more information, see the Slave
Command Line documentation.
4.3.3 Licensing
The Slave requires a license to run, and more information on setting up licensing can be found in the Licensing Guide.
The Slave only requires a license while rendering. If a Slave cannot get a license, it will continue to run, but it won't be able to pick up jobs for rendering. In addition, when a Slave becomes idle, it will return its license. The Slave's licensing information can be found under the Slave Information tab (see next section).
If you have more than one Slave running on a machine, they will all share the same license.
If the Slave is running in the background or without an interface, you can connect to the Slave's log from the command
line. In a command prompt or terminal window, navigate to the Deadline bin folder (Windows or Linux) or the
Resources folder (Mac OS X) and run the following, where SLAVENAME is the name of the Slave you want to
connect to:
deadlinecommand -ConnectToSlaveLog "SLAVENAME"
File Menu
Change License Server
Change the license server that the Slave connects to.
Options Menu
Hide When Minimized
The Slave is hidden when minimized, but can be restored using the Slave icon in the system tray.
Minimize On Startup
Starts the Slave in the minimized state.
Control Menu
Search For Jobs
If the Slave is sitting idle, this option can be used to force the slave to search for a job immediately.
Cancel Current Task
If the Slave is currently rendering a task, this forces the slave to cancel it.
Continue Running After Current Task Completion
Check to keep the Slave application running after it finishes its current task.
Stop/Restart Slave After Current Task Completion
Check to stop or restart the Slave application after it finishes its current task.
Shutdown/Restart Machine After Current Task Completion
Check to shut down or restart the machine after the Deadline Slave finishes its current task.
Available Options
To start a new instance of the Slave, use the -name option. If you already have multiple instances of the Slave
configured, use the -name option to start a specific instance:
deadlineslave -name "second-slave"
To start the Slave without a user interface, use the -nogui option:
deadlineslave -nogui
To start the Slave without the splash screen, use the -nosplash option:
deadlineslave -nosplash
To shut down the Slave if it's already running, use the -shutdown option. This can be combined with the -name option
if you have more than one Slave instance running and you want to shut down a specific instance:
deadlineslave -shutdown
deadlineslave -shutdown -name "second-slave"
To control what a running Slave should do after it finishes rendering its current task, use the -aftertask option. The
available options are Continue, StopSlave, RestartSlave, ShutdownMachine, or RestartMachine. This can be combined
with the -name option if you have more than one Slave instance running and you want to control a specific instance:
deadlineslave -aftertask RestartSlave
deadlineslave -aftertask RestartMachine -name "second-slave"
4.3.8 FAQ
Can I run the Slave on an artist's workstation?
Yes. On Windows and Linux, you can set the Affinity in the Slave Settings to help reduce the impact that
the renders have on the artist's workstation.
Can I run the Slave as a service or daemon?
Yes. If you're running the Launcher as a service or daemon, then it will run the Slave in the background
as well. See the Client Installation documentation for more information.
The Slave keeps reporting errors for the same job instead of moving on to a different job. What can I do?
You can enable Bad Slave Detection in the Repository Configuration to have a slave mark itself as bad for
a job when it reports consecutive errors on it.
What does it mean when a Slave is stalled, and is this a bad thing?
Slaves become stalled when they don't update their status for a long period of time, which is often an
indication that the slave has crashed. A stalled slave isn't necessarily a bad thing, because it's possible the
slave just wasn't shut down properly (it was killed from the Task Manager, for example). In either case,
it's a good idea to check the slave machine and restart the slave application if necessary.
On Linux, the Slave is reporting that the operating system is simply Linux, instead of showing the actual
Linux distribution.
In order for the Slave to report the Linux distribution properly, you need to have lsb installed, and
lsb_release needs to be in the path. You can use any package management application to install lsb.
On Linux, the Slave crashes shortly after starting up.
The libX11 and libXext libraries must be installed on Linux for the Slave to run, even if running it with
the -nogui flag. To check if libX11 and libXext are installed, open a Terminal and run the following
commands. If they are installed, then the path to the libraries will be printed out by these commands.
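One common way to perform this check is to query the dynamic linker's library cache with ldconfig (these exact commands are an assumption, not taken from this manual; locate or your distribution's package query tools work as well):

```shell
# Query the dynamic linker's library cache for the two X11 libraries.
# If a library is installed, its path is printed; otherwise a note is
# printed instead, so the command always succeeds either way.
ldconfig -p | grep libX11  || echo "libX11 not found"
ldconfig -p | grep libXext || echo "libXext not found"
```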
If any of these libraries are missing, then please contact your local system administrator to resolve this
issue. Here is an example assuming you have root access, using YUM to install them on your system:
sudo -s
yum install redhat-lsb
yum install libX11
yum install libXext
4.4 Pulse
4.4.1 Overview
Pulse is an optional mini server application that performs maintenance operations on the farm, and manages more
advanced features like Auto Configuration, Power Management, Slave Throttling, Statistics Gathering, and the Web
Service. If you choose to run Pulse, it only needs to be running on one machine. Note that Pulse does not play a role
in job scheduling, so if you are running Pulse and it goes down, Deadline will still be fully operational (minus the
advanced features). To build in redundancy in case the Primary Pulse fails in your environment, consider configuring Pulse Redundancy.
If you are choosing a machine to run Pulse, you should be aware that non-Server editions of Windows have a TCP/IP
connection limitation of 10 new connections per second. If your render farm consists of more than 10 render nodes,
it is very likely that you'll hit this limitation every now and then (and the odds continue to increase as the number of
machines increases). This is a limitation of the operating system, and isn't something that we can work around, so we
recommend using a Server edition of Windows, or a different operating system like Linux.
If Pulse is running in the background or without an interface, you can connect to the Pulse log from the command line.
In a command prompt or terminal window, navigate to the Deadline bin folder (Windows or Linux) or the Resources
folder (Mac OS X) and run the following, where PULSENAME is the name of the Pulse you want to connect to:
deadlinecommand -ConnectToPulseLog "PULSENAME"
Manual Configuration
The connection settings, as well as additional settings, can be configured for Pulse from the Monitor. Advanced
features like Auto Configuration, Power Management, Slave Throttling, Statistics Gathering, and the Web Service can
also be configured in the Monitor. See the Pulse Configuration documentation for more information.
Options Menu
Hide When Minimized
Pulse is hidden when minimized, but can be restored using the Pulse icon in the system tray.
Minimize On Startup
Starts Pulse in the minimized state.
Control Menu
Perform Pending Job Scan
If Pulse is between repository pending job scans, this option can be used to force Pulse to perform a
pending job scan immediately. A pending job scan releases pending jobs by checking their dependencies
or scheduling options.
Perform Repository Clean-up
If Pulse is between repository clean-ups, this option can be used to force Pulse to perform a repository
clean-up immediately. A repository clean-up includes deleting jobs that are marked for automatic deletion.
Perform Repository Repair
If Pulse is between repository repairs, this option can be used to force Pulse to perform a repository repair
immediately. A repository repair includes checking for stalled slaves and orphaned limit stubs.
Perform Power Management Check
If Pulse is between power management checks, this option can be used to force Pulse to perform a power
management check immediately.
Available Options
To start Pulse without a user interface, use the -nogui option:
deadlinepulse -nogui
To start Pulse without the splash screen, use the -nosplash option:
deadlinepulse -nosplash
To shut down Pulse if it's already running, use the -shutdown option:
deadlinepulse -shutdown
4.4.7 FAQ
Does Pulse use any license?
No. It is an unlicensed product and is included in the Deadline Client software installer.
Can I run Pulse on any machine in my farm?
You can run Pulse on any machine in your farm, including the Repository or Database machine. However,
for larger farms, we recommend running Pulse on a dedicated machine.
When choosing a machine to run Pulse on, you should be aware that non-Server editions of Windows
have a TCP/IP connection limitation of 10 new connections per second. If your render farm consists of
more than 100 machines, it is very likely that you'll hit this limitation every now and then (and the odds
continue to increase as the number of machines increases). Therefore, if you are running Pulse on a farm
with 100 machines or more, we recommend using a Server edition of Windows, or a different operating
system like Linux.
Can I run Pulse as a service or daemon?
Yes. If you're running the Launcher as a service or daemon, then it will run Pulse in the background as
well. See the Client Installation documentation for more information.
If Pulse is shut down or terminated, is the Power Management feature still functional?
In this case, the only aspect of Power Management that is still functional is the Temperature Checking.
Redundancy for Temperature checking has been built into the Slave application, so if Pulse isn't running,
you're still protected if the temperature in your farm room begins to rise.
Which temperature sensors work with Power Management?
We have tested with many different temperature sensors. Basically, as long as the temperature sensors use
SNMP, and you know its OID (which is configurable in the Power Management settings), it should work.
Can I run multiple Pulses on separate machines?
Yes. In line with typical IT best practices, this will provide Pulse Redundancy. Note that only one Pulse
can be Primary at any given time.
4.5 Balancer
4.5.1 Overview
The Balancer is a cloud controller application capable of virtual/physical, private/public, remote/local simultaneous
machine orchestration. It can create, start, stop, and terminate cloud instances based on the current queue load, taking
into account jobs and tasks. Further customization to take into account other job/task factors can be achieved by
utilizing the Deadline plugin API to create a custom Balancer algorithm. To build in redundancy in case the Primary
Balancer fails in your environment, consider configuring Balancer Redundancy.
The Balancer works in cycles, and each cycle consists of a number of stages.
First, the Balancer will do a House Keeping step in which it will clean up any disks or instances that haven't
been terminated as they should have been.
Second, the Balancer will execute the Balancer Algorithm. These are the steps of the default algorithm (note
that these steps can be customized with your own Balancer Algorithm plugin):
Create State Structure: This sets up the data structures used in the rest of the algorithm.
Compute Demand: Examines the groups for jobs that are queued and assigns a weighting to the group
based on the number of tasks that need to be done and the group priority.
Determine Resources: Here we determine how much space we have available with our provider and how
many limits we have.
Compute Targets: Based on the Demand and the available Resources we set a target number of instances
for each group.
Populate Targets: This sets up a full target data structure for use in Deadline.
Third, the Balancer will equalize the targets by starting or terminating instances.
If the Balancer is running in the background or without an interface, you can connect to the Balancer log from the
command line. In a command prompt or terminal window, navigate to the Deadline bin folder (Windows or Linux)
or the Resources folder (Mac OS X) and run the following, where BALANCERNAME is the name of the Balancer
you want to connect to:
deadlinecommand -ConnectToBalancerLog "BALANCERNAME"
Available Options
To start the Balancer without a user interface, use the -nogui option:
deadlinebalancer -nogui
To start the Balancer without the splash screen, use the -nosplash option:
deadlinebalancer -nosplash
To shut down the Balancer if it's already running, use the -shutdown option:
deadlinebalancer -shutdown
4.5.7 FAQ
Can I run Balancer on any machine in my farm?
You can run Balancer on any machine in your farm, including the Repository or Database machine.
However, for larger farms, we recommend running Balancer on a dedicated machine.
When choosing a machine to run Balancer on, you should choose a machine that has network-routable
access to your local render farm, as well as external access to any public/private connections via
technologies such as a VPN.
Can I run Balancer as a service or daemon?
Yes. If you're running the Launcher as a service or daemon, then it will run Balancer in the background
as well. See the Client Installation documentation for more information.
Can I run multiple Balancers on separate machines?
Yes. In line with typical IT best practices, this will provide Balancer Redundancy. Note that only one
Balancer can be Primary at any given time, and this is the machine that will check out a FlexLM-based
Balancer license.
Does Balancer use a Deadline Slave license?
No. The Primary Balancer will check out a Balancer-specific license, which is included for all customers who are
currently on Thinkbox annual support for Deadline. The Draft and Balancer licenses will be renewed for
another 12 months as you renew your annual Thinkbox Deadline support contract. Please email Deadline
Sales for further details.
4.6 Command
4.6.1 Overview
The deadlinecommand application is a command line tool for the Deadline render farm management system. It can be
used to control, query, and submit jobs to the farm.
There is also a deadlinecommandbg application which is identical to deadlinecommand, except that it is executed in
the background. When using deadlinecommandbg, the output and exit code are written to the Deadline temp folder
as dsubmitoutput.txt and dsubmitexitcode.txt respectively. If you want to control where these files get written to, you
can use the -outputFiles option, followed by the paths to the output and exit code file names. For example:
deadlinecommandbg -outputFiles c:\output.txt c:\exitcode.txt -pools
You can find the deadlinecommand and deadlinecommandbg applications in the Deadline bin folder (Windows or
Linux) or the Resources folder (Mac OS X).
To get usage information for a specific command, specify the command name after the -help argument:
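For example (the command name here is illustrative; substitute any command name reported by deadlinecommand's help output):

```
deadlinecommand -help Pools
```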
Sending An Email
To send the message to jsmith@mycompany.com (cc: cjones@mycompany.com):
deadlinecommand -sendemail -to jsmith@mycompany.com -cc cjones@mycompany.com
-subject "the subject" -message "C:\MyMessage.html"
Note that the -to, -subject, and -message options are required. The other two options are optional.
4.6.5 FAQ
What's the difference between the deadlinecommand and deadlinecommandbg applications?
The deadlinecommandbg application is identical to deadlinecommand, except that it is executed in the
background. When using deadlinecommandbg, the exit code and output are written to the Deadline temp
directory as dsubmitexitcode.txt and dsubmitoutput.txt, respectively.
4.7.2 Setup
Before you can use the web service, you need to configure the general Web Service settings in the Repository Configuration. These settings apply to both the standalone deadlinewebservice application and Pulse's web service feature.
the Web Service Settings). Note that if port 8080 is being blocked by a firewall, the web service will not be able to
accept web requests. An example URL will look like the following:
http://[myhost]:8080/[command][arguments]
Where:
myhost is your web service server's IP address or host name.
command is the command you want to execute. The web service can support two different types of commands,
which are explained below.
arguments represents the arguments being passed to the command. This can be optional, and depends on the
command.
To confirm that you can at least connect to the web service, try the following URL:
http://[myhost]:8080/
You should see the following if you connect to the web service successfully:
This is the Deadline web service!
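On Windows, the web service's HTTP namespace must be reserved for the listening port. A typical reservation command looks like the following (the port and USERNAME are placeholders; adjust them to match your Web Service settings and the account the service runs under):

```
netsh http add urlacl url=http://+:8080/ user=USERNAME
```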
Ensure you have correctly elevated permissions when executing the above in a command prompt and replace USERNAME with the appropriate %USERNAME% that the web service is running under. Depending on your local security
policy, the user account may need to have local administrator rights temporarily for you to initially reserve the namespace. The namespace reservation will also need updating if you ever modify the port number or user account used.
Use the following command in a command prompt to help list what namespace reservations are currently present on
your machine:
netsh http show urlacl
Running Commands
The first set of commands are the same commands that you can use with the Command application. However, these
commands are disabled by default. To enable them, you need to enable the Allow Non-Script Commands setting in the
Web Service settings. If left disabled, you will see the following results when trying to call one of these commands:
Error - Non-Script commands are disabled.
Here is an example of how you would use the web service to call the -GetSlaveNames command:
http://[myhost]:8080/GetSlaveNames
Some commands can take arguments. To include arguments, you need to place a ? between the command name and
the first argument, and then a & between additional arguments. Here is an example of how you would use the web
service to call the -GetSlaveNamesInPool command, and pass it two pools as arguments:
http://[myhost]:8080/GetSlaveNamesInPool?show_a&show_b
Some scripts can take arguments. To include arguments, you need to place a ? between the command name and the
first argument, and then a & between additional arguments. Here is an example of how you would pass arg1, arg2,
and arg3 as separate arguments to the GetFarmStatistics.py script:
http://[myhost]:8080/GetFarmStatistics?arg1&arg2&arg3
The way the results are displayed depends on the format in which they are returned. Again, see the Web Service
Scripting documentation for more information.
4.8 Mobile
4.8.1 Overview
The Mobile application allows you to monitor your jobs from anywhere. The application connects to the Deadline
web service to download information about the state of your jobs, so the web service must be running before you can
use the Mobile application. See the Web Service documentation for more information.
The minimum requirements for the Mobile application are as follows.
Mobile Device     Minimum Requirements
Android           Deadline 5.0 and Android 2.1
iPhone or iPad    Deadline 4.1 and iPhone OS 3.0 - 7.10
Windows Phone     Deadline 5.0 and Windows Phone 7.0
To refresh the job list, just press the Refresh button. If you want to see more information about a specific job, press
the button to the right of the job name to bring up the job details panel.
To refresh the job details, just press the Refresh Job button. To return to the job list, press the Job List button in the
upper left corner.
4.8.5 Settings
The settings panel can be accessed from the job list by pressing the Settings button. You can access the online help by
pressing the Help button in the top right corner (Android) or by scrolling down to find the Online Help link (iPhone).
To return to the job list, press the Job List button in the upper left corner.
Note that the Pulse Server Settings can be used to connect to a Pulse instance if the web service feature is enabled,
or it can be used to connect to the standalone web service application. See the Web Service documentation for more
information.
Proxy Server Settings
Server URL: If you are using a proxy web server, you may need to set a more specific URL to connect to the
web service.
Http Authorization: If your proxy web server requires HTTP authorization, you should enable this option and
specify the user name and password.
SSL: If you are using a proxy web server that requires SSL, you should enable this option. Note that this will
change the server port in the Pulse Server Settings to 443 by default.
Download Information
This is a running tally of the data that you've downloaded from the web service.
4.8.7 Troubleshooting
These are some known Mobile errors and solutions.
You must provide a password for authentication
This error occurs when a password has not been set for the current user while authentication is enabled
and empty passwords are not accepted. To resolve this issue, you must fill in the Web Service Password
field for the user in the User Settings in the Monitor. Before you can connect, you may need to wait for
the web service to update its network settings or manually restart the web service.
The provided user name and password are invalid
This error occurs when the password provided is incorrect for the given user. If you believe the password
is correct, you may need to wait for the web service to update its network settings or manually restart the
web service.
The provided user name is invalid
This error occurs when the provided user is not in the web service's cached list. If the user name is
valid, you may need to wait for the web service to update its network settings or manually restart the web
service.
There was an error connecting to Pulse
This error occurs when there are two errors connecting to the web service in a row. The likely cause of
this error is that the web service is not running on the specified server. Verify that the web service is
running on the specified server and that you have entered the server's name or IP address correctly. If you
have a name specified for the server and are not on the local area network of that machine, you may need
to enter the server's IP address instead of its name.
Network Error
The connection with the server failed. Please check your server settings in the Settings Section
Double check your settings in Mobile to make sure they match the required information. If all the Mobile settings
are entered correctly and you still cannot connect, look in your general mobile device settings and make sure you are
connected to the right network. Depending on how things are set up, your device will try to connect to the strongest
network in the area. If the network it switches to doesn't have the correct settings to connect to your server, then the
connection will fail.
If you are still unable to connect, try rebooting the device (fully power off your device and power it back on). This
error also occurs when the server you are trying to connect to has lost access to the internet. Double check that the
server is connected to the internet.
4.8.8 FAQ
How do I get the Mobile application?
The Mobile application can be downloaded from the Android Market and the iPhone App Store.
How much does Mobile cost?
Nothing, it's free!
CHAPTER FIVE
ADMINISTRATIVE FEATURES
Note that long-running applications like the Launcher, Slave, and Pulse only update these settings every 10 minutes,
so after making changes, it can take up to 10 minutes for all machines to recognize them. You can restart these
applications to have them recognize the changes immediately.
To add a new layout, simply press the Add button, and then choose an existing Monitor layout file, or use the current
Monitor's layout. Note that Monitor layout files can be saved from the Monitor by selecting View -> Save Layout.
Update Settings
Enable Manual Refreshing
If your Auto Refreshing Intervals are set to longer intervals, manual refreshing in the Monitor can be enabled to allow
users to get the most up-to-date data immediately. To prevent users from abusing manual refreshing, a minimum
interval between manual refreshes can be configured.
Sorting and Filtering
For farms that have a large number of jobs (10,000+) or slaves (1,000+), disabling Automatic Sorting and Filtering in
the lists in the Monitor can improve the Monitor's overall performance. This option in the Repository Options can be
used to disable Automatic Sorting and Filtering by default, and users can enable it later in their Monitors if desired.
Delete Offline/Stalled Slaves from the Repository after this many days: Slaves that are Offline or Stalled
will be removed from the Repository after this many days.
Gather System Resources (CPU and RAM) When Rendering Tasks On Linux/Mac: If enabled, the Slave
will collect CPU and RAM usage for a task while it is rendering. We have seen cases where this can cause the
Slave to crash on Linux or Mac, so you should only disable this feature if you run into this problem.
Use fully qualified domain name (FQDN) for Machine Name instead of host name: If enabled, the Slave
will try to use the machine's fully qualified domain name (FQDN) when setting its Machine Name instead of
using the machine's host name. The FQDN will then be used for Remote Control, which can be useful if the
remote machine name isn't recognized in the local network. If the Slave can't resolve the FQDN, it will just use
the host name instead.
Use Slave's IP Address for Remote Control: If enabled, the Slave's IP address will be used for remote control
instead of trying to resolve the Slave's host name.
Wait Times
Number of Minutes Before An Unresponsive Slave is Marked as Stalled: If a slave has not provided a status
update in this amount of time, it will be marked as stalled.
Number of Seconds To Wait For a Response When Connecting to Pulse: The number of seconds a slave that
is connected to Pulse will wait for Pulse to respond when querying for a job.
Number of Seconds Between Thermal Shutdown Checks if Pulse is Offline: The number of seconds between
thermal shutdown checks. The Slave only does this check if Pulse is not running.
Extra Properties
Extra arbitrary properties can be set for slaves, and these properties can be given user friendly names so that they can
easily be identified and used to filter and sort slaves in the Monitor.
Maximum number of seconds between Job queries while the Slave is Idle: The maximum number of seconds
a slave will wait between polls to the Repository for tasks when it is idle.
Minimum number of seconds between Job queries when the Slave is Idle: The minimum number of seconds
a slave will wait between polls to the Repository for tasks when it is idle.
Maximum Connection Attempts: The maximum number of times a Slave will attempt to connect to Pulse
before giving up.
Stalled Pulse Threshold (in minutes): Deadline determines if a Pulse has stalled by checking the last time that
the Pulse has provided a status update. If a Pulse has not updated its state in the specified amount of time, it will
be marked as Stalled.
Use Pulse's IP Address When Slaves Connect To Pulse and For Remote Control: If enabled, Pulse's IP
address will be used when the slaves connect to Pulse, and for remote control, instead of trying to resolve
Pulse's host name.
Power Management
Power Management Check Interval: How often Pulse performs Power Management operations.
Throttling
Throttling can be used to limit the number of slave applications that are copying over the job files at the same time.
This can help network performance if large scene files are being submitted with the jobs. Note that a Slave only copies
over the job files when it starts up a new job. When it goes to render subsequent tasks for the same job, it will not be
affected by the throttling feature.
Enable Throttling: Allow throttling to occur.
Maximum Number of Slaves That Can Copy Job Files at The Same Time: The maximum number of Slaves
that can copy a scene file at the same time.
The Interval a Slave Waits Between Updates To See If It Can Start Copying Job Files: The amount of
time (in seconds) a Slave will wait to send throttle checks and updates to Pulse.
Throttle Update Timeout Multiplier (based on the Slave Interval): The interval a slave waits between updates is multiplied by this value to determine the timeout value.
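The throttling idea above can be sketched with a shared limit on concurrent copiers. In Deadline, Pulse coordinates this over the network; in the illustrative sketch below, a `threading.BoundedSemaphore` stands in for that coordination, and the limit of 2 is arbitrary:

```python
import threading

# "Maximum Number of Slaves That Can Copy Job Files at The Same Time";
# the value 2 is arbitrary for illustration.
MAX_CONCURRENT_COPIES = 2
copy_slots = threading.BoundedSemaphore(MAX_CONCURRENT_COPIES)

def copy_job_files(job_id, copied):
    # Blocks until a copy slot is free, "copies", then releases the slot.
    with copy_slots:
        copied.append(job_id)  # stand-in for the actual file copy

copied = []
threads = [threading.Thread(target=copy_job_files, args=("job%d" % i, copied))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

All five "Slaves" eventually copy their files, but no more than two are ever copying at once, which is exactly the network-load smoothing the feature provides.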
Web Service
Enable the Web Service
The Web Service allows you to execute commands and scripts from a browser, and must be enabled to use the Mobile
applications and the Pulse RESTful API (see REST Overview). While there is a standalone web service application, it
can also be enabled in Pulse if you are running it. All other Web Service settings can be set in the Web Service page,
which is covered further down this page.
Enable the Web Service: Makes the Pulse Web Service Available. Note that if you enable or disable the Web
Service feature while Pulse is running, it must be restarted for the changes to take effect.
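Once the Web Service is enabled, commands can be issued over plain HTTP. The sketch below only builds the request URL; the host, port, and endpoint path are assumptions for illustration (consult the REST Overview for the actual routes):

```python
from urllib.parse import urlunsplit

def webservice_url(host, port, path):
    # Build the URL for a web service request; urllib.request.urlopen()
    # (or any HTTP client) would then fetch it.
    return urlunsplit(("http", "%s:%d" % (host, port), path, "", ""))

# e.g. urllib.request.urlopen(webservice_url("pulse-host", 8080, "/api/jobs"))
```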
Note that if you have SSL enabled, you may need to configure your Linux and OSX machines for SSL to work. The
process for doing this is explained in Mono's Security Documentation.
If you are using Google Mail to send emails (smtp.gmail.com), you will typically use port 25 if SSL is disabled, and port
465 if SSL is enabled. See Google's documentation on Sending Emails for more information.
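The port convention above can be captured in a small helper using Python's standard `smtplib`; `SMTP_SSL` wraps the connection in SSL from the start, while plain `SMTP` is used on port 25 (the connection itself is not opened here):

```python
import smtplib

def smtp_port(use_ssl):
    # Port 25 without SSL, 465 with SSL, matching the convention above.
    return 465 if use_ssl else 25

def make_sender(use_ssl):
    # Choose the connection class to match the port: SMTP_SSL negotiates
    # SSL immediately; plain SMTP could later be upgraded with STARTTLS.
    cls = smtplib.SMTP_SSL if use_ssl else smtplib.SMTP
    return cls, smtp_port(use_ssl)

# e.g. cls, port = make_sender(True); server = cls("smtp.gmail.com", port)
```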
Notifications
Job Completed: When a job completes, an email will be sent to these email addresses.
Job Timed Out: When a job times out, an email will be sent to these email addresses.
Job Error Warning: When a job accumulates a certain number of errors, a warning email will be sent to these
email addresses. You can configure the warning limit in the Failure Detection settings.
Job Failed: When a job fails, an email will be sent to these email addresses.
Job Corrupted: When a corrupted job is detected, an email will be sent to these email addresses.
5.1. Repository Configuration
Slave License Errors: When a slave is unable to get a license, an email will be sent to these email addresses.
Slave Status Errors: When a slave is unable to update its state in the Repository, an email will be sent to these
email addresses.
Slave Error Warning: When a slave accumulates a certain number of errors in one session, a warning email
will be sent to these email addresses. You can configure the warning limit in the Failure Detection settings.
Stalled Slave: When a stalled slave is detected, an email will be sent to these email addresses.
System Administrator: When users use the option in the Error Report Viewer to report error messages to their
system administrator, those emails will be sent to these email addresses.
Low Database Connections: Low Database connection notification emails will be sent to these email addresses.
Database Connection Thresholds: When the number of available database connections falls below the set threshold, a warning email will be sent.
Thermal Shutdown: Notifications for Thermal Shutdown operations will be sent to these email addresses.
Machine Restart: Notifications for Machine Restart operations will be sent to these email addresses.
Pending Job Scan Process Timeout: If running the pending job scan in a separate process, this is the
maximum amount of time the process can take before it is aborted.
Asynchronous Job Events: If enabled, many job events will be processed asynchronously by the Pending Job
Scan operation, which can help improve the performance of the Monitor when performing operations
on batches of jobs. If this is enabled, the OnJobSubmitted event will still be processed synchronously to ensure
that any updates to the job are committed before the job can be picked up by Slaves.
Maximum Job Events Per Session: The maximum number of pending job events that can be processed
per scan.
House Cleaning
House Cleaning Interval: The maximum amount of time between House Cleaning operations in seconds.
Allow Slaves to Perform House Cleaning If Pulse is not Running: If enabled, the Slaves will perform house
cleaning if Pulse is not running. If disabled, only Pulse can perform house cleaning.
Run House Cleaning in a Separate Process: If enabled, the house cleaning operation will be run in a separate
process.
Write House Cleaning Output to Separate Log File: If enabled, all output from the house cleaning will
be placed into a separate log file.
House Cleaning Process Timeout: If running the house cleaning in a separate process, this is the maximum amount of time the process can take before it is aborted.
House Cleaning Maximum Per Session
Maximum Deleted Jobs: The maximum number of deleted jobs that can be purged per session.
Maximum Archived Jobs: The maximum number of jobs that can be archived per session.
Maximum Auxiliary Folders: The maximum number of job auxiliary folders that can be deleted per
session.
Maximum Job Reports: The maximum number of job report files that can be deleted per session.
Repository Repair
Repository Repair Interval: The maximum amount of time between Repository Repair operations in seconds.
Allow Slaves to Perform the Repository Repair If Pulse is not Running: If enabled, the Slaves will perform
the repository repair if Pulse is not running. If disabled, only Pulse can perform the repository repair.
Run Repository Repair in a Separate Process: If enabled, the repository repair operation will be run in a
separate process.
Write Repository Repair Output to Separate Log File: If enabled, all output from the repository repair
will be placed into a separate log file.
Repository Repair Process Timeout: If running the repository repair in a separate process, this is the
maximum amount of time the process can take before it is aborted.
Automatic Primary Election: If enabled, the Repository Repair operation will elect another running
Pulse/Balancer instance as the Primary if the current Primary instance is no longer running.
Render Jobs As User: Enable to have jobs render as the user that submitted them.
Use su Instead Of sudo On Linux and Mac OS X: If enabled, su will be used to run the process as
another user instead of sudo. This setting is ignored on Windows.
Preserve Environment On Linux and Mac OS X: If enabled, the user environment will be preserved
when running the process as another user using su or sudo. This setting is ignored on Windows, and is
ignored on Mac OS X when using su instead of sudo.
Error Weight: Weight given to the number of errors a job has when using a Weighted scheduling order.
Rendering Task Weight: Weight given to the number of rendering tasks a job has when using a Weighted
scheduling order.
Rendering Task Buffer: A buffer that is used by slaves to give their job extra priority on the farm.
Enhanced Balancing Logic: If enabled, an enhanced method of balancing slaves between jobs is used,
which should prevent slaves from jumping between jobs as much. This feature is still considered experimental.
Submission Limitations
Task Limit For Jobs: The maximum number of tasks a job can have. Note that this does not impose a frame
limit, so you can always increase the number of frames per task to stay below this limit.
Maximum Job Priority: The maximum priority value a job can have.
Automatic Job Timeout
Configure Deadline to automatically determine a timeout for a job based on the render times of tasks that have already
completed. If a task goes longer than that timeout, a timeout error will occur and the task will be requeued.
Minimum number of completed tasks required before calculating a timeout: The minimum number of tasks
that must be completed before Auto Job Timeout Checking occurs.
Minimum percent of completed tasks required before calculating a timeout: The minimum percent of tasks
that must be completed before Auto Job Timeout Checking occurs.
Enforce an automatic job timeout for all jobs: If enabled, the Auto Job Timeout will be enabled for all jobs,
overriding the per-job specification of the value.
Timeout Multiplier: To calculate the Auto Job Timeout, the longest render time of the completed tasks is
multiplied by this value to determine the timeout time.
Failure Detection
Job Failure Detection
Sends warnings and fails jobs or tasks if they generate too many errors.
Send a warning to the job's user after it has generated this many errors: A warning will be sent to the job's
notification list once its error count has reached this value. By default, the submitting user is automatically added to this list.
Mark a job as failed after it has generated this many errors: The number of errors a job must throw before
it is marked as failed.
Mark a task as failed after it has generated this many errors: The number of errors a task must throw before
it is marked as failed.
Automatically delete corrupted jobs from the Repository: If enabled, a job that is found to be corrupted
will be automatically removed from the Repository.
Maximum Number of Job Error Reports Allowed: This is the maximum number of error reports each job
can generate. Once a job generates this many errors, it will fail and cannot be resumed until some of its error
reports are deleted or this value is increased.
Cleanup
Automatic Job Cleanup
Cleanup Jobs After This Many Days: If enabled, this is the number of days to wait before cleaning up unarchived jobs.
Cleanup Mode: Whether the cleanup should archive the jobs found or delete them.
You can also set the number of hours since the job was last modified before cleaning it up.
Auxiliary Files
Many jobs have an option to submit the scene file and other auxiliary files with the job. This can be useful because it
stores a copy of the scene file with the job that can be referred to later. However, if these files are large and
the Repository server isn't designed to handle this load, it can seriously impact the Repository machine's performance.
This problem can be avoided by storing these files in a location on a different server that is designed to handle the
load.
Store job auxiliary files in a different location: If enabled, job auxiliary files submitted to Deadline will be
stored at a location specified and not the Repository.
Extra Properties
Extra arbitrary properties can be submitted with a job, and these properties can be given user friendly names so that
they can easily be identified and used to filter and sort jobs in the Monitor.
Maximum Number of Slave History Entries: The maximum number of slave history entries that are stored
before old entries are overwritten.
Maximum Number of Pulse History Entries: The maximum number of pulse history entries that are stored
before old entries are overwritten.
Maximum Number of Balancer History Entries: The maximum number of balancer history entries that are
stored before old entries are overwritten.
Logging Verbosity
Slave Verbose Logging: If enabled, more information will be written to the Slave log while it is running.
Pulse Verbose Logging: If enabled, more information will be written to the Pulse log while it is running.
Balancer Verbose Logging: If enabled, more information will be written to the Balancer log while it is running.
Only map drives when the Slave is running as a service: If checked, the slave will only map the drives if it's
running as a service. If unchecked, it will also do so when the slave is running as a normal application.
Allow Execution of Non-Script Commands: If enabled, users are allowed access to Deadline Command
commands.
and edit their user settings. See the Monitor and User Settings documentation for more information on the available
user settings.
By default, Deadline does not enforce Enhanced User Security. This means that a user can switch to a different User
and edit someone else's Jobs. For some pipelines, this honor system will work fine, but for those looking for tighter
security, you should enable Enhanced User Security, so that the system user is used as the Deadline User. When this
option is enabled, users will not be able to switch to another Deadline User unless they log off their system and log
back in as someone else.
It is also recommended that you add a Super User password if you are looking for enhanced security, as a Super
User without a password would allow Users to circumvent User Job-editing restrictions, as well as any
restrictions imposed on them by their User Groups (see below).
The left side of this dialog contains the list of User Groups that have already been created in the Repository. There are
also controls allowing you to manipulate this list in many ways:
Add: Will create a new User Group using the default options and feature access levels (equivalent to the default
Everyone group before modification).
Remove: Will delete the selected User Group from the Repository. Note that the Everyone group can never
be Removed in order to guarantee that all Users will at least be part of this group.
Clone: Will create a new User Group using the Options and Feature Access Levels of the currently selected
group as defaults.
This list is visible regardless of which tab is selected, allowing you to quickly change which Group you're modifying,
and ensuring you're always aware of which one is currently selected.
General Options
This tab contains basic higher-level settings for User Groups. Note that most of the features on this tab, described
below, will be disabled when modifying the Everyone group, since it is a special Group that must always be active
and enabled for all Users.
Group Options
Group Enabled: This indicates whether this User Group is currently active. Disabling
a User Group instead of Removing it altogether can be useful if you just want to temporarily disable
access for a group of users without having to re-create it later. This is always true for the Everyone
Group.
Group Expires: This setting will cause a Group to only be valid up to the specified Date and Time.
This can be useful if you are hiring temporary staff and know in advance that you will need to revoke
their access on a certain Date. This cannot be set for the Everyone Group.
Job Access Level
Can View Other Users' Jobs: This setting determines whether or not Users belonging to the Group
can see other users' jobs.
Can Modify Other Users' Jobs: This setting indicates whether or not Users in this Group should be
allowed to modify other users' jobs (change properties, job state, etc).
Can Handle Protected Jobs: This setting determines whether or not Users belonging to the Group
can archive or delete protected jobs that don't belong to them.
Can Submit Jobs: This setting determines whether or not Users belonging to the Group can submit
jobs.
Default Monitor Layout: Here you can select a Monitor layout that was added to the Repository Configuration.
This layout will act as the default for users belonging to this user group. The Priority setting is used as a tie
breaker if a user is part of more than one group with a default layout. When a user selects View -> Reset Layout,
it will reset to their user group's default layout instead of the normal default. Finally, if the Reset Layout On
Startup setting is enabled, the Monitor will always start up with that layout when it is launched.
Time-Restricted Access: This section allows you to set windows of time during which this Group is considered
Active. This is useful if you want to set up permissions to change based on the time of day, or if you just want
to lock out certain Users after hours. This cannot be enabled for the Everyone Group.
Group Members: This is where you control which Users are considered members of the currently selected
Group. Users can be part of multiple Groups. All Users are always part of the Everyone Group, and this
cannot be changed.
Controlling Feature Access
The other tabs in the Group Management dialog are dedicated to enabling or restricting access to certain Features on
a per-group basis.
Each tab displays a different type of Feature, representing a different aspect of the end-user experience:
Menu Items: This tab contains all the Menu Item features, including the main menu bar, right-click menus, and
toolbar items.
Job Properties: This tab contains all of a Job's modifiable properties, and determines which ones a User will
be allowed to change. Note that this only applies to Jobs a User is allowed to modify in the first place, if he is not
allowed to modify other Users' Jobs (see section above).
Scripts: This contains all the different types of Scripts a User could run from the Monitor. This section is a
little different than the others, because the actual Features are dynamically generated based on which Scripts are
currently in the Repository. Note that all scripts will default to a value of Inherited, so make sure to revisit this
screen when adding new Scripts to your Repository.
UI Features: This tab contains all the different types of Panels that a User can spawn in the Monitor, and
controls whether or not a particular User Group is allowed to spawn them.
These Features are also grouped further within each tab into logical categories, to try and make maintenance easier.
There are three possible Access Levels that you can specify for each Feature:
Enabled: The members of this Group will have access to this particular Feature.
Disabled: This Group is not granted access to this Feature. Note, however, that Users in this Group might be
granted access to this Feature by a different Group.
Inherited: Whether this Feature is Enabled or Disabled is deferred to the Feature's Parent Category.
Its current inherited value is reflected in the coloured square next to the dropdown; Red indicates it is currently Disabled, while Green indicates it is currently Enabled. Top-level Parents in a category cannot be set to
Inherited.
If Users are part of multiple Groups, they will always use the least-restrictive Group for a particular Feature. In other
words, a given User will have access to a Feature as long as he is part of at least one currently active Group that has
access to that Feature, regardless of whether or not his other Groups typically allow it.
Note that the only settings here that have an actual impact on rendering are the Concurrent Tasks and CPU Affinity
settings. Furthermore, the CPU Affinity feature is only supported on Windows and Linux operating systems, since
OSX does not support process affinity.
General
These are some general Slave settings:
Slave Description: A description of the selected Slave. This can be used to provide some pertinent information
about the slave, such as certain system information.
Slave Comment: A short comment regarding the Slave. This can be used to inform other users why certain
changes were made to that Slave's settings, or of any known potential issues with that particular Slave.
Normalized Render Time Multiplier: This value is used to calculate the normalized render time of Tasks. For
example, a Slave that normally takes twice as long to render a Task should be assigned a multiplier of 2.
Normalized Task Timeout Multiplier: This value is used to calculate the normalized render time of Task
Timeouts. Typically, this should be the same value as above.
Concurrent Task Limit Override: The concurrent Task Limit for the Slave. If 0, the Slave's CPU count is
used as the limit.
Host Name/IP Address Override: Overrides the Host name/IP address for remote commands.
MAC Address Override: This is used to override the MAC Address associated with this Slave. This is useful
in the event that the slave defaults to a different MAC Address than the one needed for Wake On Lan.
Region: The Slave's region. Used for cross platform rendering. Default is None. See Regions for more
information.
Exclude Jobs in the none Pool: Enable this option to prevent the Slave from picking up Jobs that are assigned
to the none Pool.
Exclude Jobs in the none Group: Enable this option to prevent the Slave from picking up Jobs that are
assigned to the none Group.
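The Normalized Render Time Multiplier described above amounts to a division: a Slave that takes twice as long gets a multiplier of 2, so dividing by the multiplier makes render times comparable across Slaves. A minimal illustrative helper (not part of Deadline's API):

```python
def normalized_render_time(actual_seconds, multiplier):
    # A Slave with multiplier 2 renders at half speed, so its raw times
    # are halved to compare fairly against a multiplier-1 Slave.
    return actual_seconds / multiplier
```

For example, a 120-second render on a multiplier-2 Slave normalizes to the same 60 seconds as a 60-second render on a multiplier-1 Slave.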
Idle Detection
These settings can be used to override the global Slave Scheduling settings for the slave (if there are any). Idle Detection can be
used to start the slave when its machine becomes idle (based on keyboard and mouse activity), and stop the slave when
its machine is in use again. Note that Idle Detection is managed by the Launcher, so the Launcher must be running for this feature
to work.
Start Slave When Machine Idle For: If enabled, the Slave will be started on the machine if it is idle. A
machine is considered idle if there hasn't been any keyboard, mouse or tablet activity for the specified amount
of time.
Only Start Slave If CPU Usage Less Than: If enabled, the slave will only be launched if the machine's CPU
usage is less than the specified value.
Only Start Slave If Free Memory More Than: If enabled, the slave will only be launched if the machine has
more free memory than the specified value (in Megabytes).
Only Start Slave If These Processes Are Not Running: If enabled, the slave will only be launched if the
specified processes are not running on the machine.
Only Start If Launcher Is Not Running As These Users: If enabled, the slave will only be launched if the
launcher is not running as one of the specified users.
Stop Slave When Machine Is No Longer Idle: If enabled, the Slave will be stopped when the machine is no
longer idle. A machine is considered idle if there hasn't been any keyboard, mouse or tablet activity for the
specified amount of time.
Only Stop Slave If Started By Idle Detection: If enabled, the Slave will only be stopped when the machine is
no longer idle if that Slave was originally started by Idle Detection. If the Slave was originally started manually,
it will not be stopped.
Allow Slave To Finish Its Current Task When Stopping: If enabled, the Slave application will not be closed
until it finishes its current Task.
There are some limitations with Idle Detection depending on the operating system:
On Windows, Idle Detection will not work if the Launcher is running as a service. This is because the service
runs in an environment that is separate from the Desktop, and has no knowledge of any mouse or keyboard
activity.
On Linux, the Launcher uses X11 to determine if there has been any mouse or keyboard activity. If X11 is not
available, Idle Detection will not work.
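The start conditions above combine into a simple predicate. The sketch below is illustrative only: the Launcher obtains last-activity times from the OS (e.g. via X11 on Linux), and the parameter names here are assumptions:

```python
def machine_is_idle(seconds_since_input, idle_threshold_s,
                    cpu_usage_pct, max_cpu_pct):
    # Idle long enough ("Start Slave When Machine Idle For") AND CPU
    # usage below the ceiling ("Only Start Slave If CPU Usage Less Than").
    return (seconds_since_input >= idle_threshold_s
            and cpu_usage_pct < max_cpu_pct)
```

The remaining conditions (free memory, forbidden processes, Launcher user) would be further `and` clauses on the same predicate.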
Job Dequeuing
These settings are used to determine when a Slave can dequeue Jobs.
All Jobs: In this mode, the Slave will dequeue any job.
Only Jobs Submitted From This Slave's Machine: In this mode, the Slave will only dequeue jobs submitted
from the machine it's running on.
Only Jobs Submitted From These Users: In this mode, the Slave will only dequeue jobs submitted by the
specified users.
CPU Affinity
These settings affect the number of CPUs the Slave renders with (Windows and Linux only):
Override CPU Affinity: Enable this option to override which CPUs the Slave and its child processes are limited
to.
Specify Number of CPUs to use: Choose this option if you just want to limit the number of CPUs used, and
you aren't concerned with which specific CPUs are used.
Select Individual CPUs: Choose this option if you want to explicitly pick which CPUs are used. This is useful
if you are running multiple Slaves on the same machine and you want to give each of them their own set of
CPUs.
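On Linux, the affinity operations described above can be expressed with the standard library; Deadline applies them to the Slave and its child render processes internally. A sketch under that assumption (the helper name is hypothetical, and `os.sched_getaffinity`/`os.sched_setaffinity` are Linux-only):

```python
import os

def pick_cpus(count):
    """Pick the first `count` CPUs this process may currently run on
    ("Specify Number of CPUs to use")."""
    available = sorted(os.sched_getaffinity(0))  # Linux-only call
    return set(available[:count])

# Applying an explicit CPU set ("Select Individual CPUs") would then be:
#   os.sched_setaffinity(0, pick_cpus(4))
```

Running two Slaves on one machine with disjoint `pick_cpus` results is the multi-Slave scenario mentioned above: each Slave's render processes inherit its affinity set and stay off the other's CPUs.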
Extra Info
Like jobs, extra arbitrary properties can also be set for slaves.
The Extra Info 0-9 properties can be renamed from the Slaves section of the Repository Configuration, and have
corresponding columns in the Slave list that can be sorted on.
You can use the Slave Report panel's right-click menu to save reports as files to send to Deadline Support. You can
also delete reports from this menu.
In addition to viewing Slave reports, you can also view the Slave's history. The History window can be brought up
from the Slave panel's right-click menu by selecting the View Slave History option.
If the Pulse panel is not visible, see the Panel Features documentation for instructions on how to create new panels in
the Monitor.
You can also auto-configure a Pulse instance by right-clicking on it in the Monitor and selecting Auto Configure
Pulse. This will automatically make this Pulse the Primary Pulse, and set its connection settings.
General
These are some general Pulse settings:
This Pulse Is The Primary: If enabled, this is the Primary Pulse that the Slaves will connect to. If there is no
Primary, the Slaves will not be able to connect to Pulse.
Override Port: If enabled, this port will be used by Pulse instead of a random port.
Host Name/IP Address Override: Overrides the Host name/IP address used by the Slaves to connect to Pulse,
and for remote commands.
MAC Address Override: This is used to override the MAC Address associated with this Pulse. This is useful
in the event that the pulse defaults to a different MAC Address than the one needed for Wake On Lan.
Region: The region for Pulse. Used for path mapping when executing commands with the Web Service.
When the Slaves connect to Pulse, they will use Pulse's host name, unless the option to use Pulse's IP address is
enabled in the Pulse Settings in the Repository Options. Use the Host Name/IP Address Override setting above to
override what the Slaves use to connect to Pulse.
This allows you to set the repository path in a single location. When a Slave starts up, it will automatically pull the
repository path from Pulse and apply some settings from it before fully initializing. See the Auto Configuration
documentation for more information.
Slave Throttling
Pulse supports a throttling feature, which is helpful if you're submitting large files with your jobs. This is used to limit
the number of Slaves that copy over the job and plugin files at the same time. See the Network Performance Guide
documentation for more information.
Power Management
Power management is a system for controlling how machines startup and shutdown automatically based on sets of
conditions on the render farm, including job load and temperature. Power management is built into Pulse, so Pulse must
be running to use this feature. The only exception to this rule is Temperature checking. See the Power Management
documentation for more information.
Statistics Gathering
While Pulse isn't required to gather job statistics, it is required to gather the Slave and Repository statistics. See the
Farm Statistics documentation for more information.
Web Service
While Deadline has a standalone Web Service application, Pulse also has a web service feature built in. The web
service can be used to get information over an Internet connection. It is used by the Mobile application, and can also
be used to display information in a web page. See the Web Service documentation for more information.
When adding a new Cloud Region you'll have to enter all of your credentials and settings for that particular provider.
You can look at the documentation for each plugin for further details about all the settings and credentials. Enabling
the region will show instances in the Cloud Panel. Your credentials need to be verified before you're able to enable
the region to work with the Balancer.
Basic Configuration
The basic configuration options are:
Enabled: Enabling the region makes it usable by the Balancer.
Region Preference: Weighting towards the region.
Region Budget: Total Budget for a region. Governs how many instances will be started for this region.
Asset Checking
Asset Checking can be used to sync assets between the repository and the slaves. The Asset Checking options are:
Enable Asset Checking: Enables asset crawler for jobs with assets.
Asset Crawler Hostname: Hostname for the Asset Crawler.
Asset Crawler Port: Port number for the Asset Crawler.
Asset Crawler OS: Operating system of the Asset Crawler.
The asset script itself can be found in the vmx folder in the Repository, and is called AssetCrawler_Server.py.
Balancer Plugins
The Balancer uses an algorithm that's defined in a Balancer Plugin. That can be set in the Balancer Settings section of the
Repository Configuration. We've included a default algorithm that should be fine for most use cases, but you can write
your own for your specific needs.
Group Mappings
Group Mappings are the heart of the Balancer. They tell the Balancer what kinds of instances to start for each group.
A Group Mapping is composed mainly of a group, an image, a hardware type, and a budget. The image and hardware
type are from the provider. The Cost is how much of the regions budget will be consumed by each instance.
You can also add Pools to a mapping so that instances will be started in those pools.
You can also auto-configure a Balancer instance by right-clicking on it in the Monitor and selecting Auto Configure
Balancer. This will automatically make this Balancer the Primary Balancer.
General
These are some general Balancer settings:
This Balancer Is The Primary: If enabled, this is the Primary Balancer.
Host Name/IP Address Override: Overrides the Host name/IP address for remote commands.
MAC Address Override: This is used to override the MAC Address associated with this Balancer. This is
useful in the event that the balancer defaults to a different MAC Address than the one needed for Wake On Lan.
Region: The region for Balancer.
Note that when multiple Balancer instances are running, only the Primary Balancer is starting and stopping virtual
instances.
Where:
PW = priority weight
EW = error weight
SW = submission time weight
RW = rendering task weight
RB = rendering task buffer
NOW = the current repository time
Note that because the job submission time is measured in seconds, it will have the greatest impact on the overall
weight. Reducing the SW value can help reduce the submission time's impact on the weight value.
There is also an experimental option to enhance the balancing logic. When this option is enabled, the slaves will use
the database to get a more accurate snapshot of all the rendering jobs in the farm, and use this information to make
better decisions about which job they should be rendering. Testing has shown that when this option is enabled, a
proper distribution of Slaves among jobs is much more consistent, and Slaves no longer jump between jobs of the
same priority. The result is more predictable behavior, and less wasted time due to the overhead of switching between
jobs that are expensive to start up.
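The variables above combine into a per-job weight. The exact formula is defined outside this excerpt, so the combination below is an assumption inferred from the descriptions: submission age and the rendering-task buffer raise the weight, while errors and rendering tasks beyond the buffer lower it.

```python
def job_weight(priority, errors, rendering_tasks, submit_time, now,
               PW=1, EW=1, SW=1.0, RW=1, RB=0):
    # ASSUMED combination of the documented terms, for illustration only.
    age_seconds = now - submit_time  # measured in seconds, so it dominates
    return (priority * PW
            + age_seconds * SW
            - errors * EW
            - (rendering_tasks - RB) * RW)
```

Note how a job submitted an hour ago picks up 3600 weight from age alone with SW = 1.0, which is why the text recommends reducing SW if submission time is overwhelming the other terms.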
right-click menu.
The dialogs are very similar to each other, but the nuances between the two are described below in detail. Note that if
you used the Slave panel's right-click menu to open these dialogs, they will be pre-filtered to just show the slaves that
you right-clicked on. They will also show the same columns that are currently being shown in the slave list.
Group Management Dialog
From here, you can manage individual Groups, and assign them to various Slaves. It is a bit simpler than the Pool
Management Dialog, which will be covered below in more detail, since it does not have to worry about the order of
Groups for a given Slave.
Only Show Slaves Assigned to a Selected Group: This option will filter the displayed Slaves to only
include the ones that are currently assigned to at least one of the selected Groups.
Group Operations: These operations are used to manipulate which Groups are assigned to which Slaves. They
typically require a selection of one or more Groups and one or more Slaves to be active.
Add: This will add all of the selected Groups to all of the selected Slaves, if they aren't already assigned.
Remove: This will remove all of the selected Groups from all of the selected Slaves, if applicable.
Copy: This will copy the groups from the selected slave to the clipboard.
Paste: This will paste the groups that were copied using the Copy button to the selected slaves.
Clear: This will clear all the groups from all of the selected Slaves. This option does not require a Group
to be selected.
Pool Management Dialog
The Pool Management dialog functions similarly to the Group Management dialog described above, but with a few
added options to deal with managing Pool Ordering for individual Slaves.
The functions you can perform here are as follows. Note that many of these overlap with the Group Management functionality described in the previous section.
Pools: This section displays existing Pools and allows you to manipulate them, or create new ones. Your
selection here will determine which Pools will be affected by the Pool Operations described below.
New: This will create a new Pool in the Repository, and allow you to assign it to Slaves. You will be
prompted for a name for the new Pool; note that Pool names cannot be changed once the Pool has been
created. Adding a Pool with the name of a previously Deleted Pool will effectively re-instate that Pool if it
hasn't been Purged yet (see below).
Delete: This will Delete all of the selected Pools from the Repository, and enable the option to Purge them
(described below).
Purge Obsolete Pools on Close: This will purge any obsolete (deleted) Pools from existing Jobs and
remove them from any Slaves that may have them in their list. They will be replaced with the Pool
selected in the drop down. Note that if you choose not to Purge the obsolete Pools right now, you can
always return to this dialog and do it later.
Priority Distribution: This graph visualizes how many Slaves have one of the selected Pools as #1 priority, #2 priority, etc. It also displays how many Slaves are not currently assigned to the selected Pools.
Slaves: This section displays a list of all known Slaves that have connected to your Repository. Your selection
here will determine which Slaves will be affected by the Pool Operations described below.
Only Show Slaves Assigned to a Selected Pool: This option will filter the displayed Slaves to only include
the ones that are currently assigned to at least one of the selected Pools.
Pool Operations: These operations are used to manipulate which Pools are assigned to which Slaves. They
typically require a selection of one or more Pools and one or more Slaves to be active.
Add: This will add all of the selected Pools to all of the selected Slaves, if they aren't already assigned.
Remove: This will remove all of the selected Pools from all of the selected Slaves, if applicable.
Promote: This will bump up the selected Pools by one position in all of the selected Slaves' Pool lists.
Any selected Slaves that are not assigned to the selected Pool(s) are unaffected.
Demote: This will bump down the selected Pools by one position in all of the selected Slaves' Pool lists.
Any selected Slaves that are not assigned to the selected Pool(s) are unaffected. Note that a Pool cannot
be demoted below the "none" Pool; the "none" Pool is always assigned the lowest priority by
Slaves.
Copy: This will copy the pools from the selected slave to the clipboard.
Paste: This will paste the pools that were copied using the Copy button to the selected slaves.
Clear: This will clear all the Pools from all of the selected Slaves. This option does not require a Pool to
be selected.
Preventing Slaves from Rendering Jobs in the none Pool or Group
In some cases, it may be useful to prevent one or more Slaves from rendering Jobs that are assigned to the "none" Pool
or Group. For example, you may have a single machine that you want to render only Quicktime Jobs. Normally, you
could add this machine to a "quicktime" Group, but if there are no Quicktime Jobs, the Slave could move on to Jobs
that are in the "none" Group. If you want this machine to only be available for Quicktime Jobs, you can configure it to
exclude Jobs in the "none" Group.
The option to exclude Jobs in the none Pool or Group can be found in the Slave Settings in the Monitor.
show_b
Now say we have 10 machines in our render farm, and we want to give each show top priority on half of it. To do this,
we'd just assign the Pools to our Slaves like this:
Slaves 1-5:
1. show_a
Slaves 6-10:
1. show_b
With this setup, if Jobs from both shows are in the queue, then Slaves 1-5 will pick up the Jobs from show_a, while
Slaves 6-10 will work on Jobs from show_b. This effectively splits our farm in half, like we desired, but with this
configuration Slaves 1-5 would sit idle once show_a finishes production, even if there are plenty of show_b Jobs in the
queue. The reverse would also be true if show_b production slows down while show_a is still ramping up.
To accomplish this second goal of maximizing our resources, we'll assign the Pools to our Slaves as follows:
Slaves 1-5:
1. show_a
2. show_b
Slaves 6-10:
1. show_b
2. show_a
Now, Slaves 1-5 will still give top priority to show_a Jobs, and Slaves 6-10 will similarly give top priority to show_b
Jobs. However, if there are no show_a Jobs currently in the queue, Slaves 1-5 will start working on show_b Jobs
until another show_a Job comes along. Similarly, Slaves 6-10 would start working on show_a if no show_b Jobs were
available.
This concept extends to any number of shows/pools; you just have to make sure you have an even Priority
Distribution across your farm (the Priority Distribution graph should help with that). Here's an example of what the
Priority Distribution for a 3-show farm with 15 Slaves could look like:
Slaves 1-5:
1. show_a
2. show_b
3. show_c
Slaves 6-10:
1. show_b
2. show_c
3. show_a
Slaves 11-15:
1. show_c
2. show_a
3. show_b
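The pool-ordering behaviour described above can be sketched as follows. This is an illustrative model of the selection logic, not Deadline's actual scheduler:

```python
# Each Slave walks its ordered pool list and takes work from the first
# pool that has queued Jobs; pools further down the list act as fallbacks.

def pick_pool(slave_pools, queued_jobs_by_pool):
    """Return the first pool in the Slave's priority order that has work queued."""
    for pool in slave_pools:
        if queued_jobs_by_pool.get(pool):
            return pool
    return None  # nothing to do; the Slave idles

# show_a has finished production, so Slaves 1-5 fall back to show_b:
queue = {"show_a": [], "show_b": ["job_42"]}
```

With this model, `pick_pool(["show_a", "show_b"], queue)` returns `"show_b"` for Slaves 1-5, matching the fallback behaviour described above.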
5.8.3 Limits
Limits can be managed from the Limit list in the Monitor while in Super User mode (or as a user with appropriate
User Group privileges). This list shows all the Limits that are in your Repository. It also displays useful information
about each Limit, such as its name, its limit, and the number of Limit stubs that are currently in use. You can access
many options for the Limits (listed below) by right-clicking on them, and you can create a new Limit by clicking on
the [+] button in the Limit list's toolbar.
If the Limits panel is not visible, see the Panel Features documentation for instructions on how to create new panels
in the Monitor.
New Limit
Use this option to add a new Limit to your Repository.
You can modify the following settings for the new Limit:
Name
The name of the new Limit. Note that this setting cannot be changed once the Limit has been created.
Usage Level
The level at which a Limit Stub will be checked out. Slave is the default, and will require each Slave
to acquire a Stub; if Machine is selected, only a single Stub will be required for all Slaves on the same
machine. Conversely, if Task is selected, Slaves will try to acquire one Stub per concurrent Render
Thread. Note that this setting cannot be changed after Limit creation.
Limit
The maximum number of simultaneous uses that this Limit can support at any given time. What counts
as a use is based on the Usage Level (i.e., at the Machine, Slave, or Task level).
Release at Task Progress
If enabled, Slaves will release their Limit stub when the current Task reaches the specified percentage.
Note that not all Plugins report Task progress.
Whitelisted/Blacklisted Slaves
If Slaves (or Machines, depending on Level selected above) are on a Blacklist, they will never try to render
Jobs associated with this Limit. If Slaves/Machines are on a Whitelist, then they are the only ones that
will try to render Jobs associated with this Limit. Note that an empty blacklist and an empty whitelist are
functionally equivalent, and have no impact on which machines the job renders on.
Slaves Excluded From Limit
These Slaves (or Machines, depending on the Level selected above) will ignore this Limit and won't contribute to the Limit's stub count. This is useful if you are juggling a mix of floating and node-locked
licenses, in which case your machines with node-locked licenses should be placed on this list.
Clone Limit
This option allows you to create a new Limit while using an existing Limit as a template. It will bring up a dialog
very similar to the one pictured in Create Limit, with all the same options. This option is handy if you want to create
a Limit that is very similar to an existing one, but with a small variation.
Modify Limit Properties
This option allows you to edit the settings for an existing Limit. All of the settings described in the New Limit section
above can be changed except for the Limit's Name and Usage Level, which cannot be changed once the Limit has
been created.
Reset Limit Usage Count
Sometimes a Limit stub will get orphaned, meaning that it is counting against the Limit's usage count, but no machines
are actually using it. Deadline will eventually clean up these orphaned Limit stubs on its own. This option provides
the means to delete all existing stubs immediately (whether they are orphaned or not), which will require Slaves to
acquire them again.
Delete Limit
Removes an existing Limit from your Repository. Any Jobs associated with deleted Limits will still be able to render,
but they will print out Warnings indicating that the Limit no longer exists.
If a job is assigned to a Limit, and that Limit has a whitelist, the job will only render on the slaves in that
whitelist.
If a job is assigned to two Limits, and one of those Limits is currently maxed out, the job will not be picked up
by any additional slaves. This is because a slave must be able to acquire all Limits that the job requires.
If a job is assigned to two Limits, and one of those Limits has slave_1 on its blacklist, slave_1 will never pick
up the job. This is because a slave must be able to acquire all Limits that the job requires.
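The rules above reduce to one principle: a slave may pick up a job only if it can satisfy every Limit the job requires. A short sketch of that check follows; the data structures and slave names are hypothetical, for illustration only:

```python
# A slave can acquire a job's Limits only if, for EVERY Limit:
#  - the slave is on the whitelist (when a whitelist exists),
#  - the slave is not on the blacklist, and
#  - the Limit is not already maxed out.

def can_acquire(slave, limits):
    for lim in limits:
        if lim["whitelist"] and slave not in lim["whitelist"]:
            return False  # whitelist present: only listed slaves may render
        if slave in lim["blacklist"]:
            return False  # blacklisted slaves never pick up the job
        if lim["in_use"] >= lim["max"]:
            return False  # Limit is maxed out; no stubs available
    return True

# Hypothetical Limits: one nearly full with a blacklist, one maxed out.
vray = {"whitelist": [], "blacklist": ["slave_1"], "in_use": 9, "max": 10}
nuke = {"whitelist": [], "blacklist": [], "in_use": 5, "max": 5}
```

Here `slave_1` can never acquire the `vray` Limit, and no slave can pick up a job requiring both `vray` and `nuke` while `nuke` is maxed out, mirroring the two-Limit rules above.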
Job Machine Limits
If a job has a Machine Limit greater than 0, and that Limit is currently maxed out, the job will not be picked up
by any additional slaves.
If a job has a whitelist, the job will only render on the slaves in that whitelist.
If you've resolved the problems that were preventing the Job from rendering properly, you can right-click on it in the
Monitor and select Resume Failed Job. You will then be prompted with the option to ignore or override Failure
Detection for this Job going forward. Note that an Error Limit of 0 indicates that there is no limit, and the Job will
never be marked as Failed by Failure Detection.
If you choose not to ignore Failure Detection, make sure to clear the Job's errors, or a single new error will cause
the Job to fail again, because its error count still exceeds the limit. To clear a Job's errors, simply delete all of the
Job's Error Reports using the Job Reports Panel.
You can manage a Job's bad Slave list by navigating to the Failure Detection section of a Job's Properties dialog. There is also an option in
this section to have your Job completely ignore Slave Failure Detection, if you wish.
5.10 Notifications
5.10.1 Overview
Deadline can be configured to notify Users when their Jobs finish, or if they have failed. In addition, Deadline can
be configured to send notifications to administrators when certain events occur on the farm (e.g., when a Slave has
stalled, or if a Slave is being shutdown by Power Management).
In order to receive email notifications, the user needs to set their Email Address setting and enable the Email Notification option. Note that email notifications will only be sent if the SMTP settings in the Repository Options are set
properly, as mentioned in the previous section.
In order to receive popup message notifications, the user needs to have the Launcher running on their workstation, and
have their workstation machine name specified in their User Settings.
There are a few places in the Monitor you can find the option to connect to the Slave log:
The Slave panel right-click menu.
The Task panel right-click menu. Note that it will only appear for rendering or completed tasks.
The Job Report panel right-click menu.
The Slave Report panel right-click menu.
When executing an arbitrary command, if you want to execute a DOS command on a Windows machine, the command
must be preceded with cmd /C. This opens the DOS prompt, executes the command, and closes the prompt. For
example:
cmd /C echo "foo" > c:\test.txt
These remote commands do not allow user input for any command that requires a prompt. An example where this
might cause confusion is Microsoft's xcopy command: if xcopy cannot determine whether the destination is a file or a folder,
it will immediately exit as though successful instead of asking what should be done.
If a command returns a non-zero exit code, the command will be interpreted as having failed.
Slave Remote Control Options
These options are only available in the Slave Remote Control menu:
Search For Jobs: Forces the Slave to search the Repository for a job to do.
Cancel Current Tasks: Forces the Slave to cancel its current tasks.
Start Slave: Starts the Slave instance.
Stop Slave: Stops the Slave instance.
Restart Slave: Restarts the Slave instance.
Continue Running After Current Task Completion: The Slave will continue to run after it finishes its current
task.
Stop Slave After Current Task Completion: The Slave will stop after the current task is completed.
Restart Slave After Current Task Completion: The Slave will restart after the current task is completed.
Shutdown Machine After Current Task Completion: The Machine running the Slave will stop after the
current task is completed.
Restart Machine After Current Task Completion: The machine running the Slave will restart after the current
task is completed.
Start All Slave Instances: Starts all the slave instances on the selected machines.
Start New Slave Instance: Starts a new slave instance with the specified name on the selected machine.
You can view the results of a remote command by clicking on the command in the list. The full results will be shown
in the log window below. All successful commands will start with Connection Accepted.
The following options are available in the ARD window in the Monitor:
Machine IP Address(es): Specify which machines to connect to. Use a comma to separate multiple
addresses.
Hide this window if running from a right-click Scripts menu: If enabled, this window will be hidden if run
from a right-click menu in the Monitor. You can always run it from the main Scripts menu to see this window.
Radmin
Radmin is fast, secure and affordable remote-control software that enables you to work on a remote computer in real
time as if you were sitting in front of it.
The following options are available in the Radmin window in the Monitor:
Machine Name(s): Specify which machines to connect to. Use a comma to separate multiple machine names.
Radmin Viewer: The Radmin viewer executable to use.
Radmin Port: The Radmin port.
Hide this window if running from a right-click Scripts menu: If enabled, this window will be hidden if run
from a right-click menu in the Monitor. You can always run it from the main Scripts menu to see this window.
Remote Desktop Connection (RDC)
With Remote Desktop Connection (RDC), you can easily connect to a terminal server or to another computer running
Windows. All you need is network access and permissions to connect to the other computer.
The following options are available in the RDC window in the Monitor:
Machine Name(s): Specify which machines to connect to. Use a comma to separate multiple machine names.
Settings:
No Settings: When this option is chosen, no existing RDP settings are used to connect.
Settings File: When this option is chosen, the specified RDP config file is used to connect.
Settings Folder: When this option is enabled, existing RDP config files in this folder are used to
connect. If the machine does not have an RDP config file, youll have the option to save one before
connecting.
Hide this window if running from a right-click Scripts menu: If enabled, this window will be hidden if run
from a right-click menu in the Monitor. You can always run it from the main Scripts menu to see this window.
VNC
Virtual Network Computing (VNC) is a desktop protocol for remotely controlling another computer. It transmits keyboard and mouse events from one computer to another, relaying the screen updates back in the other direction
over a network. There are many options available for VNC software. TightVNC, RealVNC, UltraVNC, and Chicken
have all been used successfully with Deadline.
The following options are available in the VNC window in the Monitor:
Machine Name(s): Specify which machines to connect to. Use a comma to separate multiple machine names.
VNC Viewer: The VNC viewer executable to use.
Password: The VNC password.
VNC Port: The VNC port.
Remember Password: Enable to remember your password between sessions.
Hide this window if running from a right-click Scripts menu: If enabled, this window will be hidden if run
from a right-click menu in the Monitor. You can always run it from the main Scripts menu to see this window.
For example, if you have 100 Slaves and you're submitting 500 MB scene files with your jobs, you may notice a
performance hit if all 100 Slaves try to copy over the Job and Plugin files at the same time. You could set the Slave
Throttle Limit to 10, so that only 10 of those Slaves will ever be copying those files at the same time. When a Slave goes
on to render subsequent tasks for the same Job, it is not affected by the throttling feature, since it already has the
required files. Note that for this feature to work, you must be running Pulse.
From here, you can choose a server that's better equipped to handle the load, which will help improve the performance
and stability of your Repository machine, especially if it is also hosting your Database backend. In a mixed farm
environment, you need to ensure that the paths for each operating system resolve to the same location. Otherwise, a
scene file submitted with the Job on one operating system will not be visible to a Slave running on another.
To add a new Path Mapping, just click the Add button. Then, you specify the path that needs to be swapped out,
along with the paths that will be swapped in based on the operating system. You can also specify a region so you can
have different mappings for the same path across different regions. For best results, make sure that all paths end with
their appropriate path separator (/ or \). This helps avoid mangled paths that result from one path having a trailing
separator and the other not.
Note that these swaps only work one-way, so if you are swapping from PC to Linux and vice-versa, you will need
two separate entries. For example, let's say the PC machines use the path \\server\share\ for assets, while the Linux
machines use the path /mnt/share/. Here is what your two entries should look like:
Entry 1 (replaces the Linux path with the PC path on PCs):
Replace Path: /mnt/share/
Windows Path: \\server\share\
Linux Path:
Mac Path:
Entry 2 (replaces the PC path with the Linux path on Linux):
Replace Path: \\server\share\
Windows Path:
Linux Path: /mnt/share/
Mac Path:
If you have Mac machines as well, you will need three entries. For example, if the Macs use /Volumes/share/ to
access the assets from the previous example, here are what your three entries should look like:
Entry 1 (replaces the Linux path with the PC path on PCs and the Mac path on Macs):
Replace Path: /mnt/share/
Windows Path: \\server\share\
Linux Path:
Mac Path: /Volumes/share/
Entry 2 (replaces the PC path with the Linux path on Linux and the Mac path on Macs):
Replace Path: \\server\share\
Windows Path:
Linux Path: /mnt/share/
Mac Path: /Volumes/share/
Entry 3 (replaces the Mac path with the PC path on PCs and the Linux path on Linux):
Replace Path: /Volumes/share/
Windows Path: \\server\share\
Linux Path: /mnt/share/
Mac Path:
By default, Deadline just uses regular string replacement to swap out the paths. In this case, Deadline takes care of
the path separators (/ and \) automatically. If you want more flexibility, you can configure your path mappings to
use regular expressions, but note that you will then need to handle the path separators manually using [/\\] in your regular
expressions.
5.13.4 Regions
Regions can be used to set up different mappings for the same path across your farm. For example, let's say we have a
local farm and a remote farm, and we want to map the path /mnt/share/ in our remote farm but not in our local farm.
All we have to do is set the region of our mapping to the same region our remote Slaves are in. Slaves in that region
will replace /mnt/share/, but all the other Slaves will use /mnt/share/ normally. We could also set up an alternate path
for the Slaves in our local farm.
A mapping in the All region will apply to every region. It should be noted that a region's mapping is applied before
the All region's.
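The one-way, region-aware string replacement described above could be sketched like this. The mapping entries, region name, and paths below are illustrative, and only the Windows target column is shown:

```python
# Region-specific mappings are checked before the "All" region, per the
# note above. Each entry is (region, replace_path, windows_path).
MAPPINGS = [
    ("remote", "/mnt/share/", "\\\\server\\share\\"),
    ("All", "/Volumes/share/", "\\\\server\\share\\"),
]

def map_path(path, slave_region):
    """Apply the first matching mapping for this slave's region (or All)."""
    for region, old, new in MAPPINGS:
        if region in (slave_region, "All") and old in path:
            return path.replace(old, new)
    return path  # slaves outside the region use the path unchanged
```

A slave in the `remote` region maps `/mnt/share/scene.max` to `\\server\share\scene.max`, while a slave in any other region leaves it untouched, matching the behaviour described above.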
CHAPTER SIX: ADVANCED FEATURES
For the job, we want a task chunk size of 2, we want to submit to the 3dsmax group, we want a priority of 50, and we
want a machine limit of 5. Finally, we want to call the job "3dsmax command line job". The command line to submit
this job would look like this:
deadlinecommand.exe
-SubmitCommandLineJob
-executable "c:\Program Files\Autodesk\3dsmax8\3dsmaxcmd.exe"
-arguments "-start:<STARTFRAME> -end:<ENDFRAME>
-width:480 -height:320 <QUOTE>\\shared\path\scene.max<QUOTE>"
-frames 1-10
-chunksize 2
-group 3dsmax
-priority 50
-name "3dsmax command line job"
-prop MachineLimit=5
By default, a Maintenance job will render frame 0 on every machine. To render a different frame, or a sequence of
frames, you can specify the MaintenanceJobStartFrame and MaintenanceJobEndFrame options in the job info file:
MaintenanceJob=True
MaintenanceJobStartFrame=1
MaintenanceJobEndFrame=5
Note that if you specify a whitelist or blacklist in the job info file, the number of tasks that are created for the Maintenance job will equal the number of valid slaves that the job could render on.
Another way to submit a Maintenance job is to right-click on an existing job in the Monitor and choose the Resubmit
Job option. See the Resubmitting Jobs section of the Controlling Jobs documentation for more information.
ChunkSize=<1 or greater> : Specifies how many frames to render per task (default = 1).
ForceReloadPlugin=<true/false> : Specifies whether or not to reload the plugin between subsequent frames of
a job (default = false). This deals with memory leaks or applications that do not unload all job aspects properly.
SynchronizeAllAuxiliaryFiles=<true/false> : If enabled, all job files (as opposed to just the job info and plugin
info files) will be synchronized by the Slave between tasks for this job (default = false). Note that this can add
significant network overhead, and should only be used if you plan on manually editing any of the files that are
being submitted with the job.
InitialStatus=<Active/Suspended> : Specifies what status the job should be in immediately after submission
(default = Active).
LimitGroups=<limitGroup,limitGroup,limitGroup> : Specifies the limit groups that this job is a member of
(default = blank).
MachineLimit=<0 or greater> : Specifies the maximum number of machines this job can be rendered on at
the same time (default = 0, which means unlimited).
MachineLimitProgress=<0.1 or greater> : If set, the slave rendering the job will give up its current machine
limit lock when the current task reaches the specified progress. If negative, this feature is disabled (default =
-1.0). The usefulness of this feature is directly related to the progress reporting capabilities of the individual
plugins.
Whitelist=<slaveName,slaveName,slaveName> : Specifies which slaves are on the job's whitelist (default =
blank). If both a whitelist and a blacklist are specified, only the whitelist is used.
Blacklist=<slaveName,slaveName,slaveName> : Specifies which slaves are on the job's blacklist (default =
blank). If both a whitelist and a blacklist are specified, only the whitelist is used.
ConcurrentTasks=<1-16> : Specifies the maximum number of tasks that a slave can render at a time (default
= 1). This is useful for script plugins that support multithreading.
LimitTasksToNumberOfCpus=<true/false> : If ConcurrentTasks is greater than 1, setting this to true will
ensure that a slave will not dequeue more tasks than it has processors (default = true).
Sequential=<true/false> : Sequential rendering forces a slave to render the tasks of a job in order. If an earlier
task is ever requeued, the slave won't go back to that task until it has finished the remaining tasks in order
(default = false).
Interruptible=<true/false> : Specifies if tasks for a job can be interrupted by a higher priority job during
rendering (default = false).
SuppressEvents=<true/false> : If true, the job will not trigger any event plugins while in the queue (default =
false).
NetworkRoot=<repositoryUNCPath> : Specifies the repository that the job will be submitted to. This is
required if you are using more than one repository (default = current default repository for the machine from
which submission is occurring).
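For example, a job info file combining several of the options above might look like this. All values and the slave names are illustrative:

```ini
; Illustrative job info fragment using the options documented above
ChunkSize=2
InitialStatus=Active
MachineLimit=5
ConcurrentTasks=2
LimitTasksToNumberOfCpus=true
Interruptible=true
Blacklist=slave_old_1,slave_old_2
```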
Cleanup Options
Protected=<true/false> : If enabled, the job can only be deleted by the job's user, a super user, or a user that
belongs to a user group that has permission to handle protected jobs. Other users will not be able to delete the
job, and the job will also not be cleaned up by Deadline's automatic house cleaning.
OnJobComplete=<Nothing/Delete/Archive> : Specifies what should happen to a job after it completes (default = Nothing).
DeleteOnComplete=<true/false> : Specifies whether or not the job should be automatically deleted after it
completes (default = false).
ArchiveOnComplete=<true/false> : Specifies whether or not the job should be automatically archived after it
completes (default = false).
OverrideAutoJobCleanup=<true/false> : If true, the job will ignore the global Job Cleanup settings and
instead use its own (default = false).
OverrideJobCleanup=<true/false> : If OverrideAutoJobCleanup is true, this will determine if the job should
be automatically cleaned up or not.
JobCleanupDays=<0 or greater> : If OverrideAutoJobCleanup and OverrideJobCleanup are both true, this is the
number of days to keep the job before cleaning it up.
OverrideJobCleanupType=<ArchiveJobs/DeleteJobs> : If OverrideAutoJobCleanup and OverrideJobCleanup are both true, this is the job cleanup mode.
Environment Options
EnvironmentKeyValue#=<key=value> : Specifies environment variables to set when the job renders. This
option is numbered, starting with 0 (EnvironmentKeyValue0), to handle multiple environment variables. For
each additional variable, just increase the number (EnvironmentKeyValue1, EnvironmentKeyValue2, etc). Note
that these variables are only applied to the rendering process, so they do not persist between jobs.
IncludeEnvironment=<true/false> : If true, the submission process will automatically grab all the environment
variables from the submitter's current environment and set them in the job's environment variables (default =
false). Note that these variables are only applied to the rendering process, so they do not persist between jobs.
UseJobEnvironmentOnly=<true/false> : If true, only the job's environment variables will be used at render
time (default = false). If false, the job's environment variables will be merged with the slave's current environment, with the job's variables overwriting any existing ones with the same name.
CustomPluginDirectory=<directoryName> : If specified, the job will look for the plugin it needs to render
in this location. If it does not exist in this location, the job will fall back on the Repository plugin directory. For example, if you are rendering with a plugin called MyPlugin, and it exists in \\server\development\plugins\MyPlugin,
you would set CustomPluginDirectory=\\server\development\plugins.
Failure Detection Options
OverrideJobFailureDetection=<true/false> : If true, the job will ignore the global Job Failure Detection
settings and instead use its own (default = false).
FailureDetectionJobErrors=<0 or greater> : If OverrideJobFailureDetection is true, this sets the number of
errors before the job fails. If set to 0, job failure detection will be disabled.
OverrideTaskFailureDetection=<true/false> : If true, the job will ignore the global Task Failure Detection
settings and instead use its own (default = false).
FailureDetectionTaskErrors=<0 or greater> : If OverrideTaskFailureDetection is true, this sets the number
of errors before a task for the job fails. If set to 0, task failure detection will be disabled.
IgnoreBadJobDetection=<true/false> : If true, slaves will never mark the job as bad for themselves. This
means that they will continue to make attempts at jobs that often report errors until the job is complete, or until
it fails (default = false).
SendJobErrorWarning=<true/false> : Specifies whether the job should send warning notifications when it reaches a certain
number of errors (default = false).
Timeout Options
MinRenderTimeSeconds=<0 or greater> : Specifies the minimum time, in seconds, a slave should render a
task for, otherwise an error will be reported (default = 0, which means no minimum). Note that if MinRenderTimeSeconds and MinRenderTimeMinutes are both specified, MinRenderTimeSeconds will be ignored.
MinRenderTimeMinutes=<0 or greater> : Specifies the minimum time, in minutes, a slave should render a
task for, otherwise an error will be reported (default = 0, which means no minimum). Note that if MinRenderTimeSeconds and MinRenderTimeMinutes are both specified, MinRenderTimeSeconds will be ignored.
TaskTimeoutSeconds=<0 or greater> : Specifies the time, in seconds, a slave has to render a task before it
times out (default = 0, which means unlimited). Note that if TaskTimeoutSeconds and TaskTimeoutMinutes are
both specified, TaskTimeoutSeconds will be ignored.
TaskTimeoutMinutes=<0 or greater> : Specifies the time, in minutes, a slave has to render a task before it
times out (default = 0, which means unlimited). Note that if TaskTimeoutSeconds and TaskTimeoutMinutes are
both specified, TaskTimeoutSeconds will be ignored.
StartJobTimeoutSeconds=<0 or greater> : Specifies the time, in seconds, a slave has to start a render job
before it times out (default = 0, which means unlimited). Note that if StartJobTimeoutSeconds and StartJobTimeoutMinutes are both specified, StartJobTimeoutSeconds will be ignored.
StartJobTimeoutMinutes=<0 or greater> : Specifies the time, in minutes, a slave has to start a render job
before it times out (default = 0, which means unlimited). Note that if StartJobTimeoutSeconds and StartJobTimeoutMinutes are both specified, StartJobTimeoutSeconds will be ignored.
OnTaskTimeout=<Error/Notify/ErrorAndNotify/Complete> : Specifies what should occur if a task times
out (default = Error).
EnableAutoTimeout=<true/false> : If true, a slave will automatically figure out if it has been rendering too
long based on some Repository Configuration settings and the render times of previously completed tasks (default = false).
EnableTimeoutsForScriptTasks=<true/false> : If true, then the timeouts for this job will also affect its
pre/post job scripts, if any are defined (default = false).
Dependency Options
JobDependencies=<jobID,jobID,jobID> : Specifies what jobs must finish before this job will resume (default
= blank). These dependency jobs must be identified using their unique job ID, which is output after the job
is submitted, and can be found in the Monitor in the Job ID column.
JobDependencyPercentage=<-1, or 0 to 100> : If between 0 and 100, this job will resume when all of its
job dependencies have completed the specified percentage number of tasks. If -1, this feature will be disabled
(default = -1).
IsFrameDependent=<true/false> : Specifies whether or not the job is frame dependent (default = false).
FrameDependencyOffsetStart=<-100000 to 100000> : If the job is frame dependent, this is the start frame
offset (default = 0).
FrameDependencyOffsetEnd=<-100000 to 100000> : If the job is frame dependent, this is the end frame
offset (default = 0).
ResumeOnCompleteDependencies=<true/false> : Specifies whether or not the dependent job should resume
when its dependencies are complete (default = true).
ResumeOnDeletedDependencies=<true/false> : Specifies whether or not the dependent job should resume
when its dependencies have been deleted (default = false).
ResumeOnFailedDependencies=<true/false> : Specifies whether or not the dependent job should resume
when its dependencies have failed (default = false).
RequiredAssets=<assetPath,assetPath,assetPath> : Specifies what asset files must exist before this job will
resume (default = blank). These asset paths must be identified using full paths, and multiple paths can be
separated with commas. If using frame dependencies, you can replace padding in a sequence with #
characters, and a task for the job will only be resumed when the required assets for that task's frame exist.
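As a hedged illustration of the # padding substitution described above (the helper name and sample path are hypothetical), each run of # characters can be replaced with the task's zero-padded frame number:

```python
import re

def resolve_asset(path: str, frame: int) -> str:
    """Replace each run of '#' with the frame number, zero-padded to the run's length."""
    return re.sub(r"#+", lambda m: str(frame).zfill(len(m.group(0))), path)

print(resolve_asset(r"\\fileserver\assets\cache_####.abc", 37))
# \\fileserver\assets\cache_0037.abc
```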
Notification Options
NotificationTargets=<username,username,username> : A list of users, separated by commas, who should be
notified when the job is completed (default = blank).
ClearNotificationTargets=<true/false> : If enabled, all of the job's notification targets will be removed (default
= false).
NotificationEmails=<email,email,email> : A list of additional email addresses, separated by commas, to send
job notifications to (default = blank).
OverrideNotificationMethod=<true/false> : If the job user's notification method should be ignored (default =
false).
EmailNotification=<true/false> : If overriding the job user's notification method, whether to use email notification (default = false).
PopupNotification=<true/false> : If overriding the job user's notification method, whether to use popup notification (default = false).
NotificationNote=<note> : A note to append to the notification email sent out when the job is complete (default
= blank). Separate multiple lines with [EOL], for example:
This is a line[EOL]This is another line[EOL]This is the last line
Script Options
PreJobScript=<path to script> : Specifies a full path to a python script to execute when the job initially starts
rendering (default = blank).
PostJobScript=<path to script> : Specifies a full path to a python script to execute when the job completes
(default = blank).
PreTaskScript=<path to script> : Specifies a full path to a python script to execute before each task starts
rendering (default = blank).
PostTaskScript=<path to script> : Specifies a full path to a python script to execute after each task completes
(default = blank).
Tile Job Options
TileJob=<true/false> : If this job is a tile job (default = false).
TileJobFrame=<frameNumber> : The frame that the tile job is rendering (default = 0).
TileJobTilesInX=<xCount> : The number of tiles in X for a tile job (default = 0). This should be specified
with the TileJobTilesInY option below.
TileJobTilesInY=<yCount> : The number of tiles in Y for a tile job (default = 0). This should be specified
with the TileJobTilesInX option above.
TileJobTileCount=<count> : The number of tiles for a tile job (default = 0). This is an alternative to specifying
the TileJobTilesInX and TileJobTilesInY options above.
Maintenance Job Options
MaintenanceJob=<true/false> : If this job is a maintenance job (default = false).
MaintenanceJobStartFrame=<frameNumber> : The first frame for the maintenance job (default = 0).
MaintenanceJobEndFrame=<frameNumber> : The last frame for the maintenance job (default = 0).
Extra Info Options
These are extra arbitrary properties that have corresponding columns in the Monitor that can be sorted on. There are a
total of 10 Extra Info properties that can be specified.
ExtraInfo0=<arbitrary value>
ExtraInfo1=<arbitrary value>
ExtraInfo2=<arbitrary value>
ExtraInfo3=<arbitrary value>
ExtraInfo4=<arbitrary value>
ExtraInfo5=<arbitrary value>
ExtraInfo6=<arbitrary value>
ExtraInfo7=<arbitrary value>
ExtraInfo8=<arbitrary value>
ExtraInfo9=<arbitrary value>
These are additional arbitrary properties. There is no limit on the number that are specified, but they do not have
corresponding columns in the Monitor.
ExtraInfoKeyValue0=<key=value>
ExtraInfoKeyValue1=<key=value>
ExtraInfoKeyValue2=<key=value>
ExtraInfoKeyValue3=<key=value>
...
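A minimal sketch of assembling a job info file from properties like these in Python. The property values and the commented-out deadlinecommand invocation are illustrative assumptions, not a prescribed workflow:

```python
import tempfile

# Hypothetical job properties; any key=value pair described in this
# section can be written to the job info file the same way.
job_info = {
    "Plugin": "3dsmax",
    "Name": "ExtraInfo Demo",
    "Frames": "0-10",
    "ExtraInfo0": "Regression Testing",
    "ExtraInfoKeyValue0": "TestID=344",
    "ExtraInfoKeyValue1": "DeveloperID=12",
}

# Write one key=value pair per line, as shown in the examples below.
with tempfile.NamedTemporaryFile("w", suffix=".job", delete=False) as f:
    f.write("\n".join(f"{k}={v}" for k, v in job_info.items()))
    job_info_path = f.name

# Submission would then pass this file (plus a plugin info file) to
# deadlinecommand, e.g.:
# subprocess.run(["deadlinecommand", job_info_path, plugin_info_path])
```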
Job Info File Examples
3ds Max Job Info File:
Plugin=3dsmax
ForceReloadPlugin=false
Frames=0-400
Priority=50
Pool=3dsmax
Name=IslandWaveScene_lighted01
Comment=Testing
OutputDirectory0=\\fileserver\Renders\OutputFolder\
OutputFilename0=islandWaveBreak_Std####.png
Fusion Job Info File:
Group=Fusion
Name=Fusion Dependency Test
OutputFilename0=\\fileserver\Renders\OutputFolder\dfusion_test####.tif
JobDependencies=546cc87357dbb04344a5c6b5,53d27c9257dbb027b8a4ccd2
InitialStatus=Suspended
LimitGroups=DFRNode
ExtraInfo0=Regression Testing
ExtraInfoKeyValue0=TestID=344
ExtraInfoKeyValue1=DeveloperID=12
6.2.3 Configuration
Power Management can be configured from the Monitor by selecting Tools -> Configure Power Management. You
will need to be in Super User mode for this, if you are not part of a User Group that has access to this feature.
Machine Groups are used by Power Management to organize Slave machines on the farm, and each group has four
sections of settings that can be configured independently of each other. To add a new Machine Group, simply click the
Add button in the Machine Group section.
Overrides: Define overrides for different days and times. Simply specify the day(s) of the week, the time
period, the minimum number of Slaves, and the idle shutdown time for each override required. For example, if
more machines are required to be running continuously for Friday evening and Saturday afternoon, this can be
accomplished with an override.
Override Shutdown Order: Whether or not to define the order in which Slaves are shut down. If disabled,
Slaves will be shut down in alphabetical order. If enabled, use the Set Shutdown Order dialog to define the order
in which you would like the Slaves to shut down. Note that this feature is not available if the Power Management
Group is configured to include all slaves.
Machine Startup
This is a system that allows powered-down machines to be started automatically when new Jobs are submitted to the
render farm. Combining this feature with Idle Shutdown will ensure that machines in the render farm are only running
when they are needed.
If Slave machines support it, Wake On Lan (WOL) or IPMI commands can be used to start them up after they shut down. By default, the WOL packet is sent over port 9, but you can change this in the Wake On Lan settings in the
Repository Configuration. Make sure there isn't a firewall or other security software blocking communication over the
selected port(s).
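The WOL startup described above can be sketched as follows. The magic-packet layout (six 0xFF bytes followed by the MAC address repeated 16 times) is the standard Wake On Lan format; port 9 matches Deadline's default, and the MAC address is a placeholder:

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Standard WOL magic packet: 6 x 0xFF followed by the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, port: int = 9) -> None:
    """Broadcast the magic packet over UDP on the configured WOL port."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), ("255.255.255.255", port))

# send_wol("00:1A:2B:3C:4D:5E")  # placeholder MAC; uncomment to send
```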
WOL Packets are sent to the MAC addresses that Deadline has on file for each of the Slaves. If your Slaves have multiple Ethernet ports, the Slave may have registered the wrong MAC address, which may prevent WOL from working
properly. If this is the case, you will have to manually set MAC Address overrides for the Slaves that are having this
problem, which can be done through the Slave Settings dialog.
Note that if machines in the group begin to be shut down due to temperature, this feature may be automatically disabled
for the group to prevent machines from starting up and raising the temperature again.
Run Command: This is primarily for IPMI support. If enabled, Pulse will run a given command to start Slave
machines. This command will be run once for each Slave that is being woken up. A few tags can be used within
the command:
{SLAVE_NAME} is replaced with the current Slave's hostname.
{SLAVE_MAC} is replaced with the current Slave's MAC address.
{SLAVE_IP} is replaced with the current Slave's IP address.
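A hypothetical helper illustrating how such tag expansion could work; the ipmitool command line is an illustrative assumption, not a command Deadline ships:

```python
def expand_tags(command: str, name: str, mac: str, ip: str) -> str:
    """Substitute the Run Command tags with the current Slave's details."""
    return (command.replace("{SLAVE_NAME}", name)
                   .replace("{SLAVE_MAC}", mac)
                   .replace("{SLAVE_IP}", ip))

# Example command line (assumed, not part of Deadline):
cmd = expand_tags("ipmitool -H {SLAVE_IP} -U admin chassis power on",
                  "render-01", "00:11:22:33:44:55", "192.168.0.42")
print(cmd)  # ipmitool -H 192.168.0.42 -U admin chassis power on
```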
Thermal Shutdown
The Thermal Shutdown system polls temperature sensors and responds by shutting down machines if the temperature
gets too high. The sensors we have used for testing are NetTherms, and APC Sensors are also known to be compatible.
Note that the temperature sensor communicates over port 161 (SNMP), so make sure this port is not blocked.
Thermal Shutdown Mode: Select Disabled, Enabled, or Debug mode. In Debug mode, all the checks will be
performed, but no action is actually taken.
Temperature Units: The units used to display and configure the temperatures. Note that this is separate from
the units that the actual sensors use.
Thermal Sensors: The host and OID (Object Identifier) of the sensor(s) in the zone. To add a new sensor,
simply click the Add button.
Temperature Threshold: Thresholds can be added for any temperature. When a sensor reports a temperature
higher than a particular threshold, the Slaves in the zone will respond accordingly. Note that higher temperature
thresholds take precedence over lower ones.
Shut down Slaves if sensors are offline for this many minutes: If enabled, Slaves will shut down after a period
of time in which the temperature sensor could not be reached for temperature information.
Disable Machine Startup if thermal threshold is reached: If enabled, Machine Startup for the current group
will be disabled if a thermal threshold is reached.
Re-enable Machine Startup when temperature returns to: If enabled, this will re-enable Machine Startup
when the temperature returns to the specified temperature.
Override Shutdown Order: Whether or not to define a custom order in which Slaves will be shut down. If
disabled, Slaves will be shut down in alphabetical order. If enabled, use the Set Shutdown Order dialog to
define the order. Note that this feature is not available if the Power Management Group is configured to include
all slaves.
Sensor Settings:
Sensor Hostname or IP Address: The host of the temperature sensor.
Sensor OID: The OID (Object Identifier) of the temperature sensor. The default OID is for the particular type
of sensor we use.
Sensor SNMP Community: If testing the sensor fails with private selected, try using public.
Sensor Reports Temperature As: Select the units that your temperature sensor uses to report the temperature.
Sensor Timeout in Milliseconds: The timeout value for contacting the sensor.
Sensor Testing Temperature: If enabled, the corresponding temperature will always be returned by this sensor.
This is useful for testing purposes.
Test Sensor: Queries the sensor for the current temperature, and displays it. If the temperature displayed seems
incorrect, make sure you have selected the correct unit of temperature above.
If you simply want to test the Thermal Shutdown feature, but you don't have any thermal sensors to test with, you
can enable the Sensor Testing Temperature in the Sensor settings above. When enabled, you don't need to provide a
Sensor Hostname or Sensor OID, and the test sensor will always return the specified temperature.
Machine Restart
If you have problematic machines that you need to reboot periodically, you can configure the Machine Restart feature
of Power Management to restart your Slaves for you. Note that if the Slave on the machine is in the middle of
rendering a Task, it will finish its current Task before the machine is restarted.
If a slave is scheduled to start on a machine, a notification message will pop up for 30 seconds indicating that the slave
is scheduled to start. If someone is still using the machine, they can choose to delay the start of the slave for a certain
amount of time.
6.3.2 Configuration
Slave Scheduling can be configured from the Monitor by selecting Tools -> Configure Slave Scheduling. You will
need to be in Super User mode for this, if you are not part of a User Group that has access to this feature.
Machine Groups are used by Slave Scheduling to organize Slave machines on the farm, and each group can have
different scheduling settings. To add a new Machine Group, simply click the Add button in the Machine Group
section.
Idle Detection
These settings are used to launch the slave if the machine has been idle for a certain amount of time (idle means no
keyboard or mouse input). There are also additional criteria that can be checked before launching the slave, including
the machine's current memory and CPU usage, the current logged-in user, and the processes currently running on the
machine. Finally, this system can stop the slave automatically when the machine is no longer idle.
Start Slave When Machine Is Idle For ___ Minutes: If enabled, the Slave will be started on the machine if
it is idle. A machine is considered idle if there hasn't been any keyboard or mouse activity for the specified
amount of time.
Stop Slave When Machine Is No Longer Idle: If enabled, the Slave will be stopped when the machine is no
longer idle. A machine is considered idle if there hasn't been any keyboard or mouse activity for the specified
amount of time.
Only Stop Slave If Started By Idle Detection: If enabled, the Slave will only be stopped when the machine is
no longer idle if that Slave was originally started by Idle Detection. If the Slave was originally started manually,
it will not be stopped.
There are some limitations with Idle Detection depending on the operating system:
On Windows, Idle Detection will not work if the Launcher is running as a service. This is because the service
runs in an environment that is separate from the Desktop, and has no knowledge of any mouse or keyboard
activity.
On Linux, the Launcher uses X11 to determine if there has been any mouse or keyboard activity. If X11 is not
available, Idle Detection will not work.
Note that Idle Detection can be overridden in the Local Slave Controls so that users can configure if their local slave
should launch when the machine becomes idle.
Miscellaneous Options
These settings are applied to both Slave Scheduling and Idle Detection.
Only Start Slave If CPU Usage Less Than ___%: If enabled, the slave will only be launched if the machine's
CPU usage is less than the specified value.
Only Start Slave If Free Memory More Than ___ MB: If enabled, the slave will only be launched if the
machine has more free memory than the specified value (in Megabytes).
Only Start Slave If These Processes Are Not Running: If enabled, the slave will only be launched if the
specified processes are not running on the machine.
Only Start If Launcher Is Not Running As These Users: If enabled, the slave will only be launched if the
launcher is not running as one of the specified users.
Allow Slaves to Finish Their Current Task When Stopping: If enabled, the Slave application will not be
closed until it finishes its current Task.
useful rendering metrics like render time, CPU usage, and memory usage. You can use all of this information to figure
out if there are any Slaves that aren't being utilized to their full potential.
Statistical information is also gathered for individual slaves, including the slave's running time, rendering time, and
idle time. It also includes information about the number of tasks the slave has completed, the number of errors it has
reported, and its average Memory and CPU usage.
Note that some statistics can only be gathered if Pulse is running.
Note that if Pulse is not running, only statistics for completed Jobs, User usage and Slave Statistics will be recorded.
You must run Pulse to keep track of Slave Resource Usage and overall Repository statistics. When running, Pulse will
periodically gather information about Slave Resource Usage and the general state of the repository, and record them
in the Database.
From this window, you can specify which type of report(s) to generate, and a date range to filter the statistics. You can
also specify a region to filter the statistics, but only the Active Slave Stats and Slaves Overview reports will use it.
There are five default Reports that will always be available, but custom reports can also be created and saved for later
use (see the Custom Reports section below for more info).
Active Slave Stats
The Active Slave Stats report displays Slave usage statistics for the farm, which are logged by Slaves as they are
running. The statistics displayed by this report are generated by each individual slave at regular intervals and do not
require Pulse to be running.
Farm Overview
The Farm Overview report displays statistics about the Farm using graphs. The statistics displayed by this report are
assembled by Pulse, and will therefore only be gathered if Pulse is running.
The State Counts section displays the statistics in terms of counts.
The State Totals gives a visual representation of the statistics in terms of percentages.
Slaves Overview
The Slaves Overview report displays the statistics for each Slave on the farm with graphs to help display the statistics.
The statistics displayed by this report are assembled by Pulse, and will therefore only be gathered if Pulse is running.
The Slaves Overview chart shows how many slaves were in each state (starting job, rendering, idle, offline, stalled,
and disabled).
The Available/Active Slaves charts show the number of slaves that are available, and the number of available slaves
that are active.
The Individual Slaves list and charts show the average CPU and Memory usage for individual slaves, as well as average
time each slave spends in each state.
Custom Reports
Users can create their own custom Reports to control how the gathered statistics are aggregated and presented. By
doing this, users can create their own arsenal of specialized reports that help to drill down and expose potential
problems with the farm.
In order to create or edit Custom Reports you first need to be in Super User mode, or have the appropriate User Group
Permissions to do so. If that is the case, there should be a new set of buttons below the list of Reports, providing
control over Custom Reports.
By clicking the New button, you will be prompted to specify a name for your new report and select the type of
statistics which this report will display.
Once you've done that, you'll be brought to the Edit view for your new Report. You'll note that this is very similar to generating a report under normal circumstances, but with the addition of several buttons that allow further customization.
Chief among these new buttons is the Edit Data Columns button, which will allow you to select which columns are
displayed. You can also specify if you want to aggregate row information by selecting a Group By column, and a
Group Op for each other column.
The way the aggregation works is similar to a SQL query with a group by statement. Data rows will be combined
based on identical values of the Group By column, while the values of other columns will be determined by performing
the Group Ops on the combined rows.
As a simple example to demonstrate how this works in practice, let us consider a case where you might want to view
the error information on a per-plugin basis. We don't have a built-in report to do this, but all this information is
contained in Completed Job Stats. With that in mind, you can create a Custom Report based on Completed Job Stats
to group by Plugin, and aggregate Error Counts and Wasted Error Time, as illustrated below.
Once you've specified which columns are displayed, and whether/how rows are aggregated, you can also add simple
Graphs to your report. Simply click the Add Graph button, and specify the type of graph you want along with the
columns on which the graph should be based. Graphs are always based on all of the data presented in the list view, and
currently cannot be based on a selection or a different data model.
Once you're done customizing your new report, simply click the OK button on the Farm Status Reports window, and
your changes will be committed to the Database. Now, every time anyone brings up this dialog, they should be able to
see your new report.
[Deadline]
LicenseServer=@my-server
NetworkRoot=\\\\repository\\path
LauncherListeningPort=17060
AutoConfigurationPort=17061
There can also be additional NetworkRoot# settings that store previous Repository paths. These paths will be pre-populated in the drop-down list when changing Repositories.
NetworkRoot0=\\\\repository\\path
NetworkRoot1=\\\\another\\repository
NetworkRoot2=\\\\test\\repository
This setting can be changed using the Change Repository option in the Launcher or the Monitor, and it can also be
configured using Auto Configuration.
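Under the assumption that the file follows the [Deadline] section layout shown above, the NetworkRoot# history can be read back with Python's configparser; the helper name is hypothetical:

```python
import configparser

def repository_history(ini_text: str) -> list:
    """Collect the NetworkRoot0, NetworkRoot1, ... history entries in order."""
    cp = configparser.ConfigParser()
    cp.read_string(ini_text)
    section = cp["Deadline"]
    roots, i = [], 0
    while f"NetworkRoot{i}" in section:  # lookups are case-insensitive
        roots.append(section[f"NetworkRoot{i}"])
        i += 1
    return roots

# Sample text mirroring the deadline.ini layout shown above.
sample = r"""[Deadline]
NetworkRoot=\\\\repository\\path
NetworkRoot0=\\\\repository\\path
NetworkRoot1=\\\\another\\repository
"""
print(repository_history(sample))
```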
LicenseServer
The LicenseServer setting tells the Client where it can get a license from.
LicenseServer=@my-server
This setting can be changed using the Change License Server option in the Launcher or the Slave, and it can also be
configured using Auto Configuration.
LauncherListeningPort
The LauncherListeningPort setting is the port that the Launcher listens on for Remote Control. It must be the same on
all Clients.
LauncherListeningPort=17060
RestartStalledSlave
The RestartStalledSlave setting indicates if the Launcher should try to restart the Slave on the machine if it becomes
stalled. The default is True.
RestartStalledSlave=True
This setting can be changed from the Launcher menu, and it can also be configured using Auto Configuration.
LaunchPulseAtStartup
The LaunchPulseAtStartup setting controls if the Launcher should automatically launch Pulse after the launcher starts
up. The default is False.
LaunchPulseAtStartup=True
LaunchWebServiceAtStartup
The LaunchWebServiceAtStartup setting controls if the Launcher should automatically launch the Web Service after
the launcher starts up. The default is False.
LaunchWebServiceAtStartup=True
This setting can be changed using the Change User option in the Launcher or the Monitor. To prevent users from
changing who they are, see the User Management documentation.
LaunchSlaveAtStartup
The LaunchSlaveAtStartup setting controls if the Launcher should automatically launch the Slave after the launcher
starts up. The default is True.
LaunchSlaveAtStartup=False
This setting can be changed from the Launcher menu, and it can also be configured using Auto Configuration.
6.6.2 Rulesets
You can set up Client Configuration Rulesets from the Auto Configuration section of the Repository Configuration. If
you want to configure groups of Clients differently from others, you can add multiple Rulesets. This is useful if you
have more than one Repository on your network, or if you want to configure your render nodes differently than your
workstations.
New Rulesets can be added by pressing the Add button. You can give the Ruleset a name, and then choose a Client
Filter method to control which Clients will use this Ruleset. There are currently three types of Slave Filters:
Hostname Regex: You can use regular expressions to match a Client's host name. If your Slaves are using IPv6,
this is probably the preferred method to use. Note that this is case-sensitive. For example:
.*host.* will match hostnames containing the word host in lower case.
host.* will match hostnames starting with host.
.*[Hh]ost will match hostnames ending with Host or host.
.* will match everything.
IP Regex: You can use regular expressions to match a Client's IP address. This works with both IPv4 and IPv6
addresses. For example:
192.168..* will match IPv4 addresses not transported inside IPv6 starting with 192.168.
[:fF]*192.168. should match IPv4 addresses even if they are carried over IPv6 (e.g. ::ffff:192.168.2.128).
.* will match everything.
IPv4 Match: You can specify specific IP addresses, or a range of IP addresses (by using wildcards or ranges).
Note that this only works with IPv4. Do not use this for IPv6 addresses. For example:
192.168.0.1-150
192.168.0.151-255
192.168.*.*
*.*.*.*
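To sanity-check a Hostname or IP Regex filter before saving a Ruleset, the patterns can be tried against sample names with Python's re module (whole-string matching assumed; the sample hostnames are hypothetical):

```python
import re

def matches(pattern: str, client: str) -> bool:
    """True if the whole client name/address matches the ruleset regex."""
    return re.fullmatch(pattern, client) is not None

print(matches(".*host.*", "my-host-01"))       # True
print(matches("host.*", "my-host"))            # False: doesn't start with host
print(matches("192.168..*", "192.168.2.128"))  # True (dots match any char)
```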
Configurations are generated starting from the top rule working down one by one. When there is a match for the
requesting Client, any properties in the rule which are not marked as (Inherited) will override a previous setting. By
default, Slaves will use their local configuration for any property which is not set by a rule. Based on the example
here, all clients starting with the name Render- and ending with a whole number will use the same Repository Path
and launch the Client at startup, while the Default rule above it matches all Clients and sets their license server.
There is also an IncludeEnvironment option that takes either True or False (False is the default). When IncludeEnvironment is set to True, Deadline will automatically grab all the environment variables from the submitter's environment
and set them as the job's environment variables.
IncludeEnvironment=True
This can be used in conjunction with the EnvironmentKeyValue# options above, but note that the EnvironmentKeyValue# options will take precedence over any current environment variables with the same name.
Finally, there is a UseJobEnvironmentOnly option that takes either True or False (False is the default):
UseJobEnvironmentOnly=True
The UseJobEnvironmentOnly setting controls how the job's environment variables are applied to the rendering environment. If True, ONLY the job's environment variables will be used. If False, the job's environment variables will
be merged with the Slave's current environment, with the job's variables overwriting any existing ones with the same
name.
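The merge semantics described above can be sketched as a plain dictionary merge; the function and variable names are hypothetical, not Deadline API:

```python
def render_environment(slave_env: dict, job_env: dict,
                       use_job_environment_only: bool) -> dict:
    """Build the rendering environment per UseJobEnvironmentOnly."""
    if use_job_environment_only:
        return dict(job_env)          # only the job's variables are used
    merged = dict(slave_env)
    merged.update(job_env)            # job values overwrite same-named slave values
    return merged

slave = {"PATH": "/usr/bin", "TMP": "/tmp"}
job = {"TMP": "/scratch", "LICENSE": "@my-server"}
print(render_environment(slave, job, False))
# {'PATH': '/usr/bin', 'TMP': '/scratch', 'LICENSE': '@my-server'}
print(render_environment(slave, job, True))
# {'TMP': '/scratch', 'LICENSE': '@my-server'}
```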
Job Rendering
At render time, the job's environment variables are applied to the rendering process. As explained above, the job's
environment can either be merged with the Slave's current environment, or the job's environment can be used exclusively.
Note though that if the job's plugin defines any environment variables, those will take precedence over any job environment variables with the same name. In a job's plugin, there are two functions available on the DeadlinePlugin
object that can be used to set environment variables:
SetProcessEnvironmentVariable( key, value ):
This should be used in Advanced plugins only.
Any variables set by this function are applied to all processes launched through Deadline's plugin API.
Note that calling SetProcessEnvironmentVariable in Simple plugins or within ManagedProcess callbacks
will not affect the current process environment.
When using SetProcessEnvironmentVariable in an Advanced plugin, make sure to call it outside of the
ManagedProcess callbacks.
6.8.2 Licensing
In Deadline 7, all Slave instances running on a single machine will use the same license. For example, if you had 3
slave instances running on one machine, they would only use 1 license.
A new Slave instance can be started from the right-click menu in the Slave list in the Monitor by selecting Remote Control -> Slave Commands ->
Start New Slave Instance. By default, this is only available when in Super User Mode.
Additionally, for a headless (no GUI) machine, you would add the -nogui flag.
deadlineslave -name "instance-01" -nogui
Note that the name you enter is the postfix that is appended to the slave's base name. For example, if the slave's base
name is Render-02, and you start a new instance on it called instance-01, the full name for that slave instance will
be Render-02-instance-01. This is done so that if the slave's machine name is changed, the full slave name will be
updated accordingly. Using the same example, if the machine was renamed to Node-05, the slave instance will now
be called Node-05-instance-01.
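The naming rule above can be sketched as follows; the helper is hypothetical and simply illustrates that the stored postfix is appended to whatever the machine's current base name is:

```python
def full_slave_name(machine_name: str, postfix: str) -> str:
    """Full instance name = current machine (base) name + stored postfix."""
    return f"{machine_name}-{postfix}"

print(full_slave_name("Render-02", "instance-01"))  # Render-02-instance-01
# After the machine is renamed, the same postfix yields the new full name:
print(full_slave_name("Node-05", "instance-01"))    # Node-05-instance-01
```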
Once the new Slave shows up in the Slave List in the Monitor, you can configure it like any other Slave. You might
want to use Slave Settings (see Slave Configuration) to assign the different Slaves to run on separate CPUs. It might
also be a good idea to assign them to different Pools and Groups, so that they can work on different types of Jobs to
avoid competing for the same resource (e.g., you could have one Slave assigned to CPU intensive Jobs, while the other
works on RAM intensive ones).
Once the Slave has been created, you can also launch it remotely like you would any other Slave. See the Remote
Control documentation for more information.
A Slave instance can be removed from the right-click menu in the Slave list in the Monitor by selecting Remote Control -> Slave Commands
-> Remove Slave Instance. This method gives the additional option to automatically remove the slave instance
from the repository as well. By default, this is only available when in Super User Mode.
Alternatively, manually delete the .ini files that define the local slave instances on the machine that the slave runs on. See the
Client Configuration documentation for more information.
In this scenario, you can disable the multi-slave feature by opening the system's deadline.ini file and adding this line:
MultipleSlavesEnabled=False
The system deadline.ini file can be found in the following locations. Note that the # in the path will change based on
the Deadline version number.
Windows: %PROGRAMDATA%\Thinkbox\Deadline#\deadline.ini
Linux: /var/lib/Thinkbox/Deadline#/deadline.ini
OSX: /Users/Shared/Thinkbox/Deadline#/deadline.ini
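A sketch resolving the platform-specific path above; the helper name is hypothetical, and the version string (e.g. 7) is substituted for the # placeholder:

```python
import os
import sys

def system_ini_path(version: str) -> str:
    """Return the system deadline.ini path for the current platform."""
    if sys.platform.startswith("win"):
        return os.path.join(os.environ.get("PROGRAMDATA", r"C:\ProgramData"),
                            "Thinkbox", f"Deadline{version}", "deadline.ini")
    if sys.platform == "darwin":
        return f"/Users/Shared/Thinkbox/Deadline{version}/deadline.ini"
    return f"/var/lib/Thinkbox/Deadline{version}/deadline.ini"

print(system_ini_path("7"))
```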
Adding Providers
To add a provider, click the Add button under the Cloud Region list. Choose the Cloud plugin you wish to use, and
give it a region name. This is useful for providers like Amazon EC2 that have more than one region. Then click OK.
The new Cloud region will now show up in the Cloud Region list.
Configuring Providers
To configure an existing provider, select it in the Cloud Region box, which will bring up its configuration settings.
These are the settings that the Monitor will use to connect to your cloud provider(s).
6.9. Cloud Controls
Every provider has an option to enable or disable it, but the other options can vary between providers. To get more
information about a particular setting, just hover your mouse over the setting text, or refer to the Cloud Plugins section
of the documentation.
If the Cloud panel is not visible, see the Panel Features documentation for instructions on how to create new panels in
the Monitor.
Controlling Instances
The Cloud panel allows you to create new instances and control your existing instances using the right-click context
menu. The following options are available when you right-click on an instance:
Create New Instance: Creates a new instance.
Start Instance: Starts an instance that is currently stopped.
Stop Instance: Stops an instance that is currently running.
Destroy Instance: Destroys an existing instance. Once an instance is destroyed, it cannot be recovered.
Clone Instance: Clones an existing instance. This allows you to quickly launch multiple copies of the selected
instance.
Reboot Instance: Reboots an instance that is currently running.
It should be noted that some cloud providers don't provide the ability to Start/Stop instances.
You'll notice that you're actually submitting another Job that will transfer the original Job. The general Deadline
options are explained in the Job Submission documentation. The Job Transfer specific options are:
Frame List and Frames Per Task: This is the frame list for the original Job that will be transferred. It will
default to the values for the original Job, but you can change them if you only want to transfer a subset of frames.
New Repository: This is the path to the remote Repository that the original Job will be transferred to. Note that
the Slaves that the transfer Job will be running on must be able to see this path in order to transfer the original
Job to the new repository.
Compress Files During Transfer: If enabled, the original Job's files will be compressed during the transfer.
Suspend Remote Job After Transfer: If enabled, the original Job will be submitted in the Suspended state to
the new Repository.
Email Results After Transfer: If enabled, you will be emailed when the original Job has been successfully
transferred. Note that this requires you to have your email notification options set up properly.
Remove Local Job After Transfer: If enabled, the original Job in the local Repository will be deleted after the
Job has been successfully transferred to the remote Repository.
Once you have your options set, click the Submit button to submit the transfer Job.
CHAPTER SEVEN

SCRIPTING
can be run with the correct access permissions, to install the local proxy Client script(s) for you and also carry
out any further configuration that may be required. Where applicable, Installers are provided for the different
operating systems.
The following in-application deeply integrated submitters are available for reference or as a starting point for your
own custom submitter:
3ds Command ../<DeadlineRepository>/submission/3dsCmd/
3ds Max ../<DeadlineRepository>/submission/3dsmax/
Corona Distributed Rendering ../<DeadlineRepository>/submission/3dsmaxCoronaDR/
RPManager Script Setup ../<DeadlineRepository>/submission/3dsmaxRPM/
3ds Max ../<DeadlineRepository>/submission/3dsmaxVRayDBR/
After Effects ../<DeadlineRepository>/submission/AfterEffects/
AutoCAD ../<DeadlineRepository>/submission/AutoCAD/
Blender ../<DeadlineRepository>/submission/Blender/
Cinema 4D ../<DeadlineRepository>/submission/Cinema4D/
Cinema 4D Team Render ../<DeadlineRepository>/submission/Cinema4DTeamRender/
Clarisse iFX ../<DeadlineRepository>/submission/Clarisse/
Composite ../<DeadlineRepository>/submission/Composite/
Draft ../<DeadlineRepository>/submission/Draft/
ftrack ../<DeadlineRepository>/submission/FTrack/
Fusion ../<DeadlineRepository>/submission/Fusion/
Generation ../<DeadlineRepository>/submission/Generation/
Hiero ../<DeadlineRepository>/submission/Hiero/
Houdini ../<DeadlineRepository>/submission/Houdini/
Jigsaw ../<DeadlineRepository>/submission/Jigsaw/
Lightwave ../<DeadlineRepository>/submission/Lightwave/
Maya ../<DeadlineRepository>/submission/Maya/
Maya ../<DeadlineRepository>/submission/MayaVRayDBR/
Messiah ../<DeadlineRepository>/submission/Messiah/
MicroStation ../<DeadlineRepository>/submission/MicroStation/
modo ../<DeadlineRepository>/submission/Modo/
Interactive Distributed Rendering ../<DeadlineRepository>/submission/ModoDBR/
Nuke ../<DeadlineRepository>/submission/Nuke/
Realflow ../<DeadlineRepository>/submission/RealFlow/
Rhino ../<DeadlineRepository>/submission/Rhino/
SketchUp ../<DeadlineRepository>/submission/SketchUp/
Softimage ../<DeadlineRepository>/submission/Softimage/
Softimage ../<DeadlineRepository>/submission/SoftimageVRayDBR/
If you save this script to a file called myscript.py, you can execute it using this command:
deadlinecommand -ExecuteScript "myscript.py"
If you are running the script in a headless environment where there is no display, you should use this command instead:
deadlinecommand -ExecuteScriptNoGui "myscript.py"
The only difference between these commands is that ExecuteScriptNoGui doesn't pre-import any of the user interface
modules, so it can run in a headless environment. If your script doesn't use any user interface modules, then you
can use ExecuteScriptNoGui regardless of whether or not you're in a headless environment.
Replacement Function
There is no replacement for this function because most job information is now stored in the
Database. If you want to get the auxiliary folder for a job, use
RepositoryUtils.GetJobAuxiliaryPath(job), which takes an instance of a job as a parameter.
There is no replacement for this function because drop jobs have been removed.
There is no replacement for this function because Limit information is now stored in the
Database.
RepositoryUtils.GetPluginsDirectory()
There is no replacement for this function because Pulse information is now stored in the
Database.
RepositoryUtils.GetRootDirectory()
RepositoryUtils.GetScriptsDirectory()
RepositoryUtils.GetSettingsDirectory()
There is no replacement for this function because Slave information is now stored in the
Database.
There is no replacement for this function.
There is no replacement for this function because there is no longer a temp folder in the
Repository.
There is no replacement for this function because there is no longer a trash folder in the
Repository.
There is no replacement for this function because User information is now stored in the
Database.
Replacement Function
ClientUtils.GetBinDirectory()
ClientUtils.GetCurrentUserHomeDirectory()
ClientUtils.GetUsersHomeDirectory()
ClientUtils.GetUsersSettingsDirectory()
ClientUtils.GetDeadlineTempPath()
PathUtils.GetLocalApplicationDataPath()
PathUtils.GetSystemTempPath()
Replacement Function
ProcessUtils.IsProcessRunning(name)
ProcessUtils.KillProcesses(name)
ProcessUtils.KillParentAndChildProcesses(name)
ProcessUtils.WaitForProcessToStart(name, timeoutMilliseconds)
File/Path/Directory Functions

Original Global Function → Replacement Function
AddToPath(semicolonSeparatedList) → DirectoryUtils.AddToPath(directory)
ChangeFilename(path, filename) → PathUtils.ChangeFilename(path, filename)
FileExists(filename) → FileUtils.FileExists(filename)
GetExecutableVersion(filename) → FileUtils.GetExecutableVersion(filename)
GetFileSize(filename) → FileUtils.GetFileSize(filename)
GetIniFileKeys(iniFilename, section) → FileUtils.GetIniFileKeys(fileName, section)
GetIniFileSections(iniFilename) → FileUtils.GetIniFileSections(fileName)
GetIniFileSetting(iniFilename, section, key, default) → FileUtils.GetIniFileSetting(fileName, section, key, defaultValue)
Is64BitDllOrExe(filename) → FileUtils.Is64BitDllOrExe(filename)
SearchDirectoryList(semicolonSeparatedList) → DirectoryUtils.SearchDirectoryList(directoryList)
SearchFileList(semicolonSeparatedList) → FileUtils.SearchFileList(fileList)
SearchFileListFor32Bit(semicolonSeparatedList) → FileUtils.SearchFileListFor32Bit(fileList)
SearchFileListFor64Bit(semicolonSeparatedList) → FileUtils.SearchFileListFor64Bit(fileList)
SearchPath(filename) → DirectoryUtils.SearchPath(filename)
SetIniFileSetting(iniFilename, section, key, value) → FileUtils.SetIniFileSetting(filename, section, key, value)
SynchronizeDirectories(srcPath, destPath, deepCopy) → DirectoryUtils.SynchronizeDirectories(sourceDirectory, destDirectory, deepCopy)
ToShortPathName(filename) → PathUtils.ToShortPathName(path)
Miscellaneous Functions

Original Global Function → Replacement Function
BlankIfEitherIsBlank(str1, str2) → StringUtils.BlankIfEitherIsBlank(str1, str2)
ExecuteScript(scriptFilename, arguments) → ClientUtils.ExecuteScript(scriptFilename, arguments)
Sleep(milliseconds) → SystemUtils.Sleep(milliseconds)
OS Functions

Original Global Function → Replacement Function
GetAvailableRam() → SystemUtils.GetAvailableRam()
GetApplicationPath(filename) → PathUtils.GetApplicationPath(applicationName)
GetCpuCount() → SystemUtils.GetCpuCount()
GetRegistryKeyValue(keyName, valueName, defaultValue) → SystemUtils.GetRegistryKeyValue(keyName, valueName, defaultValue)
GetTotalRam() → SystemUtils.GetTotalRam()
GetUsedRam() → SystemUtils.GetUsedRam()
Is64Bit() → SystemUtils.Is64Bit()
IsRunningOnLinux() → SystemUtils.IsRunningOnLinux()
IsRunningOnMac() → SystemUtils.IsRunningOnMac()
IsRunningOnWindows() → SystemUtils.IsRunningOnWindows()
Description
A short description of the plug-in.
Set to True or False (default is False). If tasks for this plug-in can render concurrently without
interfering with each other, this can be set to True.
Set to True or False (default is False). If set to True, then debug plug-in logging will be printed out
during rendering.
Set to True or False (default is False). Only set to True if you want a custom Python.NET plug-in
from Deadline 5.1 or 5.2 to work with Deadline 6 or later. More information on DeprecatedMode
can be found later on.
It can also define key=value custom settings to be used by the plug-in. A common custom setting is the executable to
use to render the job. For this example, our MyPlugin.dlinit file might look like this:
About=My Example Plugin for Deadline
# This is a comment
ConcurrentTasks=True
MyPluginRenderExecutable=c:\path\to\my\executable.exe
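The key=value format above can be illustrated with a small pure-Python parser. This is a sketch only (Deadline parses .dlinit files internally); the parse_dlinit helper below is hypothetical, and the comment prefixes assumed are ';' and '#':

```python
# Illustrative sketch of parsing key=value settings like those in a
# .dlinit file: comment lines start with ';' or '#', blanks are skipped,
# and everything else is treated as key=value. Not Deadline's own parser.
def parse_dlinit( text ):
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith( ( ";", "#" ) ):
            continue  # skip blank lines and comments
        key, _, value = line.partition( "=" )
        settings[key.strip()] = value.strip()
    return settings

example = """About=My Example Plugin for Deadline
# This is a comment
ConcurrentTasks=True
MyPluginRenderExecutable=c:\\path\\to\\my\\executable.exe"""

print( parse_dlinit( example )["ConcurrentTasks"] )  # True
```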
The first thing to note is that we're importing the Deadline.Plugins namespace so that we can access the DeadlinePlugin
class.
The GetDeadlinePlugin() function is important, as it allows the Slave to get an instance of our MyPlugin class (which
extends the abstract DeadlinePlugin class). In Deadline 6.2 and later, the GetDeadlinePluginWithJob( job ) function
can be defined as an alternative. It works the same as GetDeadlinePlugin(), except that it accepts an instance of the
Job object that the plug-in is being loaded for. If neither of these functions is defined, the Slave will report an error
when it tries to render the job.
The MyPlugin class will need to implement certain callbacks based on the type of plug-in it is, and these callbacks must
be hooked up in the MyPlugin constructor. One callback that all plug-ins should implement is the InitializeProcess
function. There are many other callbacks that can be implemented, which are covered in the Events section for the
DeadlinePlugin class in the Deadline Scripting reference.
The CleanupDeadlinePlugin() function is also important, as it is necessary to clean up the plug-in when it is no longer
in use. Typically, this is used to clean up any callbacks that were created when the plug-in was initialized.
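The += hook-up and del tear-down used for these callbacks follow .NET event semantics (Deadline's scripting runs on Python.NET/IronPython). The Event class below is a rough pure-Python analogue of how such a callback slot behaves, not Deadline's actual implementation:

```python
# Illustrative sketch of a .NET-style event slot, which is what the
# "self.InitializeProcessCallback += self.InitializeProcess" pattern is
# doing under the hood. This is an analogue, not Deadline code.
class Event:
    def __init__( self ):
        self._handlers = []

    def __iadd__( self, handler ):
        # "event += handler" subscribes a callback and returns the event.
        self._handlers.append( handler )
        return self

    def fire( self, *args ):
        # Invoke every subscribed handler in order, collecting results.
        return [handler( *args ) for handler in self._handlers]

event = Event()
event += ( lambda: "initialized" )
print( event.fire() )  # ['initialized']
```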
To start off, the InitializeProcess callback is typically used to set some general plug-in settings:
from Deadline.Plugins import *

######################################################################
## This is the function that Deadline calls to get an instance of the
## main DeadlinePlugin class.
######################################################################
def GetDeadlinePlugin():
    return MyPlugin()

######################################################################
## This is the function that Deadline calls when the plugin is no
## longer in use so that it can get cleaned up.
######################################################################
def CleanupDeadlinePlugin( deadlinePlugin ):
    deadlinePlugin.Cleanup()

######################################################################
## This is the main DeadlinePlugin class for MyPlugin.
######################################################################
class MyPlugin (DeadlinePlugin):
    ## Hook up the callbacks in the constructor.
    def __init__( self ):
        self.InitializeProcessCallback += self.InitializeProcess

    ## Clean up the plugin.
    def Cleanup( self ):
        del self.InitializeProcessCallback

    ## Called by Deadline to initialize the plugin.
    def InitializeProcess( self ):
        # Set the plugin specific settings.
        self.SingleFramesOnly = False
        self.PluginType = PluginType.Simple
These are the common plug-in properties that can be set in the InitializeProcess callback. See the DeadlinePlugin class in
the Deadline Scripting reference for additional properties.
Property → Description
PluginType: The type of plug-in this is (PluginType.Simple/PluginType.Advanced).
SingleFramesOnly: Set to True or False. Set to True if your plug-in can only work on one frame at a time, rather than a frame sequence.
Comment lines are supported in the param file, and must start with either ; or #. For example:
# This is the file name picker control to set the executable for this plugin.
[MyPluginRenderExecutable]
Type=filename
Label=My Plugin Render Executable
Default=c:\path\to\my\executable.exe
Description=The path to the executable file used for rendering.
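Because the param file uses INI-style sections, its shape can be illustrated with Python's standard configparser module. This is illustrative only; Deadline's internal parser may behave slightly differently:

```python
# Illustrates the INI-style structure of a .param file using the standard
# library configparser; Deadline's internal parsing may differ.
import configparser

param_text = """# This is the file name picker control for the executable.
[MyPluginRenderExecutable]
Type=filename
Label=My Plugin Render Executable
Default=c:\\path\\to\\my\\executable.exe
Description=The path to the executable file used for rendering.
"""

parser = configparser.ConfigParser()
parser.read_string( param_text )

print( parser["MyPluginRenderExecutable"]["Type"] )   # filename
print( parser["MyPluginRenderExecutable"]["Label"] )  # My Plugin Render Executable
```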
You'll notice that the property name between the square brackets matches the MyPluginRenderExecutable custom
setting we defined in our MyPlugin.dlinit file. This means that this control will change the MyPluginRenderExecutable
setting. The available key=value pairs for the properties defined here are:
Key Name → Description
Category: The category the control should go under.
CategoryIndex: This determines the control's order under its category. This does the same thing as Index.
CategoryOrder: This determines the category's order among other categories. If more than one CategoryOrder is defined for the same category, the lowest value is used.
Default: The default value to be used if this property is not defined in the dlinit file. This does the same thing as DefaultValue.
DefaultValue: The default value to be used if this property is not defined in the dlinit file. This does the same thing as Default.
Description: A short description of the property the control is for (displayed as a tooltip in the UI).
DisableIfBlank: If True, a control will not be shown if this property is not defined in the dlinit file (True/False). This does the same thing as IgnoreIfBlank.
IgnoreIfBlank: If True, a control will not be shown if this property is not defined in the dlinit file (True/False). This does the same thing as DisableIfBlank.
Index: This determines the control's order under its category. This does the same thing as CategoryIndex.
Label: The control label.
Required: If True, a control will be shown for this property even if it's not defined in the dlinit file (True/False).
Type: The type of control (see table below).
Description
A drop-down control that allows the selection of True or False.
Allows the selection of a color.
A drop-down control that allows the selection of an item from a list.
Same as Enum above.
Allows the selection of an existing file.
Allows the selection of a new or existing file.
A floating point spinner control.
Allows the selection of an existing folder.
An integer spinner control.
A read-only text field.
Allows the selection of multiple existing files, which are then separated by semicolons in
the text field.
Allows the selection of multiple existing files, which are then placed on multiple lines in
the text field.
Allows the selection of multiple existing folders, which are then placed on multiple lines
in the text field.
A text field with multiple lines.
A text field that masks the text.
Allows the selection of existing Slaves, which are then separated by commas in the text
field.
A text field.
Key Name → Description
DecimalPlaces: The number of decimal places for the Float controls.
Filter: The filter string for the Filename, FilenameSave, or MultiFilename controls.
Increment: The value to increment the Integer or Float controls by.
Items: The semicolon separated list of items for the Enum control. This does the same thing as Values.
Maximum: The maximum value for the Integer or Float controls.
Minimum: The minimum value for the Integer or Float controls.
Validator: A regular expression for the String control that is used to ensure the value is valid.
Values: The semicolon separated list of items for the Enum control. This does the same thing as Items.
Often, these plug-in specific options are used to build up the arguments to be passed to the rendering application. Let's
assume that our render executable takes a -verbose argument that accepts a boolean parameter, and that the plug-in
info file submitted with the job contains the following:
Verbose=True
Now we would like to be able to change this value from the Job Properties dialog in the Monitor, so our MyPlugin.options file might look like this:
[Verbose]
Type=boolean
Label=Verbose Logging
Description=If verbose logging is enabled.
Required=true
DisableIfBlank=false
DefaultValue=True
You'll notice that the property name between the square brackets matches the Verbose setting in our plug-in info file.
This means that this control will change the Verbose setting. The available key=value pairs for the properties defined
here are the same as those defined for the param file above. Comment lines are also supported in the options file in the
same way they are supported in the param file.
The ico File - Optional
The MyPlugin.ico file is an optional 16x16 icon file that can be used to easily identify jobs that use this plug-in in
the Monitor. Typically, it is the plug-in application's logo, or something else that represents the plug-in. If a plug-in
does not have an icon file, a generic icon will be shown in the jobs list in the Monitor.
The JobPreLoad.py File - Optional
The JobPreLoad.py file is an optional script that will be executed by the Slave prior to loading a job that uses this
plug-in. Note that in this case, the file does not share its name with the plug-in folder. This script can be used to do
things like synchronize plug-ins or scripts prior to starting the render job.
The only requirement for the JobPreLoad.py script is that you define a __main__ function, which is called by the Slave
when it executes the script. It must accept a single parameter, which is the current instance of the DeadlinePlugin class.
Here is an example script that copies a couple files from a server to the local machine, and sets some environment
variables:
from System import *
from System.IO import *

def __main__( deadlinePlugin ):
    deadlinePlugin.LogInfo( "Copying some files" )
    File.Copy( r"\\server\files\file1.ext", r"C:\local\files\file1.ext", True )
    File.Copy( r"\\server\files\file2.ext", r"C:\local\files\file2.ext", True )

    deadlinePlugin.LogInfo( "Setting EnvVar1 to True" )
    deadlinePlugin.SetProcessEnvironmentVariable( "EnvVar1", "True" )

    deadlinePlugin.LogInfo( "Setting EnvVar2 to False" )
    deadlinePlugin.SetProcessEnvironmentVariable( "EnvVar2", "False" )
environment prior to running any other Python script, including setting sys.path to control where additional modules
will be loaded from.
The only requirement for the PluginPreLoad.py script is that you define a __main__ function, which is called by the
Slave when it executes the script. It does not accept any parameters. Here is an example script that updates sys.path
with custom paths:
import sys

def __main__():
    path = r"\\server\python"
    if path not in sys.path:
        sys.path.append( path )
Stdout Handlers
The AddStdoutHandlerCallback() function accepts a string parameter, which is a POSIX compliant regular expression
used to match against lines of stdout from the command line process. This function also returns a RegexHandlerCallback
instance, to which you can hook up a callback that is called when a line of stdout is matched. This can all be
done on one line, as shown in the example above.
Examples of handler callback functions are also shown in the example above. Within these handler functions, the
GetRegexMatch() function can be used to get a specific match from the regular expression. The parameter passed to
GetRegexMatch() is the index of the match: 0 returns the entire matched string, and 1, 2, etc. return
the matched substrings (matches that are surrounded by round brackets). If there isn't a corresponding substring, you'll
get an error (note that 0 is always a valid index).
In HandleStdoutWarning(), 0 is the only valid index because there is no substring in round brackets in the regular
expression. In HandleStdoutError(), 0 and 1 are valid. 0 will return the entire matched string, whereas 1 will return
the substring in the round brackets.
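The match-index behaviour can be reproduced with Python's re module, where group(0) and group(1) correspond to GetRegexMatch(0) and GetRegexMatch(1); the example lines of stdout below are made up:

```python
# Demonstrates the match-index behaviour described above using Python's
# re module: index 0 is the whole match, 1 and up are bracketed substrings.
import re

line = "ERROR: render failed on frame 12"

# "ERROR:(.*)" has one bracketed substring, so indices 0 and 1 are valid.
match = re.search( "ERROR:(.*)", line )
print( match.group( 0 ) )  # ERROR: render failed on frame 12
print( match.group( 1 ) )  #  render failed on frame 12

# "WARNING:.*" has no bracketed substring, so only index 0 is valid.
warning = re.search( "WARNING:.*", "WARNING: low memory" )
print( warning.group( 0 ) )  # WARNING: low memory
```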
For further examples, please open up any of our application plugin Python script files and inspect them. An example
of comprehensive Stdout handlers can be found in the MayaBatch plugin.
../plugins/MayaBatch/MayaBatch.py
Note that Deadline's default shipping StdoutHandlers require the Slave's operating system to be using English as
its language.
Popup Ignorers and Handlers
The AddPopupIgnorer() function accepts a string parameter, which is a POSIX compliant regular expression. If a
popup is displayed with a title that matches the given regular expression, the popup is simply ignored. Popup ignorers
should only be used if the popup doesn't halt the rendering while waiting for a button to be pressed. In the
case where a button needs to be pressed to continue, popup handlers should be used instead. The AddPopupHandler()
function takes two parameters: a regular expression string, and the button(s) to press (multiple buttons can be separated
with semicolons).
Note that Deadline's default shipping PopupIgnorers and PopupHandlers require the Slave's operating system to be
using English as its language.
Here is an example using .* at the beginning and end of the title search string, which acts as a wildcard. The dialog
also has an "Adopt the File's Unit Scale" checkbox that needs to be checked on, and then the OK button should be
pressed, in that order.
self.PopupHandling = True
self.AddPopupHandler( ".*File Load: Units Mismatch.*", "Adopt the File's Unit Scale?;OK" )
In this example, the Optical Flares license popup uses a wxWindowClassNR control for its OK button, so we need
to add this special class type to our built-in list of possible button classes, just for the After Effects plugin. Once this
class is added, we can search for it and react by pressing the OK button in its dialog. Although the button visually
displays the word OK, its actual name is "panel".
self.PopupHandling = True
self.PopupButtonClasses = ( "Button", "wxWindowClassNR" )
# Handle Optical Flares License popup (the "OK" button is actually called "panel")
self.AddPopupHandler( ".*Optical Flares License.*", "panel" )
For users without access to a recent (2012+) version of Visual Studio, which includes the Spy++ utility, the free
application WinSpy++ is very useful for identifying the correct syntax for a dialog's title or button.
In this example, we force all Qt based widgets to be native instead of alien widgets, set our HandleQtPopups
variable to True, and are then able to handle V-Ray Qt alien-widget dialogs while rendering in Rhino
by pressing the [X] symbol in the top right corner of the Rhino Qt dialog:
self.PopupHandling = True
self.HandleQtPopups = True
self.SetEnvironmentVariable( "QT_USE_NATIVE_WINDOWS","1" )
self.AddPopupHandler( r"Rhino", "[X]" )
In this final example, we need to handle Windows 8 Mobile / Windows 10 based popup dialogs and ensure we react
correctly depending on the dialog title, which can be tricky if the application has multiple popup dialogs with very
similar titles. We use the .* characters as a wildcard, the ^ character to ensure the text
appears at the start of the string, and the $ character to ensure the text appears at the end of the string we are searching
for.
self.PopupHandling = True
self.HandleWindows10Popups = True
self.AddPopupIgnorer( "SAFE 12.*" )
self.AddPopupIgnorer( "^SAFE$" )
self.AddPopupHandler( "^$", "[X]" )
self.AddPopupHandler( "Tip of the Day", "[X]" )
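The effect of the .* wildcard and the ^/$ anchors used above can be checked in any POSIX-style regex engine; here is how they behave in Python's re module, with made-up dialog titles:

```python
# Demonstrates the wildcard and anchor patterns used by the popup
# ignorers/handlers above, using Python's re module.
import re

# "SAFE 12.*" matches any title starting with "SAFE 12".
print( bool( re.search( "SAFE 12.*", "SAFE 12.0.0 - Analysis" ) ) )  # True

# "^SAFE$" only matches a title that is exactly "SAFE".
print( bool( re.search( "^SAFE$", "SAFE" ) ) )         # True
print( bool( re.search( "^SAFE$", "SAFE 12.0.0" ) ) )  # False

# "^$" only matches an empty title.
print( bool( re.search( "^$", "" ) ) )  # True
```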
For further examples, please open up any of our application plugin Python script files and inspect them. Good examples
are to be found in:
../plugins/3dsmax/3dsmax.py
../plugins/AfterEffects/AfterEffects.py
../plugins/CSiSAFE/CSiSAFE.py
../plugins/Rhino/Rhino.py
Further information on Regular Expressions can be found on Wikipedia and many online POSIX compliant RegEx
testers are available to help you develop and test your RegEx before testing your code in Deadline:
regex101
regexr
regexpal
regextester
Finally, the Deadline FranticX.Processes.ManagedProcess class has a number of properties and functions to further
assist with popup handling; we recommend reviewing our Scripting API docs for more information on these:
PopupButtonClasses
PopupMaxChildWindows
PopupTextClasses
PressEnterDuringRender
Render Executable and Arguments
The RenderExecutable() callback is used to get the path to the executable that will be used for rendering. This callback
must be implemented in a Simple plug-in, or an error will occur. Continuing our example from above, we'll use the
path specified in the MyPlugin.dlinit file, which we can access using the global GetConfigEntry() function.
Another important (but optional) callback is the RenderArgument() callback. This callback should return the arguments you want to pass to the render executable. Typically, these arguments are built from values that are pulled from
the DeadlinePlugin class (like the scene file name, or the start and end frame for the task), or from the plug-in info file
that was submitted with the job using the GetPluginInfoEntry() function. If this callback is not implemented, then no
arguments will be passed to the executable.
After adding these callbacks, our example plug-in script now looks like this:
from Deadline.Plugins import *
from System.Diagnostics import *

######################################################################
## This is the function that Deadline calls to get an instance of the
## main DeadlinePlugin class.
######################################################################
def GetDeadlinePlugin():
    return MyPlugin()

######################################################################
## This is the function that Deadline calls when the plugin is no
## longer in use so that it can get cleaned up.
######################################################################
def CleanupDeadlinePlugin( deadlinePlugin ):
    deadlinePlugin.Cleanup()

######################################################################
## This is the main DeadlinePlugin class for MyPlugin.
######################################################################
class MyPlugin (DeadlinePlugin):
    ## Hook up the callbacks in the constructor.
    def __init__( self ):
        self.InitializeProcessCallback += self.InitializeProcess
        self.RenderExecutableCallback += self.RenderExecutable
        self.RenderArgumentCallback += self.RenderArgument

    ## Clean up the plugin.
    def Cleanup( self ):
        # Clean up stdout handler callbacks.
        for stdoutHandler in self.StdoutHandlers:
            del stdoutHandler.HandleCallback

        del self.InitializeProcessCallback
        del self.RenderExecutableCallback
        del self.RenderArgumentCallback

    ## Called by Deadline to initialize the process.
    def InitializeProcess( self ):
        # Set the plugin specific settings.
        self.SingleFramesOnly = False
        self.PluginType = PluginType.Simple

        # Set the ManagedProcess specific settings.
        self.ProcessPriority = ProcessPriorityClass.BelowNormal
        self.UseProcessTree = True
        self.StdoutHandling = True
        self.PopupHandling = True

    ## Called by Deadline to get the render executable. This pulls the
    ## path from the MyPluginRenderExecutable setting in MyPlugin.dlinit.
    def RenderExecutable( self ):
        return self.GetConfigEntry( "MyPluginRenderExecutable" )

    ## Called by Deadline to get the arguments passed to the executable.
    def RenderArgument( self ):
        return ""
There are many other callbacks that can be implemented for Simple plug-ins, which are covered in the Events section
for the ManagedProcess class in the Deadline Scripting reference. The best place to find examples of Simple plug-ins
is to look at some of the plug-ins that are shipped with Deadline. These range from the very basic (Blender), to the
more complex (MayaCmd).
our Lightning plug-in. The Lightning plug-in listens for commands from Deadline and executes them as necessary.
After rendering is complete, 3ds Max is shut down.
Initialization
To indicate that your plug-in is an Advanced plug-in, you need to set the PluginType property in the InitializeProcess()
callback.
from Deadline.Plugins import *

######################################################################
## This is the function that Deadline calls to get an instance of the
## main DeadlinePlugin class.
######################################################################
def GetDeadlinePlugin():
    return MyPlugin()

######################################################################
## This is the function that Deadline calls when the plugin is no
## longer in use so that it can get cleaned up.
######################################################################
def CleanupDeadlinePlugin( deadlinePlugin ):
    deadlinePlugin.Cleanup()

######################################################################
## This is the main DeadlinePlugin class for MyPlugin.
######################################################################
class MyPlugin (DeadlinePlugin):
    ## Hook up the callbacks in the constructor.
    def __init__( self ):
        self.InitializeProcessCallback += self.InitializeProcess

    ## Clean up the plugin.
    def Cleanup( self ):
        del self.InitializeProcessCallback

    ## Called by Deadline to initialize the process.
    def InitializeProcess( self ):
        # Set the plugin specific settings.
        self.SingleFramesOnly = False
        self.PluginType = PluginType.Advanced
Render Tasks
The RenderTasks() callback is the only required callback for Advanced plug-ins. If it is not implemented, an error will
occur. It contains the code to be executed for each task that a Slave renders. This could involve launching applications,
communicating with already running applications, or simply running a script to automate a particular task (like backing
up a group of files).
Other common callbacks for Advanced plug-ins are the StartJob() and EndJob() callbacks. The StartJob() callback can
be used to start up an application, or to set some local variables that will be used in other callbacks. If the StartJob()
callback is not implemented, then nothing is done during the StartJob phase. The EndJob() callback can be used to
shut down a running application, or to clean up temporary files. If the EndJob() callback is not implemented, then
nothing is done during the EndJob phase.
In the example below, we will be launching our application during the StartJob phase. The benefit of this is that
the application can be left running for the duration of the job, which eliminates the overhead of launching
the application for each task. To launch and monitor the application, we will be implementing a ManagedProcess
class and calling it MyPluginProcess. This ManagedProcess class will define the render executable and command line
arguments for launching the process we will be monitoring. Note that we aren't passing it any frame information, as
this needs to be handled in the RenderTasks() callback when it interacts with the process.
After adding these three callbacks, and the MyPluginProcess class, our example code looks like this. Note that the
RenderTasks() callback still needs code to allow it to interact with the running process launched in the StartJob()
callback.
from Deadline.Plugins import *
from FranticX.Processes import *
from System.Diagnostics import *

######################################################################
## This is the function that Deadline calls to get an instance of the
## main DeadlinePlugin class.
######################################################################
def GetDeadlinePlugin():
    return MyPlugin()

######################################################################
## This is the function that Deadline calls when the plugin is no
## longer in use so that it can get cleaned up.
######################################################################
def CleanupDeadlinePlugin( deadlinePlugin ):
    deadlinePlugin.Cleanup()

######################################################################
## This is the main DeadlinePlugin class for MyPlugin.
######################################################################
class MyPlugin (DeadlinePlugin):
    ## Variable to hold the Managed Process object.
    Process = None

    ## Hook up the callbacks in the constructor.
    def __init__( self ):
        self.InitializeProcessCallback += self.InitializeProcess
        self.StartJobCallback += self.StartJob
        self.RenderTasksCallback += self.RenderTasks
        self.EndJobCallback += self.EndJob

    ## Clean up the plugin.
    def Cleanup( self ):
        del self.InitializeProcessCallback
        del self.StartJobCallback
        del self.RenderTasksCallback
        del self.EndJobCallback

        # Clean up the managed process object.
        if self.Process:
            self.Process.Cleanup()
            del self.Process

    ## Called by Deadline to initialize the process.
    def InitializeProcess( self ):
        # Set the plugin specific settings.
        self.SingleFramesOnly = False
        self.PluginType = PluginType.Advanced

    ## Called by Deadline when the job starts.
    def StartJob( self ):
        self.Process = MyPluginProcess( self )
        self.StartMonitoredManagedProcess( "My Process", self.Process )

    ## Called by Deadline for each task the Slave renders.
    def RenderTasks( self ):
        # Do something to interact with the running process.
        pass

    ## Called by Deadline when the job ends.
    def EndJob( self ):
        self.ShutdownMonitoredManagedProcess( "My Process" )

######################################################################
## This is the ManagedProcess class that is launched above.
######################################################################
class MyPluginProcess (ManagedProcess):
    deadlinePlugin = None

    ## Hook up the callbacks in the constructor.
    def __init__( self, deadlinePlugin ):
        self.deadlinePlugin = deadlinePlugin

        self.InitializeProcessCallback += self.InitializeProcess
        self.RenderExecutableCallback += self.RenderExecutable
        self.RenderArgumentCallback += self.RenderArgument

    ## Clean up the managed process.
    def Cleanup( self ):
        # Clean up stdout handler callbacks.
        for stdoutHandler in self.StdoutHandlers:
            del stdoutHandler.HandleCallback

        del self.InitializeProcessCallback
        del self.RenderExecutableCallback
        del self.RenderArgumentCallback

    ## Called by Deadline to initialize the process.
    def InitializeProcess( self ):
        # Set the ManagedProcess specific settings.
        self.ProcessPriority = ProcessPriorityClass.BelowNormal
        self.UseProcessTree = True
        self.StdoutHandling = True
        self.PopupHandling = True

        # Set the stdout handlers.
        self.AddStdoutHandlerCallback( "WARNING:.*" ).HandleCallback += self.HandleStdoutWarning
        self.AddStdoutHandlerCallback( "ERROR:(.*)" ).HandleCallback += self.HandleStdoutError

        # Set the popup ignorers.
        self.AddPopupIgnorer( "Popup 1" )
        self.AddPopupIgnorer( "Popup 2" )

        # Set the popup handlers.
        self.AddPopupHandler( "Popup 3", "OK" )

    ## Called to get the render executable (pulled from MyPlugin.dlinit).
    def RenderExecutable( self ):
        return self.deadlinePlugin.GetConfigEntry( "MyPluginRenderExecutable" )

    ## Called to get the render arguments.
    def RenderArgument( self ):
        return ""

    ## Handle lines of stdout that match "WARNING:.*".
    def HandleStdoutWarning( self ):
        self.deadlinePlugin.LogWarning( self.GetRegexMatch( 0 ) )

    ## Handle lines of stdout that match "ERROR:(.*)".
    def HandleStdoutError( self ):
        self.deadlinePlugin.FailRender( self.GetRegexMatch( 1 ) )
Chapter 7. Scripting
Because the Advanced plug-ins are much more complex than the Simple plug-ins, we recommend taking a look at the
following plug-ins that are shipped with Deadline for examples:
3dsmax
Fusion
Lightwave
MayaBatch
Modo
Nuke
SoftimageBatch
The only functions that aren't DeadlinePlugin member functions are listed below, along with their replacement utility
functions.
Original Global Function -> Replacement Function
CheckPathMapping( path ) -> RepositoryUtils.CheckPathMapping( path )
CheckPathMappingInFile( inFileName, outFileName ) -> RepositoryUtils.CheckPathMappingInFile( inFileName, outFileName )
CheckPathMappingInFileAndReplaceSeparator( inFileName, outFileName, separatorToReplace, newSeparator ) -> RepositoryUtils.CheckPathMappingInFileAndReplaceSeparator( inFileName, outFileName, separatorToReplace, newSeparator )
PathMappingRequired( path ) -> RepositoryUtils.PathMappingRequired( path )
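To illustrate what these utilities do, here is a small stand-alone sketch of prefix-based path mapping in plain Python. The mapping rules below are hypothetical examples; in Deadline the real rules come from the Repository's path mapping settings and are applied by RepositoryUtils.

```python
# Minimal sketch of cross-platform path mapping, similar in spirit to
# RepositoryUtils.CheckPathMapping. These rules are hypothetical examples,
# not Deadline's actual configuration.
PATH_MAPPINGS = [
    ("\\\\server\\projects\\", "/mnt/projects/"),
    ("C:\\renders\\", "/mnt/renders/"),
]

def check_path_mapping(path):
    """Return the path with the first matching prefix rule applied."""
    for source, target in PATH_MAPPINGS:
        if path.startswith(source):
            mapped = target + path[len(source):]
            # Normalize any remaining Windows separators.
            return mapped.replace("\\", "/")
    return path

def path_mapping_required(path):
    """Return True if any mapping rule applies to the path."""
    return any(path.startswith(source) for source, _ in PATH_MAPPINGS)
```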
Callbacks
You need to set up callbacks in the constructor of the DeadlinePlugin class that you created in your plugin's Python
file. Examples are shown in the documentation above, and you can look at the plug-ins that ship with Deadline for
reference as well. For example:
def __init__( self ):
    self.InitializeProcessCallback += self.InitializeProcess
    self.RenderExecutableCallback += self.RenderExecutable
    self.RenderArgumentCallback += self.RenderArgument
    self.PreRenderTasksCallback += self.PreRenderTasks
    self.PostRenderTasksCallback += self.PostRenderTasks
Note that these callbacks need to be manually cleaned up when the plug-in is no longer in use. See the documentation
regarding the CleanupDeadlinePlugin function above for more information.
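The `+=` hook-up and `del` tear-down used above follow a multicast-delegate pattern. As a pure-Python illustration of how such a callback slot behaves (this sketch shows the pattern only; it is not Deadline's actual implementation):

```python
class CallbackSlot:
    """A minimal multicast callback slot, illustrating the += / del
    pattern used by Deadline's *Callback properties. Not Deadline code."""
    def __init__(self):
        self._handlers = []

    def __iadd__(self, handler):
        # "slot += handler" appends the handler to the invocation list.
        self._handlers.append(handler)
        return self

    def fire(self, *args):
        # Invoke every hooked handler, collecting the results.
        return [handler(*args) for handler in self._handlers]

class ExamplePlugin:
    def __init__(self):
        self.RenderTasksCallback = CallbackSlot()
        self.RenderTasksCallback += self.render_tasks

    def render_tasks(self):
        return "rendered"

plugin = ExamplePlugin()
results = plugin.RenderTasksCallback.fire()
```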
Deprecated Mode
As mentioned above, you can set the DeprecatedMode property in your dlinit file to True. This mode allows
Python.NET plug-ins written for Deadline 5.1 or 5.2 to work with Deadline 6 and later, which can make the transition to Deadline 6 easier if you have custom plug-ins.
Note that when DeprecatedMode is enabled, all global functions will still be available, so if you have custom
Python.NET plug-ins, you just need to drop them in the custom/plugins folder in the Repository, and add DeprecatedMode=True to your dlinit file.
If you have custom IronPython plug-ins from Deadline 5.2 or earlier, they will not work with Deadline 6 and later.
the existing ones. See the Scripting Overview documentation for more information, and links to the Deadline Scripting
reference.
Note that because the Python scripts for event plug-ins will be executed in a non-interactive way, it is important that
your scripts do not contain any blocking operations like infinite loops, or interfaces that require user input.
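For example, when an event script has to wait on an external resource, a bounded retry loop is safer than an open-ended one. A pure-Python sketch of the idea (poll_fn is a hypothetical stand-in for whatever the script waits on):

```python
import time

def wait_for_condition(poll_fn, timeout_seconds=30.0, interval_seconds=1.0):
    """Poll poll_fn until it returns True, but give up after
    timeout_seconds so the script can never block forever."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if poll_fn():
            return True
        time.sleep(interval_seconds)
    return False

# A condition that is already satisfied returns immediately.
ready = wait_for_condition(lambda: True, timeout_seconds=5.0)
```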
When an event is executed the log will show where the script is being loaded from.
tab in the Job Properties dialog. If you have a custom submission tool or script, you can specify the following in the
job info file:
SuppressEvents=True
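For example, a job info file written by a custom submitter might include the flag like this (the plug-in name, job name, and frame range are placeholder values; only the SuppressEvents line is the point):

```
Plugin=MayaBatch
Name=My Test Job
Frames=1-100
SuppressEvents=True
```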
Note that events will be executed by different Deadline applications, depending on the context of the event. For
example, the job submission event is processed by the Command application after the job has been submitted, while
the job finished event is normally processed by the Slave that finishes the last task for the job. However, the job finished
event could also be processed by the Monitor when a job is manually marked as complete.
Enabled: Set to True or False (default is False). Only enabled event plug-ins will respond to events.
DeprecatedMode: Set to True or False (default is False). Only set to True if you want a custom Python.NET event plug-in from Deadline 5.1 or 5.2 to work with Deadline 6 or later. More information on DeprecatedMode can be found later on.
It can also define key=value custom settings to be used by the event plug-in. For example, if you are connecting to an
in-house pipeline tool, you may want the URL and credentials to be configurable, in which case your MyEvent.dlinit
file might look like this:
Enabled=True
PipelineURL=http://[myserver]/pipeline
PipelineUserName=myuser
PipelinePassword=mypassword
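Inside the event plug-in, these values can then be read back through the listener's GetConfigEntry or GetConfigEntryWithDefault functions (see the Deadline Scripting reference). The dlinit format itself is plain key=value text, so as a stand-alone sketch of how such a file parses:

```python
def parse_dlinit(text):
    """Parse simple key=value lines (as in a .dlinit file) into a dict.
    Lines starting with ';' or '#' are comments and are skipped."""
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith((";", "#")):
            continue
        key, sep, value = line.partition("=")
        if sep:
            settings[key.strip()] = value.strip()
    return settings

# Hypothetical settings for an in-house pipeline tool.
config = parse_dlinit("""
# Connection settings for the pipeline tool.
Enabled=True
PipelineURL=http://[myserver]/pipeline
PipelineUserName=myuser
""")
```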
from Deadline.Events import *

######################################################################
## This is the function that Deadline calls to get an instance of the
## main DeadlineEventListener class.
######################################################################
def GetDeadlineEventListener():
    return MyEvent()

######################################################################
## This is the function that Deadline calls when the event plugin is
## no longer in use so that it can get cleaned up.
######################################################################
def CleanupDeadlineEventListener( deadlinePlugin ):
    deadlinePlugin.Cleanup()

######################################################################
## This is the main DeadlineEventListener class for MyEvent.
######################################################################
class MyEvent( DeadlineEventListener ):
    # TODO: Place code here to replace "pass"
    pass
The first thing to note is that we're importing the Deadline.Events namespace so that we can access the DeadlineEventListener class.
The GetDeadlineEventListener() function is important, as it allows Deadline to get an instance of our MyEvent class
(which extends the abstract DeadlineEventListener class). In Deadline 6.2 and later, the GetDeadlineEventListenerWithJobs( jobs ) function can be defined as an alternative. It works the same as GetDeadlineEventListener(), except
that it accepts a list of the Job objects that the event plug-in is being loaded for. If neither of these functions is
defined, Deadline will report an error when it tries to load the event plug-in.
The MyEvent class will need to implement certain callbacks based on the events you want to respond to, and these
callbacks must be hooked up in the MyEvent constructor. All callbacks are optional, but make sure to include at
least one so that your event plug-in actually does something. For a list of all available callbacks, refer to the DeadlineEventListener class in the Deadline Scripting reference.
The CleanupDeadlineEventListener() function is also important, as it is necessary to clean up the event plug-in when
it is no longer in use. Typically, this is used to clean up any callbacks that were created when the event plug-in was
initialized.
After implementing a few functions, your MyEvent.py script file might look something like this:

from Deadline.Events import *

######################################################################
## This is the function that Deadline calls to get an instance of the
## main DeadlineEventListener class.
######################################################################
def GetDeadlineEventListener():
    return MyEvent()

######################################################################
## This is the function that Deadline calls when the event plugin is
## no longer in use so that it can get cleaned up.
######################################################################
def CleanupDeadlineEventListener( deadlinePlugin ):
    deadlinePlugin.Cleanup()

######################################################################
## This is the main DeadlineEventListener class for MyEvent.
######################################################################
class MyEvent( DeadlineEventListener ):
[Enabled]
Type=boolean
Label=Enabled
Default=True
Description=If this event plug-in should respond to events.
[PipelineURL]
Type=string
Label=Pipeline URL
Default=http://[myserver]/pipeline
Description=The URL for our pipeline website.
[PipelineUserName]
Type=string
Label=Pipeline User Name
Default=
Description=The user name for our pipeline website.
[PipelinePassword]
Type=string
Label=Pipeline Password
Default=
Description=The password for our pipeline website.
Comment lines are supported in the param file, and must start with either ; or #. For example:
# This is a comment about this PipelineURL property.
[PipelineURL]
Type=string
Label=Pipeline URL
Default=http://[myserver]/pipeline
Description=The URL for our pipeline website.
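Since a param file is standard INI-style text, a quick way to inspect one from Python is the standard library's configparser (configured for the ; and # comment prefixes noted above; optionxform preserves the key casing):

```python
import configparser

# A param file fragment matching the example above.
param_text = """
# This is a comment about this PipelineURL property.
[PipelineURL]
Type=string
Label=Pipeline URL
Default=http://[myserver]/pipeline
Description=The URL for our pipeline website.
"""

parser = configparser.ConfigParser(comment_prefixes=(";", "#"))
parser.optionxform = str  # keep keys like "Type" case-sensitive
parser.read_string(param_text)
url_property = dict(parser["PipelineURL"])
```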
You'll notice that the property names between the square brackets match the custom keys we defined in our
MyEvent.dlinit file. This means that these controls will change the corresponding settings. The available key=value
pairs for the properties defined here are:
Category: The category the control should go under.
CategoryIndex: This determines the control's order under its category. This does the same thing as Index.
CategoryOrder: This determines the category's order among other categories. If more than one CategoryOrder is defined for the same category, the lowest value is used.
Default: The default value to be used if this property is not defined in the dlinit file. This does the same thing as DefaultValue.
DefaultValue: The default value to be used if this property is not defined in the dlinit file. This does the same thing as Default.
Description: A short description of the property the control is for (displayed as a tooltip in the UI).
DisableIfBlank: If True, a control will not be shown if this property is not defined in the dlinit file (True/False). This does the same thing as IgnoreIfBlank.
IgnoreIfBlank: If True, a control will not be shown if this property is not defined in the dlinit file (True/False). This does the same thing as DisableIfBlank.
Index: This determines the control's order under its category. This does the same thing as CategoryIndex.
Label: The control label.
Required: If True, a control will be shown for this property even if it's not defined in the dlinit file (True/False).
Type: The type of control (see table below).
Boolean: A drop-down control that allows the selection of True or False.
Color: Allows the selection of a color.
Enum: A drop-down control that allows the selection of an item from a list.
Combo: Same as Enum above.
Filename: Allows the selection of an existing file.
FilenameSave: Allows the selection of a new or existing file.
Float: A floating point spinner control.
Folder: Allows the selection of an existing folder.
Integer: An integer spinner control.
Label: A read-only text field.
MultiFilename: Allows the selection of multiple existing files, which are then separated by semicolons in the text field.
MultiLineMultiFilename: Allows the selection of multiple existing files, which are then placed on multiple lines in the text field.
MultiLineMultiFolder: Allows the selection of multiple existing folders, which are then placed on multiple lines in the text field.
MultiLineString: A text field with multiple lines.
Password: A text field that masks the text.
SlaveList: Allows the selection of existing Slaves, which are then separated by commas in the text field.
String: A text field.
DecimalPlaces: The number of decimal places for the Float controls.
Filter: The filter string for the Filename, FilenameSave, or MultiFilename controls.
Increment: The value to increment the Integer or Float controls by.
Items: The semicolon separated list of items for the Enum control. This does the same thing as Values.
Maximum: The maximum value for the Integer or Float controls.
Minimum: The minimum value for the Integer or Float controls.
Validator: A regular expression for the String control that is used to ensure the value is valid.
Values: The semicolon separated list of items for the Enum control. This does the same thing as Items.
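For example, an integer property and an enum property using these keys might be defined like this (the property names and values here are hypothetical):

```
[ThreadCount]
Type=integer
Label=Render Threads
Default=4
Minimum=1
Maximum=64
Increment=1
Description=The number of render threads to use.

[OutputFormat]
Type=enum
Items=exr;png;tif
Default=exr
Label=Output Format
Description=The image format for rendered frames.
```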
[Enabled]
Type=boolean
Label=Enabled
Default=True
Description=If this event plug-in should respond to events.
[QTSettings]
Type=filename
Label=QT Settings XML File
Default=
Description=The QT settings xml file.
An event plug-in can be run at a regular time interval by listening for Deadline's House Cleaning event. This is ideal
for executing an event plug-in on a regular schedule, at a point when the Deadline database is as up to date as possible.
The time interval of the House Cleaning operation is controlled in the Repository Options.
Deadline can integrate with IT monitoring systems such as Zabbix, Zenoss, Nagios, OpenNMS, SolarWinds, or any
other monitoring software via the house cleaning event callback. For example, this event could be used to regularly
inject Deadline job, slave, pulse, and balancer statistics or info/settings into another database, thereby providing
integration and consistency between separate information systems in different departments of a company.
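As a stand-alone sketch of the "inject stats into another database" idea (the table layout and the sample numbers below are hypothetical; in a real event plug-in the statistics would come from Deadline's Scripting API):

```python
import sqlite3

def record_farm_stats(db_path, stats):
    """Append one row of farm statistics to a SQLite database and
    return the number of rows now stored. In a real OnHouseCleaning
    callback this could target any studio database instead."""
    con = sqlite3.connect(db_path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS farm_stats (
               taken_at TEXT DEFAULT CURRENT_TIMESTAMP,
               queued_jobs INTEGER,
               rendering_slaves INTEGER,
               idle_slaves INTEGER)"""
    )
    con.execute(
        "INSERT INTO farm_stats (queued_jobs, rendering_slaves, idle_slaves) VALUES (?, ?, ?)",
        (stats["queued_jobs"], stats["rendering_slaves"], stats["idle_slaves"]),
    )
    con.commit()
    count = con.execute("SELECT COUNT(*) FROM farm_stats").fetchone()[0]
    con.close()
    return count

# Hypothetical snapshot; a real plug-in would query Deadline for these.
rows = record_farm_stats(":memory:", {"queued_jobs": 12, "rendering_slaves": 40, "idle_slaves": 8})
```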
Building your own scheduled event script file might look something like this:
from Deadline.Events import *

######################################################################
## This is the function that Deadline calls to get an instance of the
## main DeadlineEventListener class.
######################################################################
def GetDeadlineEventListener():
    return ScheduledEvent()

######################################################################
## This is the function that Deadline calls when the event plugin is
## no longer in use so that it can get cleaned up.
######################################################################
def CleanupDeadlineEventListener( deadlinePlugin ):
    deadlinePlugin.Cleanup()

######################################################################
## This is the main DeadlineEventListener class for ScheduledEvent.
######################################################################
class ScheduledEvent( DeadlineEventListener ):
    def __init__( self ):
        # Set up the event callbacks here.
        self.OnHouseCleaningCallback += self.OnHouseCleaning

    def Cleanup( self ):
        del self.OnHouseCleaningCallback

    def OnHouseCleaning( self ):
        # TODO: Execute generic pipeline duties here, such as
        # reporting to an external studio database or injecting
        # Deadline farm stats into Zabbix, Zenoss, or Nagios for IT.
        pass
Deadline can integrate with Software Configuration Management (SCM) systems such as CFEngine, Puppet,
SaltStack, Chef, SCCM, or any other SCM software via the slave event callbacks. Deadline ships with Puppet and
Salt Maintenance Jobs, which can be submitted via their Monitor submission scripts, as well as Puppet and Salt
slave-centric event plug-ins.
Your own SCM event plug-in might look something like this:
from Deadline.Events import *

######################################################################
## This is the function that Deadline calls to get an instance of the
## main DeadlineEventListener class.
######################################################################
def GetDeadlineEventListener():
    return SoftwareEvent()

######################################################################
## This is the function that Deadline calls when the event plugin is
## no longer in use so that it can get cleaned up.
######################################################################
def CleanupDeadlineEventListener( deadlinePlugin ):
    deadlinePlugin.Cleanup()

######################################################################
## This is the main DeadlineEventListener class for SoftwareEvent.
######################################################################
class SoftwareEvent( DeadlineEventListener ):
    def __init__( self ):
        # Set up the event callbacks here.
        self.OnSlaveIdleCallback += self.OnSlaveIdle
        self.OnSlaveStartedCallback += self.OnSlaveStarted
        self.OnSlaveStartingJobCallback += self.OnSlaveStartingJob

    def Cleanup( self ):
        del self.OnSlaveIdleCallback
        del self.OnSlaveStartedCallback
        del self.OnSlaveStartingJobCallback

    # This is called when a slave becomes idle.
    def OnSlaveIdle( self, slaveName ):
        # If a slave is IDLE, then it is not processing,
The only functions that aren't DeadlineEventListener member functions are listed below, along with their replacement
utility functions.
Original Global Function -> Replacement Function
CheckPathMapping( path ) -> RepositoryUtils.CheckPathMapping( path )
CheckPathMappingInFile( inFileName, outFileName ) -> RepositoryUtils.CheckPathMappingInFile( inFileName, outFileName )
CheckPathMappingInFileAndReplaceSeparator( inFileName, outFileName, separatorToReplace, newSeparator ) -> RepositoryUtils.CheckPathMappingInFileAndReplaceSeparator( inFileName, outFileName, separatorToReplace, newSeparator )
PathMappingRequired( path ) -> RepositoryUtils.PathMappingRequired( path )
Callbacks
You need to set up callbacks in the constructor of the DeadlineEventListener class that you created in your event
plug-in's Python file. Examples are shown in the documentation above, and you can look at the event plug-ins that
ship with Deadline for reference as well. For example:
def __init__( self ):
    self.OnJobSubmittedCallback += self.OnJobSubmitted
    self.OnJobStartedCallback += self.OnJobStarted
    self.OnJobFinishedCallback += self.OnJobFinished
    self.OnJobRequeuedCallback += self.OnJobRequeued
    self.OnJobFailedCallback += self.OnJobFailed
Note that these callbacks need to be manually cleaned up when the event plug-in is no longer in use. See the documentation regarding the CleanupDeadlineEventListener function above for more information.
Deprecated Mode
As mentioned above, you can set the DeprecatedMode property in your dlinit file to True. This mode allows
Python.NET event plug-ins written for Deadline 5.1 or 5.2 to work with Deadline 6 and later, which can make the
transition to Deadline 6 easier if you have custom event plug-ins.
Note that when DeprecatedMode is enabled, all global functions will still be available, so if you have custom
Python.NET event plug-ins, you just need to drop them in the custom/events folder in the Repository, and add
DeprecatedMode=True to your dlinit file.
If you have custom IronPython event plug-ins from Deadline 5.2 or earlier, they will not work with Deadline 6 and
later.
The GetCloudPluginWrapper() function is important, as it allows Deadline to get an instance of our MyCloud class
(which extends the abstract CloudPluginWrapper class). If this function isn't defined, Deadline will report an
error when it tries to load the cloud plug-in. Notice that we're importing the Deadline.Cloud namespace so that we
can access the CloudPluginWrapper class.
The MyCloud class will need to implement certain callbacks so that Deadline can get information from the cloud
provider, and these callbacks must be hooked up in the MyCloud constructor. For a list of all available callbacks, refer
to the CloudPluginWrapper class in the Deadline Scripting reference.
After implementing a few functions, your MyCloud.py script file might look something like this:
from Deadline.Cloud import *

######################################################################
## This is the function that Deadline calls to get an instance of the
## main CloudPluginWrapper class.
######################################################################
def GetCloudPluginWrapper():
    return MyCloud()

######################################################################
## This is the function that Deadline calls when the cloud plugin is
## no longer in use so that it can get cleaned up.
######################################################################
def CleanupCloudPlugin( deadlinePlugin ):
    deadlinePlugin.Cleanup()

######################################################################
## This is the main CloudPluginWrapper class for MyCloud.
######################################################################
class MyCloud( CloudPluginWrapper ):
    def __init__( self ):
        # Set up our callbacks for cloud control.
        self.VerifyAccessCallback += self.VerifyAccess
        self.AvailableHardwareTypesCallback += self.GetAvailableHardwareTypes
        self.AvailableOSImagesCallback += self.GetAvailableOSImages
        self.CreateInstancesCallback += self.CreateInstances
        self.TerminateInstancesCallback += self.TerminateInstances
        self.CloneInstanceCallback += self.CloneInstance
        self.GetActiveInstancesCallback += self.GetActiveInstances
        self.StopInstancesCallback += self.StopInstances
        self.StartInstancesCallback += self.StartInstances
        self.RebootInstancesCallback += self.RebootInstances

    def Cleanup( self ):
        # Clean up our callbacks for cloud control.
        del self.VerifyAccessCallback
        del self.AvailableHardwareTypesCallback
        del self.AvailableOSImagesCallback
        del self.CreateInstancesCallback
        del self.TerminateInstancesCallback
        del self.CloneInstanceCallback
        del self.GetActiveInstancesCallback
        del self.StopInstancesCallback
        del self.StartInstancesCallback
        del self.RebootInstancesCallback

    def VerifyAccess( self ):
        # TODO: Return True if the connection to the cloud provider can be verified.
        pass

    def GetAvailableHardwareTypes( self ):
        # TODO: Return a list of HardwareType objects representing the
        # hardware types supported by this provider.
        # Must be implemented for the Balancer to work.
        pass

    def GetAvailableOSImages( self ):
        # TODO: Return a list of OSImage objects representing the OS
        # images supported by this provider.
        # Must be implemented for the Balancer to work.
        pass

    def GetActiveInstances( self ):
        # TODO: Return a list of CloudInstance objects that are currently active.
        pass

    def CreateInstances( self, hardwareID, imageID, count ):
        # TODO: Start instances and return a list of CloudInstance
        # objects that have been started.
        # Must be implemented for the Balancer to work.
        pass

    def TerminateInstances( self, instanceIDs ):
        # TODO: Return a list of boolean values indicating which
        # instances terminated successfully.
        # Must be implemented for the Balancer to work.
        pass

    def StopInstances( self, instanceIDs ):
        # TODO: Return a list of boolean values indicating which
        # instances stopped successfully.
        pass

    def StartInstances( self, instanceIDs ):
        # TODO: Return a list of boolean values indicating which
        # instances started successfully.
        pass

    def RebootInstances( self, instanceIDs ):
        # TODO: Return a list of boolean values indicating which
        # instances rebooted successfully.
        pass

    def CloneInstance( self, *args ):
        # TODO: Clone an existing instance and return it. See the
        # CloudPluginWrapper class in the Deadline Scripting reference
        # for the exact signature.
        pass
Comment lines are supported in the param file, and must start with either ; or #. For example:
# This is a comment about this Enabled property.
[Enabled]
Type=boolean
Label=Enabled
Default=True
Description=If this cloud plug-in should be enabled.
The available key=value pairs for the properties defined here are:
Category: The category the control should go under.
CategoryIndex: This determines the control's order under its category. This does the same thing as Index.
CategoryOrder: This determines the category's order among other categories. If more than one CategoryOrder is defined for the same category, the lowest value is used.
Default: The default value to be used if this property is not defined in the dlinit file. This does the same thing as DefaultValue.
DefaultValue: The default value to be used if this property is not defined in the dlinit file. This does the same thing as Default.
Description: A short description of the property the control is for (displayed as a tooltip in the UI).
DisableIfBlank: If True, a control will not be shown if this property is not defined in the dlinit file (True/False). This does the same thing as IgnoreIfBlank.
IgnoreIfBlank: If True, a control will not be shown if this property is not defined in the dlinit file (True/False). This does the same thing as DisableIfBlank.
Index: This determines the control's order under its category. This does the same thing as CategoryIndex.
Label: The control label.
Required: If True, a control will be shown for this property even if it's not defined in the dlinit file (True/False).
Type: The type of control (see table below).
Boolean: A drop-down control that allows the selection of True or False.
Color: Allows the selection of a color.
Enum: A drop-down control that allows the selection of an item from a list.
Combo: Same as Enum above.
Filename: Allows the selection of an existing file.
FilenameSave: Allows the selection of a new or existing file.
Float: A floating point spinner control.
Folder: Allows the selection of an existing folder.
Integer: An integer spinner control.
Label: A read-only text field.
MultiFilename: Allows the selection of multiple existing files, which are then separated by semicolons in the text field.
MultiLineMultiFilename: Allows the selection of multiple existing files, which are then placed on multiple lines in the text field.
MultiLineMultiFolder: Allows the selection of multiple existing folders, which are then placed on multiple lines in the text field.
MultiLineString: A text field with multiple lines.
Password: A text field that masks the text.
SlaveList: Allows the selection of existing Slaves, which are then separated by commas in the text field.
String: A text field.
DecimalPlaces: The number of decimal places for the Float controls.
Filter: The filter string for the Filename, FilenameSave, or MultiFilename controls.
Increment: The value to increment the Integer or Float controls by.
Items: The semicolon separated list of items for the Enum control. This does the same thing as Values.
Maximum: The maximum value for the Integer or Float controls.
Minimum: The minimum value for the Integer or Float controls.
Validator: A regular expression for the String control that is used to ensure the value is valid.
Values: The semicolon separated list of items for the Enum control. This does the same thing as Items.
The GetBalancerPluginWrapper() function is important, as it allows Deadline to get an instance of our MyBalancerAlgorithm class (which extends the abstract BalancerPluginWrapper class). If this function isn't defined, Deadline
will report an error when it tries to load the balancer plug-in. Notice that we're importing the Deadline.Balancer
namespace so that we can access the BalancerPluginWrapper class.
The MyBalancerAlgorithm class will need to implement the BalancerAlgorithm callback so that Deadline knows
how to balance your farm, and this callback must be hooked up in the MyBalancerAlgorithm constructor.
After implementing a few functions, your MyBalancerAlgorithm.py script file might look something like this:
from Deadline.Balancer import *

###########################################################################
## This is the function that Deadline calls to get an instance of the
## main BalancerPluginWrapper class.
###########################################################################
def GetBalancerPluginWrapper():
    return MyBalancerAlgorithm()

###########################################################################
## This is the main BalancerPluginWrapper class for MyBalancerAlgorithm.
###########################################################################
class MyBalancerAlgorithm( BalancerPluginWrapper ):
    def __init__( self ):
        self.BalancerAlgorithmCallback += self.BalancerAlgorithm

    def BalancerAlgorithm( self, stateStruct ):
        # TODO: Return a target struct to the Balancer.
        pass
About: A short description of the plug-in.
ConcurrentTasks: Set to True or False (default is False). If tasks for this plug-in can render concurrently without interfering with each other, this can be set to True.
Debug: Set to True or False (default is False). If set to True, then debug plug-in logging will be printed out during rendering.
DeprecatedMode: Set to True or False (default is False). Only set to True if you want a custom Python.NET plug-in from Deadline 5.1 or 5.2 to work with Deadline 6 or later. More information on DeprecatedMode can be found later on.
It can also define key=value custom settings to be used by the plug-in. For this example, our MyBalancerAlgorithm.dlinit file might look like this:
About=My Example Plugin for Deadline
SomeSortOfScript=c:\path\to\my\script.py
Comment lines are supported in the dlinit file, and must start with either ; or #.
Build the submission UI: Typically done in the __main__ function by creating a ScriptDialog object and
adding controls to it. Each control's name must be unique, so that each control can be identified properly. You
can also set the dialog's size (if not using a grid layout), the row and column (if using a grid layout), title, and a
few other settings. For more details, see the ScriptDialog and ScriptControl sections of the Reference Manual.
For an example of how to use the grid layout, see the Grid Layout Example Script documentation.
Define and Load Sticky Settings: Sticky settings are settings that persist after the dialog has been closed.
They are defined by creating a string array that contains the names of the controls for which you want the
settings to persist. After defining them, you can load them by calling the LoadSettings function of your
ScriptDialog.
Show the Dialog: The last thing you should do in your __main__ function is to show your ScriptDialog,
by using its ShowDialog function.
Define Your Functions: Specify any functions that may be used by your script. These could be helper
functions, or event handlers that respond when UI values are modified.
Note that you don't necessarily need to follow this template, but the closer you stick to it, the more examples you'll
have to draw on.
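The sticky-settings step can be pictured as saving and restoring a name-to-value map for the listed controls. A stand-alone sketch of that idea (Deadline's LoadSettings/SaveSettings handle this for you in a real script; the control names below are hypothetical):

```python
import json
import os
import tempfile

# Only these (hypothetical) controls should persist between sessions.
STICKY_CONTROLS = ("NameBox", "FramesBox", "PriorityBox")

def save_sticky_settings(path, control_values):
    """Persist only the controls listed in STICKY_CONTROLS."""
    sticky = {name: control_values[name]
              for name in STICKY_CONTROLS if name in control_values}
    with open(path, "w") as f:
        json.dump(sticky, f)

def load_sticky_settings(path):
    """Restore previously saved control values (empty dict if none)."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)

# Round-trip example: the "Transient" control is not sticky, so it is dropped.
settings_path = os.path.join(tempfile.gettempdir(), "my_submitter_settings.json")
save_sticky_settings(settings_path, {"NameBox": "Test Job", "FramesBox": "1-100", "Transient": "x"})
restored = load_sticky_settings(settings_path)
```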
Once created, you can follow the template outlined above in the General Script Template section to build up your
script.
Task Scripts
Task Scripts are typically used to modify or to perform actions on a selected Task in the Monitor. They can be accessed
by right-clicking an existing Task in the Task Panel, under the Scripts sub-menu.
To create new Task scripts, simply navigate to the custom\scripts\Tasks folder in your Repository. Then, create a
new Python file named MyTaskScript.py, where MyTaskScript is the name of your new script.
Once created, you can follow the template outlined above in the General Script Template section to build up your
script.
Slave Scripts
Slave Scripts are typically used to modify or to perform actions on a selected Slave in the Monitor. They can be
accessed by right-clicking an existing Slave in the Slave Panel, under the Scripts sub-menu.
To create new Slave scripts, simply navigate to the custom\scripts\Slaves folder in your Repository. Then, create a
new Python file named MySlaveScript.py, where MySlaveScript is the name of your new script.
Once created, you can follow the template outlined above in the General Script Template section to build up your
script.
Pulse Scripts
Pulse Scripts are typically used to modify or to perform actions on a selected Pulse in the Monitor. They can be
accessed by right-clicking an existing Pulse in the Pulse Panel, under the Scripts sub-menu.
To create new Pulse scripts, simply navigate to the custom\scripts\Pulse folder in your Repository. Then, create a
new Python file named MyPulseScript.py, where MyPulseScript is the name of your new script.
Once created, you can follow the template outlined above in the General Script Template section to build up your
script.
Balancer Scripts
Balancer Scripts are typically used to modify or to perform actions on a selected Balancer in the Monitor. They can
be accessed by right-clicking an existing Balancer in the Balancer Panel, under the Scripts sub-menu.
To create new Balancer scripts, simply navigate to the custom\scripts\Balancer folder in your Repository. Then,
create a new Python file named MyBalancerScript.py, where MyBalancerScript is the name of your new script.
Once created, you can follow the template outlined above in the General Script Template section to build up your
script.
Limit Scripts
Limit Scripts are typically used to modify or to perform actions on selected Limits in the Monitor. They can be
accessed by right-clicking an existing Limit in the Limits Panel, under the Scripts sub-menu.
To create new Limit scripts, simply navigate to the custom\scripts\Limits folder in your Repository. Then, create a
new Python file named MyLimitScript.py, where MyLimitScript is the name of your new script.
Chapter 7. Scripting
Once created, you can follow the template outlined above in the General Script Template section to build up your
script.
Job Report Scripts
Job Report Scripts are typically used to modify or to perform actions on selected Job Reports in the Monitor. They
can be accessed by right-clicking an existing Job Report in the Job Report Panel, under the Scripts sub-menu.
To create new Job Report scripts, simply navigate to the custom\scripts\JobReports folder in your Repository. Then,
create a new Python file named MyJobReportScript.py, where MyJobReportScript is the name of your new script.
Once created, you can follow the template outlined above in the General Script Template section to build up your
script.
Slave Report Scripts
Slave Report Scripts are typically used to modify or to perform actions on selected Slave Reports in the Monitor. They
can be accessed by right-clicking an existing Slave Report in the Slave Report Panel, under the Scripts sub-menu.
To create new Slave Report scripts, simply navigate to the custom\scripts\SlaveReports folder in your Repository.
Then, create a new Python file named MySlaveReportScript.py, where MySlaveReportScript is the name of your
new script.
Once created, you can follow the template outlined above in the General Script Template section to build up your
script.
Once you start a grid, you can add controls to it by row and column. There is no need to specify how many rows or
columns you want the grid to have; just specify the row and column where you want the control to be, and the grid will
grow to accommodate it. Here is an example of adding a label and a text field to the dialog in the same row.
dg.AddGrid()
dg.AddControlToGrid( "Label1", "LabelControl", "I'm a label.", 0, 0, "A tooltip", False )
dg.AddControlToGrid( "TextBox1", "TextControl", "", 0, 1 )
dg.EndGrid()
It is not possible to specify the size of the controls you add to the grid; however, it is also not necessary. The
contents of the grid(s) will automatically adjust themselves to share the size of the dialog. If you do not want certain
elements to grow within a row, you can disable their expand property. If you want a control to take more
space, you can make it span multiple rows or columns using rowSpan and colSpan, respectively. By default,
controls have expand enabled and their colSpan and rowSpan properties set to 1.
This is an example of a dialog with two rows and four columns. The first row contains a label in the first column and
is set to not grow any bigger than it needs to and a text control that spans the next 3 columns and is allowed to grow.
The second row contains three labels that are not allowed to grow in the first three columns and a text control in the
fourth column that can grow as needed.
dg.AddGrid()
dg.AddControlToGrid( "L1", "LabelControl", "I'm a label.", 0, 0, "A tooltip", expand=False )
dg.AddControlToGrid( "TextBox1", "TextControl", "", 0, 1, colSpan=3 )
dg.AddControlToGrid( "L2", "LabelControl", "I'm a label.", 1, 0, expand=False )
dg.AddControlToGrid( "L3", "LabelControl", "I'm a label.", 1, 1, expand=False )
dg.AddControlToGrid( "L4", "LabelControl", "I'm a label.", 1, 2, expand=False )
dg.AddControlToGrid( "TextBox2", "TextControl", "", 1, 3 )
dg.EndGrid()
When you expand the dialog horizontally, only the text controls in the above example will grow. When expanding
vertically, nothing grows other than the dialog itself. Note that if you set all controls in a row to not expand, the
grid cells containing them will still expand without allowing the controls to expand with them. This will result in
the dialog losing its layout when it is expanded.
Here is an example of what this dialog would look like expanded horizontally:
Here is an example of what this dialog would look like expanded vertically:
Here is an example of what the dialog would look like expanded horizontally if all controls had expand=False set.
If you want to space controls out in the grid you can use labels filled with white space, or you can use horizontal
spacers. Here is an example of adding two buttons to a dialog and keeping them to the far right of the dialog.
dg.AddGrid()
dg.AddHorizontalSpacerToGrid( "DummyLabel", 0, 0 )
ok = dg.AddControlToGrid( "Ok", "ButtonControl", "OK", 0, 1, expand=False )
ok.ValueModified.connect( OkButtonPressed )
cancel = dg.AddControlToGrid( "Cancel", "ButtonControl", "Cancel", 0, 2, expand=False )
cancel.ValueModified.connect( CancelButtonPressed )
dg.EndGrid()
Here is an example of what this dialog will look like when expanded horizontally:
All together, here is an example of a basic script dialog using grid layouts.
from DeadlineUI.Controls.Scripting.DeadlineScriptDialog import DeadlineScriptDialog

########################################################################
## Globals
########################################################################
dg = None

########################################################################
## Main Function Called By Deadline
########################################################################
def __main__( *args ):
    global dg

    dg = DeadlineScriptDialog()
    dg.SetTitle( "Example Deadline Script" )

    dg.AddGrid()
    dg.AddControlToGrid( "L1", "LabelControl", "I'm a label.", 0, 0, "A tooltip", expand=False )
    dg.AddControlToGrid( "TextBox1", "TextControl", "", 0, 1, colSpan=3 )
    dg.AddControlToGrid( "L2", "LabelControl", "I'm a label.", 1, 0, expand=False )
    dg.AddControlToGrid( "L3", "LabelControl", "I'm a label.", 1, 1, expand=False )
    dg.AddControlToGrid( "L4", "LabelControl", "I'm a label.", 1, 2, expand=False )
    dg.AddControlToGrid( "TextBox2", "TextControl", "", 1, 3 )
    dg.EndGrid()

    # Adds an OK and Cancel button to the dialog
    dg.AddGrid()
    dg.AddHorizontalSpacerToGrid( "DummyLabel", 0, 0 )
    ok = dg.AddControlToGrid( "Ok", "ButtonControl", "OK", 0, 1, expand=False )
    ok.ValueModified.connect( OkButtonPressed )
    cancel = dg.AddControlToGrid( "Cancel", "ButtonControl", "Cancel", 0, 2, expand=False )
    cancel.ValueModified.connect( CancelButtonPressed )
    dg.EndGrid()

    dg.ShowDialog( True )

def CloseDialog():
    global dg
    dg.CloseDialog()

def CancelButtonPressed():
    CloseDialog()

def OkButtonPressed( *args ):
    global dg
    dg.ShowMessageBox( "You pressed the OK button.", "Button Pressed" )
Instead, you need to import the DeadlineScriptDialog class, and use its constructor to create an instance:
from DeadlineUI.Controls.Scripting.DeadlineScriptDialog import DeadlineScriptDialog
...
scriptDialog = DeadlineScriptDialog()
Another change is how the ValueModified event handlers are hooked up for the ScriptDialog controls. For example,
this is how the event was hooked up in Deadline 5:
compBox = scriptDialog.AddControl(
    "CompBox", "TextControl", "", dialogWidth-labelWidth-24, -1 )
compBox.ValueModified += CompChanged
Now, because the ScriptDialog object is a Qt object, you need to use the connect function to hook up events:
compBox = scriptDialog.AddControl(
    "CompBox", "TextControl", "", dialogWidth-labelWidth-24, -1 )
compBox.ValueModified.connect( CompChanged )
The File Browser based controls have also changed their file filter syntax. In Deadline 5, the file filter syntax looked
like this:
scriptDialog.AddRow()
scriptDialog.AddControl( "FileLabel", "LabelControl", "Select File", labelWidth, -1 )
scriptDialog.AddSelectionControl( "FileBox", "FileBrowserControl", "",
    "All Files (*.*)|*.*|CAD Files: JT (*.jt)|*.jt", dialogWidth-labelWidth-24, -1 )
scriptDialog.EndRow()
Now, because the ScriptDialog object is a Qt object, you need to use the following syntax to filter files in any of the
browser controls. Note the replacement of the | character with ;;, and that there is no longer a requirement to provide
a file extension filter per file format entry, as the filter is taken from the text label, e.g. (*.txt) or (*.*), as in the
example below:
scriptDialog.AddRow()
scriptDialog.AddControl( "FileLabel", "LabelControl", "Select File", labelWidth, -1 )
scriptDialog.AddSelectionControl( "FileBox", "FileBrowserControl", "",
    "Text Files (*.txt);;All Files (*.*)", dialogWidth-labelWidth-24, -1 )
scriptDialog.EndRow()
Job scripts can also be specified by custom submitters by including them in the Job Info File
on submission. Note that a full path to the script is required, so it is recommended that the script file be stored in a
location that is accessible to all Slaves.
Creating Job Scripts
The only requirement for a Job script is that you define a __main__ function. This is the function that will be called
by Deadline when it comes time to execute the script, and an instance of the DeadlinePlugin object will be passed as
a parameter.
def __main__( *args ):
    # Replace "pass" with your script code
    pass
A common use for Post-Task scripts is to do some processing with the output image files. Here is a sample script that
demonstrates how to get the output file names for the current task, and print them out to the render log:
import re

from System.IO import *
from Deadline.Scripting import *

def __main__( *args ):
    deadlinePlugin = args[0]
    job = deadlinePlugin.GetJob()

    outputDirectories = job.OutputDirectories
    outputFilenames = job.OutputFileNames

    paddingRegex = re.compile( "[^\\?#]*([\\?#]+).*" )

    for i in range( 0, len(outputDirectories) ):
        outputDirectory = outputDirectories[i]
        outputFilename = outputFilenames[i]

        startFrame = deadlinePlugin.GetStartFrame()
        endFrame = deadlinePlugin.GetEndFrame()
        for frameNum in range( startFrame, endFrame + 1 ):
            outputPath = Path.Combine( outputDirectory, outputFilename )
            outputPath = outputPath.replace( "//", "/" )
            m = re.match( paddingRegex, outputPath )
            if m != None:
                padding = m.group( 1 )
                frame = StringUtils.ToZeroPaddedString( frameNum, len(padding), False )
                outputPath = outputPath.replace( padding, frame )
            deadlinePlugin.LogInfo( "Output file: " + outputPath )
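The padding-replacement step above can be exercised outside of Deadline with plain Python. In this sketch, expand_frame is a hypothetical helper, and str.zfill stands in for StringUtils.ToZeroPaddedString:

```python
import re

# Same padding pattern as the script above: capture a run of '?' or '#'.
paddingRegex = re.compile( r"[^\?#]*([\?#]+).*" )

def expand_frame( outputPath, frameNum ):
    """Replace the padding run (e.g. '####') with a zero-padded frame number."""
    m = paddingRegex.match( outputPath )
    if m is not None:
        padding = m.group( 1 )
        # str.zfill is a stand-in for StringUtils.ToZeroPaddedString
        frame = str( frameNum ).zfill( len( padding ) )
        outputPath = outputPath.replace( padding, frame )
    return outputPath

print( expand_frame( "renders/beauty_####.exr", 7 ) )  # renders/beauty_0007.exr
```

Paths without any '?' or '#' characters are returned unchanged, matching the script's behavior of logging the raw output path when no padding is found.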
Dependency scripts can be specified by custom submitters by including them in the Job Info File on submission. Note
that a full path to the script is required, so it is recommended that the script file be stored in a location that is
accessible to all Slaves.
Creating Dependency Scripts
The only requirement for a Dependency script is that you define a __main__ function. This is the function that will be
called by Deadline when it comes time to execute the script to determine whether a job should be released.
For jobs without Frame Dependencies enabled, only the job ID will be passed as a parameter. The __main__ function
should then return True if the job should be released or False if it shouldn't be.
For jobs with Frame Dependencies enabled, the job ID will be passed as the first parameter, and a list of pending task
IDs will be passed as the second parameter. The __main__ function should then return the list of task IDs that should
be released, or an empty list if none should be released.
Here is a very simple example that will work regardless of whether Frame Dependencies are enabled or not:
def __main__( jobId, taskIds=None ):
    if taskIds is None:
        # Frame Dependencies are disabled
        releaseJob = False
        # figure out if the job should be released
        return releaseJob
    else:
        # Frame Dependencies are enabled
        tasksToRelease = []
        # figure out which tasks should be released, and append their IDs to the list
        return tasksToRelease
Giving the taskIds parameter a default of None allows the script to function regardless of whether Frame
Dependencies are enabled. If taskIds is None, you know that Frame Dependencies are disabled.
It is also possible for the web service script to set the HTTP status code. This can be done by including the status code
after the results in the return statement. For example:
def __main__( *args ):
    results = ""
    statusCode = "200"
    #...
    # append data to results, and set statusCode as necessary
    #...
    return results, statusCode
Finally, it is possible for the web service script to set additional headers to be included in the HTTP response. This
can be done by including an arbitrary number of key=value strings after the status code in the return statement. For
example:
def __main__( *args ):
    results = ""
    statusCode = "200"
    #...
    # append data to results, and set statusCode as necessary
    #...
    return results, statusCode, "header1=value1", "header2=value2"
Supporting Arguments
Arguments can be passed to web service scripts as a tuple with 2 items, and can be accepted in two different ways.
The first way is to simply accept args, which will be an array of length 2. The other way is to accept the tuple as two
separate variables, for instance (dlArgs, qsArgs) for Deadline arguments and query string arguments. In the first case,
args[0] is equivalent to dlArgs (Deadline arguments), and args[1] is equivalent to qsArgs (Query String Arguments).
Deadline Arguments
The web service will automatically pass your script a dictionary as the first item in the args tuple. The dictionary
will contain at least one key (Authenticated), but may contain more if the user authenticated with the web service.
If the user has not authenticated, the dictionary will only contain the Authenticated key, with a value
of False. If the user has authenticated, it will also contain the UserName key, whose value is the name of the user
executing the script.
Query String Arguments
Arguments are passed to your script by a query string defined in the URL, and can be in one of the following forms:
Key/Value Pairs: This is the preferred method of passing arguments. Arguments in this form will look something like
this at the end of the URL:
?key0=value0&key1=value1
List of Values: Arguments in this form will instead look something like this:
?value0&value1
The query string will be passed to the Python script as a NameValueCollection, and it will be the second item of the
tuple passed to your script's __main__ function.
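To see how the two query-string forms decompose, Python's standard urllib.parse module can approximate the parsing; note that Deadline itself hands your script a .NET NameValueCollection, not a Python dict, so this is only an illustration:

```python
from urllib.parse import parse_qs

# Key/value pairs: each key maps to a list of values.
kv = parse_qs( "key0=value0&key1=value1" )
print( kv )  # {'key0': ['value0'], 'key1': ['value1']}

# List of values: keep_blank_values=True treats each bare token as a key
# with an empty value, which is how a value-only query string survives parsing.
lv = parse_qs( "value0&value1", keep_blank_values=True )
print( lv )  # {'value0': [''], 'value1': ['']}
```

This is one reason key/value pairs are the preferred form: each argument arrives with an explicit name rather than as an anonymous token.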
Relevant API Functions
For functions that will be relevant to most Web Service scripts, see the Deadline.PulseUtils section of the Deadline
Scripting Reference documentation. The full Deadline Scripting Reference can be found on the Thinkbox Software
Documentation Website. Offline PDF and HTML versions can be downloaded from here as well.
Some scripts can take arguments, as detailed in the previous section. To include arguments, you need to place a ?
between the base URL and the first argument, with & separating additional arguments. Here is an example of how you
would pass arg1, arg2, and arg3 as a list of arguments to the GetFarmStatistics.py script:
http://[myhost]:8080/GetFarmStatistics?arg1&arg2&arg3
Here is an example of how you would pass values for arguments named arg1, arg2, and arg3 in the form of
key-value pairs:
http://[myhost]:8080/GetFarmStatistics?arg1=value1&arg2=value2&arg3=value3
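For illustration, such a key-value URL can be assembled with Python's standard urlencode; the host name here is a placeholder:

```python
from urllib.parse import urlencode

base = "http://myhost:8080/GetFarmStatistics"  # placeholder host and script name
url = base + "?" + urlencode( { "arg1": "value1", "arg2": "value2", "arg3": "value3" } )
print( url )  # http://myhost:8080/GetFarmStatistics?arg1=value1&arg2=value2&arg3=value3
```

Using urlencode rather than string concatenation also takes care of percent-encoding any values that contain spaces or reserved characters.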
How the results of the script are displayed depends entirely on the format in which the script returns them.
7.9.2 Set-up
In order to use the Standalone Python API, you must have Python 2.7 or later installed. Copy the Deadline
folder containing the Standalone Python API from \\your\repository\api\python to the site-packages folder of
your Python installation, and the API is ready to use.
Documentation for all the possible API functions can be found at the Deadline Downloads page.
7.9.4 Authenticating
If your Web Service has authentication enabled, then you must set up authentication for the Python API. This can be
achieved through the EnableAuthentication and SetAuthenticationCredentials functions. Setting your authentication credentials allows the Python API to use them for as long as that instance of Python is running.
>>> from Deadline.DeadlineConnect import DeadlineCon as Connect
>>> con = Connect('PulseName', 8080)
>>> con.Groups.GetGroupNames()
"Error: HTTP Status Code 401. Authentication with the Web Service failed.
Please ensure that the authentication credentials are set, are correct, and
that authentication mode is enabled."
>>> con.AuthenticationModeEnabled()
False
>>> con.EnableAuthentication(True)
>>> con.AuthenticationModeEnabled()
True
>>> con.SetAuthenticationCredentials("username", "password")
>>> con.Groups.GetGroupNames()
[u'none', u'group1', u'group2', u'group3']
By default, SetAuthenticationCredentials also enables authentication, so it is not actually necessary to explicitly call
EnableAuthentication as well. If you want to store your credentials without enabling authentication, you can do so
using the optional third parameter.
>>> con.SetAuthenticationCredentials("username", "password", False)
Example: Getting a job, changing the pool and priority, then saving it.
>>> job = con.Jobs.GetJob(jobId)
>>> str(job['Props']['Pool'])
none
>>> job['Props']['Pool'] = unicode('jobPool')
>>> str(job['Props']['Pool'])
jobPool
>>> print str(job['Props']['Pri'])
50
>>> job['Props']['Pri'] = 75
>>> str(job['Props']['Pri'])
75
>>> con.Jobs.SaveJob(job)
'Success'
>>> job = con.Jobs.GetJob(jobId)
>>> str(job['Props']['Pool']) + ' ' +str(job['Props']['Pri'])
jobPool 75
Note: when submitting a job, the JobInfo and PluginInfo dictionaries should contain ALL the minimum necessary
KEY=VALUE pairs to successfully run this plugin job type in Deadline. As the KEY=VALUE pairs are internal and
change depending on the application plugin, it is recommended that you submit a job normally to Deadline and then inspect
the job's Submission Params to see what KEY=VALUE pairs should be submitted for this job type. You can also use
the Export button to take a copy of the JobInfo and PluginInfo files and submit the job using these files instead of via
Python dictionaries.
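As a rough illustration only, a submission might build the two dictionaries as below; the specific keys shown (a hypothetical CommandLine-style job) are assumptions and must be checked against your own job's Submission Params:

```python
# Hypothetical minimal dictionaries for a CommandLine-style job; the exact
# KEY=VALUE pairs required vary per application plugin, so inspect the
# Submission Params of a normally-submitted job first.
JobInfo = {
    "Name": "API Test Job",
    "UserName": "artist",
    "Frames": "1-10",
    "Plugin": "CommandLine",
}
PluginInfo = {
    "Executable": "/usr/bin/echo",  # placeholder executable
    "Arguments": "hello",
}

# With a DeadlineCon instance 'con', the job would then be submitted with:
# newJob = con.Jobs.SubmitJob( JobInfo, PluginInfo )
```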
CHAPTER
EIGHT
REST API
Requested operation could not be completed using the request format given.
500 - Internal Server Error
Request message could not be interpreted properly, or the action being attempted caused an exception in Deadline.
501 - Not Implemented
Request type is not supported. For example, a JobReport PUT request would return this because only
GET is supported.
8.2 Jobs
8.2.1 Overview
Job requests can be used to set and retrieve information for one or many jobs. Job requests support GET, PUT, POST
and DELETE request types. For more about these request types and their uses see the Request Formats and Responses
documentation.
Get Jobs In Specified State Gets jobs in the specified state(s). Valid states are Active, Suspended, Completed, Failed,
and Pending. Note that Active covers both Queued and Rendering jobs. Specify more than one state by separating them with commas (e.g. Active,Completed,Suspended).
URL: http://hostname:portnumber/api/jobs?States=states
Request Type: GET
Message Body: N/A
Response: JSON object containing all the jobs in the specified state(s).
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Get All The Job IDs
URL: http://hostname:portnumber/api/jobs?IdOnly=true
Request Type: GET
Message Body: N/A
Response: JSON object containing all the job IDs in the repository.
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Get Job Gets job info for the given job ID.
URL: http://hostname:portnumber/api/jobs?JobID=validjobidhere
Request Type: GET
Message Body: N/A
Response: JSON object containing all the job information for the job ID provided.
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Save Job Saves the job info provided. Job info must be in JSON format.
URL: http://hostname:portnumber/api/jobs
Request Type: PUT
Message Body:
JSON object where the following keys are mandatory:
Command = save
Job = JSON object containing the job info
Response: Success
Possible Errors:
400 Bad Request: There was no Job entry in the JSON object in the message body.
500 Internal Server Error: An exception occurred within the Deadline code.
Suspend Job Puts the job with the matching ID into the Suspended state.
URL: http://hostname:portnumber/api/jobs
Request Type: PUT
Message Body:
JSON object where the following keys are mandatory:
Command = suspend
Message Body:
JSON object where the following keys are mandatory:
Command = resumefailed
JobID = the ID of the failed Job to be resumed
Response: Success
Possible Errors:
400 Bad Request: There was no JobID entry in the JSON object in the message body.
500 Internal Server Error:
An exception occurred within the Deadline code, or
Job ID provided does not correspond to a Job in the repository.
Requeue Job Requeues the job with the ID that matches the provided ID.
URL: http://hostname:portnumber/api/jobs
Request Type: PUT
Message Body:
JSON object where the following keys are mandatory:
Command = requeue
JobID = the ID of the Job to be requeued
Response: Success
Possible Errors:
400 Bad Request: There was no JobID entry in the JSON object in the message body.
500 Internal Server Error:
An exception occurred within the Deadline code, or
Job ID provided does not correspond to a Job in the repository.
Archive Job Archives the job with the ID that matches the provided ID.
URL: http://hostname:portnumber/api/jobs
Request Type: PUT
Message Body:
JSON object where the following keys are mandatory:
Command = archive
JobID = the ID of the Job to be archived
Response: Success
Possible Errors:
400 Bad Request: There was no JobID entry in the JSON object in the message body.
500 Internal Server Error:
An exception occurred within the Deadline code, or
Job ID provided does not correspond to a Job in the repository.
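Since most PUT job commands share the same two mandatory keys, the message body can be built generically; the helper below is an illustrative sketch (not part of the Deadline API), and the job ID is a placeholder:

```python
import json

def job_command_body( command, job_id ):
    """Serialize the two mandatory keys shared by most PUT job commands."""
    return json.dumps( { "Command": command, "JobID": job_id } )

# Sent as the body of: PUT http://hostname:portnumber/api/jobs
body = job_command_body( "archive", "someJobId" )
```

The same helper covers suspend, requeue, and the other single-job commands by swapping the command string.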
Response: Success
Possible Errors:
400 Bad Request: There was no JobID entry in the JSON object in the message body.
500 Internal Server Error:
An exception occurred within the Deadline code, or
Job ID provided does not correspond to a Job in the repository.
Set Job Machine Limit Sets the Job Machine Limit for the job with the ID that matches the provided ID.
URL: http://hostname:portnumber/api/jobs
Request Type: PUT
Message Body:
JSON object where the following keys are mandatory:
Command = setjobmachinelimit
JobID = the ID of the Job
The following keys are optional:
Limit = the new job machine limit, must be an integer
SlaveList = the slave/s to be set as the slave list (May be an array)
WhiteListFlag = boolean : sets the whitelistflag to true or false
Progress = Floating point number for the release percentage
Response: Success
Possible Errors:
400 Bad Request: There was no JobID entry in the JSON object in the message body.
500 Internal Server Error:
An exception occurred within the Deadline code, or
Job ID provided does not correspond to a Job in the repository.
Add Slaves To Job Machine Limit List Adds the provided Slaves to the job with the ID that matches the provided
ID.
URL: http://hostname:portnumber/api/jobs
Request Type: PUT
Message Body:
JSON object where the following keys are mandatory:
Command = addslavestojobmachinelimitlist
JobID = the ID of the Job
SlaveList = the slave/s to be added to the slave list (May be an array)
Response: Success
Possible Errors:
400 Bad Request:
There was no JobID entry in the JSON object in the message body, or
There needs to be at least one Slave passed.
500 Internal Server Error:
An exception occurred within the Deadline code, or
Job ID provided does not correspond to a Job in the repository.
Remove Slaves From Job Machine Limit List Removes the provided Slaves from the Job Machine Limit List for
the job with the ID that matches the provided ID.
URL: http://hostname:portnumber/api/jobs
Request Type: PUT
Message Body:
JSON object where the following keys are mandatory:
Command = removeslavesfromjobmachinelimitlist
JobID = the ID of the Job
SlaveList = the slave/s to be removed from the slave list (May be an array)
Response: Success
Possible Errors:
400 Bad Request:
There was no JobID entry in the JSON object in the message body, or
There needs to be at least one Slave passed.
500 Internal Server Error:
An exception occurred within the Deadline code, or
Job ID provided does not correspond to a Job in the repository.
Set Job Machine Limit Listed Slaves Sets provided Slaves as Job Machine Limit Listed Slaves for the Job whose
ID matches the provided ID.
URL: http://hostname:portnumber/api/jobs
Request Type: PUT
Message Body:
JSON object where the following keys are mandatory:
Command = setjobmachinelimitlistedslaves
JobID = the ID of the Job
SlaveList = the slave/s to be set as the slave list (May be an array)
Response: Success
Possible Errors:
400 Bad Request:
There was no JobID entry in the JSON object in the message body, or
There needs to be at least one Slave passed.
500 Internal Server Error:
Set Job Frame Range Sets the frame range for the job with the ID that matches the provided ID.
URL: http://hostname:portnumber/api/jobs
Request Type: PUT
Message Body:
JSON object where the following keys are mandatory:
Command = setjobframerange
JobID = the ID of the Job
FrameList = the new frame list
ChunkSize = the new chunk size
Response: Success
Possible Errors:
400 Bad Request:
There was no JobID entry in the JSON object in the message body.
500 Internal Server Error:
An exception occurred within the Deadline code, or
Job ID provided does not correspond to a Job in the repository.
Append Job Frame Range Appends frames to the job with the ID that matches the provided ID. This adds new tasks
without affecting the job's existing tasks.
URL: http://hostname:portnumber/api/jobs
Request Type: PUT
Message Body:
JSON object where the following keys are mandatory:
Command = appendjobframerange
JobID = the ID of the Job
FrameList = the frame list to append to the job's existing frames
Response: Success
Possible Errors:
400 Bad Request:
There was no JobID entry in the JSON object in the message body.
500 Internal Server Error:
An exception occurred within the Deadline code, or
Job ID provided does not correspond to a Job in the repository.
Submit Job Submits a job using the job info provided.
URL: http://hostname:portnumber/api/jobs
Request Type: POST
Message Body:
Note that an active job can either be idle or rendering. Use the RenderingChunks property to determine if anything is
rendering.
Timeout (OnTaskTimeout)
0 = Both
1 = Error
2 = Notify
OnComp (OnJobComplete)
0 = Archive
1 = Delete
2 = Nothing
Schd (ScheduledType)
0 = None
1 = Once
2 = Daily
Get Job Error Reports Gets all the Job Error Reports for the Job that corresponds to the provided Job ID.
URL: http://hostname:portnumber/api/jobreports?Data=error&JobID=validJobID
Request Type: GET
Message Body: N/A
Response: JSON object containing all the job error reports for the requested job, or a message stating that there
are no error reports for the job.
Possible Errors:
400 Bad Request: No Job ID was provided.
500 Internal Server Error:
An exception occurred within the Deadline code, or
The Job ID provided does not correspond to any Job in the repository.
Get Job Log Reports Gets all the Job Reports for the Job that corresponds to the provided Job ID.
URL: http://hostname:portnumber/api/jobreports?Data=log&JobID=validJobID
Request Type: GET
Message Body: N/A
Response: JSON object containing all the job log reports for the requested job, or a message stating that there
are no log reports for the job.
Possible Errors:
400 Bad Request: No Job ID was provided.
500 Internal Server Error:
An exception occurred within the Deadline code, or
The Job ID provided does not correspond to any Job in the repository.
Get Job Requeue Reports Gets all the Job Requeue Reports for the Job that corresponds to the provided Job ID.
URL: http://hostname:portnumber/api/jobreports?Data=requeue&JobID=validJobID
Request Type: GET
Message Body: N/A
Response: JSON object containing all the job requeue reports for the requested job, or a message stating that
there are no requeue reports for the job.
Possible Errors:
400 Bad Request: No Job ID was provided.
500 Internal Server Error:
An exception occurred within the Deadline code, or
The Job ID provided does not correspond to any Job in the repository.
Get Job History Entries Gets all the Job History Entries for the Job that corresponds to the provided Job ID.
URL: http://hostname:portnumber/api/jobreports?Data=history&JobID=validJobID
Request Type: GET
Message Body: N/A
Response: JSON object containing all the job history entries for the requested job, or a message stating that
there are no history entries for the job.
Possible Errors:
400 Bad Request: No Job ID was provided.
500 Internal Server Error:
An exception occurred within the Deadline code, or
The Job ID provided does not correspond to any Job in the repository.
8.4 Tasks
8.4.1 Overview
Task requests can be used to set and retrieve Task information using GET and PUT request types. POST and DELETE
are not supported and sending a message of either of these types will result in a 501 Not Implemented error message.
For more about these request types and their uses see the Request Formats and Responses documentation.
Get Task
Gets the Task that corresponds to the Task ID provided for the Job that corresponds to the Job ID provided.
URL: http://hostname:portnumber/api/tasks?TaskID=oneValidTaskID&JobID=aValidJobID
Request Type: GET
Message Body: N/A
Response: JSON object containing the Task information for the requested Task.
Possible Errors:
400 Bad Request:
No Task ID provided, or
Task ID must be an integer value.
500 Internal Server Error: An exception occurred within the Deadline code.
Get All Tasks
Gets the Tasks for the Job that corresponds to the Job ID provided.
URL: http://hostname:portnumber/api/tasks?JobID=aValidJobID
Request Type: GET
Message Body: N/A
Response: JSON object containing the Task information for all the Job Tasks.
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Requeue Tasks
Requeues the Tasks that correspond to the Task IDs provided for the Job that corresponds to the Job ID
provided. If no Task IDs are provided, all Job tasks will be requeued.
URL: http://hostname:portnumber/api/tasks
Request Type: PUT
Message Body:
JSON object where the following keys are mandatory:
Command = requeue
JobID = the id of the Job
The following keys are optional:
TaskList = integer Task ID/s (May be an Array)
Response: Success
Possible Errors:
400 Bad Request: TaskList contains entries, but none of them are valid integers.
404 Not Found: Requested Task ID does not correspond to a Task for the Job.
500 Internal Server Error: An exception occurred within the Deadline code.
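For example, the PUT message body to requeue two specific tasks could be serialized as below; the job ID is a placeholder, and omitting the optional TaskList key would requeue all of the job's tasks:

```python
import json

# Body to requeue tasks 0 and 1 of a job; "someJobId" is a placeholder.
# Omitting the optional "TaskList" key would requeue all of the job's tasks.
body = json.dumps( { "Command": "requeue", "JobID": "someJobId", "TaskList": [ 0, 1 ] } )
```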
Complete Tasks
Completes the Tasks that correspond to the Task IDs provided for the Job that corresponds to the Job ID
provided. If no Task IDs are provided, all Job tasks will be completed.
URL: http://hostname:portnumber/api/tasks
404 Not Found: Requested Task ID does not correspond to a Task for the Job.
500 Internal Server Error:
An exception occurred within the Deadline code, or
Trying to pend a task for a Suspended Job.
Release Pending Tasks
Releases the pending Tasks that correspond to the Task IDs provided for the Job that corresponds to the
Job ID provided. If no Task IDs are provided, all Job pending tasks will be released.
URL: http://hostname:portnumber/api/tasks
Request Type: PUT
Message Body:
JSON object where the following keys are mandatory:
Command = releasepending
JobID = the id of the Job
The following keys are optional:
TaskList = integer Task ID/s (May be an Array)
Response: Success
Possible Errors:
400 Bad Request: TaskList contains entries, but none of them are valid integers.
404 Not Found: Requested Task ID does not correspond to a Task for the Job.
500 Internal Server Error:
An exception occurred within the Deadline code, or
Trying to release a task from pending for a Suspended Job.
8.6 Slaves
8.6.1 Overview
Slave requests can be used to set or retrieve Slave information. Slave requests support GET, PUT and DELETE request
types. POST is not supported and sending such a message will result in a 501 Not Implemented error message. For
more about these request types and their uses see the Request Formats and Responses documentation.
Possible Errors:
400 Bad Request: Need to provide at least one Slave name to delete.
500 Internal Server Error: An exception occurred within the Deadline code.
Get Slaves Reports
Gets all Slave Reports for all Slave names provided.
URL: http://hostname:portnumber/api/slaves?Name=oneOrMoreSlaveNames&Data=reports
Request Type: GET
Message Body: N/A
Response: JSON object containing all Slave Reports for all Slave names provided.
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Get Slave Reports For All Slaves
Gets all Slave Reports for all Slaves.
URL: http://hostname:portnumber/api/slaves?Data=reports
Request Type: GET
Message Body: N/A
Response: JSON object containing all Slave Reports for all Slaves.
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Get Slaves History
Gets all Slave History Entries for all Slave names provided.
URL: http://hostname:portnumber/api/slaves?Name=oneOrMoreSlaveNames&Data=history
Request Type: GET
Message Body: N/A
Response: JSON object containing all Slave History Entries for all Slave names provided.
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Get Slave History For All Slaves
Gets all Slave History Entries for all Slaves.
URL: http://hostname:portnumber/api/slaves?Data=history
Request Type: GET
Message Body: N/A
Response: JSON object containing all Slave History for all Slaves.
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
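The report and history queries above differ only in their query parameters, so a small helper can build them. This is a sketch: the hostname and port are placeholders, and the comma-separated encoding of multiple Slave names is an assumption:

```python
from urllib.parse import urlencode

BASE = "http://hostname:8082/api"  # placeholder Web Service host and port

def slaves_url(names=None, data=None):
    """GET URL for Slave queries.

    names: iterable of Slave names (None means all Slaves).
    data:  'reports' or 'history' (None returns Slave info).
    """
    params = {}
    if names:
        params["Name"] = ",".join(names)  # multi-name encoding is an assumption
    if data:
        params["Data"] = data
    query = urlencode(params)
    return f"{BASE}/slaves" + (f"?{query}" if query else "")

print(slaves_url(["render-01", "render-02"], "reports"))
print(slaves_url(data="history"))  # history entries for all Slaves
```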
Get Slave Names Rendering Job
Gets the names of all Slaves rendering the Job that corresponds to the provided Job ID.
URL: http://hostname:portnumber/api/slavesrenderingjob?JobID=validJobID
Request Type: GET
Message Body: N/A
Response: JSON object containing all the Slave names rendering the Job.
Possible Errors:
400 Bad Request: No Job ID was provided.
500 Internal Server Error:
An exception occurred within the Deadline code, or
Job ID provided does not correspond to a Job in the repository.
Get Host Names of Machines Rendering Job
Gets all machine host names for Slaves rendering the Job that corresponds to the provided Job ID.
URL: http://hostname:portnumber/api/machinessrenderingjob?JobID=validJobID
Request Type: GET
Message Body: N/A
Response: JSON object containing all the host names.
Possible Errors:
400 Bad Request: No Job ID was provided.
500 Internal Server Error:
An exception occurred within the Deadline code, or
Job ID provided does not correspond to a Job in the repository.
Get IP Address of Machines Rendering Job
Gets all machine IP addresses for Slaves rendering the Job that corresponds to the provided Job ID.
URL: http://hostname:portnumber/api/machinessrenderingjob?JobID=validJobID&GetIpAddress=true
Request Type: GET
Message Body: N/A
Response: JSON object containing all the IP addresses.
Possible Errors:
400 Bad Request: No Job ID was provided.
500 Internal Server Error:
An exception occurred within the Deadline code, or
Job ID provided does not correspond to a Job in the repository.
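A minimal sketch of both machine queries, using the endpoint exactly as spelled above; the hostname, port, and Job ID are placeholders:

```python
from urllib.parse import urlencode

BASE = "http://hostname:8082/api"  # placeholder Web Service host and port

def machines_rendering_job_url(job_id, ip=False):
    """Host names (default) or IP addresses of machines rendering a Job."""
    params = {"JobID": job_id}
    if ip:
        params["GetIpAddress"] = "true"
    # Endpoint name reproduced as printed in this manual.
    return f"{BASE}/machinessrenderingjob?{urlencode(params)}"

print(machines_rendering_job_url("aValidJobID"))
print(machines_rendering_job_url("aValidJobID", ip=True))
```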
3 = Offline
4 = Stalled
8 = StartingJob
Type (ReportType)
0 = LogReport
1 = ErrorReport
2 = RequeueReport
8.7 Pulse
8.7.1 Overview
Pulse requests can be used to set and retrieve Pulse information using GET and PUT. POST and DELETE are not
supported and sending a message of either of these types will result in a 501 Not Implemented error message. For
more about these request types and their uses see the Request Formats and Responses documentation.
8.8 Balancer
8.8.1 Overview
Balancer requests can be used to set and retrieve Balancer information using GET and PUT. POST and DELETE are
not supported and sending a message of either of these types will result in a 501 Not Implemented error message. For
more about these request types and their uses see the Request Formats and Responses documentation.
Possible Errors:
400 Bad Request: Did not provide a Balancer Information JSON object
500 Internal Server Error: An exception occurred within the Deadline code.
Get Balancer InfoSettings
Gets the Balancer information and settings for the Balancer names provided.
URL: http://hostname:portnumber/api/balancer?Names=oneOrMoreBalancerNames OR
http://hostname:portnumber/api/balancer?Name=oneBalancerName
Request Type: GET
Message Body: N/A
Response: JSON object containing all the Balancer information and settings for the requested Balancer
names.
Possible Errors:
404 Not Found: Balancer name provided does not exist (can only occur if you use Name= )
500 Internal Server Error: An exception occurred within the Deadline code.
8.9 Limits
8.9.1 Overview
Limit Group requests can be used to set and retrieve information about one or many Limit Groups. Limit Group
requests support GET, PUT, POST and DELETE request types. For more about these request types and their uses see
the Request Formats and Responses documentation.
Get Limit Group Names Gets the names of all Limit Groups in the repository.
URL: http://hostname:portnumber/api/limitgroups?NamesOnly=true
Request Type: GET
Message Body: N/A
Response: JSON object containing all the Limit Group names.
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Get Limit Groups Gets the Limit Groups for the provided Limit Group names.
URL: http://hostname:portnumber/api/limitgroups?Names=listOfOneOrMoreLimitGroupNames OR
http://hostname:portnumber/api/limitgroups?Name=aSingleLimitGroupName
Request Type: GET
Message Body: N/A
Response: JSON object containing the requested Limit Group/s
Possible Errors:
404 Not Found: There is no Limit Group with provided Name (this can only occur if a single name is
passed)
500 Internal Server Error: An exception occurred within the Deadline code.
Get All Limit Groups Gets all the Limit Groups in the repository.
URL: http://hostname:portnumber/api/limitgroups
Request Type: GET
Message Body: N/A
Response: JSON object containing all the Limit Groups.
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Set Limit Group Sets the Limit, Slave List, White List Flag, Release Percentage and/or Excluded Slaves for an
existing Limit Group, or creates a new Limit Group with the provided properties.
URL: http://hostname:portnumber/api/limitgroups
Request Type: PUT/POST
Message Body:
JSON object where the following keys are mandatory:
Command = set
Name = name of Limit Group
The following keys are optional:
Limit= integer limit
Slaves = list of slave names to include
SlavesEx = list of slave names to exclude
RelPer = floating point number for release percentage
White = boolean white list flag
Response: Success
Possible Errors:
400 Bad Request: No name provided for the Limit Group
500 Internal Server Error: An exception occurred within the Deadline code.
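A sketch of assembling the Set Limit Group message body described above. The limit group name and values are illustrative only, and only the mandatory Command and Name keys are always emitted:

```python
import json

def set_limit_group_body(name, limit=None, slaves=None, slaves_ex=None,
                         release_percentage=None, white_list=None):
    """Build the JSON message body for the 'Set Limit Group' PUT/POST request."""
    body = {"Command": "set", "Name": name}  # mandatory keys
    if limit is not None:
        body["Limit"] = int(limit)                     # integer limit
    if slaves is not None:
        body["Slaves"] = list(slaves)                  # slaves to include
    if slaves_ex is not None:
        body["SlavesEx"] = list(slaves_ex)             # slaves to exclude
    if release_percentage is not None:
        body["RelPer"] = float(release_percentage)     # release percentage
    if white_list is not None:
        body["White"] = bool(white_list)               # white list flag
    return json.dumps(body)

print(set_limit_group_body("nuke_license", limit=10, white_list=False))
```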
Save Limit Group Updates a Limit Group using a JSON object containing all the Limit Group information.
URL: http://hostname:portnumber/api/limitgroups
Request Type: PUT
Message Body:
JSON object where the following keys are mandatory:
Command = save
LimitGroup = JSON object containing all relevant Limit Group information
Response: Success
Possible Errors:
400 Bad Request: No valid Limit Group object provided.
500 Internal Server Error: An exception occurred within the Deadline code.
Reset Limit Group Resets the counts for a Limit Group.
URL: http://hostname:portnumber/api/limitgroups
Request Type: PUT
Message Body:
JSON object where the following keys are mandatory:
Command = reset
Name = name of Limit Group
Response: Success
Possible Errors:
400 Bad Request: No name provided for the Limit Group
404 Not Found: Provided Limit Group name does not correspond to a Limit Group in the repository.
500 Internal Server Error: An exception occurred within the Deadline code.
Delete Limit Groups Deletes the Limit Groups for the provided Limit Group names.
URL: http://hostname:portnumber/api/limitgroups
Request Type: DELETE
Message Body: N/A
Response: Success
Possible Errors:
400 Bad Request: Must provide at least one Limit Group name to delete.
500 Internal Server Error: An exception occurred within the Deadline code.
8.10 Users
8.10.1 Overview
User requests can be used to set and retrieve information for one or many Users. User requests support GET, PUT,
POST and DELETE request types. For more about these request types and their uses see the Request Formats and
Responses documentation.
Create User Groups
Creates and saves new user groups with the given names.
URL: http://hostname:portnumber/api/usergroups
Request Type: POST
Message Body:
JSON object where the following keys are mandatory:
Group = the user group name/s to create (array)
Response: Success
Possible Errors:
400 Bad Request: Missing one or more of the required keys in the JSON object in the message
body.
500 Internal Server Error: An exception occurred within the Deadline code
Delete User Groups
Deletes the user group with the given name.
URL: http://hostname:portnumber/api/usergroups?Name=user+group+name+to+delete
Request Type: DELETE
Message Body: N/A
Response: Success
Possible Errors:
400 Bad Request: Must provide a user group name to delete.
500 Internal Server Error: An exception occurred within the Deadline code
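A hedged sketch of the user group creation request above; the host and port are placeholders, the group names are illustrative, and the request is built but not sent:

```python
import json
import urllib.request

BASE = "http://hostname:8082/api"  # placeholder Web Service host and port

def create_user_groups_request(group_names):
    """POST request creating one or more user groups."""
    body = {"Group": list(group_names)}  # mandatory key; value is an array
    return urllib.request.Request(
        f"{BASE}/usergroups",
        data=json.dumps(body).encode("utf-8"),
        method="POST",
        headers={"Content-Type": "application/json"},
    )

req = create_user_groups_request(["lighting", "compositing"])
print(req.method, req.full_url)
print(req.data.decode())
```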
8.11 Repository
8.11.1 Overview
Repository requests can be used to retrieve Repository information, such as directories or paths, using the GET request
type. Repository requests can also be used for adding history entries for jobs, slaves or the repository using the POST
request type. PUT and DELETE are not supported and sending a message of either of these types will result in a
501 Not Implemented error message. For more about these request types and their uses see the Request Formats and
Responses documentation.
Possible Errors:
400 Bad Request: Must provide a Directory or an Auxiliary Path to find.
404 Not Found: Requested Directory could not be found.
Get Bin Directory
URL: http://hostname:portnumber/api/repository?Directory=bin
Request Type: GET
Message Body: N/A
Response: JSON Object containing the bin directory, or a message stating that the directory is not set.
Possible Errors:
400 Bad Request: Must provide a Directory or an Auxiliary Path to find.
404 Not Found: Requested Directory could not be found.
Get Settings Directory
URL: http://hostname:portnumber/api/repository?Directory=settings
Request Type: GET
Message Body: N/A
Response: JSON Object containing the settings directory, or a message stating that the directory is not
set.
Possible Errors:
400 Bad Request: Must provide a Directory or an Auxiliary Path to find.
404 Not Found: Requested Directory could not be found.
Get Events Directory
URL: http://hostname:portnumber/api/repository?Directory=events
Request Type: GET
Message Body: N/A
Response: JSON Object containing the events directory, or a message stating that the directory is not set.
Possible Errors:
400 Bad Request: Must provide a Directory or an Auxiliary Path to find.
404 Not Found: Requested Directory could not be found.
Get Custom Events Directory
URL: http://hostname:portnumber/api/repository?Directory=customevents
Request Type: GET
Message Body: N/A
Response: JSON Object containing the custom events directory, or a message stating that the directory is
not set.
Possible Errors:
400 Bad Request: Must provide a Directory or an Auxiliary Path to find.
404 Not Found: Requested Directory could not be found.
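The four directory queries above share one shape, so a small helper can build them. This sketch only covers the Directory keys listed in this section, and the host and port are placeholders:

```python
BASE = "http://hostname:8082/api"  # placeholder Web Service host and port

# Directory keys shown in this section; the full API may accept more.
KNOWN_DIRECTORIES = ("bin", "settings", "events", "customevents")

def repository_directory_url(directory):
    """GET URL for a repository directory lookup."""
    if directory not in KNOWN_DIRECTORIES:
        raise ValueError(f"unknown directory key: {directory}")
    return f"{BASE}/repository?Directory={directory}"

for d in KNOWN_DIRECTORIES:
    print(repository_directory_url(d))
```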
Get Job Auxiliary Path
URL: http://hostname:portnumber/api/repository?AuxiliaryPath=job&JobID=aValidJobID
Request Type: GET
Message Body: N/A
Response: JSON Object containing the auxiliary path for the provided job id, or a message stating that
the path is not set.
Possible Errors:
400 Bad Request:
Must provide a Directory or an Auxiliary Path to find, or
Must provide a Job ID.
404 Not Found:
Requested Directory could not be found, or
Job ID provided does not correspond to a Job in the repository.
Get Alternate Auxiliary Path
URL: http://hostname:portnumber/api/repository?AuxiliaryPath=alternate
Request Type: GET
Message Body: N/A
Response: JSON Object containing the alternate auxiliary path, or a message stating that the path is not
set.
Possible Errors:
400 Bad Request: Must provide a Directory or an Auxiliary Path to find.
404 Not Found: Requested Directory could not be found.
Get Windows Alternate Auxiliary Path
URL: http://hostname:portnumber/api/repository?AuxiliaryPath=windowsalternate
Request Type: GET
Message Body: N/A
Response: JSON Object containing the windows alternate auxiliary path, or a message stating that the
path is not set.
Possible Errors:
400 Bad Request: Must provide a Directory or an Auxiliary Path to find.
404 Not Found: Requested Directory could not be found.
Get Linux Alternate Auxiliary Path
URL: http://hostname:portnumber/api/repository?AuxiliaryPath=linuxalternate
Request Type: GET
Message Body: N/A
Response: JSON Object containing the linux alternate auxiliary path, or a message stating that the path is
not set.
Possible Errors:
400 Bad Request: Must provide a Directory or an Auxiliary Path to find.
404 Not Found: Requested Directory could not be found.
8.12 Pools
8.12.1 Overview
Pool requests can be used to set and retrieve information for one or many Pools. Pool requests support GET, PUT,
POST and DELETE request types. For more about these request types and their uses see the Request Formats and
Responses documentation.
Add Pools Creates new Pools using the provided Pool names.
URL: http://hostname:portnumber/api/pools
Request Type: POST
Message Body:
JSON object that must contain the following keys:
Pool = pool name/s (May be an Array)
Response: Success
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Set Pools Removes all pools not provided and creates any provided pools that did not exist.
URL: http://hostname:portnumber/api/pools
Request Type: POST
Message Body:
JSON object that must contain the following keys:
Pool = pool name/s (May be an Array)
OverWrite = true
Response: Success
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Add Pools to Slaves Adds the provided Pools to the assigned pools for each provided Slave. For both Pools and
Slaves, only the names are required.
URL: http://hostname:portnumber/api/pools
Request Type: PUT
Message Body:
JSON object that must contain the following keys:
Slave = slave name/s (May be an Array)
Pool = pool name/s (May be an Array)
Response: Success
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Set Pools for Slaves Sets provided Pools as the assigned pools for each provided Slave. For both Pools and Slaves,
only the names are required.
URL: http://hostname:portnumber/api/pools
Request Type: PUT
Message Body:
JSON object that must contain the following keys:
Slave = slave name/s (May be an Array)
ReplacementPool = pool name to replace the pools being purged
OverWrite = true
Response: Success
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Purge Pools Purges all obsolete pools using the provided replacement pool.
URL: http://hostname:portnumber/api/pools
Request Type: PUT
Message Body:
JSON object that must contain the following keys:
OverWrite = true
ReplacementPool = pool name to replace the pools being purged
Response: Success
Possible Errors:
500 Internal Server Error: An exception occurred within the Deadline code, or
Replacement Pool name provided does not exist.
Set and Purge Pools Sets the list of pools to the provided list of pool names, creating them if necessary. Purges all
the obsolete pools using the provided replacement pool.
URL: http://hostname:portnumber/api/pools
Request Type: PUT
Message Body:
JSON object that must contain the following keys:
OverWrite = true
ReplacementPool = pool name to replace the pools being purged
Pool = the pool/s provided for setting, the replacement pool must be in this pool list or must be
none (May be an Array)
Response: Success
Possible Errors:
500 Internal Server Error: An exception occurred within the Deadline code, or
Replacement Pool name provided does not exist.
Add and Purge Pools Adds the list of provided pools, creating them if necessary. Purges all the obsolete pools using
the provided replacement pool.
URL: http://hostname:portnumber/api/pools
Request Type: PUT
Message Body:
JSON object that must contain the following keys:
OverWrite = true
ReplacementPool = pool name to replace the pools being purged
Pool = the pool/s provided for adding (May be an Array)
Response: Success
Possible Errors:
500 Internal Server Error: An exception occurred within the Deadline code, or
Replacement Pool name provided does not exist.
Delete Pools Deletes all Pools with the provided Pool names.
URL: http://hostname:portnumber/api/pools?Pool=oneOrMorePoolNames
Request Type: DELETE
Message Body: N/A
Response: Success
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Delete Pools From Slaves Deletes the provided Pools from each provided Slave's list of pools.
URL: http://hostname:portnumber/api/pools?Pool=oneOrMorePoolNames&Slaves=oneOrMoreSlaveNames
Request Type: DELETE
Message Body: N/A
Response: Success
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
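The add and assign operations above can be sketched as two request builders. The host and port are placeholders, the pool and slave names are illustrative, and the requests are built but not sent:

```python
import json
import urllib.request

BASE = "http://hostname:8082/api"  # placeholder Web Service host and port

def _json_request(url, body, method):
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        method=method,
        headers={"Content-Type": "application/json"},
    )

def add_pools_request(pools):
    """POST: create new Pools with the given names."""
    return _json_request(f"{BASE}/pools", {"Pool": list(pools)}, "POST")

def add_pools_to_slaves_request(slaves, pools):
    """PUT: append the given Pools to each Slave's assigned pool list."""
    body = {"Slave": list(slaves), "Pool": list(pools)}
    return _json_request(f"{BASE}/pools", body, "PUT")

print(add_pools_request(["comp", "fx"]).method)
print(add_pools_to_slaves_request(["render-01"], ["comp"]).data.decode())
```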
8.13 Groups
8.13.1 Overview
Group requests can be used to set and retrieve information for one or many Groups. Group requests support GET,
PUT, POST and DELETE request types. For more about these request types and their uses see the Request Formats
and Responses documentation.
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Add Groups Creates new Groups using the provided Group names.
URL: http://hostname:portnumber/api/groups
Request Type: POST
Message Body:
JSON object that must contain the following keys:
Group = group name/s (May be an Array)
Response: Success
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Set Groups Removes all groups not provided and creates any provided groups that did not exist.
URL: http://hostname:portnumber/api/groups
Request Type: POST
Message Body:
JSON object that must contain the following keys:
Group = group name/s (May be an Array)
OverWrite = true
Response: Success
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Add Groups to Slaves Adds the provided Groups to the assigned groups for each provided Slave. For both Groups
and Slaves, only the names are required.
URL: http://hostname:portnumber/api/groups
Request Type: PUT
Message Body:
JSON object that must contain the following keys:
Slave = slave name/s (May be an Array)
Group = group name/s (May be an Array)
Response: Success
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Set Groups for Slaves Sets provided Groups as the assigned groups for each provided Slave. For both Groups and
Slaves, only the names are required.
URL: http://hostname:portnumber/api/groups
Request Type: PUT
Message Body:
JSON object that must contain the following keys:
Slave = slave name/s (May be an Array)
ReplacementGroup = group name to replace the groups being purged
OverWrite = true
Response: Success
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Purge Groups Purges all obsolete groups using the provided replacement group.
URL: http://hostname:portnumber/api/groups
Request Type: PUT
Message Body:
JSON object that must contain the following keys:
OverWrite = true
ReplacementGroup = group name to replace the groups being purged
Response: Success
Possible Errors:
500 Internal Server Error: An exception occurred within the Deadline code, or
Replacement Group name provided does not exist.
Set and Purge Groups Sets the list of groups to the provided list of group names, creating them if necessary. Purges
all the obsolete groups using the provided replacement group.
URL: http://hostname:portnumber/api/groups
Request Type: PUT
Message Body:
JSON object that must contain the following keys:
OverWrite = true
ReplacementGroup = group name to replace the groups being purged
Group = the group/s provided for setting, the replacement group must be in this group list or must
be none (May be an Array)
Response: Success
Possible Errors:
500 Internal Server Error: An exception occurred within the Deadline code, or
Replacement Group name provided does not exist.
Add and Purge Groups Adds the list of provided groups, creating them if necessary. Purges all the obsolete groups
using the provided replacement group.
URL: http://hostname:portnumber/api/groups
Request Type: PUT
Message Body:
JSON object that must contain the following keys:
OverWrite = true
ReplacementGroup = group name to replace the groups being purged
Group = the group/s provided for adding (May be an Array)
Response: Success
Possible Errors:
500 Internal Server Error: An exception occurred within the Deadline code, or
Replacement Group name provided does not exist.
Delete Groups Deletes all Groups with the provided Group names.
URL: http://hostname:portnumber/api/groups?Group=oneOrMoreGroupNames
Request Type: DELETE
Message Body: N/A
Response: Success
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
Delete Groups From Slaves Deletes the provided Groups from each provided Slave's list of groups.
URL: http://hostname:portnumber/api/groups?Group=oneOrMoreGroupNames&Slaves=oneOrMoreSlaveNames
Request Type: DELETE
Message Body: N/A
Response: Success
Possible Errors: 500 Internal Server Error: An exception occurred within the Deadline code.
CHAPTER NINE
APPLICATION PLUGINS
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The 3ds Command specific options are:
Force Build: You can force 32 bit or 64 bit rendering.
Path Config: Allows you to specify an alternate path file in the MXP format that the slaves can use to find
bitmaps that are not found on the primary map paths.
Show Virtual Frame Buffer: Enable the virtual frame buffer during rendering.
Apply VideoPost To Scene: Whether or not to use VideoPost during rendering.
Continue On Errors: Enable to have the 3ds command line renderer ignore errors during rendering.
Enable Local Rendering: If enabled, the frames will be rendered locally, and then copied to their final network
location.
Gamma Correction: Enable to apply gamma correction during rendering.
Split Rendering: Enable split rendering. Specify the number of strips to split the frame into, as well as the
overlap you want to use.
VRay/Mental Ray DBR: Enable this option to offload a VRay or Mental Ray DBR render to Deadline. See the
VRay/Mental Ray DBR section for more information.
Run Sanity Check On Submission: Check for scene problems during submission.
VRay/Mental Ray off-load DBR
You can offload a VRay or Mental Ray DBR job to Deadline by enabling the Distributed Rendering option in your
VRay or Mental Ray settings, and by enabling the VRay/Mental Ray DBR checkbox in the submission dialog. With
this option enabled, a job will be submitted with its task count equal to the number of Slaves you specify, and it will
render the current frame in the scene file.
The slave that picks up task 0 will be the master, and will wait until all other tasks are picked up by other slaves.
Once the other tasks have been picked up, the master will update its local VRay or Mental Ray config file with the
names of the machines that are rendering the other tasks. It will then start the distributed render by connecting to the
other machines. Note that the render will not start until ALL tasks have been picked up by a slave.
It is recommended to setup VRay DBR or Mental Ray DBR for 3ds Max and verify it is working correctly prior
to submitting a DBR off-load job to Deadline. RTT (Render To Texture) is not supported with distributed bucket
rendering. If running multiple Deadline slaves on one machine, it is not supported for two or more of those slaves to
concurrently pick up different DBR jobs, whether as master or as slave.
Notes for VRay DBR:
Ensure VRay is the currently assigned renderer in the 3ds Max scene file prior to submission.
You must have the Distributed Rendering option enabled in your VRay settings under the Settings tab.
Ensure Save servers in the scene (Save hosts in the scene in VRay v2) option in VRay distributed rendering
settings is DISABLED as otherwise it will ignore the vray_dr.cfg file list!
Ensure Max servers value is set to 0. When set to 0 all listed servers will be used.
It is recommended to disable Use local host checkbox to reduce network traffic on the master machine,
when using a large number of slaves (5+). If disabled, the master machine only organises the DBR process,
sending rendering tasks to the Deadline slaves. This is particularly important if you intend to use the VRay v3+
Transfer missing assets feature. Note that Windows 7 OS has a limitation of a maximum of 20 other machines
concurrently connecting to the master machine.
VRay v3.00.0x has a bug in DBR when the Use local host is unchecked, it still demands a render node license.
This is resolved in a newer version of VRay. Please contact Chaos Group for more information.
The slaves will launch the VRay Spawner executable found in the 3ds Max root directory. Do NOT install the
VRay Spawner as a service on the master or slave machines. Additionally, Drive Mappings are unsupported
when running as a service.
The vray_dr.cfg file in the 3ds Max plugcfg directory must be writeable so that the master machine can
update it. This file is typically located in the user profile directory, in which case it will be writeable already.
Chaos Group recommends that each machine to be used for DBR should have rendered at least one other 3ds
Max job before trying DBR on that machine.
Ensure all slaves can correctly access any mapped drives or resolve all UNC paths to obtain any assets required
by the 3ds Max scene file to render successfully. Use the Deadline Mapped Drives feature to ensure the necessary
drive mappings are in place.
Default lights are not supported by Chaos Group in DBR mode and will not render.
Ensure you have sufficient VRay DR licenses if processing multiple VRay DBR jobs through Deadline concurrently. Use the Deadline Limits feature to limit the number of licenses being used at any time.
Ensure the necessary VRay executables & TCP/UDP ports have been allowed to pass through the Windows
Firewall. Please consult the VRay user manual for specific information.
In 3ds Max, VRay does NOT currently support dynamically adding or removing DBR slaves from a DBR render
once it has started on the master slave.
Notes for Mental Ray DBR:
Ensure Mental Ray is the currently assigned renderer in the 3ds Max scene file prior to submission.
You must have the Distributed Render option enabled in your Mental Ray settings under the Processing tab.
The Mental Ray Satellite service must be running on your slave machines. It is installed by default during the
3ds Max installation.
The max.rayhosts file must be writeable so that the master machine can update it. Its location is different for
different versions of 3ds Max:
2010 and earlier: It will be in the mentalray folder in the 3ds Max root directory.
2011 and 2012: It will be in the mentalimages folder in the 3ds Max root directory.
2013 and later: It will be in the NVIDIA folder in the 3ds Max root directory.
Ensure the Use Placeholder Objects checkbox is enabled in the Translator Options rollout of the Processing tab. When placeholder objects are enabled, geometry is sent to the renderer only on demand.
Ensure Bucket Order is set to Hilbert in the Options section of the Sampling Quality rollout of the
Renderer tab. With Hilbert order, the sequence of buckets to render uses the fewest number of data transfers.
Contour shading is not supported with distributed bucket rendering.
Autodesk Mental Ray licensing in 3ds Max is restricted. Autodesk says Satellite processors allow any owner
of a 3ds Max license to freely use up to four slave machines (with up to four processors each and an unlimited
number of cores) to render an image using distributed bucket rendering, not counting the one, two, or four
processors on the master system that runs 3ds Max. Mental Ray Standalone licensing can be used to go beyond
this license limit. Use the Deadline Limits feature to limit the number of licenses being used at any time if
required.
Ensure the necessary Mental Ray executables & TCP/UDP ports have been allowed to pass through the Windows Firewall. Please consult the Autodesk 3ds Max user manual for specific information.
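The version-dependent max.rayhosts location listed above can be captured in a small helper. This is a convenience sketch based solely on the version list in this section:

```python
def rayhosts_folder(max_version):
    """Folder under the 3ds Max root that holds max.rayhosts, by release year."""
    if max_version <= 2010:
        return "mentalray"      # 2010 and earlier
    if max_version <= 2012:
        return "mentalimages"   # 2011 and 2012
    return "NVIDIA"             # 2013 and later

for year in (2010, 2011, 2013):
    print(year, rayhosts_folder(year))
```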
Sanity Check
The 3ds Command Sanity Check script defines a set of functions to be called to ensure that the scene submission does
not contain typical errors like wrong render view and frame range settings, incorrect output path, etc.
The Sanity Check is enabled via the Run Sanity Check Automatically Before Submission checkbox in the User Options
group of controls in the Submit To Deadline (3dsmaxCmd) dialog. You can also run the Sanity Check manually
by clicking the Run Now! button.
Clicking the dialog anywhere outside of the two message areas will rerun the Sanity Check and update all
messages.
Double-clicking any Message in the Feedback Messages window will rerun the Sanity Check and update all
messages.
Repairing an error by double-clicking will also automatically rerun the Sanity Check.
Pressing the Run Now! button in the Submit To Deadline dialog will update the Sanity Check.
The following Sanity Checks are FATAL. These are errors that must be fixed manually before the job can be submitted.
Message: The scene does not contain ANY objects!
Description: The scene is empty and should not be sent to Deadline.
Fix: Load a valid scene or create/merge objects, then try again.

Message: Maxwell is the renderer and the current view is NOT a Camera.
Description: The Maxwell renderer must render through an actual camera and will fail through a viewport.
Fix: Double-click the error message to open a Select By Name dialog to pick a camera for the current viewport. Ensure you remove any duplicate named objects from your scene.
The following Sanity Checks can be automatically fixed before the job is submitted.
Message: The current Scene Name is Untitled.
Description: The scene has never been saved to a MAX file. While it is possible to submit an untitled scene to Deadline, it is not a good practice.
Fix: Double-click the error message to open a Save As dialog and save to disk.

Message: The active viewport is not a camera viewport.

Description: No frames will be saved to disk. This is allowed if you want to output render elements only.
Fix: Double-click the error message to open the Render Dialog and select a valid path, then double-click again to retest.
This list will be extended to include future checks and can be edited by 3rd parties by adding new definitions and
functions to the original script. Documentation on extending the script will be published later. Please email suggestions
for enhancements and additional test cases to Deadline Support.
Render Executables
3ds Max Cmd Executable: The path to the 3dsmaxcmd.exe executable file used for rendering. Enter alternative
paths on separate lines. Different executable paths can be configured for each version installed on your render
nodes.
Render Options
3ds Cmd Verbosity Level: The verbosity level (0-5).
VRay DBR and Mental Ray Satellite Rendering
Use IP Addresses: If offloading a VRay DBR or Mental Ray Satellite render to Deadline, Deadline will update
the appropriate config file with the host names of the machines that are running the VRay Spawner or Satellite
service. If this is enabled, the IP addresses of the machines will be used instead.
You can either run the Submitter installer or manually install the submission script.
Submitter Installer
Run the Submitter Installer located at <Repository>/submission/3dsCmd/Installers.
Manual Installation of the Submission Script
Copy [Repository]/submission/3dsCmd/Client/Deadline3dsCmdClient.mcr to [3ds Install Directory]/MacroScripts. If you don't have a MacroScripts folder in your 3ds Max install directory, check to
see if you have a UI/Macroscripts folder instead, and copy the Deadline3dsCmdClient.mcr file there if you do.
Copy [Repository]/submission/3dsmax/Client/SMTDSetup.ms to [3ds Max Install Directory]/scripts/Startup/SMTDSetup.ms.
9.1.4 FAQ
Which versions of Max are supported?
The 3dsCommand plugin has been tested with 3ds Max 2010 and later (including Design editions).
Note: Due to a MAXScript bug in the initial release of 3ds Max 2012, the integrated submission scripts
will not work. However, this bug has been addressed in 3ds Max 2012 Hotfix 1. If you cannot apply this
patch, you must submit your 3ds Max 2012 jobs from the Monitor.
When should I use the 3dsCommand plugin to render Max jobs instead of the original?
This plugin should only be used when a particular feature doesn't work with our normal 3dsmax plugin.
For example, there was a time when using the 3dsCommand plugin was the only way to render scenes
that made use of V-Ray's Frame Buffer features.
Note that the 3dsCommand plugin has fewer features in the submission dialog, and the error handling isn't as
robust. In addition, using 3dsCommand causes Max to take extra time to start up because 3dsmaxcmd.exe
needs to be launched for each task, so renders might take a little extra time to complete.
Is PSoft's Pencil+ render effects plugin supported?
Yes. Ensure the render output and render element output directory paths all exist on the file server before
rendering commences. Please note that at least Pencil+ v3.1 is required if you are using the alternative
3dsmax (Lightning) plugin in Deadline. Note that you will require the correct network render license from PSoft
for each Deadline Slave, which is not the same as the full workstation license of Pencil+.
If you are submitting from RPManager, just select the Network tab in RPManager after setting up the integrated
submitter.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The 3ds Max specific options are as follows.
Scene File Submission Options
SAVE and Submit Current Scene File with the Job to the REPOSITORY: The current scene will be saved
to a temporary file which will be sent with the job and will be stored in the Jobs folder in the Repository.
SAVE and Submit Current Scene File to GLOBAL NETWORK PATH: The current scene will be saved
to a temporary file which will be copied to a Globally-Defined Alternative Network Location (e.g. dedicated
file server). It is specified in [Repository]\submission\3dsmax\Main\SubmitMaxToDeadline_Defaults.ini under
[GlobalSettings] as the SubmitSceneGlobalBasePath key. It will be referenced by the Job via its path only. This
will reduce the load on the Repository server.
SAVE and Submit Current Scene File to USER-DEFINED NETWORK PATH: The current scene will be
saved to a temporary file which will be copied to a User-Defined Alternative Network Location (e.g. dedicated
file server) stored as a local setting. It will be referenced by the Job via its path only. This will reduce the load
on the Repository server.
DO NOT SAVE And Use Current Scene's ORIGINAL NETWORK PATH: The current scene will NOT be
saved, but the original file it was opened from will be referenced by the job. Assuming the file resides on a
dedicated file server, this will speed up submission and rendering significantly, but current changes to the scene
objects will be ignored.
Sanity Check
Run Sanity Check Automatically Before Submission: This option forces Submit To Deadline to perform a
Sanity Check before submitting the job. The Sanity Check is implemented as a separate set of scripted functions
which can be enhanced by 3rd parties to meet specific studio needs. For more information, please refer to the
Sanity Check section.
Run Sanity Check Now!: This button performs a Sanity Check without submitting a job. Any potential problems will be reported and can be fixed before actually submitting the job.
Job Tab
Job Options
Render Task Chunk Size (Frames Per Task): Defines the number of frames to be grouped into each Task
and processed at once by a Slave.
Limit Number of Machines Rendering Concurrently: When checked, only the number of Slaves specified
by the [Machines] value will be allowed to dequeue the job. When unchecked, any number of Slaves can work
on the job.
Machines: Defines the number of Slaves that will be allowed to dequeue the job at the same time.
Out-Of-Order Rendering Every Nth Frame: Deadline will render every Nth frame based on the order selected
in the drop down box. This option can be very useful when rendering long test animations; you can render a
rough animation containing every Nth frame early enough to detect any major issues before all frames have been
rendered, or in cases where the major action happens at the end of the sequence, reverse the rendering order.
Log: Print Frame Sequence to the Log File, then double-click the feedback window to open the Log, and Copy &
Paste into Monitor > Job's Frame Range.
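The resulting frame order can be sketched as follows (a hypothetical illustration of the every-Nth idea; Deadline's internal ordering logic differs in detail):

```python
def nth_frame_sequence(frames, n, reverse=False):
    # Render every Nth frame first, so a rough animation is available early;
    # the skipped frames follow afterwards. `reverse=True` mimics reversing
    # the rendering order for sequences where the action happens at the end.
    seq = list(reversed(frames)) if reverse else list(frames)
    first_pass = seq[::n]
    rest = [f for f in seq if f not in set(first_pass)]
    return first_pass + rest
```

For frames 1-6 with N=2, the first pass covers frames 1, 3 and 5, followed by the remaining frames.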
Render Preview Job First: When the checkbox is checked, two jobs will be submitted. The first job will have
[PREVIEW FRAMES] added to its name, have a priority of 100, and will render only N frames based on the
spinner's value. The step will be calculated internally. If the spinner is set to 2, the first and the last frame will
be rendered. With a value of 3, the first, middle and last frames will be rendered and so on. The second job will
have [REST OF FRAMES] added to its name, and will be DEPENDENT on the first job and will start rendering
once the preview frames job has finished. It will have the priority specified in the dialog, and render all frames
not included in the preview job.
Priority+: Defines the Priority Increase for the PREVIEW job. For example if the Job Priority is set to 50 and
this value is +5, the PREVIEW job will be submitted with Priority of 55 and the REST job with 50.
Dependent: When checked, the [REST OF FRAMES] Job will be made dependent on the [PREVIEW
FRAMES] Job. When unchecked, the [REST OF FRAMES] Job will use the same dependencies (none or
custom) as the [PREVIEW FRAMES] Job.
Frames: Defines the number of frames to be submitted as a PREVIEW job. The frames will be taken at equal
intervals, for example a value of 2 will send the first and last frames, a value of 3 will send first, middle and last
and so on.
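The equal-interval frame selection described above can be sketched like this (an illustration of the documented rule; the function name is hypothetical and the real step calculation is internal to the submitter):

```python
def preview_frames(start, end, count):
    # Pick `count` frames at equal intervals, always including the first and
    # last frame of the range: count=2 gives first and last, count=3 gives
    # first, middle and last, and so on.
    if count <= 1:
        return [start]
    step = (end - start) / float(count - 1)
    return sorted({int(round(start + i * step)) for i in range(count)})
```

For a range of 0-100, a value of 3 selects frames 0, 50 and 100 for the preview job.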
Task Timeout: When checked, a task will be requeued if it runs longer than the specified time. This is useful
when the typical rendering time of the job is known from previous submissions and will prevent stalling.
Enable Auto Task Timeout: Enables the Auto Task Timeout option.
Restart 3ds Max Between Tasks: When unchecked (default), 3ds Max will be kept in memory for the duration
of the given job's processing. This can reduce render time significantly as multiple Tasks can be rendered in
sequence without reloading 3ds Max. When checked, 3ds Max will be restarted between tasks, thus releasing
all memory and resetting the scene settings at the cost of startup time.
Enforce Sequential Rendering: When checked, the Tasks will be processed in ascending order in order to
reduce the performance hit from History-Dependent calculations, for example from particle systems. When
unchecked, Tasks can be picked up by Slaves in any order. Enabling this option is recommended for particle rendering.
Submit Visible Objects Only: This option should be used at your own risk, as it is heavily dependent on the
content of your scene. In most cases, it can be used to submit only a subset of the current scene to Deadline,
skipping all hidden objects that would not render anyway. This feature will be automatically disabled if the
current scene contains any Scene XRefs. The feature will create an incorrect file if any of the scene objects
depend INDIRECTLY on hidden objects.
Concurrent Tasks: Defines the number of Tasks a single Slave can pick up at once (by launching multiple
instances of 3ds Max on the same machine). Note that only one Deadline license will be used, but if rendering
in Workstation Mode, multiple licenses of 3ds Max might be required. This is useful to maximize performance
when the tasks don't saturate all CPUs at 100% and don't use up all memory. Typically, as a rule of thumb, this
feature is NOT required as 3ds Max uses 100% of CPUs during rendering.
Limit Tasks To Slave's Task Limit: When checked, the number of Concurrent Tasks will be limited by the
Slave's Task Limit, which is typically set to the number of available CPUs. For example, if Concurrent Tasks
is set to 16 but a Slave has 8 cores, only 8 concurrent tasks will be processed.
On Job Completion: Defines the action to perform when the job has completed rendering successfully. The
job can be either left untouched, ARCHIVED to improve Repository performance, or automatically DELETED
from the Repository.
Submit Job As Suspended: When checked, the Job will be submitted to the Repository as Suspended. It will
require manual user intervention before becoming active.
Force 3ds Max Build: This drop-down list allows you to specify which build of 3ds Max (32 bit vs. 64 bit) to
use when rendering the job. The list will be greyed out when running in 3ds Max 8 or earlier.
Make Force 3ds Max Build Sticky: When the checkbox is unchecked, the Force 3ds Max Build drop-down
list selection will NOT persist between sessions and will behave as documented above in the Default
section. When the checkbox is checked, the Force 3ds Max Build drop-down list selection will persist between
sessions. For example, if you are submitting from a 64 bit build of 3ds Max to an older network consisting of
only 32 bit builds, you can set the drop-down list to 32bit once and lock that setting by checking Make Force
3ds Max Build Sticky.
Job Dependencies
When the checkbox is checked and one or more jobs have been selected from the multi-list box, the job will be set
to Pending state and will start rendering when all jobs it depends on have finished rendering. Use the Get Jobs List
button to populate the Job List and the Filter options with job data from the Repository.
Job Scheduling
Enable job scheduling. See the Scheduling section of the Modifying Job Properties documentation for more information on the available options.
Render Tab
3ds Max Rendering
Use Alternate Plugin.ini file: By default, 3ds Max will launch using the default plugin.ini file in the local
installation. You can use this option to select an alternative plugin.ini file to use instead. Alternative plugin.ini
files can be added to [Repository]\plugins\3dsmax, and then they will appear in the drop down box in the
submitter (see the Custom Plugin.ini File Creation section for more information). If you have the [Default]
option selected, it is equivalent to having this feature disabled.
Fail On Black Frames: This option can be used to fail the render if a certain portion of the output image or
its render elements is black. The Black Pixel % defines the minimum percentage of the image's pixels that
must be black in order for the image to be considered black. If the R, G and B values are all less than or equal to the
Threshold, and the alpha is not between the Threshold and (1.0 - Threshold), then the pixel is considered black.
If the Threshold is greater than or equal to 0.5, then the alpha value has no effect.
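The black-pixel rule above can be sketched as follows (an illustration of the documented test, not Deadline's actual implementation; all names are hypothetical):

```python
def is_black_pixel(r, g, b, a, threshold):
    # Each of R, G and B must be at or below the Threshold.
    if r > threshold or g > threshold or b > threshold:
        return False
    # With a Threshold >= 0.5 the alpha value has no effect.
    if threshold >= 0.5:
        return True
    # Alpha must NOT lie between the Threshold and (1.0 - Threshold).
    return not (threshold < a < 1.0 - threshold)

def frame_is_black(pixels, threshold, black_pixel_pct):
    # The frame counts as black when at least `black_pixel_pct` percent
    # of its pixels pass the black-pixel test.
    black = sum(1 for (r, g, b, a) in pixels if is_black_pixel(r, g, b, a, threshold))
    return black * 100.0 / len(pixels) >= black_pixel_pct
```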
Override Bitmap Pager Setting While Rendering: You can specify if you want the 3dsmax Bitmap Pager
setting to be enabled or disabled.
Submit External Files With Scene: Whether the external files (bitmaps, xrefs etc.) will be submitted with the
scene or not.
Merge Object XRefs: Whether object XRefs will be merged during submission.
Merge Scene XRefs: Whether scene XRefs will be merged during submission.
Force 3dsmax Workstation Mode (Uses up a 3dsmax License): Used mainly for testing and debugging
purposes and should be left unchecked. When this option is unchecked, 3ds max will be started in Slave mode
without the User Interface, which does not require a 3ds Max license. When checked, 3ds max will be launched
in full Interactive mode and will require a license. Note that Workstation mode is set automatically when
submitting MAXScripts to Deadline.
Enable Silent Mode: This option is only available when Force Workstation Mode is checked. It can help
suppress some popups that 3ds Max displays (although some popups like to ignore this setting).
Ignore Missing External File Errors: Missing external files could mean that the 3ds Max scene will render
incorrectly (with textures missing etc). In some cases though, missing external files could be ignored; for
example if the job is meant for test rendering only. If you want the job to fail if a missing external resource is
detected, uncheck this checkbox.
Ignore Missing UVW Errors: Missing UVWs could mean that some 3ds Max object would render incorrectly
(with wrong texture mapping etc). In some cases though, missing UVWs could be ignored (for example if the
job is meant for test rendering).
Ignore Missing XREF Errors: Missing XRefs could mean that the 3ds Max scene cannot be loaded correctly.
In some cases though, missing XRefs could be ignored. If you want the job to fail if a missing XRef message
is detected at startup, keep this checkbox unchecked.
Ignore Missing DLL Errors: Missing DLLs could mean that the 3ds Max scene cannot be loaded or rendered
correctly. In some cases though, missing DLLs could be ignored. If you want the job to fail if a missing DLL
message is detected at startup, keep this checkbox unchecked.
Do Not Save Render Element Files: Enable this option to have Deadline skip the saving of Render Element
image files during rendering (the elements themselves are still rendered).
Show Virtual Frame Buffer: If checked, the 3ds Max frame buffer will be displayed on the slave during
rendering.
Override Renderer Frame Buffer Visibility: If checked, the current renderer's frame buffer visibility will be
overridden by the next setting (Show Renderer Frame Buffer).
Show Renderer Frame Buffer: If checked, the current renderer's frame buffer will be made visible during
rendering (V-Ray and Corona Frame Buffers currently supported).
Disable Progress Update Timeout: Enable this option to disable progress update checking. This is useful for
renders like Fume FX sims that don't constantly supply progress to 3dsmax.
Disable Frame Rendering: Enable this option to skip the rendering process. This is useful for renders like
Fume FX sims that don't actually require any rendering.
Restart Renderer Between Frames: This option can be used to force Deadline to restart the renderer after each
frame to avoid some potential problems with specific renderers. Enabling this option has little to no impact on
the actual render times. This feature should be ENABLED to resolve V-Ray renders where typically the beauty
pass renders correctly but the Render Elements are all black or perhaps seem to be swapped around. When
enabled, the C++ Lightning plugin (unique to Deadline) will unload the renderer plugins and then reload them
instantly. This has the effect of forcing a memory purge and helps to improve renderer stability, as well as ensure
the lowest possible memory footprint. This can be helpful when rendering close to the physical memory limit
of a machine. Ensure this feature is DISABLED if you are sending FG/LC/IM caching map type jobs to the
farm, as the renderer will get reset for each frame and the FG/LC/IM file(s) won't get incrementally increased
with the additional data per frame.
Disable Multipass Effects: Enable this option to skip over multipass effects if they are enabled for the camera
to be rendered.
V-Ray/Mental Ray DBR: Enable this option to offload a V-Ray or Mental Ray DBR render to Deadline. See
the V-Ray/Mental Ray DBR section for more information.
Job Is Interruptible: If enabled, this job will be cancelled if a job with higher priority is submitted to the queue.
Apply Custom Material To Scene: If checked, all geometry objects in the scene will be assigned one of the
user-defined materials available in the drop down box.
3ds Max Gamma Options
Remove Filename Padding: If checked, the output filename will be (for example) output.tga instead of
output0000.tga. This feature should only be used when rendering single frames. If you render a range of
frames with this option checked, each frame will overwrite the previous existing frame.
Force Strict Output Naming: If checked, the output image filename is automatically modified
to include the scene's name. For example, if the scene name was myScene.max and the output image path was
\\myServer\images\output.tga, the output image path would be changed to
\\myServer\images\myScene\myScene.tga. If the new output image path doesn't exist, it is created by the 3dsmax
plugin before rendering begins.
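The renaming rule can be illustrated with a small path helper (a sketch of the documented behaviour only; the function name is hypothetical):

```python
import ntpath  # Windows path semantics, since 3ds Max output paths are Windows paths

def strict_output_path(scene_name, output_path):
    # Insert a sub-folder named after the scene and rename the output file
    # to the scene's name, keeping the original output extension.
    base = ntpath.splitext(ntpath.basename(scene_name))[0]   # "myScene"
    folder = ntpath.dirname(output_path)                     # "\\myServer\images"
    ext = ntpath.splitext(output_path)[1]                    # ".tga"
    return ntpath.join(folder, base, base + ext)
```

For myScene.max and \\myServer\images\output.tga this yields \\myServer\images\myScene\myScene.tga, matching the example above.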
Purify Filenames: If checked, all render output including Render Elements will be purged of any illegal characters as defined by PurifyCharacterCodes in the SubmitMaxToDeadline_Defaults.ini file.
Force Lower-Case Filenames: If checked, all render output including Render Elements will be forced to have
a lowercase filename.
Update Render Elements Paths: Each Render Element has its own output path which is independent from
the render output path. When this option is unchecked, changing the output path will NOT update the Render
Elements' paths and the Elements could be written to the wrong path, possibly overwriting existing passes
from a previous render. When checked, the paths will be updated to point at sub-folders of the current Render
Output path with names based on the name and class of the Render Element. The actual file name will be left
unchanged.
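The path update can be sketched like this (hypothetical illustration; the "name_class" folder naming below is an assumption, as the exact convention the submitter uses to combine the element's name and class is not specified here):

```python
import ntpath

def updated_re_path(render_output_path, re_name, re_class, re_file_name):
    # Point the Render Element at a sub-folder of the current Render Output
    # path, named after the element's name and class. The element's own
    # file name is left unchanged.
    out_dir = ntpath.dirname(render_output_path)
    return ntpath.join(out_dir, re_name + "_" + re_class, re_file_name)
```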
Also Update REs Filenames: If enabled, the Render Element file names will also be updated along with their
paths.
Include RE Name in Paths: If enabled, the new Render Element files will be placed in a folder that contains
the RE name.
Include RE Name in Filenames: If enabled, the new Render Element files will contain the RE name in the
file name.
Include RE Type in Paths: If enabled, the new Render Element files will be placed in a folder that contains the
RE type.
Include RE Type in Filenames: If enabled, the new Render Element files will contain the RE type in the file
name.
Permanent RE Path Changes: When this checkbox is checked and the above option is also enabled, changes
to the Render Elements' paths will be permanent (in other words, after the submission, all paths will point at
the new locations created for the job). When unchecked, the changes will be performed temporarily during the
submission, but the old path names will be restored right after the submission.
Rebuild Render Elements: If checked, Render Elements will be automatically removed and rebuilt during
submission to try and work around known 3dsMax issues.
Include Local Paths With Job: (Thinkbox internal use only) Currently not hooked up to any functionality.
Use Alternate Path: Allows you to specify an alternate path file in the MXP format that the slaves can use to
find bitmaps that are not found on the primary map paths.
Render Output / Autodesk ME Image Sequence (IMSQ) Creation
Save File: Specify the render output. Note that this updates the 3ds Max Render Output dialog, and is meant as
a convenience to update the output file.
Create Image Sequence (IMSQ) File: If checked, an Autodesk IMSQ file will be created from the output files
at the output location.
Copy IMSQ File On Completion: If checked, the IMSQ file will be copied to the location specified in the text
field.
Options Tab
User Options
Enable Local Rendering: If checked, Deadline will render the frames locally before copying them over to the
final network location.
One Cpu Per Task: Forces each task of the job to only use a single CPU. This can be useful when doing single
threaded renders and the Concurrent Tasks setting is greater than 1.
Automatically Update Job Name When Scene File Name Changes: If checked, the Job Name setting in the
submission dialog will automatically match the file name of the scene loaded. So if you load a new scene, the
Job Name will change accordingly.
Override Renderer's Low Priority Thread Option (Brazil r/s, V-Ray): When checked, the Low Priority
Thread option of the renderers supporting this feature will be forced to false during the submission. Both
Brazil r/s and V-Ray provide the feature to launch the renderer in a low priority thread mode. This is useful
when working with multiple applications on a workstation and the rendering should continue in the background
without eating all CPU resources. When submitting a job though, this should be generally disabled since we
want all slaves to work at 100% CPU load.
Clear Material Editor In The Submitted File: Clears the material editor in the submitted file during submission.
Unlock Material Editor Renderer: If checked, the Material Editor's Renderer will be unlocked to use the
Default Scanline Renderer to avoid problems with some old versions of V-Ray.
Delete Empty State Sets In The Submitted File: Deletes any empty State Sets in the submitted file during
submission and the State Sets dialog/UI will be reset. This fixes an ADSK bug when running 3dsMax as a
service.
Warn about Missing External Files on Submission: When checked, a warning will be issued if the scene
being submitted contains any missing external files (bitmaps etc.). Depending on the state of the Ignore Missing
External File Errors checkbox under the Render tab, such files might not cause the job to fail but could cause
the result to look wrong. When unchecked, scenes with missing external files will be submitted without any
warnings.
Warn about Copying External Files with Job only if: the count is greater than 100 or the size is greater than
1024MB. Both values can be configured to a studio's needs.
Override 3ds Max Language: If enabled, you can choose a language to force during rendering.
Export Renderer-Specific Advanced Settings
If this option is enabled for a specific renderer, you will be able to modify a variety of settings for that renderer after
submission from the Monitor. To modify these settings from the Monitor, right-click on the job and select Modify
Properties, then select the 3dsmax tab.
Submission Timeouts
Job Submission Timeout in seconds: This value spinner defines how many seconds to wait for the external
Submitter application to return from the Job submission before stopping the attempt with a timeout message.
Quicktime Submission Timeout in seconds: This value spinner defines how many seconds to wait for the
external Submitter application to return from the Quicktime submission before stopping the attempt with a
timeout message.
Data Collection Timeout in seconds: This value spinner defines how many seconds to wait for the external
Submitter application to return from data collecting before stopping the attempt with a timeout message. Data
collecting includes collecting Pools, Categories, Limit Groups, Slave Lists, Slave Info, Jobs etc.
Limits Tab
Blacklist/Whitelist Slaves
Set the whitelist or blacklist for the job. See the Scheduling section of the Modifying Job Properties documentation
for more information on the available options.
Limits
Set the Limits that the job requires. See the Scheduling section of the Modifying Job Properties documentation for
more information on the available options.
StateSets Tab
Select the State Sets you want to submit to Deadline. This option is only available in 3ds Max 2012 (Subscription
Advantage Pack 1) and later.
Integration Tab
Project Management Data
The available Integration options are explained in the Draft and Integration documentation.
Extra Info
These are some extra arbitrary properties that can be set for the job. Note that some of these are reserved when enabling
certain integration options.
Scripts Tab
Run Python Scripts
Run Pre-Job Script: Specify the path to a Python script to execute when the job initially starts rendering.
Run Post-Job Script: Specify the path to a Python script to execute when the job finishes rendering.
Run Pre-Task Script: Specify the path to a Python script to execute before each task starts rendering.
Run Post-Task Script: Specify the path to a Python script to execute after each task finishes rendering.
Submit Script Job: This checkbox lets you turn the submission into a MAXScript job. When checked, the
scene will NOT be rendered, instead the specified MAXScript code will be executed for the specified frames.
Options that collide with the submission of a MAXScript Job, like Tile Rendering and Render Preview Job
First, will be disabled or ignored.
Single Task: This checkbox lets you run the MAXScript Job on one slave only. When checked, the job will
be submitted with a single task specified for frame 1. This is useful when the script itself will perform some
operations on ALL frames in the scene, or when per-frame operations are not needed at all. When unchecked,
the frame range specified in the Render Scene Dialog of 3ds Max will be used to create the corresponding
number of Tasks. In this case, all related controls in the Job tab will also be taken into account.
Workstation Mode: This checkbox is a duplicate of the one under the Render tab (checking one will affect the
other). MAXScript Jobs that require file I/O (loading and saving of 3ds Max files) or commands that require the
3ds Max UI to be present, such as manipulating the modifier stack, HAVE TO be run in Workstation mode (using
up a 3ds Max license on the Slave). MAXScript Jobs that do not require file I/O or 3ds Max UI functionality
can be run in Slave mode on any number of machines without using up 3ds Max licenses.
New Script From Template: This button creates a new MAXScript without any execution code, but with all
the necessary template code to run a MAXScript Job on Deadline.
Pick Script: This button lets you select an existing script from disk to use for the MAXScript Job. It is advisable
to use scripts created from the Template file using the New Script From Template button.
Edit MAXScript File: This button lets you open the current script file (if any) for editing.
Run Pre-Load Script: This checkbox lets you run a MAXScript specified in the text field below it BEFORE
the 3ds Max scene is loaded for rendering by the Slave.
Run Post-Load Script: This checkbox lets you run a MAXScript specified in the text field below it AFTER the
3ds Max scene is loaded for rendering by the Slave.
Run Pre-Frame Script: This checkbox lets you run a MAXScript specified in the text field below it BEFORE
the Slave renders a frame.
Run Post-Frame Script: This checkbox lets you run a MAXScript specified in the text field below it AFTER
the Slave renders a frame.
Post-Submission Function Call: This field can be used by TDs to enter an arbitrary user-defined MAXScript
Expression (NOT a path to a script!) which will be executed after the submission has finished. This can be used
to trigger the execution of user-defined functions or to press a button in a 3rd party script. In the screenshot, the
expression presses a button in a globally defined rollout which is part of an in-house scene management script.
If you want to execute a multi-line script after each submission, you could enter fileIn "c:\temp\somescript.ms"
in this field and the content of the specified file will be evaluated. The content of this field is sticky and saved in
the local INI file - it will persist between sessions until replaced or removed manually.
The MAXScript Job Template file is located in the Repository under \submission\3dsmax\Main\MAXScriptJobTemplate.ms.
When the button is pressed, a copy of the template file with a name pattern
MAXScriptJob_TheSceneName_XXXX.ms will be created in the \3dsmax#\scripts\SubmitMaxToDeadline
folder, where XXXX is a random ID and 3dsmax# is the name of the 3ds Max root folder. The script file will open in
3ds Max for editing. You can add the code to be executed in the marked area and save to disk. The file name of the
new template will be set as the current MAXScript Job file automatically. If a file name is already selected in the UI,
you will be prompted about replacing it first.
Deadline exposes an interface to MAXScript, which allows you to gather information about the job being rendered.
See the Maxscript Interface documentation for the available functions and properties.
Tiles Tab
Tile & Region Rendering Options
Region Rendering Mode: This drop-down list controls the various rendering modes:
FULL FRAME Rendering, All Region Options DISABLED - this is the default mode of the Submitter.
No region rendering will be performed and the whole image will be rendered.
SINGLE FRAME, MULTI-REGION Jigsaw Rendering - Single Job, Regions As Tasks - this mode
allows one or more regions to be defined and rendered on one or more network machines. Each region
can be optionally sub-divided to a grid of sub-regions to split between machines. The resulting fragments
will then be combined to a new single image, or optionally composited over a previous version of the full
image using DRAFT. This mode is recommended for large format single frame rendering. Note that the
current frame specified by the 3ds Max TIME SLIDER will be rendered, regardless of the Render Dialog
Time settings.
ANIMATION, MULTI-REGION Jigsaw Rendering - One Job Per Region, Frames As Tasks - this
mode allows one or more regions to be defined and rendered on one or more network machines. Each
region can be optionally sub-divided to a grid of sub-regions to split between machines. Each region
can be optionally animated over time by hand or by using the automatic tracking features. The resulting
fragments from each frame will then be combined to a new single image, or optionally composited over
a previous version of the full image using DRAFT. This mode is recommended for animated sequences
where multiple small portions of the scene are changing relative to the previous render iteration.
SINGLE FRAME TILE Rendering - Single Job, Tiles As Tasks - this mode splits the final single image
into multiple equally-sized regions (Tiles). Each Tile will be rendered by a different machine and the final
image can be assembled either using DRAFT, or by the legacy command line Tile Assembler. This mode
is recommended when the whole image needs to be re-rendered, but you want to split it between multiple
machines.
ANIMATION, TILE Rendering - One Job Per Tile, Frames As Tasks - this mode submits a job for each
tile, and a post-task maxscript will assemble the tiles for each frame once they have all been rendered.
3DS MAX REGION Rendering - Single Job, Frames As Tasks - this mode allows for traditional 3ds
Max REGION, BLOWUP and CROP render modes to be used via Deadline.
Cleanup Tiles After Assembly: When checked, the Tile image files will be removed after the final image has
been assembled. Keep this unchecked if you intend to resubmit some of the tiles and expect them to re-assemble
with the previous ones.
Pixel Padding: Default is 4 pixels. This is the number of pixels added on each side of the region or tile to
ensure better stitching through some overlap. Especially when rendering Global Illumination, it might be
necessary to render tiles with significant overlap to avoid artifacts.
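As a rough Python sketch (not SMTD's actual MAXScript code, and the function name is purely illustrative), padding a region or tile on each side while clamping to the image bounds works like this:

```python
def pad_region(x, y, w, h, padding, img_w, img_h):
    """Expand an (x, y, w, h) region by `padding` pixels on each side,
    clamped to the image bounds. Illustrative sketch only."""
    x0 = max(0, x - padding)
    y0 = max(0, y - padding)
    x1 = min(img_w, x + w + padding)
    y1 = min(img_h, y + h + padding)
    return x0, y0, x1 - x0, y1 - y0

# A 100x100 tile at the image corner with 4-pixel padding in a 1920x1080
# frame: the padding is clamped on the two sides that touch the border.
print(pad_region(0, 0, 100, 100, 4, 1920, 1080))  # (0, 0, 104, 104)
```

The overlapping strips produced this way are what the assembler blends to hide seams between tiles.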
Copy Draft Config Files To Output Folder: When checked, the configuration files for Draft Assembly jobs
will be duplicated in the output folder(s) for archiving purposes. The actual assembling will be performed using
the copies stored in the Job Auxiliary Files folder. Use this option if you want to preserve a copy next to the
assembled frames even after the Jobs have been deleted from the Deadline Repository.
Draft Assembly Job Error On Missing Tiles: When unchecked, missing region or tile fragments will not
cause errors and will simply be ignored, leaving either the black background or the previous image's pixels in
the assembled image. When checked, the Assembly will only succeed if all requested input images have been
found and actually put together.
Override Pool, Group, Priority for Assembly Job: When enabled, the Assembly Pool, Secondary Pool, Group
and Priority settings will be used for the Assembly Job instead of the main job's settings.
The output formats that are supported by the Tile Assembler jobs are BMP, DDS, EXR, JPG, JPE, JPEG, PNG, RGB,
RGBA, SGI, TGA, TIF, and TIFF.
Jigsaw [Single-Frame | Animation] Multi-Region Rendering
This rollout contains all controls related to defining, managing and animating multiple regions for the Jigsaw modes.
The rollout title will change to include an ACTIVE: prefix and the Single-Frame or Animation token when the
respective mode is selected in the Region Rendering Mode drop-down list (see above).
UPDATE List: Press this button to refresh the ListView.
LOAD/SAVE File...: Click to open a menu with the following options:
LOAD Regions From Disk Preset File...: Selecting this option will open a file open dialog and let you
select a previously saved Regions Preset. Any existing regions will be replaced by the ones from the file.
MERGE Regions From Disk Preset File...: Selecting this option will open a file open dialog and let you
select a previously saved Regions Preset. Any existing regions will be preserved, and the file regions will
be appended to the end of the list.
SAVE Regions To Disk Preset File...: Only enabled if there are valid regions on the list. When selected,
a file save dialog will open and let you save the current regions list to a disk preset for later loading or
merging in the same or different projects.
GET From Camera...: If the current view is a Camera, a list of region definitions stored in the current view's
Camera will be displayed, allowing you to replace the current region list with the stored one. If the current view
is not a Camera view, a warning message will be shown asking you to select a Camera view. If the current view's
Camera does not have any regions stored in it, nothing will happen.
STORE In Camera...: If the current view is a Camera, a list of region definitions stored in the current view's
Camera will be displayed, with the added option to Save New Preset... in a new slot. Alternatively, you can
select any of the previously stored slots to override or update. The Notes text specified in the Notes: field
below will be used to describe the preset. Also, additional information including the number of regions, the
user, machine name, date and time and the MAX scene name will be stored with the preset.
Notes: Enter a description of the current Region set to be used when saving a Preset to disk or camera. When a
preset is loaded, the field will display the notes stored with the preset.
ADD New Region: Creates a new region and appends it to the list. If objects are selected in the scene, the
region will be automatically resized to frame the selection. If nothing is selected, the Region will be set to the
full image size.
CREATE From...: Click to open a context menu with several multi-region creation options:
Create from SCENE SELECTION...: Select one or more objects in the scene and pick this option to create one
region for each object in the selection. Note that regions might overlap or be completely redundant depending
on the size and location of the selected objects - use the OPTIMIZE options below to reduce them.
Create from TILES GRID...: Pick this option to create one region for each tile specified in the Tiles rollout.
For example, if the Tiles in X is set to 4 and Tiles in Y is 3, 12 regions resembling the Tile Grid will be created.
Note that once the regions are created, some of them can be merged together, others can be subdivided or split
as needed to distribute regions with different content and size to different machines, providing more flexibility
than the original Tiles mode.
Create from 3DS MAX REGION...: Create a region with the size specified by the 3ds Max Region gizmo.
OPTIMAL FILL Of Empty Areas: After the grid is created, two passes are performed: first a Horizontal
Fill where regions are merged horizontally to produce wider regions, then a Vertical Fill merging regions with
shared horizontal edges. The result is the smallest number of tiles, equivalent to manually merging any
neighboring tiles with shared edges in Maya Jigsaw, which is why it is the recommended option.
HORIZONTAL FILL Of Empty Areas: After creating the grid, a pass is performed over all regions to find
neighbors sharing vertical edges. When two regions share an edge and the same top and bottom corner, they
get merged. This is the equivalent to the Maya Jigsaw behavior, producing wider regions where possible, but
leaving a lot of horizontal edges between tiles with the same width.
VERTICAL FILL Of Empty Areas: After creating the grid, a pass is performed to merge neighboring regions
sharing a horizontal edge with the same left/right corners. The result is the opposite of the Horizontal Fill - a lot
of tall regions.
GRID FILL Of Empty Areas: Takes the horizontal and vertical coordinates of all tiles and creates a grid that
contains them all. No merging of regions will be performed.
OPTIMIZE Regions, Overlap Threshold > 25%: Compare the overlapping of all highlighted regions and if
the overlapping area is more than 25% of the size of the smaller one of the two, combine the two regions to a
single region. Repeat for all regions until no overlapping can be detected.
OPTIMIZE Regions, Overlap Threshold > 50%: Same as the previous option, but with a larger overlap
threshold.
OPTIMIZE Regions, Overlap Threshold > 75%: Same as the previous options, but with an even larger
overlap threshold.
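The overlap-based optimization can be sketched in Python as follows. The merge rule (combine two regions when their intersection exceeds the threshold fraction of the smaller region's area) and the repeat-until-stable loop follow the description above, but the function names and structure are purely illustrative, not SMTD's implementation:

```python
def overlap_area(a, b):
    """Intersection area of two (x, y, w, h) regions."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    return ix * iy

def optimize_regions(regions, threshold=0.25):
    """Merge any two regions whose overlap exceeds `threshold` of the
    smaller region's area; repeat until no such pair remains (sketch)."""
    regions = list(regions)
    merged = True
    while merged:
        merged = False
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                a, b = regions[i], regions[j]
                smaller = min(a[2] * a[3], b[2] * b[3])
                if smaller and overlap_area(a, b) > threshold * smaller:
                    # Replace the pair with their bounding box.
                    x0 = min(a[0], b[0]); y0 = min(a[1], b[1])
                    x1 = max(a[0] + a[2], b[0] + b[2])
                    y1 = max(a[1] + a[3], b[1] + b[3])
                    regions[i] = (x0, y0, x1 - x0, y1 - y0)
                    del regions[j]
                    merged = True
                    break
            if merged:
                break
    return regions

# Two half-overlapping 100x100 regions collapse into one 150x100 region:
print(optimize_regions([(0, 0, 100, 100), (50, 0, 100, 100)]))
# [(0, 0, 150, 100)]
```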
Clone LEFT|RIGHT: Select a single region in the list and click with the Left Mouse Button to clone the region
to the left, or Right Mouse Button to clone to the right. The height will be retained. The width will be clamped
automatically if the new copy is partially outside the screen.
Clone UP|DOWN: Select a single region in the list and click with the Left Mouse Button to clone the region up,
or Right Mouse Button to clone down. The width will be retained. The height will be clamped automatically if
the new copy is partially outside the screen.
FIT to N Objects / Fit Padding Value: Highlight exactly one region in the list and select one or more objects
in the scene, then click with the Left Mouse Button to perform a precise vertex-based Fit to the selection, or
click with the Right Mouse Button to perform a quick bounding-box based Fit to the selection. Click the small
button with the number to the right to select the Padding Percentage to use when fitting in either mode.
TRACK Region...: Left-click to open the Track dialog in Vertex-based mode for the currently selected region
and scene objects. Right-click for Bounding Box-based mode. While you can switch the mode in the dialog
itself, both the radio buttons and the Padding % values will be adjusted for faster access according to the mouse
button pressed.
SELECT | INVERT: Left-click to highlight all regions on the list. Right-click to invert the current selection.
DELETE Regions: Click to delete the highlighted regions on the list.
SET Keyframe: Highlight one or more regions and click this button to set a keyframe with the current region
settings at the current time.
<< PREVIOUS Key: Click to change the time slider to the previous key of the highlighted region(s), if
there are such keys.
NEXT Key >>: Click to change the time slider to the next key of the highlighted region(s), if there are
such keys.
DELETE Keyframe: Click to delete the keys (if any) of the highlighted regions. If there is no key on the
current frame, nothing will happen. Use in conjunction with the Previous/Next Key navigation to locate
and delete existing keys.
Regions ListView: The list view is the main display of the current region settings. It provides several columns
and a set of controls under each column for editing the values on the list:
On # column: Shows a checkbox to toggle a region on and off for rendering, and the index of the region.
X and Y columns: These two columns display the coordinates of the upper left corner of the Region. Note
that internally the values are stored in relative screen coordinates, but in the list they are shown in current
output resolution pixel coordinates for convenience. Changing the output resolution in the Render Setup
dialog and pressing the UPDATE List button will recalculate the pixel coordinates accordingly.
Width and Height columns: These two columns display the width and height of the region in pixels. Like
the upper left corner's X and Y coordinates, they are stored internally as relative screen coordinates and
are shown as pixels for convenience.
Tiles column: Each region can additionally be subdivided horizontally and vertically into a grid of sub-tiles,
each to be rendered by a different network machine. This column shows the number of tiles of the
region; the default is 1x1.
Keys column: This column shows the number of animation keys recorded for the region. By default,
regions have no animation keys and will show 0 in the column unless animated manually or via the Tracking
option.
Locked column: After Tracking, the region will be locked automatically to avoid accidental changes to
its position and size. You can also lock the region manually if you want to prevent it from being moved
accidentally.
Notes column: This column displays auto-generated or user-defined notes for each region. When a region
is created, it might be given a name based on the object it was fitted to, the original region it was cloned or
split from, etc. You can enter descriptive notes to explain what each region was meant for.
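The relative-to-pixel conversion described for the X, Y, Width and Height columns amounts to scaling normalized coordinates by the current output resolution. A hedged Python sketch of the idea (the submitter does this in MAXScript; the rounding choice here is an assumption):

```python
def to_pixels(region_rel, out_w, out_h):
    """Convert a region stored as relative screen coordinates (0.0-1.0)
    into pixel coordinates for the given output resolution, rounding to
    whole pixels. Illustrative sketch only."""
    rx, ry, rw, rh = region_rel
    return (round(rx * out_w), round(ry * out_h),
            round(rw * out_w), round(rh * out_h))

# The same stored region resolves to different pixel values when the
# output resolution changes - which is why UPDATE List recalculates:
print(to_pixels((0.25, 0.25, 0.5, 0.5), 1920, 1080))  # (480, 270, 960, 540)
print(to_pixels((0.25, 0.25, 0.5, 0.5), 3840, 2160))  # (960, 540, 1920, 1080)
```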
UNDO... / REDO...: Most operations performed in the Multi-Region rollout will create undo records automatically.
The Undo buffer is saved to disk in a form similar to the presets, and you can undo or redo individual
steps by left-clicking the button, or multiple steps at once by right-clicking and selecting from a list.
HOLD: Not all operations produce a valid undo record. If you feel that the next operation might be dangerous,
you can press the HOLD button to force the creation of an Undo record at the current point to ensure you can
return to it in case the following operations don't produce desirable results.
SPLIT To Tiles: Pressing this button will split the highlighted region to new regions according to the Tiles
settings, assuming they are higher than 1x1 subdivisions. You can use this feature together with the Tiles
controls to quickly produce a grid of independent regions from a single large region. For example, if you create
a single region with no scene selection, it will have the size of the full screen. Enter Tile values like 4 and 3 and
press the SPLIT To Tiles button to produce a grid of 12 regions.
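A minimal Python sketch of that splitting step (illustrative only; remainder pixels on non-divisible sizes are ignored in this simplification):

```python
def split_to_tiles(region, tiles_x, tiles_y):
    """Split one (x, y, w, h) region into a tiles_x by tiles_y grid of
    independent regions, row by row. Sketch of the SPLIT To Tiles idea."""
    x, y, w, h = region
    tw, th = w // tiles_x, h // tiles_y
    return [(x + col * tw, y + row * th, tw, th)
            for row in range(tiles_y) for col in range(tiles_x)]

# A full-screen 1920x1080 region split with Tiles values 4 and 3:
tiles = split_to_tiles((0, 0, 1920, 1080), 4, 3)
print(len(tiles))  # 12
```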
MERGE Selected: Highlight two or more regions to merge them into a single region. The regions don't have
to necessarily touch or overlap - the minimum and maximum extents of all regions will be found and they will
be replaced by a single region with that position and size.
Summary Field: This field displays information about the number of regions and sub-regions (tiles), the number
of pixels to be rendered by these regions, and the percentage of pixels that would be rendered compared to the
full image.
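The Summary Field statistics amount to simple pixel arithmetic. A simplified Python sketch (overlapping regions are counted twice here, which the real summary may handle differently):

```python
def coverage_summary(regions, img_w, img_h):
    """Sum the pixels covered by the (x, y, w, h) regions and report the
    percentage of the full frame they represent. Illustrative sketch."""
    px = sum(w * h for (_, _, w, h) in regions)
    return px, 100.0 * px / (img_w * img_h)

# Two quarter-frame regions of a 1920x1080 image cover half the pixels:
px, pct = coverage_summary([(0, 0, 960, 540), (960, 540, 960, 540)], 1920, 1080)
print(px, pct)  # 1036800 50.0
```

This percentage is what makes Jigsaw attractive for iterative work: re-rendering 50% of the pixels costs roughly half a full-frame render.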
Assemble Over... drop-down list: This list provides the assembly compositing options:
Assemble Over EMPTY Background: The regions will be assembled into a new image using a black
empty background with zero alpha.
Compose Over PREVIOUS OUTPUT Image: The regions will be assembled over the previously rendered (or assembled) image matching the current output filename (if it exists). If such an image does not
exist, the regions will be assembled over an empty background.
Compose Over CUSTOM SINGLE Image: The regions will be assembled over a user-defined bitmap
specified with the controls below. The same image will be used on all frames if an animation is rendered.
Compose Over CUSTOM Image SEQUENCE: The regions will be assembled over a user-defined image
sequence specified with the controls below. Each frame will use the corresponding frame from the image
sequence.
Pick Custom Background Image: Press this button to select the custom image or image sequence to be used in
the last compositing modes above. Make sure you specify a network location that can be accessed by the Draft
jobs on Deadline performing the Assembly!
[Single-Frame | Animation] Tile Rendering
Tiles In X / Tiles In Y: These values specify the number of tiles horizontally and vertically. The total number
of tiles (and jobs) to be rendered is calculated as X*Y and is displayed in the UI.
Show Tiles In Viewport: Enables the tile display gizmo.
Tile Pixel Padding: This value defines the number of pixels to overlap between tiles. By default it is set to
0, but when rendering Global Illumination, it might be necessary to render tiles with significant overlapping to
avoid artifacts.
Re-Render User-Defined Tiles: When checked, only user-defined tiles will be submitted for re-rendering. Use
the [Specify Tiles To Re-render...] check-button to open a dialog and select the tiles to be rendered.
Specify Tiles To Re-render: When checked, a dialog to select the tiles to be re-rendered will open. To close
the dialog, either uncheck the button or press the [X] button on the dialog's title bar.
Enable Blowup Mode: If enabled, tile rendering will work by zooming in on each region and rendering it at a
smaller resolution. That region is then blown up to bring it to the correct resolution. This has been known to
help save memory when rendering large, high-resolution images.
Submit All Tiles As A Single Job: By default, a separate job is submitted for each tile (this allows for tile
rendering of a sequence of frames). For easier management of single frame tile rendering, you can choose to
submit all the tiles as a single job.
Submit Dependent Assembly Job: When rendering a single tile job, you can also submit a dependent assembly
job to assemble the image when the main tile job completes.
Use Draft For Assembly: If enabled, Draft will be used to assemble the images. Note that you'll need a Draft
license from Thinkbox.
Region Rendering
When enabled, only the specified region will be rendered and, depending on the region type selected, it can be cropped
or blown up as well. Enabling this option will uncheck the Enable Distributed Tiles Rendering checkbox if it is
currently checked. This option REPLACES the Crop option in the Render mode drop-down list in the 3ds Max UI; in
other words, the 3ds Max option does not have to be selected for Region Rendering to be performed on Deadline. The
region can be specified either using the CornerX, CornerY, Width and Height spinners, or by getting the current region
from the active viewport. To do so, set the Render mode drop-down list to either Region or Crop, press the Render icon
and drag the region marker to specify the desired size. Then press ESC to cancel and press the Get Region From Active
View button to capture the new values.
Misc Tab
Quicktime Generation From Rendered Frame Sequence
Create a Quicktime movie from the frames rendered by a 3ds Max job. See the Quicktime documentation for more
information on the available options.
Render To Texture
This option enables texture baking through Deadline. Use the Add, Remove, and Clear All buttons to add and remove
objects from the list of objects to bake.
One Object Per Task: If enabled, each RTT object will be allocated to an individual task, thereby allowing
multiple machines to carry out RTT processing simultaneously.
Batch Submission
Use Data from 3ds Max Batch Render: This checkbox enables Batch Submission using the 3ds Max Batch
Render dialog settings. If checked, a single MASTER job will be sent to Deadline which in turn will spawn
all necessary BATCH jobs.
Open Dialog: This button opens the 3ds Max Batch Render dialog in Version 8 and higher.
Update Info: This button reads the 3ds Max Batch Render dialog settings and displays the number of enabled
vs. defined Views.
Sanity Check
The 3ds Max Sanity Check script defines a set of functions to be called to ensure that the scene submission does not
contain typical errors like wrong render view and frame range settings, incorrect output path, etc.
The Sanity Check is enabled by the Run Sanity Check Automatically Before Submission checkbox in the User Options
group of controls in the Submit To Deadline (3dsmax) dialog. You can also run the Sanity Check manually at any time
by clicking the Run Now! button.
Clicking the dialog anywhere outside of the two message areas will rerun the Sanity Check and update all
messages.
Double-clicking any Message in the Feedback Messages window will rerun the Sanity Check and update all
messages.
Repairing an error by double-clicking will also automatically rerun the Sanity Check.
Pressing the Run Now! button in the Submit To Deadline dialog will update the Sanity Check.
FATAL Sanity Checks
These are errors that must be fixed manually before the job can be submitted.
Message: The scene does not contain ANY objects!
Description: The scene is empty and should not be sent to Deadline.
Fix: Load a valid scene or create/merge objects, then try again.

Message: Maxwell is the renderer and the current view is NOT a Camera.
Description: The Maxwell renderer must render through an actual camera and will fail through a viewport.
Fix: Double-click the error message to open a Select By Name dialog to pick a camera for the current viewport.

Message: The scene contains objects or groups with the same name as a camera!
Description: The scene contains objects or groups with a duplicate name to a camera, which could result in an
incorrect object being used as the camera.
Fix: Ensure you remove any duplicate named objects from your scene.

Message: Maxwell is the renderer and the Render Time Output is set to a SINGLE FRAME! (Check is currently
disabled in SMTD)
Description: Maxwell has an issue with single frame rendering.

Message: Render Output Path length exceeds 255 characters!

Message: Render Elements Output Path length exceeds 255 characters!

Message: Multi-Region Rendering Requested, But No Active Regions Found!
Message: The current Scene Name is Untitled.
Description: The scene has never been saved to a MAX file. While it is possible to submit an untitled scene to
Deadline, it is not a good practice.
Fix: Double-click the error message to open a Save As dialog and save to disk.

Message: The active viewport is not a camera viewport.
Warnings
The following Sanity Checks are simply warnings.
Message: The Render Output Path is NOT DEFINED!
Description: No frames will be saved to disk. This is allowed if you want to output render elements only.
Fix: Double-click the error message to open the Render Dialog and select a valid path, then double-click again
to retest.

Fix: Double-click the error message to open the Render Dialog and select a single frame output format, then
double-click again to retest.
This list will be extended to include future checks and can be edited by 3rd parties by adding new definitions and
functions to the original script. Documentation on extending the script will be published later. Please email suggestions
for enhancements and additional test cases to Deadline Support.
In 3ds Max, V-Ray does NOT currently support dynamically adding or removing DBR slaves from a DBR
render once it has started on the master slave.
Notes for Mental Ray DBR:
Ensure Mental Ray is the currently assigned renderer in the 3ds Max scene file prior to submission.
You must have the Distributed Render option enabled in your Mental Ray settings under the Processing tab.
The Mental Ray Satellite service must be running on your slave machines. It is installed by default during the
3ds Max 2014 or earlier installation. Note that Autodesk changed this default from 3ds Max 2015 onwards: the
Mental Ray Satellite Service is still installed as part of the install process but is NOT automatically started, so you
will need to start it manually the very first time. See the AREA blog post about Distributed Bucket Rendering
in 3ds Max 2015.
The max.rayhosts file must be writable so that the master machine can update it. Its location differs between
versions of 3ds Max:
2010 and earlier: It will be in the mentalray folder in the 3ds Max root directory.
2011 and 2012: It will be in the mentalimages folder in the 3ds Max root directory.
2013 and later: It will be in the NVIDIA folder in the 3ds Max root directory.
Ensure the Use Placeholder Objects checkbox is enabled in the Translator Options rollout of the Processing tab. When placeholder objects are enabled, geometry is sent to the renderer only on demand.
Ensure Bucket Order is set to Hilbert in the Options section of the Sampling Quality rollout of the
Renderer tab. With Hilbert order, the sequence of buckets to render uses the fewest number of data transfers.
Contour shading is not supported with distributed bucket rendering.
Autodesk Mental Ray licensing in 3ds Max is restricted. Autodesk says: "Satellite processors allow any owner
of a 3ds Max license to freely use up to four slave machines (with up to four processors each and an unlimited
number of cores) to render an image using distributed bucket rendering, not counting the one, two, or four
processors on the master system that runs 3ds Max." Mental Ray Standalone licensing can be used to go beyond
this license limit. Use the Deadline Limits feature to limit the number of licenses being used at any time if
required.
Ensure the necessary Mental Ray executables & TCP/UDP ports have been allowed to pass through the Windows Firewall. Please consult the Autodesk 3ds Max user manual for specific information.
Use IP Addresses: If offloading a V-Ray DBR or Mental Ray Satellite render to Deadline, Deadline will update
the appropriate config file with the host names of the machines that are running the V-Ray Spawner or Satellite
service. If this is enabled, the IP addresses of the machines will be used instead.
Click OK to close the preferences, and then click on the Network tab to see the submitter.
Functions

string GetAuxFilename( int index ): Gets the file with the given index that was submitted with the job.
string GetJobInfoEntry( string key ): Gets a value from the plugin info file that was submitted with the job, and
returns an empty string if the key doesn't exist.
string GetOutputFilename( int index ): Gets the output file name for the job at the given index.
string GetSubmitInfoEntry( string key ): Gets a value from the job info file that was submitted with the job, and
returns an empty string if the key doesn't exist.
int GetSubmitInfoEntryElementCount( string key ): If the job info entry is an array, this gets the number of
elements in that array.
string GetSubmitInfoEntryElement( int index, string key ): If the job info entry is an array, this gets the element
at the given index.
void FailRender( string message ): Fails the render with the given error message.
void LogMessage( string message ): Logs the message to the slave log.
void SetProgress( float percent ): Sets the progress of the render in the slave UI.
void SetTitle( string title ): Sets the render status message in the slave UI.
void WarnMessage( string message ): Logs a warning message to the slave log.
Properties

int CurrentFrame: Gets the current frame.
int CurrentTask: Gets the current task ID.
string JobsDataFolder: Gets the local folder on the slave where the Deadline job files are copied to.
string PluginsFolder: Gets the local folder on the slave where the Deadline plugin files are copied to.
string SceneFileName: Gets the file name of the loaded 3ds Max scene.
string SceneFilePath: Gets the file path of the loaded 3ds Max scene.
The key to the left of the = is the string that will be replaced in the job name. The value to the right of the = is the
maxscript code that is executed to return the replacement string (note that the value must be returned as a string). So if
you use $scene in your job name, it will be swapped out for the scene file name. You can append additional key-value
pairs or modify the existing ones as you see fit.
By default, the [>>] button will already have $scene or $outputfilename as selectable options. You can then create an
optional JobNames.ini file in the 3dsmax submission folder, with each line representing an option. For example:
$scene
$outputfilename
$scene_$camera_$username
$maxversion_$date
These options will then be available for selection in the submission dialog. This allows for all sorts of customization
with regards to the job name.
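The expansion itself is performed by evaluating MAXScript per key. As a hedged Python sketch of just the substitution idea, with purely hypothetical values standing in for the evaluated MAXScript results:

```python
# Hypothetical evaluated values for each key -- in SMTD these come from
# executing the maxscript snippet assigned to the key.
replacements = {
    "$scene": "shot_010_lighting",
    "$camera": "RenderCam",
    "$username": "artist01",
}

def expand_job_name(template, values):
    """Replace each $key token with its evaluated value. Longest keys
    are replaced first so a short key is not matched inside a longer
    one. Illustrative sketch only."""
    for key in sorted(values, key=len, reverse=True):
        template = template.replace(key, values[key])
    return template

print(expand_job_name("$scene_$camera_$username", replacements))
# shot_010_lighting_RenderCam_artist01
```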
Generate Job Name For Shows
This advanced feature allows the addition of custom project, sequence, shot and pass names to the [>>] list to the
right of the Job Name field. Producers in larger facilities can provide full shot lists via a central set of files in the
Repository, allowing users to pick existing shot names and ensuring consistent naming conventions independent of
the 3ds Max scene naming.
To create a new set of files, go to the ..\submission\3dsmax\Main\ folder in your Repository and create the following
files:
Projects.ini - This file describes the projects currently available for Custom Job Naming. Each Project is defined as a
Category inside this file, with two keys: Name and ShortName.
For example:
[SomeProject]
Name=Some Project in 3D
ShortName=SP
[AnotherProject]
Name=Another Project
ShortName=AP
SomeProject.ini - This is a file whose name should exactly match the Category name inside the Projects.ini file,
containing the actual sequence, shot and pass descriptions of the particular project. One file is expected for each
project definition inside the Projects.ini file.
For example:
[SP_SS_010]
Beauty=true
Diffuse=true
Normals=true
ZDepth=true
Utility=true
[SP_SS_150]
Beauty=true
Diffuse=true
Utility=true
[SP_SO_020]
Beauty=true
[SP_SO_030]
Beauty=true
The Submitter will parse this file and try to collect the Sequences by matching the prefix of the shot names, for example
in the above file, it will collect two sequences - SP_SS and SP_SO - and build a list of shots within each sequence,
then also build a list of passes within each shot.
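The grouping the Submitter performs can be sketched in Python with the standard configparser module (a simplification; the actual parsing is done in MAXScript, and the prefix rule here is an assumption based on the example above):

```python
import configparser

# A shortened version of the SomeProject.ini example above.
INI_TEXT = """
[SP_SS_010]
Beauty=true
Diffuse=true

[SP_SO_020]
Beauty=true
"""

def collect_sequences(ini_text):
    """Group shots into sequences by dropping the trailing shot number,
    and list the enabled passes of each shot. Illustrative sketch."""
    cfg = configparser.ConfigParser()
    cfg.optionxform = str          # preserve the case of pass names
    cfg.read_string(ini_text)
    sequences = {}
    for shot in cfg.sections():
        seq = shot.rsplit("_", 1)[0]          # SP_SS_010 -> SP_SS
        passes = [k for k, v in cfg[shot].items() if v == "true"]
        sequences.setdefault(seq, {})[shot] = passes
    return sequences

print(collect_sequences(INI_TEXT))
```

Run against the full example file, this yields the two sequences SP_SS and SP_SO, each with its shots and per-shot pass lists, which is the cascade shown in the [>>] context menu.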
Then, when the [>>] button is pressed, the context menu will contain the name of each project and will provide a
cascade of sub-menus for its sequences, shots and passes.
You can enter as many projects into your Projects.ini file as you want and provide one INI file for each project
describing all its shots and passes. If an INI file is missing, no data will be displayed for that project.
Custom Comment Controls
Just like job names, you can use keys in the comment field that are replaced with actual values (like $scene). There is a
file in the ..\submission\3dsmax\Main\ folder in your Repository called SubmitMaxToDeadline_CommentFormats.ini.
In addition, a local copy of the SubmitMaxToDeadline_CommentFormats.ini file can be saved in a user's application
data folder. This file will OVERRIDE the comment formats in the Repository and can contain a subset of the
definitions in the global file. This file will contain some key-value pairs such as:
$default=("3ds Max " + SMTDFunctions.getMaxVersion() + " Scene Submission")
$scene=(getfilenamefile(maxfilename))
$date=((filterstring (localtime) " ")[1])
$deadlineusername=(SMTDFunctions.GetDeadlineUser())
$username=(sysInfo.username)
$maxversion=(((maxVersion())[1]/1000) as string)
The key to the left of the = is the string that will be replaced in the comment. The value to the right of the = is the
maxscript code that is executed to return the replacement string (note that the value must be returned as a string). So if
you use $scene in your comment, it will be swapped out for the scene file name. You can append additional key-value
pairs or modify the existing ones as you see fit.
By default, the [>>] button will already have $default. You can then create an optional Comments.ini file in the 3dsmax
submission folder, with each line representing an option. For example:
$default
$scene
$outputfilename
$scene_$camera_$username
$maxversion_$date
These options will then be available for selection in the submission dialog. This allows for all sorts of customization
with regards to the comment field.
Auto-Suggest Category and Priority Mechanism
This feature has been implemented to help Producers suggest categories and priorities based on Shots and Sequence
signatures which are part of the 3ds Max Scene Name.
This feature DOES NOT ENFORCE the Category and Priority for the job, it only suggests a value based on project
guidelines - the Category and Priority can be changed manually after the suggestion.
To use this feature, you have to edit the file called SubmitMaxToDeadline_CategoryPatterns.ms located in the Repository in the \submission\3dsmax folder. As a shortcut, you can press the button Edit Patterns... in the Options tab of
the Submitter - the file will open in the built-in MAXScript Editor.
The file defines a global array variable called SMTD_CategoryPatterns which will be used by the Submitter to perform
pattern matching on the Job Name and try to find a corresponding Category and optionally a priority value in the array.
The array can contain one or more sub-arrays, each one representing a separate pattern definition.
Every pattern sub-array consists of four array elements:
The first element is an array containing zero, one or more string patterns using * wildcards. These strings will
be used to pattern match the Job Name. If it matches, it will be considered for adding to the Category and for
changing the Priority. If the subarray is empty, all jobs will be considered matching the pattern.
The second element is also an array containing similar pattern strings. These strings will be used to EXCLUDE
jobs matching these patterns from being considered for this Category and Priority. If the subarray is empty, no
exclusion matching will be performed.
The third element contains the EXACT name (Case Sensitive!) of the category to be set if the Job Name matches
the patterns. If the category specified here does not match any of the categories defined via the Monitor, no action
will be performed.
The fourth element specifies the Priority to give the job if it matches the patterns. If the value is -1, the existing
priority will NOT be changed.
The pattern array can contain any number of pattern definitions. The higher a definition is on the list, the higher its
priority - if a Job Name matches multiple pattern definitions, only the first one will be used.
The pattern matching will be performed only if the checkbox Auto-Suggest Job Category and Priority in the Options
Tab is checked. It will be performed when the dialog first opens or when the Job Name is changed.
An example:
Let's assume that a VFX facility is working on a project called SomeProject with multiple sequences labelled AB, CD and EF.
The network manager has created categories called SomeProject, AB_Sequence, CD_Sequence,
EF_Sequence and High_Priority via the Monitor.
The Producers have instructed the Artists to name their 3ds Max files SP_AB_XXX_YYY_ where SP stands for SomeProject and AB is the label of the sequence, followed by the scene and shot numbers.
Now we want to set up the Submitter to suggest the right Categories for all Max files sent to Deadline based on
these naming conventions.
We want jobs from the CD sequence to be set to a Priority of 60 unless they are from the scene with number 007.
We want jobs from the AB sequence to be set to a Priority of 50.
We don't want to enforce any priority on jobs for sequence EF.
Also we want shots from the AB sequence with scene number 123 and from the EF sequence with shot number 038 to be sent at the highest priority and added to the special High_Priority category for easier filtering in the Monitor.
Finally we want to make sure that any SP project files that do not contain a sequence label are added to the
general SomeProject category with lower priority.
To implement these rules, we could create the following definitions in SubmitMaxToDeadline_CategoryPatterns.ms - press the Edit Patterns... button in the Options tab to open the file:
SMTD_CategoryPatterns = #(
#(#("*AB_123*","*EF_*_038*"),#(),"High_Priority",100),
#(#("*AB_*"),#(),"AB_Sequence",50),
#(#("*CD_*"),#("*CD_007_*"),"CD_Sequence",60),
#(#("*EF_*"),#(),"EF_Sequence",-1),
#(#("SP_*"),#(),"SomeProject",30)
)
The first pattern specifies that files from the AB sequence, scene 123 and EF sequence, shot 038 (regardless of scene number) will be suggested as Category High_Priority and set Priority to 100.
The second pattern specifies all AB jobs to have priority of 50 and be added to Category AB_Sequence. Since
the special case of AB_123 has been handled in the previous pattern, this will not apply to it.
The third pattern sets jobs that contain CD_ in their name but NOT the signature CD_007_ to the
CD_Sequence Category and sets the Priority to 60.
The fourth pattern sets jobs that contain EF_ in their name to the EF_Sequence Category but does not
change the priority (-1).
The fifth pattern specifies that any jobs that have not matched the above rules but still start with the SP_
signature should be added to the SomeProject Category and set to low priority of 30.
Note that since we used * instead of SP_ at the beginning of the first 4 patterns, even if a job is not named correctly with the project prefix SP_, the patterns will still correctly match the job name.
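The first-match behaviour walked through above can be sketched in Python (a hypothetical illustration only - the real Submitter performs this matching in MAXScript; the function name is invented):

```python
from fnmatch import fnmatch

# Same pattern table as the MAXScript example:
# (include patterns, exclude patterns, category, priority)
CATEGORY_PATTERNS = [
    (["*AB_123*", "*EF_*_038*"], [],             "High_Priority", 100),
    (["*AB_*"],                  [],             "AB_Sequence",   50),
    (["*CD_*"],                  ["*CD_007_*"],  "CD_Sequence",   60),
    (["*EF_*"],                  [],             "EF_Sequence",   -1),
    (["SP_*"],                   [],             "SomeProject",   30),
]

def suggest(job_name):
    """Return (category, priority) for the first matching pattern, or None."""
    for includes, excludes, category, priority in CATEGORY_PATTERNS:
        # An empty include list matches every job.
        included = not includes or any(fnmatch(job_name, p) for p in includes)
        excluded = any(fnmatch(job_name, p) for p in excludes)
        if included and not excluded:
            return category, priority  # first match wins; later patterns ignored
    return None

print(suggest("SP_AB_123_004"))  # ('High_Priority', 100)
print(suggest("SP_CD_007_001"))  # excluded from CD_Sequence, falls to ('SomeProject', 30)
```

Note how SP_CD_007_001 is excluded from the CD_Sequence pattern by the *CD_007_* exclusion and falls through to the catch-all SomeProject pattern, exactly as described above.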
Custom Plugin.ini File Creation
This section covers the Alternate Plugin.ini feature in the 3ds Max Rendering rollout (under the Render tab).
Alternate Plugin.ini File
The plugin.ini list will show a list of alternative plug-in configuration files located in the Repository. By default, there
will be no alternative plugin.ini files defined in the repository. The list will show only one entry called [Default],
which will cause all slaves to render using their own local plugin.ini configuration and is equivalent to having the Use
Custom Plugin.ini file unchecked.
To define an alternative plugin.ini, copy a local configuration file from one of the slaves to [Repository]\plugins\3dsmax in the repository. Edit the name of the file by adding a description of it. For example, plugin_brazil.ini, plugin_vray.ini, plugin_fr.ini, plugin_mentalray.ini, etc. Open the file and edit its content to include the
plug-ins you want and exclude the ones you don't want to use in the specific case. The next time you launch Submit
To Deadline, the list will show all alternative files whose names start with plugin and end with .ini. The list will
be alphabetically sorted, with [Default] always on top. You can then select an alternative plugin.ini file manually from
the list.
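The discovery rule described above (names starting with plugin and ending with .ini, sorted alphabetically, with [Default] always on top) could be sketched in Python as follows; the function name and folder argument are illustrative, not part of SMTD:

```python
import os

def list_plugin_inis(folder):
    """Build the alternative plugin.ini list as described in the docs:
    plugin*.ini files sorted alphabetically, [Default] pinned to the top."""
    names = [f for f in os.listdir(folder)
             if f.lower().startswith("plugin") and f.lower().endswith(".ini")]
    return ["[Default]"] + sorted(names)
```

Selecting [Default] simply means each slave keeps using its own local plugin.ini configuration.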
Pressing the Edit Plugin.ini File button will open the currently selected alternative configuration file in a MAXScript
Editor window for quick browsing and editing, except when [Default] is selected. Pressing the Browse Directory
button will open Windows Explorer, taking you directly to the plug-ins directory containing the alternative plugin.ini
files. Note that if you create a new plugin.ini file, you will have to restart the Submit To Deadline script to update the
list.
Since the alternative plug-in configuration file is located in the Repository and will be used by all slave machines, the
plug-in paths specified inside the alternative plugin.ini will be used as LOCAL paths by each slave. There are two
possible installation configurations that would work with alternative plug-ins (you could mix the two methods, but it's not recommended):
Centralized Plug-ins Repository: In this case, all 3dsmax plug-ins used in the network are located at a centralized location, with all Slaves mapping a drive letter to the central plug-in location and loading the SAME copy
of the plug-in. In this case, the alternative plugin.ini should also specify the common drive letter of the plug-in
repository.
Local Plug-in: To avoid slow 3dsmax booting in networks with heavy traffic, some studios (including ones we used to work for) deploy local versions of the plug-ins. Every slave's 3dsmax installation contains a full set of all necessary plug-ins (which could potentially be automatically synchronized to a central repository to keep all machines up-to-date). In this case, the alternative plugin.ini files should use the LOCAL drive letter of the 3dsmax installation, and all Slaves' 3dsmax copies MUST be installed on the same partition, or at least have the plug-ins directory on the same drive, for example, C:.
Auto-Detect Plugin.ini For Current Renderer
When enabled, the following operations will be performed:
1. When you check the checkbox, the current renderer assigned to the scene will be queried.
2. The first 3 characters of the renderers name will be compared to a list of known renderers.
3. If the renderer is not on the list, the alternative list will be reset to [Default].
4. If the renderer is the Default Scanline Renderer of 3dsmax, the alternative list will be reset to [Default].
5. If the renderer is a known renderer, the plugin*.ini file that matches its name will be selected.
Supported renderers for auto-suggesting an alternative configuration are:
Brazil plugin*.ini should contain brazil in its name (i.e.: plugin_brazil.ini, plugin-brazil.ini, pluginbrazil_1_2.ini etc).
Entropy plugin*.ini should contain entropy in its name (i.e.: plugin_entropy.ini, plugin-entropy.ini, pluginentropy.ini, etc).
finalRender plugin*.ini should contain fr or final in its name (i.e.: plugin_fr.ini, plugin-finalrender.ini, plugin_finalRender_Stage1.ini etc).
MaxMan plugin*.ini should contain maxman in its name (i.e.: plugin_maxman.ini, plugin-maxman.ini, pluginmaxman001.ini etc).
mentalRay plugin*.ini should contain mr or mental in its name (i.e.: plugin_mr.ini, plugin-mentalray.ini,
plugin_mental33.ini etc).
V-Ray plugin*.ini should contain vray in its name (i.e.: plugin_vray.ini, plugin-vray.ini, pluginvray109.ini
etc).
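The auto-detect steps above can be sketched in Python (a hypothetical illustration of the decision logic only - the substring table mirrors the supported-renderer list, but the function name and three-character keys are assumptions about the implementation):

```python
# Map the first 3 characters of a renderer's name to the substrings
# that an alternative plugin*.ini filename should contain.
RENDERER_SUBSTRINGS = {
    "bra": ["brazil"],
    "ent": ["entropy"],
    "fin": ["fr", "final"],
    "max": ["maxman"],
    "men": ["mr", "mental"],
    "v-r": ["vray"],
}

def autodetect(renderer_name, available_inis):
    """Pick the plugin*.ini matching the current renderer, or [Default]."""
    key = renderer_name[:3].lower()
    substrings = RENDERER_SUBSTRINGS.get(key)
    if substrings is None:
        # Unknown renderer (or the Default Scanline Renderer): reset the list.
        return "[Default]"
    for ini in sorted(available_inis):
        if any(s in ini.lower() for s in substrings):
            return ini
    return "[Default]"

print(autodetect("V-Ray Adv 3.00.07", ["plugin_brazil.ini", "plugin_vray.ini"]))
# plugin_vray.ini
```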
Notes:
In 3dsmax 5 and higher, opening a MAX file while the Auto-Detect option is checked will trigger a callback
which will perform the above check automatically and switch the plugin.ini to match the renderer used by the
scene.
In 3dsmax 6 and higher, changing the renderer via the Current Renderers rollout of the Render dialog will
also trigger the auto-suggesting mechanism.
You can override the automatic settings anytime by disabling the Auto-Detect option and selecting from the list
manually.
Custom Extra Info Controls
Just like job names and comments, you can use keys in the Extra Info 0-9 fields (under the Integration tab in SMTD)
that are replaced with actual values (like $scene). There is a file in the ..\submission\3dsmax\Main\ folder in your
Repository called SubmitMaxToDeadline_ExtraInfoFormats.ini. In addition, a local copy of the SubmitMaxToDeadline_ExtraInfoFormats.ini file can be saved in a user's application data folder. This file will OVERRIDE the comment formats in the Repository and can contain a sub-set of the definitions in the global file. This file will contain some
key-value pairs such as:
$scene=(getfilenamefile(maxfilename))
$date=((filterstring (localtime) " ")[1])
$deadlineusername=(SMTDFunctions.GetDeadlineUser())
$username=(sysInfo.username)
$maxversion=(((maxVersion())[1]/1000) as string)
The key to the left of the = is the string that will be replaced in the comment. The value to the right of the = is the MAXScript code that is executed to return the replacement string (note that the value must be returned as a string). So if
you use $scene in your comment, it will be swapped out for the scene file name. You can append additional key-value
pairs or modify the existing ones as you see fit.
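The substitution idea can be sketched in Python (hedged: in SMTD the right-hand side of each key is MAXScript evaluated at submission time; here plain callables stand in for that evaluation, and the sample keys/values are invented):

```python
def expand(template, keys):
    """Replace each $key in template with the string its callable returns."""
    # Replace longer keys first so e.g. $deadlineusername is not partially
    # clobbered by the shorter key $username.
    for key in sorted(keys, key=len, reverse=True):
        template = template.replace(key, keys[key]())
    return template

# Illustrative stand-ins for the MAXScript expressions in the .ini file:
keys = {
    "$scene": lambda: "shot_010_v003",            # would be maxfilename
    "$username": lambda: "artist01",              # would be sysInfo.username
    "$deadlineusername": lambda: "deadline_artist01",
}

print(expand("$scene by $deadlineusername", keys))
# shot_010_v003 by deadline_artist01
```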
NOTE, if you are using Shotgun or FTrack Integration, ExtraInfo0 to ExtraInfo5 will be used automatically and take precedence over any $keys in these particular fields.
As an example, you may wish to use the automatic SMTD BatchName functionality to group logical job submissions
together in your Deadline queue, but also use custom Extra Info fields to help track pipeline information such as
Project, Sequence, Shot or Job Number of a particular 3dsMax/Jigsaw/Draft/Quicktime job submission such as:
$project=[execute maxscript code here, returning a string value]
$sequence=123456
$shot=[use maxscript to get shot # from the current render output naming convention]
$jobnumber=[maxscript to query database and get project's job number as a string]
Once this additional pipeline information is injected into your Deadline jobs, the Extra Info columns can be given
user friendly names so that they can easily be identified and used to filter and sort jobs in the Monitor. See the Job
Extra Properties section for more information. NOTE, the Extra Info X columns are also injected into the Completed
Job Stats, thereby allowing you to store and later analyse/create reports against previous jobs by the data stored in your
Extra Info X columns.
9.2.7 FAQ
Which versions of 3ds Max are supported?
3ds Max versions 2010 and later are all supported (including Design editions).
Note: Due to a maxscript bug in the initial release of 3ds Max 2012, the integrated submission scripts
will not work. However, this bug has been addressed in 3ds Max 2012 Hotfix 1. If you cannot apply this patch, you must submit your 3ds Max 2012 jobs from the Deadline Monitor.
Which 3ds Max renderers are supported?
Deadline should already be compatible with all 3ds Max renderers, but it has been explicitly tested with
Scanline, MentalRay, Brazil, V-Ray, Corona, finalRender, and Maxwell. If you have successfully used a
3ds Max renderer that is not on this list, please email Deadline Support.
Does Backburner need to be installed to render with Deadline?
Yes. Backburner installs the necessary files that are needed for command line and network rendering, so
it must be installed to render with Deadline.
Does the 3ds Max plugin support Tile Rendering?
Yes. See the Tile Rendering section of the submission dialog documentation for more details.
Does the 3ds Max plugin support multiple arbitrary sized, multi-resolution Tile Rendering for both stills or
animations and automatic re-assembly, including the use of multi-channel image formats and Render Elements
(incl. V-Ray VFB specific image files)?
Yes. We call it Jigsaw and it's unique to the Deadline system! See the Tile Rendering section of the submission dialog documentation for more details.
Does the 3ds Max plugin support Batch Rendering?
Yes. See the Batch Rendering section of the submission dialog documentation for more details.
Is PSoft's Pencil+ render effects plugin supported?
Yes. Please note at least Pencil+ v3.1 is required to resolve an issue with the line render element failing to be rendered. Note, you will require the correct network render license from PSoft for each Deadline Slave, or render with a Deadline Slave that already has a full workstation license of Pencil+ installed.
When I submit a render with a locked viewport, Deadline sometimes renders a different viewport.
Prior to the release of 3ds Max 2009, the locked viewport feature wasn't exposed to the 3ds Max SDK, so it was impossible for Deadline to know whether a viewport was locked or not. Now that the feature has been exposed, we are working to improve Deadline's locked viewport support. However, in the 3ds Max 2010 SDK, there is a bug that prevents us from supporting it completely (Autodesk is aware of this bug). As of 3ds Max 2015, this bug is resolved. For earlier versions, we can only continue to recommend that users avoid relying on the locked viewport feature, and instead ensure that the viewport they want to render is selected before submitting the job. The SMTD sanity check continues to provide a warning for those versions of 3ds Max where the locked viewport SDK bug still exists.
When Deadline is running as a service, 3ds Max 2015 render jobs crash during startup.
This can happen if the new Scene (Content) Explorer is docked.
This is a known issue with 3ds Max network rendering when it is launched by a program running as a
service. See this AREA blog post about running 3ds Max 2015 as a service for a workaround and more
information.
Can I mix 3ds Max and 3ds Max Design jobs in Deadline?
Yes. ADSK introduced (April 2014) a new system environment variable you can set which will make all jobs from 3ds Max and 3ds Max Design appear as 3ds Max jobs: set MIX_MAX_DESIGN_BB to 1 to enable this feature. Note, Windows typically requires a machine restart or log-off/log-on for
the new environment setting value to become available once set. ADSK have confirmed this works for
3ds Max 2015, 3ds Max Design 2015 with Backburner 2015.0.1. It may also work with 2014 SP5 version
of 3ds Max and 3ds Max Design, with Backburner 2015.0.1. See this AREA blog post about mixing 3ds
Max and 3ds Max design on a render farm for more information. Note, Backburner Manager or Server
are NOT required to be running to make this system work in Deadline, although Backburner software still
needs to be installed on your rendernodes.
When I submit a render job that uses more than one default light, only one default light gets rendered.
The workaround for this problem is to add the default lights to the scene before submitting the job. This
can be done from within 3ds Max by selecting Create Menu -> Lights -> Standard Lights -> Add Default
Lights to Scene.
Is it possible to submit MAXscripts to Deadline instead of just a *.max scene?
Yes. Deadline supports MAXscript jobs from the Scripts tab in the submission dialog.
Does Deadlines custom interface for rendering with 3ds Max use workstation licenses?
No. Deadline's custom interface for rendering with 3ds Max does not use any workstation licenses when running on slaves. However, if you have the Force Workstation Mode option checked in the submission dialog, a workstation license will be used.
Slaves are rendering their first frame/tile correctly, but subsequent frames and render elements have problems
or are rendered black.
Try enabling the option to Restart Renderer Between Frames in the submission dialog before submission, or in the job properties dialog after submission. We have found that this works 99% of the time in these cases. When enabled, the C++ Lightning plugin (unique to Deadline) will unload the renderer plugins and then reload them instantly. This has the effect of forcing a memory purge and helps to improve renderer stability, as well as ensure the lowest possible memory footprint. This can be helpful when rendering close to the physical memory limit of a machine. See the note below for when this feature should be disabled.
V-Ray Light Cache / Irradiance Maps are not the correct file size or seem to be getting reset between incremental frames on Deadline, but calculate correctly when executed locally.
Ensure the option Restart Renderer Between Frames is DISABLED if you are sending FG/LC/IM caching map type jobs to the farm. Otherwise, the renderer will get reset for each frame, the FG/LC/IM file(s) won't get incrementally increased with the additional data per frame, and the files will only contain the data from the last frame calculated. (The resulting file size will be too small as well.)
3dsMax Point Cache files dropping geometry in renders randomly
Sometimes 3dsMax can drop point cache geometry in renders, in an almost random fashion affecting only certain rigs. Typically, but not exclusively, this happens on the 2nd assigned frame processed by a particular slave. Ensure the option Restart Renderer Between Frames is DISABLED in the submission dialog before submission, or in the job properties dialog after submission. We have found that this works 99% of the time in these cases.
When rendering with V-Ray/Brazil, it appears as if some maps are not being displayed properly.
Try enabling the option to Restart Renderer Between Frames in the submission dialog before submission, or in the job properties dialog after submission. We have found that this works 99% of the time in
these cases.
Tile rendering with a Mental Ray camera shader known as wraparound results in an incorrect final image.
How can I fix this?
This is another situation where enabling the option to Restart Renderer Between Frames in the submission dialog seems to fix the problem.
When tile rendering with a renderer that supports global/secondary illumination, I get bucket stamps (different
lighting conditions in each tile) on the final image.
Try calculating the irradiance/final gather light caching map first in one pass at full resolution. Then perform your tile render on a scene that reads the irradiance/final gather map created at full resolution. If creating the map at full resolution is impossible, then you can create it in tiles, but you need to make sure the tiles overlap each other (use Deadline's tile/jigsaw padding to help here) and make sure to use the irradiance/final gather map method that appends to the map file. Alternatively, you could consider using the V-Ray/Mental Ray DBR off-load system to accelerate the calculation of the light caching map. In summary: create (pre-calculate) the secondary/global illumination map first, then run the final render in tiles as a second job. Deadline job dependencies can be used here to release the second job as the first job successfully completes the lighting pre-calculation job.
Can I perform Distributed Bucket Rendering (DBR) with V-Ray or V-Ray RT?
Yes. A special reserve job is submitted that will run the V-Ray Spawner/V-Ray standalone process on
the render nodes. Once the V-Ray Spawner/V-Ray standalone process is running, these nodes will be able
to participate in distributed rendering. Please see the VRay Distributed Rendering (DBR) Plug-in Guide
for more information.
Can I fully off-load 3dsMax V-Ray or Mental Ray DBR rendering from my machine?
Yes, see the VRay/Mental Ray DBR section for more information. The advantages to off-loading a V-Ray DBR job fully from your workstation include releasing your local workstation to carry out other processing tasks, and helping to accelerate the irradiance map/photon cache calculation process, as the V-Ray DBR system supports distributing this across multiple machines. A risk/disadvantage of this way of working is that if a single machine currently being used to calculate a DBR bucket crashes or fails for an unknown reason, then the whole process will fail at its current stage and start from the beginning again.
Can I Perform Fume FX Simulations With Deadline?
Yes. To do so, follow these steps:
1. Your render nodes need to have Fume FX licensed properly, either with full or simulation licenses. This requirement is the same as if you were rendering with Backburner.
2. Before you launch the 3dsmax submission script, make sure that the Fume FX NetRender toggle
button is ON in the Fume FX options in 3dsmax.
3. Before you submit the job, make sure the Disable Progress Update Timeout option is enabled under the Render tab in the 3dsmax submission window.
4. Note that Fume FX uses its own frame range (in the Fume FX settings/prefs), so submit the
Max scene file to Deadline as a single frame/task.
Can I force a render to use a specific language?
Yes. Use the option located in the User Options tab of SMTD, or in the monitor submission's Advanced Options tab (2013+ only). This will change the default language on the machine the job is rendered on to the chosen language. Note that the change is permanent on the machine until 3dsMax is restarted and the language is forced to a different one. You can manually force the language to be changed back via the language-specific shortcuts in the Start menu, which effectively start 3dsMax with the language flag.
In this example, EN-US (default) is forced: C:/Program Files/Autodesk/3ds Max 2015/3dsmax.exe
/Language=ENU
When submitting to Deadline, non-ASCII characters in output paths, camera names, etc, are not being sent to
Deadline properly.
You need to enable the Save strings in legacy non-scene files using UTF8 property in the Preference Settings in 3ds Max. After enabling this, the Deadline submission files will be saved as UTF8 and therefore
non-ASCII characters will be saved properly. See the Character Encoding Defaults in 3ds Max section in
the 3ds Max Character Encoding documentation for more information.
Why do 3ds Max jobs add a period delimiter to the output filename?
Deadline 7 introduced a new Delimiter option in the integrated 3ds Max submitter (SMTD) to avoid some
problems with the way render elements and other auto-generated names were formatted in previous versions. The Delimiter option is set to a factory default of . as this is the typical convention in VFX pipelines, but it can be overridden via the Defaults INI file in the Repository. Since this setting is considered a company-wide pipeline value and should not be overridden by individual users, it is currently not exposed in the SMTD UI.
To change the Delimiter to an empty string, you can do the following:
1. Navigate to your Repository folder
2. Go to ...\submission\3dsmax\Main\
3. Locate the SubmitMaxToDeadline_Defaults.ini file and open it in a text editor
4. Add the following to the [RenderingOptions] category:
[RenderingOptions]
Delimiter=
In summary, you can use the new Delimiter option to provide a consistent file naming convention across
your studio pipeline. A few caveats: the file naming convention for Thinkbox's tile, region and Jigsaw rendering remains unchanged, and V-Ray v3 has introduced a maxscript property #fileName_addDot which can be accessed via
renderers.current.fileName_addDot
which by default is True, so it will also try to add a DOT character to its filenames if one is not present.
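The effect of the Delimiter setting can be illustrated with a small Python sketch (hypothetical naming only - the exact filename format is produced by SMTD internally):

```python
def element_filename(base, element, ext, delimiter="."):
    """Join a base output name and a render element name using the
    configured Delimiter (factory default is '.')."""
    return f"{base}{delimiter}{element}{ext}"

print(element_filename("shot_010", "diffuse", ".exr"))                # shot_010.diffuse.exr
print(element_filename("shot_010", "diffuse", ".exr", delimiter=""))  # shot_010diffuse.exr
```

Setting Delimiter= (empty) in the Defaults INI removes the separator, as the second call shows.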
This is a known issue with 3ds Max, and can occur when IPv6 is enabled on the render node. The issue can be fixed by disabling IPv6 on the machines, or by disabling the IPv6 to IPv4 tunnel. See this Area blog post about IPv6 errors for more information.
Could not delete old lightning.dlx... This file may be locked by a copy of 3ds max
Usually this is because a 3dsmax.exe process didn't quit or get killed properly. Look in Task Manager on the slaves reporting the message for a 3dsmax.exe process and kill it.
3dsmax crashed in GetCoreInterFace()->LoadFromFile()
There are a number of things that can be tried to diagnose the issue:
Try opening the file on a machine where it crashed. You may already have done this.
Try rendering a frame of it on a machine where it crashed, using the 3dsmaxcmd.exe renderer. This will make it open the file in slave mode and possibly give an idea of what's failing.
Submit the job to run in workstation mode. In workstation mode there's often more diagnostic output. There's a checkbox in the submission script for this.
If you're comfortable sending us the .max file which is crashing, we'd be happy to diagnose the issue here.
Try stripping down the max file by deleting objects and seeing if it still crashes then.
Trapped SEH Exception in CurRendererRenderFrame(): Access Violation
An Access Violation means that when rendering the frame, Max either ran out of memory, or memory
became corrupted. The stack trace in the error message usually shows which plugin the error occurred in.
If that doesn't help track down the issue, try stripping down the max file by deleting objects and seeing if
the error still occurs.
3dsmax: Trapped SEH Exception in LoadFromFile(): Access Violation
An Access Violation means that when loading the scene, Max either ran out of memory, or memory
became corrupted. The stack trace in the error message usually shows which plugin the error occurred in.
If that doesn't help track down the issue, try stripping down the max file by deleting objects and seeing if
the error still occurs.
3dsmax: PNG Plugin: PNG Library Internal Error
3dsMax Render Elements can become corrupt or be placed in a bad state with regard to the image file format plugin being used to save each Render Element to your file server. This issue is not limited to the PNG file format (it also occurs with TGA and TIF) but is common. A known fix, which resolves the issue in most circumstances, is to rebuild the render elements by deleting and re-creating them in the 3dsmax scene file. This process is automated in SMTD if you enable the checkbox Rebuild Render Elements under the Render tab -> 3ds Max Pathing Options.
RenderTask: 3dsmax exited unexpectedly (it may have crashed, or someone may have terminated)
This generic error message means that max crashed and exited before the actual error could be propagated
up to Deadline. Often when you see this error, it helps to look through the rest of the error reports for that
job to see if they contain any information that's more specific.
RenderTask: 3dsmax may have crashed (recv: socket error trying to receive data: WSAError code 10054)
This generic error message means that max crashed and exited before the actual error could be propagated
up to Deadline. Often when you see this error, it helps to look through the rest of the error reports for that
job to see if they contain any information that's more specific.
3dsmax startup: Error getting connection from 3dsmax: 3dsmax startup: Deadline/3dsmax startup error:
lightningMax*.dlx does not appear to have loaded on 3dsmax startup, check that it is the right version and
installed to the right place.
This error is likely the side effect of another error, but the original error wasn't propagated to Deadline properly. Often when you see this error, it helps to look through the rest of the error reports for that job to see if they contain any information that's more specific.
3dsmax startup: Max exited unexpectedly. Check that 1) max starts up with no dialog messages and in the
case of 3dsmax 6, 2) 3dsmaxcmd.exe produces the message Error opening scene file: when run with no
command line arguments
This message is often the result of an issue with the way Max starts up. Try starting 3ds Max on the slave
machine that produced the error to see if it starts up properly. Also try running 3dsmaxcmd.exe from the
command line prompt to see if it produces the message Error opening scene file: when run with no
command line arguments. If it doesn't produce this message, there may be a problem with the Max install or how it's configured. Sometimes reinstalling Max is the best solution.
The 3dsmax command line renderer, ...\3dsmaxcmd.exe, hung during the verification of the 3ds max install
Try running 3dsmaxcmd.exe from the command line prompt to see if it pops up an error dialog or crashes,
which is often the cause of this error message. If this is the case, there may be a problem with the Max
install or with how it is configured. Sometimes reinstalling Max is the best solution.
3dsmax: Failed to load max file: ...
There could be many reasons why Max would fail to load the scene file. Check for ERR or WRN messages
included in the error message for information that might explain the problem. Often, this error is the result
of a missing plugin or dll.
Error: 3ds Max The Assembly Autodesk.Max.Wrappers.dll encountered an error while loading
This is a specific 3ds Max 2015 crash when you try to launch the program. Ensure you perform a Windows
update and get latest updates for Windows 7 or 8. Additionally, install the update for Autodesk 3ds Max
2015 Service Pack 1 and Security Fix. See this ADSK Knowledge post for more information.
Error message: 3dsmax adapter error : Autodesk 3dsMax 17.2 reported error: Could not find the specified file
in DefaultSettingsParser::parse() ; Could not find the specified file in DefaultSettingsParser::parse() ;
The error Could not find the specified file in DefaultSettingsParser::parse() ; occurs if you don't have the Populate Data installed on each of your Deadline Slave machines. To resolve the issue you
need to ensure that the Populate Data is installed on all the render machines. You can run the 3dsMax_2015_PopulateData.msi installer from the \x64\PDATA\ folder of the 3ds Max 2015 installer. In
case there was a previous install of the Populate Data on the machine please delete the following folder
before installing C:\Program Files\Common Files\Autodesk Shared\PeoplePower\2.0\. See this Area
blog post for more information.
Error message: ERR: To use this feature, you need the Evolver data. Please check the Autodesk web site for
more information.
You may get the above error message when you try to run a Populate simulation in your 3dsMax scene file.
This is a known Autodesk bug and the fix is to install the Autodesk 3ds Max 2014 64-bit Populate Data
component. The actual file is 3dsMax_2014_PopulateData.msi which you can find in the \x64\PDATA\
folder of the install media. Note that if you're running 3ds Max Design the filename will be 3dsMaxDesign_2014_PopulateData.msi. Similarly, the same bug in 3ds Max 2015 doesn't mention Evolver anymore. Instead, it tells you to install the Populate data. See this Fixing missing Evolver data errors Area blog post
for more information.
Error message: ERROR: Please, make sure the Populate data is installed.
This is the same error message as the previous Populate FAQ entry and is fixed by installing the Autodesk Populate Data component. See this Fixing missing Evolver data errors Area blog post for more
information.
Unexpected exception (Error in bm->OpenOutput(): error code 12)
Ensure all instances of 3dsMax are running a consistent LANGUAGE. By default 3dsMax ships with the
LANGUAGE code set to ENU - US English and this is recommended for the majority of customers.
If you are using a 3rd party plugin in 3dsMax, please contact the plugin developer to verify that their
plugin is capable of running as a different language inside of 3dsMax. Note that the majority of 3rd party
plugins are still only developed to work in ENU. Please see this FAQ for more information regarding
options to control the LANGUAGE: 3dsMax Language Code FAQ.
Exception: Failed to render the frame.
There could be many reasons why Max would fail to render the frame. Check for ERR or WRN messages
included in the error message for information that might explain the problem.
DBG: in Init. nrGetIface() failed
This error message is often an indication that 3dsmax or backburner is out of date on the machine. Updating both to the latest service packs should fix the problem.
ERROR: ImageMagick: Invalid bit depth for RGB image [path to tile/region render output image]
This error is due to the old TileAssembler executable not supporting certain bit depth images, such as
V-Ray's render elements (Reflection, Refraction, and Alpha) when saved from the V-Ray Frame Buffer (VFB).
Please note that the Tile Assembler plugin is EOL (End-Of-Life/deprecated). Please use the newer Draft
Tile Assembler plugin (the Use Draft for Assembly checkbox option in SMTD) when rendering using the
older tile system to ensure all image types/bit depths are correctly assembled. Draft Tile Assembler jobs
can also be submitted independently if you already have the *.config file(s); this is explained further in the
Draft Tile Assembler documentation.
Error when using Mental Ray DBR in 3ds Max 2016: Could not locate MDL shared core library.
When you try to use DBR (Distributed Bucket Rendering) you will get the following error message:
Could not locate MDL shared core library.
To help Mental Ray satellite find this .dll, copy libmdl.dll from the main 3ds Max 2016 folder to the
NVIDIA/Satellite folder. Note that you have to do this on all the machines that will be used for DBR. See
this Error when using Mental Ray DBR in 3ds Max 2016 Area blog post for more information.
Project Configuration
In After Effects, place the comps you want to render in the Render Queue (CTRL+ALT+0). Due to an issue with the
Render Queue, if you have more than one comp with the same name, only the settings from the first one will be used
(whether they are checked or not). It is important that all comps in the Render Queue have unique names, and our
submission script will notify you if they do not. Each comp that is in the Render Queue and that has a check mark
next to it will be submitted as separate job to Deadline.
Note that under the comp's Output Module settings, the Use Comp Frame Number check box must be checked. If this
is not done, every frame in the submitted comp will try to write to the same file.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. Note that the Draft/Integration options are only available in
After Effects CS4 and later.
The After Effects specific options are:
Use Comp Name As Job Name: If enabled, the job's name will be the Comp name.
Use Frame List From The Comp: Check this option to use the frame range defined for the comp.
Comps Are Dependent On Previous Comps: If enabled, the job for each comp in the render queue will be
dependent on the job for the comp ahead of it. This is useful if a comp in the render queue uses footage rendered
by a comp ahead of it.
Render The First And Last Frames Of The Comp First: Enable this option to render the first and last frames first,
followed by the remaining frames in the comp's frame list. Note that this ignores the Frame List setting in
the submission dialog.
Submit The Entire Render Queue As One Job With A Single Task: Use this option when the entire render
queue needs to be rendered all at once because some queue items are dependent on others or use proxies. Note
though that only one machine will be able to work on this job.
Multi-Process Rendering: Enable multi-process rendering.
Submit Project File With Job: If enabled, the After Effects Project File will be submitted with the job.
Ignore Missing Layer Dependencies: If enabled, Deadline will ignore errors due to missing layer dependencies.
Fail On Warning Messages: If enabled, Deadline will fail the job whenever After Effects prints out a warning
message.
Export XML Project File: Enable to export the project file as an XML file for Deadline to render (After Effects
CS4 and later). The original project file will be restored after submission. If the current project file is already an
XML file, this will do nothing.
Ignore Missing Effects References: If enabled, Deadline will ignore errors due to missing effect references.
Continue On Missing Footage: If enabled, rendering will not stop when missing footage is detected.
Enable Local Rendering: If enabled, Deadline will render the frames locally before copying them over to the
final network location.
Override Fail On Existing AE Process: If enabled, the global repository setting Fail on Existing AE Process
will be overridden.
Fail on Existing AE Process: If enabled, the job will be failed if any After Effects instances are currently
running on the slave. Existing After Effects instances can sometimes cause 3rd party AE plugins to malfunction
during network rendering.
The following After Effects specific options are only available in After Effects CS4 and later:
Multi-Machine Rendering: This mode submits a special job where each task represents the full frame range.
The slaves will all work on the same frame range, but if Skip existing frames is enabled for the comps, they
will skip frames that other slaves are already rendering.
This mode requires Skip existing frames to be enabled for each comp in the Render Queue.
Set the number of tasks to be the number of slaves you want working simultaneously on the render.
This mode ignores the Frame List, Machine Limit, and Frames Per Task settings.
This mode does not support Local Rendering or Output File Checking.
Minimum Output File Size: If an output image's file size is less than what's specified, the task is requeued
(specify 0 for no limit).
Enable Memory Management: Whether or not to use the memory management options.
Image Cache %: The maximum amount of memory After Effects will use to cache frames.
Max Memory %: The maximum amount of memory After Effects can use overall.
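The Minimum Output File Size option above boils down to a size check on each task's output. A rough sketch of the idea (a hypothetical helper, not Deadline's actual implementation, and assuming the size is given in kilobytes):

```python
import os

def output_too_small(path, minimum_kb):
    """Return True if the rendered file is missing or below the minimum size.

    A minimum of 0 disables the check, matching the submitter's
    "specify 0 for no limit" behaviour.
    """
    if minimum_kb <= 0:
        return False
    if not os.path.exists(path):
        return True
    return os.path.getsize(path) < minimum_kb * 1024
```

When the check returns True for a finished task, the task would be requeued rather than marked complete.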
Layer Submission
In addition to normal job submission, you also have the option to submit layers in your After Effects project as separate
jobs. To do so, first select the layers you want to submit. Then run the submission script, set the submission options
mentioned above as usual, and press the Submit Selected Layers button. This will bring up the layers window.
Use Subfolders: Enable this to render each layer to its own subfolder. If this is enabled, you must also specify
the subfolder format.
Render Executables
After Effects Executable: The path to the After Effects aerender executable file used for rendering. Enter
alternative paths on separate lines. Different executable paths can be configured for each version installed on
your render nodes.
Render Options
Fail On Existing After Effects Process: Prevent Deadline from rendering when After Effects is already open.
Force Rendering In English: You can configure the After Effects plug-in to force After Effects to render in
English. This is useful if you are rendering with a non-English version of After Effects, because it ensures that
Deadline's progress gathering and error checking function properly (since they are currently based on English
output from the After Effects renderer).
Font Folder Synchronization
The new FontSync event plugin that ships with Deadline v7.1 can be used to synchronize fonts on Mac OS X and
Windows before the Slave application starts rendering any job, or when the Slave first starts up. This general, Python-based
FontSync event plugin replaces the font synchronization options here in the After Effects plugin and now works
for ALL plugin types in Deadline. The FontSync event plugin is located at <Repository>/events/FontSync.
Path Mapping For aepx Project Files (For Mixed Farms)
Enable Path Mapping For aepx Files: If enabled, a temporary aepx file will be created locally on the slave for
rendering and Deadline will do path mapping directly in the aepx file.
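Because an aepx project file is plain XML text, the path mapping described above can be pictured as simple text substitution on a temporary copy of the file. A minimal sketch (illustrative only; Deadline's real mapping rules come from the Repository's Path Mapping settings):

```python
def map_paths(aepx_text, mappings):
    """Apply simple find/replace path mapping to aepx project text.

    mappings: list of (source_path, replacement_path) pairs, e.g.
    a Windows footage root mapped to its Linux/Mac mount point.
    """
    for old, new in mappings:
        aepx_text = aepx_text.replace(old, new)
    return aepx_text
```

The mapped text would be written to a temporary aepx file on the slave, and that copy rendered instead of the original.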
9.3.5 FAQ
Which versions of After Effects are supported?
After Effects CS3 and later are supported.
Why is there no Advanced tab in the integrated submission script for After Effects CS3?
Tabs are only supported in CS4 and later, so the Advanced tab and its options are not available in CS3 and
earlier.
Does network rendering with After Effects require a full After Effects license?
In After Effects CS5.0 and earlier, a license is not required. In After Effects CS5.5, a full license is
required. In After Effects CS6.0 and later, a license isn't required if you enable non-royalty-bearing
mode.
Rendering through Deadline seems to take longer than rendering through After Effects locally.
After Effects needs to be restarted at the beginning of each frame, and this loading time results in the
render taking longer than expected. If you know ahead of time that your frames will render quickly, it is
recommended to submit your frames in groups of 5 or 10. This way, After Effects will only load at the
beginning of each group of frames, instead of at the beginning of every frame.
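The effect of grouping can be pictured by how a frame range splits into tasks. A minimal sketch in generic Python (not part of Deadline's API):

```python
def chunk_frames(start, end, group_size):
    """Split an inclusive frame range into (first, last) groups.

    Each group corresponds to one task, so After Effects loads once
    per group instead of once per frame.
    """
    chunks = []
    frame = start
    while frame <= end:
        last = min(frame + group_size - 1, end)
        chunks.append((frame, last))
        frame = last + 1
    return chunks
```

For example, a 0-99 range in groups of 10 yields ten tasks (ten loads of After Effects) rather than one hundred.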
When rendering a job, only the images from the first task are saved, and subsequent tasks just seem to overwrite
those initial image files.
In the comp's Output Module Settings, make sure that the Use Comp Frame Number checkbox is
checked. Check out step 1 here for complete details.
I get the error that the specified comp cannot be found when rendering, but it is in the render queue.
This can occur for a number of reasons, most of which are related to the name of the comp. Examples are names with two spaces next to each other, or names with apostrophes in them. Try using only
alphanumeric characters and underscores in comp names and output paths to see if that resolves the issue.
Why do the comps in the After Effects Render Queue require unique names?
Due to an issue with the Render Queue, if you have more than one comp with the same name, only the
settings from the first one will be used (whether they are checked or not). It is important that all comps in
the Render Queue have unique names, and our submission script will notify you if they do not.
Understanding the different After Effects command line flags.
Adobe has a web page, Automated Rendering, which explains the different network render command
line options and how they work. Deadline currently supports as many of these options as possible.
How can I optimize After Effects for high performance?
Adobe provides an excellent web page, Memory and Storage, documenting different areas of After Effects
and what can be done by users to improve performance, particularly in the areas of disk storage/caching
& RAM.
Just add the following function to the AfterEffectsPlugin class in AfterEffects.py, which can be found in
[Repository]/plugins/AfterEffects.
def CheckExitCode( self, exitCode ):
    if exitCode != 0:
        if exitCode == -1073741819:
            self.LogInfo( "Ignoring exit code -1073741819" )
        else:
            self.FailRender( "Renderer returned non-zero error code %d." % exitCode )
You can find another example of the CheckExitCode function in MayaCmd.py, which can be found in
[Repository]/plugins/MayaCmd.
aerender ERROR: No comp was found with the given name.
This can occur for a number of reasons, most of which are related to the name of the comp. Examples are
names with two spaces next to each other, or names with apostrophes in them. Try using only alphanumeric
characters and underscores in comp names and output paths to see if that resolves the issue.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Anime Studio specific options are:
Render Executables
Anime Studio Executable: The path to the Anime Studio executable file used for rendering. Enter alternative
paths on separate lines. Different executable paths can be configured for each version installed on your render
nodes.
9.4.3 FAQ
Which versions of Anime Studio are supported by Deadline?
Anime Studio 8 and later are supported.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Integration options are
explained in the Integration documentation. The Arion specific options are:
Arion File: The Arion scene that will be rendered. Can be a .rcs or .obj file.
LDR Output File: The name of the rendered LDR output file. If no output file is specified a default image file
will be saved beside the Arion file.
HDR Output File: The name of the rendered HDR output file. If no output file is specified a default image file
will be saved beside the Arion file.
Passes: If enabled, Arion will render until the specified number of passes have completed.
Minutes: If enabled, Arion will render until the specified number of minutes have passed.
Threads: The number of threads that Arion will use to render the input file. If no threads are specified, a default
of one will be used.
Command Line Args: Here you can specify additional command line arguments. Arion accepts command line
arguments in the format -arg:value.
Channels: Each channel enabled will generate a different image appended with the channel name.
If both Passes and Minutes are specified, Arion will finish rendering when the first limit is reached. If neither are
enabled, Arion will render indefinitely and the job will have to be stopped manually.
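Putting the options above together, the command line that Arion receives follows its -arg:value convention. A hedged sketch of how such an argument list could be assembled (flag spellings here are illustrative; check Arion's own documentation for the exact names your version expects):

```python
def build_arion_args(scene, passes=None, minutes=None, threads=None, extra=""):
    """Assemble argument strings in Arion's -arg:value style.

    passes/minutes map to the Passes and Minutes limits described above;
    threads defaults to one, matching the documented behaviour.
    """
    args = [scene]
    if passes is not None:
        args.append("-passes:%d" % passes)
    if minutes is not None:
        args.append("-minutes:%d" % minutes)
    # Default to one thread if none are specified.
    args.append("-threads:%d" % (threads if threads else 1))
    if extra:
        # Extra Command Line Args, e.g. "-h:100 -w:100".
        args.extend(extra.split())
    return args
```

If both a passes and a minutes limit appear in the list, Arion stops at whichever limit it reaches first, as noted above.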
Render Executables
Arion Engine Executable: The path to the Arion engine executable file used for rendering. Enter alternative
paths on separate lines.
9.5.3 FAQ
Which versions of Arion are supported?
Only the Arion 2 Standalone is supported.
Are there any issues with referencing a file in the global input folder when one or more other files exist with the
same name?
Yes. When there is a file in the scene that has the same name as a file in another subdirectory, the network
renderer will reference the first file with that name that it finds. It ignores the direct path to the correct
subdirectory.
Can I render multiple channels?
Yes! The Arion submitter supports the selection of individual channels.
How can I pass additional information to Arion?
The Command Line Args field allows you to specify additional arguments to Arion. For example, typing
-h:100 -w:100 in the Command Line Args field will tell Arion to change the image size to 100px by
100px. To find out more information about additional command line arguments, please visit Arions
website.
Can I submit Arion animations?
The Arion 2 Standalone does not support animations and can only render single images. Arion does still
support animations through their Live plugins.
Render Executables
Arnold Kick Executable: The path to the Arnold kick executable file used for rendering. Enter alternative
paths on separate lines. Different executable paths can be configured for each version installed on your render
nodes.
9.6.3 FAQ
Is Arnold Standalone supported by Deadline?
Yes.
Can I submit a sequence of Arnold .ass files that each contain one frame?
Yes, this is supported.
9.7 AutoCAD
9.7.1 Job Submission
You can submit jobs from within AutoCAD by installing the integrated submission script, or you can submit them
from the Monitor. The instructions for installing the integrated submission script can be found further down this page.
To submit from within AutoCAD, press the Submit To Deadline button on the Deadline tab or run the command
SubmitToDeadline
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation.
AutoCAD has 3 types of submission jobs each of which have their own specific options.
The render job options are:
Render Views: Which views to render, each one will be a separate frame in a single job.
Render Procedure: View or Selected - whether or not to render everything in the view or only the selected
objects.
The plotter job options are:
Plotter to use: Which plotter should be used.
Plot Area: Extents or Display - what area should be plotted, everything in the scene or what is currently
displayed.
Paper Size: The size of paper to plot to.
Paper Units: Which units to use for the paper.
Fit Plot Scale: Whether or not the plot should be scaled as much as possible to fit on the paper.
Plot Scale: The scale to use if not fitting.
Plot Style Table: Which plot style table should be used.
Use Line Weight: Whether or not the lines should have extra weight on them.
Scale Line Weights: Whether or not the lines should be scaled.
The export job options are:
Selection: Which objects should be exported. Only available in the integrated submitter.
Types to Export: Which types of objects should be exported.
Textures: How textures should be handled.
DGN Settings: DGN specific settings such as version and seed file.
Render Executables
AutoCAD 2015 Executable: The path to the AutoCAD 2015 executable file used for rendering. Enter alternative paths on separate lines. Different executable paths can be configured for each version installed on your
render nodes.
AutoCAD 2016 Executable: The path to the AutoCAD 2016 executable file used for rendering. Enter alternative paths on separate lines. Different executable paths can be configured for each version installed on your
render nodes.
9.7.4 FAQ
Is AutoCAD supported by Deadline?
Yes.
AutoCAD 2016 requires signed DLLs. Are Deadline's plugins signed?
Yes, all of Deadline's plugins are signed. Due to the new system, though, you will have to add Thinkbox
as a trusted company on each of your machines. This can be done by opening AutoCAD 2016 on the
machines that have the plugins (including the render plugin) and then allowing the plugins to always load.
9.8 Blender
9.8.1 Job Submission
You can submit jobs from within Blender by installing the integrated submission script, or you can submit them from
the Monitor. The instructions for installing the integrated submission script can be found further down this page.
To submit from within Blender 2.5 and later, select Render -> Submit To Deadline. For previous versions of Blender,
you must submit from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Blender specific options are:
Render Executables
Blender Executable: The path to the Blender executable file used for rendering. Enter alternative paths on
separate lines.
Output
Suppress Verbose Progress Output To Log: When enabled, this will prevent excessive progress logging to the
Slave and task logs.
Click on the Render filter on the left, and check the box next to the Render: Submit To Deadline add-on.
Click the Install Add-On button at the bottom, browse to [Repository]\submission\Blender\Client, and select
the DeadlineBlenderClient.py script. Then press the Install Add-On button to install it. Note that on Windows, you may not be able to browse the UNC repository path, in which case you can just copy [Repository]\submission\Blender\Client\DeadlineBlenderClient.py locally to your machine before pointing the Add-On
installer to it.
Then click on the Render filter on the left, and check the box next to the Render: Submit To Deadline add-on.
After closing the User Preferences window, the Submit To Deadline option should now be in your Render menu.
9.8.4 FAQ
Which versions of Blender are supported?
Blender 2.x is currently supported.
9.9 Cinema 4D
9.9.1 Job Submission
You can submit jobs from within Cinema 4D by installing the integrated submission script, or you can submit them
from the Monitor. The instructions for installing the integrated submission script can be found further down this page.
To submit from within Cinema 4D, select Python -> Plugins -> Submit To Deadline.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Cinema 4D specific options are:
Threads To Use: The number of threads to use for rendering.
Build To Force: Force rendering in 32 bit or 64 bit.
Export Project Before Submission: If your project is local, or you are rendering in a cross-platform environment, you may find it useful to export your project to a network directory before the job is submitted.
Enable Local Rendering: If enabled, the frames will be rendered locally, and then copied to their final network
location.
Render Executables
C4D Executable: The path to the C4D executable file used for rendering. Enter alternative paths on separate
lines. Different executable paths can be configured for each version installed on your render nodes.
initial properties in the submission script prior to displaying the submission window. You can also use it to run your
own checks and display errors or warnings to the user. Here is a very simple example of what this script could look
like:
import c4d
from c4d import gui

def RunSanityCheck( dialog ):
    dialog.SetString( dialog.DepartmentBoxID, "The Best Department!" )
    dialog.SetLong( dialog.PriorityBoxID, 33 )
    dialog.SetLong( dialog.ConcurrentTasksBoxID, 2 )
    gui.MessageDialog( "This is a custom sanity check!" )
    return True
The available dialog IDs can be found in the SubmitC4DToDeadline.py script mentioned above. They are defined near
the top of the SubmitC4DToDeadlineDialog class. These can be used to set the initial values in the submission dialog.
Finally, if the RunSanityCheck method returns False, the submission will be cancelled.
9.9.5 FAQ
Which versions of Cinema 4D are supported?
Cinema 4D 12 and later are supported.
When I use Adobe Illustrator files as textures, the render fails with an Asset missing error.
While Cinema 4D is able to use AI files in workstation mode, there are often problems when rendering
in command line mode. Convert the AI files to another known type, such as TIFF or JPEG, before using
them.
Sometimes when I open the submission dialog in Cinema 4D, the pool list or group list are empty.
Simply close the submission dialog and reopen it to repopulate the lists.
Does rendering with Cinema 4D with Deadline use up a full Cinema 4D license?
There are separate Cinema 4D command line licenses that are required to render with Deadline. Please
contact Maxon for more information regarding licensing requirements.
Can Deadline render with Cinema 4D's Net Render Client software?
No. It isn't possible for 3rd party software such as Deadline to control Cinema 4D's Net Render Client,
which is why Deadline uses the command line renderer.
I have copied over the SubmitToDeadline.pyp file, but the integrated submission script does not show up under the
Python menu.
This is likely caused by some failure in the script. Check your repository path to ensure the client is able
to read and write to that folder. Using the python console within C4D may provide more specific hints.
My frames never seem to finish rendering. When I check the slave machine, it doesn't appear to be doing
anything.
This can occur if Cinema 4D hasn't been licensed yet. Try starting Cinema 4D normally on the machine
and see if you are prompted for a license. If you are, configure everything and then try rendering on that
machine again.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Cinema 4D Team Render
specific options are:
Render Client Count: The number of render clients to use.
Security Token: The security token that the Team Render application will use on the slaves (it will be generated
automatically if left blank).
Rendering
After you've configured your submission options, press the Reserve Clients button to submit the Team Render job.
After the job has been submitted, you can press the Update Clients button to update the job's ID and Status in the
submitter. As nodes pick up the job, pressing the Update Clients button will also show them in the Active Servers list.
Cinema 4D's Team Render Machines window will also appear after pressing the Reserve Clients button, and will
show you the Team Render machines that are currently available. Before you can render with them, though, you must
verify them by following these steps:
1. Copy the Security Token from the submitter to the clipboard (use the Copy to Clipboard button).
2. Right-click on each machine in the Team Render Machines window and select the Verify option, then paste the
Security Token and press OK.
When you are ready to render, select the Team Render To Picture Viewer option in C4D's Render menu to start
rendering.
Cinema 4D Options
C4D Team Render Executable: The path to the Cinema 4D Team Render Client executable file used for
rendering. Enter alternative paths on separate lines. Different executable paths can be configured for each
version installed on your render nodes.
9.10.4 FAQ
Which versions of Cinema 4D are supported?
After you specify the render archive file, the submitter will come up with the Render Archive and Frame List fields
already populated.
Note that if you are submitting from the Monitor, you will have to manually export your render archive from inside
Clarisse iFX, and then browse to the Render Archive file in the Monitor submitter.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Clarisse iFX specific options
are:
Threads: The number of threads to use for rendering. If set to 0, the value in the Clarisse configuration file will
be used.
Verbose Logging: Enables verbose logging during rendering.
Render Executables
CRender Executable: The path to Clarisse's crender executable file used for rendering. Enter alternative
paths on separate lines.
Configuration Options
Global Config File: A global configuration file to be used for rendering. If left blank, the Clarisse.cfg file in the
user home directory will be used instead.
Module Paths: Additional paths to search for modules (one path per line).
Search Paths: Additional paths to search for includes (one path per line).
...the DeadlineClarisseClient.py script from [Repository]...
Click Add, and you should now see a Deadline tab in the toolbar with a button that you can click on to submit
the job.
9.11.4 FAQ
Which versions of Clarisse iFX are supported?
The crender application is used for rendering, so any version of Clarisse iFX that includes this application is supported.
9.12 Combustion
9.12.1 Job Submission
You can submit Combustion jobs from the Monitor.
Workspace Configuration
In Combustion, when you are ready to submit your workspace, open the Render Queue by selecting File ->
Render... (CTRL+R).
Select which items you want to render in the box on the left.
Under the Global Settings tab, specify an Input Folder (a shared folder where all the footage for your workspace
can be found) and an Output Folder (a shared folder where the output will be dumped). Note that Combustion
will search any subfolders in your Input Folder for footage as well.
Render Executables
Combustion Executable: The path to the ShellRender executable file used for rendering. Enter alternative
paths on separate lines. Different executable paths can be configured for each version installed on your render
nodes.
9.12.3 FAQ
Which versions of Combustion are supported?
Combustion 4 and later are supported.
All my input footage is spread out over the network, so how do I specify a single Input Folder during submission?
When Combustion is given an Input Folder, it will search all subfolders for the required footage until
the footage is found. So if you have a root folder that all of your footage branches off from, you should
specify that root as the Input Folder.
Are there any issues with referencing a file in the global input folder when one or more other files exist with the
same name?
Yes. When there is a file in the scene that has the same name as a file in another subdirectory, the network
renderer will reference the first file with that name that it finds. It ignores the direct path to the correct
subdirectory.
Can Deadline render multiple outputs?
No. Only one output can be enabled in your Combustion workspace. If no outputs are enabled, or multiple
outputs are enabled, the workspace cannot be submitted to Deadline.
When rendering, I receive a pop-up error message. Since rendering is supposed to be silent, should I not be
getting error messages like this in the first place?
Make sure that you're using ShellRenderer.exe as the render executable (combustion.exe starts up Combustion normally, while ShellRenderer.exe is the command line rendering application). You can make the
switch in the Plugin Configuration (Tools -> Configure Plugins in the Monitor while in super user mode).
Why isn't path mapping working properly between Windows and Mac?
On the Mac, the Combustion workspace file saves network paths in the form share:\\folder\..., so you have
to set up your Path Mapping settings in the Repository options accordingly.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Command Line specific options
are:
Job Type: Choose a normal job or maintenance job. A normal job will let you specify an arbitrary frame list,
but a maintenance job requires a start frame and an end frame.
Executable: The executable to use for rendering.
Arguments: The arguments to pass to the executable. Use the Start Frame and End Frame buttons to add their
corresponding tags to the end of the current arguments. See the Manual Job Submission documentation for more
information on these tags.
Frame Tag Padding: Determines the amount of frame padding to be added to the Start and End Frame tags.
Start Up Folder: The folder that the executable will be started in. If left blank, the executable's folder will be
used instead.
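The interaction between the frame tags and Frame Tag Padding can be sketched as follows. The <STARTFRAME> and <ENDFRAME> tag names follow the Manual Job Submission documentation; the substitution code itself is illustrative, not Deadline's actual implementation:

```python
def substitute_frame_tags(arguments, start_frame, end_frame, padding=0):
    """Replace frame tags in an argument string with zero-padded numbers.

    A padding of 0 leaves the frame numbers unpadded.
    """
    arguments = arguments.replace("<STARTFRAME>", str(start_frame).zfill(padding))
    arguments = arguments.replace("<ENDFRAME>", str(end_frame).zfill(padding))
    return arguments
```

For example, the argument string "-s <STARTFRAME> -e <ENDFRAME>" with frames 1 to 10 and a padding of 4 becomes "-s 0001 -e 0010".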
9.13.3 FAQ
How do I handle paths in the arguments with spaces in them?
Use double-quotes around the path. For example, "T:\projects\path with spaces\project.ext".
Do I need to use the <QUOTE> tags?
These are only needed when submitting manually from the command line. When using the Monitor
submitter, you can just type in the double-quote character in the Arguments field.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Command Script specific
options are:
Commands To Execute: Specify a list of commands to execute by either typing them in, or by loading them
from a file. You also have the option to save the current list of commands to a file. To insert file or folder paths
into the Commands field, use the Insert File Path or Insert Folder Path buttons.
Startup Directory: The directory where each command will start up. This is optional, and if left blank, the
executable's directory will be used as the startup directory.
Commands Per Task: Number of commands that will be executed for each task.
The Command file contains the list of commands to run. There should be one command per line, and no lines should
be left blank. If your executable path has a space in it, make sure to put quotes around the path. The idea is that
one frame in the job represents one command in the Command file. For example, let's say that your Command file
contains the following:
"C:\Program Files\Executable1.exe"
"C:\Program Files\Executable1.exe" -param1
"C:\Program Files\Executable1.exe"
"C:\Program Files\Executable1.exe" -param1 -param2
"C:\Program Files\Executable1.exe"
Because there are five commands, the Frames specified in the Job Info File should be set to 0-4. If the ChunkSize is set
to 1, then a separate task will be created for each command. When a slave dequeues a task, it will run the command
that is on the corresponding line number in the Command file. Note that the frame range specified must start at 0.
If you wish to run the commands in the order that they appear in the Command file, you can do so by setting the
MachineLimit in the Job Info File to 1. Only one machine will render the job at a given time, thus dequeuing each
task in order. However, if a task throws an error, the slave will move on to dequeue the next task.
To submit the job, run the following command (where DEADLINE_BIN is the path to the Deadline bin directory):
DEADLINE_BIN\deadlinecommand JobInfoFile PluginInfoFile CommandFile
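As a sketch, the two info files for the five-command example above might look like the following. The Frames, ChunkSize, and MachineLimit keys are described on this page; the remaining keys and the plugin name are illustrative assumptions, so consult the Job Submission documentation for the authoritative key list:

```ini
; JobInfoFile (hypothetical minimal example)
Plugin=CommandScript
Name=Command Script Example
Frames=0-4
ChunkSize=1
MachineLimit=1

; PluginInfoFile (hypothetical minimal example; may be empty for this plugin)
StartupDirectory=
```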
9.14.4 FAQ
Can I use executables with spaces in the path?
Yes, just add quotes around the executable path.
9.15 Composite
9.15.1 Job Submission
You can submit jobs from within Composite by installing the integrated submission script, or you can submit them
from the Monitor. The instructions for installing the integrated submission script can be found further down this page.
To submit from within Composite, select the version you would like to submit, hit render, and choose the Background
option when prompted.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Integration options are
explained in the Integration documentation. The Composite specific options are:
Project File: The Composite .txproject file.
Composition: Path to the composition that you want to submit.
Composition Version: The version of the current composition selected.
Users ini file: The path to the user.ini file for this composition.
Version: The version of Composite to use.
Build to Force: Force 32 bit or 64 bit rendering.
Render Executables
Composite Executable: The path to the txrender executable file used for rendering. Enter alternative paths on
separate lines. Different executable paths can be configured for each version installed on your render nodes.
In the Render window, select Deadline as the Action and press Start.
9.15.4 FAQ
Which versions of Composite are supported?
Composite 2010 and later are supported.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Corona specific options are:
Corona Scene: The Corona scene that will be rendered. Must be a .scn file.
Output File Directory: The directory for the output to be saved to.
Output File Name: The prefix for the output file names. If not specified it defaults to output.
Frame List: The list of frames to be rendered. Each frame will be rendered to a separate output file.
Render Executables
Corona Executable: The path to the corona standalone executable file used for rendering. Enter alternative
paths on separate lines.
Port Configuration
Here is a consolidated list of port requirements for Corona DR. Ensure any applicable firewalls are opened to allow
pass-through communication. If in doubt, opening TCP/UDP ports in the range 19660-19670 will typically cover all
Corona implementations for DR. During initial testing, it is recommended to open all ports in this range and verify
that everything works.
Port Number   Application   Notes
19666         3dsMax
19667         3dsMax
19668         3dsMax        loopback
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Corona DR specific options
are:
Maximum Servers: The maximum number of Corona DR Servers to reserve for distributed rendering.
Enable Verbose Logging (Optional): When checked, Corona DR server will create verbose logs.
Use Server IP Address Instead of Host Name: If checked, the Active Servers list will show the server IP
addresses instead of host names.
Automatically Update Server List: When unchecked, this option stops the automatic refresh of the Active
Servers list based on the current Deadline queue.
Complete Job after Render: When checked, as soon as the DR session has completed (max quick render
finished), then the Deadline job will be marked as complete in the queue.
Rendering
After you've configured your submission options, press the Reserve Servers button to submit the Corona DR job. The
job's ID and Status will be tracked in the submitter, and as nodes pick up the job, they will show up in the Active
Servers list. Once you are happy with the server list, press Start Render to start distributed rendering.
Note that the Corona DR Server process can sometimes take a little while to initialize. This means that a server in
the Active Servers list could have started the Corona DR server without being fully initialized yet. If this is the case, it's
probably best to wait a minute or so after the last server has shown up before pressing Start Render.
The Update Servers (3dsMax only) button will manually update the Active Servers list. Note that if you modify the Maximum
Servers value, the job's frame range will be updated when this button is pressed or when Automatically Update Server
List is enabled.
Whilst using the interactive Corona DR Server submission system in 3dsMax, it is recommended NOT to use the
Search LAN button or enable the Search LAN during render checkbox, as you risk accidentally selecting the wrong
Corona DR servers if another user in your studio is also running one or more Corona DR servers for their own
rendering needs.
After the render is finished, you can press Release Servers or close the submitter UI (Setup Corona DR With Deadline)
to mark the Corona DR job as complete so that the render nodes can move on to another job in your queue.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Corona DR specific options
are:
Maximum Servers: The maximum number of Corona DR Servers to reserve for distributed rendering.
Verbose Logging: Enable for verbose logging from the DrServer application.
Rendering
After you've configured your submission options, press the Submit button to submit the Corona DR job. Note that
this doesn't start any rendering; it just allows the Corona DR Server application to start up on nodes in the farm. Once
you're happy with the nodes that have picked up the job, you can initiate the distributed render manually from within
the application. This will likely require manually configuring your Corona Server list, or, conveniently, you could use
the Search LAN button to automatically find any Corona DR servers running on your network. Additionally,
Corona provides a Search LAN during render checkbox, which can be used to locate additional Corona DR Servers
whilst the render is progressing on your workstation; it also allows any errored or user-interrupted servers to re-join
the rendering session.
After the distributed render has finished, remember to mark the job as complete or delete it so that the nodes can move
on to other jobs. Alternatively, use the DR Session timeout functionality described below or the auto task timeout to
control whether these types of jobs are automatically completed after a certain period of time.
9.17.5 FAQ
Is Corona Distributed Rendering (DR) supported?
Yes. A special reserve job is submitted that will run the Corona DR Server application on the render
nodes. Once the Corona DR Server process is running, these nodes will be able to participate in distributed
rendering.
Which versions of Corona DR are supported?
Corona interactive rendering is supported for 3ds Max 2012-2015.
What if the Corona DR Server application fails to start?
During initial configuration of Corona DR Server and any future debugging, it is recommended to disable
any firewall and anti-virus software on both the DR master host machine and all render slave machines
intended to participate in the DR process. We suggest you manually get Corona DR up and
running in your studio pipeline to verify all is well before introducing Deadline as a framework to
handle the DR Server application.
Is Backburner required for 3dsMax based Corona DR via Deadline?
Yes. Normal 3dsMax rendering via Deadline requires the Backburner DLLs to be present on a system,
and the same prerequisite applies for Corona DR rendering to work correctly. Ensure you have the latest
or corresponding version of Backburner so that it supports the version of 3dsMax you are using. You
can submit a normal 3dsMax render job to verify that Backburner and 3dsMax rendering via Deadline are
all operating correctly before attempting to configure Corona DR rendering. Use the Deadline job report
to verify that correctly matched versions of Backburner and 3dsMax are in place.
Do I need to run the Corona DR Server application executable on each machine?
Do NOT execute Render Legion's Corona DR Server executable manually on each intended machine.
Deadline is more flexible here and will spawn the Corona DR Server standalone executable as a child
process of the Deadline Slave. This makes the system flexible and resilient to crashes: when the Corona
DR job is terminated or completed in the Deadline queue, the Deadline Slave application will cleanly
tidy up the DR Server and, more importantly, any instances of 3dsMax which it in turn has spawned as
child processes. This can be helpful if Corona DR or an instance of 3dsMax becomes unstable and a user
wishes to reset the system remotely. You can simply re-queue, delete/complete, or re-submit the current
Corona DR job.
Can I force Corona DR to run over a certain port?
No. Currently this is not possible and the ports used are fixed. Please see the Port Configuration table at
the top of this page for more information.
Corona DR rendering seems a little unstable sometimes, or my machine slows down dramatically!
Depending on the number of slave machines being used (Win7 OS < 20), the scene file sizes being moved
around together with asset files, and your network/file storage configuration, it may help to increase the
Synchronization interval [s] setting (default 60) and decrease the Max pixels transfer at once setting
(default 500000), which can help to reduce the load on your local machine and network.
9.18 CSiBridge
9.18.1 Job Submission
You can submit CSiBridge jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The CSiBridge specific options are:
CSi Bridge Data File(s): The CSi Bridge Data File to be processed. CSi Bridge Files (*.BDB), Microsoft
Access Files (*.MDB), Microsoft Excel Files (*.XLS), CSi Bridge Text Files (*.$BR *.B2K) are supported.
Override Output Directory: If this option is enabled, an output directory can be used to re-direct all processed
files to.
Build To Force: You can force 32 or 64 bit processing with this option.
Submit Data File With Job: If this option is enabled, the Bridge file will be submitted with the job, and then
copied locally to the slave machine during processing.
Version: The version of CSiBridge to render with.
CSiBridge Process/Solver Options are:
Process Selection: Choose to execute inside of the existing Bridge application process or as a separate process.
Solver Selection: Select the Solver to perform the analysis on the data file.
CSiBridge Design Options are:
Four options are available to automatically perform design after the data file has been opened and analysis results are
available.
Steel Frame Design: Perform steel frame design after the analysis has completed.
Concrete Frame Design: Perform concrete frame design after the analysis has completed.
Aluminium Frame Design: Perform aluminium frame design after the analysis has completed.
Cold Formed Frame Design: Perform cold formed frame design after the analysis has completed.
CSiBridge Deletion Options are:
Temp File Deletion: Choose a deletion option to cleanup the analysis/log/out files if required.
CSiBridge Additional Options are:
Include Data File: If enabled, the output zip file will contain the data file OR if outputting to a directory path,
the data file will be included.
Compress (ZIP) Output: Automatically compress the output to a single zip file.
Command Line Args: Additional command line flags/options can be added here if required.
Executables
Bridge 15 Executable: The path to the Bridge 15 executable file used for simulating. Enter alternative paths on
separate lines.
Bridge 2014 Executable: The path to the Bridge 2014 executable file used for simulating. Enter alternative
paths on separate lines.
Bridge 2015 Executable: The path to the Bridge 2015 executable file used for simulating. Enter alternative
paths on separate lines.
9.18.3 FAQ
Is CSiBridge supported by Deadline?
Yes.
9.19 CSiETABS
9.19.1 Job Submission
You can submit CSiETABS jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The CSiETABS specific options
are:
CSi ETABS Data File(s): The CSi ETABS Data File to be processed. CSi ETABS Files (*.EDB), Microsoft
Access Files (*.MDB), Microsoft Excel Files (*.XLS), CSi ETABS Text Files (*.$ET *.E2K) are supported.
Override Output Directory: If this option is enabled, an output directory can be used to re-direct all processed
files to.
Build To Force: You can force 32 or 64 bit processing with this option.
Submit Data File With Job: If this option is enabled, the ETABS file will be submitted with the job, and then
copied locally to the slave machine during processing.
Version: The version of CSi ETABS to render with.
CSiETABS Design Options are:
Four options are available to automatically perform design after the data file has been opened and analysis results are
available.
Steel Frame Design: Perform steel frame design after the analysis has completed.
Concrete Frame Design: Perform concrete frame design after the analysis has completed.
Composite Beam Design: Perform composite beam design after the analysis has completed.
Shear Wall Design: Perform shear wall design after the analysis has completed.
CSiETABS Deletion Options are:
Delete Analysis Results: Choose to delete the analysis results if required.
CSiETABS Additional Options are:
Include Data File: If enabled, the output zip file will contain the data file OR if outputting to a directory path,
the data file will be included.
Compress (ZIP) Output: Automatically compress the output to a single zip file.
Command Line Args: Additional command line flags/options can be added here if required.
Executables
ETABS 2013 Executable: The path to the ETABS 2013 executable file used for simulating. Enter alternative
paths on separate lines.
ETABS 2014 Executable: The path to the ETABS 2014 executable file used for simulating. Enter alternative
paths on separate lines.
9.19.3 FAQ
Is CSiETABS supported by Deadline?
Yes.
9.20 CSiSAFE
9.20.1 Job Submission
You can submit CSiSAFE jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The CSiSAFE specific options are:
CSi SAFE Data File(s): The CSi SAFE Data File to be processed. CSi SAFE Files (*.FDB), Microsoft Access
Files (*.MDB), Microsoft Excel Files (*.XLS), CSi SAFE Text Files (*.$2K *.F2K) are supported.
Override Output Directory: If this option is enabled, an output directory can be used to re-direct all processed
files to.
Build To Force: You can force 32 or 64 bit processing with this option.
Submit Data File With Job: If this option is enabled, the SAFE file will be submitted with the job, and then
copied locally to the slave machine during processing.
Version: The version of CSi SAFE to process with.
CSiSAFE Analysis/Design/Detailing Option:
Run Method: Choose a run combination option such as Disabled, Run Analysis, Run Analysis & Design
or Run Analysis, Design & Detailing.
CSiSAFE Process/Solver Options:
Process Selection: Choose to execute inside of the existing SAFE application process or as a separate process.
Solver Selection: Select the Solver to perform the analysis on the data file.
Force 32bit Process: Force analysis to be calculated in 32 bit even when the computer is 64 bit.
CSiSAFE Report Option:
Create Report: Create a report based on the current report settings in the model file.
CSiSAFE Export Options:
File Export: File export a Microsoft Access, Microsoft Excel, or text file.
DB Named Set (required): The name of the database tables named set that defines the tables to be exported.
This parameter is required.
DB Group Set (optional): The specified group sets the selection for the exported tables. This parameter is
optional. If it is not specified, the group ALL is assumed.
CSiSAFE Deletion Options:
Temp File Deletion: Choose a deletion option to cleanup the analysis/output files if required such as keep
everything, delete analysis & output files, delete analysis files only or delete output files only.
CSiSAFE Additional Options:
Include Data File: If enabled, the output zip file will contain the data file OR if outputting to a directory path,
the data file will be included.
Compress (ZIP) Output: Automatically compress the output to a single zip file.
Executables
SAFE 12 Executable: The path to the SAFE 12 executable file used for simulating. Enter alternative paths on
separate lines.
SAFE 2014 Executable: The path to the SAFE 2014 executable file used for simulating. Enter alternative paths
on separate lines.
9.20.3 FAQ
Is CSiSAFE supported by Deadline?
Yes.
9.21 CSiSAP2000
9.21.1 Job Submission
You can submit CSiSAP2000 jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The CSiSAP2000 specific options
are:
CSi SAP2000 Data File(s): The CSi SAP2000 Data File to be processed. CSi SAP2000 Files (*.SDB), Microsoft Access Files (*.MDB), Microsoft Excel Files (*.XLS), CSi SAP2000 Text Files (*.$2K *.S2K) are
supported.
Override Output Directory: If this option is enabled, an output directory can be used to re-direct all processed
files to.
Build To Force: You can force 32 or 64 bit processing with this option.
Submit Data File With Job: If this option is enabled, the SAP2000 file will be submitted with the job, and then
copied locally to the slave machine during processing.
Version: The version of CSi SAP2000 to render with.
CSiSAP2000 Process/Solver Options are:
Process Selection: Choose to execute inside of the existing SAP2000 application process or as a separate
process.
Solver Selection: Select the Solver to perform the analysis on the data file.
CSiSAP2000 Design Options are:
Four options are available to automatically perform design after the data file has been opened and analysis results are
available.
Steel Frame Design: Perform steel frame design after the analysis has completed.
Concrete Frame Design: Perform concrete frame design after the analysis has completed.
Aluminium Frame Design: Perform aluminium frame design after the analysis has completed.
Cold Formed Frame Design: Perform cold formed frame design after the analysis has completed.
CSiSAP2000 Deletion Options are:
Temp File Deletion: Choose a deletion option to cleanup the analysis/log/out files if required.
CSiSAP2000 Additional Options are:
Include Data File: If enabled, the output zip file will contain the data file OR if outputting to a directory path,
the data file will be included.
Compress (ZIP) Output: Automatically compress the output to a single zip file.
Command Line Args: Additional command line flags/options can be added here if required.
Executables
SAP2000 14 Executable: The path to the SAP2000 14 executable file used for simulating. Enter alternative
paths on separate lines.
SAP2000 15 Executable: The path to the SAP2000 15 executable file used for simulating. Enter alternative
paths on separate lines.
SAP2000 16 Executable: The path to the SAP2000 16 executable file used for simulating. Enter alternative
paths on separate lines.
SAP2000 17 Executable: The path to the SAP2000 17 executable file used for simulating. Enter alternative
paths on separate lines.
9.21.3 FAQ
Is CSiSAP2000 supported by Deadline?
Yes.
9.22 DJV
9.22.1 Job Submission
You can submit DJV jobs from the Monitor. You can use the Submit menu, or you can right-click on a job and select
Scripts -> Submit DJV Quicktime Job To Deadline to automatically populate some fields in the DJV submitter based
on the job's output.
Submission Options
The general submission options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. You can get more information about the DJV specific
options by hovering your mouse over the label for each setting. The Settings buttons can be used to quickly save and
load presets, or reset the settings back to their defaults.
DJV Executables
DJV Executable: The path to the djv_convert executable file used for rendering. Enter alternative paths on
separate lines. Different executable paths can be configured for each version installed on your render nodes.
9.22.3 FAQ
Is DJV supported by Deadline?
Yes.
Can I create Apple Quicktime mov files with DJV?
Yes. On Windows, you must use the 32-bit version of DJV only. The LibQuicktime based codecs are
only available in DJV v1.0.1 or later, and only on Linux. As an alternative, you can also use Thinkbox's
Draft product (an image/movie creation automation toolkit), which is included in Deadline and is licensed
against your active Deadline support subscription. See Draft for more information.
Can I create EXR files compressed with DreamWorks Animation's DWAA or DWAB compression?
Yes, but this is only supported in DJV v1.0.1 or later.
Note, however, that DJV has a bug causing it to crash, which is currently stopping these two command line flag options
from working. The code has been commented out in the DJV plugin and can be re-enabled at such time as the
bug is fixed by the DJV developer.
Various command line options failing in DJV?
Many of the [djv_convert] command line flags are broken due to spaces being present between the
flag options in DJV versions earlier than v1.0.1. This is all resolved in DJV v1.0.1 and later, so it is
recommended to use at least this version. (Wrapping the flag options with additional quotation marks does
not resolve the issue, as it's a bug in the actual [djv_convert] command line argument parser.)
9.23 Draft
9.23.1 Job Submission
There are many ways to submit Draft jobs to Deadline. As always, you can simply submit a Draft job from within the
Monitor from the Submit menu. In addition, we've also added a right-click job script to the Monitor, which will allow
you to submit a Draft job based on an existing job. This will pull over output information from the original job and
fill in Draft parameters automatically where possible.
On top of the Monitor scripts, you can also get set up to submit Draft jobs directly from Shotgun. This will again
pull over as much information as possible, this time from the Shotgun database, in order to pre-fill several of the Draft
parameter fields. See the Integrated Submission Script Setup section below for more details on this.
We've also added a Draft section to all of our other submitters. Submitting a Draft job from any of these uses our
Draft Event Plug-in to submit a Draft job based on the job currently being submitted (this is similar in concept to the
right-click job script described above). The Draft job will get automatically created upon completion of the original
job.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Draft Tile Assembler specific
options are:
Input Config File: The file that will control a majority of the assembly.
Error on Missing File: If enabled, the job will error if any of the tiles in the config file are missing.
Cleanup Tiles: If enabled, the job will delete all of the tile files after the assembly is complete.
Build To Force: You can force 32 bit or 64 bit rendering.
Config File Setup
The config file is a plain text file that uses Key/Value pairs (key=value) to control the draft tile assembly.
TileCount=<#>: The number of tiles that are going to be assembled
DistanceAsPixels=<true/false>: Distances provided in pixels or in a 0.0-1.0 percentage range (Defaults to
True)
BackgroundSource=<BackgroundFile>: If provided, the assembler will attempt to assemble the new tiles
over the specified image.
TilesCropped=<true/false>: If disabled, the assembler will crop the tiles before assembling them.
ImageHeight=<#>: The height of the final image. This will be ignored if a background is provided. If this is
not provided and the tiles are not cropped then the first tile will be used to determine the final image size.
ImageWidth=<#>: The width of the final image. This will be ignored if a background is provided. If this is
not provided and the tiles are not cropped then the first tile will be used to determine the final image size.
Tile<#>Filename=<FileName>: The file name of the tile to be assembled. (Only used if ImageFolder is not
included, 0 indexed)
Tile<#>X=<#>: The X coordinates for the tile that is to be assembled. 0 at the left side.
Tile<#>Y=<#>: The Y coordinates for the tile that is to be assembled. 0 at the bottom.
Tile<#>Width=<#>: The width of the tile that is to be cropped. (Only used if TilesCropped is false)
Tile<#>Height=<#>: The height of the tile that is to be cropped. (Only used if TilesCropped is false)
ImageFolder=<Folder>: The folder that you would like to assemble images from. (If included, the assembler
will assemble all tiles within the specified folder.)
ImagePadding=<#>: The amount of padding on the file names within the folder. (Only used if ImageFolder is
included)
ImageExtension=<ext>: The extension of the files to be assembled. (Only used if ImageFolder is included)
Tile<#>Prefix=<Prefix>: The Prefix that the file must contain (Only used if ImageFolder is included)
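To illustrate the key/value format above, here is a minimal parser sketch. It assumes only what the examples below show: lines starting with # are comments, everything else is a key=value pair. The function name is invented and this is not the actual assembler code:

```python
# Hypothetical sketch of parsing a Draft Tile Assembler config file into a dict.
# Assumes '#' lines are comments and all other non-blank lines are key=value pairs.
def parse_tile_config(text):
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comment lines
        key, _, value = line.partition("=")  # split on the first '=' only
        config[key.strip()] = value.strip()
    return config

cfg = parse_tile_config("TileCount=4\n# a comment\nTilesCropped=True")
print(cfg["TileCount"])  # -> "4"
```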
Example Config Files
The first example config file will control a simple tile assembly.
#We are assembling 4 tiles into an image
TileCount=4
#The final image will have the following filename
ImageFileName=C:/ExampleConfig/outputFileName.png
#The final Image will have a resolution of 960x540
ImageWidth=960
ImageHeight=540
#The Images are already Cropped
TilesCropped=True
#What is the file that will be the first tile assembled
Tile0FileName=C:/ExampleConfig/_tile_1x1_2x2_sceneName.png
The second example config file controls a folder render. It will assemble all files within the folder C:/ExampleConfig/
that have the extension exr and the given prefixes. So if the files region_0_test.exr, region_1_test.exr, region_2_test.exr, and region_3_test.exr exist, then this config will create the image test.exr:
#We are assembling 4 tiles into an image
TileCount=4
#In the config files we are using relative coordinates instead of pixel coordinates
DistanceAsPixels=0
#The tiles have not yet been cropped so the tile assembler has to crop each tile.
TilesCropped=false
#We are going to assemble all files within the specified folder.
ImageFolder=C:/ExampleConfig
#We are going to only assemble files with the following extension
ImageExtension=exr
#The first tile in each of the images will start with the following prefix
Tile0Prefix=region_0_
#Where should the tile go
Tile0X=0
Tile0Y=0
#Because we are cropping the tiles we need to give it a width and height to crop to
Tile0Width=0.5
Tile0Height=0.5
#The second tile in each of the images will start with the following prefix
Tile1Prefix=region_1_
#Where should the tile go
Tile1X=0.5
Tile1Y=0
#Because we are cropping the tiles we need to give it a width and height to crop to
Tile1Width=0.5
Tile1Height=0.5
Tile2Prefix=region_2_
Tile2X=0
Tile2Y=0.5
Tile2Width=0.5
Tile2Height=0.5
Tile3Prefix=region_3_
Tile3X=0.5
Tile3Y=0.5
Tile3Width=0.5
Tile3Height=0.5
9.24.3 FAQ
There are no FAQ entries at this time.
9.25 EnergyPlus
9.25.1 Job Submission
You can submit EnergyPlus jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The EnergyPlus specific options
are:
EnergyPlus IDF File(s): The EnergyPlus IDF file(s) to be processed.
Weather EPW File(s): The Weather EPW File(s) to be referenced (Optional).
Override Output Directory: If this option is enabled, an output directory can be used to re-direct all processed
files to.
Build To Force: You can force 32 or 64 bit processing with this option.
Submit File(s) With The Job: If this option is enabled, the data file(s) will be submitted with the job, and then
copied locally to the slave machine during processing.
EnergyPlus Post-Process Options are:
../ReadVarsESO.exe Max.Columns: Limit the maximum number of columns used when calling ReadVarsESO.exe.
Execute ../convertESOMTR.exe: Execute the convertESOMTR.exe application as a post-process.
Execute ../CSVproc.exe: Execute the csvProc.exe application as a post-process.
EnergyPlus Processing Options are:
Multithreading: If enabled, EnergyPlus simulations will use multithreading. Ignored if Concurrent Tasks > 1.
Pause Mode (DEBUG only): Only for Debug purposes. Will PAUSE the program execution at key steps.
EnergyPlus Other Options are:
Include Data File: If enabled, the output zip file will contain the data file OR if outputting to a directory path,
the data file will be included.
Compress (ZIP) Output: Automatically compress the EP output to a single zip file.
Executables
EnergyPlus Executable: The path to the EnergyPlus executable file used for simulating. Enter alternative paths
on separate lines.
9.25.3 FAQ
Is EnergyPlus supported by Deadline?
Yes.
9.26 FFmpeg
9.26.1 Job Submission
You can submit FFmpeg jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The FFmpeg specific options are:
Render Executables
FFmpeg Executable: The path to the FFmpeg executable file used for rendering. Enter alternative paths on
separate lines.
9.26.3 FAQ
Currently, there are no FAQs for this plug-in.
9.27 Fusion
9.27.1 Job Submission
You can submit jobs from within Fusion by installing the integrated submission script, or you can submit them from
the Monitor. The instructions for installing the integrated submission script can be found further down this page.
To submit from within Fusion, select Script -> DeadlineFusionClient.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation.
Fusion Comp: The flow/comp file to be rendered.
Frame List: The list of frames to render.
Frames Per Task: This is the number of frames that will be rendered at a time for each job task.
Proxy: The proxy level to use (not supported in command line mode).
Version: The version of Fusion to render with.
Build: Force 32 or 64 bit rendering. Default is None.
Use Frame List In Comp: Enable this option to pull the frame range from the comp file.
Check Output: If checked, Deadline will check all savers to ensure they have saved their image file (not
supported in command line mode).
High Quality: Whether or not to render with high quality (not supported in command line mode).
Command Line Mode: Render using separate command line calls instead of keeping the scene loaded in
memory between tasks. Using this feature disables the High Quality, Proxy, and Check Saver Output options.
This uses the FusionCmd plug-in, instead of the Fusion one.
Submit Comp File: If this option is enabled, the flow/comp file will be submitted with the job, and then copied
locally to the slave machine during rendering.
In-app submitter submission options.
Render First And Last Frames First: The first and last frame of the flow/comp will be rendered first, followed
by the remaining frames in the sequence. Note that the Frame List above is ignored if this box is checked (the
frame list is pulled from the flow/comp itself).
Submit Comp File With Job: If this option is enabled, the flow/comp file will be submitted with the job, and
then copied locally to the slave machine during rendering.
Check Saver Output: If checked, Deadline will check all savers to ensure they have saved their image file (not
supported in command line mode).
Fusion Options
Fusion Render Executable: The path to the Fusion Render Slave executable used for rendering. Enter alternative paths on separate lines. Different executable paths can be configured for each version installed on your
render nodes.
Fusion Wait For Executable: If you use a proxy RenderSlave.exe, set this to the name of the renamed original.
For example, it might be set to RenderSlave_original.exe. Leave blank to disable this feature.
Fusion Version To Enforce: Deadline will only render Fusion jobs on slaves running this version of Fusion.
Use a ; to separate alternative versions. Leave blank to disable this feature.
Fusion Slave Preference File: The path to a global RenderSlave.prefs preference file that is copied over before
starting the Render Slave. Leave blank to disable this feature.
General Fusion Options
Load Comp Timeout: Maximum time for Fusion to load a comp, in seconds.
Script Connect Timeout: Amount of time allowed for Fusion to start up and accept a script connection, in
seconds.
FusionCmd
Fusion Render Executable: The path to the Fusion Console Slave executable used for rendering. Enter alternative paths on separate lines. Different executable paths can be configured for each version installed on your
render nodes.
Fusion Slave Preference File: The path to a global RenderSlave.prefs preference file that is copied over before
starting the Render Slave. Leave blank to disable this feature.
All your checks should be placed within this function. This function should return a message that contains the sanity
check warnings. If an empty message is returned, then it is assumed the sanity check was a success and no warning is
displayed to the user. Here is a simple example that checks if any CineFusion tools are being used in the comp file:
function CustomDeadlineSanityChecks(comp)
    local message = ""

    ----------------------------------------------------
    -- RULE: Check to make sure CineFusion is disabled
    ----------------------------------------------------
    cinefusionAttrs = fusion:GetRegAttrs("CineFusion")
    if not (cinefusionAttrs == nil) then
        cinefusion_regID = cinefusionAttrs.REGS_ID
        local i = nil
        for i, v in comp:GetToolList() do
            if (v:GetID() == cinefusion_regID) then
                if (v:GetAttrs().TOOLB_PassThrough == false) then
                    message = message ..
                        "CineFusion '" ..
                        v:GetAttrs().TOOLS_Name ..
                        "' should be disabled\n"
                end
            end
        end
    end

    return message
end
9.27.4 FAQ
Which versions of Fusion are supported?
Fusion 5 and later are supported.
What's the difference between the Fusion and FusionCmd plugins?
The Fusion plugin starts the Fusion Render Node in server mode and uses eyeonscript to communicate
with the Fusion renderer. Fusion and the comp remain loaded in memory between tasks to reduce overhead. This is usually the preferred way of rendering with Fusion.
The FusionCmd plugin renders with Fusion by executing command lines, and can be used by selecting
the Command Line mode option in the Fusion submitter. Because Fusion needs to be launched for each
task, there is some additional overhead when using this plugin. In addition, the Proxy, High Quality, and
Saver Output Checking features are not supported in this mode. However, this mode tends to print out
better debugging information when there are problems (especially when Fusion complains that it can't
load the comp), so we recommend using it to help figure out problems that may be occurring when using
the Fusion plugin.
Can I use both workstation and render node licenses to render jobs in Deadline?
You can use workstation licenses to render, you just need to do a little tweaking to get this to work nicely.
In the Plugin Configuration settings, you need to specify two paths for the render executable option. The
first path will be the render node path, and the second will be the actual Fusion executable path. You then
have to make sure that the render node is not installed on your workstations. Because you have specified
two paths, Deadline will only use the second path if the first one doesn't exist, which is why the render
node can't be installed on your workstations.
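For illustration, the two-path setup described above might look like the following in the Fusion Render Executable setting (these install locations are hypothetical; substitute your own), with the render node path first so it is preferred wherever it exists:

```
C:\Program Files\eyeon\Fusion Render Node 7\RenderSlave.exe
C:\Program Files\eyeon\Fusion 7\Fusion.exe
```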
Why is it not possible to have 2 instances of Fusion running?
With Fusion there is only one TCP/IP port to which eyeonscript (the scripting language used to run Fusion
renders on a slave computer) can connect. If Fusion is open on a slave computer then the port will be in
use and the Fusion Render Node will have to wait for the port to become available before rendering of
Fusion jobs on that slave can begin.
Fusion alone renders fine, but with Deadline, the slaves are failing on the last frame.
This is usually accompanied by this error message:
INFO: Checking file \\path\to\filename####.ext
INFO: Saver "SaverName" did not produce output file.
INFO: Expected file "\\path\to\filename####.ext" to exist.
The issue likely has to do with the processing of fields as opposed to full frames. When processing your
output as fields, the frames are rendered in two halves (for example, frame 1 would be rendered as 1.0 and
1.5). This error often occurs when the Global Timeline is not set to include the second half of the final
frame. Simply adding a .5 to the Global End Time should resolve this issue.
For example, let us assume that you are processing fields and your output range is 0 - 100. If the Global
Timeline is set to be 0.0 - 100.0, Fusion will render everything, but Deadline will fail on the last frame. If
the Global Timeline is set to be 0.0 - 100.5, Deadline will render everything just fine.
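The field-rendering arithmetic above can be sketched as follows (this is an illustration of the behavior described, not Deadline's or Fusion's actual code):

```python
def field_subframes(start, end):
    """Sub-frame times touched when a frame range is processed as fields:
    each frame n is rendered as two halves, n.0 and n.5."""
    times = []
    for n in range(start, end + 1):
        times.extend([float(n), n + 0.5])
    return times

# With an output range ending at frame 100, the final half-frame is 100.5,
# which lies outside a Global Timeline of 0.0 - 100.0.
print(field_subframes(99, 100))  # [99.0, 99.5, 100.0, 100.5]
```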
Is there a way to increase Deadline's efficiency when rendering Fusion frames that only take a few seconds to
complete?
Rendering these frames in groups (groups of 5, for example) tends to reduce the job's overall rendering
time. The group size can be set in the Fusion submission dialog using the Task Group Size option.
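A toy cost model shows why grouping helps (the numbers here are illustrative, not measured Deadline timings): each task pays a fixed startup overhead once, so grouping short frames amortizes that cost.

```python
import math

def job_wall_time(frames, frames_per_task, secs_per_frame, task_overhead_secs):
    """Total time if each task pays a fixed startup overhead once."""
    tasks = math.ceil(frames / frames_per_task)
    return tasks * task_overhead_secs + frames * secs_per_frame

# 100 five-second frames with 20s of per-task overhead:
print(job_wall_time(100, 1, 5, 20))  # one frame per task -> 2500s
print(job_wall_time(100, 5, 5, 20))  # groups of 5 frames ->  900s
```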
Does Fusion cache data between frames on the network, in the same way it does when rendering sequences
locally?
Deadline renders each block of frames using the eyeonscript comp.render function. The Fusion Render
Node is kept running between each block rendered, so when Fusion caches static results, they can be used
by the next block of frames to be rendered on the same machine.
Fusion seems to be taking a long time to start up when rendering. What can I do to fix this?
If you are running Fusion off a remote share, this can occur when there is a large number of files in the
Autosave folder. If this is the case, deleting the files in the Autosave folder should fix the problem.
Can I use relative paths in my Fusion comp when rendering with Deadline?
If your comp is on a network location, and everything is relative to that network path, you can use relative
paths if you choose the option to not submit the comp file with the job. In this case, the slaves will load the
comp directly over the network, and there shouldn't be any problems with the relative paths. Just make
sure that your render nodes resolve the paths the same way your workstation does.
right-clicking on the icon it creates, and choosing Preferences. From there, pick the Script option, and you
will see radio buttons, one of which says No login required. Make sure that is the option selected,
then click Save to save the preferences, and exit the Fusion Render Node.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation.
Fusion Options
9.28. Fusion Quicktime
Fusion Version: Select the version of Fusion to generate the Quicktime with.
Build: Force 32 or 64 bit rendering.
Load/Save Preset: Allows you to save your Fusion Quicktime options to a preset file, so that you can load them
again later.
Input/Output Options
Input Images: The frames you would like to generate the Quicktime from. If a sequence of frames exist in
the same folder, Deadline will automatically collect the range of the frames and will set the Frame Range field
accordingly.
Frames: The frame range used to generate the Quicktime.
Frame Rate: The frame rate of the Quicktime.
Override Start: Allows the starting frame in the Quicktime to be overridden. For example, if you are making
a Quicktime from images with a range of 101-150, you can override the start frame to be 1, and the range in the
Quicktime will appear as 1-50.
Output Movie File: The name of the Quicktime to be generated.
Codec: The codec format to use for the Quicktime.
On Missing Frames: What the generator will do when a frame is missing or is unable to load. There are 4
options:
Fail: Nothing will be generated until the missing frame becomes available.
Hold Previous: The last valid frame will be included instead of the missing frame.
Output Black: A black frame will be included instead of the missing frame.
Wait: The generator will wait until the missing frame becomes available.
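The Override Start remapping described above is simple frame arithmetic; as a sketch (not the submitter's actual code):

```python
def remap_frame(source_frame, source_start, override_start):
    """Displayed Quicktime frame number when Override Start is used."""
    return source_frame - source_start + override_start

# Images 101-150 with the start overridden to 1 appear as frames 1-50:
print(remap_frame(101, 101, 1))  # 1
print(remap_frame(150, 101, 1))  # 50
```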
Quicktime Options
BG Plate: Specify an optional background plate. The Quicktime will render using the selected file as the background.
Template: Specify an optional comp template. See the Template documentation below for more information.
Artist Name: If you have a text tool with "artist" in its name in the selected template comp, its text will be set
to the name that is specified.
Curve Correction: Select to turn on the color curves tool (available when using templates only).
Quality %: The quality of the Quicktime.
Proxy: The ratio of pixels to render (for example, if set to 4, one out of every four pixels will be rendered).
Gamma: The gamma level of the Quicktime.
Exposure Compensation: The stops value used to calculate the gain parameter of the Brightness/Contrast
tool. The gain parameter is calculated by using the value pow(2,stops).
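The pow(2, stops) relationship mentioned above can be written out directly (a sketch of the stated formula, nothing more):

```python
def gain_from_stops(stops):
    """Gain for the Brightness/Contrast tool from an exposure value in
    stops: gain = 2 ** stops (each stop doubles the light)."""
    return 2.0 ** stops

print(gain_from_stops(1))   # 2.0  (+1 stop doubles the gain)
print(gain_from_stops(-2))  # 0.25 (-2 stops quarters it)
```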
As you can see, this simple template consists of a loader, a saver, a text tool, and a merge tool. This template simply
merges the text tool with the loader so that "This is a test" appears in your Quicktime. You can create your own
template files, but they must meet the following requirements. As long as these requirements are met, you can add
whatever you like between the loader and the saver.
There must be exactly one loader and one saver.
The loader must have a dummy file name specified (the file doesn't have to exist).
9.28.4 FAQ
Which versions of Fusion are supported?
Fusion 5 and later are supported.
How is this different from submitting regular Quicktime jobs?
Regular Quicktime jobs are more generic, and provide more general Quicktime options. Fusion Quicktime
jobs are more customizable (i.e. using templates), but require Fusion to render.
9.29 Generation
9.29.1 Job Submission
You can submit Fusion comp jobs to Deadline from within Generation by installing the integrated submission script. The
instructions for installing the integrated submission script can be found further down this page.
In Generation, select the comp(s) you want to submit, and then right-click and select Submit.
This will bring up the submission window. Note that the submission window is only shown once, and all jobs that are
submitted will use the same job settings.
9.29. Generation
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Fusion options are:
Use Frame List In Comp: Uses the frame list defined in the comp files instead of the Frame List setting. If you
are submitting more than one comp from Generation, you should leave this option enabled unless you want the
Frame List setting to be used for each comp.
Proxy: The proxy level to use.
High Quality Mode: Whether or not to render with high quality.
Check Output: If checked, Deadline will check all savers to ensure they have saved their image file.
Version: The version of Fusion to render with.
Build: Force 32 or 64 bit rendering.
Command Line Mode: Render using separate command line calls instead of keeping the scene loaded in
memory between tasks. Using this feature disables the High Quality, Proxy, and Check Saver Output options.
This uses the FusionCmd plug-in, instead of the Fusion one.
Save the file. The next time you start up Generation, this script will be used when you select the Submit option
for the selected comps.
9.29.4 FAQ
Which versions of Generation are supported?
Generation 2 and later are supported.
9.30 Hiero
9.30.1 Job Submission
You can submit Nuke transcoding jobs to Deadline from within Hiero by installing the integrated submission script. The
instructions for installing the integrated submission script can be found further down this page.
To submit from within Hiero, open the Export window from the File menu, or by right-clicking on a sequence. Then
choose the Submit To Deadline option in the Render Background Tasks drop down and press Export.
9.30. Hiero
This will bring up the submission window. Note that the submission window is only shown once, and all jobs that are
submitted will use the same job settings.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Nuke specific options are:
Render With NukeX: Enable this option if you want to render with NukeX instead of Nuke.
Render Threads: The number of threads to use for rendering.
Continue On Error: If enabled, Nuke will attempt to keep rendering if an error occurs.
Maximum RAM Usage: The maximum RAM usage (in MB) to be used for rendering.
Use Batch Mode: If enabled, Deadline will keep the Nuke file loaded in memory between tasks.
Build To Force: Force 32 or 64 bit rendering.
to the Startup folder (~/.hi-
The next time you launch Hiero, there should be a Submit To Deadline option in the Hiero Export window, in the
Render Background Tasks drop down.
9.30.5 FAQ
The Hiero submitter submits jobs to the Nuke plug-in. See the Nuke Plug-in Guide for additional FAQs related to
Nuke.
Which versions of Hiero are supported?
Hiero 1.0 and later are supported.
How does the Deadline submission script for Hiero work?
The submission script submits transcoding jobs from Hiero to Deadline, which are rendered with the Nuke
plugin.
9.31 Houdini
9.31.1 Job Submission
You can submit jobs from within Houdini by installing the integrated submission script, or you can submit them from
the Monitor. The instructions for installing the integrated submission script can be found further down this page.
To submit from within Houdini, select Render -> Submit To Deadline.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Houdini specific options are:
ROP To Render:
Choose: Allows you to choose your ROP from the dropdown to the right.
Selected: Allows you to render each ROP that you currently have selected in Houdini (in the order that
you selected them).
All: Allows you to render every ROP in the Houdini file.
Ignore Inputs: If enabled, only the selected ROP will be rendered. No dependencies will be rendered.
Build to Force: Force 32 or 64 bit rendering.
Submit Wedges as Separate Jobs: If enabled, each Wedge in a Wedge ROP will be submitted as a separate job
with the current Wedge settings. This option is only enabled if the selected ROP is a Wedge ROP, or if all ROPs
are being rendered and at least one of them is a Wedge ROP.
9.31. Houdini
When submitting from the Monitor, you just need to enable the Override Export IFD option. When submitting from
within Houdini using the integrated submission script, you must first make sure that the ROPs you wish to export have
the Disk File option enabled in their properties, and then enable the Submit Dependent Mantra Standalone Job option
in the submitter. Note that if a ROP does not have the Disk File setting enabled, it will simply render the image, and
no dependent Mantra Standalone job will be submitted.
The general Deadline options for the Mantra Standalone job are explained in the Job Submission documentation. The
Mantra Standalone specific options are:
Mantra Threads: The number of threads to use for the Mantra standalone job.
Render Executables
Hython Executable: The path to the hython executable. It can be found in the Houdini bin folder. Enter
alternative paths on separate lines. Different executable paths can be configured for each version installed on
your render nodes.
Licensing Options
Slaves To Use Escape License: A list of slaves that should use a Houdini Escape license instead of a Batch
license. Use a , to separate multiple slave names, for example: slave001,slave002,slave003
Path Mapping (For Mixed Farms)
Enable Path Mapping: If enabled, Deadline will use Houdini's HOUDINI_PATHMAP environment variable
to perform path mappings on the contents of the Houdini scene file. This feature can be turned off if there are
no Path Mapping entries defined in the Repository Options.
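For illustration only (the mapping entries here are hypothetical; real ones come from the Path Mapping settings in the Repository Options), HOUDINI_PATHMAP takes a dictionary-style string of source-to-destination path pairs, so a Windows-to-Linux mapping on a mixed farm might be set like this:

```python
import os

# Hypothetical mapping of a Windows drive to a Linux mount point.
os.environ["HOUDINI_PATHMAP"] = '{ "Z:/projects" : "/mnt/projects" }'
print(os.environ["HOUDINI_PATHMAP"])
```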
Copy [Repository]\submission\Houdini\Client\DeadlineHoudiniClient.py to [Houdini Install Directory]\houdini\scripts\deadline\DeadlineHoudiniClient.py
On Mac OSX, copy the client script to the Houdini Framework folder
If the folder [Houdini Framework]/Versions/[Houdini Version]/Resources/houdini/scripts/deadline/ doesn't exist, create it.
Copy [Repository]\submission\Houdini\Client\DeadlineHoudiniClient.py to [Houdini Framework]/Versions/[Houdini Version]/Resources/houdini/scripts/deadline/DeadlineHoudiniClient.py
For example, this is what the last few lines of your MainMenuCommon file might look like:
</menuBar>
<addScriptItem id="h.deadline">
<parent>render_menu</parent>
<label>Submit To Deadline</label>
<scriptPath>$HFS/houdini/scripts/deadline/DeadlineHoudiniClient.py</scriptPath>
<scriptArgs></scriptArgs>
<insertAfter/>
</addScriptItem>
</mainMenu>
9.31.4 FAQ
Which versions of Houdini are supported by Deadline?
Houdini 9 and later are supported. To render with Houdini 7 or 8, use the Mantra Plug-in.
Which Houdini license(s) are required to render with Deadline?
Deadline uses Hython to render, which uses hbatch licenses. If those are not available, it will try to use a
Master License instead.
9.32 Lightwave
9.32.1 Job Submission
You can submit jobs from within Lightwave by installing the integrated submission script, or you can submit them
from the Monitor. The instructions for installing the integrated submission script can be found further down this page.
To submit from within Lightwave, select the Render Tab and click the SubmitToDeadline button on the left.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Lightwave specific options are:
Content Directory: The Lightwave Content directory. Refer to your Lightwave documentation for more information.
Config Directory: The Lightwave Config directory. Refer to your Lightwave documentation for more information.
Force Build: For Lightwave 9 and later, force rendering in 32 bit or 64 bit.
Use FPrime Renderer: Enable this option if you want to use the FPrime renderer instead of the normal Lightwave renderer.
Use ScreamerNet Rendering: ScreamerNet rendering keeps the Lightwave scene loaded in memory between
frames, which reduces overhead time when rendering.
Notes:
At the moment, there is no support for rendering animation (movie) files. Any animation options will be ignored,
and an RGB output and/or Alpha output must be specified in order to submit to Deadline.
9.32. Lightwave
In the Scene file, some versions of Lightwave use a number to specify the output file type and some use the
actual file type extension (.tif, .tga, etc). In the versions that use the actual file type extension, individual
rendered images can be viewed from the Monitor task list by right-clicking on them.
For information on how to properly set up your network for Lightwave rendering, see the ScreamerNet section
of your Lightwave documentation. When Lightwave is properly configured for ScreamerNet rendering, it will
then render properly through Deadline.
From here, you can set the list of executables that will be used for rendering. To get a more detailed description of
each setting, simply hover the mouse cursor over a setting and a tool tip will be displayed.
Click the Edit menu in the top-left corner and select the Edit Menu Layout... option.
In the Command list on the left, expand the Plug-ins section in Lightwave 8 or the Additional section in Lightwave 9 and later, and find the DeadlineLightwaveClient plugin. Drag and drop it into the Menus list in the
Render section. Click Done.
Click the Render tab. There should be a DeadlineLightwaveClient button on the right. If there is not, check to
make sure you placed the DeadlineLightwaveClient plugin in the correct section.
9.32.4 FAQ
Which versions of Lightwave are supported?
Lightwave versions 8 and later are supported. On Mac OSX, both the PPC and Universal Binary versions
work. However, the integrated Lightwave submission script only works with the Universal Binary version.
Lightwave 10 integrated submitter crashes with Deadline 5.0 and older on Mac OSX.
Due to an API change in LightWave, previous integrated submission scripts will not work under LightWave 10 on OSX. This is fixed in Deadline 5.1.
Does Deadline support the FPrime renderer?
Yes. FPrime has its own net rendering application called wsn.exe, which can be configured in the Lightwave plugin configuration. When you submit your Lightwave job, just make sure to have the Use FPrime
Renderer option checked.
When rendering with FPrime, I get an error that it can't create a temporary config directory.
This can occur when the job is using a shared Config folder on the network. FPrime tries to create a
temporary config directory in this shared folder, and this can fail if many slaves are trying to access that
Config folder at the same time.
To avoid this problem, we suggest enabling the FPrime Use Local Config option in the Lightwave Plugin
Configuration, which can be accessed from the Monitor while in Super User mode by selecting Tools ->
Configure Plugins. When this option is enabled, Deadline will copy the contents of the shared Config
folder to a local folder, and this is the Config folder that FPrime will use.
What does the Use ScreamerNet Rendering option in the submission dialog do?
When using ScreamerNet rendering, the Lightwave scene is kept loaded in memory between each frame
for a job, which greatly reduces the overhead of having to load the scene at the beginning of each frame.
Does Deadline work if one renames the Lightwave configuration files in the configuration directory?
Currently, Deadline assumes that you have not renamed the Lightwave configuration files in the Lightwave
configuration directory.
9.33 LuxRender
9.33.1 Job Submission
You can submit LuxRender jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The LuxRender specific options
are:
LXS File: The file to render.
Threads: The number of threads to use. Specify 0 to use the same number of threads as there are CPUs.
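The "0 means one thread per CPU" rule can be pictured as follows (an illustrative sketch; Deadline resolves this internally):

```python
import os

def resolve_threads(requested):
    """Resolve a Threads setting where 0 means 'one thread per CPU'."""
    return os.cpu_count() if requested == 0 else requested

print(resolve_threads(4))  # 4
print(resolve_threads(0))  # the machine's CPU count
```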
9.33. LuxRender
Render Executables
Luxrender Executable: The path to the luxconsole executable file used for rendering. Enter alternative paths
on separate lines.
9.33.3 FAQ
Is LuxRender supported by Deadline?
Yes.
9.34 LuxSlave
9.34.1 Job Submission
You can submit LuxRender Slave jobs from the Monitor, which can be used to reserve render nodes for distributed
rendering. Note that you will need to manually configure/update your locally running LuxRender UI network queue to
point at the corresponding Deadline slaves or IP addresses, using the same port number.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The LuxSlave specific options are:
Maximum LC Slaves: The maximum number of Luxconsole Slaves to reserve for distributed rendering.
Port Number: Override the default Luxconsole Slave TCP port number of 18018 to use.
Threads: The number of threads to use. Specify 0 to use the same number of threads as there are CPUs.
Verbosity Level: The level of verbosity to use.
9.34. LuxSlave
Console Executables
Luxconsole Executable: The path to the luxconsole executable file used for rendering. Enter alternative paths
on separate lines.
Luxconsole Slave Options
Write film to disk before transmitting: Write film to disk before transmitting.
Specify the cache directory to use: Specify the local cache directory to use instead of the default (the local
user's temp directory).
Slave Process Handling
Handle Existing Slave Process: Either Do Nothing, FAIL on existing Slave process or KILL the existing Slave
process if already running.
Slave Session Timeout
Slave Session Auto Timeout Enable: Enable to force Slave Session to be marked as complete after a Slave
Session closes on a Deadline slave.
Slave Session Auto Timeout (Seconds): Slave Session minimum timeout before last closed Slave Session is
marked as complete on Deadline slave (seconds).
9.34.3 FAQ
Is Luxconsole DR Slave supported by Deadline?
Yes.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Mantra specific options are:
IFD File: Specify the Mantra IFD file(s) to render.
If you are submitting a sequence of .IFD files, select one of the numbered frames in the sequence, and
the frame range will automatically be detected if Calculate Frames From IFD File is enabled. The
frames you choose to render should correspond to the numbers in the .IFD files.
Output File: The output file path.
Version: The Mantra version to render with.
Threads: The number of threads to use for rendering.
Additional Arguments: Additional command line arguments to pass to the renderer.
Tile Rendering Options
Enable Tile Rendering to split up a single frame into multiple tiles.
Enable Tile Rendering: If enabled, the frame will be split into multiple tiles that are rendered individually and
can be assembled after.
Tiles In X: Number of horizontal tiles.
Tiles In Y: Number of vertical tiles.
Single Frame Tile Job Enabled: Enable to submit all tiles in a single job.
Single Job Frame: The frame that will be split up.
Submit Dependent Assembly Job: Submit a job dependent on the tile job that will assemble the tiles.
Cleanup Tiles after Assembly: If selected, the tiles will be deleted after assembly.
Error on Missing Tiles: If enabled, the assembly job will fail if any of the tiles are missing.
Assemble Over: Determine what the Draft Tile Assembler should assemble over, be it a blank image, previous
output, or a specified file.
Error on Missing Background: If enabled, the job will fail if the background file is missing.
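The Tiles In X / Tiles In Y split can be sketched as simple pixel-region arithmetic (illustrative only; Deadline's own tile math may differ in edge handling):

```python
def tile_regions(width, height, tiles_x, tiles_y):
    """Split a frame into tiles_x * tiles_y (x0, y0, x1, y1) pixel regions."""
    regions = []
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            regions.append((tx * width // tiles_x,
                            ty * height // tiles_y,
                            (tx + 1) * width // tiles_x,
                            (ty + 1) * height // tiles_y))
    return regions

# 2x2 tiles of a 1920x1080 frame -> four 960x540 regions
print(tile_regions(1920, 1080, 2, 2))
```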
Render Executables
Mantra Executable: The path to the Mantra executable file used for rendering. Enter alternative paths on
separate lines. Different executable paths can be configured for each version installed on your render nodes.
Path Mapping (For Mixed Farms)
Enable Path Mapping: If enabled, Deadline will use Houdinis HOUDINI_PATHMAP environment variable
to perform path mappings on the contents of the IFD file. This feature can be turned off if there are no Path
Mapping entries defined in the Repository Options.
9.35.3 FAQ
Which versions of Mantra are supported by Deadline?
Mantra for Houdini 7 and later is supported by Deadline.
9.36 Maxwell
9.36.1 Job Submission
You can submit Maxwell jobs from the Monitor.
9.36. Maxwell
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Maxwell specific options are:
Maxwell Options
Maxwell File(s): The Maxwell files to be rendered. Can be a single file, or a sequence of files.
Version: The version of Maxwell to render with.
Verbosity: Set the amount of information that Maxwell should output while rendering.
Single Frame Job: This should be checked if you're submitting a single Maxwell file only.
Build To Force: Force 32 bit or 64 bit rendering.
Threads: The number of threads to use during rendering. Specify 0 to use the default setting.
Co-op Rendering
Cooperative Rendering: Enable this to use Maxwell's co-op rendering feature to render the same image across
multiple computers. You can then use Maxwell to combine the resulting output after the rendering has completed.
Split Co-op Renders Into Separate Jobs: By default, a co-op render is submitted as a single job, where each
task represents a different seed. If this option is enabled, a separate job will represent each seed.
Adjust Sampling Overrides For Cooperative Rendering: If this option is enabled, the sampling level given
to each slave will be reduced accordingly to ensure that the final merged sampling level will match the requested
one.
Number of Co-op Renders: The number of co-op render jobs to submit to Deadline.
Auto-Merge Files: Enable this option to auto-merge the co-op renders into the final image.
Fail On Missing Intermediate Files: If enabled, the auto-merge will fail if any co-op renders are missing.
Delete Intermediate Files: If enabled, the co-op renders will be deleted after the final image is merged together.
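The sampling-level adjustment above can be pictured with a sketch that assumes (as Maxwell's logarithmic sampling-level scale suggests) each +1 SL step doubles the sample count, so merging N equal renders raises the effective level by log2(N). This is an illustration of the idea, not Deadline's actual formula:

```python
import math

def per_node_sampling_level(target_sl, num_nodes):
    """Reduced sampling level for each co-op node so that merging
    num_nodes equal renders reaches target_sl (assumes each SL step
    doubles the sample count)."""
    return target_sl - math.log2(num_nodes)

# Four co-op nodes targeting SL 16 would each render to SL 14.
print(per_node_sampling_level(16, 4))  # 14.0
```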
Output Options
Output MXI File: Optionally configure the output path for the MXI file, which can be used to resume the render
later. Note that this is required for co-op rendering.
Output Image File: Optionally configure the output path for the image file.
Render Camera: Optionally specify which camera to render with.
Enable Local Rendering: If enabled, Deadline will save the output locally and then copy it to the final network
location.
Resume Rendering From MXI File: If enabled, Maxwell will use the specified MXI file to resume the render
if it exists. If you suspend the job in Deadline, it will pick up from where it left off when it resumes.
Overrides
Override Time: Enable to override the Time setting in the Maxwell file.
Override Sampling: Enable to override the Sampling setting in the Maxwell file.
Extra Sampling (requires Maxwell 3.1 or later)
Override Extra Sampling: If the extra sampling settings should be overridden.
Enabled: If extra sampling is enabled.
Sampling Level: The extra sampling level.
Render Executable: The path to the Maxwell executable file used for rendering. Enter alternative paths on
separate lines. Different executable paths can be configured for each version installed on your render nodes.
Merge Executable: The path to the Maxwell executable file used for merging. Enter alternative paths on
separate lines. Different executable paths can be configured for each version installed on your render nodes.
9.36.3 FAQ
Which version of Maxwell is supported by Deadline?
Versions 2 and later are supported.
Is Co-op Rendering supported?
Yes.
Can I resume from a previous Maxwell render?
If you have the Resume Rendering From MXI File option enabled when submitting the job, Maxwell will
use the specified MXI file to resume the render if it exists. If you suspend the job in Deadline, it will pick
up from where it left off when it resumes.
9.37 Maya
9.37.1 Job Submission
You can submit jobs from within Maya by installing the integrated submission script, or you can submit them from
the Monitor. The instructions for installing the integrated submission script can be found further down this page.
To submit from within Maya, select the Thinkbox shelf and press the green button there. If the green icon is missing,
you can delete the shelf and restart Maya to get it back.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Maya specific options are:
9.37. Maya
Render Options
Camera: Select the camera to render with. Leaving this blank will force Deadline to render using the default
camera settings (including multiple camera outputs).
Project Path: The Maya project folder (this should be a shared folder on the network).
Output Path: The folder where your output will be dumped (this should be a shared folder on the network).
Maya Build: Force 32 bit or 64 bit rendering.
Use MayaBatch Plugin: This uses our new MayaBatch plugin that keeps the scene loaded in memory between
frames, thus reducing the overhead of rendering the job. This plugin is no longer considered experimental.
Ignore Error Code 211: This allows a Maya task to finish successfully even if the Maya command line renderer
returns the non-zero error code 211 (not available when using the MayaBatch plugin). Sometimes Maya will
return this error code even after successfully saving the rendered images.
Startup Script: Maya will source the specified script file on startup (only available when using the MayaBatch
plugin).
Command Line Args: Specify additional command line arguments to pass to the Maya command line renderer
(not available when using the MayaBatch plugin).
Deadline Job Type: Select the type of Maya job you want to submit. The available options are covered in the
next few sections.
Maya Render Job
If rendering a normal Maya job, select the Maya Render Job type.
General Options
The following options are available:
Threads: The maximum number of CPUs per machine to render with.
Frame Number Offset: Uses Maya's frame renumbering option to offset the frames that are rendered.
Submit Render Layers As Separate Jobs: Enable to submit each layer in your scene as a separate job.
Override Layer Job Settings: If submitting each layer as a separate job, enable this option to override the job
name, frame list, and task size for each layer. When enabled, the override dialog will appear after you press
Submit.
Submit Cameras As Separate Jobs: Enable to submit each camera as a separate job.
Ignore Default Cameras: Enable to have Deadline skip over cameras like persp, top, etc., when submitting each
camera as a separate job (even if those cameras are set to renderable).
Enable Local Rendering: If enabled, Deadline will render the frames locally before copying them over to the
final network location. This has been known to improve the speed of Maya rendering in some cases.
Strict Error Checking: Enable this option to have Deadline fail Maya jobs when Maya prints out any error
or warning messages. If disabled, Deadline will only fail on messages that it knows are fatal.
Render Half Frames: If checked, frames will be split into two using a step of 0.5. Note that frame 0 will save
out images 0 and 1, frame 1 will save out images 2 and 3, frame 2 will save out images 4 and 5, etc.
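The frame-to-image mapping described above can be sketched as a small helper (the function name is hypothetical, not part of Deadline):

```python
def half_frame_images(frame):
    """Map a whole frame number to the pair of image numbers written
    when rendering half frames with a step of 0.5:
    frame 0 -> images 0 and 1, frame 1 -> images 2 and 3, and so on."""
    return (frame * 2, frame * 2 + 1)
```

For example, `half_frame_images(2)` gives `(4, 5)`, matching the description above.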
Mental Ray Verbosity: Set the verbosity level for Mental Ray renders.
Auto Memory Limit: If enabled, Mental Ray will automatically detect the optimal memory limit when rendering.
Memory Limit: Soft limit (in MB) for the memory used by Mental Ray (specify 0 for unlimited memory).
If rendering with VRay, there is an additional VRay Options section under the Maya Options:
Auto Memory Limit Detection: If enabled, Deadline will automatically detect the dynamic memory limit for
VRay prior to rendering.
Memory Buffer: Deadline subtracts this value from the system's unused memory to determine the dynamic
memory limit for VRay.
If rendering with Redshift, there will be an additional Redshift Options section under the Maya Options:
GPUs Per Task: If set to 0 (the default), then Redshift will be responsible for choosing the GPUs to use for
rendering. If this is set to 1 or greater, then each task for the job will be assigned specific GPUs. This can be
used in combination with concurrent tasks to get a distribution over the GPUs.
For example:
if this is set to 1, then tasks rendered by the Slave's thread 0 would use GPU 0, thread 1 would use GPU
1, etc.
if this is set to 2, then tasks rendered by the Slave's thread 0 would use GPUs {0,1}, thread 1 would use
GPUs {2,3}, etc.
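The affinity rule above can be sketched as follows (a simplified illustration, not Deadline's actual implementation):

```python
def gpus_for_task(thread_index, gpus_per_task):
    """Sketch of the GPUs Per Task rule: a value of 0 means Redshift
    chooses the GPUs itself; otherwise each concurrent-task thread
    gets its own contiguous block of GPU ids."""
    if gpus_per_task == 0:
        return None  # Redshift picks the GPUs
    start = thread_index * gpus_per_task
    return list(range(start, start + gpus_per_task))
```

With 1 GPU per task, thread 1 gets `[1]`; with 2 GPUs per task, thread 1 gets `[2, 3]`.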
Mental Ray Export Job
If rendering a Mental Ray Export job, select the Mental Ray Export Job type.
You have the option to submit a dependent Mental Ray Standalone job that will render the exported mi files after the
export job finishes. The Mental Ray specific job options are:
Threads: The number of threads to use for rendering.
Frame Offset: The first frame in the input MI file being rendered, which is used to offset the frame range being
passed to the mental ray renderer.
Mental Ray Build: You can force 32 or 64 bit rendering.
Enable Local Rendering: If enabled, the frames will be rendered locally, and then copied to their final network
location.
Command Line Args: Specify additional command line arguments you would like to pass to the mental ray
renderer.
Arnold Export Job
You have the option to submit a dependent Arnold Standalone job that will render the exported .ass files after the
export job finishes. The Arnold Standalone specific job options are:
Local Export to Arnold: If this option is set to true, the Arnold .ass files will be exported locally.
Threads: The number of threads to use for rendering.
Command Line Args: Specify additional command line arguments you would like to pass to the Arnold renderer.
Maxwell Export Job
If rendering a Maxwell Export job, select the Maxwell Export Job type.
However, if you are using absolute paths in your Maya scene file, it is possible for Deadline to swap them as well, but
you must save your scene file as a Maya ASCII (.ma) file. Because .ma files are ASCII files, Deadline can read them and
swap out paths as necessary. If they're saved as Maya Binary (.mb) files, they can't be read, and can't have their paths
swapped.
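The path-swapping idea above amounts to prefix replacement inside the ASCII scene text. Here is a naive sketch (the function name and the mapping in the example are hypothetical, not Deadline's actual path-mapping rules):

```python
def map_paths_in_ma(scene_text, path_map):
    """Naive sketch of cross-platform path swapping in the contents of
    an ASCII .ma scene file: replace each source path prefix with its
    mapped equivalent."""
    for src, dst in path_map.items():
        scene_text = scene_text.replace(src, dst)
    return scene_text
```

A Windows reference such as `P:/tex/wood.tga` could be rewritten for a Linux slave by mapping `P:/` to `/mnt/projects/`.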
Render Executables
Maya Executable: The path to the Maya executable file used for rendering. Enter alternative paths on separate
lines. Different executable paths can be configured for each version installed on your render nodes.
Maxwell For Maya (version 2 and later)
Slaves To Use Interactive License: A list of slaves that should use an interactive Maxwell license instead of a
render license. Use a , to separate multiple slave names, for example: slave001,slave002,slave003
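Deciding whether a given slave falls under this setting is a simple membership check on the comma-separated list. A sketch (the function name is hypothetical):

```python
def uses_interactive_license(slave_name, setting):
    """Check whether a slave name appears in the comma-separated
    'Slaves To Use Interactive License' setting, ignoring case and
    surrounding whitespace."""
    names = [s.strip().lower() for s in setting.split(",") if s.strip()]
    return slave_name.lower() in names
```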
Path Mapping For ma Scene Files (For Mixed Farms)
Enable Path Mapping For ma Files: If enabled, a temporary ma file will be created locally on the slave for
rendering and Deadline will do path mapping directly in the ma file.
Debugging
Log Script Contents To Render Log: If enabled, the full script that Deadline is passing to Maya will be written
to the render log. This is useful for debugging purposes.
MayaCmd
Render Executables
Maya Executable: The path to the Maya executable file used for rendering. Enter alternative paths on separate
lines. Different executable paths can be configured for each version installed on your render nodes.
Maxwell For Maya (version 2 and later)
Slaves To Use Interactive License: A list of slaves that should use an interactive Maxwell license instead of a
render license. Use a , to separate multiple slave names, for example: slave001,slave002,slave003
Path Mapping For ma Scene Files (For Mixed Farms)
Enable Path Mapping For ma Files: If enabled, a temporary ma file will be created locally on the slave for
rendering and Deadline will do path mapping directly in the ma file.
On Mac OS X, copy the file [Repository]/submission/Maya/Client/DeadlineMayaClient.mel to
[Maya Install Directory]/Maya.app/Contents/scripts/startup.
If you do not have a userSetup.mel in /Users/[USERNAME]/Library/Preferences/Autodesk/maya/scripts, copy the file [Repository]/submission/Maya/Client/userSetup.mel to /Users/[USERNAME]/Library/Preferences/Autodesk/maya/scripts.
If you have a userSetup.mel file, add the following line to the end of this file:
source "DeadlineMayaClient.mel";
On Linux, copy the file [Repository]/submission/Maya/Client/DeadlineMayaClient.mel to [Maya Install Directory]/Maya.app/Contents/scripts/startup. If you do not have a userSetup.mel in /home/[USERNAME]/maya/scripts,
copy the file [Repository]/submission/Maya/Client/userSetup.mel to /home/[USERNAME]/maya/scripts. If you have
a userSetup.mel file, add the following line to the end of this file:
source "DeadlineMayaClient.mel";
The next time Maya is started, a Deadline shelf should appear with a green button that can be clicked on to launch the
submitter.
If you don't see the Deadline shelf, it's likely that Maya is loading another userSetup.mel file from somewhere. Maya
can only load one userSetup.mel file, so you either have to configure Maya to point to the file mentioned above, or you
have to modify the file that Maya is currently using as explained above. To figure out which userSetup.mel file Maya
is using, open up Maya and then open up the Script Editor. Run this command:
whatIs userSetup.mel
The available Deadline globals are defined in the SavePersistentDeadlineOptions function in the SubmitMayaToDeadline.mel script. These can be used to set the initial values in the submission dialog.
You can also create a CustomPostSanityChecks.mel file alongside the main SubmitMayaToDeadline.mel in the
[Repository]\submission\Maya\Main folder. It can be used to run some additional checks after the user clicks the
Submit button in the submitter. It must define a global proc called CustomPostSanityCheck() that takes no arguments,
and must return 0 or 1. If 1 is returned, the submission process will continue; otherwise it will be aborted. Here is an
example script:
global proc int CustomPostSanityCheck()
{
    // Don't allow mayaSoftware jobs to be submitted
    if( GetCurrentRenderer() == "mayaSoftware" )
        return 0;
    return 1;
}
9.37.5 FAQ
Do I need to install Maya, and all required 3rd party plugins, on each machine that will render?
Yes. Traditionally, Maya and all required scripts and 3rd party plugins should be installed and
licensed (where applicable) on each machine that will take part in network rendering. However, VFX
studios often run a Linux platform and install software onto a centralized file server (one with the
performance to support this configuration), with all local machines configured to point at this central
location. 3rd party plugins/scripts can then be added to this central server path in combination with
floating licenses. This level of custom deployment and configuration is beyond the scope of Thinkbox
support, and you would be best advised to engage an approved Autodesk reseller or Autodesk directly
on best practices here. Here are some URL links,
which may be of assistance. If you are able to install and successfully run Maya & all your plugins/scripts
from a network location in your studio, then Deadline will be able to support network rendering from
this location as well. Simply update the MayaBatch & MayaCmd plugins with the new executable path
location using Deadline Monitor, click on Tools > Super User Mode > Configure Plugins... >
MayaBatch or MayaCmd.
How to install Maya on a network share
Maya Environment Variables
Which versions of Maya are supported?
Maya versions 2010 and later are all supported.
Which Maya renderers are supported?
All Maya renderers should work fine with Deadline. The renderers that are known to work with Deadline are 3Delight, Arnold, Caustic Visualizer, Final Render, Gelato, Krakatoa, Maxwell, MayaSoftware,
MayaHardware, MayaVector, Mental Ray, Octane, RedShift, Renderman, Renderman RIS, Turtle, and
VRay. If you see a Maya renderer that's not on this list, email Deadline Support and let us know!
Does the Maya plugin support Tile Rendering?
Yes. See the Region Rendering Options section above for more details.
Does the Maya plugin support multiple arbitrary-sized, multi-resolution Tile Rendering for both stills and animations with automatic re-assembly, including the use of multi-channel image formats and arbitrary Render
Passes? (incl. VRay/Arnold/MR support?)
Yes. We call it Jigsaw and it's unique to the Deadline system! See the Region Rendering Options
section above for more details.
Which Maya application should I select as the render executable in the MayaCmd plugin configuration?
Select the Render.exe application. This is Maya's command line renderer.
Which Maya application should I select as the render executable in the MayaBatch plugin configuration?
Select the MayaBatch.exe application. This is Maya's batch renderer.
What is the MayaBatch plugin, and how is it different from the MayaCmd plugin?
This plugin keeps the Maya scene loaded in memory between frames, thus reducing the overhead of
rendering the job. This is the recommended plugin to use, but if you run into any problems, you can
always try using the MayaCmd plugin.
Why is each task of my job rendering the same frame(s)?
This happens if you have the Renumber Frames option enabled in your Maya render settings. Each task
is a separate batch, and if Renumber Frames is enabled, each batch will start at that frame number.
I have a multi-core machine, but when rendering the machine isn't using 100% of the CPU. What can I do?
When submitting the job to Maya, set the Threads option to 0. This will instruct Maya to use the optimal
number of threads when rendering based on the machine's core count.
Does Deadline support Maya render layers?
Yes. You can either submit one job that renders all the layers, or you can submit a single job per layer.
Can I render scenes that use Maya Fur?
A recommended setup for Maya is to have your project folder on a shared location that all of your machines can see (whether it be a Windows folder share or a mapped path), then create your Maya scene
in this project folder. This way, when you submit the job, you can specify the shared project path in the
submission dialog, and all of your slave machines will be able to see it (and therefore see the Maya Fur
folders within the project folder).
Can I make use of the particle cache during network renders?
Yes you can. All that is necessary to do this is to make your scene's project directory network-accessible
by your slaves. For a guide to setting up particle caches, check out this guide on the ResPower Website
that describes the proper set-up procedure for the Maya particle cache.
When clicking on one of the folder browser buttons in the Maya submission dialog, I sometimes get an error.
There is an article on this problem. It's a .NET problem that seems to randomly occur when the user
specifies a path of more than 130 characters, but it looks like Microsoft provides a hotfix for it.
When submitting the job from Maya, if I check the Submit Each Render Layer As A Separate Job box, no jobs
are submitted when I click Submit.
The render layers you want to submit need to be set to renderable (the letter R needs to be there next to
the render layer) for the submitter to submit the layer. Note that render layers should not be confused with
display layers. Deadline only deals with render layers. It does not use the Maya option to render only the
content of a specific display layer.
I'm trying to render a certain frame range from Maya, but Deadline is rendering the entire frame range set in
the Maya render globals.
If you have the Submit Each Render Layer As A Separate Job box checked, Deadline grabs the frame
information from each individual layer's render globals when submitting the job. If unchecked, Deadline
will use the info from the Frame List in the submission dialog.
Rendering Maya scenes with Deadline is taking forever in comparison to a local render of the same file.
One thing you can try is ensuring that the Local Rendering option is enabled when submitting the job to
Deadline. This forces Maya to render the frame locally, then copy it to the final destination after. This has
been known to improve rendering speeds.
How do I configure Mental Ray Satellite to render Mental Ray for Maya jobs with Deadline?
1. Choose a satellite master machine, then modify the maya.rayhosts file of that machine so that it uses the slaves
you want.
2. Only put the master machine in Deadline.
3. Submit a job, and make sure that the job will be picked-up by the master machine you have setup. Use pools to
do so.
4. In the job property page of the Maya job, in the Maya tab, you could add the following line in the additional
arguments field: -rnm 1
This -rnm 1 means "render no master" is true, which will force the master not to participate in the rendering, but only
submit and receive the render tasks. You will get better results this way.
You could also use -rnm 0, which means "render no master" is false, and force 1 CPU on the master (if your master is a
dual CPU machine) so you have 1 CPU free on the master to dispatch the task. In short, you should always have 1 CPU free on the
master machine for dispatching, or else your render time will suffer.
Can I submit MEL or Python Maya script files to Deadline?
Yes, you can submit your own custom scripts from the Advanced tab in the Maya submission script in the
Monitor Submit menu.
Can I Perform Fume FX Simulations With Deadline?
Yes, and it's supported by both our MayaBatch & MayaCmd plugins. To do so, follow these steps:
1. Requires min. FumeFX for Maya v3.5.4
Certain versions of Maya come with satellite licenses for Mental Ray, but this requires some additional
setting up to enable network rendering. It's probably best to contact the Maya support team about this.
Exception during render: Renderer returned non-zero error code, 211
When Maya prints this error message, it usually means that Maya can't access a particular path because it
either doesn't exist or Maya doesn't have the necessary read/write permissions to access it. This error tends
to occur when Maya is either loading the scene or other referenced data, or when saving the final output
images.
When you get this error, you should check the slave log that is included with the error report. If it is a path
problem, Maya shows which path it wasn't able to access. Check to make sure that the slave machine
rendering the job can see the path, and that it has the necessary permissions to read/write to it. If it's
not a path problem, the slave log should still provide some useful information that can help explain the
problem.
There is also the case where Maya exits with this error code after successfully rendering the images. If
this is the case, there are two things to try:
1. When you submit the job, enable the option to ignore error code 211.
2. When you submit the job, enable the MayaBatch option. Deadline doesn't check error codes in this
case.
Cannot open renderer description file vrayRenderer.xml
We are not sure if this is specific to a studio's installation of V-Ray for Maya on OSX (e.g. a studio's
custom environment variables might be confusing the V-Ray installer) or if this is just a bug in the V-Ray
installer specifically on OSX. The issue has been reported to support@chaosgroup.com. Currently, Maya
20xx on OSX (where xx indicates any year of Maya that V-Ray ships support for) has 3 rendererDesc
directories in the following locations:
1. /Applications/Autodesk/maya20xx/Maya.app/Contents/bin/rendererDesc/
2. /Applications/Autodesk/maya20xx/Maya.app/Contents/MacOS/rendererDesc/
3. /Applications/Autodesk/maya20xx/bin/rendererDesc
The V-Ray installer adds the vrayRenderer.xml file to locations (2) & (3). However, Maya requires this
file to reside primarily in location(s) (1) and/or (2).
There are a couple of ways to resolve this issue while waiting for a fix from Chaos Group.
Ensure your slaves have the environment variable MAYA_RENDER_DESC_PATH defined and
pointing to: /Applications/Autodesk/maya20xx/Maya.app/Contents/MacOS/rendererDesc.
Alternatively, ensure the user shell of the Deadline Slave has this setting exported such as:
export MAYA_RENDER_DESC_PATH=
/Applications/Autodesk/maya20xx/Maya.app/Contents/MacOS/rendererDesc
Finally, another solution is to ensure on each of your rendernodes you copy the vrayRenderer.xml
from location (2) to location (1).
Exception during render: Error: Cannot find procedure getStrokeUVFromPoly
This error can occur when rendering with Paint Effects. When you write prerender/postrender scripts, be
sure to use Maya commands and not function wrappers that the GUI posts, since a huge number of functions
don't get loaded when rendering in batch mode.
For a quick fix, add the following before the call to the prerender script's main functions:
source "getStrokes";
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Adobe Media Encoder specific options are:
Input Path: The path to the media to be encoded. It may be a media file in any of the formats supported by
AME, a Premiere Pro project (.prproj), or an FCP XML project (.xml).
Output Path: A path to a file that will contain the encoded result.
Preset File: A path to an AME preset .epr file.
Overwrite Output If Present: If enabled, Adobe Media Encoder will overwrite any existing file in the output
location that has the same name as the output file.
Submit Preset With Job: If enabled, the Preset File will be uploaded to the Deadline Repository with the Job.
Enable this if the Preset File is local.
Render Executables
Web Service Executable: The path to the Adobe Media Encoder Web Service executable file used for encoding.
Enter alternative paths on separate lines.
Web Service
Deadline Slaves that render Media Encoder jobs use the media encoder web service. To modify the web service port
number or address you need to modify the ame_webservice_config.ini file.
Example of the config file:
# leaving IP blank/commented out will default to whatever IP address the
# web service is able to sniff out
ip = 127.0.0.1
port = 8080
# restart_threshold:
The ame_webservice_config.ini file is found in the same directory as the Adobe Media Encoder Web Service executable file. Note that the default port being used is 8080.
9.38.3 FAQ
Is Media Encoder supported by Deadline?
Yes.
Render Executables
Render Executable: The path to the Mental Ray Standalone executable file used for rendering. Enter alternative
paths on separate lines.
Render Options
Error Codes To Ignore: Mental Ray error codes that Deadline should ignore and instead assume the render has
finished successfully. Use a ; to separate the error codes.
Treat Exit Code 1 As Error: If set, then Exit Code 1 will not be treated as success.
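How these two settings combine can be sketched as a small decision function (an illustration of the documented behaviour, not Deadline's actual implementation; the function name is hypothetical):

```python
def should_fail(exit_code, error_codes_to_ignore, treat_exit_1_as_error=False):
    """Sketch: a non-zero exit code fails the task unless it appears in
    the ';'-separated 'Error Codes To Ignore' list, and exit code 1 is
    only an error when 'Treat Exit Code 1 As Error' is set."""
    if exit_code == 0:
        return False
    if exit_code == 1 and not treat_exit_1_as_error:
        return False
    ignored = {int(c) for c in error_codes_to_ignore.split(";") if c.strip()}
    return exit_code not in ignored
```

For instance, with "211;104" configured, an exit code of 211 would not fail the task.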
9.39.3 FAQ
Is Mental Ray Standalone supported by Deadline?
Yes.
Can I submit a sequence of MI files that each contain one frame, or must I submit a single MI file that contains
all the frames?
Deadline supports both methods.
When rendering a single MI file that contains all the frames, the frame range I tell Deadline to render doesn't
match up with the files that are actually rendered.
When submitting a single MI file that contains all the frames, make sure the Input MI File Start Frame
option is set to the first frame that is in the MI file. This value is used to offset the frame range being
passed to the mental ray renderer.
Mental Ray is printing out an error that is causing Deadline to fail the render, but when I render from the
command line outside of Deadline, the error is still printed out, but the render finishes successfully.
By default, Deadline fails a Mental Ray job whenever it prints out an error. However, you can configure
the Mental Ray plugin to ignore certain error codes, which are printed out alongside the error in the error
log.
After a frame is rendered, Deadline takes a long time releasing the task before it moves on to another. What's going on?
This can occur when a single MI file that contains all the frames is submitted to Deadline. Try exporting
your frames to a sequence of MI files (one per frame) and submit the sequence of MI files to Deadline
instead.
9.40 Messiah
9.40.1 Job Submission
You can submit jobs from within Messiah by installing the integrated submission plugin, or you can submit them from
the Monitor. The instructions for installing the integrated submission script can be found further down this page.
To submit from within Messiah, select the Customize tab, and then from the drop down, select Submit To Deadline.
Click the Submit Messiah Job button to launch the submitter.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Messiah specific options are:
Messiah File: The scene file to render.
Content Folder: This is the folder that contains the Messiah scene assets. It is recommended that you have a
network accessible content folder when network rendering with Messiah.
Output Folder: The folder where the output files will be saved (including images from all enabled buffers). If
left blank, the output folders in the scene file will be used.
Threads: The number of threads to use for rendering.
Build To Force: The build of Messiah to force.
Frame Resolution: Override the width and height of the output images. If a value is set to 0, the value from the
scene file will be respected.
Antialiasing: Override the antialiasing settings in the scene file.
Messiah Settings
Messiah Host Library: The path to the messiahHOST.dll library. Enter alternative paths on separate lines.
9.40.4 FAQ
Which versions of Messiah are supported by Deadline?
Messiah 5 is currently supported.
9.41 MetaFuze
9.41.1 Job Submission
You can submit MetaFuze jobs from the Monitor.
Render Executables
MetaFuze Executable: The path to the MetaFuze executable file used for rendering. Enter alternative paths on
separate lines.
9.41.3 FAQ
Is MetaFuze supported by Deadline?
Yes.
9.42 MetaRender
9.42.1 Job Submission
You can submit MetaRender jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The MetaRender specific options
are:
Input File: The input file. It could be a movie file or part of an image sequence.
Output File: The file or image sequence name that MetaRender will write to.
Encoding Profile: The path to the encoding profile saved with the Profile Editor.
Burn File (optional): Superimpose the specified burn-in template over the output frames.
Rendering Mode: Select CPU or GPU.
Strip Alpha Channel: Strips the alpha channel from the input sequence during conversion.
Threads: The number of render threads to use (CPU mode only).
Draft Mode: Speed up rendering for non-critical color work (GPU mode only).
Render CPU Masks: Uses high quality mask rendering instead of low quality GPU-based masks (GPU/Draft
mode only).
Write Flex File: Writes a flex file for the entire timeline.
Render Takes Into Subfolders: If the Flex File option is enabled, render takes into subfolders.
Core Command Args: Specify additional Core Command Line arguments (the basic command line options for
all IRIDAS applications).
MetaRender Args: Specify additional MetaRender-specific command line arguments.
Render Executables
Meta Render Executable: The path to the Meta Render executable file used for rendering. Enter alternative
paths on separate lines.
9.42.3 FAQ
Is MetaRender supported by Deadline?
Yes.
9.43 MicroStation
9.43.1 Job Submission
You can submit jobs from within MicroStation by installing the integrated submission script, or you can submit them
from the Deadline Monitor. The instructions for installing the integrated submission script can be found further down
this page.
To submit from within MicroStation (once the submitter has been installed), navigate to the Utilities->Render menu
and select Submit To Deadline. Alternatively, you can use the Key-In mdl load DLSubmit to bring up the submission UI (or dlsubmit open, once it's already been loaded).
Submission Options
The general Deadline options are explained in the Job Submission documentation. The MicroStation-specific options
are:
Operation: This is the type of MicroStation operation that will be performed by the Deadline Job. The different
options are described below:
Animation Render: This will render the currently active Animation Script through Deadline.
Single View Render: This will render a single view as an image through Deadline.
Save Multiple Images: This will submit the currently active Save Multiple Images script as a Deadline
Job, or use the specified SM file.
File Export: This will perform a File->Export operation as a Deadline Job (only a specific subset of these
operations are currently available).
Print: This will perform a Print operation as a Deadline Job using the current settings, or the specified
PSET file.
Mode: This option is dependent on the type of Operation selected. It will either specify the Render Mode or the
type of File Export to perform.
Color Model: This drop-down allows you to select the Color output of the Render (e.g. full RGB, GrayScale,
MonoChrome, etc.)
Design File: This option is only relevant to the Monitor Submitter, and specifies which Design File to use for
the selected operation. For the integrated Submitter, this will always be the DGN file that is currently open.
Submit Files with Job: This option, if checked, will submit files with the job, as opposed to leaving them in
their current location.
View Number: The number of the Viewport that will be used for rendering (1-8).
View Name: (Optional) The name of the Saved View that will be applied before rendering.
Output Size X: The X (horizontal) component of the output size. Set to 0 to use current value, or maintain
Aspect Ratio (depending on whether or not the Aspect is currently locked).
Output Size Y: The Y (vertical) component of the output size. Set to 0 to use current value, or maintain Aspect
Ratio (depending on whether or not the Aspect is currently locked).
Environment: The name of the Environment to use for Luxology Renders. If the specified Environment is not
found, the Untitled setup will be used.
Render Setup: The name of the Render Setup to use for Luxology Renders. If the specified Render Setup is
not found, the Untitled setup will be used.
Frame List: The list of Frames to render during Animation Renders.
Task Size: The number of Frames (Animation) or Script Entries (Save Multiple Images) to process per Deadline
Task.
Settings File: The path to an operation-specific file that will specify additional settings for the operation (e.g.
Print Settings file, DWG Export settings, etc.).
Use Current Settings: This checkbox is only available from the integrated submitter. If checked, a new settings
file will be created and submitted with the Job, based on the settings in the current MicroStation session.
Output Path: The Path to the output that will be created. Frame padding should be represented by either #s
or 0s. Unrecognized file formats for the current operation will be changed to a default known format at render
time.
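The padding substitution described for the Output Path could be sketched like this (the function name is hypothetical; the real resolution is done by Deadline at render time):

```python
import re

def fill_frame_padding(output_path, frame):
    """Sketch: replace a run of '#' characters, or a run of '0' padding
    characters before the file extension, with a zero-padded frame
    number."""
    return re.sub(r"#+|0+(?=\.)",
                  lambda m: str(frame).zfill(len(m.group(0))),
                  output_path, count=1)
```

So `render####.tif` with frame 12 would resolve to `render0012.tif`.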
Convert Network Paths to UNC: If this option is selected, Deadline will attempt to convert paths from using
Mapped Network Drives to using the full UNC network path.
Note that some of these parameters might not apply to all Operations/Modes. The Submitters will automatically
disable or hide controls that are not relevant to the currently chosen Operation/Mode.
MicroStation Executables
This section defines the possible locations for ustation.exe for different versions of MicroStation. The Deadline Slaves
will look for the executable in each of these locations (in order) when it tries to render a MicroStation job.
something like: C:\Program Files
This should allow you to use the MDL KeyIn mdl load DLSUBMIT to load the Deadline Submitter within the
MicroStation GUI.
If you want to have a menu item to Submit to Deadline, you can append the file path [Repository]\submission\MicroStation\Client\DeadlineMenu.dgnlib to your MS_GUIDGNLIBLIST configuration variable
(under Workspace -> Configuration... in MicroStation), or you can manually create your own menu item in a
custom DGNLIB by following these instructions from Bentley.
Note that the MicroStation Submitter Installer will, by default, install the DeadlineMenu.dgnlib file to this location: C:\ProgramData\Bentley\MicroStation V8i (SELECTseries)\WorkSpace\Interfaces\MicroStation\default,
which may be preferred instead of pointing all MicroStation machines to the copy stored in the Deadline Repository. If you use the MicroStation Submitter Installer approach, note that it is NOT necessary to edit the
MS_GUIDGNLIBLIST configuration variable.
9.43.4 FAQ
Is MicroStation supported by Deadline?
Yes.
Which versions of MicroStation are supported?
Currently, only MicroStation v8i SS3 (08.11.09) is officially supported. We will look to support different
versions of MicroStation as they come out in the future, or as demand dictates.
Does the MicroStation plugin support Tile Rendering?
Not currently. The plan is to investigate the possibility of including this feature in MicroStation going
forward.
How do I remove the multiple entries of the Submit to Deadline menu entry in MicroStation GUI?
Please ensure you do NOT store the DeadlineMenu.dgnlib file in this location on your local machine:
C:\ProgramData\Bentley\MicroStation V8i (SELECTseries)\WorkSpace\Interfaces\MicroStation\default.
The only place the DeadlineMenu.dgnlib
file should be declared is in the MS_GUIDGNLIBLIST configuration variable and it should make
reference to the file from the Deadline repository network path.
Do I need a DeadlineSubmission.dll file in my [MicroStation install]\mdlapps directory?
No. This is an old version which is now deprecated. Ensure you only have the files which are identified
above in the Integrated Submission Script Setup section as copied over to your \mdlapps directory.
Ensure you either use the DeadlineMenu.dgnlib UI configuration method described above or the manual
MDL KeyIn method to start the Deadline Submission UI: mdl load DLSUBMIT.
9.44 modo
9.44.1 Job Submission
You can submit jobs from within modo by using the integrated submitter (7xx and up), by running the SubmitModoToDeadline.pl script, or you can submit them from the Monitor.
To run the integrated submitter within modo 7xx or later, after it's been installed:
Render -> Submit To Deadline
To run the integrated submitter within modo 6xx or earlier, after it's been installed:
Under the system menu, choose Run Script
Choose the DeadlineModoClient.pl script from [Repository]\submission\Modo\Client
Alternatively, you can also copy this script to your local machine and run it from there. You
should do this if the path to your Deadline repository is a UNC path and you are running modo
on Windows OS.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The modo specific options are:
Job Options
These are the general modo options:
Render With V-Ray: Enable this option to use V-Ray's renderer instead of modo's renderer. This requires the
V-Ray for modo plugin to be installed on your render nodes.
Pass Group: The pass group to render, or blank to not render a pass group.
Submit Each Pass Group As A Separate Job: If enabled, a separate job will be submitted for each Pass Group
in the scene.
Override Output
You have the option to override where the rendered images will be saved. If this is disabled, Deadline will respect
the output paths in the modo Output items in your scene file. If this is enabled, be sure to set the Output Pattern
appropriately if your scene has multiple passes, output items, or left and right eye views.
Override Render Output: Enable to override where the rendered images are saved.
Output Folder: The folder where the rendered images will be saved.
Output File Prefix: The prefix for the image file names (extension is not required).
Output Pattern: The pattern for the image file names.
Output Format: The format of the rendered images. Note that you can choose the layered PSD or EXR formats
here, and that Tile Rendering supports the layered EXR format.
Tile Rendering Options
Enable Tile Rendering to split up a single frame into multiple tiles.
Enable Tile Rendering: If enabled, the frame will be split into multiple tiles that are rendered individually and
can be assembled after.
Frame To Tile Render: The frame that will be split up.
Tiles In X: Number of horizontal tiles.
Tiles In Y: Number of vertical tiles.
Submit Dependent Assembly Job: Submit a job dependent on the tile job that will assemble the tiles.
Assemble With Draft: Draft is required when using Jigsaw Rendering. However, when Tile Rendering is the
chosen type, you can choose to assemble with Draft, or with the old Tile Assembler application.
Cleanup Tiles after Assembly: If selected, the tiles will be deleted after assembly.
Error on Missing Tiles: If enabled, then if any of the tiles are missing the assembly job will fail.
Assemble Over: Determines what the Draft Tile Assembler should assemble over, be it a blank image, previous
output, or a specified file.
Error on Missing Background: If enabled, then if the background file is missing the job will fail.
Use Jigsaw: Enable to use Jigsaw for tile rendering.
Open Jigsaw Panel: Opens the Jigsaw UI.
Reset Jigsaw Background: Resets the background of the Jigsaw regions.
Save Jigsaw Regions: Saves the Jigsaw Regions to the scene file.
Load Jigsaw Regions: Loads the saved Jigsaw Regions and sends them to the open panel.
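As an illustration of how the Tiles In X / Tiles In Y values divide a frame, the pixel regions could be computed as below. This is assumed math for illustration only; the plugin's actual splitting and assembly logic may differ.

```python
def tile_regions(width, height, tiles_x, tiles_y):
    # Return (left, top, right, bottom) pixel bounds for each tile,
    # using integer division so the regions exactly cover the frame.
    regions = []
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            regions.append((
                tx * width // tiles_x,
                ty * height // tiles_y,
                (tx + 1) * width // tiles_x,
                (ty + 1) * height // tiles_y,
            ))
    return regions

# A 1920x1080 frame split 2x2 yields four 960x540 regions.
regions = tile_regions(1920, 1080, 2, 2)
```

Each region renders as its own task, and the dependent assembly job stitches the results back into one image.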
Submission Options
The general Deadline options are explained in the Job Submission documentation. The modo Distributed Rendering
specific options are:
Maximum Servers: The maximum number of modo Servers to reserve for distributed rendering.
Use Server IP Address instead of Host Name: If checked, the Active Servers list will show the server IP
addresses instead of host names.
Rendering
After you've configured your submission options, press the Reserve Servers button to submit the modo Server job.
After the job has been submitted, you can press the Update Servers button to update the job's ID and Status in the
submitter. As nodes pick up the job, pressing the Update Servers button will also show them in the Active Servers list.
Once you are happy with the server list, press Start Render to start distributed rendering.
Note that the modo Server process can sometimes take a little while to initialize. This means that a server in the Active
Server list could have started the modo Server, but it's not fully initialized yet. If this is the case, it's probably best to
wait a minute or so after the last server has shown up before pressing Start Render.
After the render is finished, you can press Release Servers or close the submitter to mark the modo Server job as
complete so that the render nodes can move on to another job.
However, the modo scene file will probably be storing texture paths as Volumes:share/ instead of /Volumes/share/.
This means you'll need another Mapped Path entry that looks like this:
Replace Path: Volumes:share/
Windows Path: \\server\share\
Linux Path:
Mac Path:
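Conceptually, each Mapped Path entry is a prefix substitution applied to the scene's paths. A simplified sketch of that behavior follows; this is not Deadline's actual implementation, and a real one would also normalize slashes for the target platform.

```python
def map_path(path, entries, platform):
    # Apply the first matching "Replace Path" prefix for the given
    # platform column ("Windows", "Linux", or "Mac").
    for entry in entries:
        prefix = entry["Replace"]
        target = entry.get(platform, "")
        if target and path.startswith(prefix):
            return target + path[len(prefix):]
    return path

# Hypothetical entry matching the example above.
entries = [{"Replace": "Volumes:share/", "Windows": "\\\\server\\share\\"}]
print(map_path("Volumes:share/textures/wood.tga", entries, "Windows"))
```

Paths that match no entry (or whose entry has no value for the current platform) pass through unchanged.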
If you wish to disable the Path Mapping setting in the modo Plug-in Configuration, but still wish to perform cross-platform rendering with modo, you must ensure that your modo scene file is on a network shared location, and that
any footage or assets that the project uses are in the same folder. Then when you submit the job to Deadline, you must
make sure that the option to submit the scene file with the job is disabled. If you leave it enabled, the scene file will be
copied to and loaded from the Slave's local machine, and thus won't be able to find the footage.
Render Executables
modo Executable: The path to the modo executable file used for rendering. Enter alternative paths on separate
lines. Different executable paths can be configured for each version installed on your render nodes.
Geometry Cache
Auto Set Geometry Cache: Enable this option to have Deadline automatically set the modo geometry cache
before rendering (based on the geometry cache buffer below).
Geometry Cache Buffer (MB): When auto-setting the geometry cache, Deadline subtracts this buffer amount
from the system's total memory to calculate what the geometry cache should be set to.
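The calculation described above is simple subtraction; as a rough sketch (the clamp at zero is an assumption of this example, not documented behavior):

```python
def auto_geometry_cache(total_memory_mb, buffer_mb):
    # Geometry cache = total system memory minus the configured buffer,
    # clamped at zero (clamping is an assumption for this sketch).
    return max(total_memory_mb - buffer_mb, 0)

# A 16 GB node with a 2048 MB buffer gets a 14336 MB geometry cache.
cache = auto_geometry_cache(16384, 2048)
```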
Path Mapping (For Mixed Farms)
Enable Path Mapping: If enabled, a temporary modo file will be created locally on the slave for rendering
because Deadline does the path mapping directly in the modo file. This feature can be turned off if there are no
Path Mapping entries defined in the Repository Options.
Copy the DeadlineModo folder from \\your\repository\submission\Modo\Client to this User Scripts folder.
Restart modo, and you should find the Submit To Deadline menu item in your Render menu.
6xx or earlier:
Under the system menu, choose Run Script
Choose the DeadlineModoClient.pl script from [Repository]\submission\Modo\Client
Alternatively, you can also copy this script to your local machine and run it from there. You should do this
if the path to your Deadline repository is a UNC path and you are running modo on Windows OS.
Custom Sanity Check
A CustomSanityChecks.py file can be created in [Repository]\submission\Modo\Main, and will be executed if it exists
when the user clicks the Submit button in the integrated submitter. This script will let you override any of the properties
in the submission script prior to submitting the job. You can also use it to run your own checks and display errors or
warnings to the user. Finally, if the RunSanityCheck method returns False, the submission will be cancelled.
Here is a very simple example of what this script could look like:
import lx
import lxu
import lxu.command
import lxifc
9.44.7 FAQ
Which versions of modo are supported?
Modo 3xx and later are supported.
Which versions of modo can I use for interactive distributed rendering?
Modo 7xx and later are supported.
When rendering with modo on Windows, it hangs after printing out @start modo_cl [48460] Luxology LLC.
We're not sure of the cause, but a known fix is to copy the perl58.dll from the extra folder into the
main modo install directory (C:\Program Files\Luxology\modo601).
When rendering with modo on Mac OSX, the Slave icon in the Dock changes to the modo icon, and the render
gets stuck.
This is a known problem that can occur when the Slave application is launched by double-clicking it in
Finder. There are a few known workarounds:
1. Start the Launcher application, and launch the Slave from the Launcher's Launch menu.
2. Launch the slave from the terminal by simply running DEADLINE_BIN/deadlineslave or DEADLINE_BIN/deadlinelauncher -slave, where DEADLINE_BIN is the Deadline bin folder.
3. Use modo as the render executable instead of modo_cl.
When tile rendering, each tile is rendered, but there is image data in the unrendered region of each tile.
This happens when there is a cached image in the modo frame buffer. Open up modo on the offending
render node(s) and delete all cached images to fix the problem.
9.45 Naiad
9.45.1 Job Submission
You can submit jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Naiad specific options are:
Naiad File: The Naiad file to simulate.
Naiad Simulation Job
Submit Simulation Job: Enable to submit a Simulation job to Deadline.
Run Simulation On A Single Machine: If enabled, the simulation job will be submitted as a single task
consisting of all frames so that a single machine runs the entire simulation.
Threads: The number of render threads to use. Specify 0 to let Naiad determine the number of threads to use.
Enable Verbose Logging: Enables verbose logging during the simulation.
EMP to PRT Conversion Job
Submit an EMP to PRT Conversion Job: Enable to submit a PRT Conversion job to Deadline.
If you are also submitting a simulation job, this job will use the EMP files created by the simulation job.
If you are not submitting a simulation job, the EMP files must already exist.
EMP Body Name: The EMP body name.
EMP Body File Name: The path to the EMP files to be converted.
Naiad Executables
Simulation Executable: The path to the command line client executable file used for simulation. Enter alternative paths on separate lines.
Emp to Prt Executable: The path to the emp2prt executable file used for emp conversion. Enter alternative
paths on separate lines.
9.45.3 FAQ
Is Naiad supported by Deadline?
Yes.
9.46 Natron
9.46.1 Job Submission
You can submit Natron jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Natron specific options are:
Writer Node To Render: A custom writer node to render can be specified here. This is optional and can be left
blank.
Frame List: Override the frame list of writer node frames to render. This is optional and can be left blank.
Frames Per Task: This is the number of frames that will be rendered at a time for each job task. Default is 1.
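The Frames Per Task setting partitions the frame list into chunks, one chunk per task. As a rough illustration (not Deadline's actual scheduler code):

```python
def split_into_tasks(frames, frames_per_task):
    # Partition a flat frame list into per-task chunks.
    return [frames[i:i + frames_per_task]
            for i in range(0, len(frames), frames_per_task)]

# Frames 1-10 with Frames Per Task = 4 become three tasks.
tasks = split_into_tasks(list(range(1, 11)), 4)
```

With the default value of 1, every frame becomes its own task.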
Render Executables
Natron Executable: The path to the Natron executable file used for rendering. Enter alternative paths on
separate lines. Different executable paths can be configured for each version installed on your render nodes.
Path Mapping (For Mixed Farms)
Enable Path Mapping: If enabled, a temporary Natron file will be created locally on the slave for rendering
because Deadline does the path mapping directly in the Natron file. This feature can be turned off if there are
no Path Mapping entries defined in the Repository Options.
9.46.4 FAQ
Which versions of Natron are supported?
Natron 0.9 and later are supported.
Why doesn't Deadline Slave/Monitor report Natron's task progress?
Currently (v1.0), Natron has limited task reporting, although when specifying a particular writer node and
frame list, frame progress is supported.
How do I specify a frame range to be rendered?
Unfortunately, Natron does not currently support specifying a frame range to be rendered; by default, it
renders using the settings within the Natron project file per writer node. If you optionally specify a writer
node to be rendered under advanced options in the Monitor submission UI, then it is possible to specify a
particular frame range and number of frames per task for that writer node.
9.47 Nuke
9.47.1 Job Submission
You can submit jobs from within Nuke by installing the integrated submission script, or you can submit them from the
Monitor. The instructions for installing the integrated submission script can be found further down this page.
To submit from within Nuke, select Submit To Deadline from the Thinkbox menu.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Nuke specific options are:
Render With NukeX: Enable this option if you want to render with NukeX instead of Nuke.
Use Batch Mode: If enabled, Deadline will keep the Nuke file loaded in memory between tasks.
Render Threads: The number of threads to use for rendering.
Use The GPU For Rendering: If Nuke should also use the GPU for rendering (Nuke 7 and later only).
Maximum RAM Usage: The maximum RAM usage (in MB) to be used for rendering.
Enforce Write Node Render Order: Forces Nuke to obey the render order of Write nodes.
Minimum Stack Size: The minimum stack size (in MB) to be used for rendering. Set to 0 to not enforce a
minimum stack size.
Continue On Error: If enabled, Nuke will attempt to keep rendering if an error occurs.
Use Performance Profiler: If enabled, Nuke will profile the performance of the Nuke script while rendering
and create an XML file per task for later analysis (Nuke 9 and later only).
XML Directory: If Use Performance Profiler is enabled, this is the directory on the network where the performance profile XML files will be saved.
Render in Proxy Mode: If enabled, Nuke will render using the proxy file paths.
Choose Views To Render: Enable this option to choose which view(s) to render. By default, all views are
rendered.
Submit Write Nodes As Separate Jobs: Each write node is submitted as a separate job.
Use Nodes Frame List: If submitting each write node as a separate job or task, enable this to pull the frame
range from the write node, instead of using the global frame range.
Set Dependencies Based on Write Node Render Order: When submitting write nodes as separate jobs, this
option will make the separate jobs dependent on each other based on write node render order.
Submit Write Nodes As Separate Tasks For The Same Job: Enable to submit a job where each task for the
job represents a different write node, and all frames for that write node are rendered by its corresponding task.
Selected Nodes Only: If enabled, only the selected Write nodes will be rendered.
Nodes With Read File Enabled Only: If enabled, only the Write nodes that have the Read File option
enabled will be rendered.
Render Precomp Nodes First: If enabled, all write nodes in precomp nodes will be rendered before the main
job.
Only Render Precomp Nodes: If enabled, only the Write nodes that are in precomp nodes will be rendered.
The Submit Write Nodes As Separate Tasks For The Same Job option can be useful if you have a bunch of write nodes in a Nuke
script to output different Quicktime movies. You can enable this option and bump up the Concurrent Tasks value to
allow machines to process multiple write nodes concurrently. Since Quicktime generation only uses a single thread,
you can get much better throughput with this option on multi-core machines.
Render Executables
Nuke Executable: The path to the Nuke executable file used for rendering. Enter alternative paths on separate
lines. Different executable paths can be configured for each version installed on your render nodes.
Licensing Options
Slaves To Use Interactive License: A list of slaves that should use an interactive Nuke license instead of a
render license. Use a , to separate multiple slave names, for example: slave001,slave002,slave003
OFX Cache
Prepare OFX Cache Before Rendering: If enabled, Deadline will try to create the temporary ofxplugincache
folder before rendering, which helps ensure that comps that use OFX plugins render properly.
Path Mapping (For Mixed Farms)
Enable Path Mapping: If enabled, a temporary Nuke file will be created locally on the slave for rendering
because Deadline does the path mapping directly in the Nuke file. This feature can be turned off if there are no
Path Mapping entries defined in the Repository Options.
You can also choose which sequences you want to submit comps for.
Note that this is only an option in the integrated submitter in Nuke Studio. A saved project with
sequences that have comps is also required for this option to appear.
The next time you launch Nuke, there should be a Thinkbox menu with the option to Submit Nuke to Deadline.
Custom Sanity Check
A CustomSanityChecks.py file can be created alongside the main SubmitNukeToDeadline.py submission script (in
[Repository]\submission\Nuke\Main), and will be evaluated if it exists. This script will let you set any of the initial
properties in the submission script prior to displaying the submission window. You can also use it to run your own
checks and display errors or warnings to the user. Here is a very simple example of what this script could look like:
import nuke
import DeadlineGlobals

def RunSanityCheck():
    DeadlineGlobals.initDepartment = "The Best Department!"
    DeadlineGlobals.initPriority = 33
    DeadlineGlobals.initConcurrentTasks = 2

    nuke.message( "This is a custom sanity check!" )
    return True
The DeadlineGlobals module can be found in the same folder as the SubmitNukeToDeadline.py script mentioned
above. It just contains the list of global variables that you can set, which are then used by the submission script to
set the initial values in the submission dialog. Simply open DeadlineGlobals.py in a text editor to view the global
variables.
Finally, if the RunSanityCheck method returns False, the submission will be cancelled.
9.47.6 FAQ
Which versions of Nuke are supported?
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Nuke specific options are:
Host: The IP address or Host Name of the Master Machine that the frame server slaves will connect to. This
machine is the machine that will start the actual Nuke render.
Port Number: The port number to use to connect to the Master Machine. Default is 5560. Slave must NOT run
on the same machine as the Master Machine.
Worker Count: The number of workers that should be spawned on each Deadline Slave that is reserved. Each
worker runs an instance of Nuke and renders independently of other workers.
Worker Threads: The number of threads each worker should use.
Worker Memory: The amount of memory to reserve for each worker.
Reserving From Inside Nuke Studio
It is required that you have Nuke Studio 9.0v3 or newer installed in order to properly use the Frame Server with
Deadline. After you've configured your submission options, press the Reserve Machines button to submit the Nuke
Frame Server job. The job's ID and Status will be tracked in the submitter, and as nodes pick up the job, they will
show up in the Reserved Machines list. Once you are happy with the server list you can start rendering or exporting
over the frame server.
Note that the Nuke Frame Server process can sometimes take a little while to initialize. This means that a machine in
the Reserved Machines list could have started the Nuke Frame Server process, but it's not fully initialized yet. If this
is the case, it's probably best to wait a minute or so after the last server has shown up before starting the render.
After the render is finished, you can press Release Machines or close the submitter UI (Setup Frame Server Slaves
With Deadline) to mark the Frame Server job as complete so that the render nodes can move on to another job.
Note: Only one Slave per machine may pick up a Nuke Frame Server job, as allowing multiple Slaves on the same
machine to try to bind to the same port would not work. Deadline will also fail a render if a slave running on the
Master Machine tries to pick up the job, as it is already running an instance of the Frame Server and the same port
binding conflict can occur.
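The restriction above comes down to ordinary TCP socket behavior: only one process can bind a given port on a machine. A small generic demonstration with plain sockets (unrelated to the Frame Server's own code):

```python
import socket

# First listener grabs an OS-assigned free port.
s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s1.bind(("127.0.0.1", 0))
port = s1.getsockname()[1]

# A second socket trying to bind the same port fails with EADDRINUSE,
# which is the conflict Deadline avoids by allowing only one Slave per
# machine to pick up a Frame Server job.
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s2.bind(("127.0.0.1", port))
    conflict = False
except OSError:
    conflict = True
finally:
    s2.close()
    s1.close()
```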
Reserving From The Monitor
After you've configured your submission options, press the Submit button to submit the Nuke Frame Server job. Note
that this doesn't start any rendering; it just allows the Nuke Frame Server to start up on nodes in the farm. Once you're
happy with the nodes that have picked up the job, you can initiate the distributed render manually from within Nuke
Studio.
After the distributed render has finished, remember to mark the job as complete or delete it so that the nodes can move
on to other jobs.
Render Executables
Nuke Executable: The path to the Nuke executable file used for rendering. Enter alternative paths on separate
lines. Different executable paths can be configured for each version installed on your render nodes.
Licensing Options
Slaves To Use Interactive License: A list of slaves that should use an interactive Nuke license instead of a
render license. Use a , to separate multiple slave names, for example: slave001,slave002,slave003
OFX Cache
Prepare OFX Cache Before Rendering: If enabled, Deadline will try to create the temporary ofxplugincache
folder before rendering, which helps ensure that comps that use OFX plugins render properly.
The next time you launch Nuke Studio, there should be a Thinkbox menu with the option to Reserve Frame Server
Slaves.
Custom Sanity Check
A CustomSanityChecks.py file can be created alongside the main ReserveFrameServerSlaves.py submission script (in
[Repository]\submission\Nuke\Main), and will be evaluated if it exists. This script will let you set any of the initial
properties in the submission script prior to displaying the submission window. You can also use it to run your own
checks and display errors or warnings to the user. Here is a very simple example of what this script could look like:
import nuke
import DeadlineFRGlobals

def RunSanityCheck():
    DeadlineFRGlobals.initDepartment = "The Best Department!"
    DeadlineFRGlobals.initPriority = 33
    DeadlineFRGlobals.initPort = 5570

    nuke.message( "This is a custom sanity check!" )
    return True
The DeadlineFRGlobals module can be found in the same folder as the ReserveFrameServerSlaves.py script mentioned
above. It just contains the list of global variables that you can set, which are then used by the submission script to
set the initial values in the submission dialog. Simply open DeadlineFRGlobals.py in a text editor to view the global
variables.
Finally, if the RunSanityCheck method returns False, the submission will be cancelled.
9.48.5 FAQ
Which versions of Nuke are supported?
Nuke Studio 9.0v3 and onwards is supported.
What Nuke license does Frame Server use?
Nuke Studio's Frame Server uses, by default, a standard Nuke rendernode -r license. Note that for every
license of Nuke Studio you own, a number of Nuke render licenses are included from The Foundry.
These licenses are intended to be used for local Nuke Studio background rendering using a Frame Server
running locally. Deadline's Frame Server jobs are for when additional processing power is required by
your local running instance of Nuke Studio and its Master Frame Server functionality. Note that in Deadline's
Nuke Frame Server plugin configuration section, you can also provide a list of slaves that should use an
interactive Nuke license instead of a render license, albeit this is a somewhat expensive thing to do with
your Nuke GUI licenses!
Can I run Frame Server via Deadline Slave on the Master Machine?
No. You won't be able to run the Frame Server via Deadline Slave on the same machine that is also acting
as the Master Machine (the machine currently running your session of Nuke Studio). Deadline will fail a
render if a slave running on the Master Machine tries to pick up the job, as it is already running an instance
of the Frame Server and a port binding conflict will occur. You will need to use a different machine even
for simple testing purposes.
If running multiple Deadline Slaves, can I run a normal Nuke network rendering job simultaneously with Nuke
Frame Server jobs?
Yes. You will want to consider using Deadline limits here to ensure you don't blow your Nuke license
budget. See our Limits documentation for how to implement limits for each of your software license
needs. Ensure you use Machine as the Usage Level in your Limits configuration, so that only one
Nuke license is used by each physical/virtual machine. Don't forget to consider licensing implications
for any 3rd party Nuke plugins, such as Optical Flares, you may be using.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Octane specific options are:
Octane Scene File: Specify the Octane scene file(s) to render. If you have an animation with one OCS file per
frame, you just need to select one of the OCS files from the sequence.
Output File: The output file path. This is optional and can be left blank.
Render Target: Select the target to render. This list is automatically populated based on the selected OCS file.
Single Frame Job: Check this option if you are submitting a single frame to render, as opposed to an animation
consisting of a sequence of OCS files.
Override Sampling: Overrides the maximum samples in the OCS file.
Command Line Args: Additional command line arguments to pass to the renderer.
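For the Octane Scene File option above, a per-frame OCS sequence can be inferred from a single file's frame number. A hypothetical sketch, assuming a name_####.ocs-style naming scheme (the plugin's actual sequence detection may differ):

```python
import re

def sibling_frame(ocs_path, frame):
    # Swap the trailing frame number before the .ocs extension for the
    # requested frame, keeping the same zero padding.
    match = re.search(r"(\d+)(?=\.ocs$)", ocs_path)
    if not match:
        raise ValueError("no frame number found before .ocs")
    width = len(match.group(1))
    return ocs_path[:match.start()] + str(frame).zfill(width) + ocs_path[match.end():]

print(sibling_frame("shot_0001.ocs", 25))   # shot_0025.ocs
```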
Render Executables
Octane Executable: The path to the Octane executable file used for rendering. Enter alternative paths on
separate lines.
9.49.3 FAQ
Is Octane Standalone supported by Deadline?
Yes!
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The PRMan specific options are:
RIB Files: The RIB files to be rendered (can be ASCII or binary formatted). These files should be network
accessible.
Working Directory: The working directory used during rendering. This is required if your RIB files contain
relative paths.
Threads: The number of threads to use for rendering. Set to 0 to let PRMan automatically determine the optimal
thread count.
Additional Arguments: Specify additional command line arguments you would like to pass to the PRMan
renderer.
Render Executables
PRMan Executable: The path to the PRMan executable file used for rendering. Enter alternative paths on
separate lines.
9.50.3 FAQ
Is PRMan supported by Deadline?
Yes.
Is PRMans folder structure where each frame has its own folder supported by Deadline?
Yes. Deadline can render rib files that are in separate folders per frame, and can also render rib files that
are all stored in the same folder.
9.51 Puppet
9.51.1 Job Submission
You can submit Puppet update jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Puppet specific options are:
Verbose Output: Prints very detailed output when the job is run.
Options
Puppet Batch: The path to the Puppet executable file. Enter alternative paths on separate lines.
9.52 Python
9.52.1 Job Submission
You can submit Python jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Python specific options are:
Script File: The script you want to submit.
Arguments: The arguments to pass to the script. Leave blank if the script takes no arguments.
Version: The version of Python to use.
Python Executables
Python Executable: The path to the Python executable file used. Enter alternative paths on separate lines.
Different executable paths can be configured for each version installed on your render nodes.
9.52.3 FAQ
Which versions of Python are supported?
Python 2.3 to 3.2 are all supported. Additional versions can be added when necessary.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Quicktime specific options are:
Input Images: The frames you would like to generate the Quicktime from. If a sequence of frames exist in
the same folder, Deadline will automatically collect the range of the frames and will set the Frame Range field
accordingly.
9.53.3 FAQ
Which version of Apple Quicktime is required to create Quicktime movies with Deadline using the Apple Quicktime renderer?
Apple Quicktime version 7.04 or later is required. It must be installed on all slaves that will be rendering
Quicktime movies, as well as any machines from which Quicktime jobs will be submitted. You can
download the latest version of Quicktime from here.
Can I submit an Apple Quicktime job from Windows to run on Mac OSX, or vice versa?
No, because the export settings are saved out differently on each operating system. The Windows Quicktime generator doesn't recognize settings that are exported on a Mac, and vice versa. We hope to find a
solution for this in the future, but for now you should ensure that your Quicktime job renders on the same
operating system from which it was submitted (using groups, pools, machine lists, etc).
Can multiple machines work together to render a single movie file?
No, this is not possible. This is why Quicktime Generation jobs should always consist of a single task that
contains all the frames to be included in the movie file.
When submitting an Apple Quicktime job, an error message pops up when I click the Submit button.
This error pops up when you have an older version of Apple Quicktime installed. Installing the latest version should fix the problem.
9.54 Realflow
9.54.1 Job Submission
You can submit jobs from within RealFlow by installing the integrated submission script, or you can submit them from
the Monitor. The instructions for installing the integrated submission script can be found further down this page.
To submit from within RealFlow 5 or later, select Commands -> System Commands -> SubmitToDeadline.py.
To submit from within RealFlow 4, select Scripts -> User Scripts -> Deadline -> Submit To Deadline.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Realflow specific options are:
Submit IDOC Jobs: Enable to submit separate IDOC jobs for each IDOC name specified. Separate multiple
IDOC names with commas. For example: IDOC01,IDOC02,IDOC03
Start Rendering At [Start Frame - 1]: Enable this option if RealFlow rendering should start at the frame
preceding the Start Frame. For example, if you are rendering frames 1-100, but you need to pass 0-100 to
RealFlow, then you should enable this option.
Use One Machine Only: Forces the entire job to be rendered on one machine. If this is enabled, the Machine
Limit, Task Chunk Size and Concurrent Tasks settings will be ignored.
Version: The version of RealFlow to render with.
Build: Force 32 bit or 64 bit rendering.
Rendering Threads: The number of threads to use during simulation.
Reset Scene: If this option is enabled, the scene will be reset before the simulation starts.
Generate Mesh: This option will generate the mesh for a scene where particle cache files were created previously.
Use Particle Cache: If you have created particle cache files for a specific frame and you want to resume your
simulation from that frame you have to use this option. The starting cached frame is the Start Frame entered
above.
Render Preview: Enable this option to create a Maxwell Render preview.
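The Submit IDOC Jobs option above turns each comma-separated IDOC name into its own job. A sketch of that split, illustrative only and not Deadline's actual code:

```python
def idoc_job_names(idoc_field):
    """Split the comma-separated IDOC field into one entry per IDOC job."""
    return [name.strip() for name in idoc_field.split(",") if name.strip()]

# e.g. idoc_job_names("IDOC01,IDOC02,IDOC03") yields three separate job names
```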
Render Executables
Realflow Executable: The path to the Realflow executable file used for rendering. Enter alternative paths on
separate lines. Different executable paths can be configured for each version installed on your render nodes.
RealFlow 4:
Copy [Repository]\submission\RealFlow\Client\DeadlineRealFlowClient.py to [RealFlow Install Directory]\scripts.
Launch RealFlow and select Scripts -> Add.
In the Add Script dialog, for the Name enter Submit To Deadline, and for the Script enter the path to the
DeadlineRealFlowClient.py file that you just copied over. Then click the New Folder button and name the
folder Deadline. Then select the Deadline folder and click OK.
Now you can select Scripts -> User Scripts -> Deadline -> Submit To Deadline to launch the submission
dialog.
9.54.4 FAQ
What versions of RealFlow are supported by Deadline?
RealFlow versions 3 and later are supported. The integrated submission script is only supported in RealFlow 4 and later. RealFlow 3 jobs can still be submitted from the Monitor.
Does rendering with RealFlow require a separate license?
Yes. You need separate command line licenses to render.
Can I render separate IDOCs from the same scene across different machines?
Yes. You can specify which IDOCs you want to render in the submitter, and a separate job will be
submitted for each one.
Why is RealFlow looking for the particle cache on the local C: instead of on the network?
This is likely happening because you are choosing to submit the RealFlow file with the job. This means
the file is copied locally to the slave machines, which is why they are looking for the cache locally. If
you disable the option to submit the file with the job, the slave machines should be able to find the cache
properly.
9.55 REDLine
9.55.1 Job Submission
You can submit REDLine jobs from the Monitor. REDLine is the command line tool that ships with Redcine-X, and
previously with REDAlert.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The REDLine specific options are:
Input R3D File: Specify the R3D file you want to render.
Output Folder: The folder where the output files will be saved.
Output Filename: The prefix portion of the output filename. It is not necessary to specify the extension.
Output Format: The output format. This will determine the filename extension.
Render Resolution: The resolution to render at.
Make Output Subfolder: Makes a subdirectory for each output.
Frame List: The list of frames to render if rendering an animation.
Renumber Start Frame: The new start frame number (optional).
Frames Per Task: The number of frames per task.
Submit Input R3D File With Job: If checked, the input file is submitted with the job to the repository.
Deadline supports virtually all of the options that are available in the Redcine-X application. It also supports the ability to specify RSX files to use when rendering, so you can set your options in Redcine-X and then use them to render the job through Deadline. Please refer to your Redcine-X documentation for more information about these additional render options.
Render Executables
REDLine Executable: The path to the REDline executable file used for rendering. Enter alternative paths on
separate lines.
9.55.3 FAQ
Is Redcine-X/REDAlert supported by Deadline?
Yes. Both applications ship with a command line application called REDLine, which Deadline uses to
render.
Which Operating System(s) can I render REDLine jobs with?
Currently, REDLine is available on Windows and OSX, so you can render REDLine jobs on these operating systems.
9.56 Renderman (RIB)

Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options are explained in the Draft and Integration documentation. Note that a Draft job can only be submitted if Deadline is able to parse absolute Display paths from the selected RIB file. If it cannot extract the output paths, it will let you know during submission so that you can disable the Draft job option.
Render Executables
Executable: The path to the RIB executable file used for rendering. Enter alternative paths on separate lines.
Different executable paths can be configured for each RIB renderer installed on your render nodes.
9.56.3 FAQ
Which RIB renderers are supported by Deadline?
The following RIB (Renderman) renderers are supported:
3Delight
Air
Aqsis
BMRT
Entropy
Pixie
PRMan
RenderDotC
RenderPipe
If you use a RIB renderer that is not on this list, please contact Deadline Support and let us know.
9.57 Rendition
9.57.1 Job Submission
You can submit Rendition jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Rendition specific options are:
Input MI File: The MI file to render. This needs to be on a network location.
Render Executables
Rendition Executable: The path to the Rendition executable file used for rendering. Enter alternative paths on
separate lines.
9.57.3 FAQ
Is Rendition supported by Deadline?
Yes.
Why do the image format options (like color depth) get reverted to defaults when rendering with Deadline?
This only happens when overriding the output file in the submission script. When we pass the output path to Rendition, it uses the default image format options for the output type. If you don't want this to occur, simply don't override the output file.
9.58 Rhino
9.58.1 Job Submission
You can submit jobs from within Rhino by installing the integrated submission script, or you can submit them from
the Monitor. The instructions for installing the integrated submission script can be found further down this page.
To submit from within Rhino, left-click on the Deadline button you created during the integrated submission script
installation.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Rhino specific options are:
Rhino File: The Rhino file to be rendered.
Output File: The filename of the image(s) to be rendered.
Renderer: Specify the renderer to use.
Render Bongo Animation: If your Rhino file uses the Bongo animation plugin, you can enable a Bongo
animation job.
Tile Rendering
The following options are available for tile rendering. Note that tile rendering is only available when submitting from
within Rhino.
Enable Tile Rendering: If enabled, the image will be rendered in regions and automatically assembled by
Draft.
Use Jigsaw: Use Jigsaw to determine the regions.
Tiles in X and Tiles in Y: The number of tiles to divide the job into if not using Jigsaw.
Submit Dependent Assembly Job: If enabled, then a dependent job will be submitted to assemble the
tiles/regions into a single image.
Cleanup Tiles as Assembly: If enabled, then after assembly the tiles/regions will be deleted.
Error on Missing Tiles: If enabled, then if any of the tiles/regions are missing the assembly job will fail.
Assemble Over: Determines what the tiles/regions will be assembled over: nothing, a single image, or the same image as the final image.
Error on Missing Background: Determines if the assembler should fail if the background image is missing.
Supported Renderers
Deadline supports many of the Rhino renderers out of the box, including Rhino Render, Flamingo, VRay, Brazil,
Penguin, and TreeFrog. If you are using a renderer that Deadline does not currently support, please email Deadline
Support and let us know!
It is also possible to manually add new renderers to the list that Deadline supports. Go to \\your\repository\script\Submission\RhinoSubmission and open Renderers.ini in a text editor. You'll see that this file contains the list of renderers that Deadline currently supports, one per line. Just add the missing renderer as a new line and save the file. Note that the name needs to match that of the renderer exactly!
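For example, after adding a hypothetical renderer named MyNewRenderer, the file might look something like the following (the existing entries shown are illustrative; the names in your copy must match your installed renderers exactly):

```
Rhino Render
Flamingo
VRay
Brazil
Penguin
TreeFrog
MyNewRenderer
```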
Render Executables
Rhino Executable: The path to the Rhino executable file used for rendering. Enter alternative paths on separate
lines. Different executable paths can be configured for each version installed on your render nodes.
Select the Toolbar Collection file that you want to add the Deadline submission button to, and then select File
-> Import Toolbars.... Browse to [Repository]\submission\Rhino\Client\ and select the deadline.rui file.
Check the box next to Deadline and press OK.
There should now be a toolbar with a Deadline button on your screen, which you can dock anywhere you want.
Left-click on the button to submit a Rhino Job to Deadline.
Right-click on the button to launch the Monitor.
Rhino 4
The following installation procedure is intended for, and has been tested with, Rhino 4.0. It is largely similar to the procedure described for Rhino 5 above, with some slight differences.
In Rhino, select Tools -> Toolbar Layout.
Select the Toolbar collection file that you want to add the Deadline submission button to, then select Toolbar ->
Import. Browse to [Repository]\submission\Rhino\Client\ and select the deadline.tb file.
Check the box next to Deadline and press Import.
Select File -> Save to save the changes to the selected Toolbar collection file.
There should now be a toolbar with a Deadline button on your screen, which you can dock.
Left-click on the button to submit a Rhino job to Deadline.
Right-click on the button to launch the Monitor.
9.58.4 FAQ
Which versions of Rhino are supported?
Rhino 4 and later are supported.
Does Rhino need to be licensed on each render node?
Yes.
Is the Bongo plugin for animation supported?
Yes. The Rhino submission dialog has the option to render a Bongo animation.
Is V-Ray for Rhino fully supported?
Yes. Please see the V-Ray Distributed Rendering plugin for details on how V-Ray interactive DBR in
Rhino operates.
9.59 RVIO
9.59.1 Job Submission
You can submit RVIO jobs from the Monitor, or you can right-click on a job and select Scripts -> Submit RVIO Job To Deadline to automatically populate some fields in the RVIO submitter based on the job's output.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation.
The RVIO submitter allows you to create and save Layers, each of which can contain one or two source images, an
arbitrary number of audio files, and a list of overrides.
Click the New button to add a new Layer.
Click the Rename button to rename the selected Layer.
Click the Remove button to remove the selected Layer.
Click the Clear All button to remove all Layers.
Click the Load Layers button to load saved Layers from disk.
Click the Save Layers button to save the list of current Layers to disk.
For Layers, the only required setting is the Source 1 file(s). If specifying a sequence, you can set the range to the right
of the file name (the same for the Source 2 file if specified). Note that the .rv file format is also supported as a Source
file. For Audio Files, a comma separated list is used to allow the submission of multiple files. Other than submitting at
least one Layer, the only other required option is the Output File under the Output tab. See the RVIO Documentation
for more information about the available options and overrides.
Codec Lists
The RVIO submitter pulls its codec settings from the GetRawCodecText() function in \\your\repository\scripts\submission\RVIOSubmission.py. The raw text was retrieved by running rvio.exe -formats in a command prompt. This means that if your installation of RVIO supports additional codecs that aren't available in the submitter, you can run the following, then copy the text in the resulting Codecs.txt file and paste it between the triple quotes in GetRawCodecText():
rvio.exe -formats > Codecs.txt
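A sketch of the shape of that function after editing; the body shown here is placeholder text, not real rvio -formats output:

```python
def GetRawCodecText():
    # Replace everything between the triple quotes with the contents of
    # Codecs.txt produced by `rvio.exe -formats`. The text below is a
    # placeholder only.
    return """
(placeholder codec list)
"""
```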
Render Executables
RVIO Executable: The path to the rvio executable file used for rendering. Enter alternative paths on separate
lines.
9.59.3 FAQ
Is RVIO supported by Deadline?
Yes.
9.60 Salt
9.60.1 Job Submission
You can submit Salt update jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Salt specific options are:
Verbose Logging Level: The level of logging a Salt job will output.
Options
Salt Executable: The path to the Salt Executable. Enter alternative paths on separate lines.
9.61 Shake
9.61.1 Job Submission
You can submit Shake jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Shake specific options are:
Shake Script File: The Shake script file to be rendered.
Render Executables
Shake Executable: The path to the Shake executable file used for rendering. Enter alternative paths on separate
lines.
9.61.3 FAQ
Is Shake supported by Deadline?
Yes.
9.62 SketchUp
9.62.1 Job Submission
You can submit jobs from within SketchUp by installing the integrated submission script, or you can submit them from
the Monitor. The instructions for installing the integrated submission script can be found further down this page.
To submit from within SketchUp, select Plugins -> Submit To Deadline.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The SketchUp specific options are:
SketchUp File: The file to be exported.
Render Executables
SketchUp Executable: The path to the SketchUp executable file used for rendering. Enter alternative paths on
separate lines. Different executable paths can be configured for each version installed on your render nodes.
Mac OS X:
Copy [Repository]/submission/SketchUp/Client/DeadlineSketchUpClient.rb to the [SketchUp Plugin Directory], which will look different depending on your version of SketchUp.
For SketchUp 8 and earlier, the plug-in directory may look something like this: /Library/Application Support/Google SketchUp #/SketchUp/plugins
For SketchUp 2013 and later, the plug-in directory may look something like this (note: it may have to be in the specific user's /Library/ directory as of 2014): /Library/Application Support/SketchUp #/plugins
9.62.4 FAQ
Which versions of SketchUp are supported by Deadline?
The commercial versions of SketchUp 7 and later are supported.
9.63 Softimage
9.63.1 Job Submission
You can submit jobs from within Softimage by installing the integrated submission script, or you can submit them
from the Monitor. The instructions for installing the integrated submission script can be found further down this page.
To submit from within Softimage, select the Render toolbar on the left and click Render -> Submit To Deadline.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Softimage specific options are:
Workgroup: Specify the workgroup that Softimage should use during rendering. Leave blank to ignore.
Force Build: Force 32 bit or 64 bit rendering.
Submit Softimage Scene File: The Softimage scene file will be submitted with the job. If your Softimage scene
is stored in a project folder on the network, it is recommended that you leave this box unchecked.
Threads: The number of render threads to use during rendering.
Use Softimage Batch Plugin: This plugin keeps Softimage and the scene loaded in memory between tasks.
Enable Local Rendering: If enabled, the frames will be rendered locally, and then will be copied to the final network location. Note that this feature doesn't support the Skip Existing Frame option.
Skip Batch Licensing Check: If enabled, Softimage won't try to check out a Batch license during rendering. This allows you to use 3rd party renderers like VRay or Arnold without using a Softimage Batch license.
Selecting passes to render:
Select which passes you would like to render. A separate job is submitted for each pass. If no passes are selected,
then the current pass is submitted. Note that if you are using FxTree Rendering, the passes are ignored.
Enable tile rendering to split up a frame into multiple tiles that are rendered individually. By default, a separate
job is submitted for each tile (this allows for tile rendering of a sequence of frames). For easier management of
single frame tile rendering, you can choose to submit all the tiles as a single job.
You can submit a dependent assembly job to assemble the image when the main tile job completes. If using Draft for the assembly, you'll need a license from Thinkbox. Otherwise, the output formats that are supported are BMP, DDS, EXR, JPG, JPE, JPEG, PNG, RGB, RGBA, SGI, TGA, TIF, and TIFF.
Note that the Error On Missing Tiles option only applies to Draft assemblies.
Note that if you are using FxTree Rendering, the tile rendering settings are ignored.
Notes:
Softimage gives the option to specify file paths as being relative to the current directory or absolute. Deadline
requires that all file paths be absolute.
When specifying the image output, make sure to include the extension (.pic, .tga, etc) at the end so that you can
view the individual rendered images from the task list in the Monitor.
Redshift Renderer Options
If submitting a Softimage scene that uses the Redshift renderer, there will be an additional option in the integrated submitter called GPUs Per Task. If set to 0 (the default), then Redshift will be responsible for choosing the GPUs to use for rendering.
If this is set to 1 or greater, then each task for the job will be assigned specific GPUs. This can be used in combination
with concurrent tasks to get a distribution over the GPUs. For example:
if this is set to 1, then tasks rendered by the Slave's thread 0 would use GPU 0, thread 1 would use GPU 1, etc.
if this is set to 2, then tasks rendered by the Slave's thread 0 would use GPUs {0,1}, thread 1 would use GPUs {2,3}, etc.
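The affinity arithmetic described above can be sketched as follows. This is only an illustration of the mapping, not Deadline's actual implementation:

```python
def gpus_for_thread(thread_index, gpus_per_task):
    """Return the GPU ids a given concurrent-task thread would be assigned."""
    if gpus_per_task <= 0:
        # GPUs Per Task = 0: Redshift selects the GPUs itself.
        return []
    start = thread_index * gpus_per_task
    return list(range(start, start + gpus_per_task))

# e.g. with GPUs Per Task = 2, thread 0 gets [0, 1] and thread 1 gets [2, 3]
```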
Softimage
Render Executables
Softimage Render Executable: The path to the XSIBatch.bat file used for rendering. Enter alternative paths
on separate lines. Different executable paths can be configured for each version installed on your render nodes.
Options
Enable Strict Error Checking: If enabled, Deadline will fail the job whenever Softimage prints out ERROR, for whatever reason.
Return Codes To Ignore: Error codes (other than 0) that Deadline should ignore and instead assume the render
has finished successfully. Use a ; to separate the error codes.
SoftimageBatch
Render Executables
Softimage Render Executable: The path to the XSIBatch.bat file used for rendering. Enter alternative paths
on separate lines. Different executable paths can be configured for each version installed on your render nodes.
Options
Enable Strict Error Checking: If enabled, Deadline will fail the job whenever Softimage prints out ERROR, for whatever reason.
Connection Timeout: The number of seconds to give the Deadline plugin and Softimage to establish a connection before the job fails.
Timeout For Progress Updates: The number of seconds to wait between Softimage progress updates before the job is failed. Set to 0 to disable this feature.
Once Python is an available scripting option in Softimage, you can follow these steps to install the submission script:
You can either run the Submitter installer or manually install the submission script
Submitter Installer
Run the Submitter Installer located at <Repository>/submission/Softimage/Installers
Manual Installation of the Submission Script
Copy the file [Repository]/submission/Softimage/Client/DeadlineSoftimageClient.py to the folder [Softimage
Install Directory]/Application/Plugins
Launch Softimage. The submission script is automatically installed when Softimage starts up. To make sure
the script was installed correctly, select the Render toolbar on the left and click the Render button. A Submit To
Deadline menu item should be available.
return True
The opSet parameters can be found in the SoftimageToDeadline.py script in the Main folder mentioned above. Look
for the following line in the script:
opSet = Application.ActiveSceneRoot.AddProperty(
"CustomProperty",False,"SubmitSoftimageToDeadline")
After this line, all the available parameters are added to the opSet. These can be used to set the initial values in the
submission dialog.
Finally, if the RunSanityCheck method returns False, the submission will be canceled.
9.63.5 FAQ
Which versions of Softimage are supported?
Softimage versions 2010 and later are supported.
What is the difference between the Softimage and SoftimageBatch plug-ins?
The SoftimageBatch plug-in keeps the scene loaded in memory between subsequent tasks for the same
job. This saves on the overhead of having to load Softimage and the scene file for each task. The Softimage
plug-in uses standard command line rendering, and should only be used if you experience problems with
the SoftimageBatch plug-in.
Is FxTree rendering supported?
Yes. Simply enable FxTree rendering in the submission dialog and specify the FxTree and Output Node
you want to render.
Is the Arnold renderer for Softimage supported?
Yes. Deadline supports the Arnold plug-in for Softimage, as well as Arnold's standalone renderer (kick.exe). For more information on rendering Arnold Standalone jobs, see the Arnold Standalone Plug-in Guide.
Can Softimage script jobs be submitted to Deadline?
Yes. Deadline provides very basic support for script jobs, though there is currently no interface to submit
them. The option for submitting a script job can be specified in the plug-in info file.
After installing the Softimage integrated submission script, Softimage fails to load (it goes to a white screen and hangs).
We have heard of this problem before, but we have not been able to reproduce it. The workaround for this problem is to remove the script from the plugins folder, and manually browse to the submission script plugin after starting Softimage.
When Deadline renders the job, Softimage isn't able to find anything in the scene's project folder.
If your Softimage scene file is saved in a project folder on the network, leave the Submit Softimage Scene File check box unchecked in the submission dialog. This allows Deadline to load the Softimage scene in the context of its project folder.
I have Softimage configured to save output to a network share, but when Deadline renders the scene, the render
slaves save their output to their local C drive rather than to the network share.
There are two possible solutions:
1. If your Softimage scene file is saved in a project folder on the network, leave the Submit Softimage Scene File check box unchecked in the submission dialog. This is the recommended solution.
2. Specify the full resolved path for the scene output directory, instead of something like [Project Path]\Render_Pictures.
Rendering with Deadline seems a lot slower than rendering through Softimage itself.
If you're submitting your jobs with the Use Softimage Batch option disabled, then Softimage needs to be restarted and the scene needs to be reloaded for every task in the job, which can add a lot of overhead to the render time, especially if cached data needs to be loaded.
To speed up your renders, you can increase the task group size (aka chunk size) from 1 to 5 or 10. This way, the scene is loaded once for every 5 or 10 frames. Increasing the chunk size like this is recommended if you know ahead of time that your frames will only take seconds to render, or if a large amount of cached data needs to be loaded.
9.64 Terragen
9.64.1 Job Submission
You can submit Terragen jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Terragen specific options are:
Project File: The Terragen project file to render.
Render Node: Select the render node to render. Leave blank to use the default in the project.
Output: Override the output path in the project file. If rendering a sequence of frames, remember to include the
%04d format in the output file name so that padding is added to each frame.
Extra Output: Override the extra output path in the project file. If rendering a sequence of frames, remember
to include the IMAGETYPE.%04d format in the output file name so that padding is added to each frame.
Enable Local Rendering: If enabled, the frames will be rendered locally, and will then be copied to the final
network location. Note that this requires that an Output file be specified above.
Version: The version of Terragen.
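The %04d token in the Output path expands to a zero-padded frame number. A quick sketch of the expansion (the file name and format shown are examples only):

```python
def expand_output(pattern, frame):
    # "%04d" in the pattern is replaced by the frame number, zero-padded
    # to four digits, which is how per-frame padding is produced.
    return pattern % frame

# e.g. expand_output("render.%04d.bmp", 7) gives "render.0007.bmp"
```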
Render Executables
Terragen CLI Executable: The path to the Terragen executable file used for rendering. Enter alternative paths
on separate lines. Different executable paths can be configured for each version installed on your render nodes.
9.64.3 FAQ
Which versions of Terragen are supported?
The commercial versions of Terragen 2 and later are supported.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Tile Assembler specific options
are:
Input Tile Files: Select just one of your image tile files from a group to perform the tile assembly for. The files should have the format [PREFIX]_tile_[I]x[J]_[X]x[Y].[EXTENSION]; for example, r:\projects\deadline\Tests\example_tile_1x1_2x1_0000.exr. Ensure the filenames match this naming convention.
Tiles Are Uncropped: Enable this option if a tile consists of the full resolution of the image, with only a part
of it rendered.
Ignore Overlap: If assembling uncropped tiles, enable this option to ignore any overlap that exists for the given
tiles. For example, if two tiles share a few pixels between them.
Clean Up Tile Files After Assembly: Enable to automatically delete the tile files after successfully assembling
the final image.
Opaque Opacity: Use this option if non-EXR tiles use opaque opacity in empty pixels.
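As an illustration of the tile naming convention above, here is a small parser. The regular expression is an assumption based on the documented [PREFIX]_tile_[I]x[J]_[X]x[Y] format, not code shipped with Deadline:

```python
import re

# Assumed pattern; any trailing frame number and the extension are ignored.
TILE_RE = re.compile(r"^(?P<prefix>.+)_tile_(?P<i>\d+)x(?P<j>\d+)_(?P<x>\d+)x(?P<y>\d+)")

def parse_tile_name(filename):
    """Return the prefix and tile indices encoded in a tile filename, or None."""
    m = TILE_RE.match(filename)
    if m is None:
        return None
    parts = m.groupdict()
    return {k: (v if k == "prefix" else int(v)) for k, v in parts.items()}
```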
9.65.3 FAQ
Is the Tile Assembler plugin still officially supported in Deadline?
No. Please note that the Tile Assembler plugin is EOL (End-Of-Life) and we recommend using the newer
Draft Tile Assembler plugin for all tile/region assembly duties. The old Tile Assembler system is still
available in Monitor and via some of the in-app tile rendering submission scripts and will still work.
However, it is now deprecated, so please do not build any in-house tools around Tile Assembler. The
newer Draft Tile Assembler contains all the features of the old Tile Assembler and more! Tile Assembler
will be removed at an undetermined date in the future. You have been warned!
instances which it in turn has spawned as a child process. This can be helpful if V-Ray DBR becomes unstable and
a user wishes to reset the system remotely. You can simply re-queue or delete/complete the current DBR job or
re-submit.
Port Configuration
Here is a consolidated list of port requirements for the various versions of V-Ray. Ensure any applicable firewalls are opened to allow pass-through communication. If in doubt, opening TCP/UDP ports in the range 20200-20300 will typically cover all V-Ray implementations for DBR. During initial testing, it is recommended to open all ports in this range, verify, and then consider tightening up security.
Protocol  Default Port  Application               Notes
TCP/IP    20204         3dsMax V-Ray Production   V-Ray 2.x, V-Ray 3.x - Production and Nightly beta builds (v2 & v3: 20204)
TCP/IP    FIXED         3dsMax / V-Ray Spawner    Used by render servers to broadcast a message that they are ready to join an ongoing DR session (v2 & v3: 20205)
TCP/IP    20206         3dsMax V-Ray RT/GPU       V-Ray 2.x, V-Ray 3.x RT/GPU - Production and Nightly beta builds (v2 & v3: 20206)
TCP/IP    20207         Maya                      V-Ray 2.x and 3.x RT/GPU - Production and Nightly beta builds
TCP/IP    20207         Softimage                 V-Ray 2.x and 3.x - Production
TCP/IP    20207         modo                      V-Ray Standalone
TCP/IP    20207         Rhino                     V-Ray Standalone
TCP/IP    20207         SketchUp                  V-Ray Standalone
TCP/IP    20207         C4D                       V-Ray Standalone
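When verifying firewall rules, a quick connectivity probe can confirm that a render server's DR port is reachable. A minimal sketch; the host name and port in the usage comment are examples, not values Deadline defines:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("rendernode01", 20204) -- example host, default 3dsMax V-Ray port
```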
Submission Options
The general Deadline options are explained in the Job Submission documentation. The V-Ray DBR specific options
are:
Maximum Servers: The maximum number of V-Ray Servers to reserve for distributed rendering.
Port Number (Softimage/Maya/3dsMax/3dsMaxRT only): The port number that V-Ray will use for distributed rendering. In the case of Softimage, this is necessary because Softimage uses V-Ray standalone for distributed rendering, and the default port number for V-Ray in Softimage is different from the default port number in V-Ray standalone. The port number needs to be identical on all machines, including the workstation, for a particular DCC application to communicate correctly. It is recommended to disable any client firewall whilst initial testing/configuration is carried out. Typically, opening TCP/UDP ports in the range 20200-20300 will cover all V-Ray implementations for DBR.
Use Server IP Address instead of Host Name: If checked, the Active Servers list will show the server IP
addresses instead of host names.
Automatically Update Server List (3dsMax only): When un-checked, this option stops the automatic refresh
of the active servers list based on the current Deadline queue.
Complete Job After Render (3dsMax only): When checked, as soon as the DR session has completed (max
quick render finished), then the Deadline job will be marked as complete in the queue.
Active Servers (3dsMax only): Individual Deadline Slaves can be enabled/disabled here (V-Ray Spawner as a
job will still continue to run on the disabled slaves until the job is deleted/completed).
Check ALL/INVERT/Check NONE (3dsMax only): Easily check all, invert, or uncheck all of the currently listed
Deadline Slaves in the Active Servers list.
Rendering
After you've configured your submission options, press the Reserve Servers button to submit the V-Ray Spawner job.
The job's ID and Status will be tracked in the submitter, and as nodes pick up the job, they will show up in the Active
Servers list. Once you are happy with the server list, press Start Render (3ds Max and Maya) or Render Current
Pass/Render All Passes (Softimage) to start distributed rendering.
Note that the V-Ray Spawner/V-Ray standalone process can sometimes take a little while to initialize. This means that
a server in the Active Servers list could have started the V-Ray Spawner, but it's not fully initialized yet. If this is the
case, it's probably best to wait a minute or so after the last server has shown up before pressing Start Render.
The Update Servers button (3dsMax only) manually updates the Active Servers list. Note that if you modify the Maximum
Servers value, the job's frame range will be updated when this button is pressed, or automatically if Automatically
Update Server List is enabled.
After the render is finished, you can press Release Servers or close the submitter UI (Setup V-Ray DBR With Deadline)
to mark the V-Ray Spawner/V-Ray standalone job as complete so that the render nodes can move on to another job.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The V-Ray DBR specific options
are:
Maximum Servers: The maximum number of V-Ray Servers to reserve for distributed rendering.
Application: The application you will be initiating the distributed render from.
Version: The version of the application, if applicable.
Port Number (Softimage/Maya/3dsMaxRT only): The port number that V-Ray will use for distributed rendering. In the case of Softimage, this is necessary because Softimage uses V-Ray standalone for distributed
rendering, and the default port number for V-Ray in Softimage is different from the default port number in
V-Ray standalone. The port number needs to be identical on all machines, including the workstation machine, for
a particular DCC application to communicate correctly. It is recommended to disable any client firewall whilst
initial testing/configuration is carried out. Typically, opening TCP/UDP ports in the range 20200-20300 will
cover all V-Ray implementations for DBR.
9.66. V-Ray Distributed Rendering
Rendering
After you've configured your submission options, press the Submit button to submit the V-Ray Spawner/V-Ray standalone job. Note that this doesn't start any rendering; it just allows the V-Ray Spawner/V-Ray standalone application
to start up on nodes in the farm. Once you're happy with the nodes that have picked up the job, you can initiate the
distributed render manually from within the application (ie: Rhino or Sketchup). This will likely require manually
configuring your V-Ray Server list.
After the distributed render has finished, remember to mark the job as complete or delete it so that the nodes can move
on to other jobs.
V-Ray Executables
Here you can specify the executable used for rendering for the different versions of V-Ray.
DR Process Handling
Handle Existing DR/DBR Process: Only one instance of the same DR process can run over the same port. This
option allows Deadline to either fail the task in this case, or attempt to kill the currently running process so that
the Deadline-managed DR process can run successfully. Note that if the option is set to kill and it does kill a
currently present process, but the process seems to auto-restart after being killed, then the process is already
running as a service, and the service will need to be stopped by your IT staff. Do NOT install it as a service, as
Deadline can NOT support this configuration.
DR Session Timeout (unsupported in 3dsMax)
DR Session Auto Timeout Enable: If enabled, when a DR session has successfully completed on a slave, the
task on the slave will be marked as complete after the DR session auto timeout period (in seconds) has been
reached (Default: False).
DR Session Auto Timeout (Seconds): The timeout period (Default: 30 seconds) after which a DR session will
time out and be marked as complete by a slave.
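The behaviour these two settings describe can be sketched as follows. This is an illustrative outline only, not Deadline's actual implementation; the callable standing in for the DR session's completion state is hypothetical.

```python
import time

def run_dr_session(session_finished, auto_timeout_enabled=False, timeout_seconds=30):
    """Poll a DR session; once it reports finished, optionally wait out the
    auto-timeout period before marking the slave's task complete.
    session_finished is a callable returning True when the DR session is done."""
    while not session_finished():
        time.sleep(0.01)  # poll until the DR session completes
    if auto_timeout_enabled:
        time.sleep(timeout_seconds)  # the "DR Session Auto Timeout" window
    return "complete"
```

With the auto timeout disabled, the task stays open until the reserve job is released or deleted, which matches the default workflow described in the Rendering sections above.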
Maya
The following procedure describes how to install the integrated V-Ray DBR submission script for Maya. The integrated submission script and the following installation procedure have been tested with Maya versions 2012 and later.
You can either run the Submitter installer or manually install the submission script.
Submitter Installer
Run the Submitter Installer located at <Repository>/submission/MayaVRayDBR/Installers
Manual Installation of the Submission Script
On Windows, copy the file [Repository]\submission\MayaVRayDBR\Client\DeadlineMayaVRayDBRClient.mel
to [Maya Install Directory]\scripts\startup. If you do not have a userSetup.mel in [My Documents]\maya\scripts,
copy the file [Repository]\submission\MayaVRayDBR\Client\userSetup.mel to [My Documents]\maya\scripts.
If you have a userSetup.mel file, add the following line to the end of this file:
source "DeadlineMayaVRayDBRClient.mel";
The next time Maya is started, a Deadline shelf should appear with an orange button that can be clicked on to
launch the submitter.
If you don't see the Deadline shelf, it's likely that Maya is loading another userSetup.mel file from somewhere else.
Maya can only load one userSetup.mel file, so you either have to configure Maya to point to the file mentioned
above, or you have to modify the file that Maya is currently using, as explained above. To figure out which
userSetup.mel file Maya is using, open Maya and then open the Script Editor. Run this command:
whatIs userSetup.mel
Softimage
The following procedure describes how to install the integrated V-Ray DBR submission script for Softimage. The
integrated submission script and the following installation procedure have been tested with Softimage versions 2012
and later.
Submitter Installer
Run the Submitter Installer located at <Repository>/submission/SoftimageVRayDBR/Installers
Manual Installation of the Submission Script
Copy the file [Repository]/submission/SoftimageVRayDBR/Client/DeadlineSoftimageVRayDBRClient.py to
the folder [Softimage Install Directory]/Application/Plugins
Launch Softimage. The submission script is automatically installed when Softimage starts up. To make sure the
script was installed correctly, select the Render toolbar on the left and click the Render button. A Setup V-Ray
DBR With Deadline menu item should be available.
9.66.5 FAQ
Is V-Ray Distributed Rendering (DBR) supported?
Yes. A special reserve job is submitted that will run the V-Ray Spawner/V-Ray standalone process on
the render nodes. Once the V-Ray Spawner/V-Ray standalone process is running, these nodes will be able
to participate in distributed rendering.
Which versions of V-Ray DBR are supported?
V-Ray DBR interactive rendering is supported for 3ds Max, Maya, and Softimage 2012-2015. You can
also submit V-Ray Spawner jobs for Rhino and Sketchup from the Monitor. In the latter case, the render
nodes will simply be reserved for DBR, and the distributed rendering process itself will have to be initiated
manually from within Rhino or Sketchup.
V-Ray Slave or V-Ray Spawner application fails to start manually?
During the initial configuration of V-Ray DBR, and during any future debugging, it is recommended to
disable any firewall and anti-virus software on both the DBR master host machine and all render slave
machines which are intended to participate in the DBR render. We suggest you manually get V-Ray DBR
up and running in your studio pipeline to verify all is well, before then introducing Deadline as a framework
to handle the Spawner/Slave process.
Is Backburner required for 3dsMax based V-Ray DBR rendering via Deadline?
Yes. Normal 3dsMax rendering via Deadline requires the Backburner dlls to be present on a system,
and this is the same prerequisite for V-Ray DBR rendering to work correctly. Ensure you have the
latest/corresponding version of Backburner so that it supports the version of 3dsMax you are using. You
can submit a normal 3dsMax render job to verify that Backburner and 3dsMax rendering via Deadline are
operating correctly before attempting to configure V-Ray DBR rendering. Use the Deadline job report
to verify that the versions of Backburner and 3dsMax are correctly matched.
3dsmax.exe starts (via vrayspawnerYYYY.exe) in the taskbar (minimized) but then instantly disappears?
V-Ray DBR rendering requires Deadline to have rendered at least one normal 3dsMax render job on the
slave machine prior to attempting DBR rendering via vrayspawnerYYYY.exe. Essentially, to test/debug if
this is an issue, try to manually start the vrayspawnerYYYY.exe program from the Start menu (Start menu
> Programs > Chaos Group > V-Ray for 3dsmax > Distributed rendering > Launch V-Ray DR spawner).
It will automatically try to find the 3dsmax.exe file and start it in server mode. You should end up with
3dsmax minimized in the task bar with the title vraydummyYYYY.max. If 3ds Max stays there alive
without closing then V-Ray DBR is working correctly. If you see the 3ds Max window flashing on the
taskbar and then instantly disappearing, right-click on the V-Ray DR spawner icon in the taskbar tray,
select Exit to close the DR spawner application, and try submitting a regular Deadline 3dsMax render
job with this machine running Deadline slave. After that, try to start the V-Ray DR spawner again.
Do I need to run the vrayspawner (or RT/vrayslave/vray standalone) application or install vrayspawner (or
RT/vrayslave/vray standalone) executable as a service/daemon on each machine?
No. Do NOT execute or install the Chaos Group V-Ray Spawner (V-Ray Spawner/V-Ray Spawner RT/V-Ray
standalone) executable as a background service (NT service/daemon). Deadline is more flexible here
and will spawn the V-Ray Spawner/standalone executable as a child process of the Deadline Slave. This
makes our system more flexible and resilient to crashes: when we terminate the V-Ray DBR job in the
Deadline queue, the Deadline Slave application will cleanly tidy up the V-Ray Spawner/standalone process
and, more importantly, any DCC application (3dsMax/Maya) or standalone instances which it in turn has
spawned as child processes. This can be helpful if V-Ray DBR becomes unstable and a user wishes to
reset the system remotely. You can simply re-queue, delete/complete, or re-submit the current DBR job.
Can I force V-Ray Spawner/Slave to run over a certain control port?
Yes. Set the system environment variable VRAY_DR_CONTROLPORT to the required port number,
or, where possible, use the Port Number option that we expose in our Monitor/in-app submitters for
some supported applications. Please consult the V-Ray version 2 or version 3 user manual for more
information on TCP/IP port numbers.
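For example, the variable could be set in the environment of the process that launches the Spawner, so the child inherits it. This is a sketch; the executable name in the comment is illustrative, and the exact effect of VRAY_DR_CONTROLPORT depends on your V-Ray version.

```python
import os

# Build an environment with the V-Ray DR control port pinned to a known value.
env = dict(os.environ, VRAY_DR_CONTROLPORT="20207")

# Any child process launched with this environment inherits the variable, e.g.:
# subprocess.Popen(["vrayspawner2015.exe"], env=env)  # illustrative executable name
print(env["VRAY_DR_CONTROLPORT"])
```

Setting the variable system-wide (rather than per-process) achieves the same thing, but pinning it per-launch keeps different DR sessions on a shared machine from colliding.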
Can I force V-Ray DBR to run over a specific port for 3dsMax?
Yes. The V-Ray production renderer via 3dsMax uses a TCP port (default: 20204), which can
be changed via the Port Number spinner. V-Ray RT as the renderer uses a different TCP port (default: 20206). See here for more information on Port Configuration. Please consult the V-Ray version 2 or
version 3 user manual for more information on TCP/IP port numbers. Note, the Port Number can only
be controlled via the 3dsMax in-app submitter and NOT when reserving a V-Ray DBR job for 3dsMax
via the Deadline Monitor submission script.
V-Ray DBR rendering seems a little unstable sometimes or my machine slows down dramatically!
Depending on the number of slave machines being used (Win7 OS < 20), the scene file sizes being moved
around together with asset files, and your network/file storage configuration, it may help to exclude your
local machine from participating in the DR render process. Depending on the 3D application used and
the V-Ray version, there might be a Use local host or Don't use local machine checkbox option,
which can help to reduce the load on your local machine.
Can I fully off-load 3dsMax V-Ray or Mental Ray DBR rendering from my machine?
Yes, although please note, this is a different workflow and is supported directly in the 3dsmax plugin. See
the V-Ray/Mental Ray DBR section for more information.
error: Failed to start network server: Failed to open listening port (98)
VRay.exe/vrayslave has been configured as a service/daemon on the machine generating this error message, possibly during the V-Ray/Maya install process, and this is conflicting with Deadline trying to
also spawn the same process on the same TCP port (default: 20207). On Linux, ensure you check
the contents of the file /usr/autodesk/maya20##-x64/vray/bin/vrayslave (where ## is the Maya
version) for a line entry such as: /usr/autodesk/maya2014-x64/vray/bin/vray.bin $* -server -portNumber=20207.
This line entry should not be present. Note, we are unable to attach to an already running process
as part of the V-Ray Spawner Plugin, hence the V-Ray executable must NOT already be running. Do NOT
execute or install V-Ray as a service. Deadline is more flexible here and will spawn the executable as a
child process of the Deadline Slave.
Render Executables
VRay Executable: The path to the VRay executable file used for rendering. Enter alternative paths on separate
lines.
Path Mapping For vrscene Files (For Mixed Farms)
Enable Path Mapping For vrscene Files: If enabled, a temporary vrscene file will be created locally on the
slave for rendering and Deadline will do path mapping directly in the vrscene file.
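Conceptually, the plugin rewrites path prefixes inside a temporary copy of the vrscene before rendering. The following is a simplified sketch of that idea, not Deadline's implementation; the actual mappings come from the path-mapping settings configured in the Deadline Repository.

```python
import os
import tempfile

def write_mapped_vrscene(src_path, mappings):
    """Write a locally mapped copy of a vrscene file and return its path.
    mappings is a list of (old_prefix, new_prefix) pairs applied in order."""
    with open(src_path, "r", encoding="utf-8", errors="replace") as f:
        text = f.read()
    for old, new in mappings:
        text = text.replace(old, new)
    # Render from a temp copy so the original shared vrscene is never modified.
    fd, out_path = tempfile.mkstemp(suffix=".vrscene")
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        f.write(text)
    return out_path
```

On a mixed farm, the mappings would translate something like //server/projects on Windows submitters into /mnt/projects on Linux slaves, so every node can resolve the asset paths in the same scene.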
9.67.3 FAQ
Is VRay Standalone supported?
Yes.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Ply2Vrmesh specific options
are:
Input File: The file to be converted.
Output File: Optionally override the output file name. If left blank, the output name will be the same as the
input name (with the .vrmesh extension).
Append: Appends the information as a new frame to the .vrmesh file.
Merge Voxels: Merges objects before voxelization to reduce overlapping voxels.
Smooth Angle: A floating-point number that specifies the angle (in degrees) used to decide whether normals
should be smoothed. If present, it automatically enables the -smoothNormals flag.
Smooth Normals: Generates smooth vertex normals. Only valid for .obj and .geo files; always enabled for .bin
files.
Map Channel: Stores the UVW coordinates in the specified mapping channel (default is 1). Only valid for .obj
and .geo files. When exporting a mesh that will be used in Maya, this currently must be set to 0 or the textures
on the mesh will not render properly.
FPS: A floating-point number that specifies the frames per second at which a .geo or .bin file is exported, so that
vertex velocities can be scaled accordingly. The default is 24.0.
Preview Faces: Specifies the maximum number of faces in the .vrmesh preview information. Default is 9973
faces.
Faces Per Voxel: Specifies the maximum number of faces per voxel in the resulting .vrmesh file. Default is
10000 faces.
Preview Hairs: Specifies the maximum number of hairs in the .vrmesh preview information. Default is 500
hairs.
Segments Per Voxel: Specifies the maximum number of segments per voxel in the resulting .vrmesh file. Default is
64000 segments.
Hair Width Multiplier: Specifies the multiplier used to scale hair widths in the resulting .vrmesh file. Default is 1.0.
Preview Particles: Specifies the maximum number of particles in the .vrmesh preview information. Default is
20000 particles.
Particles Per Voxel: Specifies the maximum number of particles per voxel in the resulting .vrmesh file. Default is
64000 particles.
Particle Width Multiplier: Specifies the multiplier used to scale particle widths in the resulting .vrmesh file. Default is
1.0.
Velocity Attr Name: Specifies the name of the point attribute used to generate the velocity
channel. By default, the v attribute is used.
Disable Color Set Packing: Only valid for .geo and .bgeo files; disables the packing of float1 and float2 attributes in vertex color sets.
Material IDs: Only valid for .geo files; assigns material IDs based on the primitive groups in the file.
Flip Normals: Reverses the face/vertex normals. Only valid for .obj, .geo and .bin files.
Flip Vertex Normals: Reverses the vertex normals. Only valid for .obj, .geo and .bin files.
Flip Face Normals: Reverses the face normals. Only valid for .obj, .geo and .bin files.
Flip YZ: Swaps the y/z axes. Needed for some programs, e.g. Poser and ZBrush. Valid for .ply, .obj, .geo and .bin files.
Flip Y Positive Z: Same as -flipYZ but does not reverse the sign of the z coordinate.
Flip X Positive Z: Same as -flipYPosZ but swaps the x/z axes.
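The options above map onto ply2vrmesh command-line switches. A hedged sketch of assembling such a command follows; the flag spellings shown (-smoothNormals, -flipYZ, -fps) are taken from the option descriptions above, but verify them against the help output of your installed ply2vrmesh version before relying on them.

```python
def build_ply2vrmesh_cmd(exe, input_file, output_file=None, flags=(), options=None):
    """Assemble a ply2vrmesh argument list from submitter-style options."""
    cmd = [exe, input_file]
    if output_file:
        cmd.append(output_file)
    cmd.extend(flags)  # boolean switches, e.g. "-flipYZ"
    for key, value in (options or {}).items():  # valued switches, e.g. -fps 24.0
        cmd += [key, str(value)]
    return cmd

cmd = build_ply2vrmesh_cmd("ply2vrmesh", "hair.geo",
                           flags=["-smoothNormals", "-flipYZ"],
                           options={"-fps": "24.0"})
print(" ".join(cmd))
# On a render node: subprocess.run(cmd, check=True)
```

Deadline performs this assembly internally from the submitter options; the sketch is only meant to show how the options translate into a single command line.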
Render Executables
Ply2Vrmesh Executable: The path to the ply2vrmesh.exe executable file used for rendering. Enter alternative
paths on separate lines. Different executable paths can be configured for each version installed on your render
nodes.
9.68.3 FAQ
Which versions of Ply2Vrmesh are supported?
Ply2Vrmesh for VRay 2 and 3 are currently supported.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The Vrimg2Exr specific options
are:
VRay Image File: The VRay Image file(s) to be converted. If you are submitting a sequence of files, you only
need to select one vrimg file from the sequence.
Output File: Optionally override the output file name (do not specify padding). If left blank, the output name
will be the same as the input name (with the exr extension).
Frame List: The list of frames to convert.
Specify Channel: Enable this option to read the specified channel from the vrimg file and write it as the RGB
channel in the output file.
Long Channel Names: Enable channel names with more than 31 characters. Produced .exr file will NOT be
compatible with OpenEXR 1.x if a long channel name is present.
Set Gamma: Enable this option to apply the specified gamma correction to the RGB colors before writing to
the exr file.
Crop EXR Data Window: Enable this option to auto-crop the EXR data window.
Set Buffer Size: Enable this option to set the maximum allocated buffer size per channel in megabytes. If the
image does not fit into the max buffer size, it is converted in several passes.
Store EXR Data as 16-bit (Half): Enable this option to store the data in the .exr file as 16-bit floating point
numbers instead of 32-bit floating point numbers.
Set Compression: Enable this option to set the compression type. The Zip method is used by default.
Separate Files: Writes each channel into a separate .exr file.
Threads: The number of computation threads. Specify 0 to use the number of processors available.
Multi Part: Writes each channel into a separate OpenEXR2 part.
Convert RGB Data to the sRGB Color Space: Enable this option to convert the RGB data from the vrimg
file to the sRGB color space (instead of linear RGB space) before writing to the exr file.
Delete Input vrimg Files After Conversion: Enable this option to delete the input vrimg file after the conversion has finished.
Render Executables
Vrimg2Exr Executable: The path to the vrimg2exr.exe executable file used for rendering. Enter alternative
paths on separate lines.
9.69.3 FAQ
Is Vrimg2Exr supported?
Yes.
9.70 VRED
9.70.1 Job Submission
You can submit jobs for VRED from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation.
The VRED specific options are:
Version: Which version of VRED to use.
Job Type: What type of job to submit.
Render Executables
VRED 2015 Executable: The path to the VRED 2015 executable file used for rendering. Enter alternative paths
on separate lines. Different executable paths can be configured for each version installed on your render nodes.
VRED 2016 Executable: The path to the VRED 2016 executable file used for rendering. Enter alternative paths
on separate lines. Different executable paths can be configured for each version installed on your render nodes.
9.70.3 FAQ
Is VRED supported by Deadline?
Yes.
Can VRED render non-GUI via Deadline?
Yes. We already pass an additional argument at render time (-nogui) to force a non-GUI session of VRED
Pro to run during network rendering. If you would prefer to save your VRED Pro license from being used
in non-GUI mode, then please consider using VRED Server Node instead, which has no GUI. See the
next FAQ for more information.
Is VRED via Deadline able to render using VRED Server Node (render node) Licenses?
Yes. In order to render using VRED Server Node (render node) licenses (and save on using expensive
VRED Pro licenses), you should edit the VRED Render Executable path to point to VREDServerNode.exe
instead of VREDPro.exe. Note, you must actually own some Autodesk VRED Render Node 20xx licenses
(where xx is the YEAR) to be able to use the VREDServerNode.exe executable, as it does NOT use
the same license that VREDPro.exe uses. Please note that we believe these Autodesk VRED Render
Node licenses are actually referred to on the ADSK pricing list as Autodesk Raytracing Cluster Module
for Autodesk VRED 20xx, if you have trouble finding them via your ADSK reseller.
Submission Options
The general Deadline options are explained in the Job Submission documentation.
The VRED Cluster specific options are:
Cluster Count: The number of tasks/maximum number of slaves to create in the cluster. Default: 1
Port Number: The port number to be used for the cluster service. Default: 8889. Ensure the firewall is open on this port.
VRED Version: The VRED application version to use.
Cluster Executables
VRED 2015 Cluster Executable: The path to the VRED 2015 Cluster executable file used for rendering. Enter
alternative paths on separate lines. Different executable paths can be configured for each version installed on
your render nodes.
VRED 2016 Cluster Executable: The path to the VRED 2016 Cluster executable file used for rendering. Enter
alternative paths on separate lines. Different executable paths can be configured for each version installed on
your render nodes.
9.71.3 FAQ
Is VRED supported by Deadline?
Yes.
9.72 Vue
9.72.1 Job Submission
You can submit jobs from within Vue, or you can submit them from the Monitor.
If you are submitting an animation from within Vue, select Animation -> Animation Render Options, then do the
following:
Find the Renderer section, select Network Rendering/RenderNode Network, then press the Edit button.
In the Options dialog that pops up, enter the submission command described below.
You can also enter the folder you want the temporary Vue scene file saved in during submission. By default, you
should be able to leave this blank. Press OK when finished.
Press Render Animation to bring up the submission dialog.
This is the submission command to submit a job from within Vue. Make sure this is entered as one line, and make sure
to set the deadlinecommand.exe and repository paths correctly. Note that the last two arguments, 10 and 64bit, are
optional and are used to automatically populate the Version and Build settings respectively. Check the Vue submission
dialog in the Monitor for the available options for Version and Build.
"[Client Bin Folder]\deadlinecommand.exe" -executescript
[Repository]\scripts\submission\VueSubmission\VueSubmission.py
"[FILE_PATH]" "[SCENE_NAME]" "[NUM_FRAMES]" 10 64bit
Submission Options
The general Deadline options are explained in the Job Submission documentation, and the Draft/Integration options
are explained in the Draft and Integration documentation. The Vue specific options are:
Vue File: The Vue scene file to be rendered.
Render animation sequence: Whether or not to render the full animation.
Version: The version of Vue to render with.
Build To Force: Force 32 bit or 64 bit rendering.
Render Executables
Vue Executable: The path to the Vue executable file used for rendering. Enter alternative paths on separate
lines. Different executable paths can be configured for each version installed on your render nodes.
9.72.3 FAQ
Which versions of Vue are supported?
Vue 6 and later are supported (Infinite and xStream editions).
I have Vue render node licenses, but when I render with Deadline, I get the error No serial number found.
If this is the case, it means that Vue can't get a license. If you have render node licenses for Vue, you
need to use the *RenderNode.exe executable (ie: Vue 9 xStream RenderNode.exe) instead of the
StandaloneRenderer.eon executable for rendering.
9.73 xNormal
9.73.1 Job Submission
You can submit xNormal jobs from the Monitor.
Submission Options
The general Deadline options are explained in the Job Submission documentation. The xNormal specific options are:
XML File: The xNormal XML file to render.
Build To Force: Force 32 bit or 64 bit rendering.
Render Executables
xNormal Executable: The path to the xNormal executable file used for rendering. Enter alternative paths on
separate lines.
9.73.3 FAQ
Is xNormal supported?
Yes.
CHAPTER
TEN
EVENT PLUGINS
10.1 Draft
10.1.1 Overview
Draft is a tool that provides simple compositing functionality. It is implemented as a Python library, which exposes
functionality for use in python scripts. Draft is designed to be tightly integrated with Deadline, but it can also be used
as a standalone tool.
Using Deadline's Draft plugin, artists can automatically perform simple compositing operations on rendered frames
after a render job finishes. They can also convert them to a different image format, or generate Quicktimes for dailies.
The options available here are similar to those discussed in the Draft Plugin section. Although it might appear as
though there are fewer options here than in the Monitor submitter, all the same information will get passed to the Draft
template. This approach just allows us to automatically pull a lot of the needed info directly from the scene file and
from information filled in elsewhere in the submitter.
10.1.3 Setup
Since Draft is shipped alongside Deadline, there is not a whole lot of configuration needed for this event
plugin to work (beyond simply enabling it). There are, however, options that allow you to select the priority, group,
and pool to which the Draft event plugin will submit Draft jobs.
To access these settings, simply enter Super User mode and select Tools -> Configure Events from the Monitor's menu.
From there, select the Draft entry from the list on the left.
10.2 FontSync
10.2.1 Overview
The FontSync event plugin can be used to synchronize fonts from a central server to Windows and Mac OS X render
nodes. It can be configured to synchronize the fonts when the Slave application is launched on the render node, or
before each job the Slave renders.
The font folder on the central server must be accessible by the render nodes, and it is recommended to use separate
folders for Windows and Mac OS X fonts.
10.2.2 Setup
Some configuration is needed to use the FontSync event plugin. To access these settings, simply enter Super User
mode and select Tools -> Configure Events from the Monitor's menu. From there, select the FontSync entry from the
list on the left.
General Options
Enabled: If this event plugin is enabled.
Perform Font Synchronization: If the event plugin should synchronize fonts when a slave starts up, or before
each job it renders.
Mac OSX Font Synchronization Options
Network Mac OSX Font Folder: The network Mac OSX font folder used for synchronization.
Local Mac OSX Font Folder: The local Mac OSX font folder to synchronize with the network font folder.
Enter alternative paths on separate lines.
Windows Font Synchronization Options
Network Windows Font Folder: The network Windows font folder used for synchronization.
Use User's Temp Folder as Font Folder: If enabled, the fonts will be copied to a DeadlineFonts folder in the
current user's TEMP folder. Using this option avoids having to create a font folder on each machine, and avoids
permission issues.
Local Windows Font Folder: The local Windows font folder to synchronize with the network font folder. Enter
alternative paths on separate lines. This is ignored if Use User's Temp Folder as Font Folder is enabled.
Timeout For Font Registration (ms): The number of milliseconds the event plugin will wait per font before timing
out when registering fonts.
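The synchronization the plugin performs amounts to copying fonts that are missing locally or newer on the server. The following is a simplified sketch of that copy step only; the real plugin also registers the fonts with the operating system (which is what the registration timeout above governs).

```python
import os
import shutil

def sync_fonts(network_folder, local_folder):
    """Copy fonts from the network folder that are missing locally or newer
    than the local copy; return the names of the files copied."""
    os.makedirs(local_folder, exist_ok=True)
    copied = []
    for name in os.listdir(network_folder):
        src = os.path.join(network_folder, name)
        dst = os.path.join(local_folder, name)
        if not os.path.isfile(src):
            continue
        if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
            shutil.copy2(src, dst)  # copy2 preserves timestamps for the next comparison
            copied.append(name)
    return copied
```

Comparing modification times keeps repeat runs cheap: a second sync against an unchanged network folder copies nothing.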
10.3 ftrack
10.3.1 Overview
ftrack is a cloud-based Project Management tool that provides Production Tracking, Asset Management, and Team
Collaboration tools to digital studios; see the ftrack website for more information.
Using Deadline's ftrack event plugin, artists can automatically create new Asset Versions in ftrack when they submit
a render Job to the farm. When a Job completes, Deadline will automatically update the associated Asset Versions with
the proper Status, Thumbnail, and Components (if the output location is known).
Choose ftrack from the Project Management drop-down, and then press the Connect button to bring up Deadline's
ftrack browser. Enter your ftrack Login Name and press Connect. If the connection is successful, Deadline will collect
the list of Projects and Tasks you are assigned to. If there are problems connecting, Deadline will try to display the
appropriate error message to help you diagnose the problem.
After you have selected a Task and Asset, you must specify a Version Description.
After you have configured the Version information, press OK to return to the Nuke submitter. The ftrack settings will
now contain the Version information you just specified. To include this information with the job, leave the Create
New Version option enabled. If you want to change the Version name or description before submitting, you can do so
without reconnecting to ftrack.
You can now press OK to submit the job. If the ftrack event plugin is configured to create the new version during
submission, the log report from the ftrack event plugin will show the Version's ID. Otherwise, the Version won't be
created in ftrack until the job completes.
You can view the log report for the job by right-clicking on the job in the Monitor and selecting View Job Reports.
Choose ftrack from the Project Management drop-down, and then press the Connect button to bring up Deadline's
ftrack browser. Enter your ftrack Login Name and press Connect. If the connection is successful, Deadline will collect
the list of Tasks you are assigned to. If there are problems connecting, Deadline will try to display the appropriate
error message to help you diagnose the problem.
After you have selected a Task and Asset, press OK to return to the Quicktime submitter. The ftrack settings will now
contain the Version information you just specified. To upload the movie file to the selected Version, leave the Create
New Version option enabled.
You can now press OK to submit the job. When the job finishes, the rendered movie will automatically be uploaded to
the selected Version.
10.3.3 Setup
In order to be able to create versions within Deadline, you must first follow the steps below to set up Deadline's connection to ftrack.
The name of the key doesn't matter much (as long as it's descriptive), but make sure Enabled is set to On and that you select the API role. Once you've filled in all the values, click the Create button to finalize the key's creation.
Once you've created the new entry, take note of its Key value; you will need this when configuring Deadline in the next step.
Configure Deadline
Once you've created an API Key as detailed above, you can now set up the Event Plugin to connect to ftrack. To perform this setup, you need to enter Super User Mode (from the Tools menu), and then select Tools -> Configure Events. Once in the Event Plugin Configuration window, select FTrack from the list on the left.
This is where you will configure all the ftrack-relevant settings in Deadline. There are several different categories of
settings you can configure; they are described in more detail below.
Options
This section contains general high-level options that control the behaviour of Deadline's ftrack integration.
Enabled: This will turn Deadline's ftrack integration on/off. In order for this feature to function properly, this must be set to True.
Create Version On Submission: This setting controls when an Asset Version is created in ftrack. If this is True, the Asset Version will be created when a Job is submitted. If this is False, the Asset Version will only be created when the Job completes.
Connection Settings
This section contains information that Deadline uses to connect to the ftrack API; these settings must be configured
properly in order for this feature to work at all.
FTrack URL: This is the URL which you use to connect to your ftrack installation.
FTrack Proxy: The proxy you use to connect to ftrack. This is only relevant if you use a Proxy; if in doubt,
leave this field blank.
FTrack API Key: This is where you must enter the API Key created in the Create API Key step.
Version Status Mappings
This section contains mappings from Deadline Job statuses to ftrack Asset Version statuses. These are optional, but if they are specified, Deadline will update the status of Asset Versions as Deadline Jobs change status (based on the mappings provided).
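Conceptually, each mapping behaves like a dictionary lookup from a Deadline job status to an ftrack status. The sketch below uses hypothetical status names; your actual ftrack workflow statuses will likely differ:

```python
# Hypothetical mapping from Deadline job statuses to ftrack Asset
# Version statuses. The ftrack-side names depend entirely on your
# workflow schema; these values are illustrative only.
VERSION_STATUS_MAP = {
    "Queued": "Pending",
    "Rendering": "In Progress",
    "Completed": "Pending Review",
    "Failed": "Error",
}

def ftrack_status_for(job_status):
    # If no mapping is configured for a status, return None so the
    # Asset Version status can be left untouched.
    return VERSION_STATUS_MAP.get(job_status)
```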
Rename ExtraInfo Columns
The ftrack integration uses ExtraInfo columns 0-5 to display relevant information about the Asset Versions that are tied to Deadline Jobs. Given that ExtraInfo0 isn't exactly a descriptive name for what that column is being used for in this context, many people find it useful to rename these columns to be more descriptive.
To do so, you must be in Super User mode and select Tools -> Repository Options. You must then go to the Job Settings section, and select the Extra Properties tab; from here you'll be able to change these column names to something more appropriate.
10.4 Puppet
10.4.1 Overview
Puppet is a configuration management system that can be used to keep applications and plugins synchronized across your render nodes. See the Puppet Labs Website for more information.
The Puppet event plugin that ships with Deadline can be used to run a Puppet update on a slave when it starts and when it becomes idle, thus allowing you to keep your render nodes in sync without interrupting jobs that are currently rendering.
Note that Puppet must already be configured to work outside of Deadline. Once your Puppet system is set up, you can
then enable the Puppet event plugin for Deadline to automatically trigger Puppet updates.
10.4.2 Setup
Some configuration is needed to use the Puppet event plugin. To access these settings, simply enter Super User mode and select Tools -> Configure Events from the Monitor's menu. From there, select the Puppet entry from the list on the left.
10.5 Salt
10.5.1 Overview
Salt (or SaltStack) is a configuration management system that can be used to keep applications and plugins synchronized across your render nodes. See the SaltStack Website for more information.
The Salt event plugin that ships with Deadline can be used to run a Salt update on a slave when it starts and when it becomes idle, thus allowing you to keep your render nodes in sync without interrupting jobs that are currently rendering.
Note that Salt must already be configured to work outside of Deadline. Once your Salt system is set up, you can then
enable the Salt event plugin for Deadline to automatically trigger Salt updates.
10.5.2 Setup
Some configuration is needed to use the Salt event plugin. To access these settings, simply enter Super User mode and select Tools -> Configure Events from the Monitor's menu. From there, select the Salt entry from the list on the left.
10.6 Shotgun
10.6.1 Overview
Shotgun is a customizable web-based Production Tracking system for digital studios, and is developed by Shotgun
Software.
Using Deadline's Shotgun event plug-in, artists can automatically create new Versions for Shots or Tasks in Shotgun
when they submit a render job to the farm. When the job finishes, Deadline can automatically update the Version by
uploading a thumbnail and marking it as complete or pending for review.
Choose Shotgun from the Project Management drop down, and then press the Connect button to bring up Deadline's Shotgun browser. Enter your Shotgun Login Name and press Connect. If the connection is successful, Deadline
will collect the list of Tasks you are assigned to. If there are problems connecting, Deadline will try to display the
appropriate error message to help you diagnose the problem.
After you have selected a Task, you must specify a Version name and a description. If you have configured Version
name templates in the Shotgun event plugin configuration, you can select one from the drop down. You can also
manually type in the version name instead.
After you have configured the Version information, press OK to return to the Nuke submitter. The Shotgun settings
will now contain the Version information you just specified. To include this information with the job, leave the Create
New Version option enabled. If you want to change the Version name or description before submitting, you can do so
without reconnecting to Shotgun.
You can now press OK to submit the job. If the Shotgun event plugin is configured to create the new version during submission, the log report from the Shotgun event plugin will show the Version's ID. Otherwise, the Version won't be created in Shotgun until the job completes.
You can view the log report for the job by right-clicking on the job in the Monitor and selecting View Job Reports.
Choose Shotgun from the Project Management drop down, and then press the Connect button to bring up Deadline's Shotgun browser. Enter your Shotgun Login Name and press Connect. If the connection is successful, Deadline
will collect the list of Tasks you are assigned to. If there are problems connecting, Deadline will try to display the
appropriate error message to help you diagnose the problem.
After you have selected a Task, you can select a Version for that Task. Then press OK to return to the Quicktime
submitter. The Shotgun settings will now contain the Version information you just specified. To upload the movie file
to the selected Version, leave the Create New Version option enabled.
You can now press OK to submit the job. When the job finishes, the rendered movie will automatically be uploaded to
the selected Version.
10.6.4 Setup
Follow these steps to set up Deadline's connection to Shotgun.
Create the API Script in Shotgun
In Shotgun, you must first create a new API script so that Deadline can communicate with Shotgun. This can be done from the Admin menu.
After the Scripts page is displayed, press the [+] button to create a new script, and enter the following information in the window that appears. If you can't see one or more of the following fields, use the More Fields drop down to show them.
Script Name: deadline_integration
Description: Script for Deadline integration
Version: 1.0
Permission Group: API Admin
After you have created the new script, click on the deadline_integration link in the Scripts list and note the value in the Application Key field (it's a long key consisting of alphanumeric characters). You'll need this key when configuring Deadline's Shotgun connection in the next step.
The event plugin settings are split up into a few sections. The most important sections are the Options and Connection
Settings, as these control how Deadline connects to Shotgun. In most cases, the Field and Value Mapping sections can
be left alone because they map to fields that exist in the default Shotgun installation. Only studios that have deeply
customized their Shotgun installations might have to worry about changing the Field and Value Mapping settings.
Options
Enabled: The Shotgun event plugin must be enabled before Deadline can connect to Shotgun.
Create Version On Submission: If enabled, Deadline will create the Shotgun Version at time of submission
and update its status as the job progresses. Otherwise, the Version will only be created once the job completes.
Enable Advanced Workflow: If enabled, the user can select a Project and Entity instead of just a Task.
Thumbnail Frame: The frame to upload to Shotgun as a thumbnail.
Convert Thumbnails with Draft: Whether or not to attempt to use Draft to convert the Thumbnail frame prior
to upload.
Thumbnail Conversion Format: The format to convert the Thumbnail to prior to upload (see above).
Version Templates: Presets for Version names that users can select from (one per line). Available tokens include
${project}, ${shot}, ${task}, ${user}, and ${jobid}. For example:
${project} - ${shot} - ${task}
${project}_${shot}_${task} (${jobid})
Enable Verbose Errors: Whether or not detailed (technical) error information should be displayed when errors
occur while connecting to Shotgun.
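The ${...} tokens above follow the same syntax as Python's string.Template, so a template of this kind could be expanded roughly as follows (a sketch only; the submitter's actual expansion logic may differ, and the values shown are illustrative):

```python
from string import Template

def expand_version_template(template, context):
    # safe_substitute leaves unknown tokens in place rather than
    # raising, which mirrors forgiving template behaviour.
    return Template(template).safe_substitute(context)

# Example expansion using the tokens listed above.
name = expand_version_template(
    "${project}_${shot}_${task} (${jobid})",
    {"project": "Alpha", "shot": "sh010", "task": "comp", "jobid": "42"},
)
```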
Connection Settings
Shotgun URL: Your Shotgun URL.
Shotgun Proxy: Your proxy (if you use one).
API Script Name: The name of the API script you created in Shotgun earlier (deadline_integration).
API Application Key: The key from the script you created in Shotgun earlier (it's a long key consisting of alphanumeric characters).
Shotgun Field Mappings
These are the Version fields that Deadline is expecting to exist in Shotgun. The default values match those from a
default Shotgun installation, so you will only have to edit these settings if you have customized the Version Field
names in your Shotgun installation.
Note that some of the Fields you can specify aren't created by default in Shotgun. You will have to manually create those fields in Shotgun and specify their names here, if you wish to use them. Examples of such fields are Deadline Job ID and Average/Total Render Time.
Status Value Mappings
These are the Version status values that Deadline is expecting to exist in Shotgun. The default values match those from
a default Shotgun installation, so you will only have to edit these settings if you have customized the Version Status
values in your Shotgun installation.
Draft Field Mappings
Draft Template Field: The field code for a Task field that contains a Draft Template relevant to the Task. If this
is specified, Deadline can automatically pull in the specified template at submission time.
Test the Shotgun Connection
After you have configured the Shotgun connection, you can test it from the Deadline Monitor by selecting Scripts ->
TestIntegrationConnection. This will bring up the Test Integration Connection dialog.
Choose Shotgun from the Project Management drop down, and then press Connect. If the connection is successful,
Deadline will collect the list of Tasks you are assigned to. If there are problems connecting, Deadline will try to
display the appropriate error message to help you diagnose the problem.
Rename the Extra Info properties as shown in the following image. After committing these changes, you will be able to see these Shotgun-specific columns in the Job List in the Monitor.
10.6.5 FAQ
Which editions of Shotgun does Deadline support?
Deadline supports the Studio and Partner editions of Shotgun, because those editions include the necessary
API access.
Which versions of Shotgun does Deadline support?
Deadline supports Shotgun 2.3 and later.
Which version of the Shotgun API does Deadline use?
Deadline 7.1 ships with version 3.0.17 of the Python Shotgun API.
CHAPTER
ELEVEN
CLOUD PLUGINS
11.1.2 Configuration
Before you can configure the Amazon EC2 plugin for Deadline, you must add Amazon as a provider in the Cloud
Providers dialog in the Monitor. The Amazon EC2 plugin requires only a few credentials before it can be used in
Deadline. These can be collected from the Amazon EC2 web site (see the image below).
Configuration Settings
General
Enabled: Enables the cloud region for use in Deadline.
Options
Access Key ID: Your EC2 Access key.
Secret Access Key: Your EC2 secret key.
Region: The EC2 region you want to use.
Account Number: Your EC2 account number. Used to filter the image list.
VM Configuration
Key Pair Name: The Key Pair to be used for the instance.
Subnet ID: ID of the Subnet to start instances in.
Instance Types: List of the Hardware Types used on EC2. Make sure you use types that are supported by Amazon; a list of supported instance types is available on the Amazon EC2 website.
User Data: Any data you want to pass to an instance. Can be used to configure instances as part of start up scripts. Running curl http://169.254.169.254/latest/user-data from within the instance will return everything you put in here.
Security Group: Start instances in this security group.
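As an illustration only, a User Data start-up script for a Linux render node might look like the sketch below. Every hostname, mount path, and install path in it is a placeholder assumption, not something Deadline requires:

```shell
#!/bin/bash
# Hypothetical user-data start-up script. From inside the instance,
# this exact text can be retrieved with:
#   curl http://169.254.169.254/latest/user-data
set -e
# Mount a shared repository (placeholder NFS server and path).
mkdir -p /mnt/repository
mount -t nfs fileserver.example.com:/deadline /mnt/repository
# Start the Deadline Launcher (typical Linux install path; adjust to yours).
/opt/Thinkbox/Deadline7/bin/deadlinelauncher -nogui &
```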
Customization
Instance Name: The name of the instances that are started by the Balancer. We add some random hex values to
the end for uniqueness.
11.1.3 FAQ
Is Amazon EC2 Cloud supported by Deadline?
Yes.
11.2.2 Configuration
Before you can configure the Google Cloud plugin for Deadline, you must add Google Cloud as a provider in the
Cloud Providers dialog in the Monitor. The Google plugin requires only a few credentials before it can be used in
Deadline (see the image below). You can download the Client Secrets file from the API->Credentials section of the
Google Compute console.
11.2.3 Credentials
Here is a guide for how to get your client secrets file from the Google Cloud Console and verify your access to your GCE project in Deadline.
Step 1: Get your project ID
This is used in Deadline to verify access to this project.
A web browser will open, asking you to sign into your Google Account.
Next, you'll see the consent screen you created in Step 4. Here's what a basic consent screen looks like.
Now your oauth.dat file should be downloaded and you should be all set to use your GCE project with Deadline.
Notes:
This verify-access process is time sensitive; it will time out if you wait too long, and the timeout is only about 30 seconds.
One issue that has come up is blocked ports. If another process is already using port 8080 (programs such as Skype might do this), the verification will not work and you'll have to change the port number.
oauth.dat files do eventually expire. You will need to repeat this process to get a new one.
General
Enabled: Enables the cloud region for use in Deadline.
Credentials
Client Secrets *.json File: The path to your Client Secrets file. You can download it by going to your project,
clicking on Credentials (under the APIs & auth heading) and clicking download JSON. Note: Deadline requires
a Client ID for native applications.
OAuth 2.0 *.dat File: The path to your OAuth2 dat file. This file won't exist until you've Verified Access (or tried to use the plugin) at least once. The API will download it after you grant it access.
Project ID: The non-human-readable ID of your Google Cloud Project, found on the Overview tab of the Cloud Console. See Step 1.
Options
Region: The GCE region you are using.
Network: The GCE network to spawn instances in.
Disk Size: The size of the Persistent Disk to start with your instance. The default is 10GB.
Port Number: The port number used for authentication.
Show Images in Cloud Panel: Show the image name in the Cloud Panel. Enabling this may cause performance
issues.
Customization
Instance Name: The name of the Instances that will be spawned. We add some random hex values on the end
to make them unique.
Tags: Tag firewall rules to the instances that Deadline starts. Each tag should be on a new line.
11.2.6 FAQ
Does the Google Compute Engine API need to be enabled?
Yes, ensure that the Google Compute Engine API is enabled for the project. In the Google Developer Console, click the project name. From the left-side menu, choose the APIs link under the APIs & auth heading. If Google Compute Engine is not shown in the list of Enabled APIs at the top of the page, scroll down, find Google Compute Engine, and enable it. If you have not previously enabled billing for the account, you will be prompted to do so; after enabling billing, you will need to enable the Google Compute Engine API again. Once Google Compute Engine appears in the list of Enabled APIs, re-generate and re-save the Client Secret JSON file.
11.3.2 Configuration
Before you can configure the Azure plugin for Deadline, you must add Azure as a provider in the Cloud Providers dialog in the Monitor. The Azure plugin requires only a few credentials before it can be used in Deadline (see the image below). You'll also have to create and upload a Management Certificate.
Configuration Settings
General
Enabled: Enables the cloud region for use in Deadline.
Credentials
Subscription ID: Your access ID for your Azure account.
Certificate Path: Path to your Azure Certificate.
VHD Blob Storage: The URL of your Blob Storage.
Blob Storage Password: Password for Blob Storage if you have one.
VM Config
Affinity Group: The Affinity Group to start instances in. Can be used instead of Location.
Location: The Location to start instances in. Can be used instead of Affinity Group.
Virtual Network: The virtual network that the instance will be a part of.
Subnet Name: Name of the subnet that the instance will be in.
11.3.3 FAQ
Is Azure Cloud supported by Deadline?
Yes.
11.4 OpenStack
11.4.1 Overview
The OpenStack plugin for Deadline allows for communication between Deadline and an OpenStack server. It works with both the Cloud Panel in the Monitor and the Deadline Balancer application.
11.4.2 Configuration
Before you can configure the OpenStack plugin for Deadline, you must add OpenStack as a provider in the Cloud Providers dialog in the Monitor. The OpenStack plugin requires only a few credentials before it can be used in Deadline (see image below).
Configuration Settings
General
Enabled: Enables the cloud region for use in Deadline.
Options
User Name: Your OpenStack user name.
Password: The password for your OpenStack account.
Keystone Endpoint: The endpoint of the OpenStack server. This is listed as Identity in the Access & Security section of the OpenStack project.
Tenant Name: The Tenant name (aka Project Name).
Keypair Name: The key pair that instances are started with.
Security Group: Start instances in this security group.
Customization
Instance Name: The name of newly created instances. We add some random characters on the end for uniqueness.
11.4.3 FAQ
Is OpenStack Cloud supported by Deadline?
Yes.
11.5 vCenter
The vCenter plugin for Deadline allows for communication between Deadline and a vCenter server. It only works
with the Cloud Panel in the Monitor. It does not work with the Deadline Balancer application.
11.5.1 Configuration
Before you can configure the vCenter plugin for Deadline, you must add vCenter as a provider in the Cloud Providers
dialog in the Monitor. The vCenter plugin requires only a few credentials before it can be used in Deadline.
Configuration Settings
General
Enabled: Enables the cloud region for use in Deadline.
Options
vCenter Server: The name of the vCenter Server you want to connect to.
User Name: Username for vCenter.
Password: Password for vCenter.
Customization
Instance Name: Name used when starting new instances. We add some random hex values to the end for
uniqueness.
11.5.2 FAQ
Is VMware vCenter supported by Deadline?
Yes, but not with Balancer. Only basic manual cloud instance starting/stopping/terminating is supported
via the cloud plugin architecture.
CHAPTER
TWELVE
RELEASE NOTES
Note: We need to modify the DraftParamParser.py library so that unicode strings aren't mangled at the Deadline/Draft boundary, but once they're in, Draft handles them properly.
Licensing Improvements
Draft licenses are now more flexible! Most Draft features require only that a license be present. Actual checkout of licenses now happens only while videos are being encoded or decoded.
Losing the connection to the license server no longer pops up dialog boxes on Windows.
Mono Upgraded to 3.8
Deadline now runs against Mono 3.8 on Linux and Mac OSX, which helps improve stability. In addition, the Mac
OSX version of Mono is now 64-bit. This new version is bundled with the Linux and Mac OSX Client and Repository
installers.
Mono Included in Linux Installers
Mono is now installed automatically as part of the installation procedure on Linux. It is installed to the Deadline
installation folder, and won't impact any existing Mono installations. Now Mono no longer needs to be installed
manually on Linux prior to installing Deadline.
Updated Slave Licensing Model
When running multiple slaves on a single machine, they will now share a single license instead of needing one license
per slave instance. In addition, the slaves will only hold onto their license while they are rendering. When they become
idle, they will return their license.
Customizable Styles for Deadline Applications
The new Styles configuration panel in the Monitor options allows you to customize the color of the Deadline applications. Simply specify a palette color and the User Interface will automatically use lighter and darker variants of that
color where necessary. In addition, the font style and size can be configured as well. Finally, you can export styles and
share them with other users.
New Batch Property for Grouping Jobs
A new Batch property has been added to jobs that allows jobs to be grouped together in the Job List. All jobs with the
same Batch name will be grouped under that Batch name, and the Batch name can be expanded or collapsed to show
and hide all the jobs, respectively. Jobs in the same Batch will also be grouped together in the Job Dependency View.
Finally, the properties for the jobs in the same Batch can be modified by simply right-clicking on the Batch item in the
Job List or the Job Dependency View.
New Graphs in the Monitor
New graphs have been added to the Monitor. The Jobs panel can show pie charts based on the job pool, secondary
pool, group, user, and plugin. The Tasks panel can show graphs representing the task render times, image sizes, CPU
usage, and memory usage. The Slaves panel can now show bar charts that show how many slaves are in certain pools
and groups. The Job Reports panel can now show a pie chart that shows the percentage of errors generated by each
slave.
Finally, Slave Scheduling can now be configured to launch the slave if the machine has been idle for a certain amount
of time (idle means no keyboard or mouse input). There are also additional criteria that can be checked before
launching the slave, including the machines current memory and CPU usage, the current logged in user, and the
processes currently running on the machine. Finally, this system can stop the slave automatically when the machine is
no longer idle.
Note that Idle Detection can be set in the Slave Scheduling settings, or on a per-slave basis in the Slave Settings dialog
in the Monitor. It can also be set in the new Local Slave Control dialog so that users can configure if their local slave
should launch when the machine becomes idle.
Job Dequeuing Mode
Slaves now have a new Job Dequeuing mode that controls which jobs a slave dequeues based on how the job was
submitted. By default, a slave will dequeue any job, but it can be configured to only dequeue jobs submitted from the
same machine that the slave is running on, or submitted by specific users.
The Job Dequeuing Mode can be configured in the Slave Settings dialog in the Monitor. It can also be set in the new
Local Slave Control dialog so that users can configure if their local slave should only render their own jobs, or if they
want to help another user render their jobs.
Local Slave Controls
The Monitor and Launcher applications now have a new dialog that can be used to control the slave running on the
local machine. It can be used to start and stop the slave, or connect to the slave's log. This is useful if the slave is
running as a service on the machine.
In addition, you can set up the slave to launch if the machine has been idle for a certain amount of time (idle means
no keyboard or mouse input). It can also stop the slave automatically when the machine is no longer idle.
Finally, the slave's Job Dequeuing Mode can be configured here. By default, a slave will dequeue any job, but it can
be configured to only dequeue jobs submitted from the same machine, or submitted by specific users. This is useful if
a user wants their slave to only render their jobs, or they want to help another user render their jobs.
Note that the Idle Detection and Job Dequeuing Mode settings can also be changed by administrators for all slaves.
In addition, the Local Slave Controls feature can be disabled by administrators if they don't want users to be able to control their local slaves.
Render As User
A new option has been added to Deadline to render jobs with the account that is associated with the job's user. The account information can be configured in the Deadline user settings. On Windows, the user's login name, domain, and password are required. On Linux and Mac OSX, just the user's login name is required, but the Slave must run as root so that the Slave has permission to launch the rendering process as another user.
Improved Slave Statistics
Additional statistical information is now gathered for individual slaves, including the slave's running time, rendering
time, and idle time. It also includes information about the number of tasks the slave has completed, the number of
errors it has reported, and its average Memory and CPU usage. Like job statistics, Pulse does not need to be running
to gather this information.
Pulse Redundancy
You can now run multiple instances of Pulse on separate machines as backups in case your Primary Pulse instance goes down. If the Primary Pulse goes offline or becomes stalled, Deadline's Repository Repair operation can elect another running instance of Pulse as the Primary, and the Slaves will automatically connect to the new Primary instance.
Note that when multiple Pulse instances are running, only the Primary Pulse is used by the Slaves for Throttling.
In addition, only Primary Pulse is used to perform Housecleaning, Power Management, and Statistics Gathering.
However, you can connect to any Pulse instance to use the Web Service.
New Events and Asynchronous Job Events
New events have been added to the Event Plugin system. The first is the OnHouseCleaning event, which triggers
whenever Deadline performs Housecleaning. This allows you to set up event plugins to do custom cron-job style
operations within Deadline.
In addition, there are four new events that trigger when a slave changes state: OnSlaveStarted, OnSlaveStopped,
OnSlaveRendering, and OnSlaveStartingJob. As an example, an event plugin could be written to have slaves automatically add themselves to Groups when they start up based on some custom criteria, or an event plugin could be written
to have slaves perform maintenance checks when they become idle.
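As an illustrative sketch of how these slave events could be hooked (this assumes Deadline's Python event plugin API, so it only runs inside Deadline, not as a standalone script):

```python
# Sketch of an event plugin reacting to the new slave state events.
# Class and callback names follow Deadline's event plugin conventions;
# treat this as a starting point, not a drop-in plugin.
from Deadline.Events import DeadlineEventListener

def GetDeadlineEventListener():
    return SlaveStateListener()

def CleanupDeadlineEventListener(listener):
    listener.Cleanup()

class SlaveStateListener(DeadlineEventListener):
    def __init__(self):
        # Subscribe to two of the new slave state events.
        self.OnSlaveStartedCallback += self.OnSlaveStarted
        self.OnSlaveStoppedCallback += self.OnSlaveStopped

    def Cleanup(self):
        del self.OnSlaveStartedCallback
        del self.OnSlaveStoppedCallback

    def OnSlaveStarted(self, slaveName):
        # e.g. add the slave to a Group based on custom criteria
        pass

    def OnSlaveStopped(self, slaveName):
        # e.g. run a maintenance check or notify an external system
        pass
```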
Finally, there is now an option to process many types of job events asynchronously. The benefit is that job events will
no longer slow down batch operations in the Monitor (for example, deleting 1000 jobs will be much faster if you are
using event plugins because those events will be processed later). These job events are queued up in the Database
and Deadline's Pending Job Scan will process them at regular intervals. Because they are placed in a queue, they will
still be processed in the same order that they were triggered. Note that if this option is enabled, some events are still
processed synchronously, like the OnJobSubmitted and OnJobStarted events.
Auto Configuration Overhaul
The Auto Configuration feature has undergone a couple of significant changes. The first is that all Deadline applications can now pull the Auto Configuration settings, instead of just the Slave. This means that Auto Configuration can
now be used to automatically configure workstations, not just render nodes.
The second change is with how Auto Configuration works. Previously, all Auto Configuration settings were pulled
from Pulse. Now, only the Repository Path is pulled from Pulse, and the other settings are pulled when the Deadline
application connects to the Repository. The benefit to this is that most of the Auto Configuration settings will work
without Pulse running.
Finally, Auto Configuration rule sets can now be enabled or disabled, so you no longer have to delete a rule set if you
want to remove it temporarily.
Region Awareness
Regions can now be configured in Deadline, and users and slaves can be assigned to a specific region. Currently, this
is useful for Path Mapping, and allows you to map paths differently based on the region that the users or slaves are in.
Note that when VMX launches a slave, it will automatically be added to the region associated with the cloud provider
settings.
Grid-Based Script Dialogs
New grid-based functions have been added to the DeadlineScriptDialog class, which make it easier to create custom
dialogs. Instead of setting the width and height when adding new controls to a row, you can instead add them to a grid
and indicate which row and column the control should go in. Optionally, you can also indicate how many rows and
columns the control should occupy. By being part of a grid, the controls will now grow and shrink dynamically based
on the size of the dialog and the size of the font.
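A sketch of the grid-based approach (method names such as AddGrid, AddControlToGrid, and EndGrid are taken from the DeadlineScriptDialog scripting API; this only runs inside Deadline's script environment, and the control names shown are illustrative):

```python
# Controls are placed by (row, column) instead of explicit sizes,
# so they grow and shrink with the dialog and the font.
scriptDialog = DeadlineScriptDialog()
scriptDialog.AddGrid()
scriptDialog.AddControlToGrid("NameLabel", "LabelControl", "Job Name", 0, 0)
scriptDialog.AddControlToGrid("NameBox", "TextControl", "Untitled", 0, 1)
scriptDialog.AddControlToGrid("FramesLabel", "LabelControl", "Frames", 1, 0)
scriptDialog.AddControlToGrid("FramesBox", "TextControl", "1-100", 1, 1)
scriptDialog.EndGrid()
scriptDialog.ShowDialog(False)
```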
FTrack Integration
The Deadline/FTrack integration enables a seamless render and review data flow. When Deadline starts a render, an
Asset Version is automatically created within FTrack using key metadata. When the render is complete, Deadline
automatically updates the created Version appropriately a thumbnail image is uploaded, components are created
from the Jobs output paths (taking advantage of FTracks location plugins), and the Version is flagged for Review. In
doing so, Deadline provides a seamless transition from Job Submission to Review process, without artists needing to
monitor their renders.
Jigsaw for Maya, modo, and Rhino
Jigsaw, which was previously only available for 3ds Max, is now available for Maya, modo, and Rhino. It gives you
more control over the tiles and/or regions that you are submitting to Deadline. This feature uses Thinkbox Softwares
Draft library to assemble the final image instead of the old TileAssembler.exe application. Note that Draft requires a
license, so contact Thinkbox Sales if you dont already have a Draft license.
Submission Script Installers
Submission script installers can now be found in each application folder in the Submission folder in the Repository.
These allow for most of the submission scripts to be installed automatically, instead of having to manually copy over
files.
Support for Salt and Puppet
Application and Event plugins have been added to support the Salt and Puppet automation applications. Jobs can be
submitted to the application plugin to update software and machine configurations on specific machines, while the
event plugins can be used to update all of your machines when the slave running on them becomes idle.
Updated Application Support
Support has been added for After Effects CC 2014, Arnold for Houdini, Cinema 4D 16, Corona, Fusion 7, Nuke 9,
Realflow 2014, and SketchUp 2015.
Added option to process many of the job events asynchronously to improve performance (particularly in the
Monitor).
Added application and event plugins for Puppet and Salt automation applications.
Users, slaves, and pulse can now be added to regions, which affects how Path Mapping is performed for them
(regions can be configured in the Repository Options).
Path mapping can now be associated with regions so that different path mappings can be set for different regions.
There is now an option in slave scheduling to keep the slave running during scheduled hours.
Housecleaning and the Pending Job scan are now performed on a more regular basis by the Slaves when Pulse
isn't running.
During the Pending Job Scan, the task dependency check now handles a missing __main__ function in the
dependency script properly.
Fixed a typo where the Pending Job Scan would refer to itself as Housecleaning.
Fixed an encoding issue when saving and loading job and slave reports.
Added new slave statistics gathering that logs more information about individual slaves.
Added new vCenter Cloud plugin.
Limits can now be configured with different usage levels. They can be per task, per slave, or per machine.
Previously, they could only be per slave.
Bumped up the maximum thread/cpu setting limit in the submission scripts.
The Deadline temp folder on the Client machines now gets cleaned up on a regular basis.
Split out the critical Housecleaning operations into a new Repository Repair operation (orphaned task and limit
stub checking, stalled slave checking, and available DB connection checking).
The randomness of the housecleaning checks has been removed to make the system more reliable and predictable.
Fixed some cases where timestamps were still using 12 hour clocks.
Added IP address/hostnames to the power management logging.
Fixed a bug that prevented Deadline from shutting down an OSX machine.
Most integrated submitter client scripts now print out where they're getting the main script file from prior to
running the main script.
Housecleaning can now detect if a task is waiting to start, but the slave hasn't updated its state to show that it's
rendering that task.
Fixed a bug that prevented the timeout from triggering when running the housecleaning operations as separate
processes.
Added an option for splitting the output from the different housecleaning operations into separate logs.
Fixed how the timestamps look when connecting to a remote slave/pulse/balancer log.
Job event triggers now fire properly when changing states of individual tasks.
Improved performance when checking pending jobs with frame dependencies.
The Deadline applications no longer set the PYTHONHOME and PYTHONPATH environment variables for
their current session.
The error message that is displayed when auto-archiving a job fails now shows the job ID instead of the job
name.
Housecleaning only loads event plugins once when deleting or archiving completed jobs.
When purging jobs in housecleaning, the event plugins are only loaded once per batch.
Added a Machine Startup option in Power Management to not send the command to the machine to launch the
slave.
Added user group permission option to disable job submission (enabled by default).
Added stalled Pulse and Balancer detection to housecleaning.
Removed a misleading message that was printed when getting the user from deadline.ini and one wasn't defined
yet.
Housecleaning, pending job scan, and repository repair are no longer run as a separate process by default.
Installer Improvements
Mono is now shipped with the Linux installers, so it is no longer required for Mono to be installed prior to
installing Deadline.
Added the major version number to the shortcuts created on windows, and to the uninstaller shortcuts created
on all operating systems.
Added command line option to Client installer to set the NoGuiMode setting.
When setting up the database, the Repository installer now checks to make sure the database version is the
minimum supported version.
The Repository installer now checks to make sure it's not installing over an existing repository that's a different
version.
The Repository installer now sets the default database name to include the major Deadline version number.
The Repository installer now creates a repository.ini file in the repository install directory which contains the
Version information.
The Windows Repository installer now ships with both the standard and legacy versions of MongoDB. The
standard version will be installed on Windows Server 2008 R2 and later, and the legacy version will be installed
on older versions of Windows.
The Repository uninstaller now removes all subfolders except for the custom one.
Fixed a bug in the Client Installer that was causing the license server entry to be reset if the repository directory
was invalid.
The MongoDB service name and port can now be customized in the Repository installer, and its default is based
on the current Deadline version.
The Windows client installer now creates a DeadlineLauncher# registry key to start the Launcher on login (where
# is the major version number). This allows different versions of the Launcher to start on login.
Fixed a bug in the Repository installer that was causing Password: to be set for the user name in dbConnect.xml on OSX.
Fixed some errors when running the Repository installer in unattended mode.
Installers on OSX are now signed with codesign v2 so that Gatekeeper doesn't flag them on OSX 10.9.5.
The replica set name and mongo password fields in the Repository installer are now wider.
The Mono.Posix and Mono.Security dlls are no longer installed with the Linux version of Deadline.
The api, balancer, cloud, and draft folders in the repository are now backed up during an upgrade.
Added a NoGuiMode setting to the deadline.ini file. It's set to False by default, but if True, then the launcher,
slave, and pulse will always run in nogui mode, regardless of whether the -nogui flag is passed.
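For example, the setting could be added to deadline.ini like so (the `[Deadline]` section name shown is an assumption based on the standard deadline.ini layout):

```
[Deadline]
NoGuiMode=True
```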
All logs for the Deadline applications and for jobs now have timestamps.
The LaunchPulseAtStartup and LaunchBalancerAtStartup settings are now stored in the system deadline.ini file,
not the per user one.
The Monitor, Pulse, and Balancer listening ports and process IDs are now stored in separate ini files, not the
system deadline.ini file. This means that a symlinked deadline.ini file can now be shared between multiple
machines.
Added the major version number to the app packages on OSX.
Fixed some bad logic when the applications try to determine if they should run in GUI mode or not.
Fixed a typo in the dbConnect.xml error that would be shown if the Client application couldn't find or read the
dbConnect.xml file.
The look of the disabled text in labels now matches Qt's default look.
On OSX, any popups that appear when the splash screen is visible now appear in front of the splash screen.
On Windows, a task bar item is now visible when the splash screen is visible.
Improved the Connection Error message when a Deadline application cannot connect to the Repository or
Database.
Menus that are too long for the screen are now scrollable.
Launcher Improvements
The Launcher now controls the scheduled starting and stopping of slaves.
The Launcher displays a popup message when a slave is scheduled to start, allowing a user to delay launching
the slave if they are still using the machine.
The Launcher can detect if the system is idle and launch the slave. It can also stop the slave when the system is
no longer idle.
Added new Local Slave Settings dialog to the Launcher menu to control the local slave and configure its Idle
Detection and Job Dequeuing Mode settings.
The Launcher system tray icon now shows the Deadline version number in the tooltip.
The launcher now waits 5 minutes after starting before it starts checking if it should restart a stalled slave. This
ensures that if the launcher is set to launch the slave at startup, and that slave previously stalled, the slave will
have a chance to clean up after itself. Otherwise, the launcher might try to launch the slave multiple times.
Added new -shutdownall command line option to launcher, which shuts down the slaves, pulse, and balancer
before shutting down the launcher.
On Linux, Deadline's init.d script now shuts down the slaves and the launcher during a reboot/shutdown, which
ensures the slaves check their licenses back in. Pulse and the balancer are shut down if they are running as well.
On Linux, fixed some other issues in Deadline's init.d script.
The Restart Slave If Stalled option is now disabled by default.
Fixed some bugs in the Launcher init script on Linux.
Cleaned up the output of a successful remote command.
The Launcher can now process multiple remote commands simultaneously.
Added the -balancer command line option to launch the Balancer through the Launcher.
A LaunchBalancerAtStartup=true entry can be added to the system deadline.ini file to have the Launcher start
the Balancer when the Launcher starts.
When running as a service on Windows, the Launcher now properly shuts down the slave when the machine is
shut down, which ensures the slaves check their licenses back in. Pulse and the balancer are shut down if they
are running as well.
Added new optional entries to deadline.ini file to have the launcher keep pulse and balancer running (KeepPulseRunning=true and KeepBalancerRunning=true).
Added a -slavenames command line option to the launcher to be used with -slave to launch slaves with
specific names by specifying a comma-separated list of slave names.
Added -upgrade command line option to launcher to simply trigger an upgrade if it's required.
Up to 5 attempts are now made during an auto-upgrade to copy over the binaries, with an increasing interval
between attempts.
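The retry behaviour described above can be sketched as follows. The helper name, exception type, and delays are illustrative assumptions; the real auto-upgrade logic is internal to the Launcher:

```python
import time

# Illustrative sketch: up to 5 attempts to copy the binaries, with an
# interval that increases after each failed attempt.
def copy_with_retry(copy_fn, attempts=5, base_delay=0.01):
    for attempt in range(1, attempts + 1):
        try:
            return copy_fn()
        except OSError:
            if attempt == attempts:
                raise  # give up after the final attempt
            time.sleep(base_delay * attempt)  # increasing back-off
```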
When the launcher checks for upgrades, it now performs an upgrade if the local Version file is missing (but the
network one exists).
When doing an automatic upgrade, the launcher now copies the bootstrap files to the system's temp directory,
instead of using the Deadline temp directory.
Monitor Improvements
General
The UI Lock can now be toggled on and off using the Shortcut ALT+.
Font sizes are now consistent for all column headers in the lists in the Monitor.
Added new graphs to the Monitor.
There are no longer artifacts in the images when saving graphs to disk.
Default list layouts can now be saved for each panel in the Monitor. These defaults are used when new panels
are opened.
List layouts for each panel in the Monitor can be saved to disk and opened again later.
The lists no longer auto-scroll horizontally when clicking on a column that is only partially visible.
Added ability to add Separators when customizing Script Menus.
The Monitor now gives the user the option to save the Location and Size when pinning a layout or saving a
layout to disk.
When right-clicking on the column headers for a list to show hidden columns, the column will now appear where
the mouse cursor is instead of at the end.
Added search history to the search boxes in the Monitor. The search history can be cleared from the down-arrow
menu for each list.
The default sizes for the Manage Pools and Manage Groups dialogs are now bigger.
The Slave list in the Pool and Group Management dialogs can now be filtered, and all columns in the list are
now available.
Fixed a bug when deleting groups and pools from the Manage Pool and Group Dialog that was preventing
deletion of a single pool or group, or deleting them all if one was selected for deletion.
The Slave Scheduling feature has been broken out of the Power Management dialog and now has its own
configuration dialog.
Added new Repository Options panel to create regions.
In Repository Options, moved the database threshold to the Notifications panel, and grouped it with the database
email address setting.
Added an option to the Email Notification panel in the Repository Options to enable/disable auto-generating
email addresses for new users. If enabled, the email address will be based on the SMTP server unless a postfix
override is specified.
The statistics panel in the repository options now has all of its settings in a group box.
Added a toggle to the FarmOverviewReport to switch between percentages and counts for the graphs.
Repository options dialog now notes that it can take up to 10 minutes for the settings to propagate.
Improved the tooltips in the Repository Options dialog.
Fixed a typo in the House Cleaning panel in the Repository Options.
Updated Repository Options, Job Properties, Slave Properties, and Monitor Options dialogs so that each panel
takes up a bit more space.
New rows created in the Path Mappings, Drive Mappings, and Monitor Layout panels in the Repository Options
now have the correct height.
Added a button in the Repository Options dialog to reset all settings back to factory defaults.
In the Repository Options, all performance-related settings are now on a new Performance panel. Use the new
Auto Adjust spinner control to automatically pick good default settings based on the number of Slaves in your
farm.
Fixed a bug in the Auto Configuration page in the Repository Options that occurred when the last entry in the
Auto Configuration list was deleted.
Auto Configuration rule sets can now be enabled or disabled.
Manage Users, Manage Groups, and Manage Pools dialogs no longer close the Name dialog if an invalid name is
entered.
When a new user group, pool or group is created, it is automatically selected.
Fixed an error that could occur when deleting multiple users at the same time.
The Farm Statistics dialog now has a drop down to choose an interval, rather than 4 separate buttons.
The Configure Cloud Providers dialog now initializes the cloud plugins before displaying to improve performance when viewing the settings for different cloud plugins.
Added Import Settings option to the Tools menu, which allows you to import settings from other Repositories
running a minimum of Deadline 6.
Added new Local Slave Settings dialog to the Tools menu and the main toolbar to control the local slave and
configure its Idle Detection and Job Dequeuing Mode settings.
Improved layouts of controls in Plugin and Event Plugin configuration dialogs.
Features that require Pulse now mention it in their respective property dialogs.
Updated all Monitor scripts to use the new grid-based system for the script dialogs.
All Monitor submission scripts now save their sticky settings if the dialog is closed using the X button, or if
Alt+F4 is pressed.
Added additional command line arguments for the Monitor to set specific Monitor Options at startup.
Fixed the filter types for some columns in the slave list, job report list, and slave report list.
If a Remote Control command succeeds, the result will now be Connection Accepted instead of just being
empty.
Added Monitor option to show when the last house cleaning and pending job scan operations were performed
in the Monitor status bar. If they haven't been performed for more than 10 minutes, they will be highlighted in
red.
Added Monitor option to enable slave pinging (it's now disabled by default).
Fixed some Remote Control commands that were not checking if they should be using the slave's IP address, or
a machine name or IP address override.
Fixed a bug where trying to send a Remote Command to an unknown host would hang indefinitely on Linux
and OSX.
When executing a remote command, if the process returns a non-zero exit code, then the result is returned as a
failure instead of a success.
Fixed a ManageListForm error.
The limit dialog and the power management dialogs now disable the name field instead of just making it read-only when in edit mode.
Added settings in the Repository Options to control how long the local Launcher and Balancer logs should be
kept for.
Fixed a layout issue in the multi-line file browser control in the Plugin Configuration dialogs.
Resetting the Repository Options in the dialog is now visually smoother.
Added a panel menu item to reset the default list layout back to the original default.
Fixed an error when removing users from the User Group permissions dialog.
When cloning an existing user group, the clone is selected automatically.
Increased the default height for the Manage Users dialog.
Added Monitor Option settings to configure the double click task behavior for rendering, completed and failed
tasks.
The scripts menus are now hidden when right-clicking on a panel with nothing selected.
The job scheduling weight settings in the Repository Options now have 4 decimal places instead of 2.
Updated the icon/script sync icon to be the refresh icon.
Added View menu option to show/hide the main toolbar.
Cleaned up the layout of the View menu a bit.
Graph names are now shown in the panel titles when they are showing a graph.
The splitter for job reports, slave reports, and remote command panels no longer moves when resizing the panel.
Fixed a leak caused by the context menus in the panels.
Fixed a bug in the Auto Job Timeout settings in the Repository Options that caused the Timeout Multiplier to
be disabled when it shouldnt be.
When restarting the Monitor, the location of the splitters for all panels is now restored properly from the previous
session.
When switching between saved layouts, the Monitor is now hidden and shown to ensure that the location of the
splitters is restored properly.
Fixed a bug in the Manage Users dialog where the password confirmation fields were not being verified on
accept.
Tweaked some labels in the idle shutdown and machine startup tabs in the power management dialog.
Cleaned up the error message when a job import fails due to the job already existing.
Deleting a ruleset in the auto configuration panel of the repository options now resets all controls to their defaults.
When creating new Path Mappings in the Repository Options, they are no longer case-sensitive by default.
Plugin and Event configuration settings are now sanitized when they are saved.
Added a new general TestIntegrationConnection script to the General script menu that can be used to test connecting to Shotgun or ftrack, and it shows the results.
Added stacktraces to the error messages if the Monitor can't update its data cache.
Added Repository Configuration settings for maximum repository, slave, job, pulse, and balancer history entries.
Double clicking the title bar of a floating panel in the Monitor now maximizes it on Windows.
Repository history entries are now logged when changing Repository Options.
Fixed a bug when collapsing and expanding group boxes in the Configure Plugins/Events dialogs.
Improved the performance of bulk delete operations in the Monitor.
Improved the default widths of some of the columns for lists in the Repository Options.
When switching between the global pinned monitor layouts, the local pinned layout settings (column layouts
and filters) are ignored so that they do not get clobbered.
Fixed a typo in the Application Logging panel in the Repository Options.
Fixed a layout bug in the Plugin Configuration if CategoryOrder was specified in the .options file of a plugin.
Fixed some errors when editing idle shutdown overrides, and when editing existing thermal shutdown sensors
and overrides.
When connecting to a remote log from the Monitor, it now connects to the correct machine if the Monitor is
connected to a different repository than the one stored in the deadline.ini file.
CMD+R shortcuts now work properly on OSX (e.g. resume job, resume task).
Jobs and Tasks
Added new progress bars to the job list to show the state of all tasks for the job at a glance.
Jobs with only a single task now show better job progress in the Job list.
Fixed some issues that caused the job counts in the job list to be incorrect.
Fixed some issues where requeue reports weren't getting created properly for jobs.
Improved layout of controls in the plugin-specific properties in the Job Properties dialog.
Selecting multiple jobs and modifying their properties only overwrites shared properties for dependencies, extra
info variables and environment variables.
All dependency related job properties are now in the Dependencies panel in the job properties dialog, instead of
being spread across three separate panels.
The job timeout panel in the job properties now lets you specify a timeout in terms of hours, minutes, and
seconds.
Fixed a color control bug in the plugin-specific job properties that would cause the property to appear as modified
when pressing Cancel on the color picker dialog.
Split up the job history logging to be more granular when modifying certain Job Properties.
Jobs can now be grouped together in the Job list if they share the same Batch name.
Improved the performance of the Quick Filters for the job list.
User name quick filters now have "Me (userName)" as the entry for the current user, which will be the first user
in the list.
Changed the right-click menu item text in the quick filters to avoid confusion.
Added an option when suspending a job to only suspend the non-rendering tasks for the job.
Updated the Transfer Job script to include some missing job properties that weren't getting transferred.
Fixed an error that could show up in the Console when closing the Job Details panel.
The Explore Output menu in the job and task list no longer shows any duplicate paths.
The task list now shows the current CPU and RAM information for a rendering task.
Added right-click menu item to task list to suspend/resume individual tasks.
Swapped the default location of the Startup Time and Render Time columns in the task list.
The Job Dependency nodes in the Monitor have also been updated to show per-dependency settings.
Added a new feature to the Job Dependency View to test the dependencies and see which ones pass and which
ones do not.
The backgrounds for the graphs and the Job Dependency View now match the look of the rest of the Monitor.
Jobs can now be grouped together in the Job Dependency View if they share the same Batch name.
The layout can now be pinned for the Job Dependency View panel.
You can now select multiple jobs in the job list and have them show up in the job dependency view.
The Job report list in the Monitor now has columns that show Memory and CPU usage information.
Moved the Explore Path menu for non-job nodes to the main context menu in the Job Dependency View, and
fixed a bug that caused it to be disabled when it shouldn't be.
Cleaned up the error message when changing the frame range for a job, and the new task count exceeds the
maximum allowed.
Added ability to pin and save quick filters.
The archive job path is now remembered within a session (it will revert back to the default repository folder the
next time the Monitor is restarted).
Added new Cleanup panel to job properties window (for auto-cleanup override settings).
Added option to auto-filter Job Reports based on the selected Task.
Added option to switch Job Reports panel to a horizontal orientation.
Added a Render Status column to task list, which shows the same information that the Task Render Status
column in the slave list shows.
Fixed some layout and font-size issues in the job dependency drag and drop dialog.
Fixed a bug that could cause output paths in the job/task context menus to show double path separators.
Event plugins are only loaded once when archiving a batch of jobs.
Fixed a bug when parsing the frame padding of an output path that contained multiple sections of padding
characters.
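The padding fix can be sketched as follows: when a path contains several runs of `#` characters, only the last run is treated as the frame number's padding. The function name and the convention shown are illustrative assumptions; the real parser lives inside Deadline:

```python
import re

# Illustrative sketch of padding-aware path handling: pad the frame number
# into the LAST run of '#' characters, so "shots_##/beauty_####.exr" pads
# the frame section rather than the folder name.
def apply_frame_padding(path, frame):
    runs = list(re.finditer(r"#+", path))
    if not runs:
        return path  # no padding characters: nothing to substitute
    last = runs[-1]
    padded = str(frame).zfill(len(last.group()))
    return path[:last.start()] + padded + path[last.end():]
```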
The Task ID columns in the Task and Job Report lists are now string filters instead of integers.
Capped the job and task sub-menu length for viewing output and auxiliary files to 50 menu items.
Deleting jobs from the monitor now logs to the repository history.
If a job report can't be loaded, the error message is now shown in the job report viewer.
Task progress bars are now only visible for completed and rendering tasks.
Task progress bars no longer change color based on the task's state, although they will still match the completed
job color when the task is complete.
Disabled ability to resubmit tasks for Tile and Maintenance jobs.
For tile jobs, the tile numbers under the Frame column in the Task List now start at 1 instead of 0.
Fixed a bug in Job Properties where editing a job's existing Script Dependencies wasn't being committed properly when pressing OK.
Fixed some errors when removing multiple asset or script dependencies from their respective lists in the job
properties.
Fixed spelling of interruptiple in the job properties dialog.
Fixed a bug in the Job Dependency View that could lock up the Monitor when clicking on different jobs.
When resubmitting a job that was scheduled to start at a certain time, the flag that indicates if the job has been
resumed already is now reset.
Slaves and Pulse
You can now right-click on specific slaves in the Slave list in the Monitor to modify Pools and Groups for the
selected slaves only.
Added job icon to the Job Name column in the slave list.
The Slave list now shows which Limits the slaves are whitelisted, blacklisted, and excluded for.
The Slave report list in the Monitor now has columns that show Memory and CPU usage information.
The utilization value in the slave list now takes into account rendering and idle slaves (necessary if there are
multiple slaves running on the same machine, but not all are rendering).
If the slave list is filtered, the utilization will show the total utilization, as well as the utilization for just the
visible slaves.
Fixed a bug where the utilization would only update if you click on a slave in the list.
Cleaned up the utilization text a bit so that it's easier to read.
Added option for viewing history to the Pulse list.
Moved the Modify Pools/Groups menu items in the slave list menu below the Modify Slave Properties menu
item.
The slave list now shows the time a slave has been in its current state for all states (previously it would only
show this for rendering slaves).
A warning now appears when trying to shut down the local machine from the slave list, instead of failing silently.
Added option to switch Slave Reports panel to a horizontal orientation.
If a slave report can't be loaded, the error message is now shown in the slave report viewer.
When deleting a pulse, the history entry is now logged in the repository history.
The Mark Slave As Offline menu item is now shown if the slave is in the StartingJob state.
Fixed a bug where history entries for saving slave settings weren't logged if only one slave was selected.
The Job Candidate Filter in the Slave list now handles jobs with empty whitelists properly.
The Slave Reports panel now shows render logs in addition to render errors.
Added new graphs to Slave Reports panel.
Added Connect Host, Primary, and Region columns to pulse list.
Pulse settings can now be modified from the pulse list.
The pulse list is now used to connect to the pulse log, instead of the Tools menu.
Limits and Cloud
The Limit list shows who the current stub holders are if that Limit is in use.
The Limit list now has a new column that shows the Usage Level for the Limit.
The Limit property dialog now has an option to use the Usage Level for the Limit.
Many context menu items in the Cloud panel (e.g. starting and stopping instances) are now performed asynchronously.
User group permissions can now be set for the Cloud panel.
The cloud panel will show dialog boxes if an error occurs when interacting with the cloud instances.
Cloud plugin data is now only loaded and updated if the Cloud panel is being displayed.
Added some messages to the cloud commands so you get some feedback when a command is successful.
Console and Remote Commands
Fixed a timestamp bug in the Console panel.
The Remote Commands panel is now enabled by default in the User Group Permissions (so that the Monitor's
Local Slave Controls can display it).
Fixed a spacing inconsistency between the timestamp and the text in the Monitors Console panel.
Slave Improvements
Multiple slaves on a single machine now share one license, instead of requiring one license each.
Slaves now return their license when they become idle.
New Idle Detection settings can be set per slave. They can be used to launch the slave when the machine is idle
and/or stop the slave when the machine is in use again.
New Job Dequeueing Mode settings can be set per slave. They can be used to force slaves running on workstations to only pick up jobs submitted from the same machine, or by specific users.
Slaves can now be added to regions, which mainly affect how the slave applies Path Mappings.
The slave system tray icon now shows the Deadline version number in the tooltip.
Added timestamps when streaming the slave log.
Fixed a startup bug on Linux and Mac OSX that could result in multiple slaves with the same name starting up
on the same machine.
12.1. Deadline 7.0.0.54 Release Notes
Improved how the slave picks its IP address on Windows and Linux so that it picks a network interface with a
gateway (the Mac OSX version already did this).
If a slave is initially running in Free Mode and it later gets a license, the License information in the slave UI and
the slave list in the Monitor will be updated appropriately.
When a slave can't connect to a license server, it only tries to do auto-discovery every 5 minutes so that it doesn't
saturate the network.
The slave now queries the machine's CPU speed at regular intervals while it's running, instead of just caching
the value it gets at startup. This is useful for machines with CPU speeds that dynamically change while the
system is running.
Fixed a bug that was not checking the Job failure detection settings when a plugin failed to sync its files.
When searching for a job, we no longer prune jobs that have a QueuedChunk count less than or equal to 0. This
helps ensure that if a job's state gets messed up, queued tasks will still be dequeued for that job.
When searching for a job, the slave will now cache any Limits that it failed to acquire, and ignore other jobs that
require the same limit during that search.
The idle interval between job searches is now calculated based on the percentage of the idle slaves in the farm.
The interval increases as more slaves become idle.
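The scaling behaviour can be sketched as follows. The bounds and linear formula here are assumptions for illustration, not Deadline's actual values:

```python
# Illustrative sketch: the idle wait between job searches scales with the
# fraction of idle slaves, so a mostly-idle farm polls the database less
# aggressively. Intervals are in seconds.
def job_search_interval(idle_slaves, total_slaves,
                        min_interval=10.0, max_interval=60.0):
    if total_slaves <= 0:
        return min_interval
    idle_fraction = idle_slaves / total_slaves
    # Linearly interpolate: a busy farm waits min_interval, an idle one max.
    return min_interval + (max_interval - min_interval) * idle_fraction
```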
Improved the message printed by the slave when it is doing a self-cleanup because it didn't close properly in the
previous session.
Made limit stub returning a little more robust.
Improved verbose log messages when the slave is looking for a higher priority job.
Fixed a bug that allowed the slave to move on to another task before finishing saving the log for the current task.
Significantly improved how the slave handles large amounts of stdout from the rendering process (both in performance and memory usage).
Improved speed and reduced database load when a slave is processing limit groups while searching for a job to
render.
Fixed a null reference exception when the slave would check if it needed to return limit stubs based on progress,
and the limit no longer exists.
The check that the slave makes to see if it needs to return limit stubs based on progress is now done every few
minutes instead of every second.
If the dlinit file is not found after a plugin sync, the slave will try three more times and then throws an exception.
When dequeuing a job, the slave now returns job limit stubs immediately if it can't find any tasks for that job.
When dequeuing a job, the slave will check if the job has any queued tasks available before trying to get a task
for it.
When updating the job state information during rendering, the slave no longer reads the full job object back
from the database.
Slaves only do partial updating of their state when possible to reduce bandwidth.
Fixed a bug that could cause the slave to crash during shutdown.
Fixed a bug that would result in only partial logs for a task that rendered across different days.
The local task logs have been renamed to ensure they are unique to the slave and render thread that is rendering
them.
Any orphaned local task logs are now cleaned up the next time that render thread renders a task.
Fixed a bug that could cause a render to fail if the job's name changed between tasks.
Added some additional logging just before the slave exits.
Slaves now save their own copy of the task report, which can be viewed from the Slave Reports panel in the
Monitor.
Fixed some text fields in the Slave UI that weren't read-only.
Fixed some typos in some error messages.
During each job scan, the slaves cache whether a plugin supports concurrent tasks to avoid repeatedly reloading
that information from the repository.
Pulse Improvements
A primary Pulse can now be configured, which is the one that the slaves will connect to. Only the primary
instance of Pulse will do things like housecleaning and the pending job scan.
If the primary pulse is offline or stalled, the repository repair operation can elect another running pulse as the
primary. This can be enabled in the repository repair settings in the repository options.
Fixed some text fields in the Pulse UI that weren't read-only.
Pulse no longer controls the Slave Scheduling feature. It is now handled by the Launcher.
Pulse now only sends the Repository Path for Auto Configuration requests. The other settings are pulled from
the Repository after the Deadline applications have connected to it.
The Pulse system tray icon now shows the Deadline version number in the tooltip.
In Power Management, Idle Shutdown now takes into account if there are multiple slaves running on the same
machine.
Added field to the Pulse UI that shows the state of the web service.
The slave can now be shut down with deadlineslave -s when it hasn't connected to a repository yet.
Added more information to the Pulse throttling messages, such as the slave name, job ID, number of requests, and
throttle limit.
Made some tweaks to the web service's new and delete user group functions so they don't return error codes for certain
outcomes.
Fixed bugs in some REST API functions that could cause Pulse to crash.
Added a catch to prevent REST API functions from causing Pulse to crash.
Changed some of the error messages that were inconsistent with the rest of the REST API.
Pulse no longer prints out an error when favicon.ico is requested from the web service.
Cleaned up the web service messages when the command is an invalid API command, and when no command
is specified.
Added the Access-Control-Allow-Origin header to Web Service responses.
The OPTIONS request type is now supported by the Web Service.
When deleting from the RESTful API, we now log to the repository history, not the job's history.
Added support to the RESTful API for only grabbing certain job properties in a request for jobs to reduce the
amount of data getting passed around.
Fixed a bug in the Machine Startup feature of Power Management that would result in no slaves being woken
up for a job with an empty whitelist.
12.1. Deadline 7.0.0.54 Release Notes
Command Improvements
All command line options now support any number of leading dashes (for example, deadlinecommand.exe
-pools or deadlinecommand.exe groups).
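A minimal sketch of this kind of option normalization (a hypothetical helper for illustration, not the actual DeadlineCommand parser):

```python
def normalize_option(token: str) -> str:
    """Resolve a command-line option regardless of leading dashes."""
    # Strip any number of leading dashes and compare case-insensitively,
    # so 'pools', '-pools', and '--pools' all resolve to the same command.
    return token.lstrip('-').lower()

print(normalize_option('--pools'))  # pools
print(normalize_option('pools'))    # pools
```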
Added new commands to suspend/resume individual tasks.
Added a new command to suspend all non-rendering tasks for a job.
Fixed some bugs with the RenderJob command line option.
Fixed some bugs with the JobStatistics command line option.
Added some User Group command line options.
Fixed the RemoteControl command to properly print out results.
Updated the help text for the ChangeRepository command line option to mention the optional Repository Path
argument.
The RemoteControl command options are no longer case sensitive.
Added GetJobDetails command to print the job details that are shown in the Job Details panel in the Monitor.
Added GetVersion and GetMajorVersion commands.
Added commands that can be used to configure the Cloud plugins, group mappings, regions, etc.
Added DeadlineCommand commands for adding job, slave and repository history entries.
Added command line option to DoHouseCleaning and DoRepositoryRepair to choose which mode to run.
Added command line commands for performing path mapping.
Removed JobCleanup command line option, since the DoHouseCleaning command can do this.
The DoPendingJobScan command line option can now take an optional region parameter that is used for path
mapping when checking asset and script dependencies.
Added SlaveExists command to check if a slave exists.
Deadline Command no longer checks if the collection indices in the database need to be created (the other
Deadline applications still handle this).
The ChangeRepository command no longer tries to load the Qt libraries if it is being passed the repository path
as a command line option.
The ChangeLicenseServer command no longer tries to load the Qt libraries if it is being passed the license server
as a command line option.
The ChangeUser command no longer tries to load the Qt libraries if it is being passed the user name as a
command line option.
Fixed a bug with commands that accept a repository path as an argument. The bug would cause DeadlineCommand
to crash if the repository path was quoted and ended with a backslash character (e.g., \\server\repository\).
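The underlying issue is a quirk of Windows command-line quoting: a backslash immediately before a closing double quote escapes the quote. A hedged sketch of a common workaround (a hypothetical helper, not Deadline's actual fix):

```python
def safe_quote(path: str) -> str:
    """Quote a Windows path for a command line (illustrative only)."""
    # A backslash right before the closing quote would escape it and mangle
    # the parse, so strip trailing backslashes before wrapping in quotes.
    return '"' + path.rstrip('\\') + '"'

print(safe_quote('\\\\server\\repository\\'))  # "\\server\repository"
```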
Scripting Improvements
Added new events to the Event Plugin API: OnHouseCleaning, OnSlaveStarted, OnSlaveStopped, OnSlaveRendering, and OnSlaveStartingJob.
Added new grid-based control options to the DeadlineScriptDialog class, which make it easier to create custom
interfaces in the Deadline scripts.
Updated the cloud plugins to not swallow their errors when creating instances.
Exposed some errors that happened when Cloud/Balancer plugin files were missing or spelled incorrectly.
Added function to get the database connection string.
Added function to change a job's frame list.
Default for ConcurrentTasks in a plugin's dlinit file is now True.
Added API commands to launch processes with a specific user account.
Made some improvements to the way Python exceptions are printed out.
Fixed some issues with how Python stdout and stderr were redirected to the Deadline logs.
Added new API commands to suspend all non-rendering tasks for a job.
When a plugin, event, cloud, or Monitor script is executed, the log will now show where the script is being
loaded from.
Added EnabledStickySaving function to the DeadlineScriptDialog class that can be used to automatically save
sticky settings when the dialog is closed.
Improved some function documentation for the API.
The Slave Stdout Limit is now applied to ManagedProcess objects created in plugin scripts. Before, it was only
applied to the main DeadlinePlugin object.
Fixed a bug that prevented module import errors from showing the actual Python error.
Added some additional User Group functions.
The RGB spinners in the Color script control now resize when the control size changes.
Added ClientUtils.CreateScriptTempFolder() function to create a temporary folder for the script that is automatically cleaned up.
Fixed how the value is set for the RadioControl script control.
Fixed a bug with getting the disabled slave count in the GetFarmStatisticsEx.py web service script.
Added an OnJobPurged event trigger that gets called right before a job gets purged from the database.
Added OnSlaveStalled callback for event plugins.
Added functions to the REST API and the standalone Python module to get the job details that are shown in the
Job Details panel in the Monitor.
Added support to the REST API and the standalone Python module to undelete deleted jobs, purge deleted jobs,
get deleted jobs and get deleted job ids.
Added RepositoryUtils.GetJobDetails() function.
Added RepositoryUtils functions to get deleted job IDs and to undelete jobs.
Fixed a bug that prevented JobUtils.CalculateJobStatistics() from working in non-Monitor scripts.
PYTHONHOME and PYTHONPATH are now properly set to the system's values in RunProcess for the event
plugins.
GetConfigEntry and GetConfigEntryWithDefault functions for plugins now trim whitespace off the values.
Added support to the Standalone Python API for doing basic authentication with the Web Service.
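Basic authentication itself is standard (RFC 7617); the sketch below only shows the header that ends up on the wire, with placeholder credentials (how the Standalone Python API accepts them is not shown here):

```python
import base64

def basic_auth_header(user: str, password: str) -> dict:
    """Build an HTTP Basic Authentication header (RFC 7617)."""
    # The credentials are joined with ':' and base64-encoded.
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("artist", "secret"))
# {'Authorization': 'Basic YXJ0aXN0OnNlY3JldA=='}
```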
Added missing documentation for SlaveUtils.GetMachineIPAddresses() API function.
Added RepositoryUtils.SlaveExists() function to check if a slave exists.
Fixed a bug where the OnJobFinished callback for Event plugins wasn't always getting the updated job object.
Added some missing properties to the doxygen docs for BalancerInfo, PulseInfo, SlaveInfo, and SlaveSettings.
SlaveHostMachineIPAddressOverride in SlaveSettings now represents the correct value.
Application Plugin Improvements
3ds Max Improvements
Updated SMTD version numbers to 7.0.
Fixed a SMTD initialization error.
When copying external files, SMTD no longer tries to copy over missing files.
3dsMax2015_sp2 & Extension_1 dictionary entry added to 3dsmax plugin.
Default/sticky settings can now be set in SMTD for the ExtraInfo fields.
Removed (x86) references in 3dsmaxcmd plugin for Max2014 & Max2015.
Made some improvements to the RTT (Render To Texture) feature in SMTD, including the option to bake one
object per task.
Fixed bug in FumeFX string handling in 3dsmax plugin.
Updated SMTD to handle blowup mode properly.
Updated Region manipulator in SMTD to keep aspect ratio while in blowup mode.
When offloading Mental Ray DBR jobs, the job will now use a temporary max.rayhosts file, rather than modify
the original.
Added workaround to prevent 3ds Max 2015 from crashing when it's rendering as a service.
Fixed some layout issues in SMTD.
Fixed some layout issues in the VRay DBR submitter.
Added better error messages to SMTD if the main script from the repository can't be loaded.
Added some new SMTD sanity checks (CheckForOutputPathLength, CheckForREPathLength, CheckForDuplicateREPaths, CheckForObjectNames, CheckForCorruptGroup).
Fixed a bug in the 3ds Max 2015 workspace workaround that caused it to fail if the workspace directory doesn't
exist.
Fixed a bug that affected the tile assembly of frames rendered using the VRay frame buffer.
Fixed a tile assembly issue with VRay MultiMatte render elements.
Updated 3dsmax plugin dict in 3dsmax.py to clearly inform users which versions of 3dsMax are broken with
Deadline.
Changed maxTileAssembler command to use HiddenDOSCommand to hide console window on slave.
SMTD - Added the ability for the [PREVIEW] job to enable/disable its parent dependency on the [REMAINING] frames job.
SMTD - When rendering single frame tile or single frame Jigsaw jobs, OutputFilename# is now frame specific
instead of ####.
SMTD - Fixed an issue where RE paths were not output to the Monitor OutputFilename# when VRay Separate Render Channels was enabled.
SMTD - Re-worked the logic for when VRay REs are output as Separate Render Channels via the VFB to the
Monitor OutputFilename#.
CommandLine Improvements
Path Mapping is now performed on the arguments for CommandLine jobs.
CommandScript Improvements
Path Mapping is now performed on the arguments for CommandScript jobs.
Corona Improvements
Added support for Corona standalone.
DJV Improvements
Re-worked DJV plugin & submission script to handle new DJV v1.0.1, which has changed the majority of its
command line flags in this new release!
Fixed a couple bugs when using the job right-click script to submit a DJV job.
Draft Improvements
Added Path Mapping support to the Draft tile assembler.
Updated Draft to version 1.2.3.57201. Also note that if you are using Draft 1.1 or earlier, you will need an
updated Draft license.
Updated Draft Tile assembler monitor submission script to be able to add all of the plugin submission options.
Updated Draft Tile submitter to fix a visual bug.
Improved the error message when the Draft Tile Assembler can't load input tiles.
FFmpeg Improvements
Path mapping is now applied to the preset files.
The FFmpeg plugin now enforces the correct path separators based on the OS.
Fixed some typos in the FFmpeg submitter in the Monitor.
Fusion Improvements
Added support for Fusion 7.
Updated the Fusion plugin icon.
Hiero Improvements
Fixed how we get the start and end frame for a clip in the Hiero submitter.
Houdini Improvements
Fixed some bad logic when checking the output file in the Houdini submitter.
Fixed an error when loading the sticky SubmitSuspended property in the integrated Houdini submitter.
The integrated submitter now includes the current ROP name with the job name.
Improved Arnold for Houdini support.
Lightwave Improvements
Updated the Path Mapping tooltip in the Lightwave plugin to mention that it can be disabled if there are no Path
Mapping entries defined in the Repository Options.
Jobs submitted from Lightwave 11.8 now render properly.
Mantra Standalone Improvements
The "mantra: Bad Alembic Archive" error message is now caught during rendering.
Updated the Path Mapping tooltip in the Mantra plugin to mention that it can be disabled if there are no Path
Mapping entries defined in the Repository Options.
Maya Improvements
Added Jigsaw support to Maya.
Removed unnecessary 32 bit paths from the MayaBatch and MayaCmd plugin configurations.
Added a new stdout handler to catch a Maya licensing error.
Fixed some text cutoff issues in the integrated submitter on Mac OS X Mavericks.
Added overrides for the height and width of the render output to the Monitor submitter.
Fixed FumeFX Wavelet Sim issue for MayaBatch & MayaCmd.
Fixed an Arnold for Maya verbosity flag bug.
Fixed some issues when using tile rendering with VRay.
VRay render elements are now supported when using the Draft Tile Assembler.
Arnold AOVs are now supported by tile rendering.
Added multichannel EXR support for Jigsaw and Draft Tile rendering.
Fixed the default Maya executable paths on OSX.
Added an explanation to the tooltip for the frame list control in the integrated submitter for why it would be
disabled.
Fixed some Vray related bugs in the integrated Maya submitter due to differences between Vray 2 and Vray 3.
Mental Ray Standalone Improvements
Added plugin configuration option to treat exit code 1 as error or success.
modo Improvements
Added Jigsaw support to modo.
Added option to modo Monitor submitter to specify the output pattern.
Added warning message to modo Monitor submitter that overriding output and using Tile Rendering has limitations, and that they should use the integrated submitter in certain cases.
Fixed a bug in the integrated modo submitter that prevented it from working in modo 801.
Nuke Improvements
Added support for Nuke 9.
Updated Nuke plugin to properly handle frame counts in batch node when given write node names.
Fixed a bug that could crop up when setting the environment in the nuke submitter prior to launching deadlinecommand.
Added Render Using Proxy Mode option to the Nuke submitter.
Removed Build option from Nuke submitter, since the versions of Nuke that Deadline supports are 64-bit only.
Fixed an error that could occur if PrepForOFX is not defined in the Nuke.dlinit file.
The integrated Nuke submitter now includes output paths for all Views so that they can be viewed from the
Monitor.
The integrated Nuke submitter now displays a warning if you are trying to submit a job that has no Views.
Updated the names given to the Knobs created by the integrated submitter, which seems to address some instability issues that could come up.
The secondary pool setting is now sticky in the integrated submitter.
Fixed a bug with Nuke path mapping that would mess up embedded TCL in the output path.
Updated the Path Mapping tooltip in the Nuke plugin to mention that it can be disabled if there are no Path
Mapping entries defined in the Repository Options.
The integrated Nuke submitter handles TCL embedded in the output path properly when passing the paths to
Deadline to view the output from the Monitor.
Fixed an error in the submitter when the Nuke comp has proxy mode enabled.
In the Nuke submitter, Deadline's settings are now created in a Deadline tab, instead of just using the default
User tab. The settings have more readable names too.
Added Performance Profiling option to submitter (Nuke 9 and later).
Changed layout of submitter controls a bit.
Fixed an issue with loading Shotgun and FTrack KVPs from the Nuke script file.
Puppet Improvements
Added support for Puppet jobs.
Python Improvements
Path separators for the script path are now set per OS after Path Mapping has taken place.
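A minimal sketch of this kind of separator normalization, assuming path mapping has already produced the path (hypothetical helper, not the plugin's actual code):

```python
import os

def localize_separators(path: str) -> str:
    """Rewrite mixed path separators to the local OS convention."""
    # After repository path mapping, a path may contain a mix of '/' and '\';
    # normalize both to os.sep so the local OS can resolve it.
    return path.replace("\\", os.sep).replace("/", os.sep)

# Yields '/mnt/scripts/render.py' on Linux/macOS,
# '\mnt\scripts\render.py' on Windows.
print(localize_separators("/mnt\\scripts/render.py"))
```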
Quicktime Improvements
Fixed an error in the job right-click script to submit a Quicktime job.
Realflow Improvements
Added support for Realflow 2014.
Improved Hybrido simulation progress reporting.
Rhino Improvements
Added Jigsaw support to Rhino.
Added Tile Rendering support to Rhino.
Updated the default Rhino 5 executable path.
When Rhino starts up, the Enter button is now pressed to work around a case where Rhino wouldn't start
rendering.
Salt Improvements
Added support for Salt jobs.
SketchUp Improvements
Added support for SketchUp 2015.
Increased width of export directory and prefix fields in the submitter.
Vray DBR Improvements
Added a task timeout option to all the DBR submission scripts. When the timeout is reached, the task will be
marked as complete so that the slave can move on to something else.
In the Monitor submitter, the application version number is now sticky between sessions.
The 3ds Max and Maya DBR submitters now disable vray distributed rendering when closing if the submitter
had automatically enabled it.
The 3ds Max DBR submitter can now automatically mark the spawner job as complete when the rendering
finishes.
Fixed how the Maya Vray DBR submitter creates a new shelf if there isn't already a Deadline shelf.
In the Monitor submitter, the port label visibility is now toggled on/off based on the currently selected application, which properly refreshes the UI.
The default Vray spawner paths for 3ds Max Design are now included.
Added a timeout setting for all supported applications except 3ds Max (3ds Max RT is supported though).
Added an option for how to handle the case where a vray DR process is already running on the machine.
The Port number can now be specified for 3ds Max.
3ds Max RT is now properly supported.
Updated height of VRay dialog in Softimage.
In the Ply2Vrmesh submitter, the attribute field is now wider.
Event Plugin Improvements
ftrack Event Improvements
Added ftrack support to most of the submission scripts.
Shotgun Event Improvements
Updated Shotgun API to version 3.0.17.
Added functionality to upload a filmstrip and a H264 quicktime movie to Shotgun when a job finishes rendering.
The submission script installers no longer create a rollback folder in the Repository folder.
Launcher Improvements
Fixed an error on Linux when checking how long the system has been idle in a headless environment.
Slave Improvements
Fixed a bug that caused the slave to report that it had a permanent license in some cases when it couldn't check
out a valid license.
Pulse Improvements
Fixed a bug that prevented a Primary Pulse from performing the Pending Job Scan on Linux and OSX.
Application Plugin Improvements
3ds Max Improvements
Fixed a bug for 3ds Max 2015 when checking the visibility of the SceneExplorer prior to rendering.
Cinema 4D Team Render Improvements
The C4D Team Render plugin now works properly with C4D 15 and 16.
Removed the security token file location options from the plugin configuration, since they aren't needed.
The security token file is now created in the correct location on OSX.
Improved the error message that occurs if the security token file can't be created (often due to permissions).
Moved the Copy to Clipboard button next to the security token field in the integrated submitter.
Increased the button widths at the bottom of the integrated submitter to fix some text cutoff issues.
If the security token is blank when submitting the job, it is now populated with the token that is automatically
generated.
The Team Render submission script installer now supports C4D 16.
The security token can no longer be modified from the Monitor after the job has been submitted.
Combustion Improvements
Path mapping is now performed on the scene file path (if the scene isn't being submitted with the job).
Lightwave Improvements
Added support for Lightwave 2015.
Fixed a bug that prevented the integrated submitter from working with Lightwave 2015.
modo Improvements
Permissions are now set properly by modo submitter installer, which allows modo to recognize the Deadline
submitter when loading.
In addition, Swap usage for the rendering process is stored with a job's task when it completes, and is also stored in
the statistics for the job when it completes.
Improved Slaves Statistics Reports
The Slave Resource Usage farm report is now called the Slaves Overview farm report, and shows additional statistics.
For example, the new Slaves Overview chart shows how many slaves were in each state (starting job, rendering, idle,
offline, stalled, and disabled). In addition, the new Available/Active Slaves charts show the number of slaves that are
available, and the number of available slaves that are active. Finally, the new Plugin Usage chart shows the overall
usage of the render plugins.
Both the Slaves Overview and Active Slaves Stats reports can also be shown for a given region. This allows you to see
statistics for slaves in a specific Cloud region, or in specific areas in the office (e.g., render nodes versus workstations).
Note that this requires you to set which regions your slaves belong to in their Slave Settings.
Improved Graphs in the Monitor
Line and Bar graphs in the Monitor now support panning and zooming, and a right-click option has been added to reset
the zoom level. In addition, individual series in some Line graphs can be shown/hidden from the right-click menu.
Finally, the axis labels in these graphs have been updated to properly represent integer and date/time values, which
makes them easier to read.
Expanded Font Synchronization
The new FontSync event plugin that ships with Deadline can be used to synchronize fonts on Mac OS X and Windows
before the Slave application starts rendering any job, or when the Slave first starts up. This general FontSync event
plugin replaces the font synchronization options in the After Effects plugin and now works for ALL plugin types in
Deadline.
Improved Job Batch Display
Deadline 7 introduced the ability to group jobs together in the Monitor by setting their Batch Name property. Now, all
Deadline submitters automatically set the Batch Name if multiple related jobs are being submitted at the same time.
For example, when submitting each render layer as a separate job in Maya, they will all be part of the same batch.
Another example is submitting a Jigsaw render with a dependent assembly job.
In addition, the Batch Row in the job list in the Monitor now shows information for all columns, depending on the
settings for the jobs in the batch. For numeric settings like priority or machine limit, the largest value for the jobs is
shown. For settings like pool and group, the value will be shown if all jobs have the same value, and if they don't,
<batch> is shown instead. For all other columns, <batch> is simply shown.
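The column rules described above can be sketched as follows (a hypothetical re-implementation for illustration only, not the Monitor's actual code):

```python
def batch_cell(values, numeric=False):
    """Compute the value a batch row shows for one column."""
    # Numeric columns (priority, machine limit) show the largest value
    # across the batch; other columns show the shared value, or '<batch>'
    # when the jobs in the batch disagree.
    if numeric:
        return max(values)
    return values[0] if len(set(values)) == 1 else "<batch>"

print(batch_cell([50, 70, 60], numeric=True))  # 70
print(batch_cell(["comp", "comp"]))            # comp
print(batch_cell(["comp", "fx"]))              # <batch>
```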
Finally, the counts above the job list in the job panel now show the number of batches in the list, and the selected count
now ignores selected batches so that it properly represents the number of selected jobs.
Protected Jobs
Jobs now have a Protected property. When enabled, the job can only be deleted by the job's user, a super user, or a
user that belongs to a user group that has permissions to handle protected jobs. Other users will not be able to delete
the job, and the job will also not be cleaned up by Deadline's automatic house cleaning. This is useful if you have jobs
you need to keep around for testing or benchmark purposes.
{SEQ@}: This represents the task's frame sequence files, using @ as the padding. For example:
/path/to/image@@@@.png
{SEQ%}: This represents the task's frame sequence files, using %d as the padding. For example:
/path/to/image%04d.png
The arguments default to {FRAME}, which keeps the default behavior from previous versions of Deadline intact.
In addition, proper names can be given to the viewers, which are shown in their corresponding menu items. Finally,
viewers can be configured to support chunked tasks (tasks which consist of more than one frame).
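As a rough illustration of how such padding tokens expand (a hypothetical sketch, not Deadline's actual implementation):

```python
import re

def expand_padding(template: str, frame: int) -> str:
    """Expand '@' run and printf-style '%0Nd' padding in a sequence path."""
    # A run of '@' becomes the frame number zero-padded to the run's width.
    template = re.sub(r"@+", lambda m: str(frame).zfill(len(m.group(0))), template)
    # '%04d'-style padding is handled by normal printf-style formatting.
    if "%" in template:
        template = template % frame
    return template

print(expand_padding("/path/to/image@@@@.png", 7))  # /path/to/image0007.png
print(expand_padding("/path/to/image%04d.png", 7))  # /path/to/image0007.png
```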
Standalone Web Service Application
A standalone Web Service application is now shipped with Deadline, and is called deadlineWebService.exe. It works
exactly the same as the Web Service feature that is built into Deadline Pulse, and both can be configured using the new
Web Service page in the Repository Options.
Install Launcher as Daemon on Mac OS X
The Deadline Client installer now has an option to install the Launcher as a Daemon on Mac OS X. This feature lets
you run the Launcher daemon as root, or as another user account.
Improved Submission Script Installers
The submission script installers now show what DEADLINE_PATH is set to (which is used by the submission scripts
to determine where the Deadline Client's bin folder is located). You then have the option to change it if it's incorrect,
or set it if it doesn't exist. This is useful if you have multiple versions of Deadline installed on your system.
A side-effect of this improvement is that it allows you to update DEADLINE_PATH without having to reinstall the
Deadline Client or manually change your system's environment. To do this, simply run any submission script
installer, change the DEADLINE_PATH value, and uncheck all options listed in the Components list. The installer
will then update the DEADLINE_PATH without installing the submission script files.
Draft Updated to Version 1.3.2.58232
This version of Draft requires a new Draft 1.3 license, and includes the following updates:
EXR Images:
Added support for EXR data and display windows (previously data windows were set to the same size as the
display windows).
Updated submitters that support Draft tile assembly to add a new line to the start of Draft assembler config files
to work around potential encoding issues.
Fixed some rounding errors in Jigsaw that could occur with region sizes if the background image is a different
resolution than the rendered image.
Fixed some encoding issues that could occur in the assembly config files when submitting Jigsaw renders.
Added an Idle Detection option to only stop a slave when the machine is no longer idle if that slave was originally
started by idle detection.
Fixed a bug when doing path mapping that caused the whole file to be read into memory when there weren't any
paths in the Mapped Paths settings in the Repository Options.
Deadline no longer tries to create the deadline.ini file if it doesn't exist. This could happen unintentionally when
the deadline.ini file was being updated on Linux or OSX, and could cause the file to get wiped.
The Deadline applications no longer cache the machines host name, which can cause problems when running
multiple instances of the Slave on the same machine.
Fixed a bug in how the default font for the Deadline applications was chosen on OSX, which could cause the
shortcuts in the menus to be displayed incorrectly.
Fixed issues with how all the Deadline applications handled startup errors.
Installer Improvements
The Repository installer now ships with default script menu layouts for the Monitor (they are only applied if
there aren't existing customizations to the script menus).
Added a backuprepo command line option to the Repository installer to specify if the repository should be
backed up or not (default is true).
Improved the speed of backing up the repository in the Repository installer.
Added command line option to the Windows Client installer to kill Deadline processes before proceeding with
installation.
Fixed an error in the OSX Client installer that occurred when trying to set up the Launcher Login Item in a
headless session.
The Repository installer now sets the label in the mongodb plist file correctly on OSX.
Added option to OSX Client installer to install launcher as a daemon.
The Submitter Installers no longer create empty folders in the Start Menu on Windows.
The Client uninstaller now checks if there is another Deadline installation prior to deleting DEADLINE_PATH,
and if there is, it prompts the user if they want to delete DEADLINE_PATH or change its value to something
else.
The submission script installers now show what DEADLINE_PATH is set to, giving you the option to change it
if it's incorrect.
Fixed some issues with the MongoDB init script that is installed by the Repository installer on Linux that could
cause it to conflict with a previous MongoDB installation.
Fixed how the LAUNCHERSERVICELOCK variable is set in the Launcher init script that is installed by the
Client installer on Linux.
Fixed a bug that caused the modo submitter installer to install into an extra DeadlineModo sub folder.
The DeadlineModoClient.pl script for modo 6xx and earlier is now shipped with the Repository installer again.
End of line characters are now removed when the Repository installer sets up the dbConnect.xml file in the
Repository.
The SketchUp submitter installer now works even if SketchUp hasn't been installed on the machine yet.
Fixed the default install paths in the SketchUp submitter installer for SketchUp 7, 8, and 2013.
Added command line option to the Client installer to set the launcher daemon delay setting (Linux and OSX).
Permissions are now set properly on the integrated submission scripts that are installed by the Submitter Installers.
When installing the launcher as a service on Windows, the client installer now grants the SeServiceLogonRight right to the account name so that the service can start properly.
Job Improvements
Added an option to the job scheduling settings to set when the job should stop.
The swap usage of the rendering process is now stored with the task when it is complete, and is also stored in
the job statistics.
Added a job option to specify the rendering progress cut-off for interrupting a job.
Maintenance jobs now take the job's whitelist/blacklist into account when setting the number of tasks for the
job.
Added an Enhanced Balancing Logic option for balanced/weighted scheduling options. It's an experimental
feature that helps prevent slaves from jumping between jobs.
Added new OutputFilename#Tile? property to the job info file, which will keep track of images for tile jobs (#
is for the output index, ? is for the tile index).
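For illustration, a tile job's info file might declare its tile outputs like this (the directory and filenames below are hypothetical):

```
OutputDirectory0=\\server\output
OutputFilename0Tile0=tile_0_frame0000.exr
OutputFilename0Tile1=tile_1_frame0000.exr
```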
Added a job property to protect jobs from being deleted or archived.
A history entry is now added to a job before it is deleted (in case the job gets undeleted later).
Fixed some warning messages that could appear when submitting a job that is frame dependent.
Fixed some bugs when submitting jobs with asset dependencies.
Updated the JobTransfer plugin to use the new RepositoryUtils.CreateJobSubmissionFiles function, which ensures that the transferred job's properties are set correctly.
The TransferSubmission script now sets the transfer job name based on the selected job name.
Trailing path separators are now stripped from the output directory when using OutputDirectory# in the job info
file during submission.
Added a Custom job scheduling option which lets you pick specific days to start and/or stop the job, just like in
the Slave Scheduling settings.
When setting the next start or stop date for Daily scheduled jobs, it is now relative to the current date, not the
date that the job was originally scheduled to be started or stopped.
Fixed a Daylight Savings Time related bug that affected job archiving and getting jobs from the REST API.
When archiving a job, any JSON errors are now written to the log.
Added option to use sudo or su on Linux and OSX when rendering the job as another user. Also added the
option to preserve the environment when using sudo or su.
Fixed an Automatic Job Timeout logic bug. Now, if both Automatic Job Timeout options are enabled in the
Repository Options, then both requirements must be met.
A job no longer fails to submit if an empty Username value is set in the job info file. Instead, the current
Deadline user on the machine is used.
Statistics Improvements
The Slaves Overview report now shows an overview of Available and Active slaves.
The Slaves Overview report now shows overall usage of the render plugins.
The Slaves Overview and Active Slave Stats reports can now be shown for a specific Region.
Added tables to the Slaves Overview report to show Last, Min, Avg, and Max values for the series in the graphs.
All farm reports pages now use splitters so that the graphs can be resized or hidden.
In the Farm Reports dialog, the date range boxes are now formatted to be consistent with how dates are formatted
elsewhere.
Power Management Improvements
When Power Management is starting up Slaves for a job, it now checks the job's Limits, and doesn't start up
Slaves if they will exceed the maximum for those Limits.
When starting up Slaves for a job, the list of jobs for the secondary pool scan are now gathered properly.
The maximum number of Slaves that can be started for a specific job is now tracked between the primary and
secondary pool scan, in case the job shows up in both collections.
Fixed a bug in Idle Shutdown that would not shutdown an idle Slave on a machine if there were offline and/or
stalled Slave instances on the same machine.
The thermal shutdown sensor dialog in the Power Management window now ensures that the user enters a host
name or IP address for the sensor.
When setting up Thermal Shutdown, you can now specify Test sensors that can be used to test the Thermal
Shutdown functionality without connecting to real temperature sensors.
Added option to Power Management groups to simply include all slaves in the group (instead of having to add
each one manually).
Launcher Improvements
The option to change the user is now disabled if Deadline is configured to use the system's user.
Fixed a memory leak in the Launcher that occurred when it launched various Deadline applications through it.
When shutting down a Linux machine, the Launcher now tries to use sudo shutdown instead of just shutdown.
The deadlinelauncherservice script on Linux now returns the proper exit code when checking the status of the
Launcher.
Fixed a bug that prevented Idle Detection from working properly on Linux.
The launcher now passes a -service command line argument to the slave if it is running as a service on
Windows. This tells the slave that it is running as a service.
The init.d script (Linux) and launchd script (OSX) now pass a -daemon argument to the launcher.
When the launcher is running as a daemon (Linux and OSX), it will sleep the number of seconds specified for
the LauncherDaemonStartupDelay setting in the system deadline.ini file before starting up any other Deadline
applications. This delay helps ensure that the machine has its hostname set during startup before launching
Slave, Pulse, or Balancer.
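As a sketch, the corresponding entry in the system deadline.ini might look like this (the [Deadline] section name and the 30-second delay are assumptions for illustration):

```
[Deadline]
LauncherDaemonStartupDelay=30
```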
The Launcher icon tooltip now shows that the repository is not set when the deadline.ini file doesn't exist.
Monitor Improvements
General
All remote viewer scripts (ARD, Radmin, RDC, and VNC) now use the hostname/IP override if necessary.
When managing the Script menus in the User Group Permissions dialog, scripts that are no longer in the repository no longer show up.
When auto-configuring the performance settings in the Repository Options, a preview of the current and new
performance settings is now shown.
The option to change the user is now disabled if Deadline is configured to use the system's user.
Added Import/Export/Default buttons to the Script Menus page in the Repository Options.
Fixed a bug in the Script Menus page where script items were losing their order when dragged and dropped as
a group.
Fixed a bug in the Script Menus page that could cause the Monitor to crash during some drag and drop operations.
When changing repositories, the Submission and Scripts menus in the Monitor are now updated.
Moved the Image Viewer settings to a separate page in the Monitor options, and added the option to specify command
line arguments for them.
The Client Setup page in the Repository Options now explains that clients can automatically upgrade or downgrade.
Added a pinned filters button above the list in most panels to allow quicker switching between pinned filters.
Tooltips for spinner controls in the Repository Options now show the minimum and maximum values that are
supported.
The default balancing algorithm in the Balancer Settings in the Repository Options now has a Verbose option.
Made a slight improvement to the performance of the list panels.
The box on the left of the Repository Options, Plugin Configuration, etc., can now be resized.
The datetime values shown in the Monitor are no longer based on the system's region settings. This was breaking
the Monitor datetime filters in some regions.
The first column in all the lists can now be moved using drag and drop.
The Monitor no longer asks for the super user password twice during startup if the Monitor is configured to start
in super user mode.
Fixed inconsistencies in the sort order of the job status column.
Added options to the House Cleaning settings in the Repository Options to disable having the Slaves perform
housecleaning, repository repair, and pending job scan.
Added option to Slave Scheduling groups to simply include all slaves in the group (instead of having to add each
one manually).
Removed the [?] button in the top right corner from a few dialogs, since it isn't used.
12.5. Deadline 7.1.0.35 Release Notes
Added a green label to the Monitor status bar that makes it clear that you are only seeing your own jobs if you
aren't allowed to see other users' jobs. The tooltip for this label explains why you can't see the other jobs.
Fixed a bug in the local slave controls where some options weren't disabled by default if Override Idle Detection
Settings was disabled.
Added an option to the View menu to save all pinned layouts to a zip file.
The first tab in each group of tabs is now selected by default when opening the Monitor, instead of the last tab.
The pinned filter button menu for the list controls is now updated properly.
The Monitor is no longer hidden and restored when changing local pinned layouts.
The Execute Command remote control option now respects the IP Address or Host Name override of the machine it's connecting to.
Added user permission option to control if users in a group are allowed to handle protected jobs.
Fixed tooltips and tab order in New/Edit Path Mapping dialog.
The Reports panels in the Monitor now use a monospace font when using the default style (like the Console
panel does).
The Console font in a custom style is now applied properly when the monospace option is disabled.
The Log report row color in the Reports panels is now applied properly when using a custom style.
Line graphs now support panning and zooming, and a right-click option is now available to reset the zoom level.
Individual series in some Line graphs can be shown/hidden from the right-click menu.
Improved the axis labels in many graphs so that they properly represent integer or date/time values.
Fixed a display issue with the Find icon in the context menu for the Console and Reports panels.
Fixed a bug that could prevent jobs exported in one timezone from being imported into another timezone.
Fixed a bug that could prevent the report panel from displaying an error message if it can't connect to Pulse to
stream logs (if Pulse log streaming is enabled in the Monitor options).
Fixed how the controls in the Override Idle Detection group box were enabled/disabled when enabling it.
Split the local slave dialog into 3 tabs so that it's not so tall.
Error and log reports now show if the slave was running as a service (Windows only).
Added a new CheckTemperatureSensors.py script that can be used to check the temperature of all sensors in
Power Management.
Added options for a custom viewer name (which is used for the menu item), and for whether it supports chunked tasks. If
an image viewer supports chunked tasks, the chunked task image viewer dialog won't be shown.
Added a better error message to deal with custom viewers that are pointing to directories instead of files.
When the Monitor is configured to start in super user mode, it no longer hides panels during startup that the user
wouldn't normally have permissions to see.
The Monitor no longer loads the Monitor settings twice during startup.
When changing repositories while in super user mode, the monitor will stay in super user mode if the new repo
doesn't have a super user password, or will prompt for the password if it does have a super user password.
Fixed a bug in the logic that determines if the Timeout Multiplier label in Automatic Job Timeout settings in the
Repository Options is enabled or disabled.
When streaming logs from Pulse, the Monitor will now only connect to the Primary Pulse if it is running.
Fixed an error in the Custom Farm Reports when creating a graph with a Time Span value for the group or value
column.
Increased the default width of the Edit Data Columns dialog when creating a Custom Farm Report.
Jobs and Tasks
The job batch expansion arrow in the job list is now always shown in the first column.
The job batch row now shows more information based on the jobs in the batch.
The batch row now shows the number of jobs that are in the batch.
The counts above the job list now show the number of batches, and the selected count now ignores batch rows
so that it properly shows the number of selected jobs.
The counts at the top of the job list now update properly when switching between graph and list displays.
The Job Batch setting in the Job Properties dialog is now called Batch Name.
The initial title for the charts in the job list is now set properly.
Fixed a bug where changing a job's state from the Monitor didn't update the job's Started and Finished date
properties.
Fixed an error when updating a job's history after deleting all reports for the job.
When switching between being able to see other users' jobs and not being able to see them, the job counts are
now updated properly.
The Scan For Missing Output option is now available for Tile jobs.
The Scan For Missing Output dialog now pulls the colors for the task output rows from the job list's color
scheme.
Clicking No on the requeue confirmation in the Scan For Missing Output dialog no longer closes the dialog.
Output for Tile jobs can now be viewed from the Task list in the Monitor.
Added a new dialog to handle the resubmission of Tile jobs.
Fixed a bug in the Scan For Missing Output dialog that would always result in the whole job getting requeued.
Fixed some cases where job batch rows did not disappear properly when all their jobs were filtered out.
Removed an obsolete warning message that could appear when using quick filters in the job list.
Trailing path separators are now removed in the job/task context menu options to Explore Output so that duplicates can be removed properly.
Fixed a bug when whitelisting or blacklisting a slave from the task menu that prevented it from persisting.
The Scheduling page in the Job Properties now has a Custom option, which lets you pick days of the week to
start and/or stop the job.
The job properties dialog will now ask you to pend/release a job if the scheduling settings have changed.
Fixed a bug that prevented the job progress bar in the Monitor from updating when the progress for a single task
job is updated.
Changed the color of the normalized render time line in the task render times graph.
Fixed an error in the Job Dependency View when changing the Elided Titles setting.
Slaves and Pulses
When starting Slave machines from the Slave list, the info dialog is now displayed immediately.
The initial title for the charts in the Slave list is now set properly.
The Slave list now shows the name and port of the Pulse instance that a Slave is connected to (it still shows
"No" if it can't connect to Pulse).
The host name or IP address specified in the Pulse settings is now treated as an override. If left blank, the host
name or IP address shown in the Pulse list will be used (depending on the Pulse setting to use Pulse's IP address
in the Repository Options).
If a host name or IP address override is specified in the Pulse settings, it is now used when the Slaves connect to
Pulse, and for remote commands.
A MAC address override can now be specified in the Pulse settings.
Added Pulse remote commands to Monitor to perform housecleaning, pending job scan, repository repair, and
power management check.
Removed the filtered utilization calculation from the slave list because it impacted performance, and wasn't
really that useful.
Fixed some display issues in the Slave list for slaves that were in the Starting Job state.
Fixed an onDataChanged error in the Slave list.
Removed some debug logging when starting slaves via Remote Control.
Disabled slaves now show their actual state in parentheses in the Status column in the slave list.
Disabled slaves are no longer included in the utilization values in the slave list.
Slaves that are starting a job are now included in the utilization values in the slave list.
When sending remote commands from the Monitor to the Slave, only the slave's postfix is sent, instead of the
full slave name. This prevents slaves from starting up with the wrong slave name if the machine's host name
changes.
When sending remote commands from the Monitor to the default slave instance on the machine, the ~ character
is used to represent the default instance (since the default instance has no postfix).
The CPU Affinity settings in the Slave Properties now mention that it's only supported on Windows and Linux.
Limits
Added a button to the Limit List Control to add a new Limit, which works the same as the existing right-click
option.
The list labels in the Limit Dialog are now set correctly when editing a Machine-level Limit.
Balancers and Cloud
Cloud Regions can now be renamed.
Added a Scripts menu to the Balancer list's right-click menu.
Added Balancer license information to the Balancer list.
Added Balancer remote commands to Monitor to perform balancing.
A host name or IP address override can now be specified in the Balancer settings, which is used for remote
control.
A MAC address override can now be specified in the Balancer settings.
Added a button and a right-click option in the Cloud Panel to add a new instance.
Some right-click options in the Cloud Panel are now asynchronous.
Fixed a bug that affected the updating of the Cloud panel.
Permissions for the cloud panel are now open by default.
Slave Improvements
Slaves now report their Network I/O, Disk I/O, and Swap usage, which can be viewed from the Monitor.
Fixed a bug that could cause the Slave to lock up when registering new fonts on Windows.
The Slave UI now shows the name and port of the Pulse instance that the Slave is connected to (it still shows
"No" if it can't connect to Pulse).
Fixed an Access Denied error that could occur when rendering as a user on Windows.
On Windows, if the rendering process cannot be assigned to a job object, a warning is now printed instead of
the render failing. Note that this is only an issue on Windows 7 and earlier.
The Region name is now shown in the Slave UI.
The Slave now sets its Slave name when updating a requeue report.
Fixed a bug that prevented the Slave on Linux from getting the output image size correctly after it finished
rendering a task.
On Linux and Mac OSX, a SIGKILL signal is now sent to the rendering process when cancelling a task if it
doesn't shut down gracefully.
When finished rendering a task from a Tile job, the Slave now sets the output image file size for the task.
The slaves now report memory usage for a task more reliably on Linux.
Fixed an error on Windows when mapping drives to a remote path with / as the path separators.
Disabled slaves no longer perform house cleaning, pending job scan, or repository repair operations.
A slave now only triggers slave events when its state actually changes. Previously, the slave would trigger the
OnSlaveIdle event repeatedly when it didn't find any jobs to render, even if it was idle before looking for a job.
When the slave is shutting down, it skips the gathering of system info (CPU, RAM, swap, network I/O, and disk
I/O) when reporting the slave state, because gathering it can significantly slow down the shutdown of the slave.
The slaves now only try to connect to the Primary Pulse if it is running.
Pulse Improvements
Fixed a few bugs with how Slave names were processed by the web service (for example, there were issues with
case sensitivity).
Added functions to the REST API to get the contents of job, task, and Slave reports.
The confirmation dialog shown when shutting down Pulse now mentions if Pulse is the Primary.
The Region name is now shown in the Pulse UI.
Balancer Improvements
Added a text box to the Balancer UI that shows information from the previous balancing operation.
Added Balancer license information to the Balancer UI.
Added option to change license server from the Balancer's file menu.
The Balancer system tray icon is now hidden when the Balancer is closed.
The Balancer now responds to remote shutdown requests properly.
The Balancer UI and logs now show which logic plugin the Balancer is using.
The confirmation dialog shown when shutting down Balancer now mentions if Balancer is the Primary.
The Region name is now shown in the Balancer UI.
Added icon to the Perform Balancing menu item in the Balancer UI.
The primary balancer now tries to pull a license immediately after connecting to the repository. This ensures
that the license information in the balancer GUI is correct when it pops up.
The primary balancer will now check in its license if it is switched to standby mode while it's running.
The primary balancer will now explicitly check in its license when it is shut down.
The Balancer now shows regions as disabled if they are disabled or if they are disabled specifically for the
Balancer.
Fixed a bug that could cause the Slave from a terminated instance to still show up in the Slave list in the Monitor.
Command Improvements
Added RemoveCloudRegion command line option to remove a cloud region.
Fixed the help text for the CreateCloudRegion command.
Fixed the deadlinecommand shell script to properly set the LD_LIBRARY_PATH on Linux.
More job properties are now supported by the GetJobSetting and SetJobSetting commands.
Added GetJobExtraInfoKeyValue and SetJobExtraInfoKeyValue commands to get/set key/values in the job's
Extra Info dictionary.
Added GetJobPluginInfoKeyValue and SetJobPluginInfoKeyValue commands to get/set key/values in the job's
Plugin Info dictionary.
Added AppendJobFrameRange command to append frames to a job without affecting the job's existing tasks.
Improved stdout when using commands that change a job's state (like RequeueJob).
The help messages for the SubmitMultipleJobs and Multi commands are now consistent.
Web Service Improvements
A standalone web service application is now included with Deadline.
Fixed a bug that could cause the web service to lock up on Linux and OSX.
Added REST API functions to get the Deadline version.
Added REST API functions to delete Pulse and Balancer instances.
Added REST API functions to perform path mapping.
Added a States parameter to the jobs API to get jobs in the specified state(s). It accepts a comma-separated list
of states.
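A minimal sketch of how a client might build such a query (the /api/jobs endpoint path and parameter casing are assumptions based on the description above):

```python
def jobs_in_states_url(base_url, states):
    # Build a jobs query URL; the States parameter accepts a
    # comma-separated list of state names.
    return "%s/api/jobs?States=%s" % (base_url.rstrip("/"), ",".join(states))
```

For example, `jobs_in_states_url("http://pulse:8082", ["Active", "Suspended"])` would yield `http://pulse:8082/api/jobs?States=Active,Suspended`.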
Added REST functions to get the report contents for a job.
The web service now returns status code 500 when a web service script throws an error.
Web service scripts can now return a status code, as well as additional headers.
Added REST function to append frames to a job without affecting the job's existing tasks.
Scripting Improvements
The RepositoryUtils.CheckPathMappingInFileAndReplace() function no longer loads the entire file in memory
for path mapping.
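The streaming approach can be sketched roughly like this (function and argument names are illustrative, not the actual RepositoryUtils API):

```python
def map_paths_streaming(lines, mappings):
    # Process one line at a time instead of loading the whole file into
    # memory; 'mappings' is a list of (old_prefix, new_prefix) pairs.
    for line in lines:
        for old, new in mappings:
            line = line.replace(old, new)
        yield line
```

In practice the input would be an open file handle, with each mapped line written straight back out to the destination file.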
Popup detection in the Application Plugins now works on Qt popups and Windows 8 mobile popup dialogs.
Invalid DateTime values in Deadline objects passed via the standalone Python API no longer cause errors, and
instead are set to the minimum DateTime value.
Right-click scripts for the Balancer list in the Monitor can now be created.
Added new ReplaceFrameNumberWithPadding and ReplaceFrameNumberWithPrintFPadding functions to
FrameUtils.
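A rough sketch of what such helpers do, replacing the trailing frame number with padding tokens (the real FrameUtils behaviour may differ in edge cases):

```python
import re

def replace_frame_number_with_padding(path):
    # "render.0123.exr" -> "render.####.exr"
    return re.sub(r"(\d+)(\.\w+)$",
                  lambda m: "#" * len(m.group(1)) + m.group(2), path)

def replace_frame_number_with_printf_padding(path):
    # "render.0123.exr" -> "render.%04d.exr"
    return re.sub(r"(\d+)(\.\w+)$",
                  lambda m: "%%0%dd" % len(m.group(1)) + m.group(2), path)
```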
Added functions to the standalone Python API to get the contents of job, task, and Slave reports.
Fixed the RemoveSlavesFroLimitGroupList function typo in RepositoryUtils.
ProcessUtils.IsProcessRunning() is now more reliable on OSX.
Added functions to JobUtils to test dependencies.
Added new Power Management events.
Added script API functions to modify a job's bad Slave list.
Added a GetEventDirectory() function to the event plugin class, which returns the directory path for the current
event plugin.
Added new API functions to get selected Pulse and Balancer settings objects in the Monitor.
Added Standalone Python API functions to get the Deadline version.
Added Standalone Python API functions to delete Pulse and Balancer instances.
Added functions to PathUtils to register or unregister a list of fonts (Windows only).
Added a RepositoryUtils.CreateJobSubmissionFiles function to create the submission files from the job.
Added new MappedPaths module to standalone Python API to perform path mapping.
Added a RepositoryUtils.GetJobsInState function to get jobs that are in the specified state(s).
Added Jobs.GetJobsInState and Jobs.GetJobsInStates functions to standalone Python API to get jobs in specific
states.
Added JobBatchName to Job object in the script API.
Removed use of deprecated JobUtils and ScriptUtils functions from the Monitor scripts that ship with Deadline.
Added FrameUtils.ReplacePaddingWithFrameNumber() function to Script API.
Added standalone Python functions to get the report contents for a job.
Added IsRunningAsService() function to DeadlinePlugin to check if the slave is running as a service on Windows.
Added Limit properties LimitCurrentHolders, LimitInUse, and LimitStubLevel to scripting API.
Added RepositoryUtils.GetPowerManagementOptions function to scripting API.
Added PowerManagementGroup and PowerManagementOptions classes to scripting API.
The SetIniFileSetting function no longer changes the order of sections and keys in the ini file.
Added RepositoryUtils.GetPathMappings() function to get all the path mappings for the current OS and region.
Added RepositoryUtils.AppendJobFrameRange() function to append frames to a job without affecting the job's
existing tasks.
Added standalone Python function Jobs.AppendJobFrameRange() to append frames to a job without affecting
the job's existing tasks.
Comments are now supported in the param and options files for the plugins. A ";" or "#" can be placed at the
start of a line to comment it out.
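A minimal sketch of comment-aware parsing consistent with that rule (the real plugin parser is more involved):

```python
def parse_param_file(lines):
    # Skip blank lines and lines whose first non-space character is
    # ';' or '#', then split the rest into key=value pairs.
    params = {}
    for line in lines:
        stripped = line.strip()
        if not stripped or stripped[0] in ";#":
            continue
        key, _, value = stripped.partition("=")
        params[key.strip()] = value.strip()
    return params
```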
Application Plugin Improvements
3ds Cmd Improvements
Added support for 3ds Max 2016.
Added Backburner sys env PATH checks to 3dsCmd plugin.
Fixed FTrack bug in the 3dsCmd integrated submitter.
Version info for 3dsmaxcmd.exe and 3dsmax.exe executables are now logged during rendering.
Added a new sanity check to the integrated submitter.
Updated version dictionary in 3dscmd.py.
3ds Max Improvements
Added support for 3ds Max 2016.
Many Corona renderer properties can now be modified from the Monitor after the job has been submitted.
Increased the communication timeout between Deadline and 3ds Max, which greatly reduces the occurrence of
timeout errors.
Lowered the timeout for customise.ms from 10 minutes down to 1 minute.
Improved the reliability of the Kill ADSK Comm Center feature.
Added Qt popup handling.
Fixed a bug with .mxp path config files for some submission modes.
The contents of the temporary dl.ini file are now printed to the render log.
Fixed a bug in SMTD that allowed you to click the Submit button before SMTD finished loading.
A warning message is now logged if the Backburner version installed doesn't match the version of 3ds Max
being rendered with.
Sub State-Sets and Scripted State-Sets are now supported in SMTD.
Unified the color scheme in SMTD.
The blacklist/whitelist in SMTD now shows slave states by coloring the items in the list.
The Batch Name for jobs is now supported for all job type submissions in SMTD.
Updated support for some Corona advanced options that recently changed.
Fixed a bug with Quicktime job submission in SMTD.
Added popup handler for ADSK license dialog popup when you borrow a license.
Added popup handler for Populate dialog.
Fixed some Tile/Jigsaw related bugs.
Added MAXScript Debugger popup ignorer.
SMTD now sets the tile output paths for a Tile job so that they can be viewed from the Monitor.
Fixed a bug in SMTD that prevented a couple Shotgun checkboxes from working properly.
Fixed how SMTD set the job batch name when using the Create/Upload checkboxes.
Added new sanity checks to SMTD.
Added ExtraInfo customisable maxscript $tokens to SMTD.
Added an extra LogInfo line so we can see in a crashing 3ds Max error/log report that the slave is running as a service.
Added V-Ray & Corona VFB override checkbox option in SMTD.
Added V-Ray & Corona VFB enable/disable checkbox, only active if override option is enabled in SMTD.
Updated customize.ms to use DeadlineUtil.WarnMessage for warning messages.
Added support for V-Ray Image Sampling - Render Mask Type = current scene selection.
Updated V-Ray advanced renderer maxscript properties to support latest V-Ray v3.15 / nightly builds.
Updated iray advanced renderer maxscript properties to support most recent iray features introduced in 3dsMax
2014 onwards.
Added read-only labels to 3dsmax submission via SMTD to display the final, assembled image resolution when
tile/region/jigsaw rendering.
Changed the V-Ray VFB "[Region] button is enabled" sanity check from #fail to #fix.
Added a couple Maxwell popup ignorers.
Updated version dictionary in 3dsmax.py.
After Effects Improvements
Changed wording of "Number of Tasks" to "Number of Machines" in the multi machine rendering settings.
Enabling multi machine mode in the Monitor submitter now disables the local rendering option.
Fixed a text cutoff issue in the integrated After Effects submitter.
Anime Studio Improvements
Added support for Anime Studio 11.
The submitter can now parse the Layer Comps from the new .anime and .animeproj scene files that were introduced in Anime Studio 11.
Arnold Standalone Improvements
Added path mapping support to the contents of the Arnold .ass files.
Added progress reporting to the Arnold plugin.
The Command Line field in the Monitor submitter is now sticky.
Cinema 4D Improvements
Added a stdout handler to catch the "The output resolution is too high for the selected render engine" error message.
Composite Improvements
Added support for Composite 2016.
Corona Improvements
Added support for Corona distributed rendering.
Updated the Corona icon.
Draft Improvements
Updated Draft to version 1.3.2.58232 (requires a new Draft 1.3 license).
Added the Use Shotgun Data button and its functionality to the job right-click Draft submission script.
Fixed a bug caused by trailing backslashes in paths being passed as command line arguments to Draft.
Fixed some bugs in the Draft Tile Assembler.
Updated the Draft Assembly plugin to use new Draft functions, which reduce memory usage and improve performance.
Fusion Improvements
Re-added Submission and Job scripts to submit Fusion Quicktime jobs to Deadline (was removed in Deadline
6).
Fusion Quicktime jobs are now submitted to the Fusion plugin, instead of having their own QuicktimeFusion
plugin.
The Fusion plugin now pulls from the Fusion render log correctly when Fusion.exe is chosen as the render
executable.
Fixed a bug in the integrated submitter that would prevent job submission from working for certain versions of
Fusion 7.x.
Hiero Improvements
When submitting a sequence with a custom in/out time, the integrated Hiero submitter now sets the end frame
properly.
Houdini Improvements
Added support for Houdini 14.
Added support to the integrated submitter for submitting Wedge ROP jobs.
Submission settings in the integrated submitter now get saved in the Houdini scene file so that settings are sticky
between individual scenes.
Removed auto-detection of Houdini install path in the Houdini submitter installer due to various bugs.
Fixed a bug with how the Houdini submitter installer installed the submission script on OSX.
Fixed an ftrack bug in the integrated submitter.
The integrated submitter now collects all ROPs in the scene, not just the ones in /out.
Added Tile rendering support when submitting from the integrated or Monitor submitters.
Improved path mapping support by using the HOUDINI_PATHMAP environment variable.
Path mapping is now enabled by default in the Houdini plugin.
LuxRender Improvements
Added support for LuxSlave distributed rendering (works like the VRay Spawner and modo Distributed Rendering plugins).
Updated the default executable paths in the plugin configuration.
Mantra Standalone Improvements
Added support for Houdini 14.
Added Tile rendering support when submitting from the Monitor submitter.
Improved path mapping support by using the HOUDINI_PATHMAP environment variable.
Maya Improvements
Fixed an inverted assembly issue when rendering animation tile jobs with Renderman for Maya.
Tile and Jigsaw now work with render layers with Renderman for Maya.
Fixed a bug that could cause Jigsaw animation jobs to not submit a dependent assembler job.
Jigsaw animation jobs now respect the frame list override when overriding layer settings during submission.
Draft Assembly config files are now created in the layer folder that the output is saved to. Before, they were
saved to the root image folder.
Removed some debugging print statements from the integrated Maya submitter.
Path mapping is now enabled by default in the MayaCmd and MayaBatch plugins.
Fixed the install path for the Maya integrated submitter installer on OSX.
Removed all frame borders from the integrated Maya submitter (since they are deprecated in Maya 2016).
Fixed an overlap issue with the integration Connect button in the Maya integrated submitter.
modo Improvements
Added path mapping support for assets in the modo scene file, and for the render output paths.
Added submission option to submit each render pass group as a separate job.
Native modo dialogs are now used for info, errors, and yes/no questions.
The local scene file warning is now only shown when the scene file is not being submitted with the job.
Added support for VRay for modo.
Fixed a typo in the description of the Geometry Cache Buffer setting in the modo plugin configuration.
The output format is now sticky in the modo submitter for the Monitor.
The modo submitter now sets the tile output paths for a Tile job so that they can be viewed from the Monitor.
Fixed a typo for one of the tabs in the modo submitter for the Monitor.
Added Output Override settings to the integrated modo submitter, which let you render to the Layered PSD or
EXR formats.
The browser buttons in the integrated submitter no longer clear their corresponding values if the user cancels
the browser window.
Fixed a bug that prevented Draft assembly from working when submitting modo Tile renders from the Monitor
when a layered EXR format was not selected.
Added support for a modo sanity check script, which can be created in the submission/Modo/Main folder in the
Repository.
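The sanity check runs before submission and can flag problems with the scene. A hypothetical sketch of such a script; the file location (submission/Modo/Main in the Repository) comes from the release note, but the function name and return convention here are assumptions:

```python
import os

# Hypothetical sanity check sketch: returning an empty list means the
# scene is considered safe to submit.
def SanityCheck(scene_path):
    warnings = []
    if not scene_path:
        warnings.append("No scene file is set.")
    elif not scene_path.lower().endswith(".lxo"):
        warnings.append("Scene file does not have a .lxo extension.")
    elif not os.path.isfile(scene_path):
        warnings.append("Scene file does not exist on disk.")
    return warnings
```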
Nuke Improvements
Added support for Nuke Frame Server distributed rendering (for Nuke Studio).
Added support for sequence/container submission in Nuke Studio.
Added option to integrated submitter to submit only write nodes within precomp nodes.
Added option to integrated submitter to render the precomp nodes first.
Fixed an issue with how the Nuke integrated submitter handles write nodes with embedded TCL in the output
path.
Fixed the Nuke integrated submitter to evaluate embedded TCL properly before checking for frame padding.
The integrated submitter now properly detects if Gizmos are selected when submitting the selected write nodes
only.
Octane Improvements
The Octane submitter can now parse render targets from Octane 2 ocs files.
The Octane submitter handles ocs parsing errors better.
Ply2Vrmesh Improvements
Added support for handling multiple frames.
Added option to merge the outputs.
PRMan Improvements
The PRMan plugin now supports rendering RIB files with layers in the file name.
REDLine Improvements
Added support to REDLine for using RMD files for metadata, in addition to the existing RSX option.
Rhino Improvements
Added Qt popup handling.
Fixed a bug in the Rhino submitter.
The Rhino submitter now sets the tile output paths for a Tile job so that they can be viewed from the Monitor.
Fixed some layout issues in the Rhino submitter.
Tweaked Tile Rendering labels in the submitter, and added a label that explains that tile rendering is disabled
when submitting from the Monitor.
RIB Improvements
The RIB plugin no longer fails 3delight renders when they print error reports to stdout.
RVIO Improvements
Re-added right-click job script for RVIO submission.
Softimage Improvements
The Softimage submitter now sets the tile output paths for a Tile job so that they can be viewed from the Monitor.
VRay Standalone Improvements
When rendering separate vrscene files per frame, the frame padding is now added to the output file name.
Path mapping is now enabled by default in the VRay plugin.
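Path mapping rewrites configured path prefixes in scene and output paths so that jobs submitted from one platform can render on another. A simplified sketch of the prefix substitution; the mapping pairs are hypothetical and this is an illustration, not Deadline's actual implementation:

```python
def map_path(path, pairs):
    # pairs is a list of (source_prefix, target_prefix) tuples.
    # Normalize separators, then replace the first matching prefix
    # (case-insensitively, as Windows paths usually require).
    normalized = path.replace("\\", "/")
    for source, target in pairs:
        src = source.replace("\\", "/")
        if normalized.lower().startswith(src.lower()):
            return target + normalized[len(src):]
    return path

# Hypothetical mapping from a Windows share to a Linux mount.
pairs = [("Z:/renders", "/mnt/renders")]
print(map_path(r"Z:\renders\shot010\beauty.vrscene", pairs))
# -> /mnt/renders/shot010/beauty.vrscene
```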
VRay DBR Improvements
Added support for VRay DBR for 3ds Max 2016 and Maya 2016.
The Monitor VRay Spawner submitter now defaults to the "none" pool like the other submitters.
The minimum value for the maximum number of servers in the Monitor submitter is now 1 instead of 0.
The Version and Port settings can now be seen in the job properties for VRay Spawner jobs.
Updated the default TCP ports for 3ds Max VRay and 3ds Max VRay RT.
The port number in the integrated submitter is now disabled after the render begins.
Updated the VRay Spawner Monitor submitter to add support for Cinema 4D.
The Monitor submitter now hides the port setting if it isn't applicable to the selected application.
The log box in the submitter for 3ds Max now has colored text.
Fixed a regression in the submitter for 3ds Max.
Added Check ALL, INVERT & None buttons to the submitter for 3ds Max to allow easy Active Server List
selection.
Added the ability to select which Slaves are used for DBR in the submitter for 3ds Max. Disabled Slaves will
continue to run the spawner job until it is deleted or completed.
Added a couple of Maxwell popup ignorers when rendering with VRay for 3ds Max.
Fixed the install path for the Maya VRay DBR integrated submitter installer on OSX.
Increased the width of the Maya VRay DBR submitter, and removed all frame borders from the UI (since they
are deprecated in Maya 2016).
VRimg2Exr Improvements
Added additional submission options: Separate Files, Multi Part, Long Channel Names, and Threads.
Vue Improvements
Added support for Vue 2015.
Default Vue executable paths for Vue 2014 and 2015 now include the path to the Vue PLE executable.
Event Plugin Improvements
Draft
Fixed a bug where Pool and Group were switched in the Draft event plugin.
Fixed the Draft event plugin to pass on the contents of the DraftExtraArgs key-value pairs to the Draft job.
ftrack
You can now create new Assets from Deadline's ftrack UI.
The Asset list in the UI will now list all Assets belonging to the selected Task's parent (as opposed to assets
already tied with that Task).
The ftrack event plugin now uses a relative path to load the ftrack API.
The ftrack event plugin no longer adds the ftrack API path to sys.path if it's already there.
Fixed a bug in the ftrack event plugin where it would still try to create a thumbnail after determining that it
shouldn't.
If there is only one output file, Deadline now creates the default main component instead of Deadline_Output_0.
Upgraded the ftrack API.
Shotgun
The Shotgun event plugin now uses a relative path to load the Shotgun API.
Cleaned up some logging in the Shotgun event plugin.
Added option to Shotgun event plugin to specify the character that should be used for frame padding when
uploading the paths to Shotgun (default is #).
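The padding character replaces the numeric frame portion of an output path before the path is uploaded. A small illustration of the substitution; the helper name and path pattern are assumptions, and the default of # mirrors the plugin's default:

```python
import re

def pad_frame_number(path, padding_char="#"):
    # Replace a trailing frame number (e.g. beauty.0001.exr) with the
    # chosen padding character, one per digit.
    return re.sub(
        r"(\d+)(\.[A-Za-z0-9]+)$",
        lambda m: padding_char * len(m.group(1)) + m.group(2),
        path,
    )

print(pad_frame_number("beauty.0001.exr"))       # beauty.####.exr
print(pad_frame_number("beauty.0001.exr", "@"))  # beauty.@@@@.exr
```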