April 2013
The product or products described in this book are licensed products of Teradata Corporation or its affiliates.
Teradata, Active Data Warehousing, Active Enterprise Intelligence, Applications-Within, Aprimo, Aprimo Marketing Studio, Aster, BYNET,
Claraview, DecisionCast, Gridscale, MyCommerce, Raising Intelligence, Smarter. Faster. Wins., SQL-MapReduce, Teradata Decision Experts,
"Teradata Labs" logo, "Teradata Raising Intelligence" logo, Teradata ServiceConnect, Teradata Source Experts, "Teradata The Best Decision Possible"
logo, The Best Decision Possible, WebAnalyst, and Xkoto are trademarks or registered trademarks of Teradata Corporation or its affiliates in the
United States and other countries.
Adaptec and SCSISelect are trademarks or registered trademarks of Adaptec, Inc.
AMD Opteron and Opteron are trademarks of Advanced Micro Devices, Inc.
Apache, Apache Hadoop, Hadoop, and the yellow elephant logo are either registered trademarks or trademarks of the Apache Software Foundation
in the United States and/or other countries.
Axeda is a registered trademark of Axeda Corporation. Axeda Agents, Axeda Applications, Axeda Policy Manager, Axeda Enterprise, Axeda Access,
Axeda Software Management, Axeda Service, Axeda ServiceLink, and Firewall-Friendly are trademarks and Maximum Results and Maximum
Support are servicemarks of Axeda Corporation.
Data Domain, EMC, PowerPath, SRDF, and Symmetrix are registered trademarks of EMC Corporation.
GoldenGate is a trademark of Oracle.
Hewlett-Packard and HP are registered trademarks of Hewlett-Packard Company.
Hortonworks, the Hortonworks logo and other Hortonworks trademarks are trademarks of Hortonworks Inc. in the United States and other
countries.
Intel, Pentium, and XEON are registered trademarks of Intel Corporation.
IBM, CICS, RACF, Tivoli, and z/OS are registered trademarks of International Business Machines Corporation.
Linux is a registered trademark of Linus Torvalds.
LSI is a registered trademark of LSI Corporation.
Microsoft, Active Directory, Windows, Windows NT, and Windows Server are registered trademarks of Microsoft Corporation in the United States
and other countries.
NetVault is a trademark or registered trademark of Quest Software, Inc. in the United States and/or other countries.
Novell and SUSE are registered trademarks of Novell, Inc., in the United States and other countries.
Oracle, Java, and Solaris are registered trademarks of Oracle and/or its affiliates.
QLogic and SANbox are trademarks or registered trademarks of QLogic Corporation.
Red Hat is a trademark of Red Hat, Inc., registered in the U.S. and other countries. Used under license.
SAS and SAS/C are trademarks or registered trademarks of SAS Institute Inc.
SPARC is a registered trademark of SPARC International, Inc.
Symantec, NetBackup, and VERITAS are trademarks or registered trademarks of Symantec Corporation or its affiliates in the United States and
other countries.
Unicode is a registered trademark of Unicode, Inc. in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other product and company names mentioned herein may be the trademarks of their respective owners.
THE INFORMATION CONTAINED IN THIS DOCUMENT IS PROVIDED ON AN "AS-IS" BASIS, WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR
NON-INFRINGEMENT. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSION
MAY NOT APPLY TO YOU. IN NO EVENT WILL TERADATA CORPORATION BE LIABLE FOR ANY INDIRECT, DIRECT, SPECIAL, INCIDENTAL,
OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS OR LOST SAVINGS, EVEN IF EXPRESSLY ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
The information contained in this document may contain references or cross-references to features, functions, products, or services that are not
announced or available in your country. Such references do not imply that Teradata Corporation intends to announce such features, functions,
products, or services in your country. Please consult your local Teradata Corporation representative for those features, functions, products, or
services available in your country.
Information contained in this document may contain technical inaccuracies or typographical errors. Information may be changed or updated
without notice. Teradata Corporation may also make improvements or changes in the products or services described in this information at any time
without notice.
To maintain the quality of our products and services, we would like your comments on the accuracy, clarity, organization, and value of this document.
Please email: teradata-books@lists.teradata.com.
Any comments or materials (collectively referred to as "Feedback") sent to Teradata Corporation will be deemed non-confidential. Teradata
Corporation will have no obligation of any kind with respect to Feedback and will be free to use, reproduce, disclose, exhibit, display, transform,
create derivative works of, and distribute the Feedback and derivative works thereof without limitation on a royalty-free basis. Further, Teradata
Corporation will be free to use any ideas, concepts, know-how, or techniques contained in such Feedback for any purpose whatsoever, including
developing, manufacturing, or marketing products or services incorporating Feedback.
Preface
This guide provides instructions for users and administrators of Aster Client, version AC 5.10.
If you’re using a later version, you must download a newer edition of this guide.
The following additional resources are available:
• Aster Database upgrades, clients and other packages:
http://downloads.teradata.com/download/tools
• Documentation for existing customers with a Teradata @ Your Service login:
http://tays.teradata.com/
• Documentation that is available to the public:
http://www.info.teradata.com/
Typefaces
Command line input and output, commands, program code, filenames, directory names, and
system variables are shown in a monospaced font. Words in italics indicate an example or
placeholder value that you must replace with a real value. Bold type is intended to draw your
attention to important or changed items. Menu navigation and user interface elements are
shown using the User Interface Command font.
• A comma and an ellipsis (, ...) means the preceding element can be repeated in a comma-separated list.
• In command line instructions, SQL commands and shell commands are typically written
with no preceding prompt, but where needed the default Aster Database SQL prompt is
shown: beehive=>
This section explains how to install various utilities that complement your Aster Database
installation.
• Install the Aster Database Cluster Terminal (ACT) (page 11)
• Install the Loader Tool (page 133)
Obtain ACT
ACT is installed automatically on the queen during the installation of Aster Database. By
default, the install directory is /home/beehive/clients/act. To launch ACT from the
queen, see “Launch ACT Directly on the Queen” on page 17.
If you want to install ACT on another machine, get the file for the client operating system and
copy it to the computer you'll use to query your database. You may obtain the file in one of
two ways:
• To get the newest package, download it from http://downloads.teradata.com/download/tools
• On your queen node, you can find the installers in the directory /home/beehive/clients_all/<your_client_OS>.
Tip! ACT for Linux requires glibc version 2.6.18 or higher. If you do not have glibc version 2.6.18 or higher, you must
use the IP address instead of the hostname for the -h flag when running ACT. To check the version of glibc, issue
the command ldd --version.
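The glibc check in the tip above can be sketched as a version-tuple comparison. This is a hypothetical helper, not part of ACT, and the ldd banner format it parses is an assumption about typical glibc output:

```python
# Sketch of the glibc minimum-version check; the banner format parsed here
# is an assumption about typical `ldd --version` output, not ACT code.
import re

MIN_GLIBC = (2, 6, 18)

def parse_glibc_version(banner):
    """Extract a version tuple from the first line of `ldd --version` output."""
    first_line = banner.splitlines()[0]
    match = re.search(r"(\d+(?:\.\d+)+)\s*$", first_line)
    if match is None:
        raise ValueError("no version number found in: %r" % first_line)
    return tuple(int(part) for part in match.group(1).split("."))

def glibc_ok(banner, minimum=MIN_GLIBC):
    """True if the reported glibc version meets the minimum (tuple comparison)."""
    return parse_glibc_version(banner) >= minimum

print(glibc_ok("ldd (GNU libc) 2.17"))  # a 2.17 system meets the 2.6.18 minimum
```

Tuple comparison handles multi-part versions correctly, so 2.17 compares greater than 2.6.18 even though 17 < 618 as a string.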
See the section “Launch ACT” on page 16 for information on running ACT.
This section explains how to use Aster Database Cluster Terminal (ACT) to query and manage
databases. ACT is a terminal-based query tool that connects with Aster Database. ACT lets you
connect to the database (optionally using SSO and/or SSL), type queries, issue them to Aster
Database, and get query results. Alternatively, you can source your queries from a file. ACT
can return your query results to the command line or to a file, which makes it useful for
extracting data. Meta-commands and shell-like features are provided to facilitate writing
scripts and automating tasks.
Tip! Beginning with ACT version 4.6, the ACT client cannot connect to versions of Aster Database prior to version
4.6. If you attempt to connect to a pre-4.6 version of Aster Database with a 4.6 or later version of ACT, you will see an
error message indicating that there is a version mismatch between Aster Database and the client. You should obtain
the version of ACT that matches the version of Aster Database to which you are attempting to connect.
Tip! When using SSO (single sign-on), the -U and -w options are not used, because the username and password
are passed directly to the host via SSO.
To log in to the default database that is provided in your installation, type this, replacing the IP
address with the hostname or IP address of your Aster Database queen:
$ act -d beehive -h 10.42.52.100 -U beehive -w beehive
To see a list of ACT command line arguments, type:
$ act --help
Install ACT
You can install ACT on Linux, Windows, Solaris, or Mac.
Launch ACT
See the appropriate section below for instructions on launching ACT:
• “Launch ACT on Windows” on page 17
• “Launch ACT on Linux or Solaris” on page 17
• “Launch ACT on Mac” on page 17
• “Launch ACT Directly on the Queen” on page 17
Tip! On an Aster Database where LDAP authentication is enabled, if during logon an ACT user gets the error message 'ERROR: An internal error has occurred.', make sure the username is present in Aster Database with proper privileges.
Tip! ACT for Linux requires glibc version 2.6.18 or higher. If you do not have glibc version 2.6.18 or higher, you must
use the IP address instead of the hostname for the -h flag when running ACT. To check the version of glibc, issue
the command ldd --version.
2 Change directories to the directory where ACT is installed (by default, /home/beehive/clients).
3 Log in to ACT:
$ act -d <db name> -U <username> -w <password> [argument flags]
Note that if you do not provide the hostname using -h, ACT defaults to localhost. For details on the command line options, see “Startup Parameters for ACT” on page 18.
Log In to ACT
1 Run ACT by typing a command like:
act -d <db name> -h <hostname> -U <username> [-w <password>]
[argument flags]
For details on the command line options, see “Startup Parameters for ACT” on page 18.
2 Provide your database password by:
• adding -w <password> to the ACT login string, or
• omitting the -w argument and providing your database password at the prompt.
3 Choose a database by adding -d <database name> to the ACT login string. If -d is not
used, ACT places you in the system database (with the default name “beehive”).
4 You will see a welcome message, followed by the database prompt, which shows the
database name, followed by “=>”. For example:
Welcome to act AC 5.10, the Aster Database Terminal.
beehive=>
Flag Description
-d [ --dbname ] DBNAME     Specify database name to connect to (default: “beehive”).
-h [ --host ] HOSTNAME     Aster Database server host (default: “localhost”).
-U [ --username ] NAME     Aster Database username (default: “beehive”). Not used with SSO.
Tip! Note the default values for the connection parameters. If you do not specify the parameters -d (database
name), -h (hostname), -U (username), and/or -p (port) in the connect string, ACT will use the default values.
The default values are:
• “beehive” for database name
• “localhost” for hostname
• “beehive” for username, and
• “2406” for port.
If -w is not used, ACT will prompt for a password.
Flag Description
-d [ --dbname ] DBNAME     Specify database name to connect to (default: “beehive”).
--config-file FILENAME     Loads startup parameters from a configuration file specified by FILENAME. See “Use a Configuration File to Pass ACT Startup Parameters” on page 22 for more information.
-c [ --single-command ] COMMAND     Run only a single command (SQL or internal) and exit. For example: act -c "COPY MyTable FROM stdin;" < myDataFile.dat
-f [ --input-file ] FILENAME     Execute commands from a file, then exit. (Run a SQL script.)
Flag Description
--on-error-stop or -E     Enables the “on-error-stop” option; by default this option is disabled. See “Using the “on-error-stop” Option in ACT” on page 22 for more information.
Flag Description
-a [ --echo-script-input ]     Echo all input from script.
-e [ --echo-all-input ]     Echo commands sent to server.
-o [ --redirect-query-results ] FILENAME     Send query results to file (or | pipe).
Flag Description
--enable-ssl     Enables Secure Socket Layer (SSL) support. Must be used if any of the other SSL/SSO arguments are used.
--ssl-encrypt-reads     SSL Encrypt Reads. Must be used if secureWrites=true on the server. Conversely, must not be used if secureWrites=false on the server. See How to Set Configuration Parameters on the Queen (page 85) for information on how to set the secureWrites parameter on the server.
--ssl-self-signed-peer     Indicates that ACT will connect to a queen that will provide a self-signed certificate.
--ssl-private-key-path PATH     Indicates where the private key is stored on the client (ACT) machine.
--ssl-certificate-path PATH     Indicates where the certificate is stored on the client (ACT) machine.
--ssl-trusted-ca-dir DIRECTORY     When using a chain of certificates rather than a single certificate, sets the directory on the client machine where the chain of trusted certificates is stored.
--ssl-trusted-ca-file FILENAME     Provides the location of the signed copy of the server’s certificate on the client machine.
--ssl-cert-filetype ARG     SSL certificate file type (use 1 for PEM; 2 for ASN1; default: 0).
--enable-sso     Enables single sign-on (SSO) support.
Table 2 - 4: SSL and SSO related command-line parameters for ACT (continued)
Flag Description
--gss-lib-path PATH     For Linux, sets the GSS shared library path (default on Linux is /opt/guest/lib32 or /opt/guest/lib64). Ignored on Windows.
Tip! The SSL settings in ACT have interdependencies, and in most cases they rely on the SSL settings on the queen.
See Common SSL Configurations (page 36).
Flag Description
-q [ --quiet ]     Run quietly and do not print messages, only query output. Use this for clean query output. Often used with the -c flag.
-t [ --print-rows-only ]     Print rows only.
-x [ --expanded ]     Turn on expanded table output.
-A [ --unaligned ]     Turn on unaligned table output.
-F [ --field-separator ] ARG     Set the field separator (default: '|').
Flag Description
-h [ --host ] HOSTNAME     Aster Database queen hostname or IP address (default: "localhost"). Note that ACT supports glibc version 2.6.18 or higher. If you do not have glibc version 2.6.18 or higher, you must use the IP address instead of the hostname. To check the version of glibc, issue the command ldd --version.
     When using SSO, you should specify a fully qualified hostname using the -h option, as in the example: <hostname>.<domain>.<com|org etc>. If only the hostname is used with SSO, ACT will append the local domain name before attempting to look up the host. Using an IP address with -h is not supported with SSO.
-p [ --port ] PORT     Aster Database server port (default: "2406").
-U [ --username ] USERNAME     Aster Database username (default: "beehive").
-w [ --password ] PASSWORD     Aster Database password. This parameter is optional; ACT will prompt for a password if you do not pass a -w parameter. Not used with SSO.
To use a configuration file, first create a text file of startup parameters. The following rules
apply when creating the config file:
1 Lines starting with a # character are ignored (considered as comments).
2 Blank lines are ignored (including lines containing just spaces).
3 Parameters are entered using the format
flagname: value
where flagname is the same as the name of the command line flag without the preceding hyphens (--) and value is the flag value as it would be provided on the command line.
Note that the short notations of flags are not supported. For example:
host: <ip>
will work but the following:
h: <ip>
will not work.
4 Flags which do not take any argument on the command line should be given a value of
either true or false.
5 Flag names are case-sensitive.
6 If the config file includes invalid flag names or repeated entries, ACT will not launch, and
an error will display.
7 If the config file includes the “on-error-stop” option with the parameters set to enable
this option, ACT will stop if an error occurs while running SQL queries. See Set “on-error-
stop” in the ACT config file (page 23) for information on setting this option.
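The parsing rules above can be sketched as follows. This is a hypothetical re-implementation for illustration only, not ACT's actual parser:

```python
# Hypothetical sketch of the config-file rules listed above; not ACT code.

def parse_act_config(text):
    """Parse 'flagname: value' lines into a dict, applying the listed rules."""
    params = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # rules 1 and 2: skip comment lines and blank lines
        name, sep, value = line.partition(":")
        if not sep:
            raise ValueError("expected 'flagname: value', got: %r" % raw)
        name = name.strip()  # rule 5: names stay case-sensitive (no lowercasing)
        if len(name) == 1:
            raise ValueError("short flag notation not supported: %r" % name)  # rule 3
        if name in params:
            raise ValueError("repeated entry: %r" % name)  # rule 6
        params[name] = value.strip()
    return params

sample = """\
# SSL settings
host: 10.10.10.10
enable-ssl: true
"""
print(parse_act_config(sample))  # {'host': '10.10.10.10', 'enable-ssl': 'true'}
```

Note how rule 6 makes the parser fail fast on a repeated flag instead of silently keeping one of the two values.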
host: 10.10.10.10
dbname: sampledb
username: sampleuser
# SSL settings
enable-ssl: true
ssl-self-signed-peer: true
ssl-encrypt-reads: false
To start ACT, explicitly specifying the config file, issue a command like this example:
$ act --config-file /home/beehive/.act_ssl_config
To start ACT, explicitly specifying the config file and also passing an additional parameter to redirect query results to a file for this session only, issue a command like this example:
$ act --config-file /home/beehive/.act_ssl_config -o /home/beehive/query_results_file
To list the tables in the database, enter \d at the ACT prompt (in this case, retail_sales=>).
For example:
retail_sales=> \d
List of relations
Schema | Name | Type | Owner
--------+--------------+-------+---------
public | customer_dim | table | beehive
public | date_dim | table | beehive
public | geo_dim | table | beehive
public | product_dim | table | beehive
public | region_dim | table | beehive
public | sales_fact | table | beehive
public | store_dim | table | beehive
(7 rows)
Exit ACT
To quit ACT, type \q and hit <Enter>.
In ACT, you can set these parameters using \set, and in other clients you can typically set them in your data source definition or parameters file.
Let’s look at fetch-count first.
\set fetch-count n
where n is the maximum number of rows ACT should return at a time.
To enforce the fetch-count, ACT uses server side cursors to fetch results, which can help
prevent the memory footprint of ACT from growing too large.
Note that to the user, the results returned will not be different when using fetch-count. The
purpose is simply to reduce the memory footprint of ACT on the server.
Tip! When fetch-limit is used, the total row count returned for the query will be the total row count returned
by the query or the row count specified by fetch-limit, whichever is smaller. For example, if a query normally
returns 35,453 rows, but you have specified a fetch limit of 1000, the query will return 1000 rows (and it will display
“1000 rows returned”). There will be no indication that there were in fact 35,453 rows that would have been returned
had you not had a fetch limit set.
So, as you can see, fetch-limit via server-side cursors does not translate into the workers
doing a LIMIT 1000 on their individual slice of data. Therefore, if the use case calls for it, an
Aster Database power-user should be aware that using the SQL LIMIT clause can speed up
query execution dramatically in Aster Database.
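The row count reported under a fetch limit, as described in the tip above, reduces to a simple minimum. The sketch below is illustrative arithmetic only, not ACT code:

```python
# Illustrative arithmetic for the fetch-limit behavior described above.

def displayed_rows(total_rows, fetch_limit=-1):
    """Rows reported to the user: a negative fetch_limit means fetch all rows."""
    if fetch_limit < 0:
        return total_rows
    return min(total_rows, fetch_limit)

print(displayed_rows(35453, 1000))  # the example above: reports 1000 rows
print(displayed_rows(35453))        # default fetch-limit of -1: all 35453 rows
```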
Tip! If you receive an “Error writing history to file.” error on Linux when attempting to view command history with \s,
check that the current Linux user has permissions to write to the current working directory.
Tab Completion
The UNIX/Linux version of ACT can tab-complete SQL commands and table names that you
type. Tab-completion is not available in the Windows version of ACT.
To use this feature, type the first couple of letters of a command and hit the <Tab> key. If the
completion is unambiguous, ACT completes the command. If ACT doesn’t complete the
command, hit <Tab> again and ACT prints all the possible completions. Using the list as a
reference, type enough additional characters to unambiguously identify the desired command
or table, and hit <Tab> again to complete it. Here are a few common uses of tab completion:
• To complete common SQL commands. For example, type “se” and hit <Tab> to type
SELECT.
• To list various ACT utility commands. For example, type “\” and hit <Tab> to show all the commands, or type “\d” and hit <Tab> to show all the commands that start with “d”.
• To complete a table name. For example, type “SELECT * FROM sa” and hit <Tab> to complete the table name, or hit <Tab> twice to show all the table names that start with “sa”.
You can also list the names of all tables in the database by typing “SELECT * FROM ” (note
the trailing space) and hitting <Tab> twice.
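The completion behavior above amounts to a prefix filter over the known commands and table names. A minimal sketch follows; the candidate list is made up for illustration and is not ACT's actual command set:

```python
# Minimal sketch of tab completion as a case-insensitive prefix filter.
# The candidate list is invented for illustration.

def complete(prefix, candidates):
    """Return the candidates matching prefix; a single match is unambiguous."""
    prefix = prefix.lower()
    return sorted(c for c in candidates if c.lower().startswith(prefix))

candidates = ["SELECT", "SET", "SHOW", "sales_fact", "store_dim"]
print(complete("sel", candidates))  # unambiguous: ['SELECT']
print(complete("sa", candidates))   # unambiguous: ['sales_fact']
```

When the returned list has exactly one entry, the shell can substitute it directly; otherwise it prints the list and waits for more input, which matches the behavior described above.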
Command Description
\?     prints help for ACT commands.
\c[onnect] DBNAME USER HOST PORT
\c[onnect] DBNAME USER HOST
\c[onnect] DBNAME USER
\c[onnect] DBNAME
     change login credentials and/or connect to a new database. The parameters must be specified in the order shown, with a space before each, and parameters may not be skipped. In other words, if only one parameter is specified, it is understood to be DBNAME; if a second parameter is also specified, it is understood to be USER; and so on.
\cd [DIR]     change the current working directory.
\copyright     show ACT usage and distribution terms.
\h     help with SQL commands.
\h [SQL command name]     help with syntax of the specified SQL command; use * for all commands.
\g     execute the query in the query buffer; alternatively, terminate the query with a semicolon (;) to execute it.
\q     quit ACT.
\! [command]     execute command in shell or start interactive shell.
\password     change the password for the current user.
Command Description
\info     display current environment settings.
\set     display current ACT parameter settings.
\set param-name [param-value]     set ACT parameter param-name to value param-value. (For example, “\set fetch-count 500” tells ACT to fetch no more than 500 rows at a time when selecting.) If no parameter value is supplied, displays the current setting for the specified parameter.
\timing [on|off]     toggle or set timing of commands.
\pager [on|off]     toggle or set use of a pager to enable paging through large result sets.
Command Description
\e [FILE] edit the query buffer (or file) with external editor. On most systems, this
launches your default text editor. When you save and exit the editor, the
edited statement is passed back to ACT for running.
\g [FILE] send query buffer to server (and results to file or | (pipe character)).
\p show the contents of the query buffer.
\r reset (clear) the query buffer.
\w FILE write query buffer to file.
Command Description
\echo [STRING]     write string to query output stream (see \o below).
\i [FILE]     execute SQL commands from SQL script file. (Run an SQL script.)
\o [FILE]     redirect all query results to file or | (pipe character).
\o     type \o with no argument to stop sending results to a file and resume sending them to the ACT shell.
\s [FILENAME]     display command history in Linux (optionally, print history to a file specified by FILENAME). Note that query history includes only the first 2048 characters of each query.
Command Description
\dF     list installed files, SQL-MapReduce functions, and other functions in the current schema. Use a regular expression as an argument to display a subset of the available functions. For example, to view all installed functions in the database, issue:
\dF *.*
where the first asterisk means "all schemas" and the second means "all functions and files."
Alternatively, you can examine the system views.
\dF+     show details for all installed files, SQL-MapReduce functions, and other functions in the current schema. For each function, the output shows the name, schema, owner, upload time, and MD5 hash fingerprint of the function. Use a regular expression as an argument to display a subset of the available functions. For example, type \dF+ *.* to show details for functions and files in all schemas in the database. Alternatively, you can examine the system views.
Command Description
\dE     show all the installed SQL-MR functions for which the current user has privileges. Use a regular expression as an argument to display a subset of the available functions. Shows function name, schema, owner, function version, and creation time.
\install <FILE> [[<SCHEMA>/]<FILE_ALIAS>]     install the file or SQL-MapReduce function in Aster Database. The file must be available on the file system where ACT is running. Note that the database user running this command must have permission to install files and functions in the specified schema.
     You cannot install two files or functions with the same name. Instead, you must follow these steps:
     • remove the existing file or function
     • install the new file or function
     • grant the appropriate privileges on the file or function.
     There is a limit of 238MB on the size of the file to be installed. If you try to install a larger file, you will see an error like:
     ERROR: row text exceeds limit of 238MB ...
     Note that when installing larger files, the queen may run out of memory. The queen needs available memory of approximately eight times the size of the file to be installed, in order to encode, buffer, and copy the file.
\download [[<SCHEMA>/]<FILE_ALIAS>] <FILE>     download the specified installed file or function (identified by its FILE or FILE_ALIAS) to the machine where ACT is running. Note that the database user running this command must have permission to download files and functions from the specified schema.
\remove [[<SCHEMA>/]<FILE_ALIAS>]     remove from the cluster the file or SQL-MapReduce function specified by its FILE_ALIAS. Note that the database user running this command must have permission to remove files and functions from the specified schema.
Command Description
\d     list all tables, indexes, and views in the current schema.
\d [PATTERN]     describe table or index.
\dt     list all tables in the current schema.
\dt [PATTERN]     print schema, name, type, and owner of a table or tables. To see tables in a custom schema, type \dt schemaname.*
\di [PATTERN]     describe index.
\dg     list groups.
\dg [PATTERN]     describe group.
\du     list users.
\du [PATTERN]     describe user.
\dn     list schemas.
\dn [PATTERN]     describe schema.
\l     list all databases.
\extl host=hostname_or_IP [option_name=option_value, …]     list all databases on an external system.
Tip! In Aster Database 5.0, for the \extd command in ACT, if the optional user argument is not specified, the
command will fail on any but the default database. The error message is not specific about what caused the command
to fail. The workaround is to always specify the argument user when issuing \extd.
Tip! ACT uses the schema search path (search_path) for the database user when displaying lists of tables, views, and indexes. The schema search path defaults to the schema search path for the current user in the database.
To set the search_path from ACT, issue the following command:
beehive=> SET session search_path TO <schema>;
Note that multiple schemas are not supported. If multiple schemas are listed in the search_path, the first
schema listed will be used.
To display the current search_path type:
beehive=> SHOW search_path;
Note that you may also set the search_path on the server.
Alternatively, you can specify the schema to use when issuing commands by following the command with a schema
qualified reference. This example shows how to display information on all tables in the schema “myschema”:
\dt myschema.*
Command Description
\a toggle between unaligned and aligned output mode.
\f [STRING] show or set field separator for unaligned query output.
\t [on|off] show only rows (off by default).
Command Description
\x [on|off]     set or toggle expanded output mode ON and OFF. With expanded output mode turned on, each record is split into rows, with one row for each value, and each new record is introduced with a text label in the form ---[ RECORD 37 ]---. This can help make wide tables readable on a small screen, and is very useful if you’re trying to read EXPLAIN output. Note that in expanded mode, the number of rows is not returned at the end of the table. Because of this, when querying a table with no rows, you will simply see the ACT prompt again.
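The expanded rendering described above can be sketched as follows. This is illustrative only; ACT's exact labels and separators may differ:

```python
# Illustrative sketch of expanded output mode: one labeled block per record,
# one column/value pair per line. Not ACT's actual formatter.

def expanded(columns, rows):
    """Render each row as a labeled block of column | value lines."""
    lines = []
    for i, row in enumerate(rows, start=1):
        lines.append("---[ RECORD %d ]---" % i)
        for col, val in zip(columns, row):
            lines.append("%s | %s" % (col, val))
    return "\n".join(lines)

print(expanded(["name", "type"], [("customer_dim", "table"), ("date_dim", "table")]))
```

Rendering an empty result set produces no output at all, which mirrors the behavior noted above for expanded mode on a table with no rows.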
Parameter Description
auto-commit [1|0]     When set to 1 (the default, on), each SQL command is automatically committed upon successful completion. When set to 0 (off), you may manually commit your changes after each transaction or series of transactions by issuing the COMMIT command, or undo changes by issuing ROLLBACK. If you do not issue the COMMIT command, all transactions that occurred since the last COMMIT will roll back automatically.
fetch-count [int]     Limits the number of rows returned at a time. ACT uses a fetch count by default (that is, even when fetch-count is not set explicitly). The fetch-count (number of rows per fetch) should always be set to a value greater than 0. The default value is 1024 (1024 rows).
fetch-limit [int]     Sets the maximum number of rows returned per query. A value less than 0 means fetch all rows; a value greater than 0 means fetch at most fetch-limit rows in total. The default value is -1 (all rows).
use-server-cursors [1|0]     When set to 1, sets the server to use cursors (useful when the result set is very large). When set to 0 (the default, off), sets the server to not use cursors.
on-error-stop [1|0]     By default, this feature is disabled (set to 0, off). When set to 1 (on), ACT will stop and exit if it meets an error during SQL query processing.
The following are ACT exit messages:
• EXIT_SUCCESS = 0 means ACT finished processing normally.
• EXIT_FAILURE = 1 means an error occurred, such as "file not found" with the “-f” option.
• EXIT_USER = 3 means an error occurred in a SQL script and the “on-error-stop” option was enabled.
host: saturn.asterdata.com
dbname: sampledb
username: sampleuser
# SSL settings
enable-ssl: true
ssl-self-signed-peer: true
# SSO settings
enable-sso: true
Queen-Side Settings
Make the following settings on the queen (note that these are the default settings for a new
Aster Database installation):
• disallowPeerWithoutCertificates=false
• sslCertificatePath=/home/beehive/certs/server.cert
• sslPrivateKeyPath=/home/beehive/certs/server.key
• sslFileType=1 (A value of “1” means SSL_FILETYPE_PEM.)
• Ensure that secureWrites is set to false
• Ensure that secureMuleServer is set to true
• There is no need to set the trustedCAPath and trustedCAFileName parameters.
Client-Side Settings
Use the following command line arguments when executing ACT:
• --enable-ssl
• --ssl-self-signed-peer
host: 10.10.10.10
dbname: sampledb
username: sampleuser
# SSL settings
enable-ssl: true
ssl-self-signed-peer: true
Note that the client need not have a copy of the server's certificate. Do not use the other SSL settings, such as --ssl-trusted-ca-file or --ssl-trusted-ca-dir.
Queen-Side Settings
Make the following settings on the queen:
• disallowPeerWithoutCertificates=false
• sslCertificatePath=/home/beehive/certs/server.cert
• sslPrivateKeyPath=/home/beehive/certs/server.key
• sslFileType=1 (A value of "1" means SSL_FILETYPE_PEM. A value of “2” means
SSL_FILETYPE_ASN1.)
• Ensure that secureWrites is set to false.
• Ensure that secureMuleServer is set to true.
• There is no need to set the trustedCAPath and trustedCAFileName parameters.
Client-Side Settings
Do the following:
Copy the queen's public key (self-signed certificate), /home/beehive/certs/server.pem, to the client. For this example, we will assume the client will store the public key as /home/jbloggs/certs/server.pem.
host: 10.10.10.10
dbname: sampledb
username: sampleuser
# SSL settings
enable-ssl: true
ssl-trusted-ca-file: server.pem
ssl-trusted-ca-dir: /home/jbloggs/certs/
Because --ssl-self-signed-peer is not specified, the connection can be made only when the server presents a CA-signed certificate, and that same CA-signed certificate must already exist on the client. When --ssl-self-signed-peer is used, however, the server can supply the certificate at connection time and nothing is required on the client.
Queen-Side Settings
Do the following:
1 Get the root certificate of the CA (certificate authority) that signed your client's certificate.
Save the root certificate on the queen. For this example, we will save it as /home/beehive/certs/client.pem on the queen.
2 Make the following settings on the queen:
• disallowPeerWithoutCertificates=true
• trustedCAFileName=/home/beehive/certs/client.pem
• sslCertificatePath=/home/beehive/certs/server.cert
• sslPrivateKeyPath=/home/beehive/certs/server.key
• sslFileType=1 (A value of "1" means SSL_FILETYPE_PEM. A value of “2” means
SSL_FILETYPE_ASN1.)
• There is no need to set the trustedCAPath parameter if you use a single root
certificate for all clients.
• Ensure that secureWrites is set to false.
• Ensure that secureMuleServer is set to true.
Variation: If your clients' certificates were not all signed by the same CA, then you must set Aster Database to recognize all the CA root certificates used to sign your clients' certificates, like so:
1 Save the root certificates of all the signing CAs on the queen.
2 Set trustedCAPath to point to the directory that contains the root certificates. For
example:
• trustedCAPath=/home/beehive/certs
3 Unset the queen configuration parameter trustedCAFileName by setting it to no value at all. For example:
• trustedCAFileName=
Client-Side Settings
Use the following command line arguments when executing ACT. For this example, we will
assume the client will store the certificate as /home/jbloggs/certs/client.cert and the
key as /home/jbloggs/certs/client.key:
• --enable-ssl
• --ssl-certificate-path /home/jbloggs/certs/client.cert
• --ssl-private-key-path /home/jbloggs/certs/client.key
• --ssl-cert-filetype 1 (A value of "1" means SSL_FILETYPE_PEM. A value of
“2” means SSL_FILETYPE_ASN1.)
Or use a config file similar to the following:
# ACT configuration file example
# Contains settings for connecting securely to a specific host and
database
host: 10.10.10.10
dbname: sampledb
username: sampleuser
# SSL settings
enable-ssl: true
ssl-certificate-path: /home/jbloggs/certs/client.cert
ssl-private-key-path: /home/jbloggs/certs/client.key
ssl-cert-filetype: 1
Queen-Side Settings
Make the following setting on the queen:
• secureWrites=true
Even though you may have set secureWrites=false when setting up SSL, set it to true now in order to enable encryption of communication from the queen. This permits setting up the SSL connection before enabling two-way encryption, if desired.
Client-Side Settings
Use the following command line arguments when executing ACT:
• --ssl-encrypt-reads
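Combined with the SSL flags from the earlier scenarios, a full client invocation for a self-signed setup might look like the following sketch (the host and user are illustrative; the database can be supplied through the ACT config file shown earlier):

```
act -h 10.10.10.10 -U sampleuser --enable-ssl --ssl-self-signed-peer --ssl-encrypt-reads
```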
Troubleshooting ACT
When the SSL settings on the queen do not match those used by ACT, no error message may be returned; the ACT client just hangs. If you experience this, check to ensure that the SSL settings on the queen and in ACT match.
Aster Database provides Open Database Connectivity (ODBC) and Java Database
Connectivity (JDBC) drivers for connecting business intelligence (BI) tools. This section
explains how to install and use the Aster Database drivers.
General tips:
• General Tips for Connecting Clients to Aster Database (page 44)
ODBC Driver
Teradata Aster provides a standard ODBC driver for Aster Database. The following list shows
all the supported operating systems with the corresponding ODBC driver package name:
• Windows 32-bit: nClusterODBCInstaller_i386.msi
• Windows 64-bit: nClusterODBCInstaller_x64.msi
• Linux 32-bit: clients-odbc-linux32.tar.gz
• Linux 64-bit: clients-odbc-linux64.tar.gz
• Solaris Sparc 32: clients-odbc-solaris-sparc.tar.gz
• Solaris i386: clients-odbc-solaris-x86.tar.gz
• Mac OS x86: clients-odbc-mac.tar.gz
• AIX Power PC 32-bit: clients-odbc-aix.tar.gz
The Aster Database ODBC driver may change in any Aster Database release. For this reason, with each new release of Aster Database, you should reinstall the driver and recompile your applications that use the driver.
Linux/AIX
Windows
Solaris
Mac OS X
Installation Procedure
1 Verify that Microsoft Visual C++ 2008 Redistributable Package (x86) is installed on the
system where you want to install the driver. Note that Microsoft also offers newer versions
of the package, such as Microsoft Visual C++ 2010 Redistributable Package (x86). Teradata
Aster has not tested compatibility with these later versions! If the supported version is not
installed:
a Download it from Microsoft, choosing the version that fits your architecture:
• 32-bit
• 64-bit
2 Obtain the ODBC Driver Package.
3 If you are upgrading, use the Windows Add/Remove Programs tool to uninstall the old Aster
ODBC version.
4 Double-click on the .msi file to install it.
A setup wizard walks you through the installation of Aster Database ODBC as one of the
available data sources on your computer.
5 From the Windows Control Panel, double-click Administrative Tools to open the
Administrative Tools window.
6 Double-click the Data Sources (ODBC) option to open the ODBC Data Source
Administrator dialog box.
7 Click the System DSN tab.
8 Click Add to open the Create New Data Source dialog box.
9 Select the Aster ODBC Driver data source from the list.
10 Click Finish.
The Aster Database Login window appears.
11 In the Aster Database Login window, enter the following information:
• Data Source: Use this field to give this database connection an easy-to-recognize name.
• Server: The hostname or IP address of your Aster Database queen.
• Port: The port on which your Aster Database queen listens for client connections. The
default is 2406.
• Database: The name of the database in Aster Database you want to connect to. Default
system database is beehive.
• Username: Database user name.
• Password: Database user’s password.
• Fetch Count: See “Throttle Query Results in ACT and Aster Database” on page 26.
Your ODBC setup is complete. Now, in your applications that will query Aster Database, you
may connect to Aster Database as an ODBC data source.
Prerequisites
1 The Aster Database ODBC driver requires the following libgcc version, depending on your
operating system:
• Linux and Solaris: libgcc 3.4.6
• Mac OS 10.5: libgcc 4.2
• Mac OS 10.6: libgcc 4.5
2 On Solaris and MacOS systems, the Aster Database ODBC driver requires that you install a
driver manager:
• On Solaris, use the unixODBC driver manager, version 2.2.12. Instructions are
available here: “Install and Configure the unixODBC Driver Manager on Solaris” on
page 52.
• On MacOS, use the iODBC driver manager, version 3.52.3. This driver and its
documentation are available from http://www.iodbc.org.
Installation Procedure
Install the Aster Database ODBC driver as shown below.
Warning! For MacOS, the Aster Database ODBC Driver supports only MacOS version 10.6 or earlier.
1 Extract the ODBC package into the directory where you want to install the driver. Once extracted, you will see a directory named stage that contains the driver.
2 Change to the Aster Database driver directory:
# cd stage/clients-odbc-<your_client_os>
3 Edit your library path environment variable to add the Aster ODBC library directory to it. The Aster ODBC library directory has the path <install location>/stage/clients-odbc-<your_client_os>/Libs.
The library path variable is:
• LD_LIBRARY_PATH on Linux and Solaris
• DYLD_LIBRARY_PATH on MacOS
Edit the appropriate environment settings file to do this (for example, edit the ~/.bashrc
file if you want to set it for the current user on a typical Linux environment). To set it for
the current session only, type the command shown below, substituting
DYLD_LIBRARY_PATH for LD_LIBRARY_PATH on MacOS:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/stage/clients-odbc-<your_client_os>/Libs
4 Add or edit the ODBCSYSINI environment variable, setting it to the directory where your
ODBC connection settings files (odbc.ini and odbcinst.ini) will reside. To follow this
example, let’s assume we are working as user “mjones” and will save the configuration files
to our home directory /home/mjones.
export ODBCSYSINI=/home/mjones
5 Check that the Aster Database ODBC driver library can find all its dependencies.
Assuming we have installed in /usr/local/lib, we would type (on Linux or Solaris):
# cd /usr/local/lib
# ldd stage/clients-odbc-linux64/ODBCDriver/libAsterDriver_unixODBC.so
On MacOS, we would type:
# cd /usr/local/lib
# otool -L stage/clients-odbc-linux64/ODBCDriver/libAsterDriver_unixODBC.dylib
If a “not found” message does not appear, then all the required libraries have been linked.
6 Choose the next step, depending on your operating system:
• Configure Driver Manager on Linux and AIX
• Install and Configure the unixODBC Driver Manager on Solaris
• For MacOS, go to http://www.iodbc.org to download and configure iODBC driver
manager.
Installation Procedure
Install the Aster Database ODBC driver on AIX as shown below.
1 Obtain the ODBC Driver Package.
2 Extract the bundle for the ODBC driver:
a Unzip the file:
$ gunzip clients-odbc-aix.tar.gz
b Untar the file:
$ tar -xvf clients-odbc-aix.tar
Next Step: Configure Driver Manager on Linux and AIX.
$ cd
4 Make backups of the files you moved:
$ cp -p aster.ini aster.ini.backup
$ cp -p odbc.ini odbc.ini.backup
$ cp -p odbcinst.ini odbcinst.ini.backup
5 Make the following edits to aster.ini:
a Set DriverManagerEncoding to UTF-8.
b Set ODBCInstLib to <InstallDir>/DataDirect/lib/odbcinst.so, replacing
<InstallDir> with the folder where the driver is installed.
For example:
[driver]
DriverManagerEncoding=UTF-8
ODBCInstLib = <InstallDir>/DataDirect/lib/odbcinst.so
ErrorMessagesPath=<InstallDir>/ErrorMessages
DSILogging=0
6 Modify odbc.ini as follows:
a Change the DSN configuration parameters SERVER, UID, PWD, DATABASE and
PORT.
[ODBC Data Sources]
... ...
asterdsn=AsterDriver
[ODBC]
... ...
[asterdsn]
Driver=<InstallDir>/DataDirect/lib/libAsterDriver.so
SERVER=192.206.82.100
PORT=2406
DATABASE=beehive
UID=beehive
PWD=beehive
b Add this item to the [ODBC] section of odbc.ini:
InstallDir=<InstallDir>/DataDirect
7 Add the <InstallDir>/Libs directory to:
• LD_LIBRARY_PATH for Linux, or
• LIBPATH for AIX PowerPC.
8 Export the following values, where <directory_path> is the path to the directory where the
files odbc.ini and odbcinst.ini reside:
export ODBCHOME=<directory_path>
export ODBCINI=$ODBCHOME/odbc.ini
export ODBCINST=$ODBCHOME/odbcinst.ini
9 Edit the odbcinst.ini file, as shown in this example:
[ODBC Drivers]
... ...
AsterDriver=Installed
[ODBC Translators]
OEM to ANSI=Installed
[ODBC]
Next Step: Proceed to the next section, Install and Configure the unixODBC Driver
Manager on Solaris.
1 Get the templates for the ODBC connection settings files. Copy these files from the Aster
Database driver’s Setup directory to the user’s home directory. The files you need are
aster.ini, odbc.ini and odbcinst.ini:
# cd /usr/local/lib/stage/clients-odbc-linux64/Setup
# cp odbc.ini ~
# cp odbcinst.ini ~
# cp aster.ini ~/.aster.ini
Note that we have also renamed the aster.ini file, adding a dot at the beginning of the
file name. You must do this.
2 Edit the .aster.ini file as follows:
a Set DriverManagerEncoding to UTF-32.
b Set ODBCInstLib to <unixODBCDir>/lib/libodbcinst.so, replacing
<unixODBCDir> with the directory where the driver is installed:
Tip! At this point, you can run “odbcinst -j” to find out where the ODBC driver expects to find its configuration files.
3 In a text editor, edit the odbc.ini file, making the following changes:
a Set SERVER to the hostname or IP address of your Aster Database queen.
b Set PORT to 2406, the standard port on which your Aster Database queen listens for
client connections.
c Set DATABASE to the name of the database in Aster Database you want to connect to.
d Optionally, you may set UID and PWD to your Aster Database SQL username and
password, respectively.
e Finally, Teradata Aster recommends that you add the setting NumericAndDecimalAsDouble=1.
f If you retrieve bytea-stored data through the ODBC driver, you can specify whether values in a column of datatype bytea are retrieved in a character representation or in the default binary representation. To have the ODBC driver retrieve values in character representation, add the setting ByteaAsVarchar=1 to your odbc.ini; if you leave it unset, the driver preserves the binary output representation of bytea data.
g Optionally, you can set a number of other database connection behavior settings. These include enable_quoted_identifiers (see "Quoted-Identifier Handling" on page 71) and enable_backslash_escapes (see "Escape Character Handling" on page 71).
For this example, we set the contents of odbc.ini to read:
[ODBC Data Sources]
Aster Data ODBC for nCluster DSN=AsterDriver
Tip! You can have multiple data sources. The name “Aster Data ODBC for nCluster DSN” in the odbc.ini file is just
a default name that Teradata Aster has given to the sample data source. You can rename this source and add more, as
shown in this example:
[my_1st_source]
Driver=AsterDriver
SERVER=10.50.52.100
...
[my_2nd_source]
Driver=AsterDriver
SERVER=10.42.43.100
...
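Pulling the settings from steps a through f together, a complete DSN section in odbc.ini might read as follows. This is a sketch only: the server address and credentials are illustrative, and the optional settings are shown enabled.

```
[Aster Data ODBC for nCluster DSN]
Driver=AsterDriver
SERVER=10.50.52.100
PORT=2406
DATABASE=beehive
UID=db_user
PWD=db_password
NumericAndDecimalAsDouble=1
ByteaAsVarchar=1
```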
4 In a text editor, edit the odbcinst.ini file, setting the Driver parameter to the Aster
Database driver directory path. For this example, we set the contents of odbcinst.ini to
read:
[AsterDriver]
Driver=/usr/local/lib/stage/clients-odbc-linux64/ODBCDriver/libAsterDriver_unixODBC.so
IconvEncoding=UCS-4LE
On MacOS, it will look like:
[AsterDriver]
Driver=/usr/local/lib/stage/clients-odbc-linux64/ODBCDriver/libAsterDriver_unixODBC.dylib
IconvEncoding=UCS-4LE
5 The installation and configuration are now complete.
Troubleshooting
If after installation you cannot connect:
1 Find the library libodbcinst.so and note its path.
2 Set the LD_LIBRARY_PATH environment variable so that it includes the directory that
contains libodbcinst.so. For example, if the library is in /usr/lib64, then you will
type:
export LD_LIBRARY_PATH=/usr/lib64:$LD_LIBRARY_PATH
$ export PATH=$PATH:/home/beehive/toolchain/x86_64-unknown-linux-gnu/unixODBC-2.2.12/bin
$ which odbc_config
/home/beehive/toolchain/x86_64-unknown-linux-gnu/unixODBC-2.2.12/bin/odbc_config
$ odbc_config --cflags
-DHAVE_UNISTD_H -DHAVE_PWD_H -DHAVE_SYS_TYPES_H -DHAVE_LONG_LONG -DSIZEOF_LONG=8
$ perl -eshell -MCPAN
cpan[1]> force install DBD::ODBC
8 Run odbcinst -j to see where the .ini files are being picked up:
$ odbcinst -j
$dbh->do("BEGIN");
$dbh->do("set random_page_cost to '4'");
$dbh->do("set enable_seqscan to 'off'");
$dbh->disconnect;
PHP
To set up PHP:
1 Make sure Apache is installed and make note of the installation directory. For this example, we will assume Apache is installed at /usr/local/apache.
2 Ensure that unixODBC has been installed as described above.
3 Download the source for PHP 5.4.10 and extract it to the desired directory. The following
setup instructions should be used for PHP:
$ CFLAGS="-DSIZEOF_LONG=8" ./configure --with-apxs2=/usr/local/apache/bin/apxs --with-zlib --with-unixODBC --with-pdo-odbc=unixODBC
Tip! You should not set up your own PHP or use /etc/init.d/apachectl on the queen for your own web
pages.
[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI\<DSN-NAME>]
"ByteaAsVarchar"="1"
• For a 32-bit ODBC driver running on 64-bit Windows
Set the flag by adding it to the Windows registry entry for the DSN. Using a registry editor,
add this line to the registry, taking care to first replace <DSN-NAME> with your Data
Source name:
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC\ODBC.INI\<DSN-name>]
"ByteaAsVarchar"="1"
JDBC Driver
JDBC is an API for the Java programming language that provides methods for querying and
updating data in a relational database. The Aster Database JDBC driver enables your Java
applications and reporting tools to retrieve data directly from Aster Databases.
The Aster Database JDBC driver is a Type 4 JDBC driver that implements the JDBC 3
specification.
• Aster JDBC Driver (page 59)
• Differences from the Legacy JDBC Driver (page 60)
• Before You Start (page 60)
• Install the JDBC Driver (page 61)
• Use the JDBC Driver in a Java Application (page 61)
• Parameters for Connecting through JDBC (page 62)
• Configuring the JDBC Log Settings (page 63)
• Behavior and Performance Settings for JDBC (page 63)
• Using Client-Side Cursors in JDBC (page 67)
• Test JDBC Connect Program (page 69)
Prerequisites
The JDBC driver supports these versions of the Java JDK:
• Oracle JDK 1.5
• IBM JDK 1.6
• On your queen node, you can find the installers in the directory /home/beehive/clients_all/<your_client_OS>.
2 Unzip the ZIP package.
The resulting folder contains multiple JAR files.
3 Copy the JAR files to a location in the classpath of the application that uses the driver.
Required Parameters
To establish a connection to an Aster Database using the Aster Database JDBC driver, you
must provide the driver with the URL to use to connect to the database. The URL has this
format:
jdbc:ncluster://<Host:Port>/<Database>?enable_backslash_escapes=<on_or_off>&enable_quoted_identifiers=<on_or_off>
For example:
jdbc:ncluster://192.65.197.90:2406/beehive?enable_backslash_escapes=on&enable_quoted_identifiers=on
The URL needs three parameters to connect to an Aster Database:
Table 3 - 2: Parameters in URL to connect to an Aster Database
In addition to the URL, you must also provide the username and password needed to access the Aster Database, which you can get from your Aster Database administrator.
Optional Parameters
You can set the Autocommit and fetch_count settings for the connection in the URL by adding
the autocommit and fetch_count parameters. See “Frequently Used JDBC Settings” on
page 63. If your application will query large tables, you should set autocommit to false, and
you should declare a fetch_count for the connection. By doing this, you enable the
connection to use distributed cursors for improved performance.
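For example, a connection URL that disables autocommit and declares a fetch size might look like the following sketch (the host, database, and fetch size are illustrative):

```
jdbc:ncluster://10.80.50.100:2406/beehive?autocommit=false&fetch_count=1000
```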
You can also use the NumericAndDecimalAsDouble parameter to map NUMERIC and DECIMAL type columns to SQL_DOUBLE. When you set this parameter, its value is stored in a connection context. This excerpt shows how the driver uses the setting when decoding the row header:
if ((sqlType == Types.NUMERIC ||
sqlType == Types.DECIMAL) &&
inSettings.connectionSettings_.numericAsDouble_) {
sqlType = Types.DOUBLE;
}
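As a self-contained sketch of that mapping, the demo class and method below are hypothetical (they are not part of the driver); only the java.sql.Types constants come from the JDBC API.

```java
import java.sql.Types;

public class NumericAsDoubleDemo {
    // Mirrors the driver logic above: when NumericAndDecimalAsDouble is
    // enabled, NUMERIC and DECIMAL columns are reported as DOUBLE.
    static int mapSqlType(int sqlType, boolean numericAsDouble) {
        if ((sqlType == Types.NUMERIC || sqlType == Types.DECIMAL)
                && numericAsDouble) {
            return Types.DOUBLE;
        }
        return sqlType;
    }

    public static void main(String[] args) {
        // Both print "true": NUMERIC maps to DOUBLE only when enabled.
        System.out.println(mapSqlType(Types.NUMERIC, true) == Types.DOUBLE);
        System.out.println(mapSqlType(Types.NUMERIC, false) == Types.NUMERIC);
    }
}
```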
it). You can set this in the connection URL with the autocommit parameter (for example,
jdbc:ncluster://10.80.50.100:2406/beehive?autocommit=false) or in the Java
code for your connection with Connection.setAutoCommit(). The default is true.
try
{
    conn = DriverManager.getConnection(url, username, password);
    // Remember the current autocommit state
    boolean autoCommit = conn.getAutoCommit();
    conn.setAutoCommit(false);
    Statement stmt = conn.createStatement();
    stmt.execute("set random_page_cost to '4'");
    // ... statements that rely on the SET value run here ...
    conn.commit();
    stmt.close();
    // Restore the saved autocommit state
    conn.setAutoCommit(autoCommit);
}
In the example above, the scope of the SET variables is limited to the commands between the
autoCommit(false) and commit() lines.
The Copy, Install File, Uninstall File, and Download File Commands
Version 5.0.3 of the JDBC driver adds support for these commands:
• Copy (page 65)
• Install File (page 65)
• Uninstall File (page 66)
• Download File (page 67)
For more information about these commands, see the section describing SQL commands in
the Aster Database User Guide.
Copy
This command moves data between Aster Database tables and a remote client (from and to a
file) via the connection between the client and the server.
This is an example:
public void copyCommandExamples() {
String stmt1 = "COPY simba TO 'd:\\simba.txt' DELIMITER as ','";
String stmt2 = "COPY simba TO 'd:\\simba.csv' with csv QUOTE AS '@';";
try {
Statement s = conn_.createStatement();
s.execute(stmt1);
s.execute(stmt2);
s.close();
} catch (Exception e) {
e.printStackTrace();
fail(e.getMessage());
}
}
Install File
This command installs the data file or SQL-MapReduce function in the specified Aster
Database schema.
Uninstall File
This command removes the file or SQL-MapReduce function from Aster Database.
UNINSTALL FILE 'filename' from schema schemaname
This is an example:
public void testUninstall() {
Download File
This command downloads the specified installed file or function.
DOWNLOAD FILE [[schema/]alias] filename
This is an example:
public void download_file_examples() {
Tip! When working with ResultSets of type ResultSet.TYPE_FORWARD_ONLY, you cannot scroll backwards, nor can you jump to any location in the ResultSet other than the next row.
4 In the Statement object, you must pass a single query, not multiple queries strung together
with semicolons.
5 You must set the statement's fetch_count using the Statement.setFetchSize(int rows) method. This instructs the driver to fetch the specified number of rows at a time from the database. If the fetch size is not set, the driver fetches the full set of rows that match the query.
6 To help ensure a quick response when a page of rows is exhausted, the JDBC driver, by
default, pre-fetches and caches ten pages of results from the database. A page is one
fetch_count worth of rows. You can set the number of pages to be pre-fetched, or you can
disable pre-fetching if desired.
Tip! In the example below, we set the Autocommit setting and the FetchSize setting using the setAutoCommit() and setFetchSize() methods, but you also have the option of setting these in the JDBC connection parameters when you make the connection. See "Frequently Used JDBC Settings" on page 63.
Disabling Cursors
Cursors are enabled by default. To turn them off, set the FetchSize to zero. Assuming a
statement st and a ResultSet rs, you would do this as shown here:
st.setFetchSize(0);
rs = st.executeQuery("SELECT * FROM mytable");
while (rs.next()) {
System.out.print("many rows were returned.");
}
System.out.println("getIdentifierQuoteString: " + meta.getIdentifierQuoteString());
System.out.println("\ngetSQLKeywords: " + meta.getSQLKeywords());
System.out.println("\ngetNumericFunctions: " + meta.getNumericFunctions());
System.out.println("\ngetStringFunctions: " + meta.getStringFunctions());
System.out.println("getSystemFunctions: " + meta.getSystemFunctions());
System.out.println("getTimeDateFunctions: " + meta.getTimeDateFunctions());
System.out.println("getSearchStringEscape: " + meta.getSearchStringEscape());
System.out.println("getExtraNameCharacters: " + meta.getExtraNameCharacters());
System.out.println("getCatalogTerm: " + meta.getCatalogTerm());
System.out.println("getCatalogSeparator: " + meta.getCatalogSeparator());
System.out.println("getURL: " + meta.getURL());
System.out.println("getUserName: " + meta.getUserName());
System.out.println("getMaxCursorNameLength: " + meta.getMaxCursorNameLength());
System.out.println("getMaxSchemaNameLength: " + meta.getMaxSchemaNameLength());
System.out.println("getMaxProcedureNameLength: " + meta.getMaxProcedureNameLength());
System.out.println("getMaxCatalogNameLength: " + meta.getMaxCatalogNameLength());
System.out.println("getMaxColumnsInIndex: " + meta.getMaxColumnsInIndex());
System.out.println("supportsSubqueriesInComparisons: " + meta.supportsSubqueriesInComparisons());
System.out.println("getMaxConnections: " + meta.getMaxConnections());
System.out.println("getMaxColumnsInTable: " + meta.getMaxColumnsInTable());
System.out.println("isReadOnly: " + meta.isReadOnly());
System.out.println("\ngetCatalogs:");
ResultSet res = meta.getCatalogs();
while (res.next()) {
System.out.println(res.getString(1));
}
System.out.println("\ngetTables:");
res.close();
res = meta.getTables(null,null,"%",null);
while (res.next()) {
System.out.println(res.getString(1) +res.getString(2) +res.getString(3)
+res.getString(4));
}
System.out.println("");
res.close();
res = meta.getTableTypes();
System.out.println("\ngetTableTypes:");
while (res.next()) {
System.out.println(res.getString(1));
}
res.close();
Statement stmt=con.createStatement();
ResultSet rs = stmt.executeQuery("select count(*) from page_views");
while (rs.next()) {
System.out.println(rs.getInt(1));
}
rs.close();
stmt.close();
con.close();
} catch(SQLException ex) {
System.err.println("SQLException: " + ex.getMessage());
}
}else{
System.out.println("Could not Get Connection");
}
}
public static Connection getJDBCConnection(){
try {
Class.forName("com.asterdata.ncluster.Driver");
} catch(java.lang.ClassNotFoundException e) {
System.err.print("ClassNotFoundException: ");
System.err.println(e.getMessage());
}
try {
con = DriverManager.getConnection(url,userid, password);
} catch(SQLException ex) {
System.err.println("SQLException: " + ex.getMessage());
}
return con;
}
}
Quoted-Identifier Handling
enable_quoted_identifiers: This setting affects the way Aster Database processes strings
enclosed in double-quote characters ("..."). With enable_quoted_identifiers='on' (the
default), Aster Database follows the standard behavior of interpreting each double-quoted
string as an identifier (a column, table, schema, or function name). With
enable_quoted_identifiers='off', each double-quoted string is interpreted as a literal
string constant. Any printable character may be represented in a double-quoted string.
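For instance, assuming a hypothetical table t with a column named name, the same query reads differently under each setting:

```
-- enable_quoted_identifiers='on' (default): "name" is the column identifier
SELECT "name" FROM t;
-- enable_quoted_identifiers='off': "name" is the literal string constant name
SELECT "name" FROM t;
```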
COPY FROM/TO
The semantics and overall flow of execution for COPY are the same as for the Aster Database Loader Tool. See "Aster Database Loader Tool" on page 125.
Syntax
COPY table [(column list)] FROM <quoted file name> ...COPY attributes
or
COPY table [(column list)] TO <quoted file name> ...COPY attributes
The COPY command accepts a quoted file name and streams the data into or out of Aster
Database using the Aster Database Loader Tool protocol for maximum throughput.
INSTALL
The INSTALL command is similar to the ACT command “\install <FILE> [[<SCHEMA>/
]<FILE_ALIAS>]” on page 33, and supports SQL-MR security semantics.
Syntax
INSTALL FILE <quoted file name> [[<schema>/]<file alias>]
• The schema name must be quoted if it contains spaces or mixed case.
• The file alias must be quoted.
UNINSTALL
The UNINSTALL command supports SQL-MR security semantics.
Syntax
UNINSTALL FILE <quoted file name> [[<schema>/]<file alias>]
• The schema name must be quoted if it contains spaces or mixed case.
• The file alias must be quoted.
DOWNLOAD
The DOWNLOAD command is similar to the ACT command “\download [[<SCHEMA>/
]<FILE_ALIAS>] <FILE>” on page 33, and supports SQL-MR security semantics.
Syntax
DOWNLOAD FILE <quoted file name> [[<schema>/]<file alias>]
• The schema name must be quoted if it contains spaces or mixed case.
• The file alias must be quoted.
Teradata Wallet
Teradata Wallet (TD Wallet) is a software utility that provides the users of an Aster Database
client system with full and unrestricted access to their stored database passwords on that
system, while at the same time protecting those passwords from being exposed in scripts.
Each user on an Aster Database client has a TD Wallet. The TD Wallet securely stores the
passwords that the user adds to the wallet. However, a user cannot access the passwords stored
in the wallets of other users.
Rather than using the passwords in scripts, you can use their corresponding names defined in
your wallet.
The ACT, JDBC, and ODBC drivers support TD Wallet.
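For example, rather than embedding a real password, an ACT invocation can reference a wallet entry by name. This sketch reuses the -w syntax demonstrated in the ACT example later in this section; the host, user, and entry name are illustrative:

```
act -h 192.65.197.130 -U beehive -w $tdwallet(pwd_for_beehive_db)
```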
Wallet Contents
A wallet contains a set of <name, value> pairs. These pairs consist of Unicode character
sequences.
Table 3 - 3: Wallet contents
Item Description
name The name of this password entry. This is the name you use to reference the
password in scripts without exposing it. We recommend using meaningful
names to help you determine what the password is used for. The name is
case-sensitive.
value The password. The password is not exposed in your scripts. Only the name of
this password is exposed.
The example in Table 3 - 4 shows entries in the TD Wallet of client system user jdoe.
Table 3 - 4: The Teradata Database passwords of user jdoe
Name Value
pwd_for_beehive_db s4t#gp6s_#4
pwd_for_customers_db nsdho_34f
pwd_for_clickstream_db oc_m_3nd234
TD Wallet Commands
TD Wallet provides these commands:
Table 3 - 5: TD Wallet commands
Command Description
tdwallet add name Adds an item to the wallet. When you run this command,
tdwallet prompts you for the value component of the name/value
pair.
tdwallet addsk name Adds a string with the specified name (saved-key).
tdwallet del name Deletes the specified item from the wallet.
Download TD Wallet
TD Wallet support has been added in the back-end client connector layer, both the Java and
C++ connectors. All the Aster Database clients based on the back-end client connector can use
it. To use TD Wallet in Aster Database, you must first download it from http://downloads.teradata.com/download/tools. The supported version of TD Wallet is version 14.00. Note that this version of TD Wallet does not currently support MacOS.
ACT
1 Install TD Wallet.
2 Add a name/value pair to the wallet.
$ ./tdwallet add mypassword
Enter desired value for the string named "mypassword":
String named "mypassword" added.
3 Install the latest ACT for Aster Database.
4 Create the symbolic link “tdwalletdir” in the directory where ACT is installed.
For example, these commands add the symbolic link:
cd /home/beehive/work/multibranch/build/bin
ln -s /opt/teradata/client/tdwallet tdwalletdir
JDBC
1 Install TD Wallet.
2 Add a name/value pair to the wallet.
$ ./tdwallet add mypassword
Enter desired value for the string named "mypassword":
String named "mypassword" added.
3 Get the JDBC Driver and tdwalletJNI.so.
4 Copy tdwalletJNI.so into /home/beehive/work/multibranch/builds/build-main/lib/.
$ javac -classpath /home/beehive/work/multibranch/builds/build-main/lib/noarch-aster-jdbc-driver.jar com/test/testjdbc/tdwallet.java
$ java -classpath .:/home/beehive/work/multibranch/builds/build-main/lib/noarch-aster-jdbc-driver.jar -Djava.library.path=/home/beehive/work/multibranch/builds/build-main/lib/ com.test.testjdbc
You can also find the tdwalletJNI.so file in the same directory as noarch-aster-jdbc-driver.jar in these packages:
• clients-platform-version-reversion.tar.gz
• clients_all/platform...
5 Create the symbolic link "tdwalletdir" in the directory where tdwalletJNI.so is located.
For example, these commands add the symbolic link:
cd /home/beehive/work/multibranch/builds/build-main/lib/
ln -s /opt/teradata/client/tdwallet tdwalletdir
ACT
1 Install TD Wallet.
2 Add a name/value pair to the wallet.
E:\>tdwallet add mypassword
Enter desired value for the string named "mypassword":
String named "mypassword" added.
3 Use the TD Wallet password instead of the real password to log in to Aster Database. For
example:
E:\asterclientWIN\win64\bin>act.exe -h 192.65.197.130 -U beehive
-w $tdwallet(mypassword)
Welcome to act 05.10.00.00, the Aster nCluster Terminal.
...
ODBC
1 Install TD Wallet.
2 Add name/value pairs to the wallet.
3 Install the latest ODBC Driver for Aster Database.
4 Use the TD Wallet password ($tdwallet(mypassword)) instead of the real password to log in
to Aster Database.
$tdwallet(mypassword)
JDBC
1 Install TD Wallet.
2 Add name/value pairs to the wallet.
E:\>tdwallet add mypassword
Enter desired value for the string named "mypassword":
String named "mypassword" added.
3 Get the latest JDBC Driver, and make the dependent library libtdwalletJNI.dll available in
the library path.
E:\>javac -classpath e:\asterclient\lib\noarch-aster-jdbc-driver.jar com\test\testjdbc\tdwallet.java
E:\>java -classpath e:\asterclient\lib -Djava.library.path=e:\asterclient\lib com.test.testjdbc
4 Use the TD Wallet password ($tdwallet(mypassword)) instead of the real password to log in
to Aster Database.
Usage
After you install TD Wallet, you can use the $tdwallet directive in place of your password. The syntax of the directive is:
$tdwallet(name)
where name is the name of a password entry in the TD Wallet.
Port Number
The Aster Database queen port number for SSL connections is the same as the regular client
connection port: 2406. Port 2406 is multiplexed to support both secure sockets layer (SSL)
connections and unencrypted connections.
On the client side, the files related to SSL and its configuration are:
• Copy of the queen’s public key in PEM format. This is a copy of the queen’s server.pem.
For example, you might save it as /home/mjones/certs/server.pem
• The client’s SSL-related settings (see “Client-Side SSL Settings” on page 79), stored:
• for Linux, in the client’s odbc.ini file; and
• for Windows, in the ODBC parameter fields of the registry
• SSLAllowSelfSignedPeer: Determines whether the client allows peers with self-signed
certificates to communicate (string value of 0 for false or 1 for true; default is 1).
• SSLFileType: The certificate file type. A string value; one of:
• SSL_FILETYPE_PEM (the default)
• SSL_FILETYPE_ASN1
• SSLPrivateKeyPath: Path to the private key to be used. Optional. (A string value.)
• SSLCertificatePath: Path to the SSL certificate to be used. (A string value.)
• Set either SSLTrustedCADir or SSLTrustedCAFilename, depending on whether you
have one or many CA certificates:
• SSLTrustedCADir: Path to the directory containing CA certificates in PEM format.
(A string value.)
• SSLTrustedCAFilename: Filename of CA certificate in PEM format. (A string value.)
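On Linux, these settings live in the client’s odbc.ini. A sketch of how they might look for a single DSN, reusing the example certificate path above (the DSN name is illustrative):

```ini
[AsterTest]
EnableSSL=1
SSLAllowSelfSignedPeer=1
SSLFileType=SSL_FILETYPE_PEM
SSLTrustedCAFilename=/home/mjones/certs/server.pem
```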
Queen-Side Settings
Make the following settings on the queen:
• disallowPeerWithoutCertificates=false
• allowSelfSignedPeer=true
• trustedCAFileName=/home/beehive/certs/server.pem
• sslCertificatePath=/home/beehive/certs/server.cert
• sslPrivateKeyPath=/home/beehive/certs/server.key
• sslFileType=1 (A value of “1” means SSL_FILETYPE_PEM.)
Client-Side Settings
Make the following settings on each ODBC client:
• EnableSSL=1
• SSLEncryptReads=0
• SSLAllowSelfSignedPeer=1
• SSLFileType=SSL_FILETYPE_PEM
• There is no need to set the other SSL settings such as SSLPrivateKeyPath.
Scenario 2: Client has a copy of the queen’s public key
In this scenario, we edit Aster Database’s SSL configuration to allow connections only from
clients that have a copy of the queen’s public key. This scenario uses the default public key
(/home/beehive/certs/server.pem) that is part of the standard queen installation.
Queen-Side Settings
Make the following settings on the queen:
• disallowPeerWithoutCertificates=true
• allowSelfSignedPeer=false
• trustedCAFileName=/home/beehive/certs/server.pem
• sslCertificatePath=/home/beehive/certs/server.cert
• sslPrivateKeyPath=/home/beehive/certs/server.key
• sslFileType=1 (A value of “1” means SSL_FILETYPE_PEM.)
• Ensure that secureWrites is set to false.
• Ensure that secureMuleServer is set to true.
• There is no need to set the trustedCAPath parameter.
Client-Side Settings
Do the following:
1 Copy the queen’s public key (self-signed certificate), /home/beehive/certs/
server.pem, to the client. For this example, we will assume the client will store the public
key as /home/jbloggs/certs/server.pem.
2 Make the following settings on each ODBC client:
• EnableSSL=1
• SSLEncryptReads=0
• SSLAllowSelfSignedPeer=1
• SSLFileType=SSL_FILETYPE_PEM
• SSLTrustedCAFilename=/home/jbloggs/certs/server.pem
Queen-Side Settings
Do the following:
1 Get your public key file and save it on the queen. For this example, we will save it as
/home/beehive/certs/sampleco.pem on the queen.
2 Save the corresponding private key file on the queen. For this example, we will save it as
/home/beehive/certs/sampleco.key on the queen.
3 Make the following settings on the queen:
• disallowPeerWithoutCertificates=true
• allowSelfSignedPeer=false
• trustedCAFileName=/home/beehive/certs/sampleco.pem
• sslCertificatePath=/home/beehive/certs/sampleco.pem
• sslPrivateKeyPath=/home/beehive/certs/sampleco.key
• sslFileType=1 (A value of “1” means SSL_FILETYPE_PEM.)
• There is no need to set the trustedCAPath parameter.
Client-Side Settings
Do the following:
1 Copy the queen’s public key, /home/beehive/certs/sampleco.pem, to the client. For
this example, we will assume the client will store the public key as /home/jbloggs/
certs/sampleco.pem.
2 Make the following settings on each ODBC client:
• EnableSSL=1
• SSLEncryptReads=0
• SSLAllowSelfSignedPeer=1
• SSLFileType=SSL_FILETYPE_PEM
• SSLTrustedCAFilename=/home/jbloggs/certs/sampleco.pem
Queen-Side Settings
Do the following:
1 Get the root certificate of the CA (certificate authority) that signed your client’s certificate.
Save the root certificate on the queen. For this example, we will save it as /home/
beehive/certs/ca-cert.pem on the queen.
2 Make the following settings on the queen:
• disallowPeerWithoutCertificates=true
• allowSelfSignedPeer=false
• trustedCAFileName=/home/beehive/certs/ca-cert.pem
• sslFileType=1 (A value of “1” means SSL_FILETYPE_PEM.)
• There is no need to set the trustedCAPath parameter if you use a single root certificate
for all clients.
Variation: If your clients’ certificates were not all signed by the same CA, then you must set
nCluster to recognize all the CA root certificates used to sign your clients’ certificates, like so:
1 Save the root certificates of all the signing CAs on the queen.
2 Set trustedCAPath to point to the directory that contains the root certificates. For
example:
• trustedCAPath=/home/beehive/certs
3 Un-set the queen configuration parameter, trustedCAFileName, by setting it to no value
at all. For example:
• trustedCAFileName=
Client-Side Settings
Do the following:
1 Save the client’s public key on the client. For this example, we will assume the client will
store its public key as /home/jbloggs/certs/my-client-cert.pem.
2 Make the following ODBC settings on the client:
• EnableSSL=1
• SSLEncryptReads=0
• SSLAllowSelfSignedPeer=0
• SSLFileType=SSL_FILETYPE_PEM
• SSLTrustedCAFilename=/home/jbloggs/certs/my-client-cert.pem
Repeat the above steps for all ODBC client machines in your environment.
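The certificate files used in this scenario can be produced and checked with standard openssl commands. A minimal sketch, assuming openssl is installed; all file and subject names below are illustrative, not from the original:

```shell
#!/bin/sh
# Create a CA root, sign a client certificate with it, and verify the chain.
set -e
# 1. A self-signed CA root (stands in for the queen's ca-cert.pem).
openssl req -new -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout ca.key -out ca-cert.pem -subj "/CN=Example CA"
# 2. A client key and certificate signing request.
openssl req -new -nodes -newkey rsa:2048 \
    -keyout client.key -out client.csr -subj "/CN=jbloggs-client"
# 3. Sign the client request with the CA root.
openssl x509 -req -in client.csr -CA ca-cert.pem -CAkey ca.key \
    -CAcreateserial -out my-client-cert.pem -days 365
# 4. Verify the client certificate against the CA root.
openssl verify -CAfile ca-cert.pem my-client-cert.pem
```

If verification succeeds, openssl reports the client certificate as OK; the same CA root file is what trustedCAFileName on the queen points to in this scenario.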
All other things being equal, switching any network connection from unencrypted to SSL-
encrypted reduces the maximum available rate of data transmission on that connection.
To set this up:
Queen-Side Settings
Make the following settings on the queen:
• secureWrites=true
Client-Side Settings
Make the following settings on each ODBC client:
• SSLEncryptReads=1
Queen-Side Settings
Make the following settings on the queen:
1 Set up Aster Database user authentication as explained in the Aster Database User Guide.
2 Configure the queen SSL configuration flags as described in whichever of the scenarios
above you plan to implement. (For example, see “Scenario 1: Allowing
connections from clients without certificates” on page 80.)
Client-Side Settings
Do the following:
1 Set the flags in the client’s ODBC configuration file or registry as described in the scenario.
2 Set the EnableSSO flag in the client’s ODBC configuration file:
• EnableSSO=1
3 If EnableSSO is set to 1, you must also:
• Ensure that ServerIP is set to the fully qualified domain name of the Aster Database
queen and not to an IP address.
• For 64-bit Linux machines: The ODBC driver assumes that libvas-gssapi.so is
present at /opt/quest/lib64/. If /opt/quest/lib64/libvas-gssapi.so does
not exist, locate libvas-gssapi.so by referring to the VAS documentation and set
the GSSPath parameter to point to the installed location of libvas-gssapi.so. For
example, if libvas-gssapi.so is deployed at /usr/lib64, then the GSSPath
parameter needs to be set to /usr/lib64 in the ODBC.ini config file as shown below:
GSSPath=/usr/lib64
• For 32-bit Linux machines: The ODBC driver assumes that libvas-gssapi.so is
present at /opt/quest/lib/. If /opt/quest/lib/libvas-gssapi.so does not
exist, locate libvas-gssapi.so by referring to the VAS documentation and set the
GSSPath parameter to point to the installed location of libvas-gssapi.so. For
example, if libvas-gssapi.so is deployed at /usr/lib, then the GSSPath parameter
needs to be set to /usr/lib in the ODBC.ini config file as shown below:
GSSPath=/usr/lib
Sample ODBC.INI:
This sample assumes your queen machine is called cqueen.asterengqa.com and that you
are following Scenario 1 (outlined earlier in this chapter):
[ODBC Data Sources]
AsterTest=AsterDriverTest
[AsterTest]
Driver=AsterDriverTest
SERVER=cqueen.asterengqa.com
DATABASE=beehive
PORT=2406
UID=testuser13
PWD=testuser133
SQLSupportedConversions=3
NumericAndDecimalAsDouble=1
EnableSSO=0
GSSPath=
EnableSSL=1
SSLEncryptReads=0
SSLAllowSelfSignedPeer=1
SSLFileType=SSL_FILETYPE_PEM
SSLPrivateKeyPath=
SSLCertificatePath=
SSLTrustedCADir=
SSLTrustedCAFilename=
Sample ODBCINST.INI:
This sample assumes you have installed the driver in /Drivers/AsterDriver/ODBCDriver:
[AsterDriverTest]
Driver=/Drivers/AsterDriver/ODBCDriver/libAsterDriver_unixODBC.so
IconvEncoding=UCS-4LE
Sample aster.ini:
This sample assumes you want to log error messages in /Drivers/AsterDriver:
[driver]
DriverManagerEncoding=UTF-32
DSILogging=1
ErrorMessagesPath=/Drivers/AsterDriver/ErrorMessages
[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC]
[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBCINST.INI]
[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBCINST.INI\AsterDriver32]
"Driver"="C:\\AsterDriver-Win32\\ODBCDriver\\AsterDataODBCDSII.dll"
[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBCINST.INI\ODBC Drivers]
"AsterDriver32"="Installed"
[HKEY_LOCAL_MACHINE\SOFTWARE\Aster]
[HKEY_LOCAL_MACHINE\SOFTWARE\Aster\Driver]
"DSILogging"="0"
"ErrorMessagesPath"="C:\\AsterDriver-Win32\\ErrorMessages"
"DriverManagerEncoding"="UTF-16"
The values in the keys above can be modified depending on where the driver is located on the
local machine and what the name of the driver should be. The values above are based on the
assumption that the driver folder is at "C:\AsterDriver-Win32" and the name of the driver is
"AsterDriver32". For an example .reg file that makes these settings, contact Teradata support.
[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC]
[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBCINST.INI]
[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBCINST.INI\AsterDriver64]
"Driver"="C:\\AsterDriver-Win64\\ODBCDriver\\AsterDataODBCDSII.dll"
[HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBCINST.INI\ODBC Drivers]
"AsterDriver64"="Installed"
[HKEY_LOCAL_MACHINE\SOFTWARE\Aster]
[HKEY_LOCAL_MACHINE\SOFTWARE\Aster\Driver]
"DSILogging"="0"
"ErrorMessagesPath"="C:\\AsterDriver-Win64\\ErrorMessages"
"DriverManagerEncoding"="UTF-16"
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC]
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC\ODBCINST.INI]
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC\ODBCINST.INI\AsterDriver32]
"Driver"="C:\\AsterDriver-Win32\\ODBCDriver\\AsterDataODBCDSII.dll"
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC\ODBCINST.INI\ODBC Drivers]
"AsterDriver32"="Installed"
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Aster]
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Aster\Driver]
"DSILogging"="0"
"ErrorMessagesPath"="C:\\AsterDriver-Win32\\ErrorMessages"
"DriverManagerEncoding"="UTF-16"
The values in the keys above can be modified depending on where the driver is located on the
local machine and what the name of the driver should be. The values above are based on the
assumption that the driver folder is at "C:\AsterDriver-Win32" for 32 bit drivers and the name
of the driver is "AsterDriver32". For an example .reg file that makes these settings, contact
Teradata support.
"SSLTrustedCAFilename"="\"\""
[HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC\ODBC.INI\ODBC Data Sources]
"AsterDSN32"="AsterDriver32"
Above, the name of the DSN for the 32-bit driver is AsterDSN32. The server being connected
to is 10.51.12.100.
Creating Certificates
openssl req -new -x509 -nodes -sha1 -days 365 -key host.key > host.cert
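The command above assumes host.key already exists. A fuller sketch that also generates the key and then inspects the result; it uses -sha256 rather than the -sha1 shown above, since newer OpenSSL builds may reject SHA-1, and the subject name is illustrative:

```shell
#!/bin/sh
set -e
# Generate the private key, then the self-signed certificate.
openssl genrsa -out host.key 2048
openssl req -new -x509 -nodes -sha256 -days 365 -key host.key \
    -subj "/CN=queen.example.com" -out host.cert
# Confirm the subject recorded in the new certificate.
openssl x509 -in host.cert -noout -subject
```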
The DBMS can run the PreparedStatement SQL statement without having to compile it
first.
Thread Safety
The Aster legacy JDBC driver (pre-5.0.3) supports thread safety for all JDBC objects. The new
Aster JDBC Driver (5.0.3 and later) does not support JDBC object thread safety in all cases.
If you have developed code based on the legacy JDBC driver which uses the same instance of a
JDBC object across multiple threads simultaneously, you may have to modify your code to
ensure that JDBC object thread safety is implemented by your application.
AquaFold’s ADS lets you perform DDL operations and query data interactively, and it
provides tools that help you write and manage queries efficiently. ADS is a third-party tool
available for purchase directly from AquaFold.
Aster Database is compatible with ADS version 10.0.2 with patch ads-10.0.7_03-patch.zip.
Install ADS
This section explains how to install ADS on your client workstation and connect to an Aster
Database.
1 Download Aqua Data Studio version 10.0.2 or later from http://www.aquafold.com/
downloads.html
2 Install Aqua Data Studio on your client workstation as explained in your version of the
Aqua Data Studio documentation at http://docs.aquafold.com/
aquadatastudio_11_documentation.html
Versions
MicroStrategy 9 or later is required, and Aster Database 5.0 or later is required.
Platforms Supported
Aster Database supports Intelligence Server clients running on Windows XP and Windows
Vista with the Aster Database ODBC driver for Windows.
Limitations
Aster Database is only certified as a warehouse with MicroStrategy. Aster Database cannot be
used as a repository.
Set-up Instructions
To connect MicroStrategy to Aster Database, follow the steps below.
Prerequisites
1 MicroStrategy supports only the ODBC Driver Manager bundled with the Aster Database
ODBC driver and the Windows Driver Manager.
2 Make sure the following patches are applied to your MicroStrategy installation:
Install Drivers
On the client machine where MicroStrategy runs, install the database drivers:
1 Install the Aster Database ODBC driver. See “ODBC Driver” on page 44 for installation
instructions.
2 Install the MicroStrategy VLDB driver, version 9 or later.
Best Practices
By following the guidelines below, you can avoid common errors in Aster Database-
MicroStrategy integration.
Schema changes
• If your schema contains NUMERIC(X,0) type columns, you should replace these with
INT or BIGINT type columns for a higher probability of success with existing
MicroStrategy reports.
• For your small- to medium-sized tables that have no BIGINT or INT columns, you should
create the tables in Aster Database as replicated dimension tables. This takes more space in
the cluster, but works better with MicroStrategy.
Operational items
When pointing an existing MicroStrategy report from another database to Aster Database,
update the warehouse schema the first time it is pointed to an Aster Database instance. This
updates the metadata inside MicroStrategy so that the column definitions are correct. This is
particularly important when a schema is ported from another database to Aster Database.
Teradata Aster provides a suite of tools that enable you to use Aster Database as a data source
in your .NET applications and reports. These tools include:
• nClusterDNProvider—A managed .NET Data Provider
(nClusterDNProviderInstaller_ng_i386.msi, nClusterDNProviderInstaller_ng_x64.msi, a
database driver for ADO.NET) that allows Microsoft SQL Server Integration Services
(SSIS) and .NET client applications to connect to an Aster Database server.
• Tutorials, including the sample program program.cs that show how to use the Aster
Database tools for .NET in a Microsoft reporting and BI environment.
This section explains how to install these tools and how to start writing applications and
reports that use databases in Aster Database.
• Installing the Aster Database ADO.NET Driver (nClusterDNProvider) (page 97)
• Performing SSIS Data Loading with nClusterDNProvider (page 99)
• Sample Program for ADO.NET (page 113)
Note: The nCluster OleDB Driver (nClusterOleDbInstaller_i386.msi) is no longer supported.
Installation Prerequisites
Make sure the .NET 2.0 framework is installed on your workstation before installing
nClusterDNProvider. You can download it from: http://www.microsoft.com/downloads
Installing nClusterDNProvider
Follow the steps below to install nClusterDNProvider:
1 Get the Aster ADO.NET driver installer, whose name depends on your operating system:
• For 32-bit, the installer is nClusterDNProviderInstaller_i386.msi
• For 64-bit, the installer is nClusterDNProviderInstaller_x64.msi
Get the installer by doing one of the following:
• Copy the client package for your operating system from your queen node. Clients are
located in the directory /home/beehive/clients_all on the queen.
• Download it from http://downloads.teradata.com/download/tools.
2 Place the installer on your client machine.
3 Run the installer.
a In the Welcome screen, click Next.
b In the Select Installation Folder screen:
Choose the installation location.
Specify who should be allowed to use the driver.
Click Next.
c In the Confirm window, click Next.
d When the Installation Complete window appears, click Close.
Note: Before proceeding, make sure that nClusterDNProvider is installed correctly. There should be several DLL files
and the RegisterAsterProvider.exe file in the .NET provider install path (for example, C:\Program Files (x86)\Aster
Data Systems\nCluster ADO.NET Provider (x86)). If not, nClusterDNProvider will not work. You can also check the
installation from the Control Panel. If the size of nClusterADO.Net Provider (x86) is less than 1 MB and the version is
1.0.x, you need a new driver.
When installed, these files enable the Aster Database ADO.NET provider to appear as an
option within Microsoft Visual Studio and Microsoft Analysis Services.
To install the files, run the Install.cmd file from the command line with the /codebase
option. The /codebase option should have the full path to the Simba.Net.dll file. The
syntax to run the command is:
Install.cmd /codebase <full path>\Simba.Net.dll
For example, the command will look similar to:
Install.cmd /codebase "C:\Program Files (x86)\Aster Data
Systems\nCluster ADO.NET Provider (x86)\Simba.Net.dll"
If you are installing the ADO.NET provider to work with SQL Server 2005, you must use
the /vs2005 option, as in:
Install.cmd /vs2005 /codebase <full path>\Simba.Net.dll
3 Put the Aster Database cartridge (the file nCluster.xsl) into the corresponding directory for
SSAS. This file should be placed into all the cartridge directories for SSAS, which may
include all of the following directories, though the directories on your Windows machine
may differ slightly:
SQL Server 2008
C:\Program Files\Microsoft Analysis Services\AS OLEDB\10\Cartridges
C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP
\bin\Cartridges
C:\Program Files\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\
IDE\DataWarehouseDesigner\UIRdmsCartridge
C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE
\PrivateAssemblies\DataWarehouseDesigner\UIRdmsCartridge
SQL Server 2005
C:\Program Files\Microsoft SQL Server\MSSQL.3\OLAP\bin\Cartridges
C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7
\IDE\DataWarehouseDesigner\UIRdmsCartridge
C:\Program Files\Microsoft Visual Studio 8\Common7\IDE
\PrivateAssemblies\DataWarehouseDesigner\UIRdmsCartridge
Overview
SSIS is a tool for building extract, transform, and load (ETL) jobs. In .NET environments,
SSIS provides a fast way to extract data from or load data into your Aster Database databases.
To connect to Aster Database, SSIS requires the Aster Database driver, nClusterDNProvider.
This tutorial shows you how to set up SSIS to use the driver and provides an example for
exporting data from Aster Database to a flat file.
Procedure
To set up SSIS to use nClusterDNProvider, follow these steps:
1 Run Microsoft SSIS.
2 Choose File > New > Project to create a new integration project.
3 In the New Project dialog:
a Select Integration Services Project.
b Enter a name for your project.
c Click OK.
4 In the Connection Manager tab at the bottom of the window, add a new Connection.
To do this, right-click and select New ADO.NET Connection.
11 From the Data Flow Sources panel, drag an ADO NET Source object into the Data Flow panel.
13 In the ADO.NET Source Editor dialog box, configure the Connection Manager properties:
14 In the ADO.NET Source Editor dialog box, check the Columns property values:
a Click Columns.
b Check the External Column and Output Column values.
15 In the ADO.NET Source Editor dialog box, configure the Error Output properties:
17 From the Data Flow Destinations panel, drag a Flat File Destination object into the Data Flow
panel.
18 Connect the green arrow of the ADO NET Source object to the Flat File Destination object.
c Click New.
d In the Flat File Format dialog box, click Delimited.
e Click OK.
f In the Flat File Connection Manager Editor dialog box, in the Connection manager name
field, enter the name of the output file of the connection manager.
g Click Browse.
h Select an output file and click Open.
i Click OK.
j In the Flat File Destination Editor, click Mappings.
k Set the correct column mappings for the Flat File Destination object.
l Click OK.
20 From the Data Flow Destinations panel, drag a Flat File Destination object into the Data Flow
panel.
This object is the destination for all error records.
21 Connect the error output (red arrow) of the ADO NET Source object to Flat File
Destination you just added.
22 Create another Flat File Connection Manager for the new Flat File Destination object that serves
as the error destination.
a Double-click the Flat File Destination object.
b In the Flat File Destination Editor, click Connection Manager.
c Click New.
d In the Flat File Format dialog box, click Delimited.
e Click OK.
f In the Flat File Connection Manager Editor dialog box, in the Connection manager name
field, enter the name of the output file to be used to store all error records.
g Click Browse.
h Select an output file and click Open.
i Click OK.
j In the Flat File Destination Editor, click Mappings.
l Click OK.
23 Save the project.
24 Run the project with debugging (Debug > Start Debugging).
You can check the progress of the workflow in the Progress panel.
When the export is successful, the flat file source and destination objects in the Data Flow
pane turn green.
If the export is not successful, the ADO NET Source object turns red. For example, you
might get an exception like “Error: 0xC0047062 at Data Flow Task, ADO NET Source [16]:
System.ArgumentException: Error loading assembly: C:\Program Files (x86)\Aster Data
Systems\nCluster ADO.NET Provider (x86)\AsterDataC#DSII.dll.”
In this case, either configure the 32-bit runtime in SSIS for the project or install the 64-bit
nClusterDNProvider.
The Microsoft SQL Server Business Intelligence Development Studio (BIDS) is a Visual Studio
plug-in. For SQL Server 2008, BIDS is 32-bit only, so it requires 32-bit drivers, including
the Aster Database driver.
However, on 64-bit versions of Windows, there is a project flag that you must set to
allow 32-bit drivers to operate. Select the project, right-click, and choose Properties. Then
set Run64BitRunTime to False. Otherwise, you get an architecture mismatch and other
connection errors.
Similarly, to run the package outside of BIDS (such as running a SQL Server Agent job or
running the package by itself), click the Execution options tab and check the Use 32 bit run
time check box.
namespace ConsoleApplication2
{
    class Program
    {
        // table schema:
        // create table t_test_adonet (i int, j char(10)) distribute by hash(i);
        static void OutputParameter(DbConnection connection)
        {
            Console.WriteLine("OutputParameter start....");
            // Send a query to the backend.
            DbCommand command = connection.CreateCommand();
            command.CommandText = "select * from t_test_adonet where i = 11";
            // Now declare an output parameter to receive the first column of
            // the table.
            DbDataReader reader = command.ExecuteReader();
            // ... (rest of this method is omitted in the original sample) ...
        }

        // Fragment of a helper that converts a column value; its beginning is
        // omitted in the original, so the signature and condition below are
        // reconstructed for context:
        static string ValueToString(object obj, string binary)
        {
            if (obj is byte[]) // hypothetical test; the original start is elided
            {
                return binary;
            }
            else
            {
                return obj.ToString();
            }
        }
static void GetMetaDataCollections(DbConnection connection)
{
DataTable tbl = connection.GetSchema("MetaDataCollections");
Print(tbl);
}
static void GetRestrictions(DbConnection connection)
{
DataTable tbl = connection.GetSchema("Restrictions");
Print(tbl);
}
/* static void GetDatabases(DbConnection connection)
{
DataTable tbl = connection.GetSchema("Databases", new String[] {
"beehive" });
Print(tbl);
}
*/
static void GetTables(DbConnection connection)
{
string[] restrictions = new string[4];
restrictions[2] = "tablea";
Print(connection.GetSchema("Tables", restrictions));
}
        static void GetColumns(DbConnection connection)
        {
            try
            {
                Console.WriteLine("Aster .Net Provider Test Program");
                Console.WriteLine();
                Console.WriteLine();
                Console.WriteLine("Looking up provider factory...");
                DbProviderFactory factory = DbProviderFactories.GetFactory("Aster.Net");
                Console.WriteLine("Found provider factory.");
                connection = factory.CreateConnection();
                string connectionString =
                    "uid=beehive;pwd=beehive;dbname=beehive;ip=153.65.197.90;" +
                    "port=2406;NumericAndDecimalAsDouble=1";
                connection.ConnectionString = connectionString;
                // ... (portions of the sample are omitted here) ...
            }
            catch (Exception e) // catch clause restored; it is elided in the original excerpt
            {
                Console.WriteLine("***********************************************************");
                Console.WriteLine("Exception: " + e.Message);
                Console.WriteLine("Stack: ");
                Console.WriteLine(e.StackTrace);
                Console.WriteLine("***********************************************************");
                if (connection != null)
                    connection.Close();
            }
            Console.WriteLine("Press any key to continue.");
            Console.ReadKey();
        }
}
}
To load data into Aster Database, you can use the Aster Database Loader Tool, the COPY
command, the INSERT command, or a custom-defined SQL-MapReduce data loading
function you have written. This section provides tips for efficient loading and shows how to
load using the Aster Database Loader Tool. The following sections explain these utilities:
• Best Practices for Data Loading (page 121)
• Aster Database Loader Tool (page 125)
• Troubleshoot Loading (page 143)
Loading Terminology
We use the following terms in the text that follows:
Aster Database loader node: In the cluster, a loader node is a node dedicated to data loading.
Many loader nodes can operate in parallel.
Aster Database Loader Tool (ncluster_loader) is the client application for initiating high-
speed bulk loads.
Aster Database Load Error Logging is a feature in ncluster_loader that allows you to perform
loading that is more tolerant of poorly formatted input data. Load Error Logging sends
malformed rows to an error logging table.
Input data: Source input file(s) which are to be loaded into Aster Database. All source files are
compatible with a format that Aster Database is able to load. Examples include the CSV
Tip: A UNIQUE or PRIMARY KEY violation in the data being loaded will always cause the load to abort. So if the
table you are loading into contains these constraints and your load is failing, check the data you are loading to ensure
it complies.
tables. Note that any custom error logging table has to inherit from the default system
error logging table. To create such a table, see .
3 Abort the data load operation in the presence of too many malformed rows.
This is particularly useful if you want a given load operation to abort if too many
malformed rows are present in the input data (--el-enabled --el-limit = 100). To
preserve atomicity for bulk load operations, the load operation fails as a
transaction when the error limit is reached. When the operation fails, any rows already
written by the transaction to the target table and error logging table are deleted.
4 Label malformed rows in the error logging table.
If multiple operations are loading data into Aster Database at the same time (e.g., the pre-
production system is testing integration of two separate data sources), you can label each
load operation to identify which rows belong to which data source (--el-enabled --
el-label = 'my_data_source').
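The options from items 3 and 4 combine naturally on one command line. A dry-run sketch that only prints the command for review (the host, user, table, and file names are illustrative, not from the original):

```shell
#!/bin/sh
# Assemble an error-logging load command; remove the echo to run it for real.
EL_OPTS="--el-enabled --el-limit 100 --el-label my_data_source"
echo ncluster_loader -h 10.51.3.100 -U loader_user $EL_OPTS \
    sales.daily_facts /data/staging/daily_facts.csv
```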
Summary
In an environment where not all operations are run in an automated fashion, where new data
sources are to be integrated, or where existing ETL processes are to be changed, we
recommend that you set an error logging limit to prevent accumulation of too many
malformed rows in the error logging tables. If malformed data rows need to be inspected, we
recommend that you use a custom error logging table or create a separate error logging table
for that load job.
Syntax
To run the Aster Database Loader Tool, you type:
$ ncluster_loader [arguments] [schemaname.]tablename [ filename |
dirname ]
where
• arguments are the command-line flags that control how the loader runs. The flags are
explained in Argument Flags, below, or you can display the help by typing:
$ ncluster_loader -?
• schemaname is the optional name of the destination schema. If no schema name is
provided, Aster Database will search the schemas listed in the schema search path.
• tablename is the name of the destination table. (See “Case-Sensitive Handling for Table
Names” on page 126 if you wish to have Aster Database evaluate table names in a case-
sensitive manner.)
• filename or dirname indicates the file or directory of files to be loaded.
• filename: Qualified path of the file containing the data to be loaded. The contents of
the file must be in either CSV or text format, as described for the COPY statement.
Details of the encoding used (such as non-default values for null or delimiter) are
specified using the appropriate options, as described below.
• dirname: Qualified path of the directory containing one or more data files to be
loaded. All data files found within this directory are expected to be in the same format
and will be loaded as a single transaction. Subdirectories will not be processed.
If you don’t supply a file or directory name argument, Aster Database Loader assumes you
want to load from STDIN. See “Load from STDIN Example” on page 138.
Tip for Windows users: When using Aster Database Loader on Microsoft Windows, bear in mind:
• If the filename or dirname contains spaces, make sure you enclose it in double quotes.
• When specifying a dirname, you must use a double backslash at the end of the path. For example, use
“c:\temp\loadFiles\\” to specify a directory called “loadFiles”.
• Never mix UNIX and Windows-style newline characters in the same data file. Doing so will cause your load attempt
to fail.
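The last point in the tip above can be checked before loading. A small sketch that flags carriage-return characters in a data file (the two files are created here just for the demonstration):

```shell
#!/bin/sh
# Detect Windows-style CRLF line endings so newline styles are not mixed.
printf 'unix line\n' > clean.csv
printf 'windows line\r\n' > mixed.csv
CR=$(printf '\r')
for f in clean.csv mixed.csv; do
    if grep -q "$CR" "$f"; then
        echo "$f: contains CR characters (Windows newlines)"
    else
        echo "$f: UNIX newlines only"
    fi
done
```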
and in the map file, you would reference the table as:
"table" : "\"Foo\".\"Bar\"",
If you want to load to table "bar" in schema "Foo", you would still need to escape quote the
schema and the table separately as follows:
ncluster_loader.exe -h 10.51.3.100 -U mjones -w st4g0l33 \"Foo\".\"bar\"
mydata.csv -c
and in the map file, as:
"table" : "\"Foo\".\"bar\"",
Argument Flags
In addition to the schemaname.tablename and filename/dirname arguments explained
above, the Aster Database Loader Tool takes the following argument flags at the command line
or in the map file. (Map files let you load from many input files in a single run of Aster
Database Loader. See “Troubleshoot Loading” on page 143.)
In the table that follows, the argument flags are sorted based on the long-form, command-line
flag:
• The left column lists the flag you use at the command line.
• The middle column lists the flag you can use in a map file (See “Rules for Passing
Arguments in a Map File” on page 136.). If no value appears in the middle column, then
the argument is one that can only be passed at the command line.
Table 5 - 6: Argument Flags for Aster Database Loader Tool
Exit Status
• 0: The Aster Database Loader Tool terminated successfully.
• 2: An error internal to the Aster Database Loader Tool was detected.
• 3: Another error was detected.
• 4: Failed to establish a connection to Aster Database.
• 5: An error was detected while communicating with Aster Database.
• 6: Malformed input data was detected.
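In scripts, the exit value can be mapped to a message after the loader runs. A minimal wrapper sketch; the message strings simply restate the exit values above, and the loader invocation itself is assumed:

```shell
#!/bin/sh
# Translate an Aster Database Loader Tool exit code into a description.
explain_exit() {
    case "$1" in
        0) echo "loader terminated successfully" ;;
        2) echo "internal loader error" ;;
        3) echo "another error was detected" ;;
        4) echo "failed to establish a connection to Aster Database" ;;
        5) echo "error while communicating with Aster Database" ;;
        6) echo "malformed input data detected" ;;
        *) echo "unknown exit value: $1" ;;
    esac
}
# Example: run the loader, then report (invocation illustrative):
#   ncluster_loader -h queen mytable data.csv; explain_exit $?
explain_exit 4
```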
Connecting
The Aster Database Loader Tool is a regular Aster Database client application. To connect
to an Aster Database, specify the host name or IP address of the queen node with the
command-line option -h. If the connection cannot be made for any reason (e.g.,
insufficient privileges, or the server is not running on the targeted host), the Aster
Database Loader Tool returns an error and terminates.
Procedure:
To load data with the Aster Database Loader Tool, do this:
1 Install the Aster Database Loader Tool on your data staging machine. (See “Install the
Loader Tool” on page 133.)
2 If you have not already created one or more loader nodes in Aster Database, create them
now.
3 Prepare the file or files that contain the data you wish to load:
a For hints on file formatting, see the descriptions of the --csv, --delimiter, and
--null arguments, below. Use a consistent newline character in your input file(s)!
b Determine your mapping of the input file’s field values to the columns of your target
table. See the description of the --columns argument, below.
c Determine any special parsing hints you need to provide to the loader tool. See the
descriptions of the --escape, --quote, and --data-prefix arguments, below.
4 Place your data input file(s) on the data staging machine.
5 Figure out how you want to handle records that fail to load. See the section “Error
Logging” on page 142. Teradata Aster recommends that you create an error logging table
that will receive rows that fail to load. Be prepared to query the
nc_all_errorlogging_stats table for statistics about your load attempt.
6 Figure out which advanced options of the Aster Database Loader Tool you will use:
a Do you need to load from multiple files or insert into multiple tables? If so, see the
description of the --map-file argument and read the section, “Troubleshoot
Loading” on page 143.
b Do you need to automatically partition data when loading parent-child tables with
inheritance? Use the --auto-partition argument and read the section, “Loading
Parent Child Tables with Inheritance” on page 140.
c If you need to run an SQL script before and/or after the data is loaded, read the
descriptions of the --begin-script and --end-script arguments, below.
7 Load your data by running the Aster Database Loader Tool on your input file(s). See
“Argument Flags” on page 127 for a list of the command-line options. The command you
type will be similar to:
$ ./ncluster_loader -h 10.50.25.100 -w beehive -D "~" customers input_data.txt
8 Check the results of your load attempt by querying the statistics tables,
nc_all_errorlogging_stats and nc_user_errorlogging_stats.
9 Check your error logging table(s) for rows that failed to load. Be aware that if the load
failed entirely (for example, if the number of errors exceeded the --el-limit), then no
new rows will appear in your target table, and no error rows will appear in the error
logging table.
10 If you wish to import the failed rows, find each row of input data that failed, edit it to fix it,
and combine these fixed rows into a new input file. Re-run the Aster Database Loader Tool
on the new input file.
Procedure:
To load data from multiple input files using a map file, do this:
1 Prepare your data files for import. Each file should be formatted as usual for the Aster
Database Loader Tool. All the files you submit in a single running of the Aster Database
Loader Tool must be in the same format.
2 You can optionally pass connection parameters in the map file. Any connection
parameters specified on the command line supersede those in the map file. The
connection parameters you can specify are:
• dbname - the name of the database
• username - the name of the Aster Database user
• password - the password of the Aster Database user
• loader - the hostname or IP address of the Aster Database loader node
• force-loader - same as the -f command line option. Instructs the Aster Database
Loader Tool to use the loader node specified with the loader parameter even if the IP
address provided is not known to Aster Database. Note that if this option is specified,
the Aster Database Loader Tool will only try that single IP address and return an error
status if the connection fails for any reason.
• timeout - same as the -t command line option. Specifies the timeout value in seconds
for Aster Database connection attempts. Default is 30 seconds.
3 Prepare your map file. The map file is a text file containing a set of logical text blocks, each
surrounded by curly braces. Each block represents a file or directory to be loaded. The
format is like this:
{
"loadconfig" :
[
{
"table" : "schema1.targettable1",
"file" : "data/insert1.txt",
"errorlogging" : { "enabled" : false }
},
{
"table" : "schema1.targettable1",
"file" : "data/insert2.txt",
"begin-script" : "input/mapfile/begin-script.sql",
"end-script" : "input/mapfile/end-script.sql",
"errorlogging" : { "enabled" : false }
}
]
}
In the above example, we assume the current directory (from which we invoke
ncluster_loader) contains a subdirectory, data, which holds two files, insert1.txt and
insert2.txt, and we load both into table targettable1 in schema schema1. Error logging is
turned off.
Each block in the map file must contain the following required parameter flags and their
values:
• table specifies the name of the target table. Typically you will schema-qualify the table
name as shown in the example above. You may omit the schema, in which case the
user’s schema search path determines the schema. If your table name is case sensitive,
you must surround the table name with backslash-escaped double quote marks (\"),
like this:
"table" : "schema1.\"TargetTable\"",
• file specifies the name of the file or directory to be loaded. See “filename” in “Syntax”
on page 126.
• begin-script and end-script specify scripts to run before or after the loading of
the file. Both are optional. Each map file entry can have a separate begin-script and
end-script. For each map file entry, ncluster_loader runs the begin-script, loads
the data from the file, then runs the end-script.
The begin-script and end-script require each statement to be on a single line. Do
not include commands that begin or end the transaction in the script files. Here is an
example of a valid script file:
DROP TABLE IF EXISTS foo;
CREATE TABLE foo (id int, sometext varchar(40)) DISTRIBUTE BY HASH (id);
• errorlogging introduces a block that specifies how to handle malformed rows in the
input file. Inside the errorlogging block (enclosed in curly braces) you pass the
enabled parameter and, optionally, the parameters discard-errors, label, limit,
schema, and table. These parameters correspond, respectively, to the command-line
flags --el-enabled, --el-discard-errors, --el-label, --el-limit, --el-schema,
and --el-table.
Each block in the map file can contain the optional parameter flags listed in the middle
column of the table, “Argument Flags” on page 127. See “Rules for Passing Arguments in a
Map File”, below.
4 Run the Aster Database Loader Tool, passing the --map-file or -m option and the name
of your map file. Pass additional command line flags as needed, observing the rules set
forth in the preceding paragraph.
Note that flags with no value listed in the middle column can also be used when you’re
loading with a map file, but you must pass them at the command line, instead.
For example, the errorlogging block of the last entry in a map file might end like this
(the trailing braces close the entry, the loadconfig list, and the map file):
"errorlogging" :
{
"enabled" : true,
"label" : "vm_test_12-test13",
"limit" : 100000,
"schema" : "public",
"table" : "nc_errortable_part"
}
}
]
}
Examples
Here’s how we run Aster Database Loader, piping its input data through sed:
$ cat sampleData-3.tsv \
| sed -e 's_\\_\\\\_g' \
| ncluster_loader -h $QUEEN_IP -d my_db -U beehive -w beehive testo /dev/stdin
Loading tuples using node '192.168.28.100'.
3 tuples were successfully loaded into table 'testo'.
Here are the result rows:
$ act -h $SYSMAN_IP -d my_db -U beehive -w beehive -c 'SELECT * FROM testo ORDER BY id;'
id | string
----+---------------------------------------------------------------------
1 | This is just a line.
5 | How often do back-slash characters ('\') appear in your data?
6 | And how often do you think they actually disappear: 1 \? 2 \? 3 \?
7 | \W\a\y \t\o\o \o\f\t\e\n\! \! \!
(4 rows)
Windows-based machine. For example, if you will use an SSH client (e.g., putty) to run
ncluster_loader, make sure you set the SSH client’s default character set to UTF-8.
We recommend that, prior to loading, you convert your text files to UTF-8. For example, if
you’re a Notepad++ user, you can use the command, “Convert to UTF8 without BOM.”
Newline Character
Make sure your data file uses a consistent character to represent newlines. If the file uses \r\n
for newlines, then it should not also use \n for newlines, and vice versa. If your file contains
both UNIX-style \n newlines and Windows-style \r\n newlines, then you must clean the file
before you try to load it. The UNIX command, dos2unix, can be useful for doing this.
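A quick pre-load check is to count carriage-return characters in the file: a nonzero count in a file that also uses plain \n endings signals a mixed file that should be cleaned first. A small POSIX-shell sketch (the function name is our own):

```shell
#!/bin/sh
# Count carriage-return (\r) characters in a file. A data file with pure
# UNIX-style newlines reports 0; a pure Windows-style file reports one
# per line; any other combination suggests mixed newline characters.
crlf_count() {
  # tr deletes everything except carriage returns; wc counts what is left
  tr -dc '\r' < "$1" | wc -c
}

# If the count is nonzero and you want UNIX newlines throughout:
#   dos2unix mydata.txt
```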
1 Set up the parent-child table schema in your database. On each ultimate child table, write
a CHECK constraint that specifies what data may be loaded into that child table.
Warning! Aster Database does not detect overlapping constraints on peer child tables. As a result, the correct placement
of a row during loading can be indeterminate.
Workaround: Take care that the constraints you define do not create overlapping logical partitions. A simple mistake
would be to set up range constraints like this:
CHECK ( ymdh BETWEEN '2005-07-01' AND '2005-08-01' );
CHECK ( ymdh BETWEEN '2005-08-01' AND '2005-09-01' );
In this example, it is not clear in which partition the ymdh value '2005-08-01' resides.
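A non-overlapping version of the same partitioning uses half-open ranges, so each boundary value satisfies exactly one constraint. This is a sketch only; the column name ymdh comes from the example above, while everything else about your table definitions will differ:

```sql
-- Each child table owns a half-open month: the lower bound is included,
-- the upper bound is excluded, so '2005-08-01' matches exactly one child.
CHECK ( ymdh >= '2005-07-01' AND ymdh < '2005-08-01' );
CHECK ( ymdh >= '2005-08-01' AND ymdh < '2005-09-01' );
```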
Error Logging
The --el-discard-errors flag discards all malformed rows, the --el-label flag tags failed
row data, the --el-limit flag sets the maximum number of failed rows allowed for the job,
and the --el-table flag specifies a custom error logging table. See “Argument Flags” on
page 127 for explanations, and “Example with Error Logging” on page 139 for example usage.
To perform error logging, the Aster Database Loader Tool relies on the error handling features
of the Aster Database COPY command in SQL.
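After a load, the logging tables can be inspected directly from ACT. A sketch follows; the statistics table name comes from this chapter, while the error table name assumes the default error logging settings (your schema, table, and column layout may differ):

```sql
-- Per-load statistics for all users' load attempts:
SELECT * FROM nc_all_errorlogging_stats;

-- Rows that failed to load, assuming the default error logging table
-- in schema public:
SELECT * FROM public.nc_errortable_part;
```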
Troubleshoot Loading
Here is an example:
Assume the file test.txt, which we are attempting to load into table table1, includes the
following input data row:
1, 25, "Some text with a, comma in it."
This row will fail to load, returning the error "Error: extra data after last
expected column".
Synopsis
To run the Aster Database Export Tool, you type:
$ ./ncluster_export [arguments] [schemaname.]tablename [ filename ]
You can also pipe the results through a standard UNIX command such as gzip. For example:
$ ./ncluster_export -U mjones -w st4g0l33 -h 10.50.52.100 -d mydb mytable
| gzip -c > mytable.gz
In the synopsis above, the arguments are:
• schemaname is the optional name of the source table’s schema. If no schema name is
provided, Aster Database will search the schemas listed in your current schema search
path.
• tablename is the name of the source table. See “Case-Sensitive Handling for Table
Names” on page 146 if you wish to have Aster Database evaluate table names in a
case-sensitive manner. To export from multiple tables, see “Exporting from Multiple
Tables” on page 146.
• filename indicates the name of the file that will receive the exported data. If you don’t
supply a file or directory name argument, Aster Database Export assumes you want to
export to STDOUT. If the filename contains spaces, make sure you enclose it in double
quotes.
• arguments are the command-line flags that control how the exporter runs. The flags are
explained in “Argument Flags for Exporter” on page 147, or you can display the help by
typing:
$ ncluster_export -?
• “maxtuplesperfile” can be used to specify how many records will be put in each output
file. This allows outputting multiple files for a single table, with the specified number
of records in each.
• “columns” is a comma-separated list of the columns to be exported.
2 Run ncluster_export, passing the --map-file or -m option and the name of your map
file.
$ ./ncluster_export -U mjones -w st4g0l33 -h 10.50.52.100 -d mydb -m
"path/to/dir/mapfile"
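The authoritative export map file format is defined in “Exporting from Multiple Tables” on page 146. As a rough sketch only, and assuming the same curly-brace block layout as the loader’s map file shown earlier (the block structure, table name, and file name here are illustrative assumptions, while maxtuplesperfile and columns are the parameters described above), such a file might look like:

```
{
  "loadconfig" :
  [
    {
      "table" : "public.mytable",
      "file" : "export/mytable.txt",
      "maxtuplesperfile" : 1000000,
      "columns" : "id,name"
    }
  ]
}
```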
Flag          Description
-? [ --help ] Shows the online help.
-B [ --begin-script ] arg Specifies the qualified path of the file containing SQL
commands that should be executed when the
transaction starts, i.e. immediately after the begin
command is issued to Aster Database. NOTE: Data-
returning statements such as SELECT are not allowed.
Each command in the begin script should be on a
separate line. Do not include BEGIN or END
statements.
-c [ --csv ] Exports the output files in CSV format (the default is to
use Text format).
-C [ --columns ] arg An optional comma-separated list of columns to be
exported from the source table (the default is to export
all columns).
-d [ --dbname ] arg Specifies the name of the database to connect to (the
default is 'beehive').
-D [ --delimiter ] arg Specifies the delimiter character to use when exporting
data (must be a string that represents a valid single
character, such as 'd' or '\n'). The default is a tab
character ('\t') in text mode, a comma (',') in CSV
mode.
-E [ --end-script ] arg Specifies the qualified path of the file containing SQL
commands that should be executed when the
transaction finishes, i.e. immediately before the end
command is issued to Aster Database. NOTE: Data-
returning statements such as SELECT are not allowed.
Each command in the end script should be on a separate
line. Do not include BEGIN or END statements.
-e [ --escape ] arg Specifies the escape character for CSV output (must be a
string that represents a valid single character, such as 'd'
or '\n'). The default is the quote character (double-quote
by default). This option is only valid when CSV mode is
specified.
-f [ --force-loader ] Instructs ncluster_export to use the loader node
specified with the '-l/--loader' option even if the IP
address provided is not known to Aster Database.
NOTE: ncluster_export will only try that single IP
address and return an error status if the connection fails
for any reason.
filename (Not a flag; you pass Indicates the name of the file that will receive the
the file or directory name itself!) exported data. If you don’t supply a file or directory
name argument, Aster Database Export assumes you
want to export to STDOUT. If the filename contains
spaces, make sure you enclose it in double quotes.
-h [ --hostname ] arg Specifies the hostname or IP address of the machine on
which Aster Database is running. Default is 'localhost'.
-l [ --loader ] arg Preferred loader IP address. If a value is provided,
ncluster_export will try to export through this IP
address before trying any other loader node. NOTE:
ncluster_export expects this IP address to be known to
Aster Database and will simply ignore the address
provided if this is not the case. To change this behavior,
use the '-f/--force-loader' option.
-m [ --map-file ] arg Specifies the name of the file containing mappings of
the tables to be exported. This option allows export of
multiple tables within the same transaction. For details
about the format used for the map file, see “Exporting
from Multiple Tables” on page 146.
-M [ --max-tuples-per-file ] arg Specifies how many records are exported to a single file.
If the total number of records exceeds this number,
multiple output files with the suffix '_N' are produced,
where N is an integer.
-n [ --null ] arg Specifies a string that represents a NULL value. The
default is '\N' in text mode and an empty value with no
quotes in CSV mode. See also
“--null-backslash-escapes” on page 148.
--null-backslash-escapes Indicates that backslashes in the '-n/--null' flag should be
treated as special characters, which will be processed
according to the default rules for escaped strings.
You should use this flag whenever the null string
contains a backslash, to ensure compatibility with future
versions of Aster Database.
-p [ --port ] arg Specifies the TCP port on which Aster Database is
listening for connections. The default is port 2406.
-q [ --quote ] arg Specifies the quote character for CSV output (must be a
string that represents a valid single character, such as 'd'
or '\n'). The default is the double-quote. This option is
only valid if CSV mode is specified.
schemaname (Not a flag; you pass The optional name of the source table’s schema. If no
the schema name itself!) schema name is provided, Aster Database will search
the schemas listed in your current schema search path.
-t [ --timeout ] arg Specifies the timeout value (in seconds) to use when
connecting to Aster Database (default is 30).
tablename (Not a flag; you pass The name of the source table. See “Case-Sensitive
the table name itself!) Handling for Table Names” on page 146 if you wish to
have Aster Database evaluate table names in a
case-sensitive manner. To export from multiple tables,
see “Exporting from Multiple Tables” on page 146.
-U [ --username ] arg Connects to the database with the specified username
instead of the default ('beehive'). (You must have
permission to do so, of course.)
-V [ --version ] Prints the version of ncluster_export and exits.
-w [ --password ] arg Connects to the database with the specified password
(as opposed to providing a password at the prompt).
-W [ --password-prompt ] Forces ncluster_export to prompt for a password before
connecting to the database.
Supported Platforms
For a list of supported operating systems for the Aster Database drivers and utilities, see the
Aster Database Drivers and Utilities Guide for your version of Aster Database.
On Linux, ncluster_export requires glibc 2.7 or later.
If you are running on AIX, see “AIX Client Dependent Libraries” on page 44.
Procedure
To install the Aster Database Export Tool on a machine other than the queen:
1 Obtain the Aster Database Export Tool package for your client operating system by
downloading it from one of these places onto your client machine, into the directory
where you will install it:
• To get the newest package, download it from http://downloads.teradata.com/download/tools
• On your queen node, you can find the installers in the directory
/home/beehive/clients_all/<your_client_OS>.
2 Set permissions to make the files executable.
F
FETCH_COUNT 26
FETCH_LIMIT 28
fetch-count 26
fetch-limit 28
field separator, setting in ACT 34
file
  redirect command history to file 32
  redirect query history to file 32
  redirect query results to file 32
file argument during batch load 136
file input to ACT 32
  with \i 32
  with -f 19
force-loader flag for bulk loading 130
format
  dates in bulk loads 128

G
get latest documentation 10
group
  list all groups in Aster Database 34

H
help 10
help for ncluster_loader 130
history in ACT 32
hostname flag for bulk loading 130

I
import
  from many files 135
  logging bad rows during import 142
  ncluster_loader 125
index
  list all indexes in Aster Database 33
install
  ACT client 11
  Aster Database Loader Tool 133

J
Java
  JDBC statements 90
JDBC 59
  connecting through JDBC 62
  cursors in 67
  driver 59
  how to write applications that use JDBC 90
  query example 91

L
label argument during batch load 136
label for data loading errors 129
LIMIT
  FETCH_LIMIT as alternative to 28
limit argument during batch load 136
limit rows returned at a time 26
limit total rows returned per query 28
line break character 140
list all tables 33
list databases command 34
load 121
  autopartition 140
  from many files 135
  from STDIN 138
  logging bad rows during import 142
  map file 135
load data 121
  other tools 145
loader
  arguments 127
  Aster Database loader tool 125
  destination table 126
  destination table, case-sensitive name 126
  flags 127
  map file: supported flags 127
  parallel loading with 140
  parameters 127
  specifying a dedicated loader node 140
loader flag for bulk loading 130
loader node 121
  parallel loading with 140
loader tool 125
loading 121
  arguments 127
  Aster Database Export tool 145
  Aster Database Loader tool 125
  best practices 121
  character set for loading 139
  flags 127
  from STDIN 138
  handling nulls when loading data 131
  JDBC driver 59
  map file: supported flags 127
  ODBC driver 44
  parameters 127
  SSIS and 99
  troubleshooting 140
  troubleshooting problems encountered when loading 143
loading errors 142
log errors 142

M
malformed rows during load 142
map file 135
  supported flags 127
map file for ncluster_loader 135
map-file flag for bulk loading 131
memory usage in ACT: limit with FETCH_COUNT paging of query results 26
Microsoft .NET, drivers for 97
Microsoft SSIS 99
MicroStrategy, connections for 93

N
nc_errortable_part 142
nc_errortable_repl 142
ncluster_export 145
  source table, case-sensitive name 146
ncluster_loader 125
  arguments 127
  character set for loading 139
  date format 128
  destination table 126
  destination table, case-sensitive name 126
  flags 127
  installing 133
  loading from many files 135
  loading to multiple tables 135
  logging bad rows during import 142
  map file: supported flags 127
  parameters 127
.NET Data Provider 97
newline character 140
null
  handling nulls when loading data 131
null flag for bulk loading 131

O
ODBC 44
  bytea handling 57
  driver 44
  Driver Manager 50
  Driver Manager configuration 50
ODBC driver
  for MicroStrategy 93
  installing for use with Perl scripts 55
  installing on AIX 50
  installing on Linux 48
  installing on Windows 46

P
parameter
  setting queen system parameters 85
  settings for ACT 31
partitioning
  autopartition during load 140
performance tuning
  improve ACT performance with FETCH_LIMIT 28
  limit ACT memory use with FETCH_COUNT 26
  server-side cursors in ACT 26
Perl scripts, ODBC driver for 55
pipe to ncluster_loader 138
pivot column and row output 35
port flag for bulk loading 131
portal 10
preferences
  SQL settings 71
prefix argument during batch load 131
PreparedStatement in JDBC 90
pre-process data with sed 138

Q
queen
  system parameters, setting 85
query
  transpose results 35
  typing in ACT 24
  via Java JDBC calls 90
query buffer 32
  send to file 32
query history 32
  send to file 32
query results 32
  send to file 32
query tool
  installing ACT client 11
quiet mode of ACT 21
quote character flag for bulk loading 131
quoted identifier
  enable in Aster Database 71

R
recommended client settings 44
recommended loading settings 139
redirect command history to file 32
redirect query history to file 32
redirect query results to file 32
reporting tools

V
VACUUM
  after bulk loading 132
vacuum-table flag for bulk loading 132
version
  check ACT version 19
  documentation version 10
  ncluster_loader version 132

W
workaround
  certificate, missing self-signed cert 78

X
x command 35