
Data copy between databases in IBM DB2


Copy data between tables in different DB2 database servers.

By Kostas Koutsogiannopoulos

There are several ways to copy data between different DB2 databases.
The consistent way is to back up the source database, creating a backup image, and restore it to the target instance as a new database.
This creates a clone of the source database with the data as it existed at the backup timestamp.
If you took an "online" backup, you can also "rollforward" the target database to a later point in time using the transaction logs from the source database.
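
As a minimal sketch of that approach (database names, paths and timestamps here are hypothetical):

db2 "BACKUP DATABASE srcdb ONLINE TO /backups INCLUDE LOGS"
db2 "RESTORE DATABASE srcdb FROM /backups TAKEN AT 20240101120000 INTO newdb"
db2 "ROLLFORWARD DATABASE newdb TO 2024-01-01-13.00.00.000000 USING LOCAL TIME AND STOP"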

In this post we describe a quick way to copy data from tables living in a remote DB2 database to tables in a local DB2 database.

We assume that a table with the same definition as the remote table already exists in the local database.
If it does not, we can extract the DDL from the remote database and run the equivalent "CREATE" SQL statements locally.
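
The db2look utility can do the extraction; a quick sketch, assuming a hypothetical source database SRCDB (with remote credentials), schema APP and table ORDERS:

db2look -d SRCDB -i db2remote -w secret -z APP -t ORDERS -e -o orders_ddl.sql
db2 connect to LOCALDB
db2 -tvf orders_ddl.sql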

Setting up the instance environment

First, we catalog the remote node on the local instance:

db2 "CATALOG TCPIP NODE <remote node> REMOTE <remote ip> SERVER <remote port>"

Then we catalog the remote database that is running on the remote node:

db2 "CATALOG DB <source database> AT NODE <remote node> AUTHENTICATION SERVER"

Now we can access the remote database from the local machine. We can test it with the DB2 command line processor like this:

db2 "connect to <remote database> user <remote database user>"

If the connection is successful we can continue to the following steps.
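
Putting the three commands together with hypothetical values (node SRCNODE, remote server 192.168.1.10 listening on port 50000, database SRCDB, user db2remote):

db2 "CATALOG TCPIP NODE SRCNODE REMOTE 192.168.1.10 SERVER 50000"
db2 "CATALOG DB SRCDB AT NODE SRCNODE AUTHENTICATION SERVER"
db2 "connect to SRCDB user db2remote"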

Setting up the federation environment

DB2 "federation" is a configuration that allows remote database (DB2 or not) objects to be accessed as local DB2 database odjects.
We are using this way in order to copy data from the live remote tables as quickly as possible without the use of temporary storage in the proccess.

We can setup the remote database as "federated" to our local database.

First we need to create a wrapper that handles access to the remote data source. In our case (a remote DB2 database) we can use the following command:

db2 "CREATE WRAPPER <wrapper name> LIBRARY '/opt/ibm/db2/V9.5/lib64/libdb2drda.so'"

Then we need to create the remote federated server:

db2 "CREATE SERVER <source database name at local catalog> TYPE DB2 VERSION 10.5 WRAPPER <wrapper name> AUTHORIZATION "<remote database user>" PASSWORD "<remote database password>" OPTIONS(DBNAME '<source database name>')"

Then we need to map a local database user holding the "LOAD" privilege to a remote database user with "SELECT" privileges on the objects we want to copy.
The local user will inherit the privileges of the remote user on the remote database:

db2 "CREATE USER MAPPING FOR "<local user name>" SERVER <source database name> OPTIONS ( ADD REMOTE_AUTHID '<remote user name>', ADD REMOTE_PASSWORD '<remote user password>' )"

Now we can create objects in the local database (nicknames) that reflect remote database objects (tables), with commands like this:

db2 "CREATE NICKNAME <local schema name>.<local nickname name> FOR <source database name>.<source schema name>.<source table name>"

After nickname creation we can access the data of the remote table in the same way as the data of a local table.

For example, the result set of the command:

db2 "select * from <local schema name>.<local nickname name>"

... running in the local database as the local user, is equivalent to the command:

db2 "select * from <source schema name>.<source table name>"

... running in the remote database as the remote user.
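
Because the nickname behaves like a local table, we can also filter the remote data with ordinary SQL before it ever reaches the local database; for example (hypothetical names again):

db2 "select * from APP.NN_ORDERS where ORDER_DATE >= '2024-01-01'"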

Running the crossload

The load utility is the fastest way to copy data because it runs parallel inserts without logging the transactions.
Here we use its "load from cursor" capability, after declaring a cursor over an SQL SELECT statement.

For example check the following script:

 load1.sh

#!/bin/bash
#Loading the db2 profile
if [ -f /home/db2inst1/sqllib/db2profile ]; then
    . /home/db2inst1/sqllib/db2profile
fi
DB=<local (target) database name>
USER=<local user name>
PASS=<local user password>
SERVER=<federated server name>
FROMTABLE=<source table name>
TOTABLE=<target table name>
SOURCESCHEMA=<source schema name>
TARGETSCHEMA=<target schema name>
db2 connect to $DB user $USER using $PASS
# Create a nickname in the local database pointing at the remote source table
db2 "CREATE NICKNAME NN$SOURCESCHEMA.NN$FROMTABLE FOR $SERVER.$SOURCESCHEMA.$FROMTABLE"
# Declare a cursor over the nickname, reading with uncommitted-read isolation
db2 "DECLARE C1 CURSOR FOR SELECT * FROM NN$SOURCESCHEMA.NN$FROMTABLE WITH UR"
# Crossload the cursor contents into the local target table
db2 "LOAD FROM C1 OF CURSOR MESSAGES ./load.log REPLACE INTO $TARGETSCHEMA.$TOTABLE NONRECOVERABLE"
# Clear the check-pending state that the load leaves behind
db2 "SET INTEGRITY FOR $TARGETSCHEMA.$TOTABLE STAGING, MATERIALIZED QUERY, FOREIGN KEY, GENERATED COLUMN, CHECK IMMEDIATE UNCHECKED"
db2 "DROP NICKNAME NN$SOURCESCHEMA.NN$FROMTABLE"
db2 connect reset

As you can see, the script runs the following steps:

  • Creates the nickname
  • Declares the cursor
  • Runs the load utility
  • Deals with integrity checking (see the check below)
  • Drops the nickname
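
Note that IMMEDIATE UNCHECKED bypasses constraint checking instead of performing it, so it is worth verifying afterwards that no table was left in check-pending state. A quick catalog query for that (status 'N' means normal):

db2 "SELECT TABSCHEMA, TABNAME, STATUS FROM SYSCAT.TABLES WHERE STATUS <> 'N'"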

Of course you can include the whole procedure in a for loop if you want to copy data from multiple tables:

 load_all.sh

#!/bin/bash
#Loading the db2 profile
if [ -f /home/db2inst1/sqllib/db2profile ]; then
    . /home/db2inst1/sqllib/db2profile
fi
DB=<local (target) database name>
USER=<local user name>
PASS=<local user password>
SERVER=<federated server name>
LOGFILE=<file for logging>
S_T=$SECONDS
echo Load of tables started
db2 connect to $DB user $USER using $PASS > $LOGFILE
# Loop over the schema.table pairs; each iteration gets its own nickname and messages file
for TAB in <schema1.table1 schema2.table2 schema3.table3 ...>; do
s_t=$SECONDS
echo $TAB load started
db2 -v "CREATE NICKNAME NN$TAB FOR $SERVER.$TAB" >> $LOGFILE
db2 -v "DECLARE C1 CURSOR FOR SELECT * FROM NN$TAB WITH UR" >> $LOGFILE
db2 -v "LOAD FROM C1 OF CURSOR MESSAGES ./$TAB.log REPLACE KEEPDICTIONARY INTO $TAB NONRECOVERABLE" >> $LOGFILE
db2 -v "SET INTEGRITY FOR $TAB STAGING, MATERIALIZED QUERY, FOREIGN KEY, GENERATED COLUMN, CHECK IMMEDIATE UNCHECKED" >> $LOGFILE
db2 -v "DROP NICKNAME NN$TAB" >> $LOGFILE
e_t=$(($SECONDS - $s_t))
echo $TAB load terminated - "$(($e_t/60)) min $(($e_t%60)) sec"
done
db2 connect reset >> $LOGFILE
E_T=$(($SECONDS - $S_T))
echo Load of tables terminated - "$(($E_T/60)) min $(($E_T%60)) sec"

You can also run multiple scripts like this in parallel, and you can monitor the running load utilities with the command:

~$ db2 list utilities show detail
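
For the status of a specific load, there is also the LOAD QUERY command (table name hypothetical):

~$ db2 "load query table APP.ORDERS"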

Pros, cons of the method

Pros:

  • Ultra fast data loading
  • Cross platform (endianness)
  • No need for temporary storage
  • Easy filtering with SQL
  • Logging through the load utility's messages file
  • Easy monitoring

Cons:

  • Data is not consistent if the source database is live
  • Requires the table definitions to exist on the target DB
