Administration Guide


Using the IMPORT Utility

The IMPORT utility inserts data from an input file into a table or view. If the target table or view already contains data, you can either replace that data with the data in the file or append the data in the file to it.

Notes:

  1. If the existing table is a parent table whose primary key is referenced by a foreign key in a dependent table, its data cannot be replaced, only appended to.

  2. You cannot import data into a summary table.

The following is an example of the command line processor syntax for the IMPORT command:

   db2 import from stafftab.ixf of ixf insert into userid.staff
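The INSERT keyword appends the file data to any existing rows. If your authorization allows it, you can instead replace the existing data. A sketch, reusing the file and table names from the example above:

```
db2 import from stafftab.ixf of ixf replace into userid.staff
```

Remember from the notes above that REPLACE is not allowed if the target is a parent table referenced by a foreign key, or a summary table.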

The following information is required when importing data to a table or view:

When importing into large object (LOB) columns, the data can come either from the same file as the rest of the column data, or from separate files. In the latter case, there is one file for each LOB instance.

Each column value in the input file contains either the data to load into the column, or the name of a file where that data is stored. The default is that the column value contains the data itself.

Notes:

  1. When LOB data is stored in the file, no more than 32KB of data is allowed. Truncation warnings are ignored.

  2. Either all of the LOB data must be stored in the main file, or each LOB must be stored in a separate file. The main file cannot contain a mixture of LOB data and file names.

For more information on importing LOBs from files, see the LOBSINFILE option in the Command Reference manual.
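As a sketch of importing LOBs from separate files, assume a delimited file inventory.del whose LOB column values are file names, resolved against a hypothetical directory /u/user/lobs (the file, path, and table names here are illustrative, not from the example above):

```
db2 import from inventory.del of del lobs from /u/user/lobs modified by lobsinfile insert into inventory
```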

You may also provide the following information:

To import data into a new table, you must have SYSADM authority, DBADM authority, or CREATETAB privilege for the database. To replace data in an existing table or view, you must have SYSADM authority, DBADM authority, or CONTROL privilege for the table or view. To append data to an existing table or view, you must have SELECT and INSERT privileges for the table or view.

With IMPORT, unequal code page situations are possible, involving expansion or contraction of character data. Such situations can occur with Japanese or Traditional-Chinese Extended UNIX Code (EUC) and double-byte character sets (DBCS), which may use encodings of different lengths for the same character. The NOCHECKLENGTHS option toggles between two behaviors:

  1. Comparison of input data length to target column length is performed before reading in any data. If the input length is larger than the target, NULLs are inserted for that column if it is nullable. Otherwise, the request is rejected. This is the default.
  2. No initial comparison is performed and, on a row-by-row basis, an attempt is made to import the data. If the data is too long after translation is complete, the row is rejected. Otherwise, the data is imported. Specifying NOCHECKLENGTHS will enable this behavior.
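NOCHECKLENGTHS is specified as a file type modifier on the IMPORT command. A sketch, reusing the file and table names from the earlier example:

```
db2 import from stafftab.ixf of ixf modified by nochecklengths insert into userid.staff
```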

The IMPORT utility casts user-defined distinct types (UDTs) to similar base data types automatically. This saves you from having to explicitly cast UDTs to the base data types. Casting allows for comparisons between UDTs and the base data types in SQL.
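For example, in the following sketch (the type, table, and file names are hypothetical), the distance column has a distinct type based on INTEGER; because IMPORT casts the UDT to its base type automatically, the numeric values in the input file can be imported without an explicit cast:

```
db2 "CREATE DISTINCT TYPE T_KILOMETERS AS INTEGER WITH COMPARISONS"
db2 "CREATE TABLE trips (trip_id INTEGER, distance T_KILOMETERS)"
db2 import from trips.del of del insert into trips
```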

Use the IMPORT utility to re-create a table that was saved by using the EXPORT utility. The table must have been exported to an IXF file. When creating a table from an IXF file, not all attributes of the original table are preserved. For example, the referential constraints, foreign key definitions, and user-defined data types are not retained. If the IXF file was created with the LOBSINFILE option, then the length of the original LOB is lost. Attributes of the original table that are preserved or retained are:
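A sketch of re-creating a table from an IXF file produced by EXPORT (the file and table names are hypothetical):

```
db2 import from staff.ixf of ixf create into newstaff
```

The CREATE keyword creates the table and its indexes from the definitions stored in the IXF file, subject to the limitations on preserved attributes described above.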

Note: Before running the IMPORT utility, you must be connected, or be able to connect implicitly, to the database into which the data will be imported. Also, the utility issues COMMIT and ROLLBACK statements; therefore, complete all transactions and release all locks by issuing either a COMMIT or a ROLLBACK before using the utility.
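A minimal sketch of the recommended sequence, assuming a database named sample and the file and table from the earlier example:

```
db2 connect to sample
db2 commit work
db2 import from stafftab.ixf of ixf insert into userid.staff
```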

Using IMPORT with Buffered Inserts

In a partitioned database environment, the IMPORT utility can be enabled to use buffered inserts. This reduces the messaging that occurs when data is loaded, resulting in better performance.

To make the IMPORT utility use buffered inserts, the import package, db2uimpm.bnd, must be rebound against the database using the BIND command with the INSERT BUF option. This can be achieved using the following commands:

   db2 connect to your_database
 
   db2 BIND db2uimpm.bnd INSERT BUF

However, any one of the buffered inserts can fail, and it is not possible to report the failing row and its error back to the user as IMPORT normally does. Therefore, enable buffered inserts with the IMPORT utility only if you are not concerned about error reporting.

Import in a Client/Server Environment

When you import a file to a remote database, a stored procedure may be called to perform the import on the server. A stored procedure will not be called when:

When importing using a stored procedure, messages are created in the message file using the default language installed on the server; they are in the language of the application only if the client and the server use the same language.

The import utility creates two temporary files in the tmp directory indicated by the DB2INSTPROF environment variable on the database server. One file is for data and the other file is for messages generated by the import utility.

If you receive an error about writing or opening data on the server, make sure that:

Differences Between the IMPORT and LOAD Utilities

This table gives a quick comparison of the two utilities, highlighting the important differences between them.

IMPORT: Significantly slower than LOAD on large amounts of data.
LOAD: Significantly faster than IMPORT on large amounts of data, because it writes formatted pages directly into the database.

IMPORT: Limited exploitation of intra-partition parallelism.
LOAD: Exploits intra-partition parallelism. Typically, this requires symmetric multiprocessor (SMP) machines.

IMPORT: No FASTPARSE support.
LOAD: Supports the FASTPARSE file type modifier, which reduces data checking on user-supplied data.

IMPORT: No CODEPAGE support.
LOAD: Supports the CODEPAGE file type modifier, which converts character data (and numeric data specified in characters) from the given code page to the database code page during the load operation.

IMPORT: Supports hierarchical data.
LOAD: Does not support hierarchical data.

IMPORT: Creation of the table, hierarchy, and indexes is supported with the PC/IXF format.
LOAD: The table and indexes must already exist.

IMPORT: WSF format is supported.
LOAD: WSF format is not supported.

IMPORT: No BINARYNUMERICS support.
LOAD: Supports the BINARYNUMERICS file type modifier.

IMPORT: No PACKEDDECIMAL support.
LOAD: Supports the PACKEDDECIMAL file type modifier.

IMPORT: Can import into views and tables. (Aliases are supported.)
LOAD: Can load into tables only. (Aliases are supported.)

IMPORT: The table space(s) in which the table and its indexes reside are online for the duration of the import.
LOAD: The table space(s) in which the table and its indexes reside are offline for the duration of the load.

IMPORT: All rows are logged.
LOAD: Minimal logging is performed.

IMPORT: Triggers are fired.
LOAD: Triggers are not supported.

IMPORT: If an import is interrupted and a COMMITCOUNT was specified, the table is usable and contains the rows that were loaded up to the last commit. The user can restart the import or use the table as is.
LOAD: If a load is interrupted and a SAVECOUNT was specified, the table remains in load pending state and cannot be used until the load is restarted to continue, or the table space is restored from a backup image created some time before the load.

IMPORT: Space required is approximately the size of the largest index plus about 10%. This space is taken from the temporary table spaces within the database.
LOAD: Space required is approximately the sum of the sizes of all indexes defined on the table, and possibly twice this size. This is temporary space outside the database.

IMPORT: All constraints are validated during an import.
LOAD: Uniqueness is verified during a load, but all other constraints must be checked using the SET CONSTRAINTS statement.

IMPORT: The keys of each row are inserted into the index one at a time during the import.
LOAD: All the keys are sorted, and the index is built after the data has been loaded.

IMPORT: If up-to-date statistics are required after an import, RUNSTATS must be run afterwards.
LOAD: Statistics can be gathered during the load if all the data in the table is being replaced.

IMPORT: You can import into a host database through DB2 Connect.
LOAD: You cannot load into a host database.

IMPORT: Files that are imported must reside on the node where the import is invoked.
LOAD: Files or pipes that are loaded must reside on the node where the database resides.

IMPORT: No backup image is required.
LOAD: A backup image can be created during the load operation.
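The restart behavior described above depends on the COMMITCOUNT option; after an interruption, the RESTARTCOUNT option can skip the rows already committed. A sketch with hypothetical file and table names, committing every 1000 rows and then restarting after 5000 rows were committed:

```
db2 import from sales.del of del commitcount 1000 insert into sales
db2 import from sales.del of del commitcount 1000 restartcount 5000 insert into sales
```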

