IBM Books

Administration Guide

Configuring I/O Servers for Prefetching and Parallel I/O

To enable prefetching, the database manager starts separate threads of control, known as I/O servers, to perform page reading. As a result, query processing is divided into two parallel activities: data processing (CPU) and data page I/O. The I/O servers wait for prefetch requests from the CPU processing activity. Each prefetch request describes the I/O needed to satisfy the anticipated data needs. The reason for prefetching determines when and how the database manager generates the prefetch requests. (See "Understanding Sequential Prefetching" and "Understanding List Prefetching" for more information.)

The following figure illustrates how I/O servers are used to prefetch data into a buffer pool.

Figure 56. Prefetching Data using I/O Servers


The following steps are illustrated in Figure 56:

1. The user application passes the SQL request to the database agent.

2. The database agent determines that prefetching should be used to obtain the data required to satisfy the SQL request and writes a prefetch request to the I/O server queue.

3., 4. The first available I/O server reads the prefetch request from the queue and then reads the data from the table space into the buffer pool. Depending on the number of prefetch requests in the queue and the number of I/O servers configured by the num_ioservers configuration parameter, multiple I/O servers can fetch data from the table space at the same time.

5. The database agent performs the necessary actions against the data pages in the buffer pool and returns the result of the SQL request to the user application.
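The steps above follow a classic producer/consumer pattern: the agent queues prefetch requests, and a pool of I/O servers drains the queue into the buffer pool. The following Python sketch illustrates that structure only; all names (prefetch_queue, io_server, the toy table space) are hypothetical and are not DB2 code.

```python
# Illustrative sketch of the prefetch-queue pattern from Figure 56.
# Not DB2 code: names and data structures are invented for clarity.
import queue
import threading

NUM_IOSERVERS = 3            # analogous to the num_ioservers parameter
prefetch_queue = queue.Queue()
buffer_pool = {}             # page number -> page contents
pool_lock = threading.Lock()

# A toy "table space": page number -> data
table_space = {n: "page-%d-data" % n for n in range(8)}

def io_server(pages):
    """Wait for prefetch requests; read each page into the buffer pool."""
    while True:
        page_no = prefetch_queue.get()
        if page_no is None:          # shutdown signal
            break
        with pool_lock:
            buffer_pool[page_no] = pages[page_no]
        prefetch_queue.task_done()

servers = [threading.Thread(target=io_server, args=(table_space,))
           for _ in range(NUM_IOSERVERS)]
for s in servers:
    s.start()

# The "database agent" anticipates it will need pages 0..7 and queues
# prefetch requests instead of reading the pages itself.
for page_no in range(8):
    prefetch_queue.put(page_no)
prefetch_queue.join()                # wait until every page is in the pool

for _ in servers:                    # stop the I/O servers
    prefetch_queue.put(None)
for s in servers:
    s.join()

print(sorted(buffer_pool))           # all eight pages were prefetched
```

Because the agent never blocks on I/O itself, CPU work and page reads proceed concurrently, which is the point of the two parallel activities described above.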

Configuring enough I/O servers with the num_ioservers configuration parameter can greatly enhance the performance of queries that can use prefetching. Configuring a few extra I/O servers does not hurt performance: the extra servers are never used, and their memory pages are simply paged out. Each I/O server process is numbered, and the database manager always uses the lowest-numbered process that is available; as a result, some of the higher-numbered processes may never be used.
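The num_ioservers parameter is set with the standard database configuration command. The database alias "sample" below is a placeholder; substitute your own. The new value generally takes effect the next time the database is activated.

```shell
# Set the number of I/O servers for the database (alias is a placeholder).
db2 UPDATE DATABASE CONFIGURATION FOR sample USING NUM_IOSERVERS 6

# Confirm the configured value.
db2 GET DATABASE CONFIGURATION FOR sample
```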

To determine the number of I/O servers that you should configure, consider the following:

Enabling Parallel I/O

For situations in which multiple containers exist for a table space, the database manager can initiate parallel I/O. Parallel I/O refers to the ability of the database manager to use multiple I/O servers to process the I/O requirements of a single query. Each I/O server is assigned the I/O workload for a separate container, allowing several containers to be read in parallel. Performing I/O in parallel can result in significant improvements to I/O throughput.

While a separate I/O server handles the workload for each container, the actual number of I/O servers that can perform I/O in parallel is limited to the number of physical devices over which the requested data is spread. To exploit this parallelism fully, you therefore need at least as many I/O servers as physical devices.
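The container-to-server assignment described above can be sketched as one worker per container, with all containers read concurrently. This is a hypothetical illustration of the idea, not DB2's implementation; the container names and extent labels are invented.

```python
# Hypothetical sketch of parallel I/O: one worker per container,
# several containers read at the same time. Names are illustrative only.
from concurrent.futures import ThreadPoolExecutor

# A toy table space striped across three containers.
containers = {
    "container0": ["extent0", "extent3"],
    "container1": ["extent1", "extent4"],
    "container2": ["extent2", "extent5"],
}

def read_container(name, extents):
    # One I/O server handles the entire workload for one container.
    return [(name, extent) for extent in extents]

# With at least as many workers as containers (and, ideally, as many
# underlying physical devices), every container is read in parallel.
with ThreadPoolExecutor(max_workers=len(containers)) as pool:
    results = pool.map(lambda item: read_container(*item), containers.items())
    pages = [page for chunk in results for page in chunk]

print(len(pages))   # 6 extents read across 3 containers
```

Note that if all three containers sat on one physical disk, the three workers would simply queue behind each other at the device, which is why the text ties effective parallelism to the number of physical devices.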

How parallel I/O is initiated and used is dependent on the reason for performing the I/O:

For optimal performance of parallel I/O, ensure that:

Allocating Multiple Pages at a Time

The database manager expands the database as needed when the multipage_alloc database configuration parameter is set to YES. (See "MultiPage File Allocation Enabled (multipage_alloc)".) The default value for this parameter is NO, causing the database manager to expand the database file one page at a time.

To set multipage_alloc to YES (and gain the associated performance improvement), use the db2empfa tool in the sqllib/bin directory. (For more information on this tool, see the Command Reference.)
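Invocation of db2empfa follows the pattern below; the database alias "sample" is a placeholder. Note that once multipage_alloc is set to YES it generally cannot be set back to NO, so apply the tool deliberately.

```shell
# Enable multipage file allocation for a database (alias is a placeholder).
# db2empfa lives in the sqllib/bin directory of the instance.
cd sqllib/bin
db2empfa sample
```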
