Ten DB2 database optimization tips

To help DB2 DBAs avoid disaster and achieve high performance, I have summarized for our customers, users, and colleagues a set of expert DB2 troubleshooting procedures. What follows is a detailed description of the ten most important performance-improvement techniques for DB2 UDB e-commerce OLTP applications running in Unix, Windows, and OS/2 environments, with a conclusion at the end of the article.

Every few weeks or so, we receive a distress call from a DBA complaining about performance problems. "Our Web site is as slow as a snail," they complain. "We are losing customers badly. Can you help?" To answer these calls, I developed an analysis procedure for my consulting firm that lets us quickly find the cause of a performance problem and develop remedial recommendations and tuning advice. Callers rarely ask about fees and costs; they only care about stopping the loss. When a DB2 or e-commerce application is not running at the expected performance, the organization and its revenue suffer serious financial losses.

10. Monitor switches

Make sure the monitor switches are turned on. If they are not, you will not have the performance information you need. To turn the monitor switches on, issue the following command:

db2 "update monitor switches using

lock ON sort ON bufferpool ON uow ON

table ON statement ON "

9. Agents

Make sure you have enough DB2 agent processes to handle the workload. To find out, issue the command:

db2 "get snapshot for database manager"

and look for the following lines:

High water mark for agents registered = 7

High water mark for agents waiting for a token = 0

Agents registered = 7

Agents waiting for a token = 0

Idle agents = 5

Agents assigned from pool = 158

Agents created from empty Pool = 7

Agents stolen from another application = 0

High water mark for coordinating agents = 7

Max agents overflow = 0

If you find that Agents waiting for a token or Agents stolen from another application is not 0, then increase the number of agents available to the database manager (MAXAGENTS and/or MAX_COORDAGENTS, where applicable).
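
As a minimal sketch of such a change (the value 200 is purely illustrative, and the parameters that apply vary by DB2 release; size them to your own workload):

db2 "update dbm cfg using MAXAGENTS 200"

db2 "update dbm cfg using MAX_COORDAGENTS 200"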

8. Maximum number of open files

DB2 tries to be a "good citizen" within the operating system's resource constraints. One of its "good citizen" behaviors is to put a ceiling on the maximum number of files open at any one time. The database configuration parameter MAXFILOP limits the number of files DB2 can have open concurrently. When that limit is reached, DB2 starts closing and reopening its tablespace files (including raw devices). Constantly opening and closing files slows SQL response time and burns CPU cycles. To find out whether DB2 is closing files, issue the following command:

db2 "get snapshot for database on DBNAME"

and look for the following line:

Database files closed = 0

If this value is not 0, increase MAXFILOP until the constant opening and closing of files stops:

db2 "update db cfg for DBNAME using MAXFILOP N"

7. Locks

The default value of LOCKTIMEOUT is -1, which means there are no lock timeouts (for OLTP applications this can be disastrous). Nevertheless, I frequently find DB2 users running with LOCKTIMEOUT = -1. Set LOCKTIMEOUT to a very short value, such as 10 or 15 seconds. Waiting too long on locks creates an avalanche effect of lock waits.

First, check the value of LOCKTIMEOUT with the following command:

db2 "get db cfg for DBNAME"

and look for the line containing:

Lock timeout (sec) (LOCKTIMEOUT) = -1

If the value is -1, consider using the following command to change it to 15 seconds (be sure to consult the application developers or vendor first to make sure the application can handle lock timeouts):

db2 "update db cfg for DBNAME using LOCKTIMEOUT 15"

You should also monitor the number of lock waits, the lock wait time, and the amount of lock list memory in use. Issue the following command:

db2 "get snapshot for database on DBNAME"

and look for the following lines:

Locks held currently = 0

Lock waits = 0

Time database waited on locks (ms) = 0

Lock list memory in use (Bytes) = 576

Deadlocks detected = 0

Lock escalations = 0

Exclusive lock escalations = 0

Agents currently waiting on locks = 0

Lock Timeouts = 0

If the Lock list memory in use (Bytes) exceeds 50 percent of the defined LOCKLIST size, then increase the number of 4 KB pages in the LOCKLIST database configuration parameter.
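
A minimal sketch of such an increase (200 four-kilobyte pages is only an example; size LOCKLIST to your own lock volume):

db2 "update db cfg for DBNAME using LOCKLIST 200"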

6. Temporary table space

To improve DB2's ability to perform parallel I/O and to improve the performance of sorts, hash joins, and other database operations that use TEMPSPACE, the temporary table space should have at least three containers on three different disk drives.

To find out how many containers your temporary table space has, issue the following command:

db2 "list tablespaces show detail"

Look for the TEMPSPACE table space definition, similar to this example:

Tablespace ID = 1

Name = TEMPSPACE1

Type = System managed space

Contents = Temporary data

State = 0x0000

Detailed explanation: Normal

Total pages = 1

Useable pages = 1

Used pages = 1

Free pages = Not applicable

High water mark (pages) = Not applicable

Page size (bytes) = 4096

Extent size (pages) = 32

Prefetch size (pages) = 96

Number of containers = 3

Note that the value of Number of containers is 3, and that the Prefetch size is three times the Extent size. For the best parallel I/O performance, it is important that the prefetch size be a multiple of the extent size, with the multiplier equal to the number of containers.

To see the container definitions, issue the following command:

db2 "list tablespace containers for 1 show detail"

The 1 refers to tablespace ID 1, which is TEMPSPACE1 in the example above.
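
As a sketch of what such a layout could look like (the table space name TEMPSPACE2 and the /disk1, /disk2, and /disk3 paths are hypothetical; substitute your own directories on three separate drives):

db2 "create temporary tablespace TEMPSPACE2 managed by system using ('/disk1/db2tmp', '/disk2/db2tmp', '/disk3/db2tmp') extentsize 32 prefetchsize 96"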

5. Sort memory

OLTP applications should not be performing large sorts. They are too costly in terms of CPU, I/O, and elapsed time, and they will slow down any OLTP application. Therefore, the default SORTHEAP size of 256 4 KB pages (1 MB) should be more than adequate. You should also know the number of sort overflows and the number of sorts per transaction.

Issue the following command:

Db2 "get snapshot for database on DBNAME"

and look for the following lines:

Total sort heap allocated = 0

Total sorts = 1

Total sort time (ms) = 8

Sort overflows = 0

Active sorts = 0

Commit statements attempted = 3

Rollback statements attempted = 0

Let transactions = Commit statements attempted + Rollback statements attempted

Let SortsPerTX = Total sorts / transactions

Let PercentSortOverflows = Sort overflows * 100 / Total sorts

If PercentSortOverflows ((Sort overflows * 100) / Total sorts) is greater than 3 percent, there may be serious or unexpected sort problems in the application's SQL. Because the very presence of overflows indicates that large sorts are occurring, the ideal is to find no sort overflows at all, or at least a percentage below 1 percent.

If there are too many sort overflows, the "emergency" solution is to increase the size of SORTHEAP. Doing so, however, only masks the real performance problem. Instead, you should identify the SQL that is causing the sorts and change that SQL, the indexes, or the clustering to avoid or reduce the sort cost.
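
If you do need that stopgap, a minimal sketch (512 four-kilobyte pages is an arbitrary illustrative value):

db2 "update db cfg for DBNAME using SORTHEAP 512"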

If SortsPerTX is greater than 5 (a rule of thumb from experience), the number of sorts per transaction may be too high. Although some applications perform many small sorts (which never overflow and execute very quickly), the combination still consumes excessive CPU. When SortsPerTX is large, in my experience, these machines are usually CPU-bound. Identifying the SQL that is causing the sorts and improving the access plans (through indexes, clustering, or SQL changes) is extremely important for improving transaction throughput.

4. Table Access

For each table, determine how many rows DB2 reads per transaction. You must issue two commands:

db2 "get snapshot for database on DBNAME"

db2 "get snapshot for tables on DBNAME"

After issuing the first command, determine how many transactions have occurred by adding Commit statements attempted and Rollback statements attempted (as in the transactions formula in the sort memory tip above).

After issuing the second command, divide the number of rows read by the number of transactions (RowsPerTX). In each transaction, OLTP applications should normally read 1 to 20 rows from each table. If you find that hundreds of rows are being read per transaction, scans are occurring and indexes may need to be created. (Sometimes running runstats with distribution and detailed indexes also provides a solution; a sketch follows.)
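
As a sketch of that runstats option (the table name is borrowed from the sample output below; substitute your own schema and table):

db2 "runstats on table INST1.DGI_SALES_LOGS_TB with distribution and detailed indexes all"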

"Get snapshot for tables on DBNAME" sample output as follows:

Snapshot timestamp = 09-25-2000 4:47:09.970811

Database name = DGIDB

Database path = / fs/inst1/inst1/NODE0000/SQL00001 /

Input database alias = DGIDB

Number of accessed tables = 8

Table List

Table Schema = INST1

Table Name = DGI_SALES_LOGS_TB

Table Type = User

Rows Written = 0

Rows Read = 98857

Overflows = 0

Page Reorgs = 0

A large number of Overflows probably means you need to reorganize the table. An overflow occurs when a row's width changes because of an update and DB2 must relocate the row to a less-than-ideal page.
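
A minimal sketch of such a reorganization (again borrowing the sample table name; verify the reorg options available in your DB2 release):

db2 "reorg table INST1.DGI_SALES_LOGS_TB"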

3. Table space analysis

A table space snapshot is extremely valuable for understanding what data is being accessed and how. To get a table space snapshot, issue the following command:

db2 "get snapshot for tablespaces on DBNAME"

For each table space, answer the following questions:

What is the average read time (ms)?

What is the average write time (ms)?

What percentage of the physical I/O is asynchronous (prefetch) versus synchronous (random)?

What is the buffer pool hit ratio for each table space?

How many physical pages are read per minute?

How many physical and logical pages are read per transaction?

For all table spaces, answer the following questions:

Which table spaces have the slowest read and write times? Why? Are their containers on slow disks? Are the container sizes equal? Comparing asynchronous and synchronous access, are the access attributes what you expect? Randomly read tables should be in table spaces that are accessed randomly, that is, with a high percentage of synchronous reads, usually a higher buffer pool hit ratio, and a lower physical I/O rate.

For each table space, make sure the prefetch size equals the extent size multiplied by the number of containers. Issue the following command:

db2 "list tablespaces show detail"

If necessary, you can change the prefetch size for a given table space (a sketch appears at the end of this tip). You can check the container definitions with the following command:

db2 "list tablespace containers for N show detail"

Here, N is the table space identifier.
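
If the prefetch size does need adjusting, a sketch of the change (the value 96 assumes an extent size of 32 and three containers, as in the earlier TEMPSPACE1 example):

db2 "alter tablespace TEMPSPACE1 prefetchsize 96"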

2. Buffer pool optimization

I am always finding DB2 UDB sites where the machine has 2, 4, or 8 GB of memory, yet the DB2 database has only one buffer pool (IBMDEFAULTBP), and it is only 16 MB in size!

If this is the case at your site, create a buffer pool for the SYSCATSPACE catalog table space, a buffer pool for the TEMPSPACE table space, and at least two more: BP_RAND and BP_SEQ. Table spaces that are accessed randomly should be assigned to the buffer pool for random access (BP_RAND). Table spaces that are accessed sequentially (with asynchronous prefetch I/O) should be assigned to the buffer pool for sequential access (BP_SEQ). Depending on the performance objectives of certain transactions, you can create additional buffer pools; for example, you can make a buffer pool large enough to hold an entire "hot" (very frequently accessed) table. When large tables are involved, some DB2 users have had great success putting the important tables' indexes into their own index buffer pool (BP_IX).
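
A minimal sketch of such a layout (the sizes are in 4 KB pages and are purely illustrative, and TS_RAND is a hypothetical table space name; depending on the DB2 release, new buffer pools may not take effect until the database is restarted):

db2 "create bufferpool BP_RAND size 50000 pagesize 4k"

db2 "create bufferpool BP_SEQ size 50000 pagesize 4k"

db2 "alter tablespace TS_RAND bufferpool BP_RAND"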

Buffer pools that are too small cause excessive, unnecessary physical I/O. Buffer pools that are too large put the system at risk of operating system paging and waste CPU cycles managing the over-allocated memory. The right buffer pool size lies at the balance point between "too small" and "too large", at the point where diminishing returns begin. If you do not use a tool that automates this diminishing-returns analysis, you should scientifically test buffer pool performance (hit ratios, I/O times, and physical I/O read rates) at increasing buffer pool sizes until you reach the optimal size. Because the business keeps changing and growing, the "optimal size" decision should be re-evaluated periodically.

1. SQL cost analysis

A single bad SQL statement can completely ruin your day. More than once I have seen a relatively simple SQL statement make a mess of an otherwise well-tuned database and machine. For many of these statements, there is no DB2 UDB configuration parameter under the sun (or in the documentation) that can correct the high cost caused by a badly written SQL statement.

Worse, the DBA is often constrained from changing the SQL (perhaps because it is supplied by an application vendor such as SAP, PeopleSoft, or Siebel). That leaves the DBA only three options:

1. Change or add indexes

2. Change the clustering

3. Change the catalog statistics

In addition, today's robust applications are composed of tens of thousands of different SQL statements. The frequency with which these statements execute varies with the application's functionality and the needs of the daily business. A SQL statement's actual cost is its cost per execution multiplied by the number of times it is executed.

The major task facing every DBA is the challenge of identifying the statements with the highest "actual cost" and reducing the cost of those statements.

You can compute the resource cost of executing a SQL statement using the native DB2 Explain utilities, tools from some third-party vendors, or DB2 UDB SQL Event Monitor data. But statement execution frequency can only be learned through careful and time-consuming analysis of DB2 UDB SQL Event Monitor data.

The standard process a DBA uses to investigate problem SQL statements is:

1. Create a SQL Event Monitor that writes to a file (a complete sketch follows these steps):

$> db2 "create event monitor SQLCOST for statements write to ..."

2. Activate the event monitor (make sure there is adequate free disk space):

$> db2 "set event monitor SQLCOST state = 1"

3. Let the application run.

4. Deactivate the event monitor:

$> db2 "set event monitor SQLCOST state = 0"

5. Use the db2evmon tool shipped with DB2 to format the raw SQL Event Monitor data (depending on SQL throughput, this can require hundreds of megabytes of free disk space):

$> db2evmon -db DBNAME -evm SQLCOST > sqltrace.txt

6. Browse the formatted file looking for noticeably large cost numbers (a time-consuming process):

$> more sqltrace.txt

7. Undertake a more complete analysis of the formatted file, attempting to identify the unique statements (independent of literal values), each unique statement's frequency (the number of times it occurs), and its total CPU, sort, and other resource costs. Such a thorough analysis of even a 30-minute sample of application SQL activity can take a week or more.
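
As a complete sketch of steps 1 and 2 (the /tmp/sqlcost path is hypothetical; the statement's target is elided in the steps above, so pick a directory with plenty of free space):

$> db2 "create event monitor SQLCOST for statements write to file '/tmp/sqlcost'"

$> db2 "set event monitor SQLCOST state = 1"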

To reduce the time spent identifying high-cost SQL statements, consider the many sources of information available to you:

From tip 4, be sure to compute the number of rows read from each table per transaction. If the resulting number looks large, the DBA can search the formatted SQL Event Monitor output for the table name (this narrows the search and saves some time) and may thereby identify the problem statements.

From tip 3, be sure to compute the percentage of asynchronous reads and the physical I/O read rate for each table space. If a table space shows a high percentage of asynchronous reads and a physical I/O read rate far above average, one or more tables in that table space are being scanned. Query the catalog to find out which tables are assigned to the suspect table space (assigning one table per table space provides the best performance diagnostics), then search the formatted SQL Event Monitor output for those tables. These steps, too, can help narrow the search for high-cost SQL statements.

Try to examine the DB2 Explain information for each SQL statement the application executes. However, I have found that high-frequency, low-cost statements often compete for machine capacity and erode the ability to deliver the desired performance.

If analysis time is short and maximum performance is critical, consider vendor-supplied tools, which can quickly automate the process of identifying resource-intensive SQL statements. Database-GUYS Inc.'s SQL-GUY tool provides accurate, real-time, balanced analysis of SQL statement costs.

Keep on tuning

Optimal performance requires not only eliminating high-cost SQL statements but also making sure the physical infrastructure is appropriate. Best performance comes when all the tuning knobs are set just right, memory is allocated effectively to the pools and heaps, and I/O is distributed evenly across the disks. Although measurement and tuning take time, DBAs who follow these ten recommendations will be very successful at satisfying internal and external DB2 customers. Because e-business keeps changing and growing, even the best-managed databases need regular tuning. A DBA's work is never done!
Original source: http://space.itpub.net/15082138/viewspace-629191