IBM GLOBAL SERVICES
D17: DB2 UDB Best Practices
Dwaine R. Snow
IBM DB2 Information Management Technical Conference, Las Vegas, NV, Sept. 20-24, 2004
© IBM Corporation 2004

Introduction
• Best practices, based on experience with customers, for:
  – Building your databases
  – Configuring DB2 and your databases
  – Monitoring
  – Tuning

Building Databases
• Ensure there are enough physical disks
  – Too few disks is the #1 cause of poor performance
  – General rule of thumb: a minimum of 6-10 disks per CPU; more is good
• Do not isolate table spaces on their own disks unless you have plenty of disks
  – It is normally better to spread table spaces across all available disks
• Do not create more than one TEMP table space per page size
• SMS is normally the best choice for TEMP table spaces

Building Databases (continued)
• If using SMS on AIX, use 3 or more containers per table space to prevent i-node contention
• Logging
  – Place the logs on separate disks
  – Use mirrored logging; it removes logging as a single point of failure
  – With mirrored logging, keep the two log copies on different disks, arrays, and disk adapters

Building Databases (continued)
• For SAN/NAS storage
  – Do not count on a large disk cache for performance; the buffer pool normally absorbs the hits
  – Size for performance, NOT capacity

Partitioned Databases
• Rules of thumb for raw data volume
  – Regatta-class CPUs: 50-200 GB per CPU
  – Intel/AMD, HP, Sun: 25-150 GB per CPU
• Rules of thumb for the CPU-to-partition ratio
  – Regatta: 1 or 2 CPUs per partition both work well
  – Intel/AMD, HP, Sun: 2 CPUs per partition typically works best

Configuring DB2
• Use 64-bit if you can
  – More addressability: bigger buffer pools and bigger sorts
  – Fewer limitations on dynamic tuning

Configuring DB2 (continued)
• Disable intra-partition parallelism with INTRA_PARALLEL
  – Leaving it enabled and setting MAX_DEGREE to 1 is the worst possible configuration
• The Connection Concentrator should be used for workloads with:
  – Many connections
  – Very quick SQL statements
  – Do not use the connection concentrator for large or long-running queries
• Set CPUSPEED to -1 so that DB2 can calculate the appropriate value

Configuring DB2 (continued)
• Set DIAGLEVEL and NOTIFYLEVEL to 3
• For large result sets, set RQRIOBLK as large as possible (64 KB)
  – The default is fine for single-row results
• Set SHEAPTHRES to the average number of concurrently executing applications times SORTHEAP
  – But not less than 10x SORTHEAP
• Keep the SYSADM group small
  – Too many members can lead to problems

Configuring Your Databases
• Other than its size, a single large buffer pool requires no tuning and gives very good performance
• Multiple buffer pools
  – Can give better performance if sized correctly
  – Require constant monitoring
• Configure enough primary logs to handle the workload
  – Allocating secondary logs is expensive
• Setting LOGSECOND to -1 enables infinite logging
  – Do you really need or want this?
  – If you use it, set MAX_LOG to limit the recovery timeframe

Configuring Your Databases (continued)
• Set NUM_IOCLEANERS based on the number of CPUs
• Set NUM_IOSERVERS based on the number of physical disks the database uses
• Set DFT_EXTENT_SZ based on the underlying disks
  – Be aware of striping and the SAN disk configuration
• Set DFT_PREFETCH_SZ based on the underlying disks
• Set BLK_LOG_DSK_FUL so that DB2 waits if the log disk becomes full

Configuring Your Databases (continued)
• Type-2 indexes can improve concurrency
  – For databases migrated from V7, use REORG to convert the indexes
• If you encounter an "Out of DBHEAP" error, double DBHEAP and continue
  – Since DB2 only allocates what it needs, there is no need to spend time calculating the exact value

Configuring Your OLTP Databases
• For dynamic workloads, define a large package cache
• For dynamic workloads, use query optimization class 2 or 3
• Set the log buffer to at least 256 pages
• A larger log file size improves performance, but can lengthen recovery
• Set MAXLOCKS to 20-30%
  – The UNIX default is too low
  – Also increase LOCKLIST from its default
• Consider setting MINCOMMIT to 1
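The instance- and database-level settings above are made with UPDATE DBM CFG and UPDATE DB CFG from the CLP. A minimal sketch follows; the database name SAMPLE and every value shown are illustrative assumptions, not recommendations for your system:

```shell
# Hypothetical DB2 CLP commands; "SAMPLE" and all values are placeholders.

# Instance-level settings (diagnostics, sort memory, CPU speed).
db2 "UPDATE DBM CFG USING DIAGLEVEL 3 NOTIFYLEVEL 3"
db2 "UPDATE DBM CFG USING SHEAPTHRES 20000"   # e.g. ~10 concurrent apps x SORTHEAP 2000
db2 "UPDATE DBM CFG USING CPUSPEED -1"        # let DB2 recalculate the value

# Database-level OLTP settings discussed above.
db2 "UPDATE DB CFG FOR SAMPLE USING LOGBUFSZ 256"
db2 "UPDATE DB CFG FOR SAMPLE USING MAXLOCKS 25 LOCKLIST 1000"
db2 "UPDATE DB CFG FOR SAMPLE USING MINCOMMIT 1"
```

Instance-level (DBM CFG) changes generally require an instance restart to take effect, while most DB CFG changes take effect at the next database activation.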
• Alternatively, set MINCOMMIT to the number of transactions per second divided by 10 (or the number of transactions divided by 100)
• Set AVG_APPLS low; typically 1 is good
  – It is not based on the number of connections
• Reduce CHNGPGS_THRES to 20-30%
  – The default can cause system slowdowns
• Set SOFTMAX to an integer multiple of 100 percent
• Do not set SORTHEAP too large, as that makes sorts more "attractive" to the optimizer
• Use smaller page sizes

Configuring Your DSS Databases
• Use query optimization class 5 or higher
  – Required for some optimizations, e.g. hash joins
• Set AVG_APPLS based on the real workload
  – Monitor the system and determine the average number of concurrently executing applications
  – It is not based on the number of connections
• A large SORTHEAP can help reduce overflowed sorts
• Since a DSS workload is normally read-only, increase DLCHKTIME
• Use larger page sizes

Configuring Your ODS
• Use Query Patroller (QP) to ensure query response times
• Run reports at the UR isolation level
• Use MQTs to improve report performance and to separate read access from updates

Monitoring
• Take DB2 and OS snapshots at the same time
• Only take the snapshot(s) you are interested in
  – A snapshot for ALL can add overhead
• The SQL snapshot functions let you insert the data into tables for analysis with SQL
  – Analyze the data for trends
  – Predict and plan for growth

Things to Look For in Snapshots
• A high number of sort overflows indicates that a larger SORTHEAP may help performance
• Examine the ratio of rows read to rows selected
  – A high ratio indicates likely table scans
• Rows written or inserted in a query-only workload indicate sorts
• Application snapshots cover the length of the connection, not individual statements
  – To examine statements, use the statement snapshot

Tuning
• Use AUTOCONFIGURE to get an initial configuration
• When tuning, change one parameter at a time and re-test
• Make sure your statistics are current
• If you add an index, run RUNSTATS
• Before re-testing a poorly performing statement, flush the package cache
  – Otherwise the old access plan will be reused
• When you create an index, the key order does matter
  – Put the column with the highest cardinality first

Troubleshooting
• When there is a performance problem, use vmstat to see whether the bottleneck is CPU, I/O, or memory
• High I/O wait typically indicates overflowed sorts or table scans
  – Use iostat to isolate the disk, then correlate it to the database object
• High CPU usage may indicate that sorting or lock waits are occurring
  – Examine database snapshots to determine which one it is
• Memory issues are usually indicated by paging

Troubleshooting (continued)
• Statement snapshots (or QP) can help identify poorly performing queries
  – Look for queries with high execution times, especially if their estimated cost is low
  – Look for queries with a large number of sorts; this matters most for queries that run many times

Summary
• Hopefully these tips save you time and headaches
• Examine the tips and think about how they relate to your environment
• When making configuration changes, make one change at a time and re-test
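The tuning loop described above (get a baseline with AUTOCONFIGURE, change one parameter at a time, keep statistics current, flush the package cache before re-testing) might look like the following CLP session. The database name SAMPLE, the table name, and the parameter values are illustrative assumptions only:

```shell
# Hypothetical DB2 CLP session; "SAMPLE" and all names/values are placeholders.
db2 connect to SAMPLE

# 1. Get an initial configuration; APPLY NONE shows the recommendations
#    without changing anything.
db2 "AUTOCONFIGURE USING mem_percent 80 workload_type mixed APPLY NONE"

# 2. Change ONE parameter at a time, then re-run the workload and compare.
db2 "UPDATE DB CFG FOR SAMPLE USING SORTHEAP 512"

# 3. After adding an index, refresh statistics so the optimizer sees it.
db2 "RUNSTATS ON TABLE myschema.orders WITH DISTRIBUTION AND INDEXES ALL"

# 4. Flush the package cache before re-testing a statement, so the old
#    access plan is not silently reused.
db2 "FLUSH PACKAGE CACHE DYNAMIC"
```

Each step maps to one bullet in the Tuning slides; resist the temptation to combine several configuration changes into a single test run, since you will not know which change helped.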