The Fastest DBA Contest benchmark was run against a server with only 1 disk, so spreading the I/O over multiple dbspaces wouldn't have done much good. My server has access to multiple RAID10 sets made up of 6 to 10 drives each, so I think I can get some benefit from spreading out the I/O: dbspace1 has 6 drives, dbspace2 and dbspace3 have 8 drives each, and dbspace4 has 10 drives.
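As a quick sanity check of the layout before deciding on placement, the dbspaces and their chunk counts can be listed straight out of sysmaster. This is a minimal sketch assuming the standard sysdbspaces columns, not output from my actual run:

-- list the dbspaces the instance knows about (sketch; standard sysmaster columns)
select
    name::char(12) dbspace,
    fchunk,
    nchunks
from
    sysmaster:sysdbspaces
order by
    name;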
After the last benchmark I ran the following SQL against the sysmaster database to identify the I/O for each table and index:
select
    tabname::char(20) tab,
    (pagreads + pagwrites) io,
    pagreads,
    pagwrites
from
    sysptprof
where
    dbsname = "ptpcc" and
    tabname not matches ("sys*")
order by
    (pagreads + pagwrites) desc;
...
tab                 io  pagreads  pagwrites
stock           833785    417350     416435
customer        226990    134711      92279
order_line       75287     42206      33081
stock_pk         45504     45504          0
oorder_uix1      36457     23932      12525
oorder           29367     26773       2594
order_line_pk    27214     11809      15405
oorder_pk        19565     16879       2686
customer_pk      17865     17865          0
new_order_pk      8496      6196       2300
district          4793      4745         48
customer_ix1      3894      3894          0
item              3773      3773          0
new_order         1979       773       1206
history           1522        12       1510
item_pk            815       815          0
warehouse           11         5          6
district_pk          7         7          0
warehouse_pk         1         1          0
Using this information I can place my tables and indexes in the appropriate dbspaces to spread the reads and writes evenly across my disks (giving the RAID10 set with 10 disks more I/Os to do and the RAID10 set with only 6 disks fewer I/Os to do) and hopefully improve my benchmark TPM. Here is how I chose to distribute my data (a sketch of the SQL to do the moves follows the list):
stock           dbspace4
customer        dbspace3
order_line      dbspace2
stock_pk        dbspace1
oorder_uix1     dbspace2
oorder          dbspace2
order_line_pk   dbspace1
oorder_pk       dbspace1
customer_pk     dbspace2
new_order_pk    dbspace1
district        dbspace2
customer_ix1    dbspace1
item            dbspace2
new_order       dbspace1
history         dbspace2
item_pk         dbspace1
warehouse       dbspace2
district_pk     dbspace1
warehouse_pk    dbspace2
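I won't claim this is the exact script I ran (the placement can just as easily be done at schema creation time with the IN dbspace clause on CREATE TABLE and CREATE INDEX), but moving existing objects around looks roughly like this; the first few entries from the list above are shown as a sketch:

-- move existing tables and indexes into their target dbspaces
-- (sketch; repeat for each object in the list above)
alter fragment on table stock init in dbspace4;
alter fragment on table customer init in dbspace3;
alter fragment on table order_line init in dbspace2;
alter fragment on index stock_pk init in dbspace1;
alter fragment on index oorder_uix1 init in dbspace2;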
Making these changes and rerunning the benchmark gives us a TPM of 6187, an improvement of almost 5%. Not exactly the barn-burning result I was hoping for, most likely because the earlier buffer increase already reduced the amount of physical I/O there was to spread around, but it is an improvement.
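To confirm where everything actually ended up (and to eyeball the spread), a query along these lines against sysmaster does the trick; this is a sketch using the common partnum-to-dbspace trick rather than anything lifted from my benchmark scripts:

-- which dbspace does each ptpcc table and index partition live in?
-- (trunc(partnum / 1048576) extracts the dbspace number from the partnum)
select
    t.tabname::char(20) tab,
    d.name::char(12) dbspace
from
    sysmaster:systabnames t,
    sysmaster:sysdbspaces d
where
    t.dbsname = "ptpcc" and
    t.tabname not matches ("sys*") and
    trunc(t.partnum / 1048576) = d.dbsnum
order by
    2, 1;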