Thursday, May 27, 2010

Need a Free Database?


On May 25th IBM announced that it is repackaging its Informix Dynamic Server offerings.

Enterprise Edition and Workgroup Edition are enjoying a well-deserved retirement and have been replaced with two new options: Ultimate Edition and Growth Edition. Developer Edition, the unpaid intern, is still around.

There is a lot of good stuff going on here: features that previously cost extra are now included at no additional charge, and features that were previously unavailable in certain editions are now included at no additional cost as well.

But I don't want to talk about that. I want to talk about the free stuff, the C versions.

IBM announced two additional editions, Innovator-C and Ultimate-C. I've read that the C stands for Community, as in community-provided support, I guess. The kind of community support one can find on comp.databases.informix and the IIUG SIGs.

These C versions are FREE and can be used in production environments. I couldn't believe it either until I saw it explicitly stated in the IBM License Information document:


1) IBM Base License

This Program has been provided to Licensee at no charge.

This License Information serves as Licensee's Proof of Entitlement (PoE).


Well hot damn. So what's the catch?

Community support is free, IBM support ain't. If you want to be able to call IBM if you have a problem or question you will have to purchase some support. This is pretty reasonable.

You can't redistribute the C versions without a distribution agreement, and that will probably cost money. It is OK to install a free version in your data center and build a production application on top of it; it is not OK to bundle Informix with your application and distribute it for free.

And of course there are some restrictions on the features, resources and platforms available for the C versions.

Innovator-C Edition - Available for all platforms

Maximum of 4 CPUVPs (VPCLASS cpu,num=4)
Maximum of 2GB memory allocated to Informix (SHMTOTAL 2097152)
Maximum of 2 Enterprise Replication Admin nodes
1 Read/Write HDR Server or 1 Read/Write RSS Server
No DBSpace Prioritization during backup/restore
No Recovery Time Objective Policy
No Private Memory Cache for VP
No Direct I/O for Cooked Files
No I-STAR
No Parallel Data Query (PDQ)
No High Performance Loader (HPL)
No Parallel Index Build
No Parallel backup/restore
No Partitioning
No Column Level Encryption
No Compression
No Label based access control (LBAC)
No Last commit concurrency
No Multiple triggers and views
No Web feature service
No Node DataBlade
No Auto-gather statistics during index build
No Point-in-time table recovery
No SQL Warehouse
No Shared Disk Secondary

Ultimate-C Edition - Available on Windows and Mac OS X only

Maximum of 16 CPUVPs (VPCLASS cpu,num=16)
Maximum of 16GB memory allocated to Informix (SHMTOTAL 16777216)
1 Read/Write HDR Server or 1 Read/Write RSS Server
No Compression
No Shared Disk Secondary

If you're not familiar with Informix and you've stumbled across this blog looking for alternatives to other free database engines, this may look like a scary list of things you don't get. I can assure you it is not. These restrictions are a very small subset of the Informix feature set, and none of these feature or resource restrictions should prevent you from using Innovator-C or Ultimate-C as your free database engine of choice.

One very important limitation that does NOT exist is the limitation on data size. If you have a 5 petabyte database, Innovator-C and Ultimate-C will support it.

I should say that all of this is just my interpretation of the IBM documentation on the new Informix packaging. I am not an IBMer and I could be wrong on some things, but I don't think I am.

Thursday, May 13, 2010

OLTP Benchmark - Update Statistics


Thought I'd post a quick entry about something you're probably thinking I should have done already: update statistics.

Well, it didn't do much good, which is kind of what I expected: Informix now updates some statistics automatically when you build an index, and those statistics are enough to let the optimizer pick the correct index for my benchmark SQLs.

Side note: I think this is a great addition to the engine, and it sure seems to have reduced the number of "Did you update statistics?" posts on C.D.I., but I wish there were a way to disable this functionality. When update statistics runs as part of the index build it runs slower than if I were updating my statistics manually (something I'm going to do anyway after my index build), and this extends the time I officially spend creating the index.
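For what it's worth, my manual statistics run is nothing fancy, just something along these lines for each benchmark table (a sketch; the column choice here is hypothetical):

UPDATE STATISTICS LOW FOR TABLE stock;
UPDATE STATISTICS HIGH FOR TABLE stock (s_i_id);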

I guess if I have to find something to complain about in the engine I'm happy it is a small complaint like this one.

OLTP Benchmark - OPTOFC


I had always intended to try OPTOFC and the other settings that help reduce network traffic between the clients and the server, but on the advice of Cesar and The Clown I decided to try them now. I'll admit that I didn't expect this setting to be the breakthrough I was looking for, but it sure was!

OPTOFC is an environment variable, set on the client, that optimizes the opening, fetching and closing of cursors by deferring the open of a cursor until the first fetch is executed and automatically closing the cursor once the last row is returned to the client. In the benchmark I'm doing a lot of single row lookups; without OPTOFC each one takes 3 network messages: 1 to open the cursor, another to fetch the row and a third to close the cursor. With OPTOFC set this is reduced to 1 network message that opens the cursor, fetches the 1 row and closes the cursor, and boy howdy did it do the trick, increasing my TPM to 9607! That's an improvement of 31.5%, not bad for a simple environment variable change.
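If you want to try it yourself, it really is just an environment variable set in the client's environment before your application connects. A sketch, assuming a Bourne-style shell:

export OPTOFC=1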

10,000 TPM is now within reach.

Without the 3 cheat codes from the previous post I was able to reach a TPM of 7855, a 27% improvement.

Wednesday, May 12, 2010

OLTP Benchmark - Cheats


I am tired of not making any real progress on improving my TPM, so it is now time to cheat. I call the next 3 things cheats because they are not something I would do in a production environment, or because they artificially improve the benchmark TPMs. People who do benchmarks for a living would probably consider this just part of playing the benchmarking game, so let's play.

Cheat #1 - Buffered Logging

I can't turn off logging altogether, but I can improve performance by using buffered logging instead of unbuffered logging. The downside to using buffered logging in a production environment is the window it opens: a user commits a transaction, the commit is written to the logical log buffer and a response is sent back saying the transaction was committed, but the engine crashes before the logical log buffer is flushed to disk. When the engine is restarted and recovery happens, that transaction will be rolled back even though the user was told it was committed.

The upside to buffered logging is that it is faster. Changing LOGBUFF to 256 and turning on buffered logging for the ptpcc database improved my TPM to 6785, an improvement of almost 10%.
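For reference, the change amounts to something like this (the ondblog syntax is from memory, so double check it before trusting me):

LOGBUFF 256    # ONCONFIG: 256KB logical log buffer

ondblog buf ptpcc    # switch the ptpcc database to buffered logging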

Cheat #2 - Those Pesky Checkpoints

A funny thing happens around 11.5 minutes into my 15 minute benchmark: my performance falls off a cliff for about 1 to 2 minutes while I flush all my data to disk during a checkpoint. It is nice to see processing not stop altogether as it would in the days of blocking checkpoints (I want to marry non-blocking checkpoints because I love them so much), but it would be nicer to not see checkpoints at all.

If we're just playing the benchmarking game, all we need to do is make it through the 15 minute benchmark without doing any writes (foreground, LRU or chunk). So far I don't have any foreground or LRU writes, so I should just have to avoid any checkpoints during my run.

I chose to change the RTO_SERVER_RESTART ONCONFIG parameter from 120 to 1800. This parameter tells the engine to fire a checkpoint when the time to recover from a failure would reach 30 minutes. Not too practical for a real world production OLTP engine, but great for a 15 minute benchmark.

I suppose I could also have disabled auto checkpoints and simply set the old school CKPTINTVL ONCONFIG parameter to 1800. I will do this if RTO_SERVER_RESTART starts firing checkpoints during our 15 minute run.
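If I remember right, I don't even need a bounce for this one; onmode can change the value on the fly (double check that your version supports -wf for this parameter):

onmode -wf RTO_SERVER_RESTART=1800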

TPM with no checkpoints = 7032, an improvement of 3.6%, and I broke the 7000 TPM barrier!

Cheat #3 - Prime the buffer cache

After the last benchmark run I checked the last few lines of onstat -P to see how many of my 1.5 million buffers were actually filled with data. It turns out that 45.5% of my buffers had gone unused. This is just a consequence of my benchmark time being short; there isn't enough time to populate all the buffers with read-in pages. I'm going to take advantage of all this space by prepopulating my buffers with the data I think the benchmark will need, specifically the customer and stock data pages.

If I run the following 3 SQLs before I run my benchmark I should populate the buffers with the data I want and hopefully get a better read cache hit rate.

select {+full (oorder)} count(distinct o_c_id) from oorder;
select {+full (customer)} count(distinct c_zip) from customer;
select {+full (stock)} count(distinct s_quantity) from stock;

Doing this increased my read cache hit % to 99.81 from around 98.5, for a new TPM of 7307.

This also reopened an area I can tune. Now that the oorder, customer and stock tables are fully cached before the benchmark and we are no longer doing any writes during the benchmark, my I/O to the physical disks has changed a lot.

dbspace4 had 0 I/Os during the last benchmark and dbspace3 had just a handful of I/Os. I should redistribute my data and indexes to take advantage of this.


Tuesday, May 11, 2010

OLTP Benchmark - FILLFACTOR


In my ongoing quest to minimize I/O I will next try to reduce the number of index pages I have. 4 tables in the benchmark get rows inserted: history, order_line, oorder and new_order, and of those the history table doesn't have any indexes on it.

This means the remaining tables have no inserts, so I can set their FILLFACTOR to 100, put as much index data on a page as will fit and hopefully reduce my I/O.

For the 3 tables that do have indexes and are inserted into, I will start by changing the FILLFACTOR from the default of 80 to 90 and see what happens. Over the course of the benchmark I increase the number of rows in these tables by 5%, so a more optimal FILLFACTOR may be 95%. I'll try that second and then try a FILLFACTOR of 100% for all tables just for fun.
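For anyone playing along at home, FILLFACTOR is just a clause on CREATE INDEX. A sketch with hypothetical index and column names:

CREATE INDEX customer_ix1 ON customer (c_last) FILLFACTOR 100;  -- no inserts, pack the pages full
CREATE UNIQUE INDEX new_order_pk ON new_order (no_w_id, no_d_id, no_o_id) FILLFACTOR 90;  -- insert target, leave some room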

FILLFACTOR 90% for my 3 insert tables - TPM 6190, meh, not much improvement over 6187.

FILLFACTOR 95% - TPM 6100

FILLFACTOR 100% - TPM 6165

I think it is interesting that FILLFACTOR 100% outperformed FILLFACTOR 95%; other than that, nothing of any real excitement going on here.

Back to the drawing board.

Monday, May 10, 2010

OLTP Benchmark - MAX_FILL_DATA_PAGES


Add another parameter to the list of things that do not improve the benchmark TPM.

MAX_FILL_DATA_PAGES is a newer ONCONFIG parameter that, when set to 1, will try to cram more rows with variable length data into a data page than the engine normally would. I thought this would be a good way to decrease I/O by reducing the number of data pages, but I was wrong.

With MAX_FILL_DATA_PAGES set to 0 the engine will only store a row with variable length data on a page if the page has enough free space to allow the new row to grow to the maximum size. When MAX_FILL_DATA_PAGES is set to 1 the engine will store the row on a page if there is at least 10% of the page still free after inserting the new row.
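The ONCONFIG change itself is a one liner (whether it can be changed on the fly or needs a bounce, check the docs for your version):

MAX_FILL_DATA_PAGES 1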

One disadvantage to using this parameter is that a row with variable length data could need to grow, and if there is not enough free room that growth could cause the row to be split across multiple pages. As we all learned in elementary school, splitting rows across pages is a very bad thing.

With MAX_FILL_DATA_PAGES set to 1 and a drop/recreate/reload of the ptpcc database (something I do before each benchmark, BTW) the TPM dropped to 6095.

Well, I'm not making much progress. I'm wondering if I can get over 7000 TPM at this point. I'm not too concerned yet. I've only done some easy ONCONFIG changes and some spreading of I/O over my extra disks. There are still a lot of things to try.



OLTP Benchmark - CPUVPs


I currently have 4 CPUVPs configured for my 2 dual-core CPUs. Can I get more out of my physical CPUs if I create more CPUVPs?

The last TPM running 4 CPUVPs was 6187.

If I increase CPUVPs to 6, what happens? I get a decreased TPM of 6042 is what happens.

Maybe I was too aggressive. What if I just increase CPUVPs to 5, just 1 more CPUVP than real processors? Well, I get a TPM of 6027, and with that I'm done trying to increase CPUVPs to increase performance. At least on my hardware for this benchmark anyway.

What if I reduce CPUVPs to 3? I get a TPM of 6173, better than when I increased CPUVPs but worse (but not by much) than running 4 CPUVPs on a machine with 4 CPUs.

For now I'm going to stick with my original configuration as far as number of CPUVPs goes.

What about affinity? Would it help to pin my 4 CPUVPs down to a physical CPU and prevent my oninit processes from jumping around from CPU to CPU?
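The change is along these lines in the ONCONFIG, assuming the 4 cores are numbered 0 through 3 (a sketch; verify the aff syntax for your version):

VPCLASS cpu,num=4,aff=0-3,noage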

Unfortunately, that didn't work too well either, giving me a TPM of 6083. Pffffffffffffft, what a bummer. No improvements today.




Friday, May 7, 2010

OLTP Benchmark - Spreading out the I/O


The Fastest DBA Contest benchmark was run against a server with only 1 disk, so spreading the I/O over multiple dbspaces wouldn't have done much good there. My server has access to multiple RAID10 sets made up of 6 to 10 drives each, so I think I can get some benefit from spreading out the I/O. dbspace1 has 6 drives, dbspaces 2 and 3 have 8 drives each and dbspace4 has 10 drives.

After the last benchmark I ran the following SQL against the sysmaster database to identify the I/O for each table and index:




select
tabname::char(20) tab,
(pagreads + pagwrites) io,
pagreads,
pagwrites
from
sysptprof
where
dbsname = "ptpcc" and
tabname not matches ("sys*")
order by
(pagreads + pagwrites) desc;

...

tab                             io    pagreads   pagwrites

stock                       833785      417350      416435
customer                    226990      134711       92279
order_line                   75287       42206       33081
stock_pk                     45504       45504           0
oorder_uix1                  36457       23932       12525
oorder                       29367       26773        2594
order_line_pk                27214       11809       15405
oorder_pk                    19565       16879        2686
customer_pk                  17865       17865           0
new_order_pk                  8496        6196        2300
district                      4793        4745          48
customer_ix1                  3894        3894           0
item                          3773        3773           0
new_order                     1979         773        1206
history                       1522          12        1510
item_pk                        815         815           0
warehouse                       11           5           6
district_pk                      7           7           0
warehouse_pk                     1           1           0

Using this information I can place my tables and indexes in the appropriate dbspaces to spread the reads and writes across my disks proportionally (giving the RAID10 set with 10 disks more I/Os to do and the RAID10 set with only 6 disks fewer) and hopefully improve my benchmark TPM.

Here is how I chose to distribute my data:

stock dbspace4
customer dbspace3
order_line dbspace2
stock_pk dbspace1
oorder_uix1 dbspace2
oorder dbspace2
order_line_pk dbspace1
oorder_pk dbspace1
customer_pk dbspace2
new_order_pk dbspace1
district dbspace2
customer_ix1 dbspace1
item dbspace2
new_order dbspace1
history dbspace2
item_pk dbspace1
warehouse dbspace2
district_pk dbspace1
warehouse_pk dbspace2
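If you don't want to drop and recreate objects (I do a drop/recreate/reload of the ptpcc database before each run anyway), ALTER FRAGMENT can move an existing table or index. A sketch using the stock table and its primary key index:

ALTER FRAGMENT ON TABLE stock INIT IN dbspace4;
ALTER FRAGMENT ON INDEX stock_pk INIT IN dbspace1;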

Making these changes and rerunning the benchmark gives us a TPM of 6187, an improvement of almost 5%. Not exactly the barn-burning results I was hoping for, most likely because increasing our buffers already reduced our I/O, but it is an improvement.

OLTP Benchmark - Buffers


What is the first thing everyone does when tuning an engine for the first time? Check the buffers and increase them if they haven't been changed from the default setting. We want to minimize I/O because I/O sucks and takes a long time, and putting as much of the data that is likely to be used again soon into cache is a good way to do that.

Our baseline was 3651 TPM with buffers set to 50000. I'm sure we can improve on that number if we reduce the number of disk reads and writes. Taking advantage of the fact that this is an OLTP benchmark, I can dedicate nearly all of the available memory on the box to buffers (3GB, or 1572864 2K buffers) and give the virtual segment just enough memory to handle all of the sessions (250MB).
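In ONCONFIG terms the change is just the buffers value in the BUFFERPOOL line (1572864 2K buffers = 3GB):

BUFFERPOOL size=2K,buffers=1572864,lrus=8,lru_min_dirty=50,lru_max_dirty=60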

Running the benchmark again I get the following results:

2010-05-07 16:35:07: benchmark - 100 terminals running for 900 seconds
2010-05-07 16:35:07:                       sql        total          min          max          avg          num
2010-05-07 16:35:07:          dbSelDistOrderId   366.158430     0.000592     3.143735     0.103085         3552
2010-05-07 16:35:07:             dbSelDistPymt  1435.361208     0.000605     0.837176     0.037623        38151
2010-05-07 16:35:07:             dbSelCustById  2255.096493     0.000784     1.295496     0.059110        38151
2010-05-07 16:35:07:          dbSelNewestOrder   595.117024     0.000480     5.193018     0.172648         3447
2010-05-07 16:35:07:             dbSelCustData   150.336596     0.000696     0.435675     0.038957         3859
2010-05-07 16:35:07:           dbSelOrderLines    69.367385     0.000373     0.499633     0.020124         3447
2010-05-07 16:35:07:          dbSelCustByOrder  1348.661282     0.000582     0.652879     0.038489        35040
2010-05-07 16:35:07:               dbUpdOOrder   677.845172     0.000256     0.566081     0.019345        35040
2010-05-07 16:35:07:                 dbUpdDist   763.428072     0.000250     0.676018     0.019180        39803
2010-05-07 16:35:07:                dbSelOrder   135.996560     0.000631     0.804294     0.039454         3447
2010-05-07 16:35:07:             dbUpdDistPymt  4233.915816     0.000257     8.117577     0.110978        38151
2010-05-07 16:35:07:                dbUpdStock  7224.883937     0.000263     0.797545     0.019243       375455
2010-05-07 16:35:07:            dbSelCustCount   150.360743     0.000795     1.164771     0.069676         2158
2010-05-07 16:35:07:             dbUpdWhsePymt  4263.935629     0.000254     8.083272     0.111765        38151
2010-05-07 16:35:07:   dbUpdCustBalanceAndData    71.946746     0.000414     0.333886     0.018644         3859
2010-05-07 16:35:07:             dbInsHistPymt   728.578052     0.000265     0.503783     0.019097        38151
2010-05-07 16:35:07:          dbUpdCustBalance   666.788779     0.000283     0.656385     0.019444        34292
2010-05-07 16:35:07:        dbSelCustByNameOrd    94.072595     0.000762     0.373746     0.046318         2031
2010-05-07 16:35:07:             dbSelNewOrder  2017.488472     0.003504     5.246324     0.057577        35040
2010-05-07 16:35:07:             dbSelCustWhse  5384.940138     0.000726     6.392744     0.135290        39803
2010-05-07 16:35:07:          dbSelCustByIdOrd    82.164412     0.000758     0.852723     0.058026         1416
2010-05-07 16:35:07:            dbUpdOrderLine  1070.116667     0.000348     1.189829     0.030540        35040
2010-05-07 16:35:07:                dbSelStock 19846.689419     0.000703     1.215853     0.052860       375455
2010-05-07 16:35:07:         dbSelOrderLineAmt  1375.268063     0.000681     0.623969     0.039249        35040
2010-05-07 16:35:07:             dbSelWhsePymt  1448.440517     0.000599     0.677760     0.037966        38151
2010-05-07 16:35:07:            dbInsOrderLine  7268.765215     0.000281     0.920966     0.019360       375455
2010-05-07 16:35:07:               dbInsOOrder   764.685358     0.000281     0.520975     0.019212        39803
2010-05-07 16:35:07:             dbDelNewOrder   770.737759     0.000263     7.087722     0.021996        35040
2010-05-07 16:35:07:                 dbSelDist  5383.367946     0.000620     6.788749     0.135250        39803
2010-05-07 16:35:07:             dbInsNewOrder   767.917783     0.000262     0.398216     0.019293        39803
2010-05-07 16:35:07:         dbUpdCustDelivery  1270.802428     0.000293     1.166629     0.036267        35040
2010-05-07 16:35:07:           dbSelCountStock   118.598752     0.006119     1.274016     0.033389         3552
2010-05-07 16:35:07:                 dbSelItem 14574.856113     0.000633     0.837902     0.038819       375455
2010-05-07 16:35:07:                    totals 87376.689563            -            -            -      2240081
2010-05-07 16:35:07:
2010-05-07 16:35:07: time: 901.060969, total transactions: 88584, TPM: 5898.646354, SPM: 149162.891976

Maxing out buffers increases our TPM by almost 62% to 5899, not bad for a simple config change and engine bounce.

Next I think I'll take advantage of my 3 unused dbspaces to spread what I/O I have left around to my idle disks.

OLTP Benchmark - Baseline


I'm ready to start benchmarking, but before I start we need to talk about the hardware I'm running the benchmark against and produce some baseline results.

Server/Storage Properties

2 Dual Core CPUs
4GB RAM
OS: 64 bit RHEL5
Informix: 11.50.FC6

rootdbs: 8GB RAID10 spread over 6 physical drives, currently holds a 4GB physical log
llogdbs: 8GB RAID10 spread over 8 physical drives, currently holds 32 logical logs of 256MB each
dbspace1: 49GB RAID10 spread over 6 physical drives
dbspace2: 108GB RAID10 spread over 8 physical drives
dbspace3: 69GB RAID10 spread over 8 physical drives
tempdbs: 4GB RAID10 spread over 10 physical drives

The client (the ptpcc.py process that runs the benchmark) will run on a separate but identical server.

For the baseline benchmark, here is a sampling of the ONCONFIG values I used:

PHYSBUFF 128
LOGBUFF 64
NETTYPE soctcp,1,200,NET
VPCLASS cpu,num=4,noage
VP_MEMORY_CACHE_KB 8192
VPCLASS aio,num=10,noage
CLEANERS 8
AUTO_CKPTS 1
RTO_SERVER_RESTART 120
STMT_CACHE 0
USELASTCOMMITTED NONE
BTSCANNER num=1,threshold=5000,rangesize=-1,alice=6,compression=default
OPTCOMPIND 0
BUFFERPOOL size=2K,buffers=50000,lrus=8,lru_min_dirty=50,lru_max_dirty=60
AUTO_LRU_TUNING 1

All tables used in the baseline benchmark were created with row level locking in dbspace1 and the original set of indexes.

Before each benchmark I will bounce the engine and allow the benchmark to run for 5 minutes without collecting statistics. After this 5 minute ramp up time the benchmark will run for 15 minutes and produce some statistics that look like this:


2010-05-07 15:11:26: benchmark - 100 terminals running for 900 seconds
2010-05-07 15:31:28:                       sql        total          min          max          avg          num
2010-05-07 15:31:28:          dbSelDistOrderId   264.492183     0.000589     4.998652     0.120443         2196
2010-05-07 15:31:28:             dbSelDistPymt   666.518798     0.000611     0.874549     0.028071        23744
2010-05-07 15:31:28:             dbSelCustById  3865.562448     0.000819     2.452900     0.162802        23744
2010-05-07 15:31:28:          dbSelNewestOrder  3540.358412     0.000530    13.473251     1.700460         2082
2010-05-07 15:31:28:             dbSelCustData    69.895807     0.000703     0.853232     0.030310         2306
2010-05-07 15:31:28:           dbSelOrderLines    33.498745     0.000373     0.484876     0.016090         2082
2010-05-07 15:31:28:                dbSelOrder    69.810965     0.000608     0.529128     0.033531         2082
2010-05-07 15:31:28:          dbSelCustByOrder  1081.806514     0.000592     1.636825     0.049017        22070
2010-05-07 15:31:28:               dbUpdOOrder  1071.331497     0.000262     2.102251     0.048542        22070
2010-05-07 15:31:28:                 dbUpdDist   369.739051     0.000248     0.864587     0.015036        24591
2010-05-07 15:31:28:           dbSelCountStock  5653.657284     0.006115    14.663261     2.574525         2196
2010-05-07 15:31:28:             dbUpdDistPymt  3211.747694     0.000262     6.936206     0.135266        23744
2010-05-07 15:31:28:                dbUpdStock  3607.889070     0.000283     1.362896     0.015618       231006
2010-05-07 15:31:28:            dbSelCustCount   207.250488     0.017086     2.808686     0.166466         1245
2010-05-07 15:31:28:             dbUpdWhsePymt  2955.763594     0.000259     5.403844     0.124485        23744
2010-05-07 15:31:28:   dbUpdCustBalanceAndData    32.769989     0.000403     0.146913     0.014211         2306
2010-05-07 15:31:28:             dbInsHistPymt   368.780551     0.000269     1.330070     0.015532        23744
2010-05-07 15:31:28:          dbUpdCustBalance   345.092451     0.000297     1.336634     0.016097        21438
2010-05-07 15:31:28:        dbSelCustByNameOrd    45.854596     0.000788     0.640742     0.038566         1189
2010-05-07 15:31:28:             dbSelNewOrder  1986.016858     0.003527     6.152908     0.089987        22070
2010-05-07 15:31:28:             dbSelCustWhse  5416.771947     0.000852     6.570103     0.220275        24591
2010-05-07 15:31:28:          dbSelCustByIdOrd   108.330620     0.003462     1.446552     0.121311          893
2010-05-07 15:31:28:            dbUpdOrderLine  1515.644348     0.000358     2.189919     0.068674        22070
2010-05-07 15:31:28:                dbSelStock 26748.686979     0.000714     5.385313     0.115792       231006
2010-05-07 15:31:28:         dbSelOrderLineAmt   688.818336     0.000692     1.372060     0.031211        22070
2010-05-07 15:31:28:             dbSelWhsePymt   708.854190     0.000605     0.904277     0.029854        23744
2010-05-07 15:31:28:            dbInsOrderLine  4126.250988     0.000295     2.563897     0.017862       231006
2010-05-07 15:31:28:               dbInsOOrder   604.548543     0.000291     2.471327     0.024584        24591
2010-05-07 15:31:28:             dbDelNewOrder   479.087660     0.000269     8.575327     0.021708        22070
2010-05-07 15:31:28:                 dbSelDist  3854.715855     0.000630     7.683246     0.156753        24591
2010-05-07 15:31:28:             dbInsNewOrder   412.719221     0.000263     2.343799     0.016783        24591
2010-05-07 15:31:28:         dbUpdCustDelivery  2324.551898     0.000327     2.426604     0.105326        22070
2010-05-07 15:31:28:                 dbSelItem 12098.294207     0.000647     1.985179     0.052372       231006
2010-05-07 15:31:28:                    totals 88535.111786            -            -            -      1383948
2010-05-07 15:31:28:
2010-05-07 15:31:28: time: 901.863525, total transactions: 54876, TPM: 3650.840630, SPM: 92072.556084


The last line shows the total results for the entire benchmark. time is the actual run time of the benchmark in seconds, TPM is the number of transactions per minute and SPM is the number of SQLs executed per minute.

Above the total results we see metrics for each SQL statement that makes up each transaction. sql is a name for the SQL statement, total is the total number of seconds spent executing the SQL, min, max and avg are the minimum, maximum and average number of seconds for a single execution, and num is the number of times this SQL was executed.

Now the fun can begin and will most likely involve increasing the buffers to something reasonable to take advantage of the 4GB of memory I have on this machine.



Fastest Informix DBA Contest at IIUG 2010


One of my favorite parts of the IIUG conference is the Advanced Data Tools Fastest DBA contest. This year Lester gave us a real challenge: tune an engine for a TPC-C-like OLTP benchmark. That alone was a fun challenge, but what really put you to the test was the amount of time you had: 1 hour from start to finish, which included 10 minutes to initialize the engine and 10 minutes to run the benchmark, leaving you only 40 minutes to get familiar with the benchmark and do any tuning or index building you wanted to do.

For me the 40 minutes felt like 4. I didn't think of anything special to do to tune the engine and finished in 4th place, but I did manage to claim the fastest Middle-Aged Informix DBA title. The prize for winning this category was a very nice shirt and many jokes at my expense from Tom Girsch and others.

Now that I am back home and have more than 40 minutes to play around with the benchmark I want to see how fast I can get it to run.

The first thing I did was rewrite the Java version in Python. Why would I do this? I don't know, I just like Python, and rewriting the original benchmark was a good way for me to understand what the benchmark was doing. During my rewrite I added some logic to capture the execution times of each SQL, the number of times it was executed, the minimum execution time and the maximum execution time so I can see what needs to be improved or how much I've improved things.
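The bookkeeping is simple: per SQL name I keep a running total, count, min and max. This is not the real ptpcc.py code, just a Python sketch of the idea:

import time
from collections import defaultdict

# name -> [total, count, min, max]; illustrative structure
stats = defaultdict(lambda: [0.0, 0, float("inf"), 0.0])

def timed(name, fn):
    t0 = time.time()
    result = fn()          # execute the SQL wrapped in a callable
    elapsed = time.time() - t0
    s = stats[name]
    s[0] += elapsed        # total seconds
    s[1] += 1              # execution count
    s[2] = min(s[2], elapsed)
    s[3] = max(s[3], elapsed)
    return result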

Now that I have a good idea of what the benchmark does, I can give a brief overview of what is going on, something I couldn't do a week ago during my 40 minutes at the IIUG conference.

The benchmark simulates an order processing system. The system can take orders, schedule orders for delivery, process payments, check stock levels and check order statuses. There are 50 warehouses that each contain different quantities of the 100000 items in stock. Each warehouse has 10 districts and each district has 3000 customers (50 * 10 * 3000 = 1.5 million customers) who can place orders, take deliveries, make payments, etc. from 1 of the 100 terminals running in parallel.

When the benchmark starts, 100 terminal threads are created and they run for 10 minutes, continuously picking a transaction type to run:

43% of the time a payment transaction is picked
4% of the time an order status check transaction is picked
4% of the time a delivery transaction is picked
4% of the time a stock level check transaction is picked
45% of the time a new order transaction is picked

Each transaction type consists of multiple SQL statements that select, insert, update and delete, simulating an OLTP environment.
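If you're curious what a terminal does all day, the loop boils down to a weighted random pick followed by that transaction's SQLs. Again, this is not the actual ptpcc.py code, just a Python sketch of the idea with illustrative names:

import random

# Transaction mix from the list above.
MIX = [("new_order", 45), ("payment", 43), ("order_status", 4),
       ("delivery", 4), ("stock_level", 4)]

def pick_transaction():
    r = random.random() * 100  # uniform in [0, 100)
    for name, weight in MIX:
        if r < weight:
            return name
        r -= weight
    return MIX[-1][0]  # guard against floating point edge cases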

Next time I'll get into the configuration of my test system, share the results of my baseline and then start changing stuff to see if it helps or hurts.