Friday, November 21, 2014

I'm Back! Did You Think I was Gone Forever?


I Sure Did Miss You

Truth be told, I just haven't had much to write about until now. I haven't been doing much of anything interesting with Informix. Sure, I've been doing the day-to-day Informix DBA stuff, but nothing blog-worthy. Mostly I've been working on some software development, data mining and business optimization projects for my company. But that all changed this week.

The Catalyst

I want to upgrade to v12.1, but I'm a big fat chicken and I've got a pending in-place alter on my most importantest of important tables. The one that holds account information and balances for over 1 billion accounts. The manual tells me I should probably resolve all of my IPAs before performing an in-place upgrade. Could I take a chance and upgrade the engine without following the manual's advice? Sure. Have I done it before without problems? Yeah, but I'm not proud of it and those IPAs weren't on our most precious table, so this time I'm going to follow the rules.
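Before anything else, you need to know whether a table actually has outstanding in-place alters. As I understand it, oncheck -pT will print a page-version breakdown for a table (the database and table names below are made up):

oncheck -pT mydb:accounts

Look for the Home Data Page Version Summary section of the output; any pages counted against a version other than the current one are still carrying a pending alter.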

Part 1 - What to Expect and What I Knew Wouldn't Work

This will be an 89-part series (give or take a few) on how I accomplished this task without any downtime, without killing performance, without locking up the table and, most importantly, without updating every gosh darned one of those 1 billion rows like the manuals suggest you do. (Ok, full disclosure, I'm still working on the last little bit, but I think I've got this nut cracked.)

I have submitted a proposal for IIUG2015 on this topic, so if you don't feel like reading you can just wait for the movie. You will have to buy a ticket and they will be on sale soon. If you too would like to present something at IIUG2015 and receive a complimentary pass, there is still time to submit a proposal. This is the 20th anniversary of the first Informix conference in Chicago and it will be fun and educational as always.

What I knew wouldn't work - The manuals seem to assume we are all operating on tables with only a handful of rows; their simple solution for resolving IPAs is to just dummy update every row in the table.

update your_tiny_table set your_field = your_field where 1 = 1;

Well, we all know that will cause havoc on a system if you're updating a table with any substantial size. Locks, long transactions, evil I/O, oh my. There has to be a better way and there is. Next time I'll show you the first thing I tried. (Spoiler alert, it didn't work either.)
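For the record, the refinement everyone reaches for first is to chop the dummy update into small, committed batches so the locks and logical-log usage stay bounded. A sketch only - the table and column names here are made up and assume a numeric primary key:

begin work;
update account_table set balance = balance where acct_id between 1 and 100000;
commit work;

Repeat for each range of acct_id until the whole table has been touched. Whether that actually behaves itself on a billion-row table is another story.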

Friday, March 21, 2014

Eric Vercelletto Asks and Answers the Question of "Where is Informix?"


You may have already read this article by my friend and "Benchmark Guy", Eric Vercelletto, but it is so well done and contains so many great thoughts and observations that I think it needs to be posted here for you to read again (or for the first time).

Eric's Blog

Where is Informix?

Tuesday, December 17, 2013

IIUG2014 Conference Update


I posted this on the IDS SIG yesterday. Here it is again in case you missed it and are interested.

The IIUG2014 CPC met at the JW Marriott in Miami, FL this weekend to prepare for the 2014 conference. I thought I would give an update on where we are at and pass along my experience with the city and hotel since this will be a new location for a lot of us in April.

The hotel is beyond nice. The place is 98.34% Marble and Mahogany and smells terrific. Yes, it is weird to compliment a hotel on their smell, but I thought this to myself each time I entered the lobby. It is the nicest hotel I have ever stayed in and unless I hit the Florida Lotto, it will be the nicest hotel I ever stay in. The hotel is not undergoing a remodel nor are they planning on any major construction during the conference (for those that attended the Overland Park conferences, this is important).


Friday, May 3, 2013

IBM Informix Request for Enhancement Tool


I have no idea how IBM decides what new features to add to Informix, but I do know that we can now be part of the discussion by using the new Request for Enhancement tool (RFE Tool).

I took this for a spin today and I must admit this is a pretty interesting thing that you should check out. Not only can you submit your own requests for new features, you can view what everyone else has suggested and vote for what you want to have added.

There are a lot of good ideas in there and I really hope to see some of these feature requests in later releases.

Take the RFE tool for a spin today. Submit a request - it is fun.

To see the Informix specific RFEs, search under Brand: Information Management, Product Family: Informix.

Friday, April 5, 2013

When You Need to Store a String of Numbers in a CHAR Column


Sometimes you have to store a string of numbers in a CHAR column, probably because the string of digits represents an account number or something similar and storing it as an INTEGER or BIGINT doesn't really make sense. The account number could have leading zeros that would be lost if stored as an integer. Parts of the account number could encode special information - for example, positions 2, 3 and 4 might identify which department an account belongs to, making it useful to select digit_string[2,4]. There are plenty of reasons to store numerical data in a string.

What is the best way to ensure that all of the characters in the string are actually numbers?

This is what I do, is there a better way to do it?

alter table my_table add constraint check (replace(rtrim(digit_string), " ", "x")::bigint >= 0) constraint my_table_ck1;

insert into my_table (digit_string) values ("123456");

1 row(s) inserted.

insert into my_table (digit_string) values ("abc123");

1213: A character to numeric conversion process failed

insert into my_table (digit_string) values ("   123456");

1213: A character to numeric conversion process failed

update my_table set digit_string = "xyzpdq" where digit_string = "123456";

1213: A character to numeric conversion process failed

The constraint will try to cast the digit string to a BIGINT; if this works, then all of the characters in the string are numbers. If it doesn't work, we get an SQL error and the bad data never makes it into our database.

The replace(rtrim()) stuff handles white space in the digit string that would not make a plain cast to BIGINT fail on its own: trailing spaces are trimmed off, and any leading or embedded spaces are turned into "x" so the cast is forced to fail.

There are plenty of other ways to accomplish the same thing, but I like this way.

You could rely on the application to check the digit string before it inserts/updates the database, but I'm pretty sure this isn't the best way.

You could write a stored procedure that is run by insert/update triggers, but I don't think that is more efficient than the check constraint/cast to BIGINT method. This would have the benefit of being able to raise a user defined SQL error instead of the odd -1213 error, though.
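If you did want to go the trigger route, a minimal sketch might look something like this - the procedure and trigger names are invented, I'm assuming digit_string is a CHAR(18), and -746 is the generic user-defined error number:

create procedure check_digit_string(ds char(18))
        define dummy bigint;

        -- trap the cast failure and raise a friendlier user-defined error
        on exception in (-1213)
                raise exception -746, 0, "digit_string must contain only digits";
        end exception;

        let dummy = replace(rtrim(ds), " ", "x")::bigint;
end procedure;

create trigger my_table_ins_trig insert on my_table
        referencing new as new_row
        for each row (execute procedure check_digit_string(new_row.digit_string));

You would want a matching update trigger as well, which is exactly the kind of extra moving part that makes me prefer the check constraint.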

Tuesday, March 26, 2013

For My 100th Post: IBM Informix 12.1 - It's Simply Powerful


11:58 AM: Waiting for the IBM Informix It's Simply Powerful Webcast to start and on my screen I see IBM Informix 12.1, so I guess it is officially announced.

12:00 PM: Moderator is giving the rules and regulations of the Webcast. Questions will be answered after the Webcast.

12:01 PM: Chad Gates from Pronto Software, John Miller Informix Lead Architect and Sally Hartnell from IBM Marketing filling in for Jerry. Where's Jerry? He is unavoidably detained.

12:03 PM: 12+ Years of Informix Innovation with IBM

12:03 PM: Over 190 new partners in 2012

12:04 PM: Overview of the new stuff in 12.1. Cloud, Ease of Use, Warehouse, Sensor Data Management and something else I missed

12:05 PM: TimeSeries for Sensor Data. 5x Performance using 1/5 the resources as the competition

12:07 PM: Compression: Reduces storage and improves performance.

12:08 PM: JM3 talking about compression now. NEW! Index compression. NEW! Blob compression.

12:10 PM: NEW! Automatic table compression

12:11 PM: NEW! Primary Storage Manager replaces ISM for more backup solution options

12:12 PM: Chad from Pronto Software now talking about their EVP experience.

12:13 PM: Pronto has an ERP product that embeds Informix and Cognos. Informix initially picked for the OLTP capabilities. Informix 12.1 improves OLTP performance and OLAP performance benefiting from Informix Team working closely with the Cognos Team.

12:26 PM: Pronto experiences massive performance gain when concurrently running OLTP and OLAP on 12.1 over 11.x

12:28 PM: "Informix Warehouse Accelerator gaining worldwide traction to accelerate warehouse queries up to 100+ times"

12:29 PM: Back to JM3 on IWA improvements. NEW! Trickle Feed (cool) can now have real time analytics vs. refreshing the entire warehouse. NEW! Automated Partition Refresh. NEW! IWA and OAT integration.

12:31 PM: NEW! IWA and TimeSeries integration. IWA analytics over TimeSeries data.

12:32 PM: Flexible Grid/ER - NEW! ER no longer requires a Primary Key.

12:34 PM: Execute SQL over the grid - Query Sharding, that's sharding with a D.

12:35 PM: Talking about Hypervisor edition for Virtual/Cloud based deployments.

12:36 PM: Informix Genero accelerates new generation of mobile and cloud-based apps.

12:36 PM: Sally: Informix integrated with the IBM Mobile Database. Sync mobile db data with Informix backend.

12:37 PM: JM3: NEW! Mobile OAT for your phone or tablet

12:38 PM: Improved OAT out of the box experience, OAT GUI deployed as part of Informix install

12:39 PM: Sally: Smart Choice of ISVs and OEMs. Small footprint, silent install, up and running in minutes, 0 administration, autonomics. NEW! Dynamic ONCONFIG, Self Healing, Self Optimizing

12:40 PM: About to wrap up? Already? Oh, right. Q/A at the end. I want MOAR new features :)

12:41 PM: Bundling of Cognos licenses with new Advanced (Workgroup/Enterprise) Editions

12:41 PM: IIUG 2013 April 21-25, 2013 San Diego, CA

12:43 PM: Q/A starts.

12:43 PM: "Is compression available in Workgroup?" Sally says Compression included in Advanced Enterprise, available for purchase in Enterprise.

12:44 PM: "64 bit OAT?" JM3 says currently only 32 bit, but you can run 32 bit version on Windows 64 bit. Looking to have a 64 bit version for Windows in the future.

12:45 PM: "Is OAT faster in 12.1?" JM3 says ability to run update stats on sysmaster will allow OAT to run faster

12:46 PM: "Is Pronto using compression?" JM3 says no, perf gains are without compression

12:47 PM: "New tools to migrate FROM Oracle?" JM3 says yes, a lot of technology added to assist in migrations.

12:48 PM: "Will Mobile OAT work with my 11.x server?" JM3 says yes

12:48 PM: "Where can I find more info about the new editions?" Sally says go to ibm.com/informix and view the new brochure. More detail: google Carlton Doe Informix Editions or google ibm software announcement 213-156

12:50 PM: "Any plans to do a benchmark?" Sally says they prefer industry-specific, real-world benchmarks with their customers. Soon to publish a Meter Data Management benchmark.

12:52 PM: "Is ontape still supported?" JM3 says ontape and onbar still supported in 12.1. onbar just improved with PSM.

12:53 PM: "Can I get Cognos Express bundled instead of the full Cognos?" Sally says no.

12:54 PM: "What do I need to do to use the compression features?" Sally says compression included in Advanced Enterprise, add on for Enterprise.

12:54 PM: "What is the #1 thing to remember from this webcast?" JM3 says the great improvements in OLTP/OLAP performance.

12:55 PM: "Is OAT built using a new version of PHP?" JM3 says yes, OAT uses a later version of PHP.

12:56 PM: "Tell us more about IBM Mobile" Sally says it is included with all for-pay versions of Informix and is a secure persistent storage for data on a device that allows backend synchronization to an Informix DB.

12:57 PM: "Can 12.1 replicate TimeSeries data?" JM3 says, yes TimeSeries can now be replicated via HDR/SDS/RSS, etc.

12:58 PM: Sally notes the great attendance to this Webcast and gives a shout out to IIUG 2013 (thanks Sally)

12:59 PM: End of Webcast, perfectly timed. Replay of webcast will be made available online.


Thursday, March 7, 2013

Pseudo strtok in SPL


I needed a way to extract the individual words from a sentence stored in a single character field. After some failed google searches and no desire to install a Datablade or write a C UDR for something that doesn't need to have killer performance, I decided to write my own quick and dirty SPL function.

my_strtok(str, delim, token_num) will take a string, break it into individual tokens based on the delimiter and return the Nth token of the string.

Running this:

execute function my_strtok("How now brown cow", " ", 3)

Would return the third token:

brown

Here is the code for my_strtok(); comments are welcome on anything I might have missed in the logic. And when I call it quick and dirty, I just mean it could be done a different way and perform more efficiently, but for what I needed it works.

create function my_strtok (str lvarchar(2048), delim char(1), token_num smallint)
returning lvarchar(2048) as token;

        define str_len integer;
        define start_pos integer;
        define stop_pos integer;
        define cur_token_num integer;


        -- initialize start position and current token number to 1
        let start_pos = 1;
        let cur_token_num = 1;

        -- remove any leading delimiters from the input string
        let str = ltrim(str, delim);

        -- save the input string length so we don't have to recalculate it later
        let str_len = length(str);


        -- find the start of the token we want to return

        -- while there is still more string available to process
        while (start_pos <= str_len)
                -- if the current token number is the token we want, stop looking
                -- for a start position
                if (cur_token_num = token_num) then
                        exit;
                end if;

                -- increment the start position to the next character
                let start_pos = start_pos + 1;

                -- check to see if the current character in the string is a delimiter
                if (substr(str, start_pos, 1) = delim) then
                        -- we have found the next token
                        let cur_token_num = cur_token_num + 1;

                        -- advance the token start position past any repeating delimiters
                        while (start_pos <= str_len)
                                let start_pos = start_pos + 1;

                                if (substr(str, start_pos, 1) != delim) then
                                        -- there are no more repeating delimiters
                                        -- stop looking for repeating delimiters
                                        exit;
                                end if;
                        end while;
                end if;
        end while;


        -- we now either have the start position of the token we are looking for
        -- or we did not find the token we were looking for
        -- if we did not find the token, return NULL
        -- if we did find the token we were looking for, find the end of the token

        if (cur_token_num = token_num) then
                -- we found the token
                let stop_pos = start_pos;

                -- while there is still string to process try to find the end of our token
                -- if we run out of string before we find the next delimiter then
                -- our token ends where the string ends
                while (stop_pos <= str_len)
                        let stop_pos = stop_pos + 1;

                        if (substr(str, stop_pos, 1) = delim) then
                                -- we found the end
                                let stop_pos = stop_pos - 1;
                                exit;
                        end if;
                end while;

                -- return the found token
                return substr(str, start_pos, stop_pos - start_pos + 1);
        else
                -- the token was not found
                return NULL;
        end if;
end function;

execute function my_strtok("Simple test", " ", 1);

token  Simple

1 row(s) retrieved.


execute function my_strtok("Simple test", " ", 2);

token  test

1 row(s) retrieved.


execute function my_strtok("    Leading delimiters", " ", 1);

token  Leading

1 row(s) retrieved.


execute function my_strtok("Repeating       delimiters", " ", 2);

token  delimiters

1 row(s) retrieved.


execute function my_strtok("Token not found", " ", 4);

token

1 row(s) retrieved.


execute function my_strtok("Should have checked for invalid input", " ", -1);

token

1 row(s) retrieved.


execute function my_strtok("Invalid input works, but is unnecessarily slow", " ", -1000);

token

1 row(s) retrieved.


execute function my_strtok("Empty delimiter defaults to space, convenient", "", 6);

token  convenient

1 row(s) retrieved.

Tuesday, March 5, 2013

Where Are They Now?


Where did I find this picture? Can you identify anyone in it? And what is the guy on the left looking at on the ground?

Say "CHEESE"


Monday, March 4, 2013

From the SIGs - LTXHWM and LTXEHWM


John Adamski posted a question to the IIUG SIGs about how to identify a session that caused the long transaction that eventually put his system in a Blocked:LONGTX state. A few of us came back with some responses, but it wasn't until John Miller III from IBM and Informix Fun Facts replied with "finding the session that caused your long transaction isn't very useful, you need to prevent this situation from happening with the LTXHWM and LTXEHWM ONCONFIG parameters" that I realized these config parameters are typically underutilized.
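For reference, both parameters live in your ONCONFIG file. The values below are only illustrative - check the defaults for your version before copying anything:

LTXHWM  70      # long transaction high water mark: the % of logical log space a
                # transaction may consume before it is flagged long and rolled back
LTXEHWM 80      # exclusive high water mark: the % at which the transaction being
                # rolled back gets exclusive access to the logical log

If memory serves, recent versions also let you change these on the fly with onmode -wm (in memory) or onmode -wf (in the file).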