Friday, March 21, 2014
You may have already read this article by my friend and "Benchmark Guy", Eric Vercelletto, but it is so well done and contains so many great thoughts and observations that I think it needs to be posted here for you to read again (or for the first time).
Where is Informix?
Posted by Andrew Ford at 8:06 AM
Tuesday, December 17, 2013
I posted this on the IDS SIG yesterday. Here it is again in case you missed it and are interested.
The IIUG2014 CPC met at the JW Marriott in Miami, FL this weekend to prepare for the 2014 conference. I thought I would give an update on where we are and pass along my impressions of the city and hotel, since this will be a new location for a lot of us in April.
The hotel is beyond nice. The place is 98.34% Marble and Mahogany and smells terrific. Yes, it is weird to compliment a hotel on their smell, but I thought this to myself each time I entered the lobby. It is the nicest hotel I have ever stayed in and unless I hit the Florida Lotto, it will be the nicest hotel I ever stay in. The hotel is not undergoing a remodel nor are they planning on any major construction during the conference (for those that attended the Overland Park conferences, this is important).
Friday, May 3, 2013
I have no idea how IBM decides what new features to add to Informix, but I do know that we can now be part of the discussion by using the new Request for Enhancement tool (RFE Tool).
I took this for a spin today and I must admit this is a pretty interesting thing that you should check out. Not only can you submit your own requests for new features, you can view what everyone else has suggested and vote for what you want to have added.
There are a lot of good ideas in there and I really hope to see some of these feature requests in later releases.
Take the RFE for a spin today. Submit a request, it is fun.
To see the Informix specific RFEs, search under Brand: Information Management, Product Family: Informix.
Monday, April 8, 2013
Friday, April 5, 2013
Sometimes you have to store a string of numbers in a CHAR column, usually because the string of digits represents an account number or something similar, and storing it as an INTEGER or BIGINT doesn't really make sense. The account number could have leading zeros that would be lost if it were stored as an integer. Parts of the account number could carry special meaning; for example, positions 2, 3 and 4 might identify the department an account belongs to, making it useful to select digit_string[2,4]. There are plenty of reasons to store numeric data in a string.
What is the best way to ensure that all of the characters in the string are actually numbers?
This is what I do, is there a better way to do it?
alter table my_table add constraint
    check (replace(rtrim(digit_string), " ", "x")::bigint >= 0)
    constraint my_table_ck1;

insert into my_table (digit_string) values ("123456");

1 row(s) inserted.

insert into my_table (digit_string) values ("abc123");

 1213: A character to numeric conversion process failed

insert into my_table (digit_string) values (" 123456");

 1213: A character to numeric conversion process failed

update my_table set digit_string = "xyzpdq"
    where digit_string = "123456";

 1213: A character to numeric conversion process failed
The constraint tries to cast the digit string to a BIGINT; if the cast succeeds, then every character in the string is a digit. If it fails we get an SQL error and the bad data never makes it into our database.
The replace(rtrim()) combination catches leading white space, which on its own would not cause the cast to a BIGINT to fail: rtrim() first strips the trailing blanks that pad a CHAR column, then replace() turns any remaining space into an "x" so the cast fails.
There are plenty of other ways to accomplish the same thing, but I like this way.
You could rely on the application to check the digit string before it inserts/updates the database, but I'm pretty sure this isn't the best way.
You could write a stored procedure that is run by insert/update triggers, but I don't think that is more efficient than the check constraint/cast to BIGINT method. This would have the benefit of being able to raise a user defined SQL error instead of the odd -1213 error, though.
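For the curious, here is a rough sketch of what that trigger-based alternative could look like. The procedure name, trigger name and message text are made up for illustration; it reuses the same cast trick, traps the -1213 conversion error and re-raises it as a user-defined -746 error.

```sql
-- Hypothetical sketch of the trigger alternative; names and message
-- are illustrative, the cast trick is the same one the constraint uses.
create procedure check_digit_string (p_str char(20))

    define dummy bigint;

    -- trap the conversion error and re-raise it as a user-defined
    -- error with a friendlier message
    on exception in (-1213)
        raise exception -746, 0, "digit_string must contain only digits 0-9";
    end exception

    -- the cast fails on any non-digit character
    let dummy = replace(rtrim(p_str), " ", "x")::bigint;

end procedure;

create trigger my_table_ins_trg insert on my_table
    referencing new as n
    for each row (execute procedure check_digit_string(n.digit_string));
```

You would want a matching update trigger as well, and the procedure could be shared between the two.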
Posted by Andrew Ford at 3:44 PM
Tuesday, March 26, 2013
11:58 AM: Waiting for the IBM Informix It's Simply Powerful Webcast to start and on my screen I see IBM Informix 12.1, so I guess it is officially announced.
12:00 PM: Moderator is giving the rules and regulations of the Webcast. Questions will be answered after the Webcast.
12:01 PM: Chad Gates from Pronto Software, John Miller (Informix Lead Architect) and Sally Hartnell from IBM Marketing, filling in for Jerry. Where's Jerry? He is unavoidably detained.
12:03 PM: 12+ Years of Informix Innovation with IBM
12:03 PM: Over 190 new partners in 2012
12:04 PM: Overview of the new stuff in 12.1. Cloud, Ease of Use, Warehouse, Sensor Data Management and something else I missed
12:05 PM: TimeSeries for Sensor Data. 5x Performance using 1/5 the resources as the competition
12:07 PM: Compression: Reduces storage and improves performance.
12:08 PM: JM3 talking about compression now. NEW! Index compression. NEW! Blob compression.
12:10 PM: NEW! Automatic table compression
12:11 PM: NEW! Primary Storage Manager replaces ISM for more backup solution options
12:12 PM: Chad from Pronto Software now talking about their EVP experience.
12:13 PM: Pronto has an ERP product that embeds Informix and Cognos. Informix was initially picked for its OLTP capabilities. Informix 12.1 improves both OLTP and OLAP performance, the latter benefiting from the Informix Team working closely with the Cognos Team.
12:26 PM: Pronto experiences massive performance gain when concurrently running OLTP and OLAP on 12.1 over 11.x
12:28 PM: "Informix Warehouse Accelerator gaining worldwide traction to accelerate warehouse queries up to 100+ times"
12:29 PM: Back to JM3 on IWA improvements. NEW! Trickle Feed (cool) can now have real time analytics vs. refreshing the entire warehouse. NEW! Automated Partition Refresh. NEW! IWA and OAT integration.
12:31 PM: NEW! IWA and TimeSeries integration. IWA analytics over TimeSeries data.
12:32 PM: Flexible Grid/ER - NEW! ER no longer requires a Primary Key.
12:34 PM: Execute SQL over the grid - Query Sharding, that's sharding with a D.
12:35 PM: Talking about Hypervisor edition for Virtual/Cloud based deployments.
12:36 PM: Informix Genero accelerates new generation of mobile and cloud-based apps.
12:36 PM: Sally: Informix integrated with the IBM Mobile Database. Sync mobile db data with Informix backend.
12:37 PM: JM3: NEW! Mobile OAT for your phone or tablet
12:38 PM: Improved OAT out of the box experience, OAT GUI deployed as part of Informix install
12:39 PM: Sally: Smart Choice of ISVs and OEMs. Small footprint, silent install, up and running in minutes, 0 administration, autonomics. NEW! Dynamic ONCONFIG, Self Healing, Self Optimizing
12:40 PM: About to wrap up? Already? Oh, right. Q/A at the end. I want MOAR new features :)
12:41 PM: Bundling of Cognos licenses with new Advanced (Workgroup/Enterprise) Editions
12:41 PM: IIUG 2013 April 21-25, 2013 San Diego, CA
12:43 PM: Q/A starts.
12:43 PM: "Is compression available in Workgroup?" Sally says Compression included in Advanced Enterprise, available for purchase in Enterprise.
12:44 PM: "64 bit OAT?" JM3 says currently only 32 bit, but you can run 32 bit version on Windows 64 bit. Looking to have a 64 bit version for Windows in the future.
12:45 PM: "Is OAT faster in 12.1?" JM3 says ability to run update stats on sysmaster will allow OAT to run faster
12:46 PM: "Is Pronto using compression?" JM3 says no, perf gains are without compression
12:47 PM: "New tools to migrate FROM Oracle?" JM3 says yes, a lot of technology added to assist in migrations.
12:48 PM: "Will Mobile OAT work with my 11.x server?" JM3 says yes
12:48 PM: "Where can I find more info about the new editions?" Sally says go to ibm.com/informix and view the new brochure. More detail: google Carlton Doe Informix Editions or google ibm software announcement 213-156
12:50 PM: "Any plans to do a benchmark?" Sally says they prefer industry-specific real-world benchmarks with their customers. Soon to publish a Meter Data Management benchmark.
12:52 PM: "Is ontape still supported?" JM3 says ontape and onbar still supported in 12.1. onbar just improved with PSM.
12:53 PM: "Can I get Cognos Express bundled instead of the full Cognos?" Sally says no.
12:54 PM: "What do I need to do to use the compression features?" Sally says compression included in Advanced Enterprise, add on for Enterprise.
12:54 PM: "What is the #1 thing to remember from this webcast?" JM3 says the great improvements in OLTP/OLAP performance.
12:55 PM: "Is OAT built using a new version of PHP?" JM3 says yes, OAT uses a later version of PHP.
12:56 PM: "Tell us more about IBM Mobile" Sally says it is included with all for-pay versions of Informix and is a secure persistent storage for data on a device that allows backend synchronization to an Informix DB.
12:57 PM: "Can 12.1 replicate TimeSeries data?" JM3 says, yes TimeSeries can now be replicated via HDR/SDS/RSS, etc.
12:58 PM: Sally notes the great attendance to this Webcast and gives a shout out to IIUG 2013 (thanks Sally)
12:59 PM: End of Webcast, perfectly timed. Replay of webcast will be made available online.
Thursday, March 7, 2013
I needed a way to extract the individual words from a sentence stored in a single character field. After some failed google searches and no desire to install a Datablade or write a C UDR for something that doesn't need to have killer performance, I decided to write my own quick and dirty SPL function.
my_strtok(str, delim, token_num) will take a string, break it into individual tokens based on the delimiter and return the Nth token of the string.
execute function my_strtok("How now brown cow", " ", 3)
Would return the third token:

brown
Here is the code for my_strtok(); comments welcome on anything I might have missed in the logic. And when I say it is quick and dirty, I just mean it could be done more efficiently, but it works for what I needed.
create function my_strtok (str lvarchar(2048), delim char(1), token_num smallint)
    returning lvarchar(2048) as token;

    define str_len       integer;
    define start_pos     integer;
    define stop_pos      integer;
    define cur_token_num integer;

    -- initialize start position and current token number to 1
    let start_pos = 1;
    let cur_token_num = 1;

    -- remove any leading delimiters from the input string
    let str = ltrim(str, delim);

    -- save the input string length so we don't have to recalculate it later
    let str_len = length(str);

    -- find the start of the token we want to return
    -- while there is still more string available to process
    while (start_pos <= str_len)

        -- if the current token number is the token we want, stop looking
        -- for a start position
        if (cur_token_num = token_num) then
            exit;
        end if;

        -- increment the start position to the next character
        let start_pos = start_pos + 1;

        -- check to see if the current character in the string is a delimiter
        if (substr(str, start_pos, 1) = delim) then

            -- we have found the next token
            let cur_token_num = cur_token_num + 1;

            -- advance the token start position past any repeating delimiters
            while (start_pos <= str_len)
                let start_pos = start_pos + 1;
                if (substr(str, start_pos, 1) != delim) then
                    -- there are no more repeating delimiters
                    -- stop looking for repeating delimiters
                    exit;
                end if;
            end while;
        end if;
    end while;

    -- we now either have the start position of the token we are looking for
    -- or we did not find the token we were looking for
    -- if we did not find the token, return NULL
    -- if we did find the token we were looking for, find the end of the token
    if (cur_token_num = token_num) then

        -- we found the token
        let stop_pos = start_pos;

        -- while there is still string to process try to find the end of our token
        -- if we run out of string before we find the next delimiter then
        -- our token ends where the string ends
        while (stop_pos <= str_len)
            let stop_pos = stop_pos + 1;
            if (substr(str, stop_pos, 1) = delim) then
                -- we found the end
                let stop_pos = stop_pos - 1;
                exit;
            end if;
        end while;

        -- return the found token
        return substr(str, start_pos, stop_pos - start_pos + 1);
    else
        -- the token was not found
        return NULL;
    end if;

end function;

execute function my_strtok("Simple test", " ", 1);

token
Simple

1 row(s) retrieved.

execute function my_strtok("Simple test", " ", 2);

token
test

1 row(s) retrieved.

execute function my_strtok(" Leading delimiters", " ", 1);

token
Leading

1 row(s) retrieved.

execute function my_strtok("Repeating  delimiters", " ", 2);

token
delimiters

1 row(s) retrieved.

execute function my_strtok("Token not found", " ", 4);

token

1 row(s) retrieved.

execute function my_strtok("Should have checked for invalid input", " ", -1);

token

1 row(s) retrieved.

execute function my_strtok("Invalid input works, but is unnecessarily slow", " ", -1000);

token

1 row(s) retrieved.

execute function my_strtok("Empty delimiter defaults to space, convenient", "", 6);

token
convenient

1 row(s) retrieved.
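Once registered, the function can be called over a column like any built-in. The table and column names below are hypothetical, just to show the shape of the query:

```sql
-- Hypothetical table my_sentences with a column named sentence:
-- pull the third word out of each stored sentence
select my_strtok(sentence, " ", 3) as third_word
  from my_sentences;
```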
Tuesday, March 5, 2013
Monday, March 4, 2013
John Adamski posted a question to the IIUG SIGs about how to identify a session that caused the long transaction that eventually put his system in a Blocked:LONGTX state. A few of us came back with some responses, but it wasn't until John Miller III from IBM and Informix Fun Facts replied with "finding the session that caused your long transaction isn't very useful, you need to prevent this situation from happening with the LTXHWM and LTXEHWM ONCONFIG parameters" that I realized these config parameters are typically underutilized.
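In the ONCONFIG file the two parameters look something like this. The values shown are only illustrative, not recommendations; check the defaults for your version, as they differ depending on whether dynamic logging is enabled.

```
LTXHWM   70  # long transaction high-water mark: when a transaction spans
             # this percentage of the logical-log space, the engine
             # starts rolling it back
LTXEHWM  80  # exclusive high-water mark: at this percentage the
             # transaction being rolled back gets exclusive access to
             # the logical logs so it can finish
```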
Posted by Andrew Ford at 4:17 PM
Friday, March 1, 2013
"Holy Cow, two blog posts in one day!" - Harry Caray
Ben Thompson over at Informed Mix recently wrote about using "select for update/where current of" syntax, and in the mother of all coincidences one of the developers who writes code that hits my Informix engines came over to tell me about the evolution of performance improvements he went through to speed up a bulk data delete application. Here is his story, from static SQL all the way to prepared statements using the "select for update/where current of" syntax.