Ordering Data Inside ETL

April 27, 2010

Ever wondered if it mattered what order your data was in inside an ETL load?

I hadn’t really considered it until this morning.  I had already considered and used the technique of dropping indexes during ETL and then recreating them afterward.  But this was new.

I had a load that took 50 seconds.  The target table had a clustered index as well as a unique index used to enforce the primary key.  The data came in with no particular order.

The first thing I tried was ordering the ETL ascending – the same order as the clustered index – to see if it mattered.  I suspected it would, and it did: the time went from 50 s to 33 s.  This was good – nearly half the time just by ordering the data.  I wondered whether ordering the data backwards from the clustered index would help or hurt, and to my surprise, it helped and lowered the time further to 28 s.

Now I wondered how much lower I could go by dropping the indexes before the ETL and then rebuilding them afterward.

I re-ran the non-ordered ETL using the drop-and-rebuild method and got 28 s.  So if all I had done was drop and rebuild the indexes, I would have gone from 50 s to 28 s.  Not bad.  But when I ordered the load descending, it dropped even further to 14 s.  The last test, of course, was to order it back to ascending and see which ordering worked best with the drop-and-rebuild method.  It turned out that ascending worked better and brought the time down to 13 s.
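As a sketch, the drop-and-rebuild pattern with an ordered load looks something like this (table, column, and index names here are hypothetical, not from the actual load):

```sql
-- Drop the unique index that enforces the primary key; the clustered
-- index defines the table itself, so instead of dropping it we load
-- the rows in its key order.
DROP INDEX IX_StageTable_KeyCol ON dbo.StageTable;

-- Load the data ordered to match the clustered index key
-- (ascending worked best with drop-and-rebuild in my tests).
INSERT INTO dbo.StageTable (KeyCol, Payload)
SELECT KeyCol, Payload
FROM dbo.SourceTable
ORDER BY KeyCol ASC;

-- Rebuild the unique index afterward.
CREATE UNIQUE NONCLUSTERED INDEX IX_StageTable_KeyCol
    ON dbo.StageTable (KeyCol);
```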

This was a surprise to me.  With the indexes in place, the opposite order to the clustered index was better.  With drop and rebuild, the same order was better.  I think the moral of the story isn’t specifically whether ascending or descending is better – I suppose it could be different in other cases.  Mostly, just remember that the order can really matter, and when you are trying to squeeze every ounce of performance out of ETL, check the order of your data.

Anyone on the SQL CAT team want to comment on why backwards works better when indexes are left intact?


PerformancePoint 2007 with SQL 2008

October 14, 2009

I have been using the Monitoring Server Deployment Guide for PerformancePoint 2007 (excerpted from Nick Barclay and Adrian Downes’ book) as a guide for installing and deploying PPS Monitoring.  There is a lot of documentation around getting PPS 2007 to work with SQL 2008, among it the requirement of CU9 for SQL 2005.

PPS uses SQL 2005 clients for database and Analysis Services access – PPS was released well before SQL 2008.  When it comes time to hook up to Analysis Services 2008 and SQL 2008, it seems odd to install a SQL update to the PPS server – even if SQL is on another box.  This is to update the client tools used by PPS to access the data back end.

In the Deploying Monitoring Server documentation, it says that CU9 for SQL 2005 is required.  Well – this is not the complete truth.  Specifically, you need a minimum client version for accessing SQL 2008.  For SNAC, it is 2005.90.3282.0, with a file date of 05-Aug-2008.  The CU9 website doesn’t specifically point out the version numbers of the other required clients, ADOMD and ASOLEDB9.

In any case, applying SQL 2005 CU9 is unnecessary if you use the SQL Server 2005 Feature Pack from December 2008.  This collection of various and sundry tools has versions newer than those included in CU9, and they are sufficient for running PPS.  The SNAC version included in the December Feature Pack is 2005.90.4035.0, dated 25-Nov-2008.


RDBMS Doomed?

February 13, 2009

Here is a great article from Read Write Web that discusses the place and uses of Key/Value databases such as SSDS, SimpleDB, and Google App Engine.  Jason Massie argues that the DBA profession will level off in 5-7 years and then start to fall off in 10 because of the prevalence of cloud-based Key/Value databases.

During the last couple of releases of SQL Server, Microsoft has been focused on BI.  Why?  Everyone has lots of data.  Everyone knows that there is a lot of value in that data, but they need to get at it.  BI is one of the top agenda items for CIOs according to Gartner.

What does this have to do with SSDS and SimpleDB?

SSDS and SimpleDB have basically no reporting or analytics capabilities.  Microsoft has mentioned that the future of SSDS includes OLAP-like capabilities.  Even if that were available and shipped today, would companies be ready to move their entire data ecosystem to Microsoft, Amazon, or Google; rewrite everything out of the RDBMS and into Key/Value programming; and then rework their ETL and data warehouse infrastructure to run on a (so far, Microsoft says) non-existent architecture?

They sure aren’t going to move all of their OLTP systems to the cloud right now, because the data would then be trapped in a no-reporting storage mechanism.  Google App Engine only allows 1000 result objects per query.  Amazon only allows queries of 5 seconds’ duration.  Can companies’ appetite for analytics be satiated by that?  I don’t think so.

This seems a very long way off.  Even if one of these Key/Value databases HAD reporting and analytics, it would take a minimum of 5 to 7 years for any sizable amount of migration to take place.  And this would be if there WERE analytics and reporting.  There aren’t.

Many companies have large amounts of data and money in VSAM/COBOL systems that are still running fine, systems everyone said would have been replaced a decade ago.  Those systems aren’t going anywhere for quite a while, and they need analytics.  Those analytics will need to be done on-site on RDBMS/OLAP platforms.

I suppose for a San Francisco-based, startup-focused DBA, there might be a noticeable movement to the cloud, but for the rest of the world, it is going to take longer than 5 years to notice a plateau in the need for RDBMS DBAs.

I will concede, however, that at some point there will be a draw to the cloud.  But there are many hurdles to overcome before that can happen in earnest.

One change might be a cloud-hosted RDBMS.  Certainly there are some out there somewhere, but the hot topic right now is Key/Value and how it enables large distributed systems.  For a company looking to save money, a hosted RDBMS makes sense, but probably not SSDS and Key/Value.


Float Data Types

January 29, 2009

This is a really big distinction that is only mentioned in passing in BOL (check here): approximate numerics are just that – approximate.  Someone who understands the storage engine better than I do can explain why a float or real can’t store every decimal value exactly, but it can’t and doesn’t.  You might put in a value like 1.15 and later come to find that the stored value is 1.1499999.

If you need an exact representation of a number that isn’t an integer, use numeric or decimal.  The two are functionally the same, so it doesn’t matter which you choose.
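A quick way to see this for yourself in a query window (the exact digits printed can vary, but the idea holds):

```sql
-- real/float store a binary approximation of the value.
DECLARE @approx real         = 1.15;
DECLARE @exact  decimal(9,2) = 1.15;

SELECT CAST(@approx AS decimal(10,8)) AS approximate_value,  -- something like 1.14999998
       @exact                         AS exact_value;        -- exactly 1.15
```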


Duplicate ErrorCode Columns

May 2, 2008

A colleague of mine was selecting data from a table that contained a column called ErrorCode.  The OLE DB Source in SSIS was giving an error that said:

The package contains two objects with the duplicate name of “output column “ErrorCode” (13)” and “output column “ErrorCode” (279)”

She was having trouble figuring out why it was complaining since the table she was pulling from had only one column named “ErrorCode”.

If you remember, data flow transformations can add columns to the data flow that describe error conditions.  One of those columns is “ErrorCode”.  This was the problem: SSIS wanted to reserve ErrorCode for the column that it adds, and that conflicted with the column in the table.  I wasn’t able to reproduce this when I added a column called “ErrorCode” to a select statement – only when it was in the table from which the statement selects.

We went to the source table and changed the column name from ErrorCode to something else, which fixed the problem.  The other way to fix it is to use the AS clause in SQL to change the name in the source query.  Changing the output name on the Columns tab of the OLE DB Source didn’t fix the problem, so you’ll have to go a little deeper to resolve the conflict.
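Using a hypothetical source table, the alias fix in the source query would look something like this:

```sql
-- Rename the conflicting column in the source query so it no longer
-- collides with the ErrorCode column SSIS adds for error output.
SELECT ErrorCode AS SourceErrorCode,
       OtherColumn
FROM   dbo.MySourceTable;
```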

Mark


OLE DB Source Metadata Refresh

April 4, 2008

There is a particular behavior of the OLE DB Source that I’d like to highlight.  I recently needed to change the data type of a column in a source query and found that the OLE DB Source doesn’t flush and refresh its metadata.

Let’s say I have a varchar(5) field that is really supposed to be a dollar amount.  I could go into the package and add a Data Conversion transformation – which is a good option – but I could also go into my source query and change it there by performing a CAST([my_varchar_column] AS money).  What I have found is that you have to comment out the column and click OK to get out of the OLE DB Source dialog, then go back and un-comment the column so that it is back in the metadata collection with the new data type.

I know that performing cast operations in a source query may not be a best practice, but I had a table with a few hundred columns, and a whole bunch of them needed their data types changed.  I could either write a source query (using the catalog) that had all the right data types built in, or build a set of data conversion transformations (maybe 50 of them) that did the same thing.  In the interest of time, I did the former.
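The shape of such a source query is simple enough (column and table names here are made up for illustration):

```sql
-- Cast the varchar column to the target type directly in the source
-- query, and alias it so the output column keeps a stable name in the
-- OLE DB Source metadata.
SELECT CAST([my_varchar_column] AS money) AS my_money_column,
       other_column_1,
       other_column_2
FROM   dbo.MyWideTable;
```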


SELECT 1 vs. SELECT *

February 7, 2008

Conor Cunningham has a great explanation of the difference between using SELECT 1 and SELECT * in an EXISTS clause.  I’d like to add that I’m assuming the same rule would apply to SELECT count(*) and SELECT count(1).

The query optimizer expands * into the column list and then figures out that none of the columns are needed.  So theoretically, SELECT count(1) would be a little faster in putting the plan together, but it would run at the same speed and with the same IOs as SELECT count(*).
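For the EXISTS case, the two forms in question look like this (the table and filter are hypothetical); both should produce the same plan:

```sql
-- The column list inside EXISTS is never materialized; the optimizer
-- only checks for the existence of a row, so * and 1 behave alike.
IF EXISTS (SELECT * FROM dbo.Orders WHERE CustomerID = 42)
    PRINT 'has orders';

IF EXISTS (SELECT 1 FROM dbo.Orders WHERE CustomerID = 42)
    PRINT 'has orders';
```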