Unknown Member in Analysis Services

October 22, 2009

Analysis Services has a built-in member for each dimension called (by default) “Unknown.”  This simplifies dealing with facts whose value for a particular dimension is unknown.  If a row reaches the fact table with a null surrogate key, for example after a failed lookup in the SSIS package, Analysis Services assigns it to this special unknown member and moves on.

Three settings control this behavior.  First, the dimension itself has a property called UnknownMember that describes the usage of the unknown member.  It can be set to Visible, Hidden, or None.  Second, the dimension attribute whose usage is set to Key has a NullProcessing property that can be set to UnknownMember.  This tells the dimension what to do when it encounters a null surrogate key in the fact table.  Third, on the Dimension Usage screen of the cube, each dimension in use has a setting under Advanced that once again describes NullProcessing.  This too can be set to describe the behavior when a null dimension surrogate key is processed.  Here is a link to a description of all the options.  It is a reference for Analysis Services Scripting, but it was the only place I could find these options described:  http://msdn.microsoft.com/en-us/library/ms127041.aspx
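For reference, the first two settings show up in the dimension’s XMLA definition roughly like this (a trimmed sketch using element names from the Analysis Services DDL schema; the dimension and column names are made up, and most surrounding elements are elided):

```xml
<Dimension xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ID>Dim Customer</ID>
  <Name>Customer</Name>
  <!-- Setting 1: the dimension-level unknown member usage -->
  <UnknownMember>Visible</UnknownMember>
  <Attributes>
    <Attribute>
      <ID>Customer Key</ID>
      <Name>Customer</Name>
      <Usage>Key</Usage>
      <KeyColumns>
        <KeyColumn>
          <DataType>Integer</DataType>
          <!-- Setting 2: what to do with a null key -->
          <NullProcessing>UnknownMember</NullProcessing>
          <!-- source column binding elided -->
        </KeyColumn>
      </KeyColumns>
    </Attribute>
  </Attributes>
</Dimension>
```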

I think this unknown member is a very convenient inclusion by the Analysis Services team, but I’ll pass on using it.  There is some syntactic sugar in MDX, a hierarchy function called UNKNOWNMEMBER, that seems nice, but this scenario does not give you an unknown member in the relational store.  If you never plan on querying the relational store, then the only place you will need an unknown member is Analysis Services.  You can then pass unknown members to the fact table as null and let Analysis Services process them accordingly.
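That MDX sugar looks something like the following (a sketch; the cube, dimension, and measure names here are hypothetical):

```mdx
-- UnknownMember is a hierarchy function that returns the
-- dimension's special unknown member.
SELECT
    [Measures].[Sales Amount] ON COLUMNS,
    [Customer].[Customer].UnknownMember ON ROWS
FROM [Sales]
```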

I like to leave the relational store in as queryable a state as possible.  Report writers might later have a reason to use it, and nulls in the fact table’s surrogate keys will cause problems: report writers will have to use a LEFT JOIN and then derive an unknown member at query time.

In this situation, I think creating an unknown member in the dimension with a surrogate key of 1, 0, or -1 (a special number of your choosing) is a good solution.
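A minimal T-SQL sketch of the idea, using -1 and hypothetical table and column names:

```sql
-- Seed the dimension with an explicit unknown row.
SET IDENTITY_INSERT dbo.DimCustomer ON;
INSERT INTO dbo.DimCustomer (CustomerKey, CustomerName)
VALUES (-1, 'Unknown');
SET IDENTITY_INSERT dbo.DimCustomer OFF;

-- The ETL assigns -1 (instead of NULL) on a failed lookup, so report
-- queries can use a plain INNER JOIN and still see unknown facts:
SELECT d.CustomerName, SUM(f.SalesAmount) AS SalesAmount
FROM dbo.FactSales AS f
INNER JOIN dbo.DimCustomer AS d
    ON d.CustomerKey = f.CustomerKey
GROUP BY d.CustomerName;
```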

To enable this approach, you’ll have to turn off the unknown member in the dimension, change NullProcessing on the key attribute of the dimension, and change NullProcessing in the Dimension Usage screen of the cube.  But I think you’ll find this is a good compromise when the relational store needs to be as queryable as possible.


RDBMS Doomed?

February 13, 2009

Here is a great article from ReadWriteWeb that discusses the place and uses for a key/value database such as SSDS, SimpleDB, or Google App Engine.  Jason Massie argues that the DBA profession will level off in 5-7 years and then start to fall off in 10 because of the prevalence of cloud-based key/value databases.

During the last couple of releases of SQL Server, Microsoft has been focused on BI.  Why?  Everyone has lots of data, and everyone knows there is a lot of value in that data, but they need to get at it.  BI is one of the top agenda items for CIOs, according to Gartner.

What does this have to do with SSDS and SimpleDB?

SSDS and SimpleDB have basically no reporting or analytics capabilities.  Microsoft has mentioned that the future of SSDS includes OLAP-like capabilities.  But even if that shipped today, would companies be ready to move their entire data ecosystem to Microsoft, Amazon, or Google; rewrite everything from RDBMS into key/value programming; and then rework their ETL and data warehouse infrastructure to run on a so-far nonexistent (Microsoft says) architecture?

They sure aren’t going to move all of their OLTP systems to the cloud right now, because the data would then be trapped in a no-reporting storage mechanism.  Google App Engine only allows 1,000 result objects per query.  Amazon only allows queries of 5 seconds’ duration.  Can companies’ appetite for analytics be satiated by that?  I don’t think so.

This seems a very long way off.  Even if one of these key/value databases HAD reporting and analytics, it seems a minimum of 5 to 7 years for any sizable amount of migration to take place.  And this would be if there WERE analytics and reporting.  There aren’t.

Many companies still have large amounts of data and money in VSAM/COBOL systems that are running fine, systems everyone said would have to be replaced a decade ago.  They aren’t going anywhere for quite a while, and they need analytics.  Those analytics will have to be done on-site on RDBMS/OLAP platforms.

I suppose for a San Francisco-based, startup-focused DBA, there might be a noticeable movement to the cloud, but for the rest of the world, it is going to take longer than 5 years to notice a plateau in the need for RDBMS DBAs.

I will concede, however, that at some point there will be a draw to the cloud.  But there are many hurdles to overcome before that can happen in earnest.

One change might be a cloud-hosted RDBMS.  Certainly there are some out there somewhere, but the hot topic right now is key/value and how it enables large distributed systems.  For a company looking to save money, a hosted RDBMS makes sense, but probably not SSDS and key/value.

Analysis Services 2008 Performance Guide

November 18, 2008

Via Chris Webb, the Analysis Services 2008 Performance Guide is out.


Thanks, Chris!

Currency in an Analysis Services Calculated Measure

October 2, 2008

Ever needed to make a calculated measure have a display type of “Currency”?  Did you notice it wasn’t what you wanted?

Instead of choosing “Currency” in the drop-down box, you can type in your own format string.  You’ll notice that “#,#.00” is nearly what we want, but we need to add a “$”.  To finish up, type in “$#,#.00”.  Don’t forget to include the quotes, just like the built-in options.  If you’d like to get rid of the decimals, feel free to omit the “.00” and just type in “$#,#”.
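The same format string can be set directly in the MDX script when defining the calculated measure.  A sketch, with hypothetical measure names:

```mdx
CREATE MEMBER CURRENTCUBE.[Measures].[Avg Sale Price] AS
    [Measures].[Sales Amount] / [Measures].[Sales Count],
    FORMAT_STRING = "$#,#.00",  -- or "$#,#" to drop the decimals
    VISIBLE = 1;
```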

Analysis Services via HTTP

May 30, 2008

UPDATED: Added more information on permissions.

It is pretty common policy to separate data servers from web servers.  Microsoft may be making headway in making IIS more secure, but IT departments commonly forbid having the two on the same production server.

Microsoft has provided a very simple tool for this situation, called the Analysis Services Data Pump.  This ISAPI extension has existed (I think) since SQL 2000, though I recall it had a different name.  It is a DLL hosted by IIS that accepts requests over HTTP and forwards them to the Analysis Server; the server responds to the DLL with results, which are forwarded back to the requestor.  Here is the link to the article that explains how to set it all up: http://www.microsoft.com/technet/prodtechnol/sql/2005/httpasws.mspx.

The only thing I would add is that the article references several different directories:  D:\tmp\isapi, C:\Inetpub\wwwroot, and C:\wwwroot.  All of these should be the same place.  I don’t see any reason you couldn’t just create a virtual directory in IIS that points directly to %InstallPath%\OLAP.2\OLAP\bin\isapi so that you don’t have to make another copy of the DLL.  The only reason I can think of to make a copy is that it would isolate you from a SQL Server update changing the file before you’ve tested it.  I still don’t know if that is enough of a reason.  Don’t forget that if you don’t copy into the wwwroot directory, you’ll need to make sure your access permissions are correct: IUSR automatically has access to anything in the wwwroot folder, so if you put the DLL elsewhere (and you use anonymous access), you’ll have to grant IUSR permission to that folder.

I just set it up; it works great and was very easy to do.  If you work in an environment where port access requests are difficult and a web server already has HTTP ports open, this might be a good solution.

SQL 2008 in Q3 ’08

January 25, 2008

Teo Lachev has pointed out a clarification that SQL 2008 will be released in Q3 2008.  The original blog entry can be found here.

I guess this is good news, but I have to say that many of the places I work are just getting through the 2000-to-2005 migration.  I know Microsoft has been hard at work to deliver SQL Server releases within a 24-36 month cycle.  I have to ask myself: is that because they think it is good for their customers, or because it is good for their revenue stream?

Can customers keep up with the upgrade cycle? Can technologists keep up with all the new features?

I also think Microsoft is undermining its certification program: it is quite an investment to achieve a certification, and before you know it, you have to start all over with the next version.

Have a good weekend.

Automating SSAS Deployment

September 6, 2006

One of the last pieces of our complete deployment we have left to automate is Analysis Services.  This turns out to be pretty simple.

Microsoft has created a tool called the Analysis Services Deployment Wizard.  It can be found in your Start menu under SQL 2005, Analysis Services, Deployment Wizard.  It takes the .asdatabase file in your bin directory and creates an XMLA script that creates the SSAS database.  The executable is at:

[Install Drive]\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe.

It can also be called from the command line without showing the GUI wizard.
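For example, something like the following should generate the XMLA script silently (the project file and log/output names here are made up; /s runs silent mode with an optional log file, and /o writes the script instead of deploying):

```
"C:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe" ^
    MyOlapProject.asdatabase /s:deploy.log /o:MyOlapProject.xmla
```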

Now we just need to play that script against our server, and Microsoft has already written this tool for us too.  A sample application ships with SQL Server and can be found in [Install Drive]\Program Files\Microsoft SQL Server\90\Samples\Analysis Services\Administrator\ascmd\CS.  Open the solution, compile it, and you’ll have a tool that will play XMLA scripts against the server.  It has a simple command-line interface too.
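Once compiled, an invocation looks roughly like this (server, database, and file names are illustrative):

```
ascmd -S localhost -d MyOlapDatabase -i MyOlapProject.xmla -o results.xml
```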

Next we’ll have to create some XMLA scripts that we can play that will process our dimensions and cubes.
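Those processing scripts will be plain XMLA Process commands, along these lines (a sketch; the database, dimension, and cube IDs are hypothetical, and ProcessFull is just one of the available processing types):

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Process a dimension first... -->
  <Process>
    <Object>
      <DatabaseID>MyOlapDatabase</DatabaseID>
      <DimensionID>Dim Customer</DimensionID>
    </Object>
    <Type>ProcessFull</Type>
  </Process>
  <!-- ...then the cube. -->
  <Process>
    <Object>
      <DatabaseID>MyOlapDatabase</DatabaseID>
      <CubeID>Sales</CubeID>
    </Object>
    <Type>ProcessFull</Type>
  </Process>
</Batch>
```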