Friday, November 30, 2012

MDX: Cousin Function

I recently found a need to reference an alternate hierarchy in an MDX member formula in an ASO cube.

My first attempt was to utilize the Lag function as I had in many other formulas, but I kept getting results of zero.  Using the MDX Script Editor in EAS, I found that the Lag function was only valid within the hierarchy that contains the specified member.  After reviewing the list of available functions, I took a look at the COUSIN function and found it was exactly what I needed.

Here is an example of how it might be utilized:

Let's assume for this example that you have two hierarchies within your period dimension.  Let's also assume that data is loaded on a weekly basis.  The first hierarchy provides your weeks, months, quarters, and year.  Your alternate hierarchy is a dynamic calc of the quarter to date as of each week.  So your two hierarchies look like this:

For each member you could do a unique formula, specific to each week:
  • WK1 = [Wk1]
  • WK2 = Sum(MemberRange([WK1],[WK2]))
  • WK3 = Sum(MemberRange([WK1],[WK3]))
  • WK4 = Sum(MemberRange([WK1],[WK4]))
  • WK5 = Sum(MemberRange([WK1],[WK5]))
Or instead you could use the same formula for each WK QTD member of a quarter:
  • WK1 thru WK13 = Sum(MemberRange([WK1],Cousin([Period].CurrentMember,[Year])))
  • WK14 thru WK26 = Sum(MemberRange([WK14],Cousin([Period].CurrentMember,[Year])))
  • WK27 thru WK39 = Sum(MemberRange([WK27],Cousin([Period].CurrentMember,[Year])))
  • WK40 thru WK52 = Sum(MemberRange([WK40],Cousin([Period].CurrentMember,[Year])))
It could be taken a step further so that the same formula is used for every WK QTD member:
  • Sum(MemberRange([Period].CurrentMember.Parent.Parent.FirstChild.FirstChild,Cousin([Period].CurrentMember,[Year])))
Please note there are probably several different ways that you can make the first member of your member range dynamic.  This is probably a rather crude way, but since this post is about the COUSIN function, I went with the simplest route that would validate.
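
To make the per-quarter formula a little more concrete, here is a rough sketch of how it resolves for one member.  The alternate-hierarchy member name [WK5 QTD] is just an assumed name for this illustration; your own outline will have its own naming, and the sketch also assumes the two hierarchies are structured in parallel so that relative positions line up.

/* Formula attached to the QTD members of the first quarter:                        */
/*   Sum(MemberRange([WK1],Cousin([Period].CurrentMember,[Year])))                  */
/* When the current member is [WK5 QTD], the fifth member under the alternate       */
/* rollup, Cousin([Period].CurrentMember,[Year]) returns [WK5], the member in the   */
/* same relative position under [Year] in the stored hierarchy.                     */
/* The formula therefore expands to:                                                */
Sum(MemberRange([WK1],[WK5]))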

In summary, the COUSIN function gives you the flexibility to reference a child member at the same relative position under a different ancestor.  This can be used within the same hierarchy or across an alternate hierarchy.

Monday, October 29, 2012

ASO Partial Data Clears Using MaxL and MDX

The Essbase 11.1 release brought us the ability to do partial data clears within an ASO database.  Gone are the days of using report scripts or the Excel Add-In to clear data.  There are a couple of posts out there on other sites about how to use MaxL and MDX to do a partial data clear, but most of them speak to the simple clear script, restricting on a single dimension.  What happens when you need to define your data clear a little more precisely?

Let's start with the basics of data clears in an ASO Essbase cube.  There are two types of data clears you can perform: a logical delete and a physical delete.  A physical delete removes the input cells in the specified region, whereas a logical delete creates a new slice with an exact offset of the input cells you are trying to clear.  When you retrieve the data, both slices are netted together and you get zeros.  The technical reference notes that a physical delete completes in a time proportional to the size of the input data, while a logical delete completes in a time proportional to the size of the data being cleared.  For this reason, a physical data clear is suggested when removing large slices of data.  It also warns that a logical delete will increase the size of your database.  Both of these should be taken into consideration when doing a partial data clear.

Now on to the fun stuff…

The MaxL Script Editor can be used to perform the clear.  The syntax to be used is:

alter database 'appname'.'dbname' clear data in region '{MDX set expression}' [physical]

Your appname and database name should be placed in single quotes.  The MDX set expression is the definition of the data you want to clear.  Physical is an optional keyword that should be used when performing a physical clear.  The most difficult part of this syntax is the MDX set expression.

First and foremost, a set must have curly brackets around it.  Second, anything else in the expression has to be one of the prescribed MDX functions that return a set.  If you remember those two things, building the MDX set expressions becomes much easier.
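
To illustrate those two rules with members from the ASOsamp application used below, here are a few region expressions; the invalid case is just my reading of the rules above, so validate it in your own environment:

'{[Qtr4]}'                               /* a literal set in curly brackets: valid       */
'{[Qtr3]:[Qtr4]}'                        /* a member range inside braces: valid          */
'Crossjoin({[Qtr4]},{[Curr Year]})'      /* an MDX function that returns a set: valid    */
'[Qtr4]'                                 /* a bare member, not a set: will not validate  */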

Using the ASOsamp application, here are a few examples showing how you can progressively restrict your clear more and more precisely:
  • If you wanted to remove all Qtr4 data (regardless of year or any other dimension) your command would look like this:
alter database 'ASOsamp'.'Sample' clear data in region '{[Qtr4]}' Physical
  • If you wanted to remove only Qtr4 for the current year (regardless of any other dimension), your command would look like this:
alter database 'ASOsamp'.'Sample' clear data in region 'Crossjoin({[Qtr4]},{[Curr Year]})' Physical
  • If you wanted to remove Qtr3 and Qtr4 for the current year (regardless of any other dimension), your command could look like any of these:

alter database 'ASOsamp'.'Sample' clear data in region 'Crossjoin({[Qtr3]:[Qtr4]},{[Curr Year]})' Physical
alter database 'ASOsamp'.'Sample' clear data in region 'Crossjoin({[2nd Half].Children},{[Curr Year]})' Physical
alter database 'ASOsamp'.'Sample' clear data in region 'Crossjoin({MemberRange([Qtr3],[Qtr4])},{[Curr Year]})' Physical

  • If you wanted to remove Qtr4 units for the current year (regardless of any other dimension), your command could look like this:

alter database 'ASOsamp'.'Sample' clear data in region 'Crossjoin(Crossjoin({[Qtr4]},{[Curr Year]}),{[Units]})' Physical

  • If you wanted to remove Qtr4 units for the current year, but only for the descendants of Online, your command could look like this:

alter database 'ASOsamp'.'Sample' clear data in region 'Crossjoin(Crossjoin(Crossjoin({[Qtr4]},{[Curr Year]}),{[Units]}),{Descendants([Online])})' Physical

Keeping in mind the basics of MDX sets is the key to getting your clear script defined down to a more granular level.  It is also important to note that the more complex your MDX is, the longer your clear script may take to run.  If the data set being cleared is going to be very small, you may want to use a logical delete, as the increase to your database size will not be as noticeable.  If you are going to delete a large chunk of data, a physical delete may make more sense, as a new large offsetting slice may take up a lot of memory and impact performance.
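
For completeness, a logical delete uses the same statement with the physical keyword simply left off.  A rough sketch against ASOsamp, reusing one of the regions from above:

/* logical delete: writes an offsetting slice rather than removing the input cells */
alter database 'ASOsamp'.'Sample' clear data in region 'Crossjoin({[Qtr4]},{[Curr Year]})'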

Tuesday, July 3, 2012

Difference Cube for Data Validation

On occasion throughout my career with Essbase, I have heard of difference cubes.  I understood the concept behind them, but until recently didn't see how they could be useful.  I don't know that I am using a difference cube in the exact sense that others have described, but I did find a great way to test large amounts of data using Essbase, and the concept is still largely the same.


Recently we decided to upgrade one of our cubes by adding some new dimensions.  To do this we needed to update our history to include these new fields via a set of SQL update queries to our source table.  We had the correct data in a separate set of PeopleSoft source tables, and we needed to ensure that our table updates populated the correct data.  We are talking more than 5 million rows of data, so testing in Excel is not exactly a viable solution.

So what I did instead was load the data from my PeopleSoft tables into a copy of my cube under a scenario called "Actual."  Next I took my SQL-updated table and loaded it into the cube under a scenario called "Test."  I then had a dynamic scenario called "Difference" which did a simple calculation of "Test" - "Actual."  Once each of the data sets was loaded, I was able to set up a high-level retrieve sheet for each of the drilled-out new dimensions for the "Difference" scenario.  Anywhere I saw something other than zero was something that needed to be researched, as it was a variance between my two data sets.  I was also able to drill down further on my other dimensions to determine exactly where the variance was occurring.  I was able to completely validate my data in hours rather than the days or weeks it may have taken me if I had been forced to use a set of queries and Excel.
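
For reference, the "Difference" scenario is nothing more than a dynamically calculated scenario member whose formula subtracts the two loaded scenarios.  Assuming an ASO cube with scenario members named as described above, the MDX member formula would be along these lines:

/* member formula on the dynamic "Difference" member of the scenario dimension;  */
/* any non-zero result on a retrieve is a variance between the two data sets     */
[Test] - [Actual]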


This was just the immediate use I found for an Essbase difference cube.  Taking just a few minutes, I can think of various other situations where this type of cube might come in handy.  Just a couple of examples of where it might also be useful:

  • Verifying a new calculation for a cube
  • Comparing a back up of an initial cube load to a new cube load
  • Validating a new cube to an existing similar cube
A difference cube isn't always going to make sense, but it is definitely a tool that should be included in your toolbox, ready to be drawn upon when the right scenario arises.  I don't see a difference cube as something you would use daily, especially as a functional user.  But for someone who develops cubes on a regular basis or works with large amounts of source data, this type of cube definitely has an appeal.

As a note, if necessary, you can also keep a copy of your difference cube as documentation of your validation for audit or SOX requests.