This document is a guide to MaxL, the multi-dimensional database access language for the Hyperion Essbase OLAP Server. MaxL (Multidimensional Access Language) is a flexible and powerful scripting tool for automating Essbase administration and analysis tasks.
I was feeling a little bit whimsical last week and wanted to get a little use out of my SurveyMonkey account, so I decided to do a quick poll. This issue initially arose for me when I was heckling Cameron Lackpour at one of his presentations a few years ago.
My memory must be a little faulty, because at the time I could have sworn that he liked it the other way. So I wanted to settle this once and for all. I have seen both in environments. Literally both, as in, some scripts written one way and some the other. So, there you have it. Way to think outside the box. Suffice it to say, I am more than a little disappointed with these results. Thank you all for submitting answers to this somewhat lighthearted survey. Having talked about automation and such with other people at ODTUG this year, it seems that several people are using this technique or a variant of it on their own systems.
Basically, the idea is that you want to be able to sync your production automation scripts from your test server as easily as possible. Therefore, you want to try to write your automation scripts as generically as possible, and use variables to handle anything that differs between test and production. As an added bonus for making the sync from test to prod just a little bit easier, why not dynamically choose the proper configuration file?
Assuming you are running on Windows (the same concept will work on other platforms with some tweaks for your local scripting environment), one way to handle it is like this: knowing that we can set environment variables in a batch file that will then be available in MaxL, we can set up a configuration batch file containing all of the environment-specific settings. With that configuration batch file, the main driver batch file, and the cleardb MaxL script in place, the automation is ready to run.
Now for a little explanation. The first line of the main batch file calls the configuration batch file: all of the SET commands inside it will associate values with those environment variables, and control flow will return to the calling batch file, which then calls essmsh to run the cleardb script. Using this technique can make your MaxL scripts fairly portable and more easily reusable. In order to get this to work on the production server, we could just create another copy of the configuration batch file with the production values in it. Therefore, your deploy method could literally be as simple as copying the folder from the test server to the production server.
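Since the original listings did not survive here, the following is a minimal sketch of the technique. All file names, variable names, and values (essbase_config.bat, main.bat, cleardb.msh, the ESS* variables) are hypothetical stand-ins:

```batch
:: essbase_config.bat -- hypothetical per-environment settings
SET ESSUSER=admin
SET ESSPASS=password
SET ESSSERVER=testserver1

:: main.bat -- the driver: pull in the settings, then run the MaxL script
CALL essbase_config.bat
essmsh cleardb.msh
```

Inside cleardb.msh, the MaxL shell can then reference those environment variables directly, for example `login $ESSUSER identified by $ESSPASS on $ESSSERVER;` — and the production server simply gets its own copy of the configuration batch file with different values. (Storing a plain-text password this way is a convenience trade-off you may want to revisit.)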
Of course, there is nothing wrong with running your Essbase automation on the server. But perhaps you have a bunch of functionality you want to leave on a Windows server and have it run against your shiny new AIX server, or you just want all of the automation on one machine.
It is possible to specify the locations of rules, reports, and data files using either a server context or a client context. For example, your original automation may have referred to absolute file paths that are only valid on the server. You can generally adjust the syntax to explicitly refer to files that are local versus files that are remote. This particular piece of automation will also run just as happily on a client, workstation, or remote server that has the MaxL interpreter (essmsh) installed, of course.
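To illustrate (a sketch only; the application, database, rules file, and paths here are hypothetical), MaxL lets you state explicitly which side of the connection each file lives on:

```maxl
/* Input file and error log are resolved on the machine running essmsh;
   the rules file is pulled from the Essbase server */
import database Sample.Basic data
    from local text data_file 'C:\automation\data\sales.txt'
    using server rules_file 'LoadSale'
    on error write to 'C:\automation\logs\sales.err';
```

Swapping the `local` and `server` keywords changes the context on a per-file basis, which is what makes the same script portable between the server and a remote workstation.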
So, here is the flow of the script. We are using simple dimension build load rules to update the dimensions. While not strictly necessary, I find that using variables makes the script more flexible and cleans things up visually. Next, we log in to the Essbase server. Again, the variables just refer to locations that are defined in the configuration file.
We set our output locations for the spool command. Here is our first real difference when it comes to running the automation on the server versus running it somewhere else: these locations are relative to the system executing the automation, not the Essbase server. Now on to the import command. Note that although we are using three different rules files and three different input files for those rules files, we can do all the work in one import command.
The first file we are loading in is DeptAccounts. In other words, here is the English translation of the command: update the dimensions using the given input file and its rules file, and suppress verification of the outline for the moment. Lastly, we want to preserve all of the data currently in the cube, and send all rejected records (records that could not be used to update the dimensions) to the dim error file.
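Put together, the import command being described might look roughly like this. Only DeptAccounts comes from the text; the other input files, the rules file names, and the database are hypothetical placeholders:

```maxl
/* Build three dimensions in one import command, deferring outline
   verification until the last rules file has run */
import database Sample.Basic dimensions
    from local text data_file 'DeptAccounts.txt'
        using server rules_file 'DeptAcct' suppress verification,
    from local text data_file 'DeptMeasures.txt'
        using server rules_file 'DeptMeas' suppress verification,
    from local text data_file 'DeptProducts.txt'
        using server rules_file 'DeptProd'
    preserve all data
    on error write to 'dim.err';
```

Leaving `suppress verification` off the final file spec is what causes the outline to be verified and restructured just once, at the end.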
Also, some careful usage of MaxL variables, spacing, and comments can make a world of difference in keeping things readable. A very common task for Essbase automation is to move data from one cube to another. There are a number of reasons you may want or need to do this.
One, you may have a cube that has detailed data and another cube with higher level data, and you want to move the sums or other calculations from one to the other.
You may accept budget inputs in one cube but need to push them over to another cube.
In any case, there are many reasons. For the purposes of our discussion, the Source cube is the cube with the data already in it, and the Target cube is the cube that is to be loaded with data from the source cube. There is a simple automation strategy at the heart of all these tasks: calculate the source cube, extract its data, load the data into the target cube, and calculate the target. This can be done by hand, of course (through EAS), or you can do what the rest of us lazy cube monkeys do, and automate it.
Our source cube is Source.Foo. As you can see from its outline, it is very simple. For our purposes, the target cube is Target.Bar. These outlines are similar but different. The target cube has a Scenario dimension with Actual, Budget, and Forecast, whereas in the source cube, since it is for budgeting only, everything is assumed to be Budget.
Also note that Target.Bar does not have a Location dimension; instead, this cube only concerns itself with totals for all regions. Looking back at our original thoughts on automation, in order for us to move the data from Source.Foo to Target.Bar, we need to calculate the source cube to roll up all of the data for the Locations dimension, run a report script that will output the data how we need it for Target.Bar, use a load rule on Target.Bar to load the data, and then calculate Target.Bar. Of course, business needs will affect the exact implementation of this operation, such as the timing, the calculation to use, and other complexities that may arise.
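As a sketch of those four steps in MaxL — only Source.Foo, Target.Bar, and the LoadBud rules file come from the text; the credentials, report script name, and file paths are made up for illustration:

```maxl
login admin password on essbaseserver;

/* 1. Roll up the source cube so the Location totals exist */
execute calculation 'CALC ALL;' on Source.Foo;

/* 2. Dump the data with a report script (ExportBud is hypothetical) */
export database Source.Foo using server report_file 'ExportBud'
    to data_file 'bud.txt';

/* 3. Load the extract into the target through the LoadBud rules file */
import database Target.Bar data
    from local text data_file 'bud.txt'
    using server rules_file 'LoadBud'
    on error write to 'loadbud.err';

/* 4. Roll up the target cube */
execute calculation 'CALC ALL;' on Target.Bar;

logout;
```

In practice you would likely replace the anonymous `CALC ALL;` strings with named calc scripts tuned to each cube, as discussed below.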
I enjoy giving my automation files unnecessarily long filenames. It makes me feel smarter. Some of these steps may be omitted if they are not necessary for the particular process (you may opt to use the default calc script, may not need some of the aggregations, etc.).
What does the report script look like? We just need something to take the data in the cube and dump it to a raw text file. Most of the commands here should be pretty self-explanatory.
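For reference, a raw-dump report script along those lines might look like the following. The dimension and member names are hypothetical, apart from Budget, which the example cubes share:

```
// Strip all decoration so the output parses cleanly in a load rule
{TABDELIMIT} {ROWREPEAT} {NOINDENTGEN}
{SUPMISSINGROWS} {SUPBRACKETS} {SUPCOMMAS} {SUPFEED}
{DECIMAL 2}
<PAGE ("Scenario")
"Budget"
<ROW ("Time", "Measures")
<DIMBOTTOM "Time"
<DIMBOTTOM "Measures"
!
```

The suppression commands matter more than they look: missing rows, brackets, commas, and page feeds all produce records that a load rule would otherwise have to be taught to skip or clean up.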
From here it is a simple matter of designing the load rule to parse the text file. In this case, the rules file is part of Target.Bar and is called LoadBud. When the load rule is done, we should be able to run the script and schedule it in our job scheduling software to carry out the task in a consistent and automated manner. As an advanced topic, there are several performance considerations that can come into play here. I already alluded to the fact that we may want to tighten up the calc scripts in order to make things faster.
As always, check the DBAG for more information, it has lots of good stuff in it. Good luck out there! There are a lot of different ways to update your substitution variables. You can tweak them with EAS by hand, or use one of several different methods to automate it.
Here is one method that I have been using that seems to hit a relative sweet spot in terms of flexibility, reusability, and effectiveness. First of all, why substitution variables? You can use them in calc scripts and report scripts, and you can also use them in load rules. You would do this if you only want to load in data for a particular year or period, records that are newer than a certain date, or something similar.
The majority of my substitution variables seem to revolve around different time periods. That being said, I find that automating updates to timing variables is almost always a win. Not only can the fiscal calendar be quite different, it can have some weird quirks too.
One approach to this problem would be to simply create a data file or table in a relational database, or even an Excel sheet that maps a specific calendar date to its equivalent fiscal date counterparts.
You just have to make sure that someone remembers to update the file from year to year. Of course, this can be very different across different companies and organizations.
Monday might be the first day of the week, or something. Again, the concepts are the same, but the implementation will look different (as with everything in Essbase, right?). And by cleaner, I mean that I want something algorithmic to convert one date to another, not just a look-up table.
Here is where the approaches to updating variables start to differ. You could have a fancy XML configuration file that is interpreted and tells the system what variables to create, where to put them, and so on.
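Whatever mechanism drives it, the MaxL side stays small. A sketch, with illustrative variable names and scopes, where the new values are computed outside MaxL and passed in as positional parameters $1 and $2 from the calling batch file:

```maxl
login $ESSUSER identified by $ESSPASS on $ESSSERVER;

/* Server-wide variable, visible to every application */
alter system set variable 'CurrFiscalYear' $1;

/* Scoped to a single database (Sample.Basic is a placeholder) */
alter database Sample.Basic set variable 'CurrFiscalPeriod' $2;

logout;
```

Note that `set variable` updates an existing variable; the first time through you would use `add variable` instead.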