Thursday, 29 August 2013

XML To Be The New Revolution in Business Intelligence Reporting

Hi, it's been a long break since my last post here, but I don't like the idea of just writing any old thing on the web. After finishing school I joined Amazon, and life became hectic. The good thing about working at Amazon is that you really work at the grassroots level of the technology; for me it was data, data and data.

The very idea that no business can run without informed decisions makes the Business Intelligence domain very interesting. I know plenty of new terms have been coined these days, especially DATA SCIENTIST, though I am not here to comment on what the title should be :-). With the internet being the new mode of communication, capturing each customer movement and arriving at the correct decision is a critical activity.

Since the internet is such a versatile customer experience that it can capture customer information in almost any form, from DB storage to flat files, the question is how we process all this information and collectively arrive at a consolidated decision, since the information could sit on several pieces of storage; technically, the individual sources should add up to a common source, commonly known as a POC (Proof of Concept).

There are many vendors who provide state-of-the-art Extract, Transform and Load (ETL) solutions, so capturing the raw data and storing it is not a problem; storage these days is also cheap, which eliminates the problem of keeping massive amounts of data. The real problem is how we integrate all the individual blocks and build a wall out of them so that we are able to maintain our PnL statement.

To date, with my knowledge of handling data quality issues, there is only one vendor who has really tried to solve this problem: Oracle, by coining yet another term, real-time heterogeneous database integration, using Oracle GoldenGate. To be very honest, it can synchronize two database servers 10,000 miles apart in 5-10 seconds, but the question remains: given the cost of maintaining expensive servers and then buying the license to integrate them, is the expense worth the information extracted? OGG is primarily used as a DRS (Disaster Recovery System) and less for data integration, with a few exceptions.

The solution to the above problem, where we arrive at a consolidated answer by combining all the data points stored on different systems without physically synchronizing them, is XML. For many readers it might be a surprise, but YES.. XML is a strong tool for data exchange, primarily used as the communication channel between the web and a back-end data server, and this very fact means it can also serve as the communication channel between different data points on a common platform, e.g. the web. We can use the power of the XML data transfer technique to build a robust reporting system that interacts with almost any data source and displays the result, all without spending a penny.
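To make the idea a bit more concrete, here is a minimal sketch in Java of what such a reporting layer could look like. The feed URLs and the <revenue> element name are made up purely for illustration; the assumption is that each data source exposes its slice of the data as XML over HTTP.

import java.net.URL;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class ConsolidatedReport {
    // hypothetical XML feeds, one per data source (DB extract, flat file export, ...)
    private static final String[] FEEDS = {
            "http://example.com/sales/na.xml",
            "http://example.com/sales/eu.xml"
    };

    public static void main(String[] args) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        double totalRevenue = 0;
        for (String feed : FEEDS) {
            Document doc = builder.parse(new URL(feed).openStream());
            // assume each feed carries <revenue> elements for its own records
            NodeList revenues = doc.getElementsByTagName("revenue");
            for (int i = 0; i < revenues.getLength(); i++) {
                totalRevenue += Double.parseDouble(revenues.item(i).getTextContent());
            }
        }
        // the consolidated figure comes from sources that were never physically synchronized
        System.out.println("Consolidated revenue across all sources: " + totalRevenue);
    }
}

The reporting front end only ever sees XML, so whether a given source is a database, a flat file or a web service stops mattering.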

The concept is really simple to understand but equally tough to implement. I would appreciate any suggestions or comments on this.

I hope you enjoyed the idea, although it is just a summary of what I have been reading and thinking.

Best
Sid

Tuesday, 15 May 2012

ALTERNATE WAY TO DBLINK

A DBlink is most of the time used to synchronize two different databases; before going into the details of this discussion, I am restricting this post to Oracle only. Since there are now many technologies available for data synchronization, the DBlink is rarely used for such operations.

To briefly describe the database link: it is a mechanism through which we can connect to database objects, mainly tables, which can be on the same database server (i.e. under a different SID) or on a different, physically separate server. In my view the use of a DBlink should be the last option, although there are some cases where users might argue there is no alternative. Recently at work I had to compare two data sets that lived in different tables and run a MINUS between them to make sure both tables were the same. The reason for this operation was that we were migrating some legacy jobs to a new environment for performance reasons and wanted to make sure the new jobs populate the data in the same way as the legacy jobs.

The only options we had for the testing were to use a DBlink or to write complex code in Java, which is not everyone's cup of tea. Creating a DBlink is itself a DBA activity, and being a developer I did not have the admin rights to create one; Java is more cumbersome as it requires extra jar files to be included in the environment (Eclipse). So what is an alternative way to accomplish this task? The answer is EXCEL.

Widely supported on most machines, this utility is very handy for such ad-hoc data validation; using Excel for it is simple and does not require a great deal of programming either. The architecture of the tool is very simple: collect the data from the different databases, whether they sit on different servers or on the same database server, bring the data into separate Excel sheets (in my case two sheets), and then loop through both sheets to see if there are any anomalies between the data sets.

The only constraint in this case is that the SQL running on both servers should have an ORDER BY clause in the same order, which is also the prerequisite for the MINUS operator when comparing two data sets.
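I can share the workbook on request, but to give an idea of the logic the two-sheet loop performs, here is the same row-by-row compare sketched in plain Java purely for illustration; the file names are made up, and the assumption is that each query result has been exported to a CSV using the same ORDER BY.

import java.io.BufferedReader;
import java.io.FileReader;

public class RowByRowCompare {
    public static void main(String[] args) throws Exception {
        // hypothetical exports, one per database, both produced with the same ORDER BY
        BufferedReader legacy = new BufferedReader(new FileReader("legacy_export.csv"));
        BufferedReader rewrite = new BufferedReader(new FileReader("new_export.csv"));
        String a = legacy.readLine(), b = rewrite.readLine();
        int row = 0, mismatches = 0;
        while (a != null || b != null) {
            row++;
            // a null on either side means one export has more rows than the other
            if (a == null || b == null || !a.equals(b)) {
                mismatches++;
                System.out.println("Row " + row + " differs: [" + a + "] vs [" + b + "]");
            }
            a = legacy.readLine();
            b = rewrite.readLine();
        }
        legacy.close();
        rewrite.close();
        System.out.println(mismatches == 0 ? "Data sets match" : mismatches + " mismatching rows");
    }
}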

Using a DBlink in this case would be a costly operation, keeping in mind the full table scans and any joins involved; reading the data from the tables separately and then comparing it on the local machine reduces the I/O by a huge amount, considering the huge number of rows that a MINUS over a DBlink would otherwise have to compare.

The same approach can also be applied when trying to insert data from one database server into a different database server.

Please let me know if anyone is interested in seeing my Excel file for this operation.

I hope you enjoyed this alternative to the DBlink.

Regards,
Siddharth Gupta

Sunday, 29 April 2012

OBIEE Assignment - Designing the Dashboard


The BI dashboard was one of the features of Business Intelligence I admired most. During my industry experience I had a chance to work on BI dashboards using MicroStrategy, but that work was limited to analyzing the data, which was more about SQL than designing reports and building dashboards. With this assignment of making reports and a dashboard, I got a chance to actually analyze the problem and build effective reports that answer some of the business questions.

The methodology our team followed to build the reports and dashboard was to give the user full flexibility to run the report with whatever parameters he or she wanted; we achieved this by using prompts in the OBIEE environment.
I proposed this approach because of my previous work experience. We had a common dashboard and common reports for all the business users; the only thing that differed was the specifics, such as the revenue details for a given property. The fact table contained the data for all properties, it was just that specific users were interested in the data for their own property, so when they ran the report they simply entered the property ID for their property. We also purposely allowed multiple selections of time values from the time dimension so the user could analyze the data for any specific time period or for just one year.

Drill-down functionality was the last feature I wanted to implement, but I could not do it because there was not much help available on the internet or in the Oracle OBIEE documentation.

The assignment was good practice for me in the sense that it gave me an opportunity to build reports according to the problem at hand and create a dashboard by visualizing the data. I was able to put my MicroStrategy experience to use while creating the reports and dashboard, and still use the classroom teaching to analyze the problem and decide on the KPIs.

Regards,
Siddharth Gupta

SSIS Tutorial

The SSIS tutorial was one of the more mind-squeezing assignments, although not graded. Coming from an ETL background, I knew the assignment would be challenging in terms of learning a new environment. I was also interested in it because I wanted to test my learning curve on a new ETL tool, having previously worked on Informatica and currently working on IBM DataStage at UITS - Mosaic. SSIS was easy to pick up, although it had some cool transformations that I had never come across or never used during my industry experience.

Working on a different ETL tool, I was able to figure out the most important thing, which was also pointed out by our professor Dr. Ram: apply your knowledge to the problem at hand. Solving the problem becomes a trivial task once you know how to use that knowledge (in this case, SSIS transformations). A wide variety of tools will keep emerging in the market and you may not become a pro at all of them; the key is to know the underlying concepts and then use the technology to accomplish the task.

Although I had to follow the tutorial to set up the environment, as I was not familiar with SSIS, once I had the ETL flow diagram set up in my mind it was pretty much dragging and dropping the transformations and changing parameter values.

Nevertheless, the SSIS assignment was yet another big learning experience for me in the BI class this semester, after the GOMC project and the OBIEE dashboard. I am looking forward to following the MSDN links and learning more about it.

Regards,
Siddharth Gupta

Sunday, 22 April 2012

INFOGRAPHIC RESUME

The infographic resume was one of the toughest assignments in the BI class this spring. Although in the class lecture it looked like a good idea to present yourself through pictures rather than a textual representation, it is very tough to select a theme that shows your passion and still conveys your skills and achievements.

Finally I decided to base my infographic resume on the theme of the famous Microsoft game Age of Empires, which I used to play in my undergrad; playing games is one of my passions, be it on the computer or on the iPad.

Infographic Resume Theme:
Feudal Age
I have tried to portray my resume as the story of a soldier from the Persian civilization who, over a period of time, has gained skills by passing through different stages.
The first is the Feudal Age, the lowest rank of a soldier, i.e. when I was an undergrad at Panjab University.


Castle Age


As the civilization grows, the soldier improves in rank and becomes much more skillful; this is shown in the adjoining image.

Imperial Age

Finally, when the civilization reaches the Imperial Age, the soldier attains the highest level of skills, as shown below.




With the gaming theme I have tried to present my resume in a way that suits my passion; I hope you appreciate it.

Tuesday, 17 April 2012

What is the difference between relational and multidimensional database implementations?

This was my second question on LinkedIn, asked to get a more industry-oriented answer. It generated a lot of discussion on relational versus multidimensional database implementations. Some of the feedback from experts in this field is outlined below:

Michel Voogd:
“The difference in implementation is that a multidimensional database includes pre-packaging subsets of data into small objects that are usable for fast online browsing, usually in a BI portal environment such as Cognos or Business Objects.
A relational database in itself doesn't include those packages but it would allow querying larger datasets.”

Bala Seetharaman:
“Relational DB - ER modeling; it has to comply with Codd's 12 rules. Here you can store data only in the way supported by the DB engine (you can partition or use multi-file groups).
Multidimensional DB - dimensional model; it stores pre-aggregated data in multidimensional form, with the data still sourced from a relational DB or flat files. (Here you can store it in the form of MOLAP, ROLAP, HOLAP and DOLAP too.)

SQL - Query language used to search and manipulate the data from Relational DB
MDX - Multidimensional Query Expression - used to search and retrieve the data from cube or MDB (Multidimensional) store.

Siddharth: To answer your question "is there any different tool or language to query the multidimensional database (CUBE)":
MDX is the query language used to query the cube, much like your SQL; again, it is not like your ANSI-standard SQL, as we need to write it in terms of 3D axes.

The calculations are quite a bit easier in an RDB than an MDB; here, if we don't understand the dimension and hierarchy members, we can't get the result easily from the cube.”
John McKenna:
“….In relational databases data is organized by tables and columns (tuples), and records are grouped into blocks for storage and access. Querying is performed based upon relational algebra (SQL). In multi-dimensional database implementations (most no longer exist), data is organized into multi-dimensional cubes (think multi-dimensional arrays) and queried with a language suitable for navigating cubes (I am not aware of a standard, although one may exist). To further muddy the waters you have columnar databases that group column data into blocks (efficient for ROLAP applications where few columns are in the result set, therefore fewer blocks traversed).

In addition to the database implementations, many reporting tools have (OLAP/cube) functionality built in, but many of these are not full-blown multi-dimensional databases but scaled-down persistence engines that store all cube values together. Most full-blown multi-dimensional databases have faded away due to performance issues (sparsity problems, etc.), the need to learn new query languages, the burden of supporting multiple database platforms, and people finding that it was relatively easy to implement cubes in relational databases (ROLAP) by using dimensional database design (Ralph Kimball). …….”


The next question was about the query methodology for both types of database implementation; luckily I came to know that Oracle has also implemented a multidimensional database architecture called Essbase, and on the SQL Server side it is SSAS and SSRS.

There is still a lot of information on my profile if you want to have a look. Compiling all the notes is actually a tedious job, so I have tried to aggregate some of the most valuable comments.

For the detailed discussion, please follow the link.

Sunday, 26 February 2012

BI Second Project - Twitter API programming

Hi All,
I hope everyone is almost done with their first project execution plan; Dr. Ram has already asked us to start planning the second project. I have a lot of expectations from this project as many skills will be put to the test, and one of the challenging parts will be API programming against Twitter.

This tutorial shows how to integrate the Twitter API into your second BI project. I have spent almost two weeks on this, and I don't want you folks to waste that much time, so get started right away.

Here are the steps to get started with the Twitter API.

1) Download the Twitter API library twitter4j-core-2.2.5 from twitter4j.org.

2) Locate the above jar inside the downloaded zip file and place it on your local machine.

3) Launch Eclipse, right-click the project into which you want to import the twitter4j API, and select Build Path --> Configure Build Path. Below is the screenshot.

4) Under the Libraries tab, click "Add External JARs" and import the jar file you downloaded.

5) Under the project folder you should now see the subfolder "Referenced Libraries".

Just check that all the packages and classes are there. If they are, you are good to go.

Now you need to access the Twitter data with this API, but for that you need a consumer key and a consumer secret.

1) Go to twitter.com.

2) Scroll down until you find the developer hyperlink, click it, and get yourself registered.

3) Fill out the form and get your consumer key and consumer secret.

4) Use the code below to test the connection.

import twitter4j.*;
import twitter4j.auth.AccessToken;

// replace the placeholders with the keys from your Twitter developer account
Twitter twitter = new TwitterFactory().getInstance();
twitter.setOAuthConsumer("[consumer key]", "[consumer secret]");
AccessToken at = new AccessToken("[token]", "[token secret]");
twitter.setOAuthAccessToken(at);
try {
    // if this call returns without an exception, the OAuth setup is working
    RateLimitStatus rls = twitter.getRateLimitStatus();
    System.out.println("Remaining API calls this hour: " + rls.getRemainingHits());
} catch (TwitterException e) {
    e.printStackTrace();
}

5) Use any of the API functions and start playing.
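For example, here is a quick sketch of pulling tweets with the search API, using the same twitter instance from step 4; the search term is just a placeholder, so use whatever your project needs.

try {
    // search for recent tweets matching the placeholder term
    QueryResult result = twitter.search(new Query("business intelligence"));
    for (Tweet tweet : result.getTweets()) {
        System.out.println("@" + tweet.getFromUser() + ": " + tweet.getText());
    }
} catch (TwitterException e) {
    e.printStackTrace();
}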

I hope this helps you with your second BI project.

Please don't share the code anywhere else.