""

Technical

How to configure SQL Server connectivity for WebI from SAP BusinessObjects BI4.0 in Linux

Nowadays we have noticed that some of our customers are following the trend of open source products. Indeed, Linux is a great choice of operating system: it is fully compatible with SAP BusinessObjects BI 4 and it also helps companies to cut costs. However, Linux retains the way the classical Unix operating system works, so everything comes down to rights and batch commands, and advanced Linux technical know-how is essential before getting into it.

The purpose of this blog entry is to share the issues we faced at one of our customers running SAP BusinessObjects BI4 SP4 on Red Hat Enterprise Linux Server release 6.3 with MySQL 5.1.61 as the system database, and how we solved them.

The issue came out when, right after a production database migration (to a brand new SQL Server 2008), all their WebI documents stopped running from the SAP BI4 Launchpad with an unusual error, "Database Error .[ (IES 10901)", blocking every single WebI document and jeopardizing the whole core business. WebI Rich Client on Windows did not experience any problem. After a first analysis, we discovered that the default SQL Server ODBC driver installation on the Linux server was only configured properly for 32-bit connections, whereas WebI requires 64-bit ODBC driver connectivity to run in the SAP BI4 Launchpad.

At this point we had to apply a couple of OSS notes. The first one was OSS note 1607125 "How to configure SQL Server connectivity for WebI from a BI4.0 unix environment". The resolution is:

1. Open env.sh under <install directory>/sap_bobj/setup/

2. Search for the following line

LIBRARYPATH="$LIBDIR:$LIBDIR32:$WCSCOMPONENTDIR:$PLUGINDIST/auth/secEnterprise:${CRPEPATH64}:${CRPEPATH}:${MWHOME}:$PLUGINDIST/desktop/CrystalEnterprise.Report:${BOBJEDIR}enterprise_xi40/$SOFTWAREPATH32/ras:${BOBJEDIR}mysql/lib"

3. Modify the line above by adding the following

":${BOBJEDIR}enterprise_xi40/linux_x64/odbc/lib:${BOBJEDIR}enterprise_xi40/$SOFTWAREPATH32/odbc/lib"

The line should look like this

LIBRARYPATH="$LIBDIR:$LIBDIR32:$WCSCOMPONENTDIR:$PLUGINDIST/auth/secEnterprise:${CRPEPATH64}:${CRPEPATH}:${MWHOME}:$PLUGINDIST/desktop/CrystalEnterprise.Report:${BOBJEDIR}enterprise_xi40/$SOFTWAREPATH32/ras:${BOBJEDIR}mysql/lib:${BOBJEDIR}enterprise_xi40/linux_x64/odbc/lib:${BOBJEDIR}enterprise_xi40/$SOFTWAREPATH32/odbc/lib"

4. Navigate to <install directory>/sap_bobj/enterprise_xi40

5. Open the odbc.ini file using vi or another text editor.

6. Find the entry for the SQL Server DSN. The default DSN entry in odbc.ini is called "[SQL Server Native Wire Protocol]", but it is recommended that you create your own DSN entry using the same parameters specified in the default DSN.

7. Update the "Driver" section of the DSN to point to the 64-bit version of the SQL Server ODBC drivers:

Driver=<install directory>/sap_bobj/enterprise_xi40/linux_x64/odbc/lib/CRsqls24.so

8. Restart the SIA

However, the issue was not completely resolved. Whenever we tried to run a WebI document we received a new error: "Database error: [DataDirect][ODBC lib] System information file not found. Please check the ODBCINI environment variable.. (IES 10901) (WIS 10901)". This is a configuration issue with the ODBCINI environment variable on the Linux operating system. Please make sure your environment variables are set correctly according to OSS note 1291142 "Web Intelligence reporting using DataDirect drivers in Unix" (as of today it still applies to BI4). The resolution is:

1. In the Bobje user's Unix profile, add/modify the following environment variables and source the profile

BOBJEDIR=<install_path>/bobje
export BOBJEDIR
ODBC_HOME=$BOBJEDIR/enterprise120/<platform>/odbc
export ODBC_HOME
ODBCINI=$BOBJEDIR/odbc.ini
export ODBCINI
LD_LIBRARY_PATH=$BOBJEDIR/enterprise120/<platform>/dataAccess/RDBMS/connectionServer:$ODBC_HOME/lib:$BOBJEDIR/enterprise120/<platform>/:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH

NOTE: For AIX replace LD_LIBRARY_PATH with LIBPATH; for HP-UX use SHLIB_PATH.
NOTE: Replace <platform> with linux_x86, solaris_sparc, aix_rs6000 or hpux_pa-risc, depending on your specific Unix platform.
NOTE: You must set/export the above environment variables in the same order as shown.

Please make sure to use the file $HOME/.odbc.ini as your default source for ODBC settings. Therefore, modify the ODBCINI variable in the following way:

ODBCINI=$HOME/.odbc.ini
export ODBCINI
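Note that OSS note 1291142 was written for the XI 3.1 directory layout (enterprise120), which does not exist in a BI4 installation. As a minimal sketch, assuming the default BI4 layout already shown in the env.sh and .sbo steps of this article, the equivalent exports on Linux x64 would look like this (verify the exact folders against your own installation):

BOBJEDIR=<install directory>/sap_bobj
export BOBJEDIR
ODBC_HOME=$BOBJEDIR/enterprise_xi40/linux_x64/odbc
export ODBC_HOME
ODBCINI=$HOME/.odbc.ini
export ODBCINI
LD_LIBRARY_PATH=$BOBJEDIR/enterprise_xi40/dataAccess/connectionServer:$ODBC_HOME/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH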

2. Modify the odbc.ini to add the DSN

[TestDSN]
Driver=<install_path>/enterprise120/<platform>/odbc/lib/CRmsss23.so
Description=DataDirect 5.3 SQLServer Wire Protocol Driver
Address=<sql_server host or ip>,<port>
Database=<db_name>
QuotedId=Yes
AnsiNPW=No

NOTE: Your DSN name (TestDSN) must be the same DSN name you used when creating the ODBC connection in Windows.

3. DataDirect provides both NON-OEM drivers and OEM drivers

The drivers provided with BI4 are OEM drivers, and WebI depends on the ConnectionServer, which by default is set to use NON-OEM drivers. Thus, we edited the ConnectionServer configuration to allow the use of the OEM-branded DataDirect driver. The steps are:

  • Make a backup copy of $BOBJEDIR/enterprise120/<platform>/dataAccess/RDBMS/connectionServer/odbc/odbc.sbo
  • Open odbc.sbo with vi and search for DataDirect; there are 4 entries, one for each supported MS SQL Server version.
  • Change all 4 from No to Yes: <Parameter Name="Use DataDirect OEM Driver" Platform="Unix">Yes</Parameter>

 4. Stop all XI servers

Run ./stopservers, log out completely from your Unix shell and log back in (to make sure the new environment variables are picked up), then start all BI4 servers again.

After applying the OSS note we were able to retrieve data from SQL Server 2008 by refreshing our WebI documents; however, we noticed that CPU usage was reaching 100% every time we used a WebI document in any way. Going through the log files we found errors such as "MS SQL Server 2008 |JobId:61340512 |EXIT SQLGetDiagRec with return code -1 (SQL_ERROR)".

We took a look at the odbc.ini file and found a QEWSD entry that was not initially there and had somehow appeared. Since we had copied the information from an existing data source we did not need it at all, so we decided to remove the QEWSD=<random string> line from the ini file.

Finally, double-check that <Parameter Name="Use DataDirect OEM Driver" Platform="Unix">Yes</Parameter> in the sqlsrv.sbo file, located at /opt/bi40/sap_bobj/enterprise_xi40/dataAccess/connectionServer/odbc, is set to Yes.

We hope our experience helps you solve this problem quickly. If you have any tips or suggestions to improve this article, please leave a comment below.

Managing ETL dependencies with BusinessObjects Data Services (Part 1)

Are you satisfied with the way you currently manage the dependencies in your ETL? Dependencies between jobs (or parts of jobs) are an important aspect of ETL management. They raise questions like: Do you want to execute job B if job A failed? Imagine that you have a job C with sub-job 1 (usual runtime: 3 hours) and sub-job 2 (usual runtime: 2 minutes). If sub-job 1 was successful and sub-job 2 failed, can you gracefully restart job C without sub-job 1 running again?

As soon as you have more than 1 simple job, you have to manage your dependencies. In this article (part 1 of a series of articles about ETL Dependencies Management) I’ll first list some of the characteristics I’m looking for in an ideal dependency management system. I will then have a look at some of the possibilities offered by SAP Data Services 4. In part 2 (my next post), I will propose the architecture of a possible dependency management system. In part 3, I will go into the details of the implementation in Data Services. I’ll finish with part 4 by telling you about how the implementation went, and if some improvements are possible.

The ideal dependency management system

In this post I will use the word "process" to designate a series of ETL operations that belong together. Examples: extracting a source table, creating a dimension, or updating a fact table. The objective here is to manage the dependencies between the processes: updating a fact table should probably only be allowed if updating the corresponding dimensions was successful.

A dependency management system should ideally have at least the following characteristics:

  • Run a process only if its prerequisites ran correctly
  • After a failure, offer the option to re-run all the processes or only the processes which failed
  • Trace the outcome of each process (ran successfully, failed, did not run)
  • Run dependent processes dynamically (rather than statically, i.e. based on date/time)

The possibilities

Let’s enumerate some of the possibilities offered by Data Services, with their respective pros and cons.

1) One job with all processes inside. This is very easy to implement, dynamic in terms of run times, but it doesn’t allow for concurrent runs. Most importantly, it means that failures have to be managed so that the failure of one process does not stop the whole job.

2) One process per job, with jobs scheduled at specific times. This is very easy to implement, allows concurrent runs, but is not dynamic enough. If the process durations increase with the months/years, jobs may overlap.

3) One main job calling other jobs (for example with execution commands or Web Services).

4) One process per job, all the jobs being scheduled at specific times, but checking in a control table if the pre-requisites ran fine. Otherwise they just sleep for some time before checking again.

5) Use the BOE Scheduler to manage jobs based on events (how-to is well described on the SCN). I’ve not tested it yet, but I like this approach.

By default, the first two possibilities only manage the “flow” side of the dependency management (after A, do B). But they do not manage the conditional side of the dependency management (do B only if A was successful). In both cases, a control table updated by SQL scripts would allow the ETL to check if the prerequisite processes have been run correctly.
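To make the conditional side concrete, here is a minimal sketch of such a control table and the SQL around a process run. The table and column names are hypothetical, and in Data Services these statements would typically be issued from script objects (for instance via the sql() function):

-- Hypothetical control table: one row per process run
create table etl_process_log (
    process_name varchar(100),
    run_date     date,
    status       varchar(20)  -- 'RUNNING', 'OK' or 'FAILED'
);

-- Before starting process B, check that its prerequisite A succeeded today
select count(*)
from etl_process_log
where process_name = 'A'
  and run_date = current_date
  and status = 'OK';

-- After process B finishes, record its outcome
insert into etl_process_log (process_name, run_date, status)
values ('B', current_date, 'OK');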

What I don't really like in solutions 2 to 5 is that it is difficult to get an overview of what is going on. You cannot really navigate the whole ETL easily. Solution 1 gives you this overview, but at the cost of a potentially huge job (without the possibility of processes running concurrently).

Also note that the solutions with multiple jobs will need to manage the initialization of the global variables.

What I miss in all these solutions is an optimal re-start of the ETL. If 10 of my 50 processes failed, and I want to restart these 10 only, do I really have to start them manually?

In my next blog post I’ll propose an architecture that addresses this optimal restart.

Until then, please let me know your thoughts about how you manage your ETL dependencies. Any of the 5 solutions mentioned above? A mix? Something else? And how well does it work for you?

Use Data Services SDK libraries to construct an AWTableMetadata in a Java application

If you have a Java application that returns a table and you are planning to use it as a source of information for SAP Data Services, the best approach is to return a table with the same data type as the Data Services template table "AWTableMetadata". In this article I will explain how to do that easily.

First you need to go to the libraries folder inside your SAP BusinessObjects installation (…\SAP BusinessObjects\Data Services\lib). From this folder we have to import the following libraries into our Eclipse Java project.

  • Acta_Adapter_sdk.jar
  • Acta_broker_client.jar
  • Acta_Tool.jar

The easiest way is to put these libraries inside your Java ext libraries folder so your application will import them automatically. Also, if you are planning to deploy the application on a server, you need to place these libraries inside the server library folder too.

  • …\Java\jdk1.7.0\jre\lib\ext
  • …\Java\jre7\lib\ext
  • …\SAP BusinessObjects\Tomcat6\lib

Import these libraries inside the project:

import com.acta.metadata.AWAttribute;
import com.acta.metadata.AWColumn;
import com.acta.metadata.AWTableMetadata;

Once we have the libraries imported into our Java project, we have to declare AWTableMetadata as the return type of the function in charge of constructing the table:

public static AWTableMetadata createAWTable() throws Exception {...}

Then we are ready to construct our table. To do so we have to:

  1. Declare the table:
    1. AWTableMetadata awTable = new AWTableMetadata();
    2. awTable.setTableName("……");
  2. Assign the rows and columns:
    1. AWAttribute[] attributes = new AWAttribute[2000];
    2. AWColumn[] columns = new AWColumn[2000];
  3. Assign the attributes and columns to our table:
    1. awTable.setColumns(columns);
    2. awTable.setAttributes(attributes);

Finally, we return the table with "return awTable;".
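Putting all the fragments together, a minimal sketch of the complete function could look like this (the class name, table name and array sizes are illustrative; only the constructors and setters shown above are used):

import com.acta.metadata.AWAttribute;
import com.acta.metadata.AWColumn;
import com.acta.metadata.AWTableMetadata;

public class AWTableBuilder {

    // Builds an AWTableMetadata skeleton that can be returned to Data Services.
    public static AWTableMetadata createAWTable() throws Exception {
        // 1. Declare the table and give it a name (illustrative name)
        AWTableMetadata awTable = new AWTableMetadata();
        awTable.setTableName("MY_SOURCE_TABLE");

        // 2. Allocate the attribute and column arrays
        AWAttribute[] attributes = new AWAttribute[2000];
        AWColumn[] columns = new AWColumn[2000];

        // 3. Assign the attributes and columns to the table
        awTable.setColumns(columns);
        awTable.setAttributes(attributes);

        return awTable;
    }
}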

In conclusion, once our function is done we will be able to communicate and exchange data with Data Services through our application, in this case via a table, and use our application as a data source.

If you have any doubts or suggestions, please leave a comment below.

Tuning SAP Rapid Marts XI: Streamline delta loads

You have finished your standard implementation of SAP Rapid Marts XI and everything went fine, but your customer starts to have issues regarding the time consumption of the delta loads. In this article I will explain a couple of approaches to achieve better performance on delta loads of SAP Rapid Marts. In the image below we have the typical infrastructure of SAP Rapid Marts, loading into one single data warehouse.

This infrastructure has pros and cons but I will highlight two main advantages:

  • Avoids duplication of information
  • Simplifies maintenance from customer perspective

 

1st Approach: One job runs it all

Taking the architecture illustrated above as our basis, the first step to achieve better performance will be to create one single ETL job to run the different SAP Rapid Marts involved in our implementation.

This task is simple; just create one workflow per SAP Rapid Mart containing all the different workflows that are part of each SAP Rapid Mart. Once this task is done, create an ETL job with all the corresponding global variables, drag and drop all the workflows and connect them to create a sequence of execution.

This job also allows us to take advantage of the "execute only once" option in SAP Data Services. This option is set for all the components in SAP Rapid Marts and ensures that each component is executed only once within the same ETL job execution, even if it is called from several places. If you take into account how many components are shared between different SAP Rapid Marts, this approach becomes very interesting.

In addition, this approach allows us to create a try/catch strategy in the ETL process. Some customer environments have intermittent issues that can crash the execution of our daily loads (i.e. network errors). We place try and catch statements around every workflow of the job, and inside the catch statement we place the same workflow we were trying to execute again, as the following image illustrates:

The try/catch + ”Execute only once” strategy allows you to retry the execution of a component of the ETL job and continue the execution where it stopped. 

Once this idea is implemented the execution of your SAP Rapid Marts will be more robust and optimized, but maybe not enough to fulfill your customer's expectations… so let us move on to the second step.

 

2nd Approach: Working around a parallel execution

Analyzing the information of the Performance Reports generated in SAP Data Services Management Console after the execution of a job, you will be able to identify the components with the worst execution times.

These components can vary from one implementation to another depending on your customer's environment; within the top 10 worst execution times you will find some components generating information for dimensions and/or fact tables of the model. Some of these components can easily be removed from your sequential execution and placed in a separate job to be executed in parallel.

It is critical at this stage to ensure that these components are completely removed from the sequential execution and that any final output of the component is not used in other parts of the ETL process (i.e. subsequent table lookups). To ensure this, the function “Where is used” of the SAP Data Services Designer will be extremely helpful.

In my experience, after applying these two steps you should see a considerable improvement in the execution performance of delta loads. To give you an example, in one of our recent implementations we started with an execution time of 17 hours for five SAP Rapid Marts running sequentially; this was decreased to 6 hours using the two approaches I have described in this post.

 

Digging deeper

If even after applying the previous steps you still face bad performance in isolated components, the situation will require more analysis and customization at a lower level.

Some components of the standard SAP Rapid Marts try to execute complex logic on the ERP side, which can take a very long time (i.e. SAP General Ledger Rapid Mart + SAP Note 1557975, or SAP Inventory Rapid Mart + SAP Note 1528553).

In these cases, the workaround is to split the process into several steps, possibly making use of custom tables on the ERP side; the performance boost can be remarkable. I can tell you that in our most recent implementation one of the components was taking no less than 12 hours to run, but after we analyzed and modified the behavior of the component to make use of one custom table on the ERP, it took no more than 30 minutes. This customization took 2 man-days to implement completely.

In conclusion, my experience with the SAP Rapid Marts is very positive. SAP provides a rapid deployment solution that can be up and running end-to-end in a few weeks. Furthermore, it provides an extremely easy-to-use framework that gives your customer the ability to develop any level of customization in a few weeks. Overall, we are looking at a solution that allows your customers to create their own data warehouse in weeks instead of months. If we can improve the delta load performance, the solution becomes even more appealing to your customers and helps to increase satisfaction levels with the tool.

That's all folks! I hope this article helps you raise the bar in your SAP Rapid Marts implementations. If you have any doubts, feel free to leave a comment below.

Problems Uninstalling Data Services

I faced a problem recently and wanted to share the resolution, in case you have to deal with the same topic. I was trying to upgrade a Data Services machine following the SAP procedure (that is, copying the configuration files, uninstalling, and then installing the new version; not very sophisticated, as you can see). This wasn't as simple as I first thought.

The problem started after uninstalling the software: the new version refused to install, stating that I should first uninstall the previous version. I uninstalled the software again… but Data Services was still there, so I uninstalled again, but this time the process failed (which makes sense, as the software was already uninstalled), so I kept trying… reboot… uninstall… reboot… rename the old path… reboot… you see where this is going…

So, how did I finally solve this?

  1. Start Registry Editor (type regedit in a command window or in the Run dialog).
  2. Take a backup of the current Registry content. To do this, with the top node of the registry (Computer) selected, go to File -> Export and select a name for the backup file.
  3. Delete the key HKEY_LOCAL_MACHINE\SOFTWARE\Business Objects\Suite 12.0\EIM (Suite XX.X may vary). NOTE: You may want to write down the key HKEY_LOCAL_MACHINE\SOFTWARE\Business Objects\Suite 12.0\EIM\Keycode first, as it contains the license code.
  4. Go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall and look for a key whose DisplayName property is "BusinessObjects Data Services", then delete it. This step removes the entry for the software in the Windows Uninstall dialog.
  5. Finally, delete the content of the installation directory (typically C:\Program Files\Business Objects\Business Objects Data Services).

Now you can launch the installer and this time it should work.

I hope this helps in case you are experiencing the same issue. Leave a comment below if you have any doubts or would like to add anything.

 

Tip of the day - Table and index size in Oracle

Ever wanted to find out how big the tables in your data warehouse or in your ETL storage area are? Here is a quick tip.

You can get the size of each table belonging to a specific user with the following code:

select sum(bytes)/1048576 Size_MB, segment_name Table_name
from user_extents
where segment_name in (
     select table_name from all_tables
     where owner = 'OWNER_NAME_HERE')
group by segment_name
order by 1 desc;
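Note that 1048576 is simply 1024 * 1024, so the division converts bytes into MB. If you are only interested in one specific table, the same idea collapses into a shorter query (substitute your own table name):

select sum(bytes)/1048576 Size_MB
from user_extents
where segment_name = 'YOUR_TABLE_NAME';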

In order to get the size of the indexes with the corresponding table names (useful when the indexes have system-generated names), we need another query:

select sum(u.bytes)/1048576 Size_MB, u.segment_name index_name, i.table_name
from user_extents u
join all_ind_columns i
     on u.segment_name = i.index_name
     and i.column_position = 1
where i.index_owner = 'OWNER_NAME_HERE'
group by u.segment_name, i.table_name
order by 1 desc;

If you have any doubts or suggestions, leave a comment below.

Export to Text automation in Web Intelligence

Users typically need their Web Intelligence (WebI) data tables exported automatically into text files in order to use them across other SAP BusinessObjects BI modules. Unfortunately SAP BusinessObjects, including the newest SAP BI4 release, does not include a direct option to automate the export of the content of a WebI document tab to text format. In order to cover this gap and achieve an Export to Text feature for WebI, we designed a fully automated process, which is shown in this article.

The problem

Users want to automatically export raw data tables from WebI to a TXT file, but none of the existing scheduling format options – PDF, XLS, CSV – is satisfying, because:

  • A PDF brings a static document that cannot be re-used directly
  • An XLS or XLSX file is limited to 65535 or roughly 1 million rows respectively
  • CSV does not export tables; it just exports the query content

Users of older releases could use the old Desktop Intelligence (DeskI) module as an alternative, but unfortunately it has been discontinued in the new SAP BusinessObjects BI4 release.

The consequences

Users see WebI as a "limited" module in terms of sharing options and export size. Moreover, some customers will not migrate to the new SAP BI4, especially those who rely heavily on query & analysis and export the result table to txt using DeskI. The future does not look very promising because:

  • Even if a manual Export to TXT has been available since SAP BI4 FP3, automation for it is not currently available and SAP does not have a release date for this feature
  • The DeskI alternative is not possible in SAP BI4. Even if a DeskI add-on is planned for coming versions, the future of its scheduling function is uncertain and corporations should not allow DeskI to be part of their BI roadmap.

The solution

The following method describes a way to schedule a WebI report with Export to Text functionality. It involves the use of the following items:

  1. A 1st WebI document with the table to be exported
  2. A Web Service pointing to that document table as a source
  3. A 2nd WebI document with just one query that sits on the Web Service created. No tables or charts are needed here
  4. A vbs script that adapts the output of this 2nd WebI document

Detailed steps to follow for every item are:

  1. The 1st WebI document contains all the development needed (queries, objects, variables, filters) and a table with the final data you would like to export
  2. This 1st WebI document must be edited with WebI Rich Client. Select the table you want to export -> Right Click -> Publish Block -> Create Web Service
  3. The 2nd WebI document, which contains the Web Service based query, can be scheduled to run with the following options:
    • CSV type
    • Double quote text qualifier, tab column delimiter
    • Export to a server folder (e.g. D:\)
    • Name it with a txt extension (e.g. Results1.txt)

See below a snapshot with the schedule configuration detail:

Configuration of the schedule in WebI for a txt export

This example applies to only one exported table, but multiple tables per document could be exported by ticking the "Generate separate CSV per Data Provider" option.

Once the schedule runs successfully, the result will be a text file (Results1.txt) with the content delimited by tabs, but with a small defect: the so-called text qualifier (double quotes) appears everywhere.

In order to remove this annoying text qualifier (double quotes), a program can be scheduled. You can use your own style, but if you copy and paste the following text into a file called "QuoteRemoval.vbs" it will do the job:

' Match every double quote in the file
set objRE = new RegExp
objRE.Pattern = """"
objRE.Global = True

' Path of the file produced by the WebI schedule
strFileName = "D:\Results1.txt"

' Read the whole file into memory
set objFS = CreateObject("Scripting.FileSystemObject")
set objTS = objFS.OpenTextFile(strFileName)
strFileContents = objTS.ReadAll
objTS.Close

' Strip the quotes and write the clean copy to a new file
strNewContents = objRE.Replace(strFileContents, "")
set objWS = objFS.CreateTextFile("D:\Results2.txt")
objWS.Write strNewContents
objWS.Close
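To run the script from a command line or from a scheduler, the standard Windows script host can be used (the script location below is an assumption):

cscript //nologo D:\QuoteRemoval.vbs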

The result of this script will be a perfectly formatted Results2.txt file.

Last but not least, you can build a system of events that triggers the different items sequentially, or embed these items in an object package that can be scheduled as a whole.

Applicability & Benefits

This method enhances the sharing options for the SAP BusinessObjects platform, allowing an unlimited amount of raw data to exit the platform through WebI automatically, and be re-used in Big Data modules like HANA, Visual Intelligence, Explorer or simply for individual consumption.

Looking even further, this turns WebI into a real ETL (Extraction, Transformation and Load) tool, providing integration capabilities to the end users.

Summarizing, this method:

  • Allows a better integration of SAP BusinessObjects with the corporate BI processes improving efficiency and effectiveness
  • Facilitates companies to opt for a migration to SAP BI4 release, with all the benefits that the newest platform brings

If you have questions about this method, or if you want to share your experience or tips, please feel free to leave a comment.

Turn data into actionable insight with BI

Making Better Data-Driven Decisions

Do you wish you had a clearer view of the performance of your company and feel you lack key information to guide your decisions? Is all the data you gather in different departments just piling up, isolated and useless? Taking your organization through the current fragile economy is already challenging enough without visibility of what happens inside it. In order to solve issues and take advantage of strengths you need to turn data into actionable insight. SAP business intelligence software solutions give you the visibility you need to make important business decisions based on key data and facts, not guesswork. They allow you to draw information from data, rather than just storing it for the sake of it.

Interactive dashboards and rich visualizations help you monitor your business performance at a glance, and the real-time insights allow you to adjust aspects of your business before they become a real problem.

Reporting allows you to access and transform corporate data into highly formatted and automatic reports, while interactive reports let you answer ad hoc questions and interact with data, building your own queries.

Analysis solutions help you determine trends from historical data and make better forecasts.

With data exploration tools you can find immediate answers to business questions in a search-engine manner.

With BI application design tools, your IT department will be able to create BI applications for specific audiences.

It's not necessarily a matter of implementing each and every one of the solutions. Depending on your particular needs and user types, you can select the most adequate tool. Take a look at the SAP Business Intelligence Solutions Comparison Matrix to understand a bit more about each product.

Take the example of Vodafone Turkey: they used Excel to manage their many marketing campaigns in the past, but this process was not only susceptible to human error but also time-consuming. They needed a functional solution to serve multiple users and help them understand campaigns and act on their results.

They implemented a central dashboard, a highly visual solution that could accommodate a large number of campaigns and variety of KPIs for both new and recurring campaigns. The Campaign Analytics Solution allows the team to analyze existing campaigns and design outlines for new ones based on key success factors. The dashboard also helps the team to understand the net take rate for each campaign compared to the targeted subscribers. And more significantly, marketers can now easily and definitively follow the revenue generated by each campaign.

If you wish to know how SAP Business Intelligence Solutions can help solve your company's specific needs, contact us on info@clariba.com or leave a comment below.

Data Quality - the basis for good BI

Usually companies learn about the importance of data quality management in the worst possible way – by dealing with the issues generated by the lack of it, and addressing data errors, data movement and unstructured data after many costly problems. If your data is lacking in quality, everything you learn from it is useless, as the information cannot be trusted. Without accurate customer and performance insight you will never be able to see which areas of your business need to improve. Data quality management solutions allow you to integrate, transform, improve and deliver trusted data that supports critical business processes and enables sound decisions. As you expand into new markets or develop new products this will become even more important: the more data you gather, the easier it is for problems to start occurring.

With SAP Data Services you can enjoy a single solution that encompasses data integration, data quality, data profiling, and text analysis. This will allow you to deliver trusted data that supports critical business processes and enables sound decisions.

To give you an example of the importance of data management, Vodafone Netherlands sought the help of Clariba to implement key reports within a maintainable BI solution, automating report generation and distribution, and to develop a dashboard with key indicators for management. However, the first phase of this project focused on ensuring that trusted data was provided from the current databases to the BI solution. Complex queries were streamlined and redundant data sources consolidated. Subsequently, BusinessObjects universes were developed for the central data warehouse and the CDR data mart. Only when the relevant data sources were available, with good quality data, did the Clariba team go on to develop the reports and dashboard.

Learn how SAP Analytics Solutions can help your company with its data quality management, making quality your goal. Contact us on info@clariba.com or leave a comment below.

Successful Change with SAP Business Intelligence

Delivering Business Transformation

The pace of change, especially due to the rapid deployment of new technology, is accelerating at an incredible rate. For businesses to remain competitive they need to keep up with these changes almost constantly; change can come from an expansion, a restructuring, a merger or acquisition, regulatory compliance and more. SAP Business Intelligence Solutions unveil key concepts and processes that are vital to the planning and execution of successful change strategies. Being aware of what happens in your business can optimize organizational change and smooth and speed up transition periods.

Take the example of Doha Bank, which had to adapt to a new mandate from the Central Bank of Qatar to report key balance-sheet figures on a monthly basis. The timeline was tight and the requirements complex, with the pressure of punitive fines for non-compliance.

The internal financial reporting was previously done manually, and the data required for the monthly reports was spread across several sources. After an intensive process of data cleansing and consolidation, the bank went for the SAP Business Intelligence solution called Web Intelligence, which allowed the reports to be produced automatically, in a timely and error-free manner.

Although we used the example of a big organization, these solutions can also be adopted by SMEs. SMEs use their speed of action as a competitive advantage to remain in the game with the big players, and they have to constantly adapt to change, be it in the market, in their organization or in their business model.

Find out how SAP Business Intelligence Solutions can deliver business transformation. Contact us on info@clariba.com or leave a comment below.