Building a 100% Cloud Solution with Oracle Data Integrator

by Ricardo Giampaoli and Rodrigo Radtke

January 2017

Abstract

Oracle has released a broad array of cloud services products, including Oracle Database Cloud Service (DBCS), Business Intelligence Cloud Service (BICS) and Planning and Budget Cloud Service (PBCS).

The initial idea was to provide an environment in which to develop small projects that could eventually be scalable to the point of doing away with the client’s on-premise environment. The possibility of having a top-tier database and BI tools without expending millions on a huge infrastructure is extremely attractive and heated up the market.

In fact, Oracle’s main objective is to target the small- and medium-size company market, with attractive prices for the dream BI project that they all have. Oracle is investing a lot in its cloud service products, as demonstrated by the number of releases installed monthly on its servers as well as by the number of projects flooding the consulting market.

The only thing that doesn’t match reality is that, when it comes to BI, there are no simple projects. Hunger for a BI tool makes companies put everything they can dream of—and more—inside this cloud environment, without respecting the initial idea of “small projects” or even the initial specs of the environment.

The biggest challenge here is to manage client expectations; apart from that, there are also some technical issues.

Since the idea was to have small projects for small or medium-sized companies, or even a department within a large company, the cloud product has a very limited and manual integration service. Oracle is still working on the ODI cloud service that’ll make the “Big Projects” possible.

But wait. Is that actually true? Could we create “Big Projects” with seamless ODI integrations on premise or even with ODI on cloud?

The answer: Yes!

Introduction: The Case Study

This case study concerns a company that closed a contract with Oracle for several cloud services and ODI. Their wish was to have a BI environment without the need for an on-premise infrastructure. In fact, this client’s databases are hosted by several different providers, meaning that the client has a very small IT infrastructure. The client wants to focus on its core business instead of spending time and money on IT infrastructure, and for that reason they contracted these Oracle services: PBCS, BICS, DBCS and ODI.

The project consisted of gathering Sales, PNL and Work Force data from four different ERPs. The data is provided in three different types of data source: SQL Server, Oracle Database and file system. The file system data comes from cash register tickets, which means that all the data is inside one file and must be stripped down in a given order.

These systems do not interact with each other; the ETL part will have to merge them in a unified datamart to provide a complete view of the company. The requirement would be that all sources converge in the DBCS; then, the PNL, Workforce and part of the sales data would be loaded to PBCS to provide Actuals information for forecast planning. More detailed sales data should be loaded to a star schema that will be used in BICS. The client also wants the forecast data from PBCS to be loaded in BICS as well.

All this must be done outside the client’s infrastructure, which is very basic (only user PCs), with no servers or databases.

Figure 1: Ideal Design Solution

The problems are:

  1. How will we get all this data from many different sources to the DBCS?
  2. How will we merge all this data to create a unified vision of the company and where will we do that, since the client does not have an on-premise database?
  3. How will we load PBCS with data generated from the ETL process?
  4. How will we show the forecast data from PBCS in BICS?
  5. How will we automate everything and make it fail proof?
  6. How will we do all of this if the client doesn’t have any IT Infrastructure (DBs and servers)?

Let’s first take a look at the tools involved and their limitations.

Introduction: The Tools

PBCS is simply a rebranded version of Hyperion Planning that has been stripped down so it can be hosted by Oracle on their servers. Basically, it is a Hyperion Planning that can have just one application per instance (cloud instance), a standalone service that uses files to load or extract data from it. It has restrictive integration options since it has no cloud integration available.

BICS is the cloud-based version of Oracle Business Intelligence Enterprise Edition (OBIEE). Until now, BICS has been able only to access databases/schemas within the Oracle Database Cloud, but a new feature now enables you to define external database connections so you can build models and run queries against your own on-premise databases. You just need to make sure the databases are available via a public IP address. This would be excellent if the client we’re discussing had an on-premise database, which is not the case.

Even so, this feature is very useful, since we can connect BICS to our DBCS schema, making the first integration between our environments. Now it’s just a matter of creating the DW inside DBCS and using it as the BICS schema. By the way, BICS cannot connect directly to Essbase (the Planning database) the way OBIEE does, since there is not yet connectivity between the cloud services.

DBCS provides a number of ways to load data:

  1. Using DBCS Console (APEX console) to export/import data
  2. Using Database Cart feature within SQL Developer (and Oracle JDeveloper)
  3. Using DBCS inbound REST Services
  4. Using DBCS to invoke SOAP and REST Services programmatically
  5. Using Oracle Data Sync, which uses the BICS APIs, has built-in scheduling, triggering, error handling and notifications, and can load millions of rows if required

  • Using DBCS Console: manual load style; low load volume; requires basic database knowledge; database-level integrity validation during loading; cannot import multiple objects in a single run.
  • Using Database Cart: manual load style; high load volume; requires developer or DBA skills; database-level integrity validation during loading; can import multiple objects in a single run.
  • Using inbound REST services: programmatic load style; low to medium load volume; requires REST and PL/SQL skills; PL/SQL-level and database-level validation during loading; can import multiple objects in a single run.
  • Using outbound SOAP and REST invocation APIs: programmatic load style; medium load volume (chunking recommended); requires PL/SQL skills; PL/SQL-level and database-level validation during loading; can import multiple objects in a single run.
  • Oracle Data Sync: programmatic load style; high load volume; requires developer or PL/SQL skills; PL/SQL-level and database-level validation during loading; can import multiple objects in a single run.

Table 1: DBCS Load Options

Since we are talking about a BI project and, in this case, about daily loads with around 200,000 rows from on-premises environments—plus heavy ETL jobs to merge all the different ERP’s data as well as the data that will come from PBCS—none of these options are good enough for this project:

  • The APEX and the Database Cart features allow manual load to the DBCS, meaning that they’re not useful in our case.
  • The REST service manipulates one row at a time, so it’s not useful for big volumes of data.
  • SOAP and REST are a bit more robust and could be used to manage our 200,000 rows per day, but we’d still have to handle some flat files, and we would have to get the data from four different sources at four different providers, each with different rules for accessing their environments.
  • Oracle Data Sync certainly would be able to handle the volume, could be scheduled/triggered after events, and could load from all sources, but it does not provide the strong ETL capabilities required for this project, in which we must merge data from four different ERPs, strip down cash register tickets, and orchestrate everything seamlessly.

We need another option for our datamart integration, and here’s where ODI comes in. ODI is a fully unified solution for building, deploying, and managing real-time data-centric architectures in a SOA, BI, and data warehouse environment. In addition, it combines all the elements of data integration, real-time data movement, transformation, synchronization, data quality, data management, and data services to ensure that information is timely, accurate, and consistent across complex systems.

Looks good, but there’s one problem with ODI: there’s no direct connection (KM) between ODI and the cloud services.

Design Solution

With a better understanding of the environment, tools and constraints, the only thing missing is to put everything together to shape a 100% cloud solution. But we have a problem with this “100%,” right? What about ODI, which is an on-premise application? How do we make it cloud? How do we make it access all the different data sources and load all the targets?

We need to find a centralized place where ODI can orchestrate the entire environment, but where? For a 100% cloud, this place should be in the cloud as well, which gives us three options: PBCS, BICS and DBCS.

  1. PBCS provides no access to its server and no remote connections. It’s a closed box accessed only via its web interface, SFTP and EPM Automate (which uses TLS 1.2).
  2. BICS also provides no access to its server and no remote connections. It’s a closed box accessed only via its web interface, SFTP and Data Sync (which also uses TLS 1.2).
  3. DBCS is a Linux machine hosted in the Oracle data centers, where you have remote access and can install software, open and close ports, add disks, create partitions, upgrade memory and pretty much do everything you want. Basically, it is an open Linux server in the cloud.
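For example, access to the machine is plain SSH. A minimal sketch follows; the key file name and public IP are placeholders for whatever was defined when the service was provisioned:

 # Connect to the DBCS machine as the opc OS user, using the SSH key
 # registered when the service instance was created (placeholder paths).
 ssh -i ~/.ssh/dbcs_private_key opc@<dbcs_public_ip>

 # Switch to the oracle OS user to work with the database installation.
 sudo su - oracle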

That sounds promising. If we can manage this machine as a regular Linux machine, that means we can:

  1. Install an ODI Client for developers
  2. Install an ODI Agent
  3. Install ODI EE to make it easy for users to see or execute jobs in the Operator
  4. Set up VPNs to all data sources, one for each provider
  5. Set up an SFTP server to let users drop files there
  6. Set up Samba to easily grant users access to the ETL log folders
  7. Install EPM Automate to manage PBCS from the DBCS machine
  8. And more...

DBCS is our centralized environment that will glue everything together seamlessly.

Note that DBCS has four service options:

  • Oracle Compute Cloud Service: For customers who need a cloud environment with complete control of Oracle software installation and setup. This service also allows “Bring Your Own License” for any Oracle product.
  • Oracle Database Cloud - Database as a Service - Virtual Image: For customers who need pre-installed Oracle Database software with complete control over the environment, and who do not require a running Oracle Database.
  • Oracle Database Cloud - Database as a Service: For customers who need a pre-configured Oracle Database with complete control over the environment and the new Oracle Cloud automation tools, and who require a running Oracle Database.
  • Oracle Database Cloud - Database Schema Service: For customers who need a development and deployment platform for Oracle Database and/or browser-based applications, without SQL*Net access.

Table 2: Oracle DBCS Service Options

All of these options allow us to build this environment except the last one: Oracle Database Cloud - Database Schema Service.

This is how the environment looks after everything is installed and configured (Figure 2).

Figure 2: Cloud Environment

Regardless of the client, the idea here is to use DBCS as the center of the cloud environment: BICS can access it out of the box, we can configure VPNs to access external databases, set up SFTP to manage files and Samba to give users easy access to logs or even to get or put files in the data feed, and we can install the ODI Agent, ODI Client and more.

If possible, the best solution design would be to have a separate machine on which to install all the needed tools, but for this project the client had some budget limits and we had to use the same machine as the Oracle DB. When using the same machine as the database, it’s always good to have additional hard drives on which to install the tools; in fact, the best approach would be to have one drive for the database software, another for the database files, and another one for the tools. This way, if anything happens with the database, we can do a full recovery without impacting our installations/configurations.

Even without ODI you can still realize benefits from this architecture: with all the VPNs in place you can use DBCS as a database inside your on-premise architecture. This lets you use heterogeneous services to access other databases as well as DB links; you can also create procedures and triggers, and even use GoldenGate to sync data between these environments.
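As a hedged illustration (the link name, credentials, schema and TNS alias below are all hypothetical), once a VPN makes one of the providers’ Oracle databases reachable from the DBCS machine, a DB link could look like this:

 # Run on the DBCS machine; PROVIDER_ERP is a hypothetical TNS alias that
 # resolves to a source database reachable through one of the VPNs.
 sqlplus <schema_owner>/<password> <<EOF
 CREATE DATABASE LINK erp_link
   CONNECT TO erp_user IDENTIFIED BY "<erp_password>"
   USING 'PROVIDER_ERP';

 -- Quick sanity check over the new link (hypothetical table name)
 SELECT COUNT(*) FROM some_erp_table@erp_link;
 EXIT;
 EOF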

DBCS Schema Configuration

This article does not intend to explain how to create an Oracle database instance, configure backups, etc. Assume that all the steps to make the database usable have been taken. Instead, let’s talk about schemas and table designs.

For each instance of PBCS and BICS you rent, you receive a test and a production environment. With DBCS, you receive a machine with a certain configuration, and you can decide what to run on it and how. If you get a DBCS with a big configuration, you could create two instances, one for DEV and another for PRD, and these instances would be “linked” with the respective PBCS and BICS environments.

If this is not the case, you can have just one instance and split your schemas by DEV and PRD schemas; this way you’ll be able to develop while the business is running interfaces and loading data in their environment.

The schema design for this case was a Sources area (where all data from all the source databases and files is loaded), a Stage area (where everything is transformed and merged), and a DW schema (where everything is loaded in its own respective format).

We also had two more schemas for the ODI repository, one for the master and the other for the work/execution repository.

Since we want to create a DEV and a PRD environment, this is the list of the schemas created:

Development Production
DEV_ODI_MASTERREP PRD_ODI_MASTERREP
DEV_ODI_WORKREP PRD_ODI_EXECREP
DEV_SOURCE_AREA PRD_SOURCE_AREA
DEV_STAGE_AREA PRD_STAGE_AREA
DEV_DW PRD_DW

Table 3: Schema naming
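A minimal sketch of how the schemas in Table 3 could be created on the DBCS instance (the DBA connection, tablespace, quotas and passwords are assumptions; adjust them to your own standards; the PRD schemas are analogous):

 # Create the DEV schemas used by the ODI repositories and the ETL layers.
 sqlplus system/<password> <<EOF
 CREATE USER dev_odi_masterrep IDENTIFIED BY "<pwd>" DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
 CREATE USER dev_odi_workrep   IDENTIFIED BY "<pwd>" DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
 CREATE USER dev_source_area   IDENTIFIED BY "<pwd>" DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
 CREATE USER dev_stage_area    IDENTIFIED BY "<pwd>" DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
 CREATE USER dev_dw            IDENTIFIED BY "<pwd>" DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
 GRANT CONNECT, RESOURCE TO dev_odi_masterrep, dev_odi_workrep, dev_source_area, dev_stage_area, dev_dw;
 EXIT;
 EOF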

ODI Installation and Configuration

The ODI install in the cloud environment is exactly the same as in an on-premise environment:

  • Install the ODI Client in the DBCS server.
    • You can use an on-premise client to access the schemas in the DBCS but, no matter how good your link is, it’ll be very slow. This is a characteristic of ODI: every time you access a database that is not in the same network, ODI gets extremely slow, even when it’s just different networks inside your intranet.
  • Configure the schemas using an ODI client.
    • Same way as in an on-premise environment.
  • Configure ODI Agent.
    • Same way as on-premise.
  • Configure topology.
    • With the VPN, you can configure topology normally as if all the databases are in your network, in this case in the DBCS network.

With ODI installed and configured we can move on to integration.
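One practical note: the standalone agent on the DBCS machine must be running for scheduled scenarios to execute. Here is a hedged sketch of starting it; the domain path and agent name are assumptions and depend on how and where the agent was configured:

 # Start the ODI standalone agent configured on the DBCS machine.
 cd /u01/app/oracle/odi/user_projects/domains/odi_domain/bin
 nohup ./agent.sh -NAME=OracleDIAgent1 > agent.log 2>&1 &

 # Follow the log to confirm the agent started and is listening on its port.
 tail -f agent.log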

Integrations

Since development in BICS and PBCS works the same way as with their on-premise counterparts, we’ll focus on the integration aspects, mainly between DBCS and PBCS, since BICS will access DBCS directly.

BICS Integration

The BICS integration is the simplest, since BICS already has an out-of-the-box integration with DBCS and, more recently, with on-premise databases (if the on-premise database has a public IP address).

To integrate BICS and DBCS, open the BICS instance and go to Console and Connections, as shown in Figure 3:

Figure 3: BICS Main Screen

Here, we just need to set the DBCS connection configuration:

Figure 4: BICS Connection Configuration

You have an advanced option to insert the TNS if you want to, but that’s all we need to do to integrate BICS with DBCS. After that, development for BICS is the same as if you were using the default database:

  • Create a new model
  • Reverse the table’s metadata
  • Join the dimensions to the fact
  • Create the logical columns and metrics
  • Create the logical dimensions
  • Publish the model
  • Create the dashboards

Everything that is loaded to the DW tables will show up in the dashboards. One less integration to worry about.

PBCS Integration

Here is where the challenge begins. First of all, let’s talk a little bit about PBCS. PBCS has two very different interfaces to work with: the regular interface and the simplified interface.

The regular interface is exactly the same as the on-premise Planning interface: it offers only one way to load and extract data and metadata, by importing and exporting files.

Figure 5: PBCS Import and Export Files

This is very handy when you want to start building the application, since you can just get a bunch of Excel files and start to load all the dimensions and data without any configuration or pre-settings. But when we talk about integration, this feature is of no use to us.

The simplified interface is a new, more modern interface that fits a lot better on a tablet or smartphone, and it has new features as well. The main feature we need to look at is the new Inbox/Outbox, the only point of contact with the external world. Everything you want to load must be uploaded there first, and everything you want to extract needs to be placed there for download. Here’s how to automate a process in PBCS:

Data or Metadata Load

  1. Upload the file with the data to the inbox.
  2. Create a job to import the data.
  3. Execute the job.

Data or Metadata Export

  1. Create an export job. (For data export it is possible to create Business Rules to export the data directly to the inbox folder: /u03/lcm/.)
  2. Download the files.

PBCS also offers a job scheduler to automate the process on the PBCS side. This is a very nice feature if you have only PBCS and want an automated process. However, we want a more structured process, and since we have ODI we don’t need to schedule any jobs; we just need to create the jobs for importing and exporting data and metadata.

But what are jobs? Jobs are basically templates with information about what you want to do. For example, a data export job will contain the POV of the data you want to export. Every time you run the job, that POV will be exported.

Let’s take a look at how to create a job. In PBCS, go to the Console menu:

Figure 6: PBCS Console Menu

Here we have everything we need to create and monitor our jobs. Basically, all data jobs are created from the Actions menu, and all metadata jobs are created from the Dimensions tab using the Export and Import buttons.

Let’s take a look at each of them:

Figure 7: Import and Export Data in PBCS Simplified Interface

As we can see in Figure 7, under the Console menu’s Actions we have Import Data and Export Data, as well as the Inbox/Outbox Explorer. Import Data is where we’ll create our job to import data.

There are a few options in this menu, and the only ones that matter for us are:

Figure 8: PBCS Import Data Job

Location: Must be set as Inbox. If you select Local, it will not enable the Save as Job button. (Local is used to manually import a file.)

Source Type: This will determine the format of the file. Essbase will require a file with the same format we use today to load data to Essbase, which means you need one file for each plan type. Planning is different, because the file will be loaded through planning instead of directly in Essbase. That means we can have just one file to load data to all plan types. To do that, the file must have a specific format:

Account BegBalance Dec Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Point of View Data Load Cube Name
100000 1462.06 100 110 120 130 140 150 160 170 180 190 200 210 "112001|USD|Actual|Final|FY16" PNL
200000 2462.06 100 110 120 130 140 150 160 170 180 190 200 210 "112001|PR123|USD|Actual|Final|FY16" Sales
300000 3462.06 100 110 120 130 140 150 160 170 180 190 200 210 "112001|IC2|USD|Actual|Final|FY16" WrkForce

Table 4: Planning Import Data Format

The format is very straightforward:

  • The account: what we want to load.
  • The periods: from BegBalance to the last period, which in this case is Nov.
  • The point of view: no matter how many dimensions we have, they all go here, so you don’t need to worry about table columns, etc. The entire point of view is loaded in this column, between quotes and split by the delimiter you define in File Type, in this case a pipe "|".
  • The Data Load Cube Name: the plan type to which you want to load the data.

This format is efficient in terms of integration, since it is easy to create generic components that load all the data from all the sources in this specific format. Also, since we don’t have much access to PBCS, loading everything we need at once is better than having many interactions with the system.

File Type: This is the column delimiter setting. You can define it as comma delimited, tab delimited, or any other delimiter that you want. I prefer a pipe "|" since it is not used anywhere else and it’s easy to spot.

These are the important options that we need to worry about. Everything else you can leave as default, even the name because we’ll send the name of the file using EPM Automate.

Now let’s see how to create an Export Data job.

Figure 9: PBCS Export Data Job

Creating an Export Data job is not that different from creating the Import Data job. We still have to select the Outbox option to enable the Save as Job button, then select the cube we want to export data from (which means we’ll need one export job per plan type), the column delimiter, and the POV (or Slice Definition).

The Slice Definition is pretty straightforward: you can use substitution or user variables as well as Essbase functions. And with that, we have created our data jobs.

Let’s take a look at the metadata jobs now. Still in the Console menu, we now need to click the Dimensions tab to find the Import and Export buttons.

Figure 10: PBCS Import and Export Metadata

Creating a job to import and export metadata is even easier than creating data jobs. To import metadata, we just need to select our old friend the Outbox to enable the Save as Job button, then insert the name of the file for each dimension, select the column delimiter, and indicate whether we want to Clear Members before the metadata load.

Figure 11: PBCS Metadata Import Job

And that’s it: we only have to upload the files to the Outbox and execute this job to load the metadata.

The format that PBCS expects differs depending on the dimension type (Account, Entity, Scenario, Version, Periods, Years, and User Defined Dimension).

Account: Account, Parent, Alias: Default, Valid For Consolidations, Data Storage, Two Pass Calculation, Description, Formula, Formula Description, UDA, Smart List, Data Type, Hierarchy Type, Enable for Dynamic Children, Number of Possible Dynamic Children, Access Granted to Member Creator, Allow Upper Level Entity Input, Process Management Enabled, Account Type, Time Balance, Skip Value, Exchange Rate Type, Variance Reporting, Source Plan Type

Entity: Entity, Parent, Alias: Default, Valid For Consolidations, Data Storage, Two Pass Calculation, Description, Formula, Formula Description, UDA, Smart List, Data Type, Hierarchy Type, Enable for Dynamic Children, Number of Possible Dynamic Children, Access Granted to Member Creator, Allow Upper Level Entity Input, Process Management Enabled, Base Currency

Scenario: Scenario, Parent, Alias: Default, Data Storage, Two Pass Calculation, Description, Formula, Formula Description, UDA, Smart List, Data Type, Hierarchy Type, Enable for Dynamic Children, Number of Possible Dynamic Children, Access Granted to Member Creator, Allow Upper Level Entity Input, Process Management Enabled, Start Year, End Year, Start Period, End Period, Include BegBal, Exchange Table

Table 5: PBCS Metadata Templates - Part 1

Version: Version, Parent, Alias: Default, Data Storage, Two Pass Calculation, Description, Formula, Formula Description, UDA, Smart List, Data Type, Hierarchy Type, Enable for Dynamic Children, Number of Possible Dynamic Children, Access Granted to Member Creator, Allow Upper Level Entity Input, Process Management Enabled, Version Type

Period: Period, Parent, Alias: Default, Data Storage, Two Pass Calculation, Description, Formula, Formula Description, UDA, Smart List, Data Type, Hierarchy Type, Allow Upper Level Entity Input, Process Management Enabled, Type, Start Period, End Period, DTS Generation

Years: Years, Parent, Alias: Default, Data Storage, Two Pass Calculation, Description, Formula, Formula Description, UDA, Smart List, Data Type, Allow Upper Level Entity Input, Process Management Enabled

User Defined Dimension: Dimension Name, Parent, Alias: Default, Valid For Consolidations, Data Storage, Two Pass Calculation, Description, Formula, Formula Description, UDA, Smart List, Data Type, Hierarchy Type, Enable for Dynamic Children, Number of Possible Dynamic Children, Access Granted to Member Creator, Allow Upper Level Entity Input, Process Management Enabled

Table 6: PBCS Metadata Templates - Part 2

These are all the properties that PBCS allows us to define when we are loading metadata. That doesn’t mean we need to provide all of them; basically, if you send a file with just the member (Dimension Name) and the parent information, PBCS should be able to load the metadata and default everything else.

Also, for each plan type we can provide some plan-type-specific properties per dimension (Year and Period don’t have these properties, since they always exist in all plan types):

  • Plan Type (PT): Whether the member exists in this plan type
  • Aggregation (PT): How the member aggregates in this plan type
  • Data Storage (PT): The member’s data storage for this plan type
  • Formula (PT): The member’s formula for this plan type
  • Formula Description (PT): The member’s formula description for this plan type

If five plan types are enabled in a PBCS application, you will have five sets of the properties above, one for each plan type, with (PT) replaced by the name of the plan type.

PBCS also has an ASO cube that is created automatically when you create the application, and the ASO cube is treated as a plan type inside PBCS. So with five plan types we actually have a total of six, and six sets of the properties above.

Exporting metadata is even easier: we just need to select the Outbox option, check the boxes for the dimensions we want to export, and then select the delimiter type. And that’s it.

Figure 12: PBCS Metadata Export Jobs

If we execute this job, PBCS will create a zip file in the Outbox. This zip file will contain one txt file for each dimension we selected. The file name format will be username_job_name_dimension.txt (for example, ServiceAdmin_ExportedMetadata_Account.txt).

Now we can take a look at how to use ODI to integrate PBCS.

ODI Integrations

Before we talk about the integration with PBCS using EPM Automate, let’s talk about the DW architecture for PBCS. We’ll not get too deep here since you can design your DW in any way you want, but the idea is to show the design that we thought was easier to maintain and to generate dynamic code.

Figure 13: PBCS Integration Design

As we can see in Figure 13, we can split our ODI integration environment in two: Providers and DBCS. In the Providers environment we have two different databases (Oracle DB and SQL Server DB) as well as Excel spreadsheets and text files. Since we have VPNs created for each provider, we can consider the databases as different databases in the same network, in the same place.

The DBCS environment can also be split in two: the database itself, with all the schemas created to better manage the ETL, and the file system, where we’ll have the final product that will be used to load or extract data from PBCS. The file system is the last frontier that ODI directly controls.

Now that we have the design of our ETL process, we just need to create steps to integrate them, and for that we need to understand the characteristics of each one of them:

  • Providers to Source Area: This is the first integration that happens. Here is where all the data filters are applied, to guarantee that our staging tables contain just what we need. It is always a good idea to load all columns as varchar, since we can otherwise run into problems with data formats, number formats and so on. It’s a simple interface just to get all the data we need into our integration environment.
  • Source Area to Stage Area: Here is where all the transformation happens. This is the area where we prepare the data in the PBCS format, where we map any kind of business rules for accounts, entities or anything else the business wants; it’s where we merge all the necessary data and where everything happens. The idea here is to turn the rough data into more refined data.
  • Stage Area to DW: The final touch. This is where we populate the last table, in the same format that Planning is expecting. It is also where we’ll have all the data from all the plan types, or all the cubes if you prefer. The idea here is to have just one table with all the columns needed for all plan types (i.e., one column for each unique dimension across all the plan types), plus one column to specify the plan type itself.

Table 7: PBCS Data Table Design

For the metadata table the concept is the same, but instead of dimensions we have properties. The table should have all possible properties for every dimension and two extra columns—one for the Plan Type and another for the Dimension.
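As a rough illustration of this design (the dimension columns below are hypothetical, since the real list depends on the application’s dimensions), the generic data table could look something like this:

 # Hypothetical generic data table: one column per unique dimension across
 # all plan types, the period columns of the load file, and the plan type.
 sqlplus dev_dw/<password> <<EOF
 CREATE TABLE dw_pbcs_data (
   account        VARCHAR2(80),
   begbalance     NUMBER,
   period_dec     NUMBER,
   period_jan     NUMBER,
   period_feb     NUMBER,
   period_mar     NUMBER,
   period_apr     NUMBER,
   period_may     NUMBER,
   period_jun     NUMBER,
   period_jul     NUMBER,
   period_aug     NUMBER,
   period_sep     NUMBER,
   period_oct     NUMBER,
   period_nov     NUMBER,
   entity         VARCHAR2(80),
   product        VARCHAR2(80),
   currency       VARCHAR2(30),
   scenario       VARCHAR2(30),
   version        VARCHAR2(30),
   fiscal_year    VARCHAR2(10),
   plan_type      VARCHAR2(30)   -- Data Load Cube Name (PNL, Sales, WrkForce)
 );
 EXIT;
 EOF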

And this is, in our opinion, a very good approach to load and extract data and metadata from Planning or PBCS. It’s an optimized way to create as many dynamic objects as we can and a very easy way to organize the inbound and outbound data.

And now, last but not least, the integration point: EPM Automate.

EPM Automate: The Final Touch

The EPM Automate utility acts as a bridge between ODI and PBCS, allowing us to integrate DBCS with PBCS (and it could bridge to an on-premise database as well).

The EPM Automate utility enables service administrators to remotely perform tasks within service instances and to automate many repeatable tasks, including:

  • Import and export metadata and data
  • Refresh the application
  • Run business rules on data
  • Copy data from one database to another (typically, from a block storage database to an aggregate storage database or from a block storage database to another block storage database)
  • Upload files into service instances, list files, and delete files from the service
  • Run a Data Management batch rule
  • Export and import application and artifact snapshots
  • Import pre-mapped balance data into Oracle Account Reconciliation Cloud (OARC)
  • Import currency rates, pre-mapped transactions, and profiles into OARC
  • Copy profiles to a period to initiate the reconciliation process
  • Deploy the calculation cube of an Oracle Hyperion Profitability and Cost Management Cloud application
  • Clear, copy, and delete Point of Views in profitability and cost management applications
  • Export and import template in profitability and cost management applications
  • Replay Oracle Smart View for Office load on a service instance to enable performance testing under heavy load

The utility uses Transport Layer Security 1.2 (TLS 1.2) to secure communication between your computer and the service. For a full reference of the possible commands, refer to docs.oracle.com.

The only thing we need now is a way to send commands to PBCS and get the results back: we need to import and export data and metadata. Remember that all four jobs were created in the PBCS Integration section of this article.
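Before wiring EPM Automate into ODI, it’s worth a quick manual smoke test from the DBCS machine (the install path, URL, user and identity domain below are placeholders):

 # Log in, list whatever is sitting in the inbox/outbox, and log out.
 cd /u01/app/oracle/EPMAutomate/bin
 ./epmautomate.sh login serviceadmin <password> https://<pbcs_instance_url> <identity_domain>
 ./epmautomate.sh listfiles
 ./epmautomate.sh logout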

Data Import:

For a good data import process we should:

  1. Clear the area: Clear the period slice from the cube that we want to load new data.
  2. Load the data: Load the new data.
  3. Execute a post rule: Run one or more rules to aggregate data, translate currencies and more.

The easiest way to do that is to create three scripts and call them from three different steps in ODI. You could create one big script that does everything at once, but three scripts are more flexible, since we can loop just one step (the post business rules, for instance) without impacting the others.

The Clear area script:

#PRE BR PBCS Data Import


 export JAVA_HOME=/u01/app/oracle/product/java/jdk1.8.0_74 

 url=$1;
 user=$2;
 password=$3;
 identity=$4;
 EPMAutomate_Path=$5;
 Job_Name=$6;
 File_Name=$7;
 DataFeed=$8;
 Year=$9;
 Period=${10};
 PType=${11};
 

 #EPM Automate commands
 cd $EPMAutomate_Path
 

 #Login
 echo ./epmautomate.sh login $user $password $url $identity

  ./epmautomate.sh login $user $password $url $identity

#Clear Load Area
 echo ./epmautomate.sh runbusinessrule "$PType-ODI_Clear_load_Region" "vg_Periods=$Period" "vg_Years=$Year"

 ./epmautomate.sh runbusinessrule "$PType-ODI_Clear_load_Region" "vg_Periods=$Period" "vg_Years=$Year"

 #Logout
 echo ./epmautomate.sh logout

 ./epmautomate.sh logout

This script receives 11 parameters and substitutes them into the commands. It calls a business rule created in PBCS and passes the Period and the Year to it as parameters; this way the interface is able to clear the area before the load.

All the commands have an “echo” command before the real command, just for the sake of the logs. This makes it easier to identify what is going on during the script execution.
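To make the parameter order concrete, this is how the clear script might be invoked by hand (the script file name and all values are placeholders; in the real flow ODI passes them in through the procedure shown next):

 # Manual test of the pre-BR (clear area) script with its 11 positional
 # parameters: url, user, password, identity domain, EPM Automate path,
 # job name, file name, data feed folder, year, period and plan type.
 ./pre_br_pbcs_data_import.sh \
   "https://<pbcs_instance_url>" "serviceadmin" "<password>" "<identity_domain>" \
   "/u01/app/oracle/EPMAutomate/bin" "PNL_DATA_IMPORT" "PBCS_Data" \
   "/u01/datafeed" "FY16" "Jan" "PNL"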

First, we need to log in to PBCS using EPM Automate. To do that, we need four pieces of information: user, password, PBCS URL and PBCS identity domain. Since we are using ODI, the best place to store this information is in the Topology. We don’t have a PBCS technology there, but it doesn’t matter; we just need a place to store this information, and for that I chose the Planning technology. The procedure is almost the same as usual: create a Data Server and a Physical Schema, and associate them with a Logical Schema (as normal).

Figure 14: ODI PBCS Topology Configuration

With these settings we can retrieve the information from the topology inside a KM or a procedure (that is our case here). To do so, we just need to get the information from the source and store it in some Java variables.

Figure 15: ODI Procedure PBCS Connection Information

With the information stored in these Java variables we can now use it whenever we need to. It’s time to call the script passing all necessary information; for that, we’ll use another step inside the procedure.

Figure 16: ODI Procedure Call EPM Automate Script

To call the script, we use the OdiOSCommand command, informing the ERR_FILE and OUT_FILE parameters (this way we’ll have a log file and an error file in our folders), the path where the script resides on the server, the login information (all the Java variables <@=….@>), and also the variables with the job name, period, year and any other information we need. This script does the following:

  1. Logs in to PBCS
  2. Runs a Business Rule in PBCS, passing some values to it
  3. Logs out

And that’s it. This is all we need to know to integrate PBCS, to connect, and to issue a command to it.

Let’s continue with the data load process.

After clearing the area we want to load, we can load the data. For that, I used this script:

#Import Data to PBCS


  export JAVA_HOME=/u01/app/oracle/product/java/jdk1.8.0_74

  url=$1;
 user=$2;
 password=$3;
 identity=$4;
 EPMAutomate_Path=$5;
 Job_Name=$6;
 File_Name=$7;
 DataFeed=$8;
 Year=$9;
 Period=${10};
 PType=${11};

  #EPM Automate commands
 cd $EPMAutomate_Path

 #Login
 echo ./epmautomate.sh login $user $password $url $identity

  ./epmautomate.sh login $user $password $url $identity

  #Delete Previous Files
 echo ./epmautomate.sh deletefile $File_Name.zip 

 ./epmautomate.sh deletefile $File_Name.zip

  #Upload PBCS_Data.zip
 echo ./epmautomate.sh uploadfile $DataFeed/$File_Name.zip

  ./epmautomate.sh uploadfile $DataFeed/$File_Name.zip

  #Import Data
 echo ./epmautomate.sh importdata $Job_Name $File_Name.zip

  ./epmautomate.sh importdata $Job_Name $File_Name.zip

  #Logout
 echo ./epmautomate.sh logout

  ./epmautomate.sh logout

This script does the following:

  1. Logs in to PBCS
  2. Deletes the previous file from the PBCS inbox/outbox
  3. Uploads the data file
  4. Imports the data to the cubes
  5. Logs out

Finally, to complete the data load process, we just need to run the other business rules; for that, I used this script:

#POS BR PBCS Data Import


  export JAVA_HOME=/u01/app/oracle/product/java/jdk1.8.0_74 

 url=$1;
 user=$2;
 password=$3;
 identity=$4;
 EPMAutomate_Path=$5;
 Job_Name=$6;
 File_Name=$7;
 DataFeed=$8;
 PType=$9;

  #EPM Automate commands
 cd $EPMAutomate_Path

  #Login
 echo ./epmautomate.sh login $user $password $url $identity

  ./epmautomate.sh login $user $password $url $identity

 #Run BR
 echo ./epmautomate.sh runbusinessrule "$PType-ODI_Convert_Currency" "vg_Scenario=Actual" "vg_Version=Final"

 ./epmautomate.sh runbusinessrule "$PType-ODI_Convert_Currency" "vg_Scenario=Actual" "vg_Version=Final"

 #Logout
 echo ./epmautomate.sh logout 

 ./epmautomate.sh logout

This script does the following:

  1. Logs in to PBCS
  2. Runs a Business Rule in PBCS, passing some values to it
  3. Logs out

Note: This is a generic way to call a script, since the name of the business rules and the parameters are received from the package or procedure. This means that, if needed, we can loop this step to run more than once and execute more than one business rule as well.

Metadata Import:

For a good metadata import process, we should:

  1. Load the metadata
  2. Refresh the cube

For that, we need the following script:

#Metadata Import


  export JAVA_HOME=/u01/app/oracle/product/java/jdk1.8.0_74 

 url=$1;
 user=$2;
 password=$3;
 identity=$4;
 EPMAutomate_Path=$5;
 Job_Name=$6;
 File_Name=$7;
 DataFeed=$8;
 PType=$9;

  #EPM Automate commands
 cd $EPMAutomate_Path

  #Login
 echo ./epmautomate.sh login $user $password $url $identity
 ./epmautomate.sh login $user $password $url $identity

  #Delete Previous Files
 echo ./epmautomate.sh deletefile $File_Name.zip

 ./epmautomate.sh deletefile $File_Name.zip

  #Upload PBCS_Metadata.zip
 echo ./epmautomate.sh uploadfile $DataFeed/$File_Name.zip

  ./epmautomate.sh uploadfile $DataFeed/$File_Name.zip

  #Import Metadata
 echo ./epmautomate.sh importmetadata $File_Name.zip

  ./epmautomate.sh importmetadata $File_Name.zip

  #Refresh the Cube
 echo ./epmautomate.sh refreshcube

  ./epmautomate.sh refreshcube

  #Logout
 echo ./epmautomate.sh logout

  ./epmautomate.sh logout

This script does the following:

  1. Logs in to PBCS
  2. Uploads the metadata file to PBCS
  3. Loads the metadata
  4. Refreshes the cubes
  5. Logs out

For the ODI part, we will use the same approach as before since we just need to call the script and pass the information we need for that.

Data Export:

For a good data export process we must extract the data. For that, we need the following script:

#Export Data from PBCS


  export JAVA_HOME=/u01/app/oracle/product/jdk1.7.0_79/

  url=$1;
 user=$2;
 password=$3;
 identity=$4;
 EPMAutomate_Path=$5;
 Job_Name=$6;

  #EPM Automate commands
 cd $EPMAutomate_Path

  #Login
 ./epmautomate.sh login $user $password $url $identity

  #Extract Data
 ./epmautomate.sh exportdata $Job_Name $Job_Name.zip

  #Download the data zip file
 ./epmautomate.sh downloadfile $Job_Name.zip

  #Logout
 ./epmautomate.sh logout

This script does the following:

  1. Logs in to PBCS
  2. Exports the data file from PBCS
  3. Downloads the data file from PBCS
  4. Logs out

PBCS is kind enough to zip the exported file for us. After downloading it from PBCS, we need to unzip it and then load the data to our tables. This can be accomplished by a simple ODI Interface.
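A minimal sketch of that last step, assuming the zip was downloaded into the EPM Automate folder and that a hypothetical data feed directory is the one the ODI interface reads from:

 # Unzip the exported data file so the file datastore used by the ODI
 # interface can read it (job name and folders are placeholders).
 cd /u01/app/oracle/EPMAutomate/bin
 unzip -o PNL_EXPORT.zip -d /u01/datafeed/pbcs_export
 ls -l /u01/datafeed/pbcs_export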

Metadata Export:

For a good Metadata export process we must extract the metadata. For that, we need the following script:

#Export Metadata from PBCS


  export JAVA_HOME=/u01/app/oracle/product/java/jdk1.8.0_74

 url=$1;
 user=$2;
 password=$3;
 identity=$4;
 EPMAutomate_Path=$5;
 Job_Name=$6;

  #EPM Automate commands
 cd $EPMAutomate_Path

  #Login
 ./epmautomate.sh login $user $password $url $identity

  #Extract Metadata
 ./epmautomate.sh exportmetadata $Job_Name $Job_Name.zip

  #Download Metadata.zip
 ./epmautomate.sh downloadfile $Job_Name.zip

  #Logout
 ./epmautomate.sh logout

This script does the following:

  1. Logs in to PBCS
  2. Exports the metadata file from PBCS
  3. Downloads the metadata file from PBCS
  4. Logs out

The metadata export is no different from the data export. PBCS zips all the text files into one file that, after download, can be loaded with an ODI interface; the only difference is that the zip will contain one text file per dimension, as we saw in the PBCS Integration section. Since we normally export more than one dimension at a time, we can create one interface per dimension or, if you follow the approach suggested here, just one interface with multiple data sources; a UNION ALL operator will do the trick.
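As a sketch (folder names are hypothetical), the downloaded metadata zip can be exploded into the data feed folder and the per-dimension files renamed to the fixed names the ODI file datastores expect:

 # One txt file per dimension is extracted, named username_job_name_dimension.txt.
 unzip -o EXPORTED_METADATA.zip -d /u01/datafeed/pbcs_metadata
 cd /u01/datafeed/pbcs_metadata
 for f in *_*.txt; do
   dim=$(echo "$f" | awk -F'_' '{print $NF}')   # last field, e.g. Account.txt
   mv "$f" "$dim"                               # rename to just the dimension name
 done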

Conclusion

We’ve just created a 100% cloud solution that includes PBCS, BICS, DBCS and ODI, with everything working together and being orchestrated by ODI. Of course, this is a simple example of what we can do with these tools, because when we are talking about ODI the possibilities are infinite.

When we started to design this architecture we thought it would be a challenge; in the end, however, it turned out to be as simple as working with only on-premise tools. In fact, because the final products for PBCS are just files, it’s easy to integrate them with on-premise tools as well: we could move ODI on-premise and it would continue to work with PBCS.

This makes the architecture very flexible, and flexibility is always good. For the project under discussion, we had five people: two on the PBCS and ODI integration and three on BICS and its integration. The PBCS part was tested and delivered in four months. That could have been quicker, but since the client didn’t have any infrastructure at all, we had to invest more time in understanding how we would put everything together.

The BICS development took five months to complete and the tests took two months. It took more time because it was bigger and more complex than the PBCS piece, mostly because of the merge of the ERPs’ data.

In the future we’ll be looking to replace EPM Automate with a more direct integration. We are working on some ODI KMs to do that and, with some luck, we may have something very soon. If not, we’re sure Oracle will release KMs for PBCS soon, since Oracle is investing a lot in its cloud solutions.

If we had to point to any issue or opportunity for improvement in this environment, it would be that, because all releases are installed automatically in the cloud services, they can cause problems for the ODI integrations. In this project specifically, after Oracle applied a new release, the metadata load process stopped working. This happened because the new release added two more columns to the extracted metadata files, so we had to update the datastores to reflect these new columns.

Other than that, we can only see good things in this environment. We would even say that it is a perfect way for small and medium companies to get their hands on a very robust BI environment at a very affordable price; it’s also a good opportunity for big companies that want to expand their BI environment to external providers or even to their clients.

As we said, the possibilities are infinite.

About the Authors

Oracle ACE Ricardo Giampaoli has been working in IT for 20 years, the last nine as an EPM consultant. A certified professional in Hyperion Planning, Essbase, OBIEE and ODI, Ricardo is a tireless promoter of Oracle’s EPM tools, evangelizing them all over the world through his blog, his clients, Oracle Academy, universities, and interaction with Oracle product management.

Oracle ACE Rodrigo Radtke, a software development consultant at Dell, is a computer engineer experienced in software development, especially in BI for the finance space. A certified professional on Oracle Data Integrator, Oracle SQL Expert and Java (SCJP and SCWCD), he is a true advocate of Oracle technologies, always creating innovative ways to use ODI in the EPM space to maximize its potential. He shares tips and tricks along with co-blogger Ricardo Giampaoli at devepm.com.

This article has been reviewed by the relevant Oracle product team and found to be in compliance with standards and practices for the use of Oracle products.