Understanding the Exadata Cloud at Customer Technical Architecture (Gen 1/OCI-C)

This infographic explores the technical architecture of Gen 1 Exadata Cloud at Customer. Gen 1, the first generation of Exadata Cloud at Customer, is deployed in conjunction with Oracle Cloud at Customer using Oracle Cloud Infrastructure Classic (OCI-C).


Exadata Cloud at Customer combines cloud simplicity, agility, and elasticity with deployment inside your data center to provide full-featured Oracle Databases hosted on Oracle Exadata Database Machine.

Exadata Cloud at Customer provides essentially the same capabilities as Exadata Cloud Service on Oracle Cloud, with some minor differences due to the location of the Exadata Cloud at Customer environment inside your data center.

Through the Cloud Control Plane, Exadata Cloud at Customer is equipped with a Web-based self-service management interface, which provides interactive access to service administration functions. REST APIs are also provided for programmatic access to service administration functions.
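
For illustration, the following is a minimal sketch of calling such a REST API from Python using the requests library. The control plane host, endpoint path, identity domain header, and credentials are assumptions for illustration only; consult the service REST API documentation for the actual URIs and authentication details.

    import requests

    # Assumptions: the Cloud Control Plane host name, endpoint path, identity
    # domain, and credentials below are placeholders, not documented values.
    CONTROL_PLANE = "https://control-plane.example.com"
    ENDPOINT = CONTROL_PLANE + "/paas/api/v1.1/instancemgmt/mydomain/services/dbaas/instances"

    response = requests.get(
        ENDPOINT,
        auth=("cloud.admin@example.com", "password"),  # basic authentication (placeholder credentials)
        headers={"X-ID-TENANT-NAME": "mydomain"},      # identity domain header (placeholder)
    )
    response.raise_for_status()
    print(response.json())  # JSON payload describing the database deployments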

There are a few other ways that you can connect to Exadata Cloud at Customer:

  • You can connect to Exadata Cloud at Customer from your on-premises applications by using Oracle Net (SQL*Net). On Exadata Cloud at Customer, the default Oracle Net configuration secures data in transit by using native encryption and integrity capabilities (see the connection sketch after this list).
  • You can also connect from clients running on the Oracle Cloud Machine that hosts the Cloud Control Plane. This may include Java applications running on Java Cloud Service that connect through JDBC, or client applications running on a Compute Cloud Service instance that connect through Oracle Net.
  • Access to the database server operating system is provided using Secure Shell (SSH). This is primarily used for administration purposes.
  • A backup network is also provided. This network keeps high-load activities separate from application connections and is primarily used when Exadata Cloud at Customer database deployments are backed up to an Oracle Storage Cloud Service container.
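
As a minimal sketch of the first connection method above, the following Python example connects over Oracle Net using the python-oracledb driver. The host name, service name, and credentials are placeholders; thick mode is initialized here because native Oracle Net encryption is negotiated by the Oracle Client libraries.

    import oracledb

    # Thick mode: the Oracle Client libraries handle the native encryption and
    # integrity negotiation applied by the default Oracle Net configuration.
    oracledb.init_oracle_client()

    connection = oracledb.connect(
        user="appuser",                                             # placeholder schema user
        password="app_password",                                    # placeholder password
        dsn="exacc-client.example.com:1521/myservice.example.com",  # placeholder client-network address and service
    )

    with connection.cursor() as cursor:
        cursor.execute("SELECT sysdate FROM dual")
        print(cursor.fetchone()[0])

    connection.close()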

Because the Exadata Cloud at Customer environment is hosted in your data center, no firewall is implemented to govern client access. However, you are free to implement additional firewalls within your network if desired.

Oracle monitors and manages the Exadata Cloud at Customer infrastructure components, including the physical compute node hardware, network switches, power distribution units (PDUs), integrated lights-out management (ILOM) interfaces, and the Exadata Storage Servers. These functions are performed remotely by Oracle using an Advanced Support Gateway, which may be located in your network DMZ.


The technical architecture for Exadata Cloud at Customer is essentially the same as for an on-premises implementation of Exadata. Each Exadata Cloud at Customer instance is based on an Exadata configuration that contains a predefined number of database servers and a predefined number of Exadata Storage Servers, all tied together by a high-speed, low-latency InfiniBand network and intelligent Exadata software.

Like Exadata Cloud Service on Oracle Cloud, Exadata Cloud at Customer is offered in Quarter Rack, Half Rack, or Full Rack system configurations. Exadata Cloud at Customer also has a Base System option, which was previously known as an Eighth Rack.

Exadata Cloud at Customer system configurations are all based on Oracle Exadata X6 or X7 systems.

The diagram illustrates a Quarter Rack or Base System configuration. Larger configurations are essentially the same, except that they contain more database servers and Exadata Storage Servers.

Application users and administrators can connect only to the database servers, using the supplied client and backup network interfaces. Oracle manages all hardware, firmware, and the Exadata Storage Server software by using a separate management network.


A Quarter Rack contains two database servers and three Exadata Storage Servers. A Base System also has two database servers and three Exadata Storage Servers, but it contains Exadata Storage Servers with significantly less storage capacity and compute nodes with significantly less memory and processing power.

The diagram displays the vital statistics for a Base System or Quarter Rack based on Oracle Exadata X7 hardware. For details of other hardware configurations, see Exadata System Configuration.


A Half Rack contains four database servers and six Exadata Storage Servers.

Note that this configuration differs from an Exadata on-premises Half Rack. For Exadata Cloud at Customer, a Half Rack contains six storage servers, while an on-premises Half Rack contains seven storage servers. Consequently, an Exadata Cloud at Customer Half Rack is exactly twice the size of a Quarter Rack.

The diagram displays the vital statistics for a Half Rack based on Oracle Exadata X7 hardware. For details of other hardware configurations, see Exadata System Configuration.


A Full Rack contains eight database servers and twelve Exadata Storage Servers.

Note that this configuration contains twelve storage servers, which differs from an on-premises Full Rack that contains fourteen storage servers. Consequently, an Exadata Cloud at Customer Full Rack is exactly twice the size of a Half Rack.

The diagram displays the vital statistics for a Full Rack based on Oracle Exadata X7 hardware. For details of other hardware configurations, see Exadata System Configuration.


Each Exadata database server contains at least one Virtual Machine (VM), known as DomU, running on a VM hypervisor, known as Dom0. This configuration ensures a distinct separation between the Oracle-managed and customer-managed components.

Oracle manages the hardware, firmware, and Dom0 by using the Integrated Lights Out Manager (ILOM) and Dom0 management network interfaces. Customers have no access to these interfaces. Minimal resources are allocated to Dom0: only two CPU cores and 16 GB of RAM.

If the Exadata Cloud at Customer instance is enabled to support multiple VM clusters, then customers can define up to eight DomUs on each database server and specify how the overall Exadata system resources are allocated to them.

Each DomU is provisioned with a complete Oracle Database installation that includes all the features of Oracle Database Enterprise Edition, plus all the database enterprise management packs and all the Enterprise Edition options, such as Oracle Database In-Memory and Oracle Real Application Clusters (RAC).
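
As an aside, one way to confirm which options are enabled in a DomU database is to query the V$OPTION view. The sketch below assumes the python-oracledb driver and placeholder connection details.

    import oracledb

    # Placeholder credentials and connect string; the querying user needs
    # access to the V$OPTION view (for example, SYSTEM).
    with oracledb.connect(user="system", password="placeholder",
                          dsn="exacc-domu.example.com:1521/myservice.example.com") as connection:
        with connection.cursor() as cursor:
            cursor.execute("SELECT parameter, value FROM v$option ORDER BY parameter")
            for parameter, value in cursor:
                print(parameter, value)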

Customers have secure access to each DomU using the client and backup networks. The client and backup networks use bonded network interfaces to maximize performance and availability.

Operating system security for DomU is based on an SSH public/private key pair. Customers register a public key in each DomU. Thereafter, the corresponding private key must be provided in order to connect to a DomU using SSH. At all times, the customer retains the private key that enables access to the DomU operating system.
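
The following sketch illustrates key-based SSH access to a DomU from Python using the paramiko library. The host name, OS user, and private key path are placeholders.

    import paramiko

    client = paramiko.SSHClient()
    # For illustration only; in practice, verify the DomU host key instead.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    client.connect(
        hostname="exacc-domu.example.com",    # placeholder client-network address of the DomU
        username="opc",                       # placeholder OS user
        key_filename="/path/to/private_key",  # customer-held private key matching the registered public key
    )

    # Run a simple administrative command over the SSH session.
    stdin, stdout, stderr = client.exec_command("uname -a")
    print(stdout.read().decode())
    client.close()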

As a result of this configuration, customers manage each DomU and all of the Oracle software that it contains. Oracle provides assistance in the form of cloud tooling that simplifies backup and patching operations, but ultimately it is the customer's responsibility to perform these operations.

The diagram shows the DomU processor and RAM capacity for each X7 database server. For details of other hardware configurations, see Exadata System Configuration.


As part of provisioning each Oracle Exadata Database Machine environment, the storage space inside the Exadata Storage Servers is provisioned for use by Oracle Automatic Storage Management (ASM). By default, the following ASM disk groups are created:

  • The DATA disk group is intended for the storage of Oracle Database data files.
  • The RECO disk group is primarily used for storing the Fast Recovery Area (FRA), which is an area of storage where Oracle Database can create and manage various files related to backup and recovery, such as RMAN backups and archived redo log files. (A query sketch after this list shows how to inspect the disk groups.)
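
For reference, the disk groups and their capacities can be inspected from a database instance by querying V$ASM_DISKGROUP. This is a minimal sketch assuming the python-oracledb driver and placeholder connection details.

    import oracledb

    # Placeholder credentials and connect string; the querying user needs
    # access to the V$ASM_DISKGROUP view (for example, SYSTEM).
    with oracledb.connect(user="system", password="placeholder",
                          dsn="exacc-domu.example.com:1521/myservice.example.com") as connection:
        with connection.cursor() as cursor:
            cursor.execute("""
                SELECT name, type, total_mb, free_mb, usable_file_mb
                FROM v$asm_diskgroup
                ORDER BY name""")
            for name, redundancy, total_mb, free_mb, usable_mb in cursor:
                print(name, redundancy, total_mb, free_mb, usable_mb)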

For Exadata Cloud at Customer instances that are based on Oracle Exadata X6 hardware, there are additional system disk groups that support various operational purposes. The DBFS disk group is primarily used to store the shared Oracle Clusterware files (Oracle Cluster Registry and voting disks), while the ACFS disk groups underpin shared file systems that are used to store software binaries (and patches) and files associated with the cloud-specific tooling that resides on your Exadata Cloud at Customer compute nodes. You must not remove or disable any of the system disk groups or related ACFS file systems. Compared to the other disk groups, the system disk groups are so small that they are typically ignored when discussing the overall storage capacity.

For Exadata Cloud at Customer instances that are based on Oracle Exadata X7 hardware, there are no additional system disk groups. On such instances, a small amount of space is allocated from the DATA disk group to support the shared file systems that are used to store software binaries (and patches) and files associated with the cloud-specific tooling.

In addition, you can optionally create the SPARSE disk group. The SPARSE disk group is required to support Exadata snapshots. Exadata snapshots enable space-efficient clones of Oracle databases that can be created and destroyed very quickly and easily. Snapshot clones are often used for development, testing, or other purposes that require a transient database.
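
As a minimal sketch under stated assumptions, the following example creates a snapshot clone as a pluggable database from a hypothetical read-only test master PDB named TM_PDB, placing its sparse files in the SPARSE disk group. It assumes the python-oracledb driver, a SYSDBA connection to the CDB root, and that the SPARSE disk group already exists; all names and credentials are placeholders.

    import oracledb

    oracledb.init_oracle_client()  # thick mode, as in the earlier connection sketch

    with oracledb.connect(user="sys", password="placeholder",
                          dsn="exacc-domu.example.com:1521/cdbsvc.example.com",
                          mode=oracledb.AUTH_MODE_SYSDBA) as connection:
        with connection.cursor() as cursor:
            # Create a space-efficient snapshot clone of the hypothetical
            # test master PDB, with its sparse data files in +SPARSE.
            cursor.execute("""
                CREATE PLUGGABLE DATABASE snap_pdb FROM tm_pdb
                    TEMPFILE REUSE
                    CREATE_FILE_DEST = '+SPARSE'
                    SNAPSHOT COPY""")
            cursor.execute("ALTER PLUGGABLE DATABASE snap_pdb OPEN")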

As an input to the provisioning process, you must also decide if you intend to perform backups to the local storage within your Exadata Database Machine. Your backup storage choice profoundly affects how storage space in the Exadata Storage Servers is allocated to the ASM disk groups.

The following table outlines how storage capacity is allocated amongst the DATA, RECO, and SPARSE disk groups for each possible configuration.

Database backups on    Create sparse      DATA disk    RECO disk    SPARSE disk
Exadata Storage        disk group?        group        group        group
-------------------    -------------      ---------    ---------    ----------------
No                     No                 80%          20%          0% (not created)
Yes                    No                 40%          60%          0% (not created)
No                     Yes                60%          20%          20%
Yes                    Yes                35%          50%          15%
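
As a rough worked example, the following sketch applies the percentages in the table above to a given usable storage capacity. The capacity figure used in the example call is a placeholder, not a published specification.

    # Allocation percentages taken from the table above, keyed by
    # (backups on Exadata storage, create sparse disk group).
    ALLOCATIONS = {
        (False, False): (0.80, 0.20, 0.00),
        (True,  False): (0.40, 0.60, 0.00),
        (False, True):  (0.60, 0.20, 0.20),
        (True,  True):  (0.35, 0.50, 0.15),
    }

    def disk_group_sizes(usable_tb, backups_on_exadata, create_sparse):
        """Return the DATA, RECO, and SPARSE sizes (in TB) for one configuration."""
        data, reco, sparse = ALLOCATIONS[(backups_on_exadata, create_sparse)]
        return {"DATA": usable_tb * data, "RECO": usable_tb * reco, "SPARSE": usable_tb * sparse}

    # Example: 100 TB of usable capacity (placeholder figure) with local backups
    # enabled and no sparse disk group -> DATA 40 TB, RECO 60 TB, SPARSE 0 TB.
    print(disk_group_sizes(100, backups_on_exadata=True, create_sparse=False))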


The diagram illustrates the storage in a Quarter Rack X7 configuration. The usable storage capacity is the storage available for Oracle Database files after taking into account high-redundancy ASM mirroring (triple mirroring), which is used to provide highly resilient database storage on all Exadata Cloud at Customer configurations. The usable storage capacity does not factor in the use of Exadata compression capabilities, which can be used to increase the effective storage capacity.


This diagram shows the overall architecture for a Quarter Rack X7 Exadata Cloud at Customer instance. View the previous slides to learn more about each component in the architecture.