Exadata Database Machine

Exadata, a Guide For Decision Makers

The Oracle Exadata Database Machine is engineered to deliver dramatically better performance, cost effectiveness, and availability for Oracle Databases. With high-performance database servers, scale-out intelligent storage servers featuring leading-edge storage caching technologies, and a cloud-scale, RDMA-enabled internal network fabric, Oracle Exadata Database Machine is the best platform to run the Oracle Database.

June 2023, Version 2.6

Engineered for Oracle Database

Exadata Introduction

Oracle Exadata Database Machine X10M

The Oracle Exadata Database Machine (Exadata) is a computing and storage platform specialized for running Oracle databases. The goal of Exadata is higher performance and availability at lower cost by optimizing and integrating hardware and software at all levels and moving database algorithms and intelligence into storage and networking, bypassing the traditional layers of general-purpose servers.

Exadata is a combined hardware and software platform that includes scale-out database servers, scale-out intelligent storage servers, ultra-fast networking, memory acceleration, NVMe flash, and specialized Exadata Software in a wide range of shapes and price points. Exadata storage features high-performance servers to store data and run Exadata Software for data-intensive database processing directly in the shared storage tier.

A Brief History

Exadata debuted in 2008 as the first offering in Oracle's family of Engineered Systems for use in corporate data centers deployed as private clouds. In October 2015, Exadata became available in the Oracle public cloud as a subscription service, known as the Exadata Cloud Service, later re-branded as Exadata Cloud Infrastructure in 2022, supporting both Exadata Database Service and Autonomous Database cloud services.

Oracle databases deployed with Exadata Database Service on Exadata Cloud Infrastructure are 100% compatible with databases deployed on Exadata on-premises, enabling customers to transition to the Oracle Cloud with zero application changes. Oracle manages this service, including hardware, network, Linux software and Exadata software, while customers retain complete control over their databases.

In early 2017, a third Exadata deployment choice became available. Exadata Cloud@Customer is Exadata Cloud Infrastructure deployed on-premises (behind the customer's firewall) and managed by Oracle Cloud experts. Exadata Cloud@Customer is owned and managed by Oracle and acquired by customers through a pay-as-you-go subscription. The Oracle Cloud@Customer program brings all the benefits of the Oracle public cloud while addressing potential network latency, security, and regulatory concerns.

In 2018 Oracle introduced Oracle Autonomous Database – a cloud-based self-driving, self-securing, self-protecting database that provides mission critical availability and security while reducing management costs. Oracle Autonomous Database is available on Exadata Cloud Infrastructure and Exadata Cloud@Customer deployments.

In 2019, the release of Exadata X8M enhanced Exadata’s performance through the addition of two major technical breakthroughs - persistent memory (PMem) and RDMA (Remote Direct Memory Access) over Converged Ethernet (RoCE). Oracle Exadata X8M used RDMA directly from the database to access persistent memory in smart storage servers, bypassing the entire OS, I/O, and network software stacks. This produced lower latency and higher throughput.

In 2023, the Exadata X10M release replaced persistent memory with Exadata RDMA Memory (XRMEM), a new memory acceleration tier in storage, due to changes in the vendor landscape for persistent memory. Also significant was the incorporation of AMD processors in all Exadata servers, offering a large increase in the number of available compute cores.

Exadata Use Cases

Exadata is designed to optimally run any Oracle Database workload or combination of workloads, such as an OLTP application running simultaneously with analytics processing. The platform is frequently used to consolidate many databases that were previously running on dedicated database servers. Exadata's scale-out architecture is naturally suited to running in the Oracle Cloud, where computing requirements can dynamically grow and shrink.

Historically, specialized database computing platforms were designed for a particular workload, such as data warehousing, and were poor or unusable for other workloads, such as OLTP. Exadata has optimizations for all database workloads, implemented such that mixed workloads share system resources fairly. Exadata Resource Management features automatically prioritize allocation of system resources, such as favoring workloads servicing interactive users over reporting and batch processing, even if they are accessing the same data.

Long running requests, typical of data warehouses, reports, batch jobs and analytics, run many times faster compared to a conventional, non-Exadata database server. Customer references often cite performance gains of 10x or greater. Analytics workloads can also use the Oracle Database In-Memory option on Exadata for additional acceleration, and in-memory databases on Exadata have been extended to take advantage of flash memory, whose capacity is many times larger than the capacity of DRAM. Exadata’s Hybrid Columnar Compression feature is intended to reduce the storage consumption of data warehouses and archival data as well as increase performance by reducing the amount of I/O.

Transactional (OLTP) workloads on Exadata benefit from the incorporation of XRMEM (memory acceleration) and flash memory into Exadata’s storage hierarchy, and the automatic tiering of data between XRMEM, flash, and disk storage. Special algorithms optimize response-time sensitive database operations such as log writes. For the most demanding OLTP, all-flash storage eliminates the latency of disk media completely.

Exadata Design Concepts

To better understand the design of Exadata, it helps to compare it with a traditional database computing platform, assembled from separate hardware and software components operating independently.

Traditional computing platforms are general-purpose

The hardware components that make up a typical database computing platform are a database server connected over a network to a storage array. The database software runs on the database server and sends or receives data to and from the storage array over the network. The hardware components use standard software protocols to communicate with each other. This separation via standard interfaces is what allows a general-purpose computing platform to run a wide variety of workloads, software, and hardware from different vendors. All the application logic and the processing of data are performed on the database server, to which all the data must be sent. This approach enables using a computing platform for a wide range of software applications, though it will not be optimized for any particular application.

Oracle Database is the focus of Exadata

The goal of Exadata is to create a complete stack of software and hardware tailored to the Oracle Database that performs processing in the optimal location. Because Exadata processes only Oracle Database requests, it can take advantage of that focus in all of the software layers. The hardware design includes technologies such as very fast Ethernet networking, specialized DRAM caching (XRMEM), and flash storage integrated into the architecture to yield the most advantages to Oracle Database applications. Given the importance of data storage to databases, Oracle Exadata places particular focus on optimizing that aspect of the platform.

Exadata uses unique technologies in the storage layer that easily scale out and parallelize Oracle Database requests. The addition of flash memory and XRMEM to Exadata Storage Servers also opens up a range of possibilities for optimizing performance in the storage layer. For example, as the performance and capacity of flash storage increases at a rapid rate, the network can become a bottleneck for traditional database platforms, whereas Exadata's offloading of database processing into Exadata Storage Servers avoids that problem. The addition of XRMEM in the Exadata storage layer exposes the limitations of traditional platforms even more acutely.

Adding database intelligence to storage

At the time Exadata was conceived, Oracle had several decades of experience developing database software, and was well aware of the limitations and performance bottlenecks imposed by traditional computing platforms. To fulfill the Exadata mission, Oracle needed a storage layer that could easily scale out and parallelize Oracle Database requests. Oracle also recognized the opportunity for storage to cooperate in the processing of database requests beyond just storing and shipping data. For example, rather than send an entire database table across the network to the database server to find a small number of records, such data filtering could be done in storage so that only the resulting records need be sent across the network.

In summary, Oracle recognized the need for a powerful server that could run intelligent database software and act as a storage array, with a modular design that could easily grow in capacity and performance as the database grew. Building a "database-aware" storage server that could cooperate with database servers in the execution of database requests became a compelling undertaking, enabled by focusing Exadata on what is best for the Oracle Database.

The database-aware Exadata Storage Server, invented by Oracle to replace the traditional storage array, is the foundation of Exadata.

Optimizing across the full stack

To maximize the effectiveness of Exadata, Oracle controls the software and hardware components of the platform, so that coordinated improvements can be tightly integrated and made anywhere at any time.

Oracle already had a broad portfolio of software products when Exadata was conceived, covering most of the software layers that are required to run a database platform, such as the Oracle Linux operating system, storage management software, monitoring and administrative tools and virtual machine software, and, of course, Oracle Database and options software.

The initial 2008 release of Exadata (V1) was a joint development between Oracle (software) and Hewlett-Packard (hardware). The second generation (V2) of Exadata switched to hardware from Sun Microsystems, and shortly thereafter, Oracle acquired Sun Microsystems and thus gained ownership of the main hardware components of Exadata.

Owning the main hardware components of Exadata completes Oracle's ability to develop an entire computing platform optimized around the Oracle Database. An additional benefit for customers is the ability to support the entire Exadata platform from one vendor: something impossible with a traditional computing platform of hardware and software components from multiple vendors.

Exadata Smart Software

Offloading to Storage - refers to the execution of data-intensive database operations within the Exadata Storage Servers, such as data scans, table joins, and filtering of rows and columns. Sending just the description of the operation, and getting back filtered results, substantially reduces the network traffic between the database servers and storage servers. This avoids the network bottleneck of traditional architectures where data-intensive operations require shipping large amounts of data between storage and database servers. Offloading is possible because Exadata storage is built on standard servers, capable of running database functions in coordination with the database server, simultaneously with storage I/O. Over time, more database functions and more data types have been offloaded. In addition, "reverse offloading" will push an operation back to the database servers if Exadata storage is too busy.
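
The offload pattern described above can be sketched in a few lines of Python. This is an illustrative toy, not Oracle's implementation: the row format, function names, and the way the predicate is passed are all invented for the sketch.

```python
# Toy contrast between a traditional scan and an offloaded ("smart") scan.
# In the traditional path every row crosses the network before filtering;
# in the offloaded path only the predicate goes down and matches come back.

def scan_without_offload(storage_rows, predicate):
    shipped = list(storage_rows)              # full table crosses the network
    return [r for r in shipped if predicate(r)], len(shipped)

def scan_with_offload(storage_rows, predicate):
    matches = [r for r in storage_rows if predicate(r)]
    return matches, len(matches)              # only matches cross the network

# 10,000 rows, of which 1 in 100 satisfies the filter.
table = [{"id": i, "region": "EU" if i % 100 == 0 else "US"}
         for i in range(10_000)]
pred = lambda r: r["region"] == "EU"

res_a, shipped_a = scan_without_offload(table, pred)
res_b, shipped_b = scan_with_offload(table, pred)
assert res_a == res_b                         # same answer, far less traffic
print(f"rows over the network: {shipped_a} vs {shipped_b}")
```

Both paths produce the same 100 matching rows, but the offloaded path moves 100 rows across the network instead of 10,000, which is the essence of the traffic reduction Smart Scan targets.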

Storage Indexes - enable the avoidance of I/O by tracking column values within relatively small regions of storage. Storage Indexes are automatically maintained and kept in memory on Exadata Storage Servers. If the Storage Index indicates that an I/O to a region will not find a match, that I/O is avoided, which yields a significant performance benefit. Initially, Storage Indexes track value ranges within a small number of columns. Over time, more columns and more sophisticated value tracking have been added, so that additional classes of I/O operations can be avoided.
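
The I/O-skipping idea can be illustrated with a small sketch. The region size, column name, and class layout here are invented; real Storage Indexes are maintained transparently per storage region rather than built up front like this.

```python
# Toy storage index: per-region min/max of one column, used to skip reads.
class Region:
    def __init__(self, rows):
        self.rows = rows
        self.min_val = min(r["amount"] for r in rows)
        self.max_val = max(r["amount"] for r in rows)

def indexed_scan(regions, lo, hi):
    hits, regions_read = [], 0
    for reg in regions:
        # The index says no value in this region can match: skip the I/O.
        if reg.max_val < lo or reg.min_val > hi:
            continue
        regions_read += 1                 # counts as one physical read
        hits += [r for r in reg.rows if lo <= r["amount"] <= hi]
    return hits, regions_read

# Data loaded roughly in value order, as warehouse data often is.
regions = [Region([{"amount": base + i} for i in range(100)])
           for base in range(0, 10_000, 100)]
hits, reads = indexed_scan(regions, lo=250, hi=349)
print(f"{len(hits)} matching rows, {reads} of {len(regions)} regions read")
```

Because the data is clustered by value, only 2 of 100 regions overlap the requested range, and the other 98 reads never happen; the benefit shrinks as data becomes less clustered.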

Flash and XRMEM Caching - delivers the low latency (fast response) of flash and DRAM, while preserving the lower cost of disk for storing large databases, for the best I/O performance at the lowest cost. In general, a small percentage of a database is active at any one time. If just the active data is held in flash, for instance, the I/O performance would be equal to all-flash storage, at a much lower cost. Exadata monitors the current workloads and keeps the most active data in flash or XRMEM, in the optimal format. For example, Exadata knows when an I/O is part of a database backup, and not an indication of an active data block, whereas traditional storage arrays view any I/O as a "hot" block. Flash caching will also reformat rows into columnar format in flash for data being accessed for analytics. Initially, flash caching was only used for reading data, but has since been enhanced to include log writes and all other write I/O. The flash cache is also used as an extension of Oracle's Database In-Memory columnar data store, for significantly larger in-memory databases than DRAM capacity alone. XRMEM adds an even faster cache in storage and substantially improves I/Os per second (IOPS) and latency.
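
A toy model of the tiering behavior is sketched below, assuming plain LRU within each tier and a demote-on-eviction policy. Exadata's actual caching is far more workload-aware; the slot counts and method names here are invented.

```python
# Toy tiered cache: hottest blocks in a tiny "xrmem" tier, warm blocks in a
# larger "flash" tier, everything else served from (simulated) disk.
from collections import OrderedDict

class TieredCache:
    def __init__(self, xrmem_slots, flash_slots):
        self.xrmem = OrderedDict()        # smallest, fastest tier
        self.flash = OrderedDict()        # larger, still far faster than disk
        self.xrmem_slots, self.flash_slots = xrmem_slots, flash_slots

    def read(self, block, is_backup=False):
        if block in self.xrmem:
            self.xrmem.move_to_end(block)
            return "xrmem"
        if block in self.flash:
            self.flash.move_to_end(block)
            self._promote(block)          # re-read in flash: promote upward
            return "flash"
        # Backup scans should not pollute the cache with cold blocks.
        if not is_backup:
            self._insert_flash(block)
        return "disk"

    def _promote(self, block):
        self.flash.pop(block)
        self.xrmem[block] = True
        if len(self.xrmem) > self.xrmem_slots:
            evicted, _ = self.xrmem.popitem(last=False)
            self._insert_flash(evicted)   # demote, don't discard

    def _insert_flash(self, block):
        self.flash[block] = True
        if len(self.flash) > self.flash_slots:
            self.flash.popitem(last=False)

cache = TieredCache(xrmem_slots=2, flash_slots=4)
for b in [1, 1, 2, 2, 3]:                 # blocks 1 and 2 become hot
    cache.read(b)
print(cache.read(1), cache.read(99, is_backup=True))
```

In this toy run, blocks 1 and 2 earn promotion to the XRMEM tier by being re-read, while the backup-style read of block 99 deliberately leaves the caches untouched, mirroring how Exadata distinguishes backup I/O from genuinely hot blocks.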

Hybrid Columnar Compression (HCC) - reduces the amount of storage consumed by infrequently updated data, such as data warehouses, that can grow to enormous sizes. Conventional data compression algorithms yield between 2x and 4x compression, whereas HCC averages between 10x and 15x compression due to the greater compressibility of columnar formats. Such a large reduction in the amount of I/O can also substantially improve performance. Initially, HCC tables did not support row-level locking, limiting their use with OLTP applications. In 2016, support for row-level locking was added to HCC on Exadata, improving the performance of mixed workloads with HCC data. The hybrid format of HCC enables Exadata to avoid the performance pitfalls of columnar-only databases.
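
The scale of the difference a columnar layout makes can be illustrated with a toy run-length encoder. The column contents and sizes are invented, and HCC itself combines several algorithms applied within compression units rather than to whole columns, but the underlying point stands: grouping a low-cardinality column's values together creates long runs that compress extremely well.

```python
# Toy run-length encoding of a column-organized, low-cardinality column.
def rle_encode(values):
    # Collapse runs of identical adjacent values into [value, run_length].
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

# A column of 1,000,000 order statuses, grouped so equal values are adjacent.
column = ["SHIPPED"] * 700_000 + ["PENDING"] * 200_000 + ["RETURNED"] * 100_000
encoded = rle_encode(column)
ratio = len(column) / len(encoded)
print(f"{len(encoded)} runs represent {len(column)} values (~{ratio:,.0f}x)")
```

Row-organized storage interleaves this column with every other field of each row, breaking up the runs and leaving far less redundancy for any compressor to exploit.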

Resource Management - allocates Exadata system resources, such as CPU, I/O, and network bandwidth, to databases, applications, and users based on priorities. When consolidating many databases on Exadata, Resource Management ensures the appropriate quality of service. I/O Resource Management debuted in V1 of Exadata. Network Resource Management was added in Exadata X4.
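
The share-based allocation idea reduces to a small fair-queueing sketch. The plan values, names, and API below are invented and bear no relation to actual IORM plan syntax; they only illustrate how shares translate into I/O slots under contention.

```python
# Toy share-based I/O scheduler: each database's "virtual time" advances
# inversely to its share, so higher-share databases are picked more often.
import heapq

def schedule(requests, shares, n_slots):
    vtime = {db: 0.0 for db in shares}
    queues = {db: list(reqs) for db, reqs in requests.items()}
    heap = [(0.0, db) for db in shares]
    heapq.heapify(heap)
    order = []
    while heap and len(order) < n_slots:
        vt, db = heapq.heappop(heap)
        if not queues[db]:
            continue                      # this database has no pending I/O
        order.append(queues[db].pop(0))
        vtime[db] += 1.0 / shares[db]     # cheap for high-share databases
        heapq.heappush(heap, (vtime[db], db))
    return order

reqs = {"OLTP": [f"oltp-{i}" for i in range(12)],
        "DWH":  [f"dwh-{i}" for i in range(12)]}
order = schedule(reqs, shares={"OLTP": 3, "DWH": 1}, n_slots=12)
print("OLTP slots:", sum(r.startswith("oltp") for r in order), "of", len(order))
```

With a 3:1 share plan and both queues saturated, the OLTP database receives three quarters of the twelve I/O slots, yet the warehouse is never starved entirely.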

In-Memory Databases - offer exceptional performance for Analytics workloads, leveraging DRAM on database servers, a complement to Exadata's emphasis on storage and networking. Oracle Database In-Memory became available in 2014 on Exadata, leveraging its fast internal network for In-Memory Fault Tolerance. To support larger in-memory databases, Exadata Storage Servers implement in-memory routines and in-memory data formats in Exadata flash, as an extension of the same in-memory processing that occurs on database servers.

Smart Software Enhancements

A more detailed listing of software enhancements is below, grouped by their value to analytics or OLTP workloads, and their impact on database availability and security. Similar enhancements cannot be duplicated on conventional platforms because they require modifications to system software and APIs, and integration across database software, operating systems, networking, and storage.

For Analytics:
- Automatically parallelize and offload data scans to storage
- Filter rows in storage based on 'where' clause
- Filter rows in storage based on columns selected
- JSON and XML offload
- Filter rows in storage based on join with another table
- Offload index fast full scans
- Offload scans on encrypted data, with FIPS compliance
- Storage offload for LOBs and CLOBs
- Storage Index data skipping
- Storage offload for min/max operations
- Data mining offload
- Reverse offload to DB servers if storage CPUs are busy
- Hybrid Columnar Compression
- Temp I/O to Flash Cache for faster performance of large analytic queries and large loads
- All ports active network messaging
- In-Memory Column Cache on Storage Server
- Automatic transformation to In-Memory columnar format from Flash Cache
- Just-in-time smart columnar decryption
- Smart aggregation with columnar cache
- Fast In-Memory columnar cache creation
- Columnar cache persistence
- Storage Index persistence

For OLTP:
- Exafusion Direct-to-Wire Protocol for inter-node data transfer
- EXAchk full-stack validation
- Active AWR includes storage stats for end-to-end monitoring
- Cell-to-cell rebalance preserving flash cache
- In-Memory commit cache
- Memory-optimized OLTP and IoT lookups
- Cell-to-cell rebalance preserves XRMEM cache
- Exadata RDMA Memory (XRMEM) Data Accelerator (XRMEM Cache) using RoCE fabric
- XRMEM Commit Accelerator (XRMEM Log) using RoCE fabric (X8M, X9M only)
- Database-aware NVMe PCIe flash interface
- Smart Flash Logging
- Smart Flash Log Write-back
- Write-back Flash Cache
- I/O Resource Management by DB, user and workload to ensure QoS
- Database-specific control of XRMEM usage
- Network Resource Management
- Control of flash cache size per database
- In-Memory OLTP acceleration
- Undo-block remote RDMA read
- Support for 4,000 Pluggable Databases per Container Database with Multitenant option

For Availability:
- Instant detection of node or cell failure
- Sub-second failover of I/O on stuck disk or flash
- Prefetch OLTP data into secondary mirror Flash Cache
- Offload incremental backups to storage servers
- Instant data file creation
- Prioritized rebalance of critical files
- Cell-to-cell rebalance to preserve Flash Cache population
- Automatic rebalance on predictive disk failures
- Automatic monitoring of CPU, network and memory using Machine Learning
- Automatic identification of underperforming disks
- Automatic Software Updates on an entire fleet of Exadata systems with one operation
- Cell software transparent restart
- Online Linux patching (Ksplice)

For Security:
- Comprehensive monitoring and auditing functionality at the server, network, database, and storage layers
- Secure lights-out management (ILOM) of database and storage servers
- Audit record of all logons and configuration changes
- FIPS 140-2 certification
- PCI-DSS compliance
- Minimal Linux distribution
- Secure RDMA fabric isolation
- Firewall-protected Exadata Storage Servers
- Secure network access
- Fast, hardware-based (AES) encryption/decryption
- Full-stack security scanning
- Database and ASM-scoped security
- Multi-pass secure erase of disks and flash
- Fast, secure erase of disk and flash (Crypto erase)
- Advanced Intrusion Detection Environment (AIDE) detects and alerts on unknown changes to system software
- InfiniBand partitioning
- Support for IPv6
- Secure computing filter to restrict system calls
- Centralized identification and authorization of OS users

Database Software

Exadata X10M database servers run the Oracle Linux 8 operating system, Oracle Database 19c Enterprise Edition, and Oracle Database 21c Enterprise Edition. Exadata system resources can be optionally virtualized using the KVM-based Oracle hypervisor. All Oracle Database options, such as Real Application Clusters, Multitenant, Database In-Memory, Advanced Compression, Advanced Security, Partitioning, Active Data Guard, and others are optionally available with Exadata.

Applications that are certified for a supported version of the Oracle Database are automatically compatible with Exadata. No additional modifications or certifications are required. The same database software that runs on Exadata on-premises will run on Exadata Cloud Infrastructure and Exadata Cloud@Customer. In addition, on-premises software licenses are eligible for the Bring Your Own License (BYOL) transfer into the Oracle public cloud or Exadata Cloud@Customer. Oracle Autonomous Database is available exclusively on Exadata cloud platforms.

Networking

Exadata provides high-speed networks for internal and external connectivity. A 100 gigabit per second (100 Gbit/s) RDMA-enabled Ethernet fabric is used for internal connectivity between database and storage servers and includes database cluster interconnect traffic. For external client connectivity, 100, 25, and 10 Gbit/s Ethernet ports are available.

Exadata uses a custom-designed, database-oriented protocol over the Ethernet fabric to achieve higher performance. It makes extensive use of Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) to improve efficiency by avoiding operating system overhead and extra copies when moving data between servers. Exadata also has a direct-to-wire protocol that allows the database to communicate directly to the RoCE network cards.

Exadata takes advantage of RoCE Class of Service in its Network Resource Management feature to prioritize important traffic across the network. In this feature, the Oracle Database software tags network messages that require low latency, such as transaction commits, lock messages and I/O operations issued by interactive users, prioritizing them over messages issued by less critical high-throughput workloads such as reporting and batch processing. The result is analogous to how an emergency vehicle with its siren on can move more quickly through heavy traffic - high-priority network messages are moved to the front of the server, network switch, and storage queues, bypassing lower-priority messages, and resulting in shorter and more predictable response times.
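
At its core, this is a class-of-service priority queue. The sketch below invents the class names and message strings; real RoCE Class of Service is enforced in the network cards and switches rather than in application code, but the queueing effect is the same.

```python
# Toy class-of-service queue: latency-critical messages (commits, lock
# grants) are dequeued ahead of bulk traffic that arrived earlier.
import heapq, itertools

LATENCY_CRITICAL, BULK = 0, 1             # lower value = higher priority
seq = itertools.count()                   # preserves FIFO order within a class

queue = []
def send(msg, cos):
    heapq.heappush(queue, (cos, next(seq), msg))

send("batch block 1", BULK)
send("batch block 2", BULK)
send("commit ack", LATENCY_CRITICAL)      # arrives last, tagged critical
order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(order)
```

Even though the commit acknowledgment arrived after two bulk transfers, it is serviced first, which is exactly the "emergency vehicle" behavior described above.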

Management Software

For Exadata systems deployed in traditional on-premises configurations, Oracle Enterprise Manager (EM) supports a single pane of glass view of all the Exadata hardware and software components such as database servers, storage servers, and network switches, and monitors the operations running on them. EM integrates with the built-in Exadata management tooling, as well as with customers' existing systems management and helpdesk tools.

The Exadata Cloud Infrastructure and Exadata Cloud@Customer platforms are managed by Oracle Cloud Infrastructure operations, while customers control and manage the software and databases running on the database servers. Lifecycle operations for Oracle databases are performed using a web browser, command-line interface (CLI), or REST API-driven automation available through the Cloud Control Plane, including provisioning, updating, scaling, and backup.

Hardware

Prior to Exadata X10M, Exadata was available in two models: one based on 2-socket database servers and the other based on 8-socket database servers. The adoption of AMD processors in Exadata X10M database servers replaces the 8-socket Exadata model, due to the high core count per socket. Thus, with Exadata X10M, only one Exadata hardware model is available, and the -2 and -8 suffixes on the names are removed.

The latest Exadata generation, X10M, was introduced in June 2023. The X10M database and storage servers use a 2 Rack Unit (RU) form factor, employing 2-socket AMD EPYC™ processors, with 96 and 32 cores per socket, respectively. All prior 2-socket Exadata generations used 1 RU database servers. With X10M, database servers require more space for airflow and cooling due to the higher performance of the AMD processors. Memory in database servers starts at 512 gigabytes (GB) and can be expanded to 3 terabytes (TB).

By comparison, the prior-generation X9M-2 database servers used a small form factor, 1 RU in height. They employed 2-socket Intel Xeon processors, each socket with 32 compute cores for 64 total cores per server. Memory started at 512 gigabytes (GB) and could be expanded to 2 terabytes (TB).

The Exadata Database Machine base configuration has 2 database servers and 3 storage servers, referred to as a Quarter Rack. The same hardware is also available in an Eighth Rack configuration with half of the processing and half of the storage capacity. As the database workload and/or data size increases, additional database and storage servers may be added to increase the volume of work performed in parallel, using Exadata's elastic configuration. Multi-rack Exadata configurations are available for scaling very large workloads that exceed a single rack.

Exadata Storage Servers

There are three choices for Exadata storage servers: Extreme Flash (EF), High Capacity (HC), and Extended (XT). The X10M Extreme Flash Storage Server is all-flash storage containing 4 performance-optimized and 4 capacity-optimized NVMe flash drives for 27.2 TB of Exadata Smart Flash Cache and 122.9 TB of raw flash storage capacity. Each storage server contains 1.25 TB of XRMEM as an acceleration tier in front of Flash Cache to boost performance further.

The X10M High Capacity Storage Server contains twelve 22 TB disk drives with 264 TB total raw disk capacity, 27.2 TB of NVMe Exadata Smart Flash Cache, and 1.25 TB of Exadata RDMA Memory. Exadata Smart Flash Cache is managed automatically by Exadata Smart Storage Software.

The X10M Extended Storage Server contains twelve disks, 22 TB each, for a total of 264 TB of raw storage capacity, but does not contain flash storage or XRMEM. Extended Storage Servers may be configured without Exadata Storage Server Software licenses. This storage option extends the operational and management benefits of Exadata to rarely accessed data that must be kept online.

In addition to adding storage servers into an Exadata Database Machine Quarter Rack configuration, storage servers may also be acquired with or added to Exadata Storage Expansion racks.

Performance specifications for a Quarter Rack Exadata X10M configuration with EF and HC storage servers are as follows:

Exadata Storage Server Max Scan Rate from Flash Max SQL Read IOPS from XRMEM Max SQL Write IOPS to Flash
X10M Extreme Flash 180 GB/sec 5,600,000 2,748,000
X10M High Capacity 135 GB/sec 5,600,000 2,748,000

Table 1. Maximum performance based on a Quarter Rack configuration of 2 database servers and 3 storage servers.

Note: IOPS = 8K I/O Operations per second from SQL
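
As a sanity check on Table 1, the XRMEM read IOPS figure can be converted to aggregate throughput at the 8K I/O size (assumed here to mean 8 × 1024 bytes, the common Oracle block size):

```python
# Converting the Table 1 read IOPS figure (8K I/Os) into throughput.
read_iops = 5_600_000                     # Max SQL Read IOPS from XRMEM
io_bytes = 8 * 1024                       # assumed 8K I/O size from the note
gb_per_sec = read_iops * io_bytes / 1e9   # decimal gigabytes per second
print(f"{gb_per_sec:.1f} GB/s of 8K reads")
```

That is roughly 46 GB/s of small-block read traffic from a base three-storage-server configuration.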

Memory-level performance with shared storage

Architects of traditional database platforms have always had to cope with technology change affecting the design of their systems. Their goal is to eliminate bottlenecks so that the output of storage moves through the network and is processed by database servers without any slowdown. Solving an imbalance generally involved adding faster or more network connections or database servers, an approach that held up until the advent of ultra-fast PCIe flash memory and the NVMe flash interface.

Flash memory started to become mainstream in corporate computing around 2010, used as a cache in front of hard disks or as a replacement for disks entirely. Every year thereafter flash capacity and performance increased significantly. In 2017, leading-edge flash performance crossed a threshold, where the most advanced networks were unable to match the performance of flash and became a substantial bottleneck. As an example, a popular all-flash storage system with 480 flash cards is rated at only 37.5 GB/s of data throughput, whereas without a network bottleneck, that many flash cards should produce over 2,600 GB/s of data throughput. Offloading to storage in Exadata bypasses this network bottleneck by filtering out unneeded data in storage before sending the remaining data across the network. The addition of XRMEM, which is faster than flash, increases the value of Exadata offloading even further. While adding flash directly to a traditional database server removes the network bottleneck, it also removes the ability to share storage across multiple database servers. Exadata's approach does not suffer this limitation.
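
The per-card arithmetic behind this example makes the mismatch concrete; the card count and throughput figures come from the paragraph above.

```python
# Back-of-the-envelope for the bottleneck example: 480 flash cards behind
# an array delivering 37.5 GB/s, versus the ~2,600 GB/s the text attributes
# to that many cards absent a network bottleneck.
cards = 480
delivered_gbs = 37.5                      # throughput the array actually serves
native_gbs = 2600                         # aggregate native rate from the text
per_card_delivered_mbs = delivered_gbs / cards * 1000
per_card_native_gbs = native_gbs / cards
print(f"{per_card_delivered_mbs:.0f} MB/s per card delivered "
      f"vs ~{per_card_native_gbs:.1f} GB/s native")
```

Each card effectively delivers under 80 MB/s through the array's network, a small fraction of its native rate, which is the gap storage offload is designed to sidestep.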

Hardware Specifications


Exadata Generation(2-socket) V1 V2 X2-2 X3-2 X4-2 X5-2 X6-2 X7-2 X8-2 X8M-2 X9M-2 X10M
Date Introduced Sep-2008 Sep-2009 Sep-2010 Sep-2012 Nov-2013 Jan-2015 Apr-2016 Oct-2017 Apr-2019 Sep-2019 Sep-2021 Jun-2023
Last Ship Date Oct-2009 Oct-2010 Sep-2012 Feb-2014 Mar-2015 Jul-2016 Nov-2017 Jun-2019 Dec-2020 Sep-2022 still shipping still shipping
Operating System Linux Linux Linux Linux Linux Linux Linux Linux Linux Linux Linux Linux
Disk Storage (raw TB) 168 336 504 504 672 1344 1344 1680 2352 2352 3024 3696
Flash Cache (raw TB) N/A 5.3 5.3 22.4 44.8 89.6 179.2 358 358 358 358 380
Persistent Memory (TB) N/A N/A N/A N/A N/A N/A N/A N/A N/A 21 21 N/A
Exadata RDMA Memory (XRMEM) (TB) N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A N/A 17.5
Extreme Flash (raw TB) N/A N/A N/A N/A N/A 179.2 358.4 716.8 716.8 716.8 716.8 1720.32
Compute Cores 64 64 96 128 192 288 352 384 384 384 512 1536
Max Memory (GB) 256 576 1,152 2,048 4,096 6,144 12,288 12,288 12,288 12,288 16,384 24,576
RDMA Network Fabric (Gb/sec) 20 40 40 40 40 40 40 40 40 100 200 200
Ethernet (Gb/sec) 8 24 184 400 400 400 400 800 800 800 800 800

Table 2. Key statistics for each Exadata 2-socket system since initial introduction. Comparisons are based on configurations of 8 database servers and 14 storage servers, except for X10M with 5 database servers and 14 storage servers.

Exadata Generation(8-socket) X2-8 X3-8 X4-8 X5-8 X6-8 X7-8 X8-8 X8M-8 X9M-8
Date Introduced Sep-2010 Sep-2012 Jul-2014 Nov-2015 Apr-2016 Oct-2017 Apr-2019 Sep-2019 Sep-2021
Last Ship Date Nov-2012 Dec-2014 Oct-2015 Mar-2016 Nov-2017 Jun-2019 Dec-2020 Jun-2022 still shipping
Operating System Linux Linux Linux Linux Linux Linux Linux Linux Linux
Disk Storage (raw TB) 504 504 672 1344 1344 1680 2352 2352 3024
Flash Cache (raw TB) 5.3 22.4 89.6 89.6 179.2 358.4 358.4 358.4 358.4
Persistent Memory (TB) N/A N/A N/A N/A N/A N/A N/A 21 21
Extreme Flash (raw TB) N/A N/A 179.2 179.2 358.4 716.8 716.8 716.8 716.8
Compute Cores 96 160 240 288 288 384 384 384 384
Max Memory (TB) 4 4 12 12 12 12 12 12 12
RDMA Network Fabric (Gb/sec) 40 40 40 40 40 40 40 100 100
Ethernet (Gb/sec) 176 176 180 180 180 540 540 540 540

Table 3. Key statistics for each Exadata 8-socket system, initially introduced with Exadata X2-8. X9M-8 was the last 8-socket model. Comparisons are based on configurations of 2 database servers and 14 storage servers.

Elastic Configurations

Prior to the X5-2 generation, Exadata systems were only available in fixed-size configurations of Eighth, Quarter, Half, and Full Rack sizes. The X5-2 Exadata release in January 2015 introduced elastic configurations. An elastic configuration has a customer-specified combination of database servers and storage servers, allowing individual storage or database servers to be added to a Quarter Rack configuration until the physical rack is full. The ratio of database to storage servers can vary, depending on the characteristics of the intended workload. For example, an Exadata system optimized for in-memory database processing would add many database servers, each with maximum memory. Conversely, an Exadata system optimized for a large data warehouse could add many High-Capacity storage servers. Elastic configurations may also be used to scale out earlier generation Exadata systems using the latest compatible servers. In addition, Exadata Database Machines have always been able to span multiple racks using the built-in network fabric connections. Thus, Exadata’s scale-out extends beyond a single physical rack.

Exadata Evolution

Oracle releases a new generation of Exadata every twelve to twenty-four months. At each release, Oracle refreshes most hardware components to the latest CPU processors, memory, disk, flash, and networking technologies. These hardware refreshes result in performance increases with every release. Exadata software innovations, delivered with each generation and periodically in between, consistently enhance performance, availability, security, management, and workload consolidation.

The evolution of Exadata is best understood through the innovations introduced in each generation.

Exadata V1, released in 2008, focused on accelerating data warehousing by delivering the full throughput of storage to the database. Exadata achieved this by moving database filtering operations into storage, instead of sending all data to the database servers and filtering it there. This capability is referred to as Exadata Smart Scan. Exadata V1 also supported a consolidation feature for allocating I/O bandwidth between databases or workloads, called IORM (I/O Resource Manager). Exadata V1 was available in Full Rack or Half Rack sizes, with a choice of High Performance or High Capacity storage servers, both using disk drives for storage. The internal network fabric of Exadata was based on InfiniBand technology.
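The bandwidth savings from storage-side filtering can be illustrated with a toy model. This is a hypothetical Python sketch of the predicate-offload idea only; the table data, predicate, and function names are illustrative assumptions, not Oracle's implementation:

```python
# Toy model of Smart Scan: the storage tier applies the filter and
# projection itself and ships only matching rows/columns to the
# database server, instead of shipping every row.
# Illustrative sketch only -- not Oracle code.

def traditional_scan(storage_rows):
    """Ship every row to the database server, then filter there."""
    shipped = list(storage_rows)               # full table crosses the wire
    matches = [r for r in shipped if r["amount"] > 900]
    return matches, len(shipped)

def smart_scan(storage_rows, predicate, columns):
    """Filter and project inside the storage tier; ship only matches."""
    shipped = [{c: r[c] for c in columns}
               for r in storage_rows if predicate(r)]
    return shipped, len(shipped)

# Made-up table: 10,000 rows with a wide padding column.
rows = [{"id": i, "amount": i % 1000, "pad": "x" * 100}
        for i in range(10_000)]

_, shipped_traditional = traditional_scan(rows)
matches, shipped_smart = smart_scan(
    rows, lambda r: r["amount"] > 900, ["id", "amount"])

print(shipped_traditional, shipped_smart)  # 10000 vs. 990 rows shipped
```

In the traditional path all 10,000 rows (including the padding column) cross the interconnect; with the offloaded scan only the 990 matching rows, trimmed to two columns, are returned.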

Exadata V2, released in 2009, added a Quarter Rack configuration and support for OLTP workloads via flash storage and database-aware Flash Caching.

Exadata V2 also introduced Hybrid Columnar Compression to reduce the amount of storage consumed by large Data Warehouses.
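Why a columnar organization compresses better than a row-major layout can be shown with a small experiment. The records below are made-up sample data and the layout is only an analogy for compression units, not Oracle's HCC algorithm:

```python
# Toy illustration: values stored together within a column are similar,
# so a general-purpose compressor finds repetition more easily than in
# an interleaved row-major layout. Analogy only -- not Oracle's HCC.
import zlib

# Hypothetical sales records: date, customer, amount (all made-up).
records = [
    (f"2023-06-{i % 30 + 1:02d}", f"CUST{i % 4}", (i * 37) % 1000)
    for i in range(5000)
]

# Row-major layout: each record's fields are interleaved.
row_layout = "".join(f"{d}|{c}|{a};" for d, c, a in records).encode()

# Column-major layout: all dates together, then customers, then amounts.
col_layout = (
    "".join(d for d, _, _ in records)
    + "".join(c for _, c, _ in records)
    + "".join(str(a) for _, _, a in records)
).encode()

row_size = len(zlib.compress(row_layout, 9))
col_size = len(zlib.compress(col_layout, 9))
print(col_size < row_size)  # columnar typically compresses smaller
```

The same principle, combined with compression algorithms tuned per column, is what lets Hybrid Columnar Compression shrink warehouse tables substantially.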

Storage Indexes in Exadata V2 increased performance by eliminating the need to read entire regions of storage, based on the storage server's knowledge of the values contained in the region.
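The min/max pruning idea behind Storage Indexes can be sketched as follows. The region size and structure here are assumptions for illustration; the real feature maintains its summaries automatically and transparently in storage server memory:

```python
# Toy sketch of a storage index: per-region min/max summaries let the
# storage server skip regions that cannot contain matching values.
# Region size and layout are illustrative assumptions.

REGION_ROWS = 1000

def build_storage_index(values, region_rows=REGION_ROWS):
    """Record (min, max) for each fixed-size region of a column."""
    index = []
    for start in range(0, len(values), region_rows):
        region = values[start:start + region_rows]
        index.append((min(region), max(region)))
    return index

def scan_with_index(values, index, target, region_rows=REGION_ROWS):
    """Read only regions whose [min, max] range could contain target."""
    hits, regions_read = [], 0
    for i, (lo, hi) in enumerate(index):
        if lo <= target <= hi:                 # region may contain target
            regions_read += 1
            start = i * region_rows
            hits += [v for v in values[start:start + region_rows]
                     if v == target]
    return hits, regions_read

# Clustered data (e.g. loaded in order) makes min/max pruning effective.
column = list(range(100_000))
idx = build_storage_index(column)
hits, regions_read = scan_with_index(column, idx, 42_017)
print(len(hits), regions_read, "of", len(idx))  # 1 hit, 1 of 100 regions
```

With well-clustered data, 99 of the 100 regions are never read at all, which is the source of the large I/O eliminations Storage Indexes deliver in practice.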

Exadata X2-2, the third generation, was released in 2010, and a second model, Exadata X2-8, was introduced. The X2-8 and subsequent 8-socket Exadata models featured processors targeted at large-memory, scale-up workloads. The use of flash storage beyond caching began in this release with a Smart Flash Logging feature. Support for 10 Gigabit per second (Gb/sec) Ethernet client connectivity was also added.

Exadata X2-2 also encouraged data security through encryption with the incorporation of processor-based hardware cryptography, largely eliminating the performance overhead of software-based encryption and decryption.

A Storage Expansion Rack based on Exadata X2-2 was added in 2011 to accommodate large, fast-growing data warehouses and archival databases. All subsequent Exadata generations have included a new Storage Expansion Rack.

Exadata X3-2 and X3-8 were released in 2012, including a new Eighth Rack X3-2 entry-level configuration. Flash storage capacity quadrupled and OLTP write throughput increased by 20x via the Write-Back Flash Cache feature.

A number of availability enhancements were added, bypassing slow or failed storage media, reducing the duration of storage server brownouts, and simplifying replacement of failed disks.

Exadata X4-2 was released in 2013. Flash capacity doubled and flash compression was added, effectively doubling capacity again. Network Resource Management was introduced, automatically prioritizing critical messages. InfiniBand bandwidth effectively doubled with support for active/active connections.

Exadata X4-8 was released in 2014, introducing Capacity on Demand licensing, I/O latency capping, and I/O timeout thresholds, all of which carried forward to subsequent models.

Exadata X5-2 and X5-8 were released in 2015 with major enhancements. Flash and disk capacity doubled. Elastic configurations were introduced to enable expansion one server at a time. Virtualization was added as an option to Exadata along with Trusted Partitions for flexible licensing within a virtual machine. Database snapshots on Exadata storage enabled efficient development and testing. Oracle Database In-Memory on Exadata included Fault Tolerant redundancy. The High Performance Exadata storage servers were replaced with all-flash (Extreme Flash) storage servers and Exadata became the first major vendor to adopt the NVMe flash interface. Columnar Flash Cache was introduced to automatically reformat analytics data into columnar format in flash. IPv6 support was completed. Exadata Cloud Service was launched on the Oracle Cloud.

Exadata X6-2 and X6-8 were released in 2016. Flash capacity doubled. The Exafusion Direct-to-Wire protocol reduced messaging overhead in a cluster, and Smart Fusion Block Transfer eliminated redo log write delays when transferring blocks between database nodes. Exadata Cloud@Customer debuted, enabling Oracle Cloud benefits within corporate data centers.

Exadata X7-2 and X7-8 were released in 2017. Flash capacity doubled. Flash cards became hot-pluggable for online replacement. 10 Terabyte (TB) disk drives debuted along with 25 Gb/sec Ethernet client connectivity. Oracle Database In-Memory processing was extended into flash storage, and storage server DRAM was utilized for faster OLTP.

Exadata X8-2 and X8-8 were released in April 2019. Exadata Storage Server Extended (XT) was introduced for low-cost storage of infrequently accessed data. 14 Terabyte (TB) disk drives debuted along with 60% more compute cores in Exadata storage servers. Machine Learning algorithms were added that automatically monitor CPU, network, and memory to detect anomalies such as stuck processes, memory leaks, and flaky networks, and to automatically create, rebuild, or drop indexes (Automatic Indexing). Optimizer statistics are also gathered in real time as DML executes. For enhanced security, Advanced Intrusion Detection Environment (AIDE) was added to detect and alert when unsanctioned changes to system software are made.

Exadata X8M-2 and X8M-8 were released in September 2019. Substantial performance increases resulted from the addition of Intel Optane DC Persistent Memory in Exadata Storage Servers, and a new 100 Gb/sec internal network fabric based on RoCE (RDMA over Converged Ethernet), replacing the previous InfiniBand fabric. These changes increased read I/O throughput by 2.5x and lowered I/O latency by 10x. In addition, a new KVM hypervisor replaced the Xen hypervisor, doubling the amount of memory available to a guest VM.

Exadata X9M-2 and X9M-8 were released in September 2021 and included the latest generation of Intel Optane Persistent Memory and PCIe Gen 4, which led to significant performance gains over the previous generation. OLTP read I/O throughput increased a further 1.6x and the 1 TB/s Smart Scan threshold was crossed within a single rack.

Exadata X10M was released in June 2023 as a 2-socket-only model based on AMD processors, with 96 cores per socket in the database servers. The high core count and large memory capacity removed the need for the 8-socket Exadata model. Persistent memory was replaced with Exadata RDMA Memory, based on DRAM. Disk storage capacity and all-flash storage capacity increased. Database servers grew from 1 to 2 rack units (RU) for better airflow and cooling.