OLTP, or online transaction processing, is a type of data processing that consists of executing a number of transactions occurring concurrently: online banking, shopping, order entry, or sending text messages, for example. Traditionally, these transactions are referred to as economic or financial transactions, and they are recorded and secured so that an enterprise can access the information anytime for accounting or reporting purposes.
In the past, OLTP was limited to real-world interactions in which something was exchanged: money, products, information, requests for services, and so on. But the definition of a transaction in this context has expanded over the years, especially since the advent of the internet, to encompass any kind of digital interaction or engagement with a business that can be triggered from anywhere in the world and via any web-connected sensor. It also includes any kind of interaction or action, such as downloading a PDF from a web page, viewing a specific video, or automatic maintenance triggers or comments on social channels, that may be critical for a business to record so it can serve its customers better.
The primary definition of transactions, economic or financial, remains the foundation for most OLTP systems, so online transaction processing typically involves inserting, updating, and/or deleting small amounts of data in a data store to collect, manage, and secure those transactions. Typically, a web, mobile, or enterprise application tracks all those interactions or transactions with customers, suppliers, or partners and updates them in the OLTP database. The transaction data stored in the database is critical for businesses: it is used for reporting and analyzed to support data-driven decision making.
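To make this insert/update pattern concrete, here is a minimal sketch using Python's built-in sqlite3 module. The orders table, columns, and values are illustrative only, and a production OLTP system would use a client/server database, but the transactional pattern is the same:

```python
import sqlite3

# Hypothetical order-entry schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders "
    "(id INTEGER PRIMARY KEY, customer TEXT, amount REAL, status TEXT)"
)

# One OLTP transaction: record an order and update its status atomically.
with conn:  # commits on success, rolls back if any statement fails
    conn.execute(
        "INSERT INTO orders (customer, amount, status) VALUES (?, ?, ?)",
        ("alice", 42.50, "placed"),
    )
    conn.execute(
        "UPDATE orders SET status = ? WHERE customer = ?",
        ("confirmed", "alice"),
    )

row = conn.execute(
    "SELECT status FROM orders WHERE customer = ?", ("alice",)
).fetchone()
print(row[0])  # confirmed
```

Wrapping the statements in a transaction (`with conn:`) ensures both changes commit together or roll back together, which is the core guarantee an OLTP system provides.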
Read how companies such as Retraced, the Archaeological Park of Pompeii, Jasci, and Siemens have been successful in building their transaction processing workloads in the cloud.
Businesses usually have two types of data processing capabilities: OLTP and OLAP.
Though they sound similar and are both online data processing systems, there is a stark difference between the two.
OLTP enables the real-time execution of large numbers of transactions by large numbers of people, whereas online analytical processing (OLAP) usually involves querying these transactions (also referred to as records) in a database for analytical purposes. OLAP helps companies extract insights from their transaction data so they can use it for making more informed decisions.
The table below compares OLTP and OLAP systems.
| OLTP systems | OLAP systems |
| --- | --- |
| Enable the real-time execution of large numbers of database transactions by large numbers of people | Usually involve querying many records (even all records) in a database for analytical purposes |
| Require lightning-fast response times | Require response times that are orders of magnitude slower than those required by OLTP |
| Modify small amounts of data frequently and usually involve a balance of reads and writes | Do not modify data at all; workloads are usually read-intensive |
| Use indexed data to improve response times | Store data in columnar format to allow easy access to large numbers of records |
| Require frequent or concurrent database backups | Require far less frequent database backups |
| Require relatively little storage space | Typically have significant storage space requirements, because they store large amounts of historical data |
| Usually run simple queries involving just one or a few records | Run complex queries involving large numbers of records |
So, OLTP is an online data modification system, whereas OLAP is an online historical, multidimensional data store system that's used to retrieve large amounts of data for analytical purposes. OLAP usually provides analytics on data that was captured by one or more OLTP systems.
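The contrast can be sketched with two queries against the same data. This is an illustrative example using Python's sqlite3 module, with a hypothetical sales table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO sales (region, amount) VALUES (?, ?)",
    [("east", 10.0), ("east", 20.0), ("west", 5.0)],
)
conn.commit()

# OLTP-style query: touch one record by its key; narrow and fast.
one = conn.execute("SELECT amount FROM sales WHERE id = ?", (1,)).fetchone()

# OLAP-style query: scan and aggregate many records for analysis.
totals = dict(
    conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region")
)

print(one[0], totals["east"], totals["west"])  # 10.0 30.0 5.0
```

The first query reads a single row; the second reads every row and summarizes it, which is why OLAP workloads favor columnar storage and can tolerate slower response times.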
The most common architecture of an OLTP system that uses transactional data is a three-tier architecture that typically consists of a presentation tier, a business logic tier, and a data store tier. The presentation tier is the front end, where the transaction originates via a human interaction or is system-generated. The logic tier consists of rules that verify the transaction and ensure all the data required to complete the transaction is available. The data store tier stores the transaction and all the data related to it.
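As a rough illustration, the three tiers can be sketched as plain Python functions. All names here are hypothetical stand-ins for a front end, an application server, and a database:

```python
# Minimal sketch of a three-tier OLTP flow; names are illustrative only.

DATA_STORE: list[dict] = []  # stand-in for the OLTP database


def data_store_tier(request: dict) -> None:
    """Data tier: persist the validated transaction."""
    DATA_STORE.append(request)


def business_logic_tier(request: dict) -> str:
    """Logic tier: verify the transaction and required data before storing."""
    if not request.get("customer") or request.get("amount", 0) <= 0:
        return "rejected"
    data_store_tier(request)
    return "accepted"


def presentation_tier(raw_request: dict) -> str:
    """Presentation tier: the front end where the transaction originates."""
    return business_logic_tier(raw_request)


print(presentation_tier({"customer": "alice", "amount": 42.5}))  # accepted
print(presentation_tier({"amount": -1}))  # rejected
```

Each tier only talks to the one below it: the front end never writes to the data store directly, which is what lets the logic tier enforce its validation rules on every transaction.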
The following sections describe the main characteristics of an online transaction processing system.
Relational databases were built specifically for transaction applications. They embody all the essential elements required for storing and processing large volumes of transactions, while also continuously being updated with new features and functionality for extracting more value from this rich transaction data. Relational databases are designed from the ground up to provide the highest possible availability and fastest performance. They provide concurrency and ACID compliance so the data is accurate, always available, and easily accessible. They store data in tables after extracting relationships between the data so the data can be used by any application, ensuring a single source of truth.
As transactions became more complex, originating from any source or device, anywhere in the world, traditional relational databases were no longer advanced enough to meet the needs of modern transactional workflows. They had to evolve to handle modern transactions, heterogeneous data, and global scale, and, most importantly, to run mixed workloads. Relational databases transformed into multimodel databases that store and process not only relational data but also other types of data, including XML, HTML, JSON, Apache Avro and Parquet, and documents, in their native form and without much transformation. Relational databases also needed to add functionality such as clustering and sharding so they could be distributed globally and scale nearly without limit, storing and processing ever-larger volumes of data while taking advantage of cheaper storage available in the cloud. With other capabilities such as in-memory processing, advanced analytics, visualization, and transaction event queues included, these databases can now run multiple workloads, such as analytics on transaction data, processing of streaming Internet of Things (IoT) data, or spatial and graph analytics.
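As a small example of the multimodel idea, here is a sketch using Python's sqlite3 module, which, in builds that include SQLite's JSON functions (as most modern builds do), can query inside a JSON document stored as-is. The events table and payload are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The JSON payload is kept in its native form, not split into columns.
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute(
    "INSERT INTO events (payload) VALUES (?)",
    ('{"device": "sensor-7", "reading": 21.4}',),
)
conn.commit()

# Query inside the JSON document without reshaping it first.
device = conn.execute(
    "SELECT json_extract(payload, '$.device') FROM events"
).fetchone()[0]
print(device)  # sensor-7
```

The same table can hold relational columns alongside the document, which is the essence of running mixed workloads over one store instead of copying data into a separate document database.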
Modern relational databases built in the cloud automate a lot of the management and operational aspects of the database, making them easier for users to provision and use. They provide automated provisioning, security, recovery, backup, and scaling so DBAs and IT teams have to spend much less time maintaining them. They also embed intelligence to automatically tune and index the data so database query performance is consistent irrespective of the amount of data, the number of concurrent users, or the complexity of the queries. These cloud databases also include self-service capabilities and REST APIs so developers and analysts can easily access and use the data. This simplifies application development, giving flexibility and making it easier for developers to build new functionality and customizations into their applications. It also simplifies analytics, making it easier for analysts and data scientists to use the data for extracting insights.
As IT struggles to keep pace with the speed of business, it is important to consider both your immediate data needs and your long-term data requirements when you choose an operational database. For storing transactions, maintaining systems of record, or content management, you will need a database with high concurrency, high throughput, low latency, and mission-critical characteristics such as high availability, data protection, and disaster recovery. Most likely, your workload will fluctuate throughout the day, week, or year, so ensuring that the database can autoscale will help you save a lot of expense. You may also need to decide whether to use a purpose-built database or a general-purpose database. If your requirements are for a specific type of data, a purpose-built database may work for you, but make sure you aren't compromising on any of the other characteristics you need; it would be costly and resource-intensive to build those characteristics later in the application layer. Also, if your data needs grow and you want to expand the functionality of your application, adding more single-purpose or fit-for-purpose databases will only create data silos and amplify data management problems. You must also consider other functionality that may be necessary for your specific workload, for example ingestion requirements, push-down compute requirements, and size at limit.
Select a future-proof cloud database service with self-service capabilities that will automate data management so that your data consumers (developers, analysts, data engineers, data scientists, and DBAs) can do more with the data and accelerate application development.
Learn more about Oracle’s Autonomous Transaction Processing Database, the cloud OLTP database service. Try it for free.