Ideas for Event Sourcing in Oracle
Log Miner or are there other alternatives?
With microservices, the architects want to dismantle the monolithic database and replicate data rather than share it. This raises the need to capture the changes where the modifications are done (like the C in CQRS). The Oracle database already does that for recovery purposes, building the redo records before modifying the data blocks, but that's a physical change vector. We need something logical, with more possibilities to filter and transform. There are multiple possible methods for this. But unfortunately, the ones that were built into the database are slowly being removed since Oracle acquired Golden Gate and sells it separately.
- Streams is deprecated in 12c
- Advanced Replication is deprecated in 12c
- Change Data Capture is deprecated in 12c
- LogMiner Continuous Mine is deprecated in 12c
And deprecated became desupported, and even removed in later releases, like Continuous Mine in 19c, the terminal patchset of 12cR2.
The documentation says that the replacement is Golden Gate, but that's another product to buy: very powerful, but expensive (and there's no Standard Edition).
Debezium DBZ-137
Debezium, an open source distributed platform for change data capture, is working on an Oracle Database connector. Many ideas are mentioned in https://issues.jboss.org/browse/DBZ-137 and in this post I give my thoughts about them.
Oracle XStreams
The perfect solution, as it has a minimal overhead on the source and is very efficient. But it requires Golden Gate licensing and is therefore probably not the best fit for an Open Source product.
Oracle LogMiner
LogMiner is included in all Editions, reads the redo stream (archived and online redo logs), and extracts all information from it. When enabling Supplemental Logging, we have enough information to build the logical change records. Many solutions are already based on that. But I see two problems with it.
LogMiner Limits: Basically, LogMiner was not made for replication. The idea was more to give a troubleshooting tool to understand what happened on the data: what is generating too much redo? Who deleted some data? Which sessions were locking rows?… There are limits, like unsupported datatypes. And it is not designed to be efficient, even if there is the possibility to mine on another system to offload the source. However, I think that those limits can be acceptable for an Open Source solution on simple databases with a low rate of changes.
LogMiner Future: What is more worrying is how Oracle removes the features that may give an alternative to Golden Gate. In 19c, CONTINUOUS_MINE was removed. This means that we need to constantly re-open and read the whole redo logs. And do we know what Oracle will remove in future versions when they see a robust Open Source product that competes with Golden Gate?
On DBZ-137 there are some remarks about RAC, which is more complex because there are multiple redo threads. I don't think that RAC is in scope for this. RAC is an expensive option that is required only on large databases with a very high load. That fits more in the Golden Gate scope.
Note that we can parse the SQL_REDO and SQL_UNDO from V$LOGMNR_CONTENTS, but there's also the possibility to get the values from dbms_logmnr.mine_value.
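As an illustration, here is a minimal sketch of an ad-hoc LogMiner session (the log file name and the schema are placeholders):

-- minimal supplemental logging so that the redo contains enough information to identify the rows
alter database add supplemental log data;
-- register the redo or archived log to mine (file name is a placeholder)
exec dbms_logmnr.add_logfile(logfilename=>'/u01/fra/ORCL/arch_1_123.arc', options=>dbms_logmnr.new)
-- start mining, using the online dictionary to resolve object names
exec dbms_logmnr.start_logmnr(options=>dbms_logmnr.dict_from_online_catalog)
-- the logical changes, with the SQL to redo or undo them
select scn, timestamp, operation, seg_owner, seg_name, sql_redo, sql_undo
  from v$logmnr_contents
 where seg_owner='DEMO' and operation in ('INSERT','UPDATE','DELETE');
exec dbms_logmnr.end_logmnr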
Mining the binary log stream
There are some attempts to mine the binary redo logs directly: some well-known commercial products, and some Open Source attempts. That's very complex, but that's also fun for an open-source community. The redo log structure is proprietary, but Oracle will not change it too often because all availability features (recovery, standby,…) are based on it. However, there may be a legal issue with open-sourcing this mining, as it exposes the proprietary format of the redo. Reverse engineering is clearly forbidden by the Oracle license.
Continuous Query Notification
I studied the usage of dbms_change_notification as a CDC alternative: https://blog.dbi-services.com/event-sourcing-cqn-is-not-a-replacement-for-cdc. This feature is aimed at nearly static data, in order to invalidate and refresh a cache. It is not designed for a high change rate and is not efficient at all for this.
Client Result Cache
In the same idea of refreshing a cache from data that does not change often, one can think about querying with the client result cache, as it has a mechanism to invalidate the cache when a modification occurs. However, the granularity is coarse: any change on the table will invalidate all cached queries on it.
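For illustration, a minimal sketch of how a table is made a result cache candidate (the DEMO table name is a placeholder); the point is that the dependency, and therefore the invalidation, is tracked at table level:

-- make query results on this table candidates for the result cache
alter table demo result_cache (mode force);
-- or request it per query with a hint
select /*+ result_cache */ count(*) from demo;
-- any committed DML on DEMO invalidates every cached result that depends on it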
Materialized View Logs
All changes can be logged in materialized view logs. This feature is built for materialized view fast refresh, which is a kind of replication. This has nothing to do with the redo log used by LogMiner: with materialized view logs, the changes are stored in a table and must be deleted once consumed. This feature has existed for a long time and is widely used. However, I would seriously question the architecture if there's a general need for double writing, then reading it and deleting it, just to put the same data into another place.
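A minimal sketch of this double write, on a hypothetical DEMO table with an AMOUNT column:

-- log the primary key, a sequence and the new values of the changed rows
create materialized view log on demo
  with primary key, sequence (amount) including new values;
-- the captured changes are visible, and must be purged once consumed, in:
select * from mlog$_demo;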
Triggers
With triggers, we can log the changes as with materialized view logs. It gives more possibilities, like sending the change rather than storing it in the database (but then we have to manage the transaction visibility). An optimized example to store the audited changes has been published by Connor McDonald. But this is still a lot of overhead, and the triggers need to be adapted when columns are added or removed.
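Connor's version is more optimized; here is only a minimal sketch of the idea, on a hypothetical DEMO(ID, AMOUNT) table, logging old and new values into an audit table:

create table demo_audit (
  change_time timestamp, operation varchar2(6),
  old_id number, new_id number, old_amount number, new_amount number
);
create or replace trigger demo_audit_trg
after insert or update or delete on demo
for each row
begin
  -- one audit row per changed row, with the old and new values
  insert into demo_audit values (
    systimestamp,
    case when inserting then 'INSERT' when updating then 'UPDATE' else 'DELETE' end,
    :old.id, :new.id, :old.amount, :new.amount
  );
end;
/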
ORA_ROWSCN
When we enable row dependencies, the ORA_ROWSCN pseudo-column can help to filter the rows that may have been updated recently. However, there are two problems with this approach.
full read: if we want a near real-time replication, we will probably poll for changes frequently. ORA_ROWSCN would be nice if it were indexed, but that's not the case. It just reads the information stored in the table block. That means that to find the changes done in the last 5 minutes we need to full scan the table, and ORA_ROWSCN will then help to identify the rows that were changed. It is a transparent alternative to a “last update” timestamp column, but does not help to access those rows quickly.
commit time: there’s a general problem with anything that reads a “change” timestamp. Let’s say that I pool the changes every 5 minutes. I have a long transaction that updates a row at 12:39 and commits at 12:42. The pool that runs at 12:40, looking for changes since 12:35, does not see the change as it is not committed yet. The pool that runs at 12:45 can see it but not when it filters on the changes that occurred since the last run, which is 12:20. This means that each run must look on a larger window, including the longest transaction start. And then it must deal with duplicates as some of the changes have been captured by the previous run. This is a general problem when there’s no “commit SCN” available.
Userenv(‘commitscn’)
While talking about the visibility time (commit SCN) vs. the change time, there is an undocumented way to get it: insert or update with userenv('commitscn') and Oracle will magically come back to the table row at the end of the transaction to set the commit SCN. It is not supported, and anyway it can be invoked only once per transaction, so it cannot be added automatically by a trigger.
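A minimal, unsupported sketch of it (table and column names are hypothetical): the inserted value is patched to the commit SCN when the transaction commits.

create table demo_tx (id number, commit_scn number);
insert into demo_tx values (1, userenv('commitscn'));
commit;
-- after the commit, COMMIT_SCN holds the SCN of the commit, not of the insert
select * from demo_tx;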
Oracle Flashback Query
If we don’t want to add additional auditing on DML, the redo log is not the only internal logging. Oracle also logs the undo information for consistent reads (MVCC) and this, without any additional overhead on the modification, can show all changes that occurred in a table. Basically, we can SELECT … FROM … VERSION BETWEEN SCN … AND … and all changes will be visible with the new and old values and additional information about the operation and the transaction.
However, this is not indexed. Like with ORA_ROWSCN we need to full scan the table and the consistent read will build the previous versions of the blocks, thanks to the undo.
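A sketch of such a flashback versions query, on a hypothetical DEMO table (the SCN range is a placeholder):

select versions_startscn, versions_endscn, versions_xid, versions_operation, demo.*
  from demo versions between scn :last_captured_scn and maxvalue
 order by versions_startscn;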
Flashback Data Archive
Flashback Query can reconstruct the recent changes, limited by the undo retention, and by the last DDL that occurred. Flashback Data Archive (which was called Total Recall) can go further. This feature is available in Enterprise Edition without the need for an additional option. It can go beyond the undo retention and allows some DDL. But, again, it is not optimized to get all changes since a specific point in time. The idea is that when you know the rows you want to read, then it can get to the previous version.
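Enabling it is simple (the tablespace, retention and table name are placeholders):

-- a flashback archive with its retention
create flashback archive demo_fba tablespace users retention 1 year;
-- keep the history of this table beyond the undo retention
alter table demo flashback archive demo_fba;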
Minimal trigger + flashback query
Some of those solutions can be combined. For example, a trigger can log only the ROWID of the changed rows, and the replication process will get more information for these rows through a flashback query. This lowers the overhead on the changes, while still avoiding a full scan for the replication. Or you may get those ROWIDs directly from custom-mining the redo logs, which is much simpler than trying to get all the information from them.
DDL triggers
SQL is agile and allows the structure to change. If adding a column breaks the whole replication, then we have a problem. All the solutions above need to handle those changes. The redo log contains the changes to the dictionary, but it can be complex to decode. All the other solutions must adapt to those changes, and that means having a DDL trigger and handling the different kinds of changes.
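A minimal sketch of such a DDL trigger, logging the DDL events on the schema into a hypothetical DEMO_DDL table:

create table demo_ddl (ddl_time timestamp, event varchar2(30), object_type varchar2(30), object_name varchar2(128));
create or replace trigger capture_ddl
after ddl on schema
begin
  -- record which kind of DDL happened on which object
  insert into demo_ddl values (systimestamp, ora_sysevent, ora_dict_obj_type, ora_dict_obj_name);
end;
/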
Not easy…
The summary is that there are no easy solutions, and the easiest ones have been removed by Oracle to push the sales of Golden Gate. My first recommendation, when someone wants to replicate the changes in order to query them from another place, is: don't do that. Relational databases are made to ingest new data and modifications, and to be able to query them for different purposes. We have views to show data in different formats. We have indexes to get fast access for different use cases. Oracle is not like many of its competitors: it has been optimized for mixed workloads from the first versions. You can query the same database where the changes occur because a SELECT does not lock anything. You have a resource manager to be sure that runaway queries cannot slow down the transactional activity. And the CPU usage for those queries, when correctly tuned, will rarely be higher than the replication activity you would need to implement to stream the changes to another database.
Then, which technology should event sourcing be built upon? LogMiner looks good for small databases with basic usage. And the project will have to adapt as Oracle removes more features in the future.
Hybrid trigger / flashback query
When only a few tables are concerned, generating DML triggers is probably the simplest, especially if they log only the minimum, like the ROWID. The ROWID will be visible only when the transaction is committed. Then the replication process must use a flashback query, reading only those blocks from the ROWID. The nice thing here is that the flashback query shows when the change was visible (the commit time) rather than the change time, which makes it easier to filter out the changes already processed by the last run.
Here is the idea, when a trigger has logged the changed ROWIDs into a DEMO_CDC table, and we query:
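A sketch of such a query (assuming the trigger stores the ROWIDs in a ROW_ID column of DEMO_CDC):

select versions_startscn, versions_operation, demo.*
  from demo versions between scn :last_captured_scn and maxvalue
 where rowid in (select row_id from demo_cdc);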
The execution plan for this shows optimal access with the ROWID:
Explain Plan
------------------------------------------------
PLAN_TABLE_OUTPUT
Plan hash value: 3039832324
------------------------------------------------
| Id | Operation | Name |
------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | NESTED LOOPS | |
| 2 | SORT UNIQUE | |
| 3 | TABLE ACCESS FULL | DEMO_CDC |
| 4 | TABLE ACCESS BY USER ROWID| DEMO |
------------------------------------------------
The important thing here is that the query cost is proportional to the changes and not to the full size of the table. And the trigger overhead is limited to the ROWID only. There’s no need to store in a table the values that are stored already in the base table and the undo segments. If reading this is done frequently, there are good chances that all blocks involved (the ROWID list, the UNDO records and the table block) are still in the buffer cache.
This trigger+flashback approach is just an idea that I've never used, so feedback is welcome on https://twitter.com/franckpachot
Updated 23-JULY-2019
There is an Oracle Support note about this: LogMiner — Is It Supported To Use LogMiner To Replicate Data Out Of Materialized View To An Ordinary Table? (Doc ID 1619523.1). The question is surprising: why mine the redo of the materialized view rather than the original tables? It looks like a double change capture. I don't think it is allowed to reproduce a MOS note that is accessible only with paid support, but basically, it says that the LogMiner interface (the v$logmnr_contents view) is provided for ad-hoc queries and that using it for logical replication is not supported.
Updated 23-JANUARY-2023
- The LogMiner option has been implemented in Debezium: Debezium Connector for Oracle (Debezium Documentation)
- There's also an open source project to mine the Oracle redo logs: Voodoo (ora-600.pl)
- The cloud providers have tools to migrate from Oracle, like AWS DMS for Amazon.