Oracle Materialized View Refresh Group atomicity: How to prove transactional consistency with LogMiner
I recently had to prove to myself, and then to Oracle Support, that a Materialized View Refresh Group was not refreshed atomically as it should be according to the documentation:
Add materialized views to a refresh group to ensure transactional consistency between the related materialized views in the refresh group. When a refresh group is refreshed, all materialized views that are added to a particular refresh group are refreshed at the same time.
AskTOM
The first idea was suggested by the user who encountered the issue. He had read a similar question on AskTOM, which is a very good idea:
There, Connor quickly shows that there are no intermediate commits by enabling SQL_TRACE and looking at the XCTEND lines from the trace dump.
SQL_TRACE
So I did the same in my case, and here is an extract of the interesting lines, grepping just the inserts and the XCTEND markers:
Clearly, the first 4 tables were done in the same transaction. But we can see some commits between the 4 others. This seemed to confirm what the user had observed: a query on those tables shows data from different points in time. And then I opened an SR to file a bug.
However, the support engineer rejected this proof because the XCTEND lines can come from recursive transactions. And he is totally right. With SQL_TRACE you can prove that a refresh is atomic, but you cannot prove that it is not.
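For reference, here is a minimal sketch of how such a trace can be captured and located, assuming you run the refresh from your own session (the tracefile_identifier value is just an arbitrary label I chose for illustration):
-- enable SQL_TRACE around the refresh, then locate the trace file
alter session set tracefile_identifier='mview_refresh';
alter session set sql_trace=true;
exec dbms_refresh.refresh('LSA.CCDB_VIEWS');
alter session set sql_trace=false;
-- the file returned here is the one to grep for the insert statements and the XCTEND markers
select value from v$diag_info where name='Default Trace File';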
LogMiner
When it comes to transactions, LogMiner is the right tool. It is incredibly powerful: all persistent changes on your database go to the redo stream, and LogMiner can read the most interesting information out of it. And it is incredibly easy to use, at least until the latest releases, where Oracle removes many replication features that may overlap with GoldenGate, a product sold separately.
Here I’m refreshing the materialized view group LSA.CCDB_VIEWS, which contains 8 MVIEWs. I am tracking the SCN before (scn1) and after (scn2).
set numwidth 16 linesize 200 pagesize 1000
column scn1 new_value scn1
column scn2 new_value scn2
column sid format 999999 new_value sid
column seg_type format 99
column seg_name format a30
column seg_owner format a12
column operation format a12
set linesize 1000
alter database add supplemental log data;
select current_timestamp,current_scn scn1,sys_context('userenv','sid') sid from v$database;
exec dbms_refresh.refresh('LSA.CCDB_VIEWS');
commit;
select current_timestamp,current_scn scn2,sys_context('userenv','sid') sid from v$database;
alter database drop supplemental log data;
Note that, as I don’t have supplemental logging enabled here, I enable it just for this test. It is not a bad idea to keep it enabled all the time, as long as the additional redo volume is acceptable.
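If you are not sure whether minimal supplemental logging is already enabled, v$database shows it:
-- 'YES' means minimal supplemental logging is already enabled
select supplemental_log_data_min from v$database;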
Then I start LogMiner for this SCN range. I use CONTINUOUS_MINE as I am on 18c here (it has been deprecated, desupported and even removed in 19c 😡) and I set COMMITTED_DATA_ONLY so that I can query the COMMIT SCN (in this mode, LogMiner looks ahead to find the end of each transaction).
exec dbms_logmnr.start_logmnr(startscn=>&scn1,endscn=>&scn2 -
 ,options=>DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG -
 +DBMS_LOGMNR.CONTINUOUS_MINE -
 +DBMS_LOGMNR.COMMITTED_DATA_ONLY -
 );
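On 19c and later, where CONTINUOUS_MINE is no longer available, the relevant log files have to be added explicitly before starting LogMiner. Here is a minimal sketch, assuming a single local archive destination and that the SCN range is entirely covered by archived logs visible in v$archived_log (online redo logs would have to be added the same way otherwise):
-- 19c+ sketch: add the archived logs covering the SCN range, then start LogMiner on it
begin
  for f in (
    select name from v$archived_log
    where first_change# <= &scn2 and next_change# > &scn1
      and name is not null
  ) loop
    dbms_logmnr.add_logfile(logfilename=>f.name, options=>dbms_logmnr.addfile);
  end loop;
  dbms_logmnr.start_logmnr(
    startscn=>&scn1, endscn=>&scn2,
    options=>dbms_logmnr.dict_from_online_catalog + dbms_logmnr.committed_data_only
  );
end;
/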
Now, I aggregate all operations on the Refresh Group Materialized Views. I display the COMMIT_SCN to show that the first 4 tables were visible at the same time, but the 4 other tables were refreshed in different transactions. The transaction XID (which is the same as the parent PXID, proving that there are no recursive transactions by the way) confirms this.
select
operation,seg_name,commit_scn,count(*),min(scn),max(scn),pxid,xid
from v$logmnr_contents
where session#=sys_context('userenv','sid') -- my session
and (seg_owner,seg_name) in ( -- my refresh group mviews
select owner,name
from dba_rchild
where refgroup in (
select refgroup
from dba_rgroup
where owner='LSA' and name='CCDB_VIEWS'
)
)
group by operation,seg_name,commit_scn,xid,pxid
order by commit_scn,max(scn)
/
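Once finished, the LogMiner session should be ended so that its resources are released:
exec dbms_logmnr.end_logmnr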
Now that the issue is clearly stated and reproducible in this environment, the hard part will be to fix it. But the goal of this post was to show how LogMiner can help to troubleshoot this kind of issue. And how easy it is: one statement to start it, one view to query it.