Truncate AWR tables (unsupported)

When WRH$ tables grow so large that they cannot be purged

Franck Pachot
Sep 3, 2019

This is not supported. Please look at the My Oracle Support notes for a supported way to purge AWR when it has grown too big, such as re-creating AWR (which requires starting the database in restricted mode) or purging with the normal procedure (which can be long, as it runs a delete). And do not copy-paste my statements: this is just an example.

When some tables grow too large, the purge job does not work correctly (because some operations, like the partition maintenance, are done at the end of it). Then SYSAUX keeps growing. And worse: the next upgrade may take hours if it has to change something on those AWR tables.
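
To see how big AWR currently is inside SYSAUX, a quick look at the SM/AWR occupant is usually enough. This is just the check I start with:

select occupant_name, schema_name, round(space_usage_kbytes/1024) size_mb
from v$sysaux_occupants
order by space_usage_kbytes desc;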

Upgrade time

Here is an example of an upgrade from 11g to 19c which took hours. This is how I open the upgrade logs with “less” at the top-10 longest statement executions:

eval $( grep -n "^Elapsed:" catupgrd*.log | sort -rk2 | awk -F: 'NR<=10{print "less +"$2" "$1";"}' )

Or this one to display them in order:

tac catupgrd*.log | awk '/^Elapsed:/{print x;x="";e=$2}e>"00:01:00"{x=e" "NR" "$0"\n"x}' | sort -r | more

Many hours were spent on updating AWR tables: a new index on WRH$_EVENT_HISTOGRAM, a new column on WRH$_SQLSTAT to count obsoletes, In-Memory columns in WRH$_SEG_STAT, and many new indexes. If AWR has become too large, you should do something before the upgrade. For system.logmnrc_gtlo, that will probably be another blog post.

Check AWR size

Here is how to check the size of AWR before the upgrade:

sqlplus / as sysdba @?/rdbms/admin/awrinfo

If the query never returns, then maybe it is too large…
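
In that case, a rough estimate can come directly from the segments, counting the WRH$ and WRI$ segments owned by SYS in SYSAUX (just a quick approximation, not the official report):

select round(sum(bytes)/1024/1024/1024,1) awr_gb, count(*) segments
from dba_segments
where owner='SYS' and tablespace_name='SYSAUX'
and (segment_name like 'WRH$%' or segment_name like 'WRI$%');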

Baselines

First I check that I didn’t explicitly create an AWR baseline to keep old snapshots:

select * from dba_hist_baseline where baseline_type<>'MOVING_WINDOW';

If there are some, then what I’ll do will lose some information that someone wanted to retain, so check before.
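
If one of those baselines is not needed anymore, it can be dropped with the supported procedure. The baseline name here is just a placeholder for what the query above returned (cascade=>false keeps the snapshots):

exec DBMS_WORKLOAD_REPOSITORY.DROP_BASELINE(baseline_name => 'MY_OLD_BASELINE', cascade => false);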

Create new partitions

For the partitioned ones, I can force the creation of a new partition so that I can, later, truncate only the old ones without losing the recent AWR snapshots:

alter session set "_swrf_test_action" = 72;

This is the workaround when the partitions were not created automatically.
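
To verify that the split happened, I can simply count the partitions of the WRH$ tables before and after (a quick check, nothing more):

select table_name, count(*) partitions, max(partition_name) last_partition
from dba_tab_partitions
where table_owner='SYS' and table_name like 'WRH$%'
group by table_name
order by table_name;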

Truncate all old partitions

After a while (run a few dbms_workload_repository.create_snapshot), I truncate those old partitions. Here is how I generate the truncate statements:

select 'alter table '||object_name||' truncate partition '||subobject_name||' update global indexes;'
from dba_objects where object_name like 'WRH$%' and object_type = 'TABLE PARTITION' and created<sysdate-1
;

I add CREATED<SYSDATE-1 because I don’t want to truncate the partitions that I have just split today, in order to keep some basic recent statistics just in case.
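
To actually run the generated statements, I spool them to a file from SQL*Plus and execute it. A minimal sketch (the file name is just an example):

set pagesize 0 linesize 300 feedback off heading off
spool /tmp/truncate_wrh_partitions.sql
select 'alter table '||object_name||' truncate partition '||subobject_name||' update global indexes;'
from dba_objects where object_name like 'WRH$%' and object_type = 'TABLE PARTITION' and created<sysdate-1;
spool off
@/tmp/truncate_wrh_partitions.sql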

But a few big tables remain that are not partitioned.

Look at large tables in SYS for WRH$

If I am not sure that the dictionary statistics are fresh, I gather them explicitly in order to see the recent number of rows (but this can take a long time):

exec DBMS_STATS.GATHER_DICTIONARY_STATS;

I’m interested in those larger than 1 million rows. For the ones that are partitioned, I have truncated large partitions only. But for the non-partitioned ones, I’ll truncate the whole table — and then lose everything.

select owner, table_name, dbms_xplan.format_number(num_rows) num_rows, object_type, partition_name,
       (select count(*) from dba_tab_partitions p
         where s.owner=p.table_owner and s.table_name=p.table_name) part
from dba_tab_statistics s
where owner in ('SYS','SYSTEM') and table_name like 'WRH$%' and num_rows>1e6
order by s.num_rows;

This shows the largest WRH$ tables. Use the same query without the ‘WRH$’ pattern in order to see everything that may cause problems at upgrade time.

The PART column is the number of partitions. Those with 0 are not partitioned, and for them a truncate will remove all data.

Truncate large non-partitioned ones

Some tables are not partitioned, and I truncate the largest ones (identified with the query above). I prefer to limit the truncation to those because:

  • Fresh snapshot information will be lost
  • It introduces inconsistency (snapshots with no data for those tables)
-- this is my example, you may have different tables
--
truncate table WRH$_TABLESPACE_SPACE_USAGE;
truncate table WRH$_EVENT_HISTOGRAM;
truncate table WRH$_MUTEX_SLEEP;
truncate table WRH$_ENQUEUE_STAT;
truncate table WRH$_SYSMETRIC_SUMMARY;
truncate table WRH$_BG_EVENT_SUMMARY;
truncate table WRH$_SYSMETRIC_HISTORY;
truncate table WRH$_SQL_BIND_METADATA;
truncate table WRH$_SQL_PLAN;

This is fast, but if you need to run this, better do it while no snapshot is being gathered. And do not rely on my list: choose the largest ones you have.

Note that truncating WRH$_SQL_PLAN will remove all old SQL plans. I rarely need to look at an old plan (it is usually better to tune the current one rather than looking at the past), but they may sometimes help to get back the plan, with its outline hints, that worked before. So, do not do that when you have performance instabilities. Or ensure that you have a SQL Tuning Set containing the critical queries.
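
If you want to keep the plans of critical statements before truncating WRH$_SQL_PLAN, one way is to load them from AWR into a SQL Tuning Set. This is only a sketch: the STS name, snapshot range and filter are examples to adapt to your case:

declare
 c dbms_sqltune.sqlset_cursor;
begin
 dbms_sqltune.create_sqlset(sqlset_name => 'STS_BEFORE_AWR_TRUNCATE');
 open c for select value(p) from table(dbms_sqltune.select_workload_repository(
   begin_snap => 177000, end_snap => 177189,
   basic_filter => 'parsing_schema_name not in (''SYS'',''SYSTEM'')')) p;
 dbms_sqltune.load_sqlset(sqlset_name => 'STS_BEFORE_AWR_TRUNCATE', populate_cursor => c);
end;
/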

Use regular purge

Now I want everything to be consistent. I determine the earliest snapshot I have that is fully consistent (gathered after my truncate table statements):

select dbid,min(snap_id) from WRH$_SQL_PLAN group by dbid;

I choose this table because it was the last one I truncated.

Then I run the supported purge procedure for those snapshots:

exec DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE(0,177189,28047622);

This brings me back to a consistent set of data. And this should not take long because I ensured that there are no more than one million rows in each table.
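
To check what is left, I look at the remaining snapshot range (just a verification query):

select dbid, min(snap_id) min_snap, max(snap_id) max_snap, min(begin_interval_time) oldest
from dba_hist_snapshot
group by dbid;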

I split again to start clean:

alter session set "_swrf_test_action" = 72;

Maybe reduce the retention

As I had AWR growing too much, I reduced the retention:

exec DBMS_WORKLOAD_REPOSITORY.MODIFY_BASELINE_WINDOW_SIZE(8);
exec DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention=>8*60*24);

I changed the baseline duration as it cannot be larger than the retention.
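
The current settings can be verified in DBA_HIST_WR_CONTROL, where the snapshot interval and retention are displayed as intervals:

select dbid, snap_interval, retention from dba_hist_wr_control;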

If you need larger retention, then maybe you should set up a Centralized AWR Warehouse.
