
Reducing redo generation
This section shows you how to reduce redo generation in both the LOGGING and NOLOGGING modes. The discussion is divided into two parts: in the first, you'll learn what to do when LOGGING is in effect; in the second, what to do when NOLOGGING is enabled.
Tips when LOGGING is in effect (not using NOLOGGING)
This section will cover some interesting tips to reduce redo generation without the need to use NOLOGGING.
As I mentioned earlier in the Redo generation and recoverability section in Chapter 1, Understanding the Basics of Backup and Recovery, user-managed backups can generate more redo. The best way to eliminate this problem is to use RMAN. RMAN does not need to write entire blocks to redo because it knows when a block is being copied. If you need to use the user-managed backup technique, then you can follow these steps to reduce redo generation:
- Do not back up all the tablespaces at once (using the ALTER DATABASE BEGIN BACKUP command). Doing so puts every tablespace into BACKUP mode for longer than it really needs to be, and therefore generates redo for longer. Instead, back up one tablespace at a time using the ALTER TABLESPACE <tablespace_name> BEGIN/END BACKUP commands, as sketched in the example after this list.
- Generate automatic backups on the busy tablespaces during a time when they are least busy in terms of DML.
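The following is a minimal sketch of the one-tablespace-at-a-time approach. The tablespace name (users) and the datafile path are placeholders; substitute your own:

SQL> ALTER TABLESPACE users BEGIN BACKUP;
SQL> HOST cp /u01/oradata/orcl/users01.dbf /backup/users01.dbf
SQL> ALTER TABLESPACE users END BACKUP;

A tablespace generates the extra redo only while it sits between BEGIN BACKUP and END BACKUP, so keeping that window short and per-tablespace limits the overhead.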
I use the term bulk insert in this section to mean loading a volume of data that is large relative to the data already in the table. To reduce or eliminate the amount of redo generated by a bulk data load, you first need to disable the indexes before the load (when making a direct load to a table that has indexes, the indexes also produce redo), and then rebuild them afterwards as follows:
SQL> ALTER INDEX index_name UNUSABLE;   -- do this for every index
SQL> INSERT /*+ APPEND */ INTO table_name SELECT ...;
SQL> ALTER INDEX index_name REBUILD;
Tip
Please ensure that the initialization parameter skip_unusable_indexes is set to TRUE before making an index unusable. If it is set to FALSE and a user tries to access data from a table with an unusable index, an error will be returned to the user session. It is also important to know that prior to Oracle 10g, skip_unusable_indexes needed to be set at the session level.
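A minimal sketch of checking and setting the parameter (it can be changed at the session or instance level):

SQL> SHOW PARAMETER skip_unusable_indexes

SQL> ALTER SESSION SET skip_unusable_indexes = TRUE;
SQL> ALTER SYSTEM SET skip_unusable_indexes = TRUE;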
To reduce redo generation in a bulk delete in the LOGGING mode, you should:
- Create a new table with the same structure as the table you want to bulk delete from, containing only the rows you want to keep, as in the following example:

SQL> CREATE TABLE new_table
  2  AS SELECT *
  3  FROM test1
  4  WHERE ... ;
- Create indexes on the new table.
- Create constraints, grants, and so on.
- Drop the original table.
- Rename the new table to the original name, as sketched below.
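A minimal sketch of the final two steps, using the test1 table from the example above (this assumes no other sessions are using the table at that moment):

SQL> DROP TABLE test1;
SQL> RENAME new_table TO test1;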
If the data that remains is small, or if there are a lot of dependencies on the table in the form of views, procedures, functions, and so on, then the following steps can be used after step 1 instead (see the sketch after this list):
- Truncate the original table, thus deleting all its data.
- Disable all constraints on the original table.
- Insert the data from the new table back into the original table. For example:

SQL> INSERT /*+ APPEND */ INTO test1
  2  SELECT *
  3  FROM new_table;
- Commit your changes.
- Enable the constraints on the original table.
- Drop the new table that you created in step 1.
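A sketch of the constraint handling around the reload; the constraint name is a placeholder:

SQL> ALTER TABLE test1 DISABLE CONSTRAINT constraint_name;
-- ... reload the data as shown above ...
SQL> ALTER TABLE test1 ENABLE CONSTRAINT constraint_name;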
Use the method described in this section if the indexes are going to be affected by a bulk update, because a massive update on indexes is more expensive than rebuilding them. If a small portion of the data is updated, then use this first approach:
- Disable all constraints.
- Make all indexes associated with the columns to be updated UNUSABLE. For example:

SQL> ALTER INDEX index_name UNUSABLE;
- Run the update on the table.
- Commit the update.
- Rebuild all indexes that you made unusable in step 2. For example:

SQL> ALTER INDEX index_name REBUILD;
- Enable all constraints you disabled in step 1.
If, on the other hand, the update changes most or all of the rows, then follow this second approach:
- Create a new table to be used as a holding table, modifying the amount in the value column at the same time:

SQL> CREATE TABLE new_table AS
  2  SELECT (value*1.10) value, ... FROM goods;
- Create on the new table all the same indexes that exist on the original table. For example:

SQL> CREATE INDEX idx_test3 ON test3 (owner);
- Create all grants, constraints, and so on, on the new table.
- Drop the original table.
- Finally, rename the new table to the original name.
Table and index partitioning are very useful in reducing redo generation because partitions divide an object into smaller, more manageable units. You can use partitioning techniques with a table if you know which partitions the new data will be inserted into, deleted from, or updated in. Redo generation is reduced because only the affected partitions need to be built or rebuilt, not the entire object. (Please note that global indexes are not partitioned along with the table and thus always require a full rebuild.)
The following are some examples showing how to work with indexes on a partition-wise basis. First, here is how to mark an index unusable:
- Without partitioning:

ALTER INDEX index_name UNUSABLE;
- With partitioning:

ALTER INDEX index_name MODIFY PARTITION partition_name UNUSABLE;
Next is how to rebuild an index:
- Without partitioning:

ALTER INDEX index_name REBUILD;
- With partitioning:

ALTER INDEX index_name REBUILD PARTITION partition_name;
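Putting these together, here is a hedged sketch of a partition-wise load; the table sales, its local index sales_idx, the partition p2012q1, and the staging table are all hypothetical names:

-- Mark only the affected index partition unusable
ALTER INDEX sales_idx MODIFY PARTITION p2012q1 UNUSABLE;

-- Direct-path load into just that partition
INSERT /*+ APPEND */ INTO sales PARTITION (p2012q1)
SELECT * FROM staging_sales;
COMMIT;

-- Rebuild only the one index partition, not the entire index
ALTER INDEX sales_idx REBUILD PARTITION p2012q1;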
You can also use the ALTER TABLE ... EXCHANGE PARTITION command to swap a work table with a partition, thus almost instantly making the data in that table a part of the partitioned table. The following is an example of how the unpartitioned approach works:
SQL> INSERT /*+ APPEND */ INTO current_table
  2  SELECT *
  3  FROM new_table;
In a partitioning scenario, you can simply exchange the work table for an empty partition in the target table. Here's how it works:

SQL> ALTER TABLE current_table
  2  EXCHANGE PARTITION partition_name
  3  WITH TABLE new_table;
Partitioning is very useful in archiving historic data. The table that contains historic data is created with a range partition on a date field. When the data becomes old enough to remove, the partition gets dropped. This feature is so important that Oracle created a new type of range partition in Oracle 11g to handle this situation. The new type is called the interval partition. Here is an example that works using the older partition types in Oracle 8i to 10g:
SQL> CREATE TABLE hist_data(
  2    sample_date date,
  3    ....)
  4  PARTITION BY RANGE( sample_date)
  5  ( PARTITION data201203 VALUES LESS THAN (to_date('04/01/2012','mm/dd/yyyy'))
  6      TABLESPACE ts_201203,
  7    PARTITION data201204 VALUES LESS THAN (to_date('05/01/2012','mm/dd/yyyy'))
  8      TABLESPACE ts_201204,
  9    ....
 10  );
A year down the line, when we want to delete all the data before April 2012, all we need to do is run the ALTER TABLE hist_data DROP PARTITION data201203 command. This is much more efficient and produces far less redo than executing the command DELETE FROM hist_data WHERE sample_date < to_date('04/01/2012', 'mm/dd/yyyy').
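For comparison, here is a sketch of the same table using the Oracle 11g interval partitioning mentioned above; Oracle then creates the monthly partitions automatically as data arrives (the per-partition tablespace clauses are omitted for brevity):

SQL> CREATE TABLE hist_data(
  2    sample_date date,
  3    ....)
  4  PARTITION BY RANGE (sample_date)
  5  INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
  6  ( PARTITION data201203
  7      VALUES LESS THAN (to_date('04/01/2012','mm/dd/yyyy')) );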
Developers also have a role to play, or at least they can choose to help. The points discussed in this section can help developers reduce the redo generation on the database. Here are some tips:
- Run DML in as few SQL statements as you can (in other words, as simply as possible). This will reduce the generation of undo and block header updates, and therefore reduce redo generation. Consider the problem of inserting a large amount of data. First create a test table:

SQL> CREATE TABLE test4
  2  AS SELECT owner, object_name, object_type
  3  FROM dba_objects;
- Now think about inserting a large amount of data into that table, and about the difference between using an INSERT statement and a PL/SQL block. The following is the approach of using an INSERT statement, which is more efficient:

SQL> set autotrace on statistics
SQL> INSERT INTO test4
  2  SELECT owner, object_name, object_type
  3  FROM dba_objects;

88019 rows created.

Statistics
------------------------------------------------------
    4660736  redo size
      88019  rows processed
Then execute a PL/SQL block to insert the same data as before:

SQL> connect /
Connected.
SQL> DECLARE
       CURSOR cur_c1 IS
         SELECT owner, object_name, object_type
         FROM dba_objects;
     BEGIN
       FOR rec_c1 IN cur_c1 LOOP
         INSERT INTO test4
         VALUES (rec_c1.owner, rec_c1.object_name, rec_c1.object_type);
       END LOOP;
       COMMIT;
     END;
     /

PL/SQL procedure successfully completed.

Then let's check how much redo was generated during this session:

SQL> SELECT a.name, b.value
  2  FROM v$statname a, v$mystat b
  3  WHERE a.statistic# = b.statistic#
  4  AND a.name = 'redo size';

NAME                                VALUE
------------------------------ ----------
redo size                        26776640

The PL/SQL approach generated 26,776,640 bytes of redo, whereas the INSERT statement generated a mere 4,660,736 bytes. Simpler is better: the plain INSERT statement generated far less redo than even this simple PL/SQL block.
- Do not commit more than you need. By issuing the COMMIT command, you force Oracle to do internal updates that produce redo. I ran the PL/SQL code from tip 1 with a COMMIT after each INSERT (instead of a single COMMIT at the end of the loop), and the result was even worse than before: the amount of redo generated increased to 51,917,676 bytes. You can see that excessive committing generates far more redo. By reducing unnecessary commits, you will also reduce the strain on the LGWR process.
- Set sequences to cache correctly. This is important if your system generates a lot of sequence numbers using Oracle sequences. The database keeps track of the next sequence number in the SGA, but it also keeps the starting value of the next set of sequence numbers in the data dictionary, according to the sequence cache setting. This starting value is needed in case the database crashes. Each time a sequence's nextval is acquired, the value in the SGA is updated. When the value in the SGA reaches the one in the data dictionary, the data dictionary is updated, producing redo. If the sequence cache is small, the data dictionary will be updated more often. This is illustrated by the following example. I created three identical tables, test_seq_2, test_seq_20, and test_seq_1000, each with a single number column. Then I inserted rows into test_seq_2 using seq2, into test_seq_20 using seq20, and into test_seq_1000 using seq1000 (the sequence cache sizes giving the tables their names). Each time, I checked how much redo was generated from inserting the same number of rows (taken from dba_objects). Here is the code I executed:

SQL> CREATE SEQUENCE seq2 CACHE 2;
SQL> CREATE SEQUENCE seq20 CACHE 20;
SQL> CREATE SEQUENCE seq1000 CACHE 1000;

SQL> CREATE TABLE test_seq_2 (a number);
SQL> CREATE TABLE test_seq_20 (a number);
SQL> CREATE TABLE test_seq_1000 (a number);

SQL> INSERT INTO test_seq_2 SELECT seq2.nextval FROM dba_objects;
88025 rows created.

SQL> INSERT INTO test_seq_20 SELECT seq20.nextval FROM dba_objects;
88025 rows created.

SQL> INSERT INTO test_seq_1000 SELECT seq1000.nextval FROM dba_objects;
88025 rows created.
The following table shows the relation between the redo generated and the sequence cache size:
Set the cache to a higher value if the application accesses the sequence a lot. It is wrong to believe that setting the cache to 1000 means the SGA will store 1000 numbers for the sequence; only one number is stored per sequence, so do not worry about setting the cache as high as you need.
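For instance, to raise the cache on an existing sequence (seq2 from the example above):

SQL> ALTER SEQUENCE seq2 CACHE 1000;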
Managing the amount of redo generated by a database requires a partnership between the DBA and the developers. The more you can work with your developers to implement the tips discussed here, and educate them about their value, the more you will reduce redo, making your job easier and your database more efficient.
Tips when NOLOGGING is in effect
Now that you have seen how to achieve a reduction in redo generation without the use of NOLOGGING, let's take a closer look at how to do the same using the NOLOGGING option.
As I mentioned earlier in the Tips when LOGGING is in effect section, table and index partitioning are very useful in reducing redo generation. In this section, I would like to talk about interval partitioning and the NOLOGGING mode.
If you want to make use of the NOLOGGING mode with interval partitioning, it is very important to check whether the tablespace where the partitions will be created is in the NOLOGGING mode. If the target tablespace is not in the NOLOGGING mode, then every time Oracle creates a new partition, that partition will be created in the LOGGING mode, regardless of whether the table itself is in the NOLOGGING mode.
Here I will show you how an interval partition behaves and how the table will automatically create new partitions when required. For this scenario, the tablespace (named EXAMPLE) where the table and the partitions will be created is in the NOLOGGING mode.
First, let's ensure that our tablespace EXAMPLE is in the NOLOGGING mode:

SQL> SELECT tablespace_name, logging
  2  FROM dba_tablespaces;

TABLESPACE_NAME                LOGGING
------------------------------ ---------
SYSTEM                         LOGGING
SYSAUX                         LOGGING
TEMP                           NOLOGGING
EXAMPLE                        NOLOGGING
Next, create the test_auto_intpart table with four initial partitions:

SQL> CREATE TABLE test_auto_intpart
  2    (id number, txt varchar2(4000), col_date date)
  3  PARTITION BY RANGE (col_date)
  4  INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'))
  5  ( PARTITION ap2008 VALUES LESS THAN (TO_DATE('1-1-2009', 'DD-MM-YYYY')),
  6    PARTITION ap2009 VALUES LESS THAN (TO_DATE('1-1-2010', 'DD-MM-YYYY')),
  7    PARTITION ap2010 VALUES LESS THAN (TO_DATE('1-1-2011', 'DD-MM-YYYY')),
  8    PARTITION ap2011 VALUES LESS THAN (TO_DATE('1-1-2012', 'DD-MM-YYYY')));

Table created.
Check that all initial partitions were created using the NOLOGGING option:

SQL> SELECT table_name, partition_name, logging, tablespace_name
  2  FROM user_tab_partitions;

TABLE_NAME         PARTITION_NAME LOGGING TABLESPACE_NAME
------------------ -------------- ------- ---------------
TEST_AUTO_INTPART  AP2008         NO      EXAMPLE
TEST_AUTO_INTPART  AP2009         NO      EXAMPLE
TEST_AUTO_INTPART  AP2010         NO      EXAMPLE
TEST_AUTO_INTPART  AP2011         NO      EXAMPLE
As you can see, all four partitions are stored in the tablespace EXAMPLE and are in the NOLOGGING mode. Now insert some data in the initial partitions using the INSERT /*+ APPEND */ command:

SQL> set autotrace on statistics
SQL> INSERT /*+ APPEND */ INTO test_auto_intpart
  2  SELECT OBJ#*LINE, SOURCE, sysdate-365*mod(rownum,4)
  3  FROM sys.source$;

Statistics
---------------------------------------------------------
      59020  redo size
      26988  rows processed

SQL> COMMIT;
You can see that NOLOGGING was respected and the command generated only 59,020 bytes of redo.
Next, insert more data as in the following example. I've written the INSERT statement to force Oracle to automatically create four new partitions in the target table:

SQL> INSERT /*+ APPEND */ INTO test_auto_intpart
  2  SELECT OBJ#*LINE, SOURCE, sysdate+365*mod(rownum,4)
  3  FROM sys.source$;

Statistics
---------------------------------------------------------
      31908  redo size
      26988  rows processed

SQL> COMMIT;

SQL> SELECT table_name, partition_name, logging
  2  FROM user_tab_partitions;

TABLE_NAME           PARTITION_NAME    LOGGING
-------------------- ----------------- -------
TEST_AUTO_INTPART    SYS_P201          NO
TEST_AUTO_INTPART    SYS_P202          NO
TEST_AUTO_INTPART    SYS_P203          NO
TEST_AUTO_INTPART    SYS_P204          NO
TEST_AUTO_INTPART    AP2008            NO
TEST_AUTO_INTPART    AP2009            NO
TEST_AUTO_INTPART    AP2010            NO
TEST_AUTO_INTPART    AP2011            NO
The INSERT command really does work as expected: Oracle created four new partitions automatically, and these partitions were created in the NOLOGGING mode because the tablespace EXAMPLE is in the NOLOGGING mode.
Note
If your tablespace is in the LOGGING mode, the four new partitions would be created in the LOGGING mode, and INSERT /*+ APPEND */ would generate a normal amount of redo. If you later decided to alter the new partitions to NOLOGGING, you could use the following statement: ALTER TABLE test_auto_intpart NOLOGGING;
When using direct path inserts, the database inserts the data without generating redo or undo for it. Instead, Oracle logs a small number of block range invalidation redo records, and periodically updates the control file with information about the most recent direct writes.
Since the release of Oracle Database 11.2.0.2, you can significantly improve the performance of a direct path insert with NOLOGGING by disabling the periodic update of the control file. You can do so by setting the initialization parameter DB_UNRECOVERABLE_SCN_TRACKING to FALSE. However, the resulting benefit comes at a price: if you perform a direct path insert with NOLOGGING and the control file is not updated, you will no longer be able to accurately determine whether any datafiles are currently unrecoverable.
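The parameter can be changed at either the session or the system level, for example:

SQL> ALTER SESSION SET db_unrecoverable_scn_tracking = FALSE;
SQL> ALTER SYSTEM SET db_unrecoverable_scn_tracking = FALSE;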
To use direct path insert, use the /*+ APPEND */ hint as follows:

SQL> INSERT /*+ APPEND */ INTO test1
  2  SELECT *
  3  FROM dba_objects;
When direct path insert is used, Oracle does the following in order to bypass the buffer cache:
- Formats the data to be inserted as Oracle blocks.
- Inserts the blocks above the high water mark (HWM). When the commit takes place, the HWM is moved to include the newly placed blocks.
It is clear that direct load is useful for bulk inserts. However, using it to insert a few hundred records at a time can have a bad effect on space and performance, because each direct load allocates new blocks above the HWM instead of reusing free space below it.
The statement INSERT /*+ APPEND */ INTO <table_name> SELECT ... FROM will use the NOLOGGING option if available and will not produce redo. However, since the introduction of 11.2 there is a new hint, APPEND_VALUES, designed to be used with bulk (array) inserts.
For example:
SQL> DROP TABLE t;
Table dropped.

SQL> CREATE TABLE t ( x char(2000) ) nologging;
Table created.

SQL> CONNECT /
Connected.
SQL> DECLARE
  2    type array is table of char(2000) index by binary_integer;
  3    l_data array;
  4  BEGIN
  5    for i in 1 .. 1000
  6    loop
  7      l_data(i) := 'x';
  8    end loop;
  9    forall i in 1 .. l_data.count
 10      INSERT INTO t (x) VALUES (l_data(i));
 11  END;
 12  /
PL/SQL procedure successfully completed.

SQL> SELECT a.name, b.value
  2  FROM v$statname a, v$mystat b
  3  WHERE a.statistic# = b.statistic#
  4  AND a.name = 'redo size';

NAME                                VALUE
------------------------------ ----------
redo size                         2253664

SQL> connect /
Connected.
SQL> DECLARE
  2    type array is table of char(2000) index by binary_integer;
  3    l_data array;
  4  BEGIN
  5    for i in 1 .. 1000
  6    loop
  7      l_data(i) := 'x';
  8    end loop;
  9    forall i in 1 .. l_data.count
 10      INSERT /*+ APPEND_VALUES */ INTO t (x) VALUES (l_data(i));
 11  END;
 12  /
PL/SQL procedure successfully completed.

SQL> SELECT a.name, b.value
  2  FROM v$statname a, v$mystat b
  3  WHERE a.statistic# = b.statistic#
  4  AND a.name = 'redo size';

NAME                                VALUE
------------------------------ ----------
redo size                            7408
It is very important to understand how direct path inserts affect redo generation. The operation of a direct path insert is affected by the following factors:
- The archiving mode (ARCHIVELOG/NOARCHIVELOG) of the database
- The use of the /*+ APPEND */ hint
- The LOGGING mode of the table
- The FORCE LOGGING mode of the database
If the /*+ APPEND */ hint is not used, then the INSERT command will generate the normal amount of redo regardless of the other factors. When the hint is used, the following table shows how redo generation is affected by the mode of the database (ARCHIVELOG or NOARCHIVELOG) and whether the table is in the LOGGING or NOLOGGING mode when the direct path is used. It does not consider index and data dictionary changes, as these are a separate matter.

Database mode   Table mode   Redo generated by the direct path insert
-------------   ----------   ----------------------------------------
ARCHIVELOG      LOGGING      Yes, full redo for the data
ARCHIVELOG      NOLOGGING    No, only minimal redo
NOARCHIVELOG    LOGGING      No, only minimal redo
NOARCHIVELOG    NOLOGGING    No, only minimal redo

If the database is in the FORCE LOGGING mode, redo is always generated for the data, regardless of the table's setting.
It is very easy to verify whether our INSERT /*+ APPEND */ statement really did insert the data above the HWM. All you need to do is run a normal SELECT against the just-inserted data, without issuing a COMMIT first. If the query works, the data was not appended. If the query returns an ORA-12838 error, the data was appended: within the same transaction, you cannot read from a table after a direct load into it until you issue a COMMIT.
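A quick sketch of this check, using the test1 table from the earlier example:

SQL> INSERT /*+ APPEND */ INTO test1
  2  SELECT * FROM dba_objects;

SQL> SELECT COUNT(*) FROM test1;
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

SQL> COMMIT;
SQL> SELECT COUNT(*) FROM test1;   -- succeeds after the commit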
To further reduce redo generation when doing a bulk insert, we will make use of direct path loading. Direct path operations skip undo generation and maintain indexes in bulk, and hence generate less redo. Follow these steps (a consolidated sketch appears after the list):
- Set the table in the NOLOGGING mode:

SQL> ALTER TABLE table_name NOLOGGING;
- Make all table indexes unusable:

SQL> ALTER INDEX index_name UNUSABLE;
- Insert the bulk data using the /*+ APPEND */ hint:

SQL> INSERT /*+ APPEND */ INTO table_name SELECT ...
- Rebuild all indexes that were set as UNUSABLE, using the NOLOGGING option:

SQL> ALTER INDEX index_name REBUILD NOLOGGING;
- Set your table back to the LOGGING mode:

SQL> ALTER TABLE table_name LOGGING;
- Set your indexes back to the LOGGING mode:

SQL> ALTER INDEX index_name LOGGING;
- Back up the data.
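Putting the steps together, here is a hedged end-to-end sketch on a hypothetical sales table with a single index sales_idx; the staging table in the SELECT is also a placeholder:

SQL> ALTER TABLE sales NOLOGGING;
SQL> ALTER INDEX sales_idx UNUSABLE;
SQL> INSERT /*+ APPEND */ INTO sales
  2  SELECT * FROM staging_sales;
SQL> COMMIT;
SQL> ALTER INDEX sales_idx REBUILD NOLOGGING;
SQL> ALTER TABLE sales LOGGING;
SQL> ALTER INDEX sales_idx LOGGING;

Take the backup immediately afterwards: the loaded data never reached the redo stream, so it cannot be recovered from archived logs.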
There is no direct way (at the time of writing this document) of reducing redo generation for bulk update and delete.
Use the following technique to reduce redo generation when doing a bulk delete:
- Create a new table containing all the records you want to keep from the original table, using the NOLOGGING option:

SQL> CREATE TABLE new_table NOLOGGING
  2  AS SELECT *
  3  FROM original_table WHERE ... ;
- Create the indexes on the new table with the NOLOGGING option.
- Create all constraints, grants, and so on, as on the original table.
- Drop the original table.
- Rename the new table to the original name.
- Place the table and all indexes in the LOGGING mode.
- Back up the data.
If the amount of data that will be left is very small compared to the original number of rows, or there are many dependencies on the table (views, procedures, functions, and so on), then the following steps can be used after step 2:
- Disable the constraints on the original table.
- Truncate the original table.
- Make the indexes related to the original table UNUSABLE.
- Place the original table in the NOLOGGING mode:

SQL> ALTER TABLE original_table NOLOGGING;
- Do a direct path insert of the data in new_table back into original_table:

SQL> INSERT /*+ APPEND */ INTO original_table
  2  SELECT * FROM new_table;
- Do a commit.
- Rebuild all indexes using the NOLOGGING option.
- Enable all constraints that were disabled in step 3.
- Place the original table and all indexes in the LOGGING mode.
- Back up the data.
- Drop the holding table.
To make a bulk update, follow the same steps you used for the bulk delete in the previous section, but integrate the update within the SELECT statement. Let's say that you want to update the value column in the goods table by increasing it by 10 percent. The steps to achieve this goal are:
- Create a new table to be used as a holding table, modifying the amount in the value column at the same time and specifying the NOLOGGING option:

SQL> CREATE TABLE new_table NOLOGGING AS
  2  SELECT (value*1.10) value, ... FROM goods;
- Create all indexes on the holding table as they exist on the original table, specifying NOLOGGING.
- Create all constraints, grants, and so on.
- Drop the original table.
- Rename the holding table to the original name.
- Alter the table and all related indexes to the LOGGING mode.
- Back up the data.