
Tuesday, October 30, 2018

OS Logfiles via SQL

Due to security restrictions it has become cumbersome to log in to a DB server, or even to log in as SYS to the DB.
I've had to adapt quickly and found the following views and queries useful for reading log files that I would normally have accessed from the OS:

To view the rman log for the last rman session:

select output
from v$rman_output
where session_recid = (select max(session_recid) from v$rman_status)
order by recid ;

The v$rman_output view is memory-based, so it will not persist through a database restart. The view provides the exact output from RMAN as it would appear on the screen or in a logfile.
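
If you need the output of an older session rather than the last one, you can first pick a session_recid from v$rman_status, something like this (rough sketch):

select session_recid, start_time, end_time, status, object_type
from v$rman_status
where row_type = 'SESSION'
order by start_time desc;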

To query the alert.log:

select * from
(select record_id,
        to_char(originating_timestamp,'DD.MM.YYYY HH24:MI:SS'),
        message_text
 from x$dbgalertext
 order by RECORD_ID desc)
where rownum <= 20 order by RECORD_ID asc;

This will fail with "ORA-00942: table or view does not exist" if you are not logged in as SYS, and you can't grant select access on the X$ table to another user either; the grant fails with "ORA-02030: can only select from fixed tables/views". My workaround was to create a view and then grant a select privilege on the view:

create view alert_log as select * from  X$DBGALERTEXT;
grant select on alert_log to meyerm;

Then as user meyerm:

select * from
(select record_id,
        to_char(originating_timestamp,'DD.MM.YYYY HH24:MI:SS'),
        message_text
 from sys.alert_log
 order by RECORD_ID desc)
where rownum <= 20 order by RECORD_ID asc;

Monday, October 8, 2018

Move datafile to another ASM diskgroup

From Oracle 12.1 you can execute SQL commands from within RMAN, which was great for this exercise because I did not have to jump between RMAN and SQL*Plus every few minutes.

The steps to move a datafile to another ASM diskgroup are basically to take the datafile offline, copy it to the new diskgroup (using RMAN commands), rename the datafile (or switch to copy), recover it and bring it online again.

RMAN> alter database datafile '+DG_DATA/ac3duim/datafile/ggatetbs.362.905353545' offline;

RMAN> copy datafile '+DG_DATA/ac3duim/datafile/ggatetbs.362.905353545' to '+DG_C2DATA';

...
output file name=+DG_C2DATA/ac3duim/datafile/ggatetbs.275.923218515 tag=TAG20160922T093514 RECID=1 STAMP=923218516
...

RMAN> alter database rename file '+DG_DATA/ac3duim/datafile/ggatetbs.362.905353545' to '+DG_C2DATA/ac3duim/datafile/ggatetbs.275.923218515';
OR
RMAN> switch datafile '+DG_DATA/ac3duim/datafile/ggatetbs.362.905353545' to copy;

RMAN> recover datafile '+DG_C2DATA/ac3duim/datafile/ggatetbs.275.923218515';

RMAN> alter database datafile '+DG_C2DATA/ac3duim/datafile/ggatetbs.275.923218515' online;

The old datafile can now be deleted (also via RMAN)


RMAN> delete datafilecopy '+DG_DATA/ac3duim/datafile/ggatetbs.362.905353545';
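
To confirm the datafile now lives in the new diskgroup, a quick check along these lines (which, from 12.1, can also be run straight from the RMAN prompt):

select file#, name, status from v$datafile where name like '+DG_C2DATA/%';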

Wednesday, October 3, 2018

Data Guard Failover surprise

Ran into an interesting scenario after reinstating the previous primary DB after an emergency failover.

A Data Guard failover is done during emergencies (an actual DR situation) and the old standby gets activated as a Primary database with RESETLOGS after an incomplete recovery. 
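
For reference, a broker failover boils down to a single DGMGRL command, issued while connected to the standby that must become the primary, something like:

DGMGRL> connect sys@TMSBY
DGMGRL> failover to TMSBY;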

Here's the alert.log from the new Primary database:

Tue Oct 02 16:07:34 2018
 Data Guard Broker: A failover will now be attempted
Tue Oct 02 16:07:34 2018
ALTER DATABASE FAILOVER TO TMSBY
...
Tue Oct 02 16:07:40 2018
ALTER DATABASE SWITCHOVER TO PRIMARY (TMSBY1)
RESETLOGS after incomplete recovery UNTIL CHANGE 14297961467966 time 10/02/2018 16:02:10
...
Tue Oct 02 16:07:40 2018
Setting recovery target incarnation to 3
Tue Oct 02 16:07:41 2018
Switchover: Complete - Database mounted as primary

A few hours later the old Primary DB (TMPRD) was fixed, started in MOUNT mode and reinstated via the Broker:

DGMGRL> show configuration
Configuration - tmdg
  Protection Mode: MaxPerformance
  Members:
  TMSBY  - Primary database
    TMPRD - Physical standby database (disabled)
      ORA-16661: the standby database needs to be reinstated
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS   (status updated 26 seconds ago)
DGMGRL> reinstate database TMPRD
Reinstating database "TMPRD", please wait...
Reinstatement of database "TMPRD" succeeded

Reinstatement completed successfully, but the new standby (TMPRD) was not applying logs. The alert.log confirmed that it was waiting for an archivelog that had only been created on the previous standby site.

Media Recovery Waiting for thread 2 sequence 358 branch(resetlogs_id) 987409798

But that archivelog had already been backed up and deleted via RMAN. A straightforward restore on TMSBY should have worked, but it failed with:

RMAN> restore archivelog sequence 358 thread 2;
Starting restore at 03-OCT-18 08:40
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=232 instance=TMSBY1 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 10/03/2018 08:40:15
RMAN-20242: specification does not match any archived log in the repository

Remember that the failover forced a RESETLOGS, so the incarnation of the DB changed. The trick is to restore from the previous incarnation:

RMAN> list incarnation;
List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       1       TMPRD   645377540        PARENT  1594143    31-AUG-18 10:37
2       2       TMPRD   645377540        PARENT  5368114    21-SEP-18 08:29
3       3       TMPRD   645377540        CURRENT 14297961467967 02-OCT-18 16:07

RMAN> restore archivelog sequence 358 thread 2 Incarnation 2;

Starting restore at 03-OCT-18 08:44
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=2 sequence=358
channel ORA_DISK_1: reading from backup piece +DG_FRA/TMSBY/BACKUPSET/2018_10_02/annnf0_TMSBY1_arch_0.337.988488123
channel ORA_DISK_1: piece handle=+DG_FRA/TMSBY/BACKUPSET/2018_10_02/annnf0_TMSBY1_arch_0.337.988488123 tag=TMSBY1_ARCH
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 03-OCT-18 08:45

Within a few minutes the restored archivelog was picked up by the FAL process, transferred and applied on the new standby (TMPRD). Alert.log contents:

RFS[3]: Opened log for thread 2 sequence 358 dbid 645377540 branch 987409798
...
Archived Log entry 1627 added for thread 2 sequence 358 rlc 987409798 ID 0x0 dest 2:
...
Media Recovery Log +DG_ARCHIVE/TMPRD/ARCHIVELOG/2018_10_03/thread_2_seq_358.49304.988533935

All in all I was impressed with the smooth failover and reinstatement via the broker, but even then, not everything can be automated and there's always some hiccup where the DBA has to roll up their sleeves and dig in to get things back to normal again.

Friday, September 21, 2018

Cloning a RAC DB on the same cluster

We are using OMF in ASM, so we will include the following 3 lines in the RMAN DUPLICATE TARGET command:
SET DB_FILE_NAME_CONVERT '+DG_DATA','+DG_DATA'
SET LOG_FILE_NAME_CONVERT '+DG_DATA','+DG_DATA'
SET CONTROL_FILES='+DG_DATA','+DG_DATA'

Do NOT use NOFILENAMECHECK when cloning to the same server; with any error in your script, RMAN might overwrite the target DB's datafiles!

We also need to SET CLUSTER_DATABASE='FALSE' in the RMAN DUPLICATE TARGET command, because I will clone the RAC to a single instance and then convert the single instance back to a RAC.

Prep includes creating a password file and an init.ora file with 3 lines:
db_name=TMTEST
db_unique_name=TMTEST
compatible=12.1.0.2.0

I added the 3rd line after getting "RMAN-06136: ORACLE error from auxiliary database: ORA-00201: control file version 12.1.0.2.0 incompatible with ORACLE version 12.0.0.0.0" on my first clone attempt.
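
The prep boils down to something like this (sketch only; the paths and password are placeholders):

orapwd file=$ORACLE_HOME/dbs/orapwTMTEST password=XX
cat > $ORACLE_HOME/dbs/initTMTEST.ora <<EOF
db_name=TMTEST
db_unique_name=TMTEST
compatible=12.1.0.2.0
EOF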

Set ORACLE_SID to the new DB (TMTEST) and startup nomount. When launching RMAN, connect to the TARGET via a TNS entry:
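
In other words, something like this (the init.ora is picked up from its default $ORACLE_HOME/dbs location):

export ORACLE_SID=TMTEST
sqlplus / as sysdba
SQL> startup nomount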

rman target sys/XX@DLIOWCC auxiliary sys/XX
RMAN> DUPLICATE TARGET DATABASE TO TMTEST FROM ACTIVE DATABASE
SPFILE
SET CONTROL_FILES='+DG_DATA','+DG_DATA'
SET DB_FILE_NAME_CONVERT '+DG_DATA','+DG_DATA'
SET LOG_FILE_NAME_CONVERT '+DG_DATA','+DG_DATA'
SET CLUSTER_DATABASE='FALSE';

10 minutes later I had a Cloned DB with ORACLE_SID=TMTEST

Some post cloning tasks in order to make it a RAC:

SQL> show parameter spfile
NAME    TYPE    VALUE
------- ------- ---------------------------------------------------
spfile  string  /u01/oracle/product/12.1.0/db_1/dbs/spfileTMTEST.ora
SQL> alter system set cluster_database=TRUE scope=spfile sid='*';
SQL> create pfile='initTMTESTcloned.ora' from spfile;
vi initTMTESTcloned.ora

Found a few entries I had to change from:
DLIOWCC2.instance_number=2
DLIOWCC1.instance_number=1
DLIOWCC2.thread=2
DLIOWCC1.thread=1
DLIOWCC1.undo_tablespace='UNDOTBS1'
DLIOWCC2.undo_tablespace='UNDOTBS2'
To
TMTEST2.instance_number=2
TMTEST1.instance_number=1
TMTEST2.thread=2
TMTEST1.thread=1
TMTEST1.undo_tablespace='UNDOTBS1'
TMTEST2.undo_tablespace='UNDOTBS2'

SQL> create spfile='+DG_DATA/TMTEST/spfileTMTEST.ora' from pfile='initTMTESTcloned.ora';
vi initTMTEST1.ora
add line spfile='+DG_DATA/TMTEST/spfileTMTEST.ora'
copy the init.ora and orapw file to node 2

Register the database and modify the configuration
srvctl add database -db TMTEST -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1 -spfile +DG_DATA/TMTEST/spfileTMTEST.ora
srvctl add instance -db TMTEST -i TMTEST1 -node node1za
srvctl add instance -db TMTEST -i TMTEST2 -node node2za
srvctl config database -db TMTEST

shutdown the instance (remember ORACLE_SID=TMTEST)
and start the DB from srvctl
srvctl start database -db TMTEST

Remember to change ORACLE_SID from TMTEST to TMTEST1 if you want to connect via sqlplus on the DB node node1za

Thursday, January 26, 2012

Resolving Gaps in Data Guard Apply Using Incremental RMAN Backup

I found quite a few blog entries on the web and of course the Oracle Documentation also helped. The reason I want to create a post is that I encountered 2 problems that I first had to overcome before the Physical Standby was in sync again. A short overview of the problem and the steps to overcome it:

On the standby, get the current SCN, which will be used in your RMAN backup script as a starting point.

SQL>select current_scn FROM V$DATABASE;

     CURRENT_SCN
----------------
11133157117269


On the primary, run the RMAN script with the current_scn value supplied by the standby:

run {
allocate channel c1 type disk format '/dump/abcprd/%U.rmb';
backup as compressed backupset skip readonly incremental from scn 11133157117269 database;
}

And this is where I got my first unexpected problem. The RMAN backup gave the following warning message:

RMAN-06755: WARNING: datafile 408: incremental-start SCN is too recent; using checkpoint SCN 9733080640801 instead

File 408 belongs to a read-only tablespace with a much older SCN. V$SESSION_LONGOPS showed that the backup would take longer than 24 hours to complete, so I immediately stopped it and reset the read-only tablespace's SCN, with the aim of shortening the duration of the RMAN backup.
SQL> alter tablespace SA_ORD_RO read write ;
SQL> alter tablespace SA_ORD_RO read only ;

I restarted the RMAN backup; it did not give the warning again, but it still took 16 hours to complete. All files created by the backup were copied to the standby server. I also created a new standby controlfile:
RMAN> backup current controlfile for standby format '/usr/users/oracle/ForStandbyCTRL.bck';

On the standby server I cataloged and then restored the controlfile:
RMAN> catalog start with '/dump/backup/';
RMAN> restore standby controlfile from '/dump/backup/ForStandbyCTRL.bck';


And the recover script:
RMAN> catalog start with '/dump/abcprd/';
RMAN> recover database noredo;

But applying the incremental backup (the recover) failed almost immediately with the error:

RMAN-06094: datafile 569 must be restored

It turns out that datafile 569 was created on the primary after the gap occurred, but before the RMAN incremental was run. So the controlfile was aware of the datafile, but the file was not on the standby server. I also needed to take a datafile backup of the newly created datafile. So back to the primary:
RMAN> backup datafile 569;
scp the file to the standby server and restore it there:
RMAN> restore datafile 569;

This time I could run RMAN> recover database noredo; again and it completed successfully.
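
A quick sanity check on the standby afterwards, to confirm that managed recovery is applying redo again, could look like this (untested sketch):

SQL> select process, status, thread#, sequence#
     from v$managed_standby
     where process like 'MRP%';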


Friday, May 7, 2010

Bug when creating a physical standby database in 11G

Creating a physical standby database in 11G became a whole lot easier with just one RMAN command – unless you run into a bug :-)

My primary DB runs on Oracle Managed Files on ASM. My physical standby will also run on Oracle Managed Files on ASM on another server, but the ASM diskgroups are not the same.

I used Arup Nanda's great article Oracle Database 11g: The Top New Features for DBAs and Developers as a guide. The Data Guard article is number 15 in the series and well worth a read if you want to get up and running quickly.
The RMAN script I used:

run {
allocate channel c1 type disk;
allocate auxiliary channel s1 type disk;

SET NEWNAME FOR DATAFILE 1 TO NEW;
SET NEWNAME FOR DATAFILE 2 TO NEW;
SET NEWNAME FOR DATAFILE 3 TO NEW;
SET NEWNAME FOR DATAFILE 4 TO NEW;
SET NEWNAME FOR DATAFILE 5 TO NEW;
SET NEWNAME FOR DATAFILE 6 TO NEW;
SET NEWNAME FOR DATAFILE 7 TO NEW;
SET NEWNAME FOR DATAFILE 8 TO NEW;
SET NEWNAME FOR DATAFILE 9 TO NEW;
SET NEWNAME FOR DATAFILE 10 TO NEW;
SET NEWNAME FOR DATAFILE 11 TO NEW;
SET NEWNAME FOR DATAFILE 12 TO NEW;
SET NEWNAME FOR TEMPFILE 1 TO NEW;

duplicate target database
for standby
from active database
dorecover
spfile
parameter_value_convert 'adevc','adevt'
set db_unique_name='adevt'
set control_files='/opt/apps/oracle/product/11.1.0.7.0/dbs/control01.ctl'
set fal_client='adevt'
set fal_server='adevc'
set standby_file_management='AUTO'
set log_archive_config='dg_config=(adevc,adevt)'
set log_archive_dest_1='location="/dump/oracle/adevt_archivelogs", valid_for=(ONLINE_LOGFILE,ALL_ROLES)'
set log_archive_dest_2='service=adevc LGWR ASYNC valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=casdev'
set log_archive_dest_state_2='enable'
set db_create_file_dest='+TADG1'
set db_create_online_log_dest_1='+TADG1'
;

sql channel c1 'alter system archive log current';
sql channel s1 'alter database recover managed standby database using current logfile disconnect';
}
The error found in the RMAN logfile:
channel c1: starting datafile copy
input datafile file number=00023 name=+CADG1/adevc/datafile/all_ts.408.688130429
RMAN-03009: failure of backup command on c1 channel at 04/30/2010 14:54:55
ORA-19504: failed to create file "+TADG1"
channel c1 disabled, job failed on it will be run on another channel
released channel: c1
released channel: s1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 04/30/2010 14:54:55
RMAN-03015: error occurred in stored script Memory Script

RMAN-03009: failure of backup command on c1 channel at 04/30/2010 14:54:55
ORA-19504: failed to create file "+TADG1"

I found the root cause of the problem on Metalink: Standby DATAFILE Creation Failing with RAC / ASM / OMF [ID 787753.1]. It turns out to be Bug 7462589. The workaround for the bug is that the ASM diskgroups used on the standby must also exist on the primary. The note states that this will be fixed in 11gR2. So my workaround was to create a temporary ASM diskgroup called TADG1 (with only 1 disk) on the primary database server. After that the restore completed successfully.
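
The workaround boils down to something like this on the primary server's ASM instance (sketch only; the disk path is a placeholder):

SQL> create diskgroup TADG1 external redundancy disk '/dev/rdisk/tadg1_disk1';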

Monday, April 6, 2009

DBMS_FILE_TRANSFER

Oracle recommends the use of RMAN to create a physical standby database. But in the manual and various MAA white papers they simply state that you should make the backups available to the target system and that the path to the backup files on the source and standby server must be the same.

Unfortunately they do not explain how you should do this, especially when you have an FRA configured in ASM.

With the release of ASM 11g, Oracle introduced the ASMCMD cp command. I thought that solved the problem, since the white paper (Oracle Database 11g Automatic Storage Management New Features Overview, published June 2007) states "The ASMCMD cp command allows you to copy files between ASM disk groups between 2 ASM servers", but I could not get the command to copy my RMAN backupsets between ASM disk groups. I eventually logged SR 7173090.993, but even Oracle support was not sure and wasted 2 days trying to make it work. Only then did they confirm that the command does not work for copying backupsets between ASM disk groups (only data files).

OK what now?

I had a look around and decided that my only option was the cumbersome (but at least workable) DBMS_FILE_TRANSFER package.

The first step is to locate the RMAN backupsets on the source server:

ASMCMD> pwd
+PCASDGF/CASOB1/BACKUPSET/2009_03_27


Now create the same directory structure on the remote server. On the standby server I first created an ASM Diskgroup with the same name as the FRA on production (+PCASDGF).

ASMCMD> mkdir +PCASDGF/CASOB1
ASMCMD> mkdir +PCASDGF/CASOB1/BACKUPSET
ASMCMD> mkdir +PCASDGF/CASOB1/BACKUPSET/2009_03_27


Use DBMS_FILE_TRANSFER to instantiate an ASM DataGuard database

DBMS_FILE_TRANSFER.GET_FILE contacts a remote database to read a remote file and then creates a copy of the file in the local file system. But before I can use this method I need a "stage" instance on the standby server. (I used DBCA to create a stage database - it took 20 minutes to create)

Prepare the newly created stage database to use DBMS_FILE_TRANSFER:

CONNECT sys/XXX
CREATE OR REPLACE DIRECTORY DEST_DIR AS '+PCASDGF/CASOB1/BACKUPSET/2009_03_27';
CREATE DATABASE LINK SOURCE_DB CONNECT TO system IDENTIFIED BY XXX USING 'CASOB1.VODACOM.CO.ZA';

On Primary:

CREATE OR REPLACE DIRECTORY SOURCE_DIR AS '+PCASDGF/CASOB1/BACKUPSET/2009_03_27';


Get a list of the backupsets on the source server:

ASMCMD> pwd
+PCASDGF/CASOB1/BACKUPSET/2009_03_27
ASMCMD> ls
annnf0_TAG20090327T115944_0.417.682603193
annnf0_TAG20090327T115944_0.418.682603193
...


Use an editor to create a SQL command (one per backupset) which looks like this:


execute dbms_file_transfer.get_file(source_directory_object => 'SOURCE_DIR', source_file_name => 'ncsdf0_TAG20090406T092911_0.1367.683458159', destination_directory_object => 'DEST_DIR', destination_file_name => 'BACKUP1', source_database => 'SOURCE_DB');
execute dbms_file_transfer.get_file(source_directory_object => 'SOURCE_DIR', source_file_name => 'nnndf0_TAG20090406T092911_0.1364.683458159', destination_directory_object => 'DEST_DIR', destination_file_name => 'BACKUP2', source_database => 'SOURCE_DB');



Note that the destination_file_name is not the same as the source file name. You will get an ORA-15046 if they are the same; this error is the result of the backupset name being an Oracle-managed file name.

So I executed these commands on the standby server from the stage instance. The progress of the commands can be monitored using the V$SESSION_LONGOPS view. Once all the backupsets were copied over I was done with the stage database.
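
A simple way to watch the copies from the stage instance is something like this (rough sketch):

SQL> select opname, sofar, totalwork, round(sofar/totalwork*100,1) pct_done
     from v$session_longops
     where totalwork > 0 and sofar < totalwork;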

The next step is to create aliases for the new names of the backuppieces, so back on the source server issue:


ALTER DISKGROUP PCASDGF ADD ALIAS '+PCASDGF/CASOB1/BACKUPSET/2009_03_27/BACKUP1' for '+PCASDGF/CASOB1/BACKUPSET/2009_03_27/annnf0_TAG20090327T115944_0.417.682603193';



Once done, I can query V$ASM_ALIAS and see two entries per backuppiece: one for the backuppiece and one for the alias.
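
For example, something like this against the ASM instance (untested sketch):

SQL> select name, alias_directory, system_created
     from v$asm_alias
     where name like 'BACKUP%';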

The aim of doing this was to create a Data Guard instance, so before I can restore the backup I first have to prepare a Data Guard instance:

Log onto the standby server

$ export ORACLE_SID=casobc
$ orapwd file=orapwcasobc password=XXX

sqlplus
SYS@casobc>create spfile from pfile; (I previously created a pfile from production with a few changes necessary for DG)
File created.
SYS@casobc>startup force nomount;
ORACLE instance started.


OK, now I am ready to start the restore. First catalog the backuppieces using RMAN on the source/primary server.

Connect to RMAN on the primary server

RMAN> connect target sys/XXX@casob1.vodacom.co.za;

connected to target database: CASOB1 (DBID=1215042422)

RMAN> connect auxiliary /

connected to auxiliary database: CASOB1 (not mounted)

RMAN> catalog backuppiece '+PCASDGF/CASOB1/BACKUPSET/2009_03_27/BACKUP1';
RMAN> catalog backuppiece '+PCASDGF/CASOB1/BACKUPSET/2009_03_27/BACKUP2';



Once done, I could list the backup in RMAN and see two entries, one for the backuppiece and one for the alias.
Finally I am ready to use the RMAN backup to create a Data Guard instance.


RMAN> duplicate target database for standby ;
...
RMAN> exit

Monday, March 9, 2009

RMAN restore to new host




The mission:

Restore a production 10.2.0.4.0 database to a new host in order to test our recoveries. The production database is part of a Data Guard configuration in Maximum Availability mode and it runs on Unix RAW devices. The restore will be to ASM.

Notes by my colleague Ian Baugaard:

If you perform a test restore only, then do not connect to the recovery catalog when restoring the datafiles. Otherwise, RMAN records information about the restored datafiles to the recovery catalog. This interferes with future attempts to restore and recover the primary database. If you must use a recovery catalog because the control file is not large enough to contain the RMAN repository data on all of the backups that you need to restore, then export the catalog and import it into a different schema or database and use the copied recovery catalog for the test restore. Otherwise, the catalog considers the restored database as the current target database.

Also confirm all of the following before commencing:

- Ensure that the /etc/oratab file is correctly configured
- Ensure that the following environment variable are set to the values shown
o NLS_DATE_FORMAT=DD-MON-YYYY HH24:MI:SS
o NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
- Ensure that the $ORACLE_BASE/admin/$ORACLE_SID/bdump directory is empty
- Ensure that the job_queue_processes parameter is set to 0 when editing the parameter file at step 3
- That the location specified in the log_archive_dest parameter is valid, writable and empty
- Confirm whether or not the source database is using a change tracking file, and if so, try to ensure that the corresponding path exists on the node where the restore is being performed
- Ensure that the tnsnames.ora file does not contain any references whatsoever to a production system

1. Ensure your environment is correctly configured, i.e. ORACLE_SID is set correctly and that the Oracle software installation matches the environment of the source system. When ready, start RMAN and connect to the target without connecting to the recovery catalog.

rman target / NOCATALOG

2. Start the instance without mounting it, using the DBID obtained from the v$database view on the source database. RMAN will fail to find the server parameter file, which has not yet been restored, but will start the instance with a "dummy" file.

RMAN> set DBID dbid ;
RMAN> startup nomount ;

3. Restore and edit the server parameter file. NB: The values for NSR Server and NSR Client will have to be adjusted accordingly, and can be derived from the values of the production script

run {
allocate channel c1 device type SBT PARMS='ENV=(NSR_SERVER=nsr server, NSR_CLIENT=original server)';
restore spfile to pfile '$ORACLE_HOME/dbs/init$ORACLE_SID.ora' from autobackup;
shutdown abort;
}

4. From a second session, edit all appropriate parameters of the restored parameter file to cater for the environment you are restoring on, i.e. to reflect new directory structures, memory differences, etc. Once done, start the instance using the new file

RMAN> startup force nomount pfile='$ORACLE_HOME/dbs/init$ORACLE_SID.ora' ;

5. Proceed to restore the controlfile from an autobackup from your original session

run {
allocate channel c1 device type SBT PARMS=' ENV=(NSR_SERVER=nsr server, NSR_CLIENT=original server)';
restore controlfile from autobackup;
alter database mount;
}

6. Query the database filenames recorded in the control file on the new host by running the following query in SQL*Plus

col name for a60
SELECT file# AS "File/Grp#", name FROM v$datafile
UNION
SELECT group#, member FROM v$logfile;
EXIT

7. Write the RMAN recovery script. The script must include the following steps:
a. For each datafile on the destination host that is restored to a different path than it had on the source host, use a SET NEWNAME command to specify the new path on the destination host. (If the file systems on the destination system are set up to have the same paths as the source host, then do not use SET NEWNAME for those files restored to the same path as on the source host.) See an example of a dynamic SQL script to generate the commands needed below

select
'set newname for datafile '|| file# ||' to '''|| replace( name, 'rabdg', 'rabbcv' ) ||''' ;'
from v$datafile
order by file# ;

b. For each online redo log that is to be created at a different location than it had on the source host, use SQL ALTER DATABASE RENAME FILE commands to specify the pathname on the destination host. (If the file systems on the destination system are set up to have the same paths as the source host, then do not use ALTER DATABASE RENAME FILE for those files restored to the same path as on the source host.) See an example of a dynamic SQL script to generate the commands needed below

select
'SQL "alter database rename file '''''|| member ||''''' to '''''|| replace ( member, 'rabdg', 'rabbcv' ) ||''''' " ;'
from v$logfile
order by member ;

c. Perform a SET UNTIL to limit media recovery to the end of the archived redo logs.
d. Run SWITCH so that the control file recognizes the new path names as the official new names of the datafiles
e. Restore and recover the database
NB: By default, RMAN does not restore read-only files when you issue the RESTORE DATABASE command.

run {
# ALLOCATE A CHANNEL TO THE TAPE DEVICE
allocate channel c1 device type SBT PARMS='ENV=(NSR_SERVER=nsr server, NSR_CLIENT=original server)';
allocate channel c2 device type SBT PARMS='ENV=(NSR_SERVER=nsr server, NSR_CLIENT=original server)';
allocate channel c3 device type SBT PARMS='ENV=(NSR_SERVER=nsr server, NSR_CLIENT=original server)';

# RENAME THE DATAFILES AND ONLINE REDO LOGS
set newname for datafile 1 to '+TUNXDG_RESTORE/casprd/datafile/casprd_SYSTEM01' ;
set newname for datafile 2 to '+TUNXDG_RESTORE/casprd/datafile/casprd_UNDO01' ;
SQL "alter database rename file ''/dev/vx/rdsk/redodg/casprd_REDO1'' to ''+TUNXDG_RESTORE/casprd/onlinelog/casprd_REDO1'' ";

# DO A SET UNTIL TO PREVENT RECOVERY OF THE ONLINE LOGS
set until time "TO_DATE('20080310 08:33:12','YYYYMMDD HH24:MI:SS')" ;
# RESTORE THE DATABASE AND SWITCH THE DATAFILE NAMES
restore database check readonly force;
switch datafile all;

# RECOVER THE DATABASE
recover database;
}
EXIT

8. Now perform an OPEN RESETLOGS at the restored database.

RMAN> alter database open resetlogs ;

9. Issue a shutdown and start the restored database in mount mode, and correct the location of all tempfiles

10. If this was a test restore, and it was successful, then you can shut down the test database instance, and delete the test database with all of its files.

SQL> startup force mount restrict pfile='$ORACLE_HOME/dbs/init$ORACLE_SID.ora' ;
SQL> drop database ;

rm $ORACLE_HOME/dbs/init$ORACLE_SID.ora
rm log_archive_dest/*

Scripts and Logfiles

casprd1:/opt/apps/oracle/database/10.2.0.4/rdbms/admin> rman target / NOCATALOG

Recovery Manager: Release 10.2.0.4.0 - Production on Wed Feb 18 14:17:28 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database (not started)

RMAN> set dbid 2070848595;

executing command: SET DBID

RMAN> startup nomount ;

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/opt/apps/oracle/database/10.2.0.4/dbs/initcasprd1.ora'

starting Oracle instance without parameter file for retrival of spfile
Oracle instance started

Total System Global Area 159383552 bytes

Fixed Size 2054608 bytes
Variable Size 67110448 bytes
Database Buffers 83886080 bytes
Redo Buffers 6332416 bytes

RMAN> run {
2> allocate channel c1 device type SBT PARMS='ENV=(NSR_SERVER=pbcka1,NSR_CLIENT=pcapa2)';
3> restore spfile to pfile '$ORACLE_HOME/dbs/init$ORACLE_SID.ora' from autobackup;
4> shutdown abort;
5> }

allocated channel: c1
channel c1: sid=39 devtype=SBT_TAPE
channel c1: NMO v4.5.0.0

Starting restore at 19-FEB-2009 12:40:16

Change init.ora file to:
*.audit_file_dest='/opt/apps/oracle/admin/casprd1/adump'
*.background_dump_dest='/opt/apps/oracle/admin/casprd1/bdump'
*.compatible='10.2.0'
*.control_file_record_keep_time=21
*.core_dump_dest='/opt/apps/oracle/admin/casprd1/cdump'
*.db_block_size=8192
*.db_cache_size=500000000
*.db_files=500
*.db_name='casprd'
*.db_writer_processes=4
*.instance_name='casprd1'
*.large_pool_size=50000000
*.log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST'
*.log_archive_dest_state_1='ENABLE'
*.log_archive_format='casprd_%t_%s_%r.arc'
*.log_archive_max_processes=5
*.log_archive_min_succeed_dest=1
*.log_buffer=16384000
*.os_authent_prefix='ops$'
*.recovery_parallelism=4
*.remote_login_passwordfile='EXCLUSIVE'
*.service_names='casprd'
*.sga_target=2G
*.undo_management='AUTO'
*.undo_retention=28800
*.undo_tablespace='UNDO'
*.user_dump_dest='/opt/apps/oracle/admin/casprd1/udump'
*.workarea_size_policy='auto'
*.db_create_file_dest='+TUNXDG_RESTORE'
*.db_recovery_file_dest='+TUNXDG_RESTORE'
*.DB_RECOVERY_FILE_DEST_SIZE=2G
*.PROCESSES=200

RMAN> startup force nomount pfile='$ORACLE_HOME/dbs/init$ORACLE_SID.ora' ;

Oracle instance started

Total System Global Area 2147483648 bytes

Fixed Size 2057496 bytes
Variable Size 503319272 bytes
Database Buffers 1610612736 bytes
Redo Buffers 31494144 bytes

RMAN> run {
2> allocate channel c1 device type SBT PARMS='ENV=(NSR_SERVER=pbcka1,NSR_CLIENT=pcapa2,NSR_DEBUG_FILE=/home/oracle/nsr_icap2.log)';
3> restore controlfile from autobackup;
4> alter database mount;
5> }

allocated channel: c1
channel c1: sid=33 devtype=SBT_TAPE
channel c1: NMO v4.5.0.0

Starting restore at 19-FEB-2009 13:53:31

channel c1: looking for autobackup on day: 20090219
channel c1: autobackup found: c-2070848595-20090219-04
channel c1: control file restore from autobackup complete
output filename=+TUNXDG_RESTORE/casprd/controlfile/current.351.679240505
output filename=+TUNXDG_RESTORE/casprd/controlfile/current.352.679240507
Finished restore at 19-FEB-2009 13:55:17

database mounted
released channel: c1

Edit init.ora file to reflect new controlfile settings:
*.control_files='+TUNXDG_RESTORE/casprd/controlfile/current.351.679240505','+TUNXDG_RESTORE/casprd/controlfile/current.352.679240507'


casprd1:/home/oracle/meyert> cat icap_restore.ksh
rman target / nocatalog log='/home/oracle/meyert/icap_restore.log' cmdfile='/home/oracle/meyert/icap_restore.par'

casprd1:/home/oracle/meyert> cat icap_restore.par
run {
# ALLOCATE A CHANNEL TO THE TAPE DEVICE
allocate channel chnl1 device type SBT PARMS='ENV=(NSR_SERVER=pbcka1, NSR_CLIENT=pcapa2)';
allocate channel chnl2 device type SBT PARMS='ENV=(NSR_SERVER=pbcka1, NSR_CLIENT=pcapa2)';
allocate channel chnl3 device type SBT PARMS='ENV=(NSR_SERVER=pbcka1, NSR_CLIENT=pcapa2)';

# RENAME THE DATAFILES AND ONLINE REDO LOGS
set newname for datafile 1 to '+TUNXDG_RESTORE/casprd/datafile/casprd_SYSTEM01' ;
set newname for datafile 2 to '+TUNXDG_RESTORE/casprd/datafile/casprd_UNDO01' ;

set newname for datafile 321 to '+TUNXDG_RESTORE/casprd/datafile/casprd_SA_T_CMD_PARAM_0422' ;

SQL "alter database rename file ''/dev/vx/rdsk/redodg/casprd_REDO1'' to ''+TUNXDG_RESTORE/casprd/onlinelog/casprd_REDO1'' ";


# DO A SET UNTIL TO PREVENT RECOVERY OF THE ONLINE LOGS
set until time "TO_DATE('20090303 06:00:00','YYYYMMDD HH24:MI:SS')" ;

# RESTORE THE DATABASE AND SWITCH THE DATAFILE NAMES
restore database check readonly force;
switch datafile all;

}
EXIT

casprd1:/home/oracle/meyert> cat icap_restore.log

Recovery Manager: Release 10.2.0.4.0 - Production on Thu Mar 5 15:44:38 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: CASPRD (DBID=2070848595, not open)
using target database control file instead of recovery catalog

allocated channel: chnl1
channel chnl1: sid=26 devtype=SBT_TAPE
channel chnl1: NMO v4.5.0.0

allocated channel: chnl2
channel chnl2: sid=33 devtype=SBT_TAPE
channel chnl2: NMO v4.5.0.0

allocated channel: chnl3
channel chnl3: sid=11 devtype=SBT_TAPE
channel chnl3: NMO v4.5.0.0

executing command: SET NEWNAME

executing command: SET NEWNAME



executing command: SET until clause

Starting restore at 05-MAR-2009 15:51:58

channel chnl1: starting datafile backupset restore
channel chnl1: specifying datafile(s) to restore from backup set
restoring datafile 00009 to +TUNXDG_RESTORE/casprd/datafile/casprd_ic_t_met2_12
restoring datafile 00027 to +TUNXDG_RESTORE/casprd/datafile/casprd_pp_t_conf_0401
restoring datafile 00156 to +TUNXDG_RESTORE/casprd/datafile/casprd_sa_i_ssb
restoring datafile 00193 to +TUNXDG_RESTORE/casprd/datafile/casprd_pp_t_conf_0409
channel chnl1: reading from backup piece /full_CASPRD_t680405445_s84501_p1/

Finished restore at 07-MAR-2009 01:19:52

datafile 1 switched to datafile copy
input datafile copy recid=665 stamp=680836812 filename=+TUNXDG_RESTORE/casprd/datafile/casprd_system01
datafile 2 switched to datafile copy
input datafile copy recid=666 stamp=680836813 filename=+TUNXDG_RESTORE/casprd/datafile/casprd_undo01

datafile 321 switched to datafile copy
input datafile copy recid=985 stamp=680837070 filename=+TUNXDG_RESTORE/casprd/datafile/casprd_sa_t_cmd_param_0422

Starting recover at 07-MAR-2009 01:25:06
channel chnl1: starting incremental datafile backupset restore
channel chnl1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00008: +TUNXDG_RESTORE/casprd/datafile/casprd_sa_t_cmd_param_0418
destination for restore of datafile 00035: +TUNXDG_RESTORE/casprd/datafile/casprd_sa_i_cmd
destination for restore of datafile 00042: +TUNXDG_RESTORE/casprd/datafile/casprd_sa_i_event
destination for restore of datafile 00192: +TUNXDG_RESTORE/casprd/datafile/casprd_ic_t_met2_05
channel chnl1: reading from backup piece /level1_CASPRD_t680504446_s84621_p1/

channel chnl3: reading from backup piece /level1_CASPRD_t680506557_s84697_p1/
channel chnl3: restored backup piece 1
piece handle=/level1_CASPRD_t680506557_s84697_p1/ tag=TAG20090303T050038
channel chnl3: restore complete, elapsed time: 00:17:46

starting media recovery

Oracle Error:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '+TUNXDG_RESTORE/casprd/datafile/casprd_system01'

released channel: chnl1
released channel: chnl2
released channel: chnl3
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 03/07/2009 08:19:08
RMAN-06557: unable to restore archived log thread 1, sequence 12670
RMAN-06558: archived log size of 2043389 kb is bigger than available space of 2020336 kb

Recovery Manager complete.

(My DB_RECOVERY_FILE_DEST_SIZE was too small, so I increased it and simply ran the recovery again, since the restore had already completed.)

casprd1:/home/oracle/meyert> cat icap_recovery.ksh
rman target / nocatalog log='/home/oracle/meyert/icap_recovery.log' cmdfile='/home/oracle/meyert/icap_recovery.par'

casprd1:/home/oracle/meyert> cat icap_recovery.par
run {
# ALLOCATE A CHANNEL TO THE TAPE DEVICE
allocate channel chnl1 device type SBT PARMS='ENV=(NSR_SERVER=pbcka1, NSR_CLIENT=pcapa2)';
allocate channel chnl2 device type SBT PARMS='ENV=(NSR_SERVER=pbcka1, NSR_CLIENT=pcapa2)';
allocate channel chnl3 device type SBT PARMS='ENV=(NSR_SERVER=pbcka1, NSR_CLIENT=pcapa2)';

# DO A SET UNTIL TO PREVENT RECOVERY OF THE ONLINE LOGS
set until time "TO_DATE('20090303 06:00:00','YYYYMMDD HH24:MI:SS')" ;

# RECOVER THE DATABASE
recover database;
}
EXIT

casprd1:/home/oracle/meyert> cat icap_recovery.log

Recovery Manager: Release 10.2.0.4.0 - Production on Mon Mar 9 07:33:00 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: CASPRD (DBID=2070848595, not open)
using target database control file instead of recovery catalog

allocated channel: chnl1
channel chnl1: sid=33 devtype=SBT_TAPE
channel chnl1: NMO v4.5.0.0

allocated channel: chnl2
channel chnl2: sid=12 devtype=SBT_TAPE
channel chnl2: NMO v4.5.0.0

allocated channel: chnl3
channel chnl3: sid=27 devtype=SBT_TAPE
channel chnl3: NMO v4.5.0.0

executing command: SET until clause

Starting recover at 09-MAR-2009 07:33:37

starting media recovery

channel chnl1: starting archive log restore to default destination
channel chnl1: restoring archive log
archive log thread=1 sequence=12668
channel chnl1: restoring archive log
archive log thread=1 sequence=12669
channel chnl1: reading from backup piece /arch_CASPRD_t680507135_s84703_p1/
channel chnl1: restored backup piece 1
piece handle=/arch_CASPRD_t680507135_s84703_p1/ tag=TAG20090303T054533
channel chnl1: restore complete, elapsed time: 00:01:17
archive log filename=+TUNXDG_RESTORE/casprd/archivelog/2009_03_09/thread_1_seq_12668.354.681032213 thread=1 sequence=12668
channel default: deleting archive log(s)
archive log filename=+TUNXDG_RESTORE/casprd/archivelog/2009_03_09/thread_1_seq_12668.354.681032213 recid=108554 stamp=681032266
archive log filename=+TUNXDG_RESTORE/casprd/archivelog/2009_03_09/thread_1_seq_12669.353.681032215 thread=1 sequence=12669
channel default: deleting archive log(s)
archive log filename=+TUNXDG_RESTORE/casprd/archivelog/2009_03_09/thread_1_seq_12669.353.681032215 recid=108553 stamp=681032231
channel chnl1: starting archive log restore to default destination
channel chnl1: restoring archive log
archive log thread=1 sequence=12670
channel chnl1: reading from backup piece /arch_CASPRD_t680515326_s84705_p1/
channel chnl1: restored backup piece 1
piece handle=/arch_CASPRD_t680515326_s84705_p1/ tag=TAG20090303T080205
channel chnl1: restore complete, elapsed time: 00:02:06
archive log filename=+TUNXDG_RESTORE/casprd/archivelog/2009_03_09/thread_1_seq_12670.353.681033147 thread=1 sequence=12670
channel default: deleting archive log(s)
archive log filename=+TUNXDG_RESTORE/casprd/archivelog/2009_03_09/thread_1_seq_12670.353.681033147 recid=108555 stamp=681033249
media recovery complete, elapsed time: 00:05:01
Finished recover at 09-MAR-2009 07:59:13
released channel: chnl1
released channel: chnl2
released channel: chnl3

Recovery Manager complete.


Startup fails with "ORA-16072: a minimum of one standby database destination is required" and the instance is terminated by LGWR.
I found the solution on Stewart F's blog.

According to Metalink note 245731.1, this was caused by the fact that there was still a reference in my primary database's data dictionary: "The Primary Database has still the Protection Mode stored in the Data Dictionary. The Protection Mode requires a running connection to a Standby Database with LGWR SYNC as Log-Transportation Mode. Since the standby is not available anymore, LGWR of the primary terminates the instance."

Solution:


SYS@casprd1> startup mount;
ORACLE instance started.

Total System Global Area 2147483648 bytes
Fixed Size 2057496 bytes
Variable Size 503319272 bytes
Database Buffers 1610612736 bytes
Redo Buffers 31494144 bytes
Database mounted.
SYS@casprd1> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;

Database altered.

SYS@casprd1> shutdown immediate
SYS@casprd1> startup mount;
ORACLE instance started.

Total System Global Area 2147483648 bytes
Fixed Size 2057496 bytes
Variable Size 503319272 bytes
Database Buffers 1610612736 bytes
Redo Buffers 31494144 bytes
Database mounted.
SYS@casprd1> select name from v$tempfile ;

NAME
-----------------------------------------------------------------------------------------------------------------------------
/dev/vx/rdsk/oradg/casprd_TEMP01
/dev/vx/rdsk/oradg/casprd_TEMP02
/dev/vx/rdsk/oradg/casprd_TEMP04
/dev/vx/rdsk/oradg1/casprd_TEMP05
/dev/vx/rdsk/oradg1/casprd_TEMP06
/dev/vx/rdsk/oradg1/casprd_TEMP07
/dev/vx/rdsk/oradg1/casprd_TEMP08

7 rows selected.

SYS@casprd1> alter database rename file '/dev/vx/rdsk/oradg/casprd_TEMP01' to '+TUNXDG_RESTORE/casprd/tempfile/casprd_TEMP01';

Database altered.

Repeat for all 7 tempfiles.

SYS@casprd1> alter database open;

Database altered.

Friday, December 12, 2008

Ora-17505 using RMAN with ASM

I had to create a Data Guard instance on test, so that we could evaluate Transparent Application Failover (TAF) for the app. I wanted to test the same procedure to build Data Guard that I will use for Production, and that meant that I had to do an RMAN backup to ASM. Our Storage team loaned me LUNs of different sizes for my temporary ASM on test.

But my backup command (Backup filesperset 10 database include current controlfile for standby) failed with:

RMAN-03009: failure of backup command on ORA_DISK_13 channel at 10/30/2008 20:21:27
ORA-19510: failed to set size of 5779210 blocks for file "+PCASDGF" (blocksize=8192)
ORA-17505: ksfdrsz:1 Failed to resize file to size 5779210 blocks

This was a surprise because the test db is 7TB and FRA is 9TB.


SQL> SELECT sum(space), sum(bytes) FROM v$asm_file;
and
SQL> SELECT * FROM V$FLASH_RECOVERY_AREA_USAGE;


confirmed that I had more than enough space available. In fact the backup failed after using only 47% of the available space.

It turns out that the smallest disk in the diskgroup was the bottleneck.


SQL>select group_number, disk_number, total_mb, free_mb from v$asm_disk order by 4;

GROUP_NUMBER DISK_NUMBER   TOTAL_MB    FREE_MB
------------ ----------- ---------- ----------
           1          13      86315         70
           1          16      17263      17080
           1          14      17263      17080
           1         129      34522      34168
           1         130      34522      34168
           1         131      34522      34168
           1          19      34522      69052
...

As you can see disk 13 only had 70MB of space available. I removed all the disks of varying sizes and only kept the disks of 69052 MB Size. The total size of the FRA came down to 8493396 MB, but the RMAN backup completed successfully.

Lesson Learned:
ASM spreads file extents evenly across all the disks in a diskgroup. An ORA-17505 error can still be encountered due to imbalanced free space between disks: one disk lacking sufficient free space makes it impossible to do any allocation in the disk group, because every file must be evenly allocated across all disks.
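
A quick way to spot this kind of imbalance before a big backup is something like this (rough sketch):

SQL> select group_number, count(*) disks, min(free_mb) min_free_mb, max(free_mb) max_free_mb
     from v$asm_disk
     group by group_number;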


Friday, December 5, 2008

Delete archive logs from the standby server

I implemented the following script to delete archive logs from a standby server.
You cannot just delete the archive log files because that will leave entries in the V$ARCHIVED_LOG table.


#!/bin/ksh
#**************************************************************
# Here is where we loop through each SID in /etc/oratab . . .
#**************************************************************
cat /etc/oratab | grep '^c.*:.*:[Yy]$' | {
while IFS=":" read ORACLE_SID ORACLE_HOME junk
do
PATH=${ORACLE_HOME}/bin:$PATH
export ORACLE_SID ORACLE_HOME PATH

# Now run the SQL script to generate a rm script
sqlplus -s /nolog <<EOF
connect / as sysdba
set heading off feedback off timing off
spool /usr/users/oracle/cronscripts/logs/rm_arch_${ORACLE_SID}.ksh
select 'rm '||NAME from v\$archived_log
where REGISTRAR='RFS'
and APPLIED='YES'
and DELETED='NO'
and COMPLETION_TIME < (SYSDATE-1);
spool off
exit
EOF


# Now run the generated rm script
chmod 740 /usr/users/oracle/cronscripts/logs/rm_arch_${ORACLE_SID}.ksh
/usr/users/oracle/cronscripts/logs/rm_arch_${ORACLE_SID}.ksh

# Now use RMAN to update V$ARCHIVED_LOG.DELETED
rman <<EOF
connect target /
crosscheck archivelog all;
delete noprompt expired archivelog all;
EOF

done
}



Notes:
Columns V$ARCHIVED_LOG.REGISTRAR and APPLIED:
If REGISTRAR='RFS' and APPLIED is NO, then the log has arrived at the standby but has not yet been applied.
If REGISTRAR='RFS' and APPLIED is YES, the log has arrived and been applied at the standby database.

Column V$ARCHIVED_LOG.DELETED:
Indicates whether an RMAN DELETE command has physically deleted the archived log file from disk (YES) or not (NO).

RMAN EXPIRED
Removes only files whose status in the repository is EXPIRED.
RMAN marks backups and copies as expired when you run a CROSSCHECK command and the files are absent or inaccessible.
To determine which files are expired, run a LIST EXPIRED command.

RMAN NOPROMPT
Beginning in Oracle9i, RMAN's default behavior is to prompt for confirmation when you run DELETE EXPIRED.
In prior releases, RMAN did not prompt.