CROSS-PLATFORM MIGRATION USING RMAN CONVERT

CROSS-PLATFORM MIGRATION OF RAC DATABASES:-

Limitations on Transportable Tablespace Use

  1. The source and target database must use the same character set and national character set.
  2. You cannot transport a tablespace to a target database in which a tablespace with the same name already exists. However, you can rename either the tablespace to be transported or the destination tablespace before the transport operation.
  3. Objects with underlying objects (such as materialized views) or contained objects (such as partitioned tables) are not transportable unless all of the underlying or contained objects are in the tablespace set.
    • Review the table "Objects Exported and Imported in Each Mode" in the Oracle Database Utilities documentation; several object types are not exported in tablespace mode.
  4. If the owners of tablespace objects do not exist on the target database, the usernames must be created manually before starting the transportable tablespace import.
    • If you use spatial indexes, then:
      • be aware that TTS across different-endian platforms is not supported for spatial indexes in 10gR1 and 10gR2; this limitation was lifted in 11g
      • specific Spatial packages must be run before export and after transport; please see the Oracle Spatial documentation.
  5. Beginning with Oracle Database 10g Release 2, you can transport tablespaces that contain XMLTypes, but you must use the IMP and EXP utilities, not Data Pump. When using EXP, ensure that the CONSTRAINTS and TRIGGERS parameters are set to Y (the default).

    The following query returns a list of tablespaces that contain XMLTypes:

select distinct p.tablespace_name
from dba_tablespaces p, dba_xml_tables x, dba_users u, all_all_tables t
where t.table_name = x.table_name
and t.tablespace_name = p.tablespace_name
and x.owner = u.username;

Transporting tablespaces with XMLTypes has the following limitations:
    • The target database must have XML DB installed.
    • Schemas referenced by XMLType tables cannot be the XML DB standard schemas.
    • Schemas referenced by XMLType tables cannot have cyclic dependencies.
    • Any row level security on XMLType tables is lost upon import.
    • If the schema for a transported XMLType table is not present in the target database, it is imported and registered. If the schema already exists in the target database an error is returned unless the ignore=y option is set.
  6. Advanced Queues: Transportable tablespaces do not support 8.0-compatible advanced queues with multiple recipients.
  7. You cannot transport the SYSTEM tablespace or objects owned by the user SYS.
  8. Opaque Types: Types such as RAW, BFILE, and the AnyTypes can be transported, but they are not converted as part of the cross-platform transport operation. Their actual structure is known only to the application, so the application must address any endianness issues after these types are moved to the new platform.
  9. Floating-Point Numbers: BINARY_FLOAT and BINARY_DOUBLE types are transportable using Data Pump but not with the original export utility, EXP.

Tasks for Migration of RAC Databases from Solaris to Linux platform.

Freeze Application

Freeze the system at this point.
No application or user should be allowed to connect to the database.
Stop all backup jobs, cron jobs, etc.


Source Database Tasks (Databases on Solaris)






Step 1:- Generate all scripts/SQLs using the generic scripts on the source.






The scripts below were prepared to automate the transport/conversion process.

@check_objects_sys_system.sql ---- This script checks if there are any SYS/SYSTEM objects present in any other tablespace.

@create_exp_scripts.sql ---- This script creates the export parameter file.

@cts_drop_ts.sql ----- This script generates the script that drops the user tablespaces (all except SYSTEM, SYSAUX, and XDB).

@cts_sys_grants.sql ---- This script generates grants for all users.

@reset_sequence.sql ---- This script generates DDL to drop and re-create user sequences at their current values.

@tablespacereadonly.sql ---- This script makes all user tablespaces read only.
@tablespacereadwrite.sql ------- This script makes all user tablespaces read write.

@transportts.sql ------- This script checks for transport set violations.

@rman_convertscript.sql ------- This script generates the RMAN CONVERT commands used to convert the datafiles at the source/target.




Step 2:- Perform the Self-Containment Check and Resolve Violations on the source database.




SQL> @transportts.sql

e.g. SQL> select * from transport_set_violations;

VIOLATIONS
----------------------------------------------
ORA-39917: SYS owned object xxxxx in tablespace USERS not allowed in pluggable set
ORA-39917: SYS owned object xxxxx in tablespace USERS not allowed in pluggable set
ORA-39917: SYS owned object xxxxx in tablespace USERS not allowed in pluggable set
...

Note:- These objects need to be moved into the SYSTEM tablespace; otherwise the export will fail.
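For example, an offending SYS-owned object can be relocated into SYSTEM with a simple move/rebuild. A minimal sketch, using hypothetical object names T_AUDIT and I_AUDIT reported in the violations:

SQL> alter table sys.t_audit move tablespace system;
SQL> alter index sys.i_audit rebuild tablespace system;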


Step 3:- Export Source Database Metadata (No DDL allowed after this stage)






Export all metadata from the source database. After the tablespaces are transported, this dump will be imported into the target database to create the metadata that was not transported. Perform no DDL after this step.

(Create the ttsdir directory first and then run the export.)

SQL> create or replace directory ttsdir as '/u01/test/crosstest';

$ expdp system/password DIRECTORY=ttsdir LOGFILE=fullexp_meta.log DUMPFILE=full_meta.dmp FULL=y CONTENT=METADATA_ONLY EXCLUDE=USER,ROLE,ROLE_GRANT,PROFILE





Step 4:- Disconnect Users and Restrict Access to the Source Database
(Database will not be available to users)




Restrict access to the source database and disconnect existing users to ensure there is no additional data modification. On RAC, run the statement on all nodes and check the status as below.

SQL> alter system enable restricted session;

System altered.

SQL> select inst_id, instance_name, logins from gv$instance;

   INST_ID INSTANCE_NAME    LOGINS
---------- ---------------- ----------
         2 racdb2           RESTRICTED
         1 racdb1           RESTRICTED

Then disconnect the remaining sessions:

SQL> alter system disconnect session '<sid>,<serial#>,@<inst_id>' immediate;

e.g. SQL> ALTER SYSTEM DISCONNECT SESSION '1158,5,@2' IMMEDIATE;

To find the sessions, use the query below:


SET LINESIZE 100
COLUMN spid FORMAT A10
COLUMN username FORMAT A10
COLUMN program FORMAT A45
SELECT s.inst_id,
s.sid,
s.serial#,
p.spid,
s.username,
s.program
FROM gv$session s
JOIN gv$process p ON p.addr = s.paddr AND p.inst_id = s.inst_id
WHERE s.type != 'BACKGROUND';



A dynamic query can be used to generate the disconnect statements:

SELECT 'ALTER SYSTEM DISCONNECT SESSION '''||s.sid||','||s.serial#||',@'||s.inst_id||''' IMMEDIATE;'
FROM gv$session s
JOIN gv$process p ON p.addr = s.paddr AND p.inst_id = s.inst_id
WHERE s.type != 'BACKGROUND';


Step 5:- Make All User Tablespaces READ ONLY on the Source Database.






SQL> @tablespacereadonly.sql
/* Make all user tablespaces READ ONLY */

Then run the generated script and verify:

SQL> @tablespace_read_only.sql
SQL> select tablespace_name, status from dba_tablespaces;


Step 6:- Gather Sequence Information on the Source DB and Create a Script for the Target






SQL> @reset_sequence.sql

This script is not always required, so check as per your requirement (sequences are not carried inside the transported tablespaces; they are re-created by the metadata import, so verify their start values).
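To review the current sequence values before deciding, a quick check (the schema filter is an assumption; extend it for your environment):

SQL> select sequence_owner, sequence_name, last_number
     from dba_sequences
     where sequence_owner not in ('SYS','SYSTEM');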


Step 7:- Export Tablespaces from the Source Database.






expdp system/password PARFILE=ctsexp.par


Note:- The contents of ctsexp.par are shown below. The file can be created by the dynamic script (create_exp_scripts.sql) instead of being written manually.

USERID ='/ as sysdba'
directory=ctsdir
dumpfile=cts_exp.dmp
logfile=ctsexp.log
transport_full_check=no
transport_tablespaces=TABLESPACE_XXX_01,
TABLESPACE_XXX_02,
TABLESPACE_XXX_03,
......
TABLESPACE_XXX_xxx










Step 8:- Convert the Datafiles -- Make the Source Datafiles Available to the Target Database.






(Note:- To check the platform name of the source database, use the query below:
SQL> select name, platform_id, platform_name from v$database;)
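RMAN conversion is required here because the endian formats differ (SPARC Solaris is big-endian; Linux x86 64-bit is little-endian). To list the supported platforms and their endian formats, query V$TRANSPORTABLE_PLATFORM:

SQL> select platform_id, platform_name, endian_format
     from v$transportable_platform
     order by platform_id;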



NOTE:- An issue may occur during conversion if compressed backupsets are configured as the RMAN default. Disable compressed backups using the command below.
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;

Please refer :- http://albertolarripa.com/?p=1469

$ nohup ./convert_sun.sh &


Note:- The above script calls cts_convert_1.rman through cts_convert_7.rman.
Check each cts_convert_xx.rman script for correct paths.
(Both convert_sun.sh and cts_convert_1.rman are given at the end of this doc.)
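Alternatively, the conversion can be run on the target (Linux) side with CONVERT DATAFILE ... FROM PLATFORM instead of converting on the source. A minimal sketch, with hypothetical file names and staging paths:

RMAN> CONVERT DATAFILE '/backup/convertstage/users_01.dbf'
      FROM PLATFORM 'Solaris[tm] OE (64-bit)'
      FORMAT '/u01/stage/%N_%f';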


Step 9:- SCP All Converted Datafiles to the Destination Staging Area






In our case all datafiles are converted and placed directly on a NAS drive, so SCP is not needed.

If the source and target servers do not share storage, transfer the converted files over the network using SCP.
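For example (the hostname and paths are hypothetical):

$ scp /backups/test/crosstest/stage/*.dbf oracle@linuxtarget:/backup/convertstage/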


Target Database Tasks
(Databases on Linux Platform)






Note:- On the target, all the tasks below are performed on RAC node 1.
We do not touch node 2 until the migration is complete.

Step 1:- Create the Target Database
(This database is created during the pre-intervention tasks.)




This task is performed before the intervention.

Note:- Prerequisites: same timezone file version, same tempfiles, same redo log groups, same undo tablespace.

If the target timezone file version is lower than the source, the import may fail, so create the target database using the same timezone file version as the source. If the source version is the lower one, export the variable below before creating the database:

export ORA_TZFILE=$ORACLE_HOME/oracore/zoneinfo/timezlrg_xx.dat

After exporting the variable, create the database using DBCA and use the "no datafiles" (custom) option on the third screen.

Metalink:- Data Pump TTS Import Fails With ORA-39002 And ORA-39322 Due To TIMEZONE Conflict [ID 1275433.1]

SQL> select name, value$ from props$ where name = 'DST_PRIMARY_TT_VERSION';
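On 11g you can also check which timezone file each database is actually using (run on both source and target):

SQL> select filename, version from v$timezone_file;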


Step 2:- Create Placeholder Tablespaces (this is done during the pre-intervention)

This task is performed before the intervention.
Create the tablespaces with minimum-size datafiles.


Step 3:- Create the Metadata Required for TTS.






$ impdp system/password DIRECTORY=ttsdir LOGFILE=userimp.log NETWORK_LINK=ttslink FULL=y INCLUDE=USER,ROLE,ROLE_GRANT,PROFILE

Note:- "already exists" errors can be ignored; check for any other errors.

Note:- If you are not creating a dblink, take an export on the source and import it on the target instead.

(Create the directory ttsdir and the database link ttslink before executing; see the sketch below.)
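A minimal sketch of those prerequisites (the TNS alias 'sourcedb' is hypothetical):

SQL> create or replace directory ttsdir as '/u01/test/crosstest';
SQL> create database link ttslink connect to system identified by password using 'sourcedb';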


Step 4:- Drop the Placeholder Tablespaces






To drop all user tablespaces, run the tts_drop_ts.sql script created when cts_drop_ts.sql was run.

Note:- If the default tablespace is USERS, make SYSTEM the default and then drop the USERS tablespace:

alter database default tablespace system;
drop tablespace users including contents and datafiles;

SQL> @tts_drop_ts.sql (this script is generated for automation)


Step 5:- Import Tablespaces into the Target Database (Plug In the Tablespaces)






impdp system/password PARFILE=ctsimp.par

Note:- Watch for errors during this import and resolve them.

We are plugging in the datafiles placed in the staging area.

e.g. ctsimp.par (contents as below):

USERID='/ as sysdba'
directory=test
dumpfile=cts_exp.dmp
logfile=ctsimp.log
transport_datafiles='/backup/convertstage/users.000.746700907',
'/backup/convertstage/users_01.000.746700907',
'/backup/convertstage/users_02.000.746700907',
'/backup/convertstage/users_03.000.746700907',
'/backup/convertstage/users_04.000.746700907'

Note:- Change the datafile paths in the actual parameter file (/backup/convertstage/... is the staging path where we copied all the converted datafiles from the source).
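After the import completes, a quick sanity check (a sketch):

SQL> select tablespace_name, status, plugged_in from dba_tablespaces;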


Step 6:- Make User Tablespaces READ WRITE on the Target Database.






alter tablespace USERS read write;

...repeat for all tablespaces;

select tablespace_name, status from dba_tablespaces;

(All datafiles that were plugged in from the staging area will be online now.)


Step 7:- Copy the Datafiles from '/oraback/stage' into the ASM Area Using RMAN on the Target





Note:- The commands below are run from the script rmancopy.sh (make sure you update copyrman.rman with the datafile details; the scripts are attached at the end of this doc).

$ rman target / nocatalog
RMAN> backup as copy datafile '/backup/convertstage/....dbf' format '+DGROUPA';

where +DGROUPA is the name of the ASM diskgroup.

Step 8:- Switch the Datafiles to the Copy.




If the database is open, you need to take the datafile offline first:
SQL> alter database datafile '/backup/convertstage/....dbf' offline;

Then switch to the copy:
$ rman target / nocatalog
RMAN> switch datafile '/backup/convertstage/....dbf' to copy;

Note down the name of the copy created in the +DGROUPA diskgroup, e.g. '+DGROUPA/s101/datafile/tts.270.5'.

(Note:- Scripts for Offline and Switch to Copy are attached at the end of the doc.)

Step 9:- Put All Datafiles Online and Check That They Are in the ASM Area.




  1. Put the datafile online again; it needs to be recovered first:
SQL> recover datafile '+DGROUPA/s101/datafile/tts.270.5';
SQL> alter database datafile '+DGROUPA/s101/datafile/tts.270.5' online;
  2. Check that the datafile is indeed part of the ASM area and online:
SQL> select name, status from v$datafile;

The output should be:
+DGROUPA/s101/datafile/tts.270.5 ONLINE

(Note:- The Recover and Online scripts are at the end of the doc.)








Step 10:- Import Source Database Metadata into the Target Database.






impdp system/password DIRECTORY=ttsdir LOGFILE=fullmeta.log DUMPFILE=full_meta.dmp FULL=y

Note:- Warnings can be ignored.

Step 11:- Final Checks

Check for any errors during the import of the tablespaces and the full metadata. Sometimes you have to rebuild indexes or materialized views if they failed to create, so carefully review the import logfiles.








  • Compile Invalid Objects




Run $ORACLE_HOME/rdbms/admin/utlrp.sql to compile invalid objects:
SQL> @?/rdbms/admin/utlrp.sql
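Afterwards, verify that nothing is left invalid:

SQL> select owner, object_name, object_type from dba_objects where status = 'INVALID';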

  • Compare the databases using TOAD

Resolve any problems.

  • Make the USERS tablespace the default again:

alter database default tablespace users;






  • Perform a full RMAN backup.

A full backup to disk or tape should be done.
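A minimal sketch (assuming the database is in archivelog mode):

RMAN> backup database plus archivelog;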



Start the Flow on the Application Server

Point all applications to the new database, connect, and test for any issues.

ROLLBACK IF ISSUES AFTER MIGRATION

If there are any issues, rollback is very easy: point all applications back to the old database, and make all tablespaces read write again on the old database.
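On the old (source) database, the read-write script generated earlier can be reused:

SQL> @tablespacereadwrite.sql
SQL> @tablespace_read_write.sql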

Scripts For Automation:-

check_objects_sys_system.sql
select owner, segment_name, segment_type, tablespace_name
from dba_segments
where tablespace_name in ('SYSTEM', 'SYSAUX')
and owner not in ('SYS', 'SYSTEM', 'DBSNMP', 'SYSMAN', 'OUTLN', 'MDSYS',
'ORDSYS', 'EXFSYS', 'DMSYS', 'WMSYS', 'WKSYS', 'CTXSYS', 'ANONYMOUS', 'XDB',
'WKPROXY', 'ORDPLUGINS', 'DIP', 'SI_INFORMTN_SCHEMA', 'OLAPSYS', 'MDDATA',
'WK_TEST', 'MGMT_VIEW', 'TSMSYS', 'ORDDATA', 'APPQOSSYS');

create_exp_scripts.sql
REM
REM Create CTS Data Pump export and import PAR files
REM
set feedback off
set trimspool on
set serveroutput on size 1000000
REM
REM Data Pump parameter file for CTS export
REM
spool ctsexp.par
declare
tsname varchar(30);
i number := 0;
begin
dbms_output.put_line(q'$USERID ='/ as sysdba'$');
dbms_output.put_line('directory=ctsdir');
dbms_output.put_line('dumpfile=cts_exp.dmp');
dbms_output.put_line('logfile=ctsexp.log');
dbms_output.put_line('transport_full_check=no');
dbms_output.put('transport_tablespaces=');
for ts in
(select tablespace_name from dba_tablespaces
where tablespace_name not in ('SYSTEM','SYSAUX','XDB')
and contents = 'PERMANENT'
order by tablespace_name)
loop
if (i!=0) then
dbms_output.put_line(tsname||',');
end if;
i := 1;
tsname := ts.tablespace_name;
end loop;
dbms_output.put_line(tsname);
dbms_output.put_line('');
end;
/
spool off

cts_drop_ts.sql
set heading off feedback off trimspool on linesize 500
spool tts_drop_ts.sql
prompt /* ===================== */
prompt /* Drop user tablespaces */
prompt /* ===================== */
select 'DROP TABLESPACE ' || tablespace_name || ' INCLUDING CONTENTS AND DATAFILES;' from dba_tablespaces
where tablespace_name not in ('SYSTEM','SYSAUX','XDB')
and contents = 'PERMANENT';
spool off

cts_sys_grants.sql
set heading off feedback off trimspool on escape off
set long 1000 linesize 1000
col USERDDL format A150
spool cts_sys_privs.sql
prompt /* ============ */
prompt /* Grant privs */
prompt /* ============ */
select 'grant '||privilege||' on "'||
owner||'"."'||table_name||'" to "'||grantee||'"'||
decode(grantable,'YES',' with grant option ')||
decode(hierarchy,'YES',' with hierarchy option ')||
';'
from dba_tab_privs
where owner in
('SYS', 'SYSTEM', 'DBSNMP', 'SYSMAN', 'OUTLN', 'MDSYS',
'ORDSYS', 'EXFSYS', 'DMSYS', 'WMSYS', 'WKSYS', 'CTXSYS',
'ANONYMOUS', 'XDB', 'WKPROXY', 'ORDPLUGINS', 'DIP',
'SI_INFORMTN_SCHEMA', 'OLAPSYS', 'MDDATA', 'WK_TEST',
'MGMT_VIEW', 'TSMSYS')
and grantee in
(select username
from dba_users
where username not in
('SYS', 'SYSTEM', 'DBSNMP', 'SYSMAN', 'OUTLN', 'MDSYS',
'ORDSYS', 'EXFSYS', 'DMSYS', 'WMSYS', 'WKSYS', 'CTXSYS',
'ANONYMOUS', 'XDB', 'WKPROXY', 'ORDPLUGINS', 'DIP',
'SI_INFORMTN_SCHEMA', 'OLAPSYS', 'MDDATA', 'WK_TEST',
'MGMT_VIEW', 'TSMSYS')
);
spool off



reset_sequence.sql
set heading off feedback off
set long 1000
set linesize 1000
set pagesize 0
col SEQDDL format A300
spool cts_create_seq.sql
prompt /* ========================= */
prompt /* Drop and create sequences */
prompt /* ========================= */
select regexp_replace(
dbms_metadata.get_ddl('SEQUENCE',sequence_name,sequence_owner),
'^.*(CREATE SEQUENCE.*CYCLE).*$',
'DROP SEQUENCE "'||sequence_owner||'"."'||sequence_name
||'";'||chr(10)||'\1;') SEQDDL
from dba_sequences
where sequence_owner not in
('SYS', 'SYSTEM', 'DBSNMP', 'SYSMAN', 'OUTLN', 'MDSYS',
'ORDSYS', 'EXFSYS', 'DMSYS', 'WMSYS', 'WKSYS', 'CTXSYS',
'ANONYMOUS', 'XDB', 'WKPROXY', 'ORDPLUGINS', 'DIP',
'SI_INFORMTN_SCHEMA', 'OLAPSYS', 'MDDATA', 'WK_TEST',
'MGMT_VIEW', 'TSMSYS');
spool off

rman_convertscript.sql
REM
REM Create RMAN CONVERT TABLESPACE script for cross platform CTS
REM Use for source system conversion only
REM
set feedback off
set serveroutput on size 1000000
spool cts_convert.rman
declare
tsname varchar(30);
i number := 0;
begin
dbms_output.put_line('# Sample RMAN script to perform file conversion on all user tablespaces');
dbms_output.put_line('# Tablespace names taken from DBA_TABLESPACES');
dbms_output.put_line('# Please review and edit before using');
dbms_output.put_line('CONVERT TABLESPACE ');
for ts in
(select tablespace_name from dba_tablespaces
where tablespace_name not in ('SYSTEM','SYSAUX')
and contents = 'PERMANENT'
order by tablespace_name)
loop
if (i!=0) then
dbms_output.put_line(tsname||',');
end if;
i := 1;
tsname := ts.tablespace_name;
end loop;
dbms_output.put_line(tsname);
dbms_output.put_line('TO PLATFORM ''Linux x86 64-bit''');
dbms_output.put_line('PARALLELISM 4');
dbms_output.put_line('DB_FILE_NAME_CONVERT ''+DATA/uat/'',''/backups/test/crosstest/stage/''');
dbms_output.put_line(';');
end;
/
spool off


tablespacereadonly.sql
set heading off
set feedback off
set linesize 500
spool tablespace_read_only.sql
prompt /* =================================== */
prompt /* Make all user tablespaces READ ONLY */
prompt /* =================================== */
select 'ALTER TABLESPACE ' || tablespace_name || ' READ ONLY;' from dba_tablespaces
where tablespace_name not in ('SYSTEM','SYSAUX','XDB')
and contents = 'PERMANENT';
spool off

tablespacereadwrite.sql
set heading off feedback off trimspool on linesize 500
spool tablespace_read_write.sql
prompt /* ==================================== */
prompt /* Make all user tablespaces READ WRITE */
prompt /* ==================================== */
select 'ALTER TABLESPACE ' || tablespace_name || ' READ WRITE;' from dba_tablespaces
where tablespace_name not in ('SYSTEM','SYSAUX','XDB')
and contents = 'PERMANENT';
spool off


transportts.sql
declare
checklist varchar2(4000);
i number := 0;
begin
for ts in
(select tablespace_name
from dba_tablespaces
where tablespace_name not in ('SYSTEM','SYSAUX','XDB')
and contents = 'PERMANENT')
loop
if (i=0) then
checklist := ts.tablespace_name;
else
checklist := checklist||','||ts.tablespace_name;
end if;
i := 1;
end loop;
dbms_tts.transport_set_check(checklist,TRUE,TRUE);
end;
/
select * from transport_set_violations;



convert_sun.sh
#!/bin/bash
###################################################################
# Script for RMAN CONVERT
# Prepared By:- Mr.Sandeep Magdum ( DBA)
# Date:- 24th october 2012
####################################################################
set -x
# Declare your ORACLE environment variables
ORACLE_SID=racdb1
export ORACLE_SID=racdb1
BKP_DATE=`date +"%Y-%m-%d:%T"`
#export ORACLE_BASE= /u01/app/oracle
export ORACLE_HOME=/oracle/app/oracle/product/11.2.0.2/db
export PATH=$ORACLE_HOME/bin:$PATH
# Start the rman commands
rman target=/ log=/backups/test/crosstest/convert_$ORACLE_SID.$BKP_DATE << EOF
@cts_convert_1.rman
@cts_convert_2.rman
@cts_convert_3.rman
@cts_convert_4.rman
@cts_convert_5.rman
@cts_convert_6.rman
@cts_convert_7.rman

EXIT;
EOF
#


cts_convert_1.rman
# Sample RMAN script to perform file conversion on all user tablespaces
# Tablespace names taken from DBA_TABLESPACES
# Please review and edit before using
CONVERT TABLESPACE
TOOLS,
USERS
TO PLATFORM 'Linux x86 64-bit'
PARALLELISM 4
DB_FILE_NAME_CONVERT '+DATA/racdb/','/temp/stage/'
;

Note:- Create separate cts_convert_1.rman, cts_convert_2.rman, ... files for the different sets of tablespaces.



rmancopy.sh
#!/bin/bash
###################################################################
# Script for RMAN COPY
# Prepared By:- Mr.Sandeep Magdum ( DBA)
# Date:- 24th october 2012
####################################################################
#set -x
# Declare your ORACLE environment variables
ORACLE_SID=racdb1
export ORACLE_SID=racdb1
COPY_DATE=`date +"%Y-%m-%d:%T"`
#export ORACLE_BASE= /u01/app/oracle
export ORACLE_HOME=/oracle/app/oracle/product/11.2.0.3/db
export PATH=$ORACLE_HOME/bin:$PATH
# Start the rman commands
rman target=/ log=/backup/crosstest/convert_$ORACLE_SID.$COPY_DATE << EOF
@copyrman.rman
EXIT;
EOF
#


copyrman.rman
CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '+DATA';
CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT '+DATA';
CONFIGURE CHANNEL 3 DEVICE TYPE DISK FORMAT '+DATA';
CONFIGURE CHANNEL 4 DEVICE TYPE DISK FORMAT '+DATA';
CONFIGURE CHANNEL 5 DEVICE TYPE DISK FORMAT '+DATA';
CONFIGURE CHANNEL 6 DEVICE TYPE DISK FORMAT '+DATA';
CONFIGURE CHANNEL 7 DEVICE TYPE DISK FORMAT '+DATA';
CONFIGURE CHANNEL 8 DEVICE TYPE DISK FORMAT '+DATA';
BACKUP AS COPY DATAFILE 5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125;


copyrmanswitch.rman
CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '+DATA';
CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT '+DATA';
CONFIGURE CHANNEL 3 DEVICE TYPE DISK FORMAT '+DATA';
CONFIGURE CHANNEL 4 DEVICE TYPE DISK FORMAT '+DATA';
CONFIGURE CHANNEL 5 DEVICE TYPE DISK FORMAT '+DATA';
CONFIGURE CHANNEL 6 DEVICE TYPE DISK FORMAT '+DATA';
CONFIGURE CHANNEL 7 DEVICE TYPE DISK FORMAT '+DATA';
CONFIGURE CHANNEL 8 DEVICE TYPE DISK FORMAT '+DATA';
switch datafile 5,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127 to copy;


ctsimp.par
USERID='/ as sysdba'
directory=ttsdir
dumpfile=cts_exp.dmp
logfile=ctsimp.log
transport_datafiles='/backup/stage/datafile/xxxx.341.785238781',
'/backup/stage/xxxx01.dbf',
'/backup/stage/xxxxxx02.dbf',
'/backup/stage/xxxxxxx03.dbf'

Note:- Add all converted datafiles in the stage path on target server which were copied from source.



datafileoffline.sql
alter database datafile '/backup/stage/xxxxxx.dbf' offline;
alter database datafile '/backup/stage/xxxxxx.dbf' offline;
alter database datafile '/backup/stage/xxxxxx2.dbf' offline;
alter database datafile '/backup/stage/xxxxx.dbf' offline;
alter database datafile '/backup/stage/xxxxxxx0.dbf' offline;
alter database datafile '/backup/stage/xxxxxxxx9.dbf' offline;

Note:- The above is a sample offline script; include all the files that were converted and plugged in from the staging area.


datafileonline.sql
alter database datafile '+DATA/backup/stage/xxxxxx.dbf' online;
alter database datafile '+DATA/datafile/xxxxxxx.xx.dbf' online;
alter database datafile '+DATA/datafile/xxxxxxx.xxxxxx2.dbf' online;
alter database datafile '+DATA/datafile/xxxxxxx.xxxxx.dbf' online;
alter database datafile '+DATA/datafile/xxxxxxx.xxxxx0.dbf' online;
alter database datafile '+DATA/datafile/xxxxxxx.xxxxxxx9.dbf' online;
Note:- The above is a sample online script; include all the files that were plugged into ASM.

Recoverdatafile.sql
alter database recover automatic datafile '+DATA/datafile/xxxxxxx.290.803273709';
alter database recover automatic datafile '+DATA/datafile/xxxxxx.2387.803290287';
alter database recover automatic datafile '+DATA/datafile/xxxxxxxx.2308.803302575';
alter database recover automatic datafile '+DATA/datafile/xxxxx.2388.803290415';

Note:- The above is a sample script; include all the files that were plugged into ASM and recover them.

Comments:

  1. Sandeep, this is just an extremely valuable post, I've made extensive use of it. Thank you!!!
    Consider one thing:
    In the section

    Target Database Tasks (Databases on Linux Platform)
    Somewhere after database creation, and prior to step 5 "plug the tablespaces"

    Add the instruction to set the following init parameter on the target database:
    alter system set deferred_segment_creation=TRUE scope=both sid='*';

    This will eliminate the ORA-39083/ORA-01647 errors that result from the fact that the newly attached tablespace(s) are still READ ONLY.

    Thanks again, your blog is a great resource.

    -dave
  2. Steps are very much detailed ... Better than oracle doc !!!

  3. Thanks Guys will update the post.