Manually converting an administrator-managed Oracle RAC One Node database to administrator-managed Oracle RAC with instance renaming
The goal of this post is to share the steps to convert Oracle RAC One Node to Oracle RAC while also renaming the instances.
What I mean by instance name changes:
Oracle RAC One Node instance names are generated automatically with the suffix _1; after a relocation to the other node/candidate server, the suffix _1 becomes _2.
( In the case of an EBS application, instance name changes can be critical because of many application specifics; in such a case the best approach is to stop and start the database, changing only the node name with the option (-n). )
Example:
Oracle RAC One Node | Oracle RAC
TESTPRD_1           | TESTPRD1
TESTPRD_2           | TESTPRD2
Environment information:
Oracle Grid Infrastructure 12c - 12.1.0.2.0
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0
Oracle Cluster with two nodes
node1
node2
The following scenarios are available to perform such a conversion:
Converting to Oracle RAC and Oracle RAC One Node Using DBCA
Converting Databases to Oracle RAC Using rconfig
Preparing to Convert with rconfig and Oracle Enterprise Manager
Converting to Oracle RAC and Oracle RAC One Node manually -- this is our case
The Oracle documentation does not give you the full step-by-step procedure, but it does describe all the mandatory steps for such a conversion:
https://docs.oracle.com/cd/E11882_01/install.112/e48195/cvrt2rac.htm#RIWIN1107
Prerequisites for Converting to Oracle RAC Databases:
Oracle Clusterware 11g release 2 (11.2) or 12c is installed, configured, and running.
Oracle RAC 11g release 2 (11.2) or 12c software is installed.
Example of how to do it (extending the Oracle database home to the other cluster node):
cd /u01/app/db/11.2.0/db2/bin
Performing pre-checks for node addition
Checking node reachability...
Checking user equivalence...
WARNING:
Pre-check for node addition was successful.
Checking swap space: must be greater than 500 MB.   Actual 20472 MB    Passed
Performing tests to see whether nodes node1 are available
The Cluster Node Addition of /u01/app/db/11.2.0/db2 was successful.
[root@node2 ]# /u01/app/db/11.2.0/db2/root.sh
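For reference, the output above comes from extending the existing Oracle database home to the other cluster node with addNode.sh; a minimal sketch of how that is typically invoked (the node names and paths are the ones from this example and are assumptions to adjust to your environment):

[oracle@node2 ~]$ cd /u01/app/db/11.2.0/db2/oui/bin
[oracle@node2 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={node1}"
# afterwards, as root, run root.sh from the extended home on the newly added node
[root@node1 ~]# /u01/app/db/11.2.0/db2/root.sh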
If you are converting from a non-RAC (single instance) Oracle home to RAC, you need to make the following change to the Oracle home software.
You can check what is the current situation by using following methods:
cd $ORACLE_HOME/rdbms/lib/
grep rac_on ins_rdbms.mk
If you have rac_off, the RAC feature is not enabled.
If you have rac_on, the RAC feature is enabled.
( ---- steps to enable:
cd $ORACLE_HOME/rdbms/lib/
make -f ins_rdbms.mk rac_on
make -f ins_rdbms.mk ioracle
grep rac_on ins_rdbms.mk
parropt: rac_on
rac_on: $(KNLOPT_LOCAL) $(RDBMSLIB)$(RAC_ON) $(SKGXP_RAC_ON) $(SKGXN_RAC_ON)
ops_on: rac_on   --- this is the important switch )
Shared storage, either Oracle Cluster File System or Oracle ASM, is available and accessible from all nodes.
User equivalence exists for the oracle account, or the user account used to install the Oracle software.
If you intend to use Oracle Enterprise Manager, then the Oracle Management Agent on each node is configured and running, and is configured with cluster and host information.
You must have backed up your existing database.
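For the last prerequisite, a minimal RMAN sketch for a full backup (assuming the database is in archivelog mode and a backup destination is already configured; adapt it to your own backup strategy):

[oracle@node1 ~]$ rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;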
Checking Database Configuration
[oracle@node1 ~]$ srvctl config database -d TESTPRD
Database unique name: TESTPRD
Database name: TESTPRD
Oracle home: /u01/app/db/11.2.0/db2
Oracle user: oracle
Spfile: +REDO1/TESTPRD/spfileTESTPRD.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: TESTPRD
Database instances:
Disk Groups: REDO1,REDO2
Mount point paths:
Services: TESTPRDSVC
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: TESTPRD
Candidate servers: node1,node2
Database is administrator managed
[oracle@node1 ~]$
Change Database Configuration from Type: RACOneNode to Type: RAC
srvctl convert database -d TESTPRD -c RAC
Normally, in Oracle RAC One Node you should already have two UNDO tablespaces in order to be able to perform an instance relocation!
If you find only one UNDO tablespace, you must add an additional one and change the instance configuration by allocating a dedicated UNDO tablespace per instance.
Example:
ALTER SYSTEM SET undo_tablespace='UNDOTBS1' SCOPE=SPFILE SID='TESTPRD1';
ALTER SYSTEM SET undo_tablespace='UNDOTBS2' SCOPE=SPFILE SID='TESTPRD2';
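If the second UNDO tablespace does not exist yet, it has to be created first; a minimal sketch, assuming a hypothetical disk group +DATA1 and a size that fits your workload:

SQL> CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE '+DATA1' SIZE 2G AUTOEXTEND ON;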
Create a copy of the spfile for backup purposes.
Example:
Create pfile='/tmp/backup_of_spfile.ora' from spfile;
Prior to adding the new instance, you must set the database parameters related to the instance numbers.
Example:
ALTER SYSTEM SET instance_number=1 SCOPE=SPFILE SID='TESTPRD1';
ALTER SYSTEM SET instance_number=2 SCOPE=SPFILE SID='TESTPRD2';
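To double-check that the per-instance entries really landed in the spfile, you can query v$spparameter (a quick optional check, not part of the original procedure):

SQL> SELECT sid, name, value FROM v$spparameter
     WHERE name IN ('instance_number','undo_tablespace') ORDER BY name, sid;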
Add new instance
srvctl add instance -d TESTPRD -i TESTPRD1 -n node2
Checking Database Configuration
[oracle@node1 ~]$ srvctl config database -d TESTPRD
Database unique name: TESTPRD
Database name: TESTPRD
Oracle home: /u01/app/db/11.2.0/db2
Oracle user: oracle
Spfile: +REDO1/TESTPRD/spfileTESTPRD.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: TESTPRD
Database instances: TESTPRD_2,TESTPRD1
Disk Groups: REDO1,REDO2
Mount point paths:
Services: TESTPRDSVC
Type: RAC
Database is administrator managed
[oracle@node1 ~]$
As you can see, we now have two different instances, but the name of one of them does not follow the naming standard for full RAC.
TESTPRD_2 should be changed to TESTPRD2.
Checking redo log groups:
( in our case we have two threads with two groups per thread )
SQL> set linesize 1000
SQL> select * from gv$log;

   INST_ID     GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME
Start new instance:
Example:
srvctl start instance -d TESTPRD -i TESTPRD1
Checking the number of redo log groups ( as you can see, the number of groups per thread was changed automatically immediately after the instance startup )
SQL> select * from gv$log;

   INST_ID     GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME

8 rows selected.

SQL>
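To summarize the groups per thread without reading the whole gv$log listing, a short count like this can be used (a sketch; it simply groups v$log by thread):

SQL> SELECT thread#, COUNT(*) AS log_groups FROM v$log GROUP BY thread# ORDER BY thread#;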
Checking the active instances from gv$instance
SQL> select INSTANCE_NAME,HOST_NAME from gv$instance;

INSTANCE_NAME    HOST_NAME

SQL>
Now we have to change the service TESTPRDSVC to point to both instances, TESTPRD_2 and TESTPRD1. By changing the service configuration we allow new connections to be allocated on the newly added instance, so the instance with the wrong name can later be removed with less impact on the application.
Checking database status:
[oracle@node1 dbs]$ srvctl status database -d TESTPRD
Instance TESTPRD_2 is running on node node1
Instance TESTPRD1 is running on node node2
[oracle@node1 dbs]$
Checking service configuration:
[oracle@node1 dbs]$ srvctl config service -d TESTPRD -s TESTPRDSVC
Service name: TESTPRDSVC
Service is enabled
Server pool: TESTPRD_TESTPRDSVC
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: TESTPRD_2
Available instances:
[oracle@node1 dbs]$
As we can see, the service points to only one of the instances, TESTPRD_2.
Modify the service configuration and add the new instance name as a preferred instance.
[oracle@node1 dbs]$ srvctl modify service -d TESTPRD -s TESTPRDSVC -n -i TESTPRD1,TESTPRD_2
Checking service configuration:
[oracle@node1 dbs]$ srvctl config service -d TESTPRD -s TESTPRDSVC
Checking service status:
[oracle@node1 dbs]$ srvctl status service -d TESTPRD -s TESTPRDSVC
Service TESTPRDSVC is running on instance(s) TESTPRD_2
[oracle@node1 dbs]$
Service TESTPRDSVC is running on only one of the instances, TESTPRD_2.
Start Service TESTPRDSVC on instance TESTPRD1
[oracle@node1 dbs]$ srvctl start service -d TESTPRD -s TESTPRDSVC
Checking service status:
[oracle@node1 dbs]$ srvctl status service -d TESTPRD -s TESTPRDSVC
Service TESTPRDSVC is running on instance(s) TESTPRD_2,TESTPRD1
Checking the information from the cluster point of view
[grid@node1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA1.dg
ONLINE ONLINE node1 STABLE
ONLINE ONLINE node2 STABLE
----- cut ----
ora.ons
ONLINE ONLINE node1 STABLE
ONLINE ONLINE node2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE node1 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE node1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE node1 STABLE
----- cut ----
ora.testprd.db
1 ONLINE ONLINE node2 Open,STABLE
2 ONLINE ONLINE node1 Open,STABLE
ora.testprd.testprdsvc.svc
1 ONLINE ONLINE node1 STABLE
2 ONLINE ONLINE node2 STABLE
--------------------------------------------------------------------------------
[grid@node1 ~]$
As you can see, both instances TESTPRD_2 and TESTPRD1 are now running, one on each of the two nodes, and the service testprdsvc is available on both instances to spread the workload.
With the new instance in place and the service already modified, we can simply go to the next step, which is to change the instance name TESTPRD_2 to TESTPRD2.
Modify the service TESTPRDSVC so that only TESTPRD1 remains a preferred instance.
srvctl modify service -d TESTPRD -s TESTPRDSVC -n -i TESTPRD1
Checking service configuration:
[oracle@node1 dbs]$ srvctl config service -d TESTPRD -s TESTPRDSVC
Service name: TESTPRDSVC
Service is enabled
Server pool: TESTPRD_TESTPRDSVC
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: TESTPRD1
Available instances:
[oracle@node1 dbs]$
Checking service status:
[oracle@node1 dbs]$ srvctl status service -d TESTPRD -s TESTPRDSVC
Service TESTPRDSVC is running on instance(s) TESTPRD1
Checking the information from the cluster point of view
[grid@node1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA1.dg
ONLINE ONLINE node1 STABLE
ONLINE ONLINE node2 STABLE
---- cut ---
ora.ons
ONLINE ONLINE node1 STABLE
ONLINE ONLINE node2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE node1 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE node1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE node1 STABLE
---- cut ---
ora.testprd.db
1 ONLINE ONLINE node2 Open,STABLE
2 ONLINE ONLINE node1 Open,STABLE
ora.testprd.testprdsvc.svc
2 ONLINE ONLINE node2 STABLE
--------------------------------------------------------------------------------
After a short period of time, the disconnected application sessions will become visible as connections on the new instance TESTPRD1.
Check gv$session using the instance number to identify which sessions are allocated on which instance.
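A simple way to do that check is to count the user sessions per instance in gv$session (a sketch; filter further by username or service name if needed):

SQL> SELECT inst_id, COUNT(*) AS user_sessions
     FROM gv$session WHERE type = 'USER'
     GROUP BY inst_id ORDER BY inst_id;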
Checking database configuration:
[oracle@node1 dbs]$ srvctl config database -d TESTPRD
Database unique name: TESTPRD
Database name: TESTPRD
Oracle home: /u01/app/db/11.2.0/db2
Oracle user: oracle
Spfile: +REDO1/TESTPRD/spfileTESTPRD.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: TESTPRD
Database instances: TESTPRD_2,TESTPRD1
Disk Groups: REDO1,REDO2
Mount point paths:
Services: TESTPRDSVC
Type: RAC
Database is administrator managed
[oracle@node1 dbs]$
Now the next step is to stop the instance with the wrong name, TESTPRD_2.
Stop instance TESTPRD_2
srvctl stop instance -d TESTPRD -i TESTPRD_2
Checking status:
[oracle@node1 dbs]$ srvctl status database -d TESTPRD
Instance TESTPRD_2 is not running on node node1
Instance TESTPRD1 is running on node node2
[oracle@node1 dbs]$
Now we should remove the instance TESTPRD_2.
srvctl remove instance -d TESTPRD -i TESTPRD_2
[oracle@node1 dbs]$ srvctl remove instance -d TESTPRD -i TESTPRD_2
[oracle@node1 dbs]$ srvctl config database -d TESTPRD
Now we should add the instance with the correct name.
srvctl add instance -d TESTPRD -i TESTPRD2 -n node1
Adding new instance
[oracle@node1 dbs]$ srvctl add instance -d TESTPRD -i TESTPRD2 -n node1
Checking configuration
[oracle@node1 dbs]$ srvctl config database -d TESTPRD
Database unique name: TESTPRD
Database name: TESTPRD
Oracle home: /u01/app/db/11.2.0/db2
Oracle user: oracle
Spfile: +REDO1/TESTPRD/spfileTESTPRD.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: TESTPRD
Database instances: TESTPRD2,TESTPRD1
Disk Groups: REDO1,REDO2
Mount point paths:
Services: TESTPRDSVC
Type: RAC
Database is administrator managed
[oracle@node1 dbs]$
Great! Now we have all instances with the correct names, TESTPRD1/TESTPRD2.
So, let’s start instance TESTPRD2
Start instance
[oracle@node1 dbs]$ srvctl start instance -d TESTPRD -i TESTPRD2
Checking status
[oracle@node1 dbs]$ srvctl status database -d TESTPRD
With the newly added instance running, we should modify the service so that it becomes available on both instances and spreads the workload.
Checking status of the service TESTPRDSVC
[oracle@node1 dbs]$ srvctl status service -d TESTPRD -s TESTPRDSVC
Service TESTPRDSVC is running on instance(s) TESTPRD1
Modify the service by adding TESTPRD2 as a preferred instance
[oracle@node1 dbs]$ srvctl modify service -d TESTPRD -s TESTPRDSVC -n -i TESTPRD1,TESTPRD2
Checking configuration after modification
[oracle@node1 dbs]$ srvctl config service -d TESTPRD -s TESTPRDSVC
Service name: TESTPRDSVC
Service is enabled
Server pool: TESTPRD_TESTPRDSVC
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: TESTPRD1,TESTPRD2
Available instances:
[oracle@node1 dbs]$
Now the modified service should be started on instance TESTPRD2
[oracle@node1 dbs]$ srvctl start service -d TESTPRD -s TESTPRDSVC
Checking status of the service TESTPRDSVC
[oracle@node1 dbs]$ srvctl status service -d TESTPRD -s TESTPRDSVC
Service TESTPRDSVC is running on instance(s) TESTPRD2,TESTPRD1
[oracle@node1 dbs]$
Now the service is available on both instances, and the workload will be spread across the cluster nodes.
The only interruption will be visible while stopping the instance TESTPRD_2, but you can perform such activities outside of OLTP hours in order to avoid user impact. You should consider starting the newly renamed instance as soon as possible in order to offload the workload from instance TESTPRD1 and avoid capacity issues from a maximum-processes perspective.
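To keep an eye on the processes/sessions headroom of the surviving instance while the other one is down, gv$resource_limit can be checked (a sketch; the limits come from your instance parameters):

SQL> SELECT inst_id, resource_name, current_utilization, max_utilization, limit_value
     FROM gv$resource_limit
     WHERE resource_name IN ('processes','sessions');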
This is one example of how to perform a conversion from Oracle RAC One Node (active/passive) to Oracle RAC (active/active).
Don't forget to modify the /etc/oratab information.
Example
TESTPRD:/u01/app/db/11.2.0/db2:N      # line added by Agent
TESTPRD1:/u01/app/db/11.2.0/db2:N     # line added by Agent  --- add new instance name
TESTPRD2:/u01/app/db/11.2.0/db2:N     # line added by Agent  --- add new instance name
TESTPRD_1:/u01/app/db/11.2.0/db2:N    # line added by Agent  -- remove old instance name
TESTPRD_2:/u01/app/db/11.2.0/db2:N    # line added by Agent  -- remove old instance name