Part 1:
Adding a Grid node and Database home to a 12c RAC Cluster
The generic steps to follow when adding the new node to the cluster are listed below (a quick verification sketch follows the list):
- Install Operating System
- Install required software
- Add/modify users and groups required for the installation
- Configure network
- Configure kernel parameters
- Configure services required such as NTP
- Configure storage (multipathing, zoning, storage discovery, and ASMLib if used)
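Once the OS-level preparation is done, a few quick checks help confirm the node is ready before involving the clusterware tools. The following is a minimal sketch, assuming an Oracle Linux/RHEL-style node and the standard kernel parameters from the 12c installation guide; adjust names and values to your environment.

# Quick sanity checks on the new node (rac3) before running cluvfy
# Verify a few of the kernel parameters required by the 12c install guide
sysctl fs.aio-max-nr fs.file-max kernel.sem kernel.shmmax net.core.rmem_max

# Confirm the Oracle-supplied cvuqdisk package (needed by cluvfy) is installed
rpm -q cvuqdisk

# Confirm the grid and oracle users and their groups exist
id grid
id oracle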
Node 3 Configuration details
Below is the /etc/hosts entry after creating the third node:
[root@rac3 ~]# cat /etc/hosts
127.0.0.1       localhost.localdomain   localhost
# Public
192.168.56.71   rac1.localdomain        rac1
192.168.56.72   rac2.localdomain        rac2
192.168.56.73   rac3.localdomain        rac3
# Private
192.168.10.1    rac1-priv.localdomain   rac1-priv
192.168.10.2    rac2-priv.localdomain   rac2-priv
192.168.10.3    rac3-priv.localdomain   rac3-priv
# Virtual
192.168.56.81   rac1-vip.localdomain    rac1-vip
192.168.56.82   rac2-vip.localdomain    rac2-vip
192.168.56.83   rac3-vip.localdomain    rac3-vip
# SCAN
#192.168.56.91  rac-scan.localdomain    rac-scan
#192.168.56.92  rac-scan.localdomain    rac-scan
#192.168.56.93  rac-scan.localdomain    rac-scan
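Before going further, it is worth confirming that the existing nodes can resolve and reach the new public and private names. A minimal sketch, run from any existing node, using the host names configured above:

# Check that the new node's names resolve on this node (via /etc/hosts or DNS)
getent hosts rac3 rac3-priv rac3-vip

# Check basic reachability of the public and private interfaces of rac3
ping -c 2 rac3
ping -c 2 rac3-priv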
As we can see from olsnodes, only two nodes are part of the cluster so far:
[oracle@rac1 ~]$ olsnodes -n -i -t
rac1    1       rac1-vip        Unpinned
rac2    2       rac2-vip        Unpinned
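Both cluvfy and addnode.sh rely on passwordless SSH (user equivalence) between the existing nodes and rac3 for the grid and oracle users. A quick manual check, assuming equivalence was set up during node preparation:

# From an existing node, as the grid user and then as the oracle user,
# these should return the remote date without prompting for a password
ssh rac3 date
ssh rac3-priv date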
Also check the following (a quick check script is sketched after this list):
- /etc/sysconfig/selinux to ensure that SELinux is in the required state (permissive in my case)
- chkconfig iptables --list to ensure that the local firewall is either off or, in combination with iptables -L, uses the correct settings
- NTP configuration in /etc/sysconfig/ntpd must include the “-x” flag. If it’s not there, add it and restart NTP
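These checks can be scripted so they are easy to repeat on every node. A minimal sketch, assuming an EL5/EL6-style system running the iptables and ntpd services:

# SELinux mode (should match the existing nodes; permissive in this setup)
/usr/sbin/sestatus

# Local firewall: either disabled, or configured with the required rules
chkconfig iptables --list
iptables -L -n

# NTP must run with slewing enabled (-x) for the clusterware
grep -- '-x' /etc/sysconfig/ntpd
service ntpd status
ntpq -p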
Run cluster verify (cluvfy) to check that rac3 can be added as a node:
[grid@rac3 ~]$ $GRID_HOME/bin/cluvfy stage -pre nodeadd -n rac3 -fixup -fixupnoexec

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "rac3"

Checking user equivalence...
User equivalence check passed for user "grid"
Package existence check passed for "cvuqdisk"

Checking CRS integrity...
CRS integrity check passed
Clusterware version consistency passed.

Checking shared resources...
Checking CRS home location...
Location check passed for: "/u01/app/12.1.0.1/grid"
Shared resources check for node addition passed

Checking node connectivity...
Checking hosts config file...

<< output is truncated >>

NOTE: No fixable verification failures to fix

Pre-check for node addition was successful on all the nodes.
Run addNode.sh (from $GRID_HOME/oui/bin) to add the node:
[grid@rac3 ~]$ export IGNORE_PREADDNODE_CHECKS=Y
[grid@rac3 ~]$ cd $GRID_HOME/oui/bin
[grid@rac3 ~]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 1726 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 767 MB    Passed
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details.
          /u01/app/oraInventory/logs/addNodeActions2016-07-20_11-51-00PM.log
   ACTION: Identify the list of failed prerequisite checks from the log:
           /u01/app/oraInventory/logs/addNodeActions2016-07-20_11-51-00PM.log.
           Then either from the log file or from installation manual find the
           appropriate configuration to meet the prerequisites and fix it manually.

Prepare Configuration in progress.
Prepare Configuration successful.
..................................................   9% Done.

You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2016-07-20_11-51-00PM.log

Instantiate files in progress.
Instantiate files successful.
..................................................   15% Done.

Copying files to node in progress.
Copying files to node successful.
..................................................   79% Done.

Saving cluster inventory in progress.
..................................................   87% Done.
Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.1.0.1/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/12.1.0.1/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[rac3]
Execute /u01/app/12.1.0.1/grid/root.sh on the following nodes:
[rac3]

The scripts can be executed in parallel on all the nodes.

If there are any policy managed databases managed by cluster, proceed with the
addnode procedure without executing the root.sh script. Ensure that root.sh
script is executed after all the policy managed databases managed by
clusterware are extended to the new nodes.
..........
Update Inventory in progress.
..................................................   100% Done.

Update Inventory successful.
Successfully Setup Software.
[grid@rac3 addnode]$
Execute orainstRoot.sh on node 3 as root:
[root@rac3 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
Execute root.sh on node 3 as root:
[root@rac3 addnode]# /u01/app/12.1.0.1/grid/root.sh

<< output truncated >>

CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac2'
CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac3'
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2016/07/21 00:10:23 CLSRSC-343: Successfully started Oracle clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2016/07/21 00:11:03 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
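Once root.sh completes on rac3, the clusterware stack on the new node can be checked locally before looking at the cluster as a whole. For example:

# Run as root (or the grid user) on rac3: local stack health
/u01/app/12.1.0.1/grid/bin/crsctl check crs

# Confirm all three nodes are reported as active members of the cluster
/u01/app/12.1.0.1/grid/bin/olsnodes -n -s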
To verify that the new node has been added to the cluster:
[oracle@rac1 ~]$ olsnodes -n -i -t
rac1    1       rac1-vip        Unpinned
rac2    2       rac2-vip        Unpinned
rac3    3       rac3-vip        Unpinned
We can also check the clusterware status on all nodes as below:
[root@rac3 ~]# crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac3:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@rac3 ~]#
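As a final Grid Infrastructure check, cluvfy also has a post-node-add stage that validates the new node end to end. A minimal sketch, run as the grid user:

# Post-check for the node addition; add -verbose for the full report
$GRID_HOME/bin/cluvfy stage -post nodeadd -n rac3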
Part 2:
Adding a Database home to a 12c RAC Cluster
Now we are going to extend the database home to the third node.
From an existing node (rac1), as the database software owner, run the following commands to extend the Oracle Database software to the new node "rac3":
[oracle@rac1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1/
[oracle@rac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "rac1"

Checking user equivalence...
User equivalence check passed for user "oracle"

WARNING: Node "rac3" already appears to be part of cluster

Pre-check for node addition was successful.

Starting Oracle Universal Installer...

<< output truncated >>

Copying to remote nodes (Tuesday, December 24, 2016 2:22:40 PM IST)
...............................................................................................   96% Done.
Home copied to new nodes

Saving inventory on nodes (Tuesday, December 24, 2016 2:36:10 PM IST)
.                                                               100% Done.
Save inventory complete

WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node.
Each script in the list below is followed by a list of nodes.
/u01/app/oracle/product/12.1.0.1/dbhome_1/root.sh #On nodes rac3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /u01/app/oracle/product/12.1.0.1/dbhome_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
Now run root.sh on the rac3 node:
[root@rac3 ~]# /u01/app/oracle/product/12.1.0.1/dbhome_1/root.sh
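A quick way to confirm the database home really landed on rac3 is to look for the oracle binary and the central inventory entry on the new node. A minimal sketch, assuming the default central inventory location used earlier (/u01/app/oraInventory):

# On rac3: the oracle binary should exist in the copied home
ls -l /u01/app/oracle/product/12.1.0.1/dbhome_1/bin/oracle

# The central inventory should now list the database home with rac3 in its node list
grep -A3 'dbhome_1' /u01/app/oraInventory/ContentsXML/inventory.xml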
Post-installation steps
From a node with an existing instance of "orcl", issue the following commands to create the needed public redo log thread, undo tablespace, and "init.ora" entries for the new instance.
From the RAC1 node:
SQL> alter database add logfile thread 3
     group 5 ('+DATA') size 50M,
     group 6 ('+DATA') size 50M;

Database altered.

SQL> alter database enable public thread 3;

Database altered.

SQL> create undo tablespace undotbs3 datafile '+DATA' size 200M autoextend on;

Tablespace created.

SQL> alter system set undo_tablespace='undotbs3' scope=spfile sid='orcl3';

System altered.

SQL> alter system set instance_number=3 scope=spfile sid='orcl3';

System altered.

SQL> alter system set cluster_database_instances=3 scope=spfile sid='*';

System altered.
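Before registering the new instance, it can be reassuring to confirm that thread 3 and its undo tablespace exist as expected. A minimal sketch using a sqlplus here-document from any node with an open orcl instance (the ORACLE_SID shown is an assumption; use the local instance name on that node):

export ORACLE_SID=orcl1   # assumed local instance name on this node
sqlplus -s / as sysdba <<'EOF'
-- Redo threads: thread 3 should now be listed as public and enabled
select thread#, status, enabled from v$thread order by thread#;

-- Redo log groups created for thread 3
select group#, thread#, bytes/1024/1024 as mb from v$log where thread# = 3;

-- The new undo tablespace
select tablespace_name, status from dba_tablespaces
where tablespace_name = 'UNDOTBS3';
EOF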
Update Oracle Cluster Registry (OCR)
The OCR will be updated to account for the new instance, "orcl3", being added to the "orcl" cluster database. Add the "orcl3" instance to the "orcl" database and verify:
[oracle@rac3 bin]$ srvctl add instance -d orcl -i orcl3 -n rac3
[oracle@rac3 bin]$ srvctl status database -d orcl -v
Instance orcl1 is running on node rac1.
Instance orcl2 is running on node rac2.
Instance orcl3 is not running on node rac3.
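You can also confirm that the OCR now maps orcl3 to rac3 by looking at the database configuration:

# Show the configured instances, nodes and spfile for the orcl database
srvctl config database -d orcl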
Start the Instance
Now that all the prerequisites have been satisfied and the OCR has been updated, the "orcl3" instance can be started. Start the newly created instance, "orcl3", and verify:
[oracle@rac3 ~]$ srvctl start instance -d orcl -i orcl3
[oracle@rac1 ~]$ srvctl status database -d orcl -v
Instance orcl1 is running on node rac1. Instance status: Open.
Instance orcl2 is running on node rac2. Instance status: Open.
Instance orcl3 is running on node rac3. Instance status: Open.
[oracle@rac1 ~]$
SQL> select inst_id, instance_name, status,
     to_char(startup_time, 'DD-MON-YYYY HH24:MI:SS') as "START_TIME"
     from gv$instance order by inst_id;

   INST_ID INSTANCE_NAME    STATUS       START_TIME
---------- ---------------- ------------ -----------------------------
         1 ORCL1            OPEN         16-AUG-2016 03:27:08
         2 ORCL2            OPEN         16-AUG-2016 03:36:37
         3 ORCL3            OPEN         16-AUG-2016 03:36:00
With that, the database home has been extended successfully to the third node and the new orcl3 instance is up and running.