10/ Extend GI
10.1/ Check what is running
Before extending our GI, it is good practice to check what is currently running using the rac-status.sh script.
[oracle@exadb01] ./rac-status.sh

                Cluster exa02 is a X5-2 Elastic Rack HC 8TB

    Listener      |     Port     |   db01   |   db02   |   db03   |   db04   |     Type     |
-------------------------------------------------------------------------------------------------------------------
    LISTENER      |   TCP:1521   |  Online  |  Online  |  Online  |  Online  |   Listener   |
    LISTENER_SCAN1|   TCP:1521   |    -     |    -     |  Online  |    -     |     SCAN     |
    LISTENER_SCAN2|   TCP:1521   |    -     |    -     |    -     |  Online  |     SCAN     |
    LISTENER_SCAN3|   TCP:1521   |  Online  |    -     |    -     |    -     |     SCAN     |
-------------------------------------------------------------------------------------------------------------------
       DB         |    Version   |   db01   |   db02   |   db03   |   db04   |    DB Type   |
-------------------------------------------------------------------------------------------------------------------
    prod01        | 12.1.0.2 (1) |   Open   |   Open   |   Open   |   Open   |   RAC (P)    |
    prod02        | 12.1.0.2 (1) |    -     |    -     |   Open   |   Open   |   RAC (P)    |
    prod03        | 12.1.0.2 (1) |   Open   |   Open   |   Open   |   Open   |   RAC (P)    |
-------------------------------------------------------------------------------------------------------------------

        ORACLE_HOME references listed in the Version column :

        1 : /u01/app/oracle/product/12.1.0.2/dbhome_1

[oracle@exadb01]
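If you do not have rac-status.sh at hand, a rough equivalent can be obtained with the standard clusterware tools; a minimal sketch (adapt the database name to your environment):

. oraenv <<< +ASM1                    # set the GI environment
crsctl check cluster -all             # CRS stack status on every node
crsctl stat res -t                    # all clusterware resources, including the listeners
srvctl status database -d prod01      # instance status of one of the databases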
10.2/ Source home pre-requisites
On a node of the cluster (say node1), we have to verify that the installed home is ready to be cloned to the new node.
[oracle@exadb01] . oraenv <<< +ASM1
ORACLE_SID = [xxx] ? The Oracle base remains unchanged with value /u01/app/oracle
[oracle@exadb01] cd $ORACLE_HOME/bin
[oracle@exadb01] ./cluvfy stage -pre crsinst -n newnode -verbose
. . .
Pre-check for cluster services setup was successful.

CVU operation performed:      stage -pre crsinst
Date:                         Apr 28, 2019 7:39:03 PM
CVU home:                     /u01/app/12.2.0.1/grid/
User:                         oracle
[oracle@exadb01]
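If some pre-requisites fail, cluvfy can generate fixup scripts for the ones it knows how to correct; a possible invocation (same stage, with the -fixup option) would look like this:

./cluvfy stage -pre crsinst -n newnode -fixup -verbose
# then run the generated fixup script as root on the node(s) reported by cluvfy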
10.3/ Node addition pre-requisites
Now let's run the node addition pre-requisites.
[oracle@exadb01]$ ./cluvfy stage -pre nodeadd -n newnode -verbose
. . .
Pre-check for node addition was successful.

CVU operation performed:      stage -pre nodeadd
Date:                         Apr 28, 2019 7:44:47 PM
CVU home:                     /u01/app/12.2.0.1/grid/
User:                         oracle
[oracle@exadb01]$
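As this output is quite verbose, it can be handy to keep it in a file and quickly scan it for failures; a simple sketch (the /tmp path is arbitrary):

./cluvfy stage -pre nodeadd -n newnode -verbose | tee /tmp/cluvfy_nodeadd.log
grep -iE "failed|error" /tmp/cluvfy_nodeadd.log     # ideally returns nothing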
10.4/ Fix bug 26200970
As described in "CLSRSC-670 Error from root.sh When Adding a Node" (Doc ID 2353529.1), bug 26200970 impacts GI versions below 18c, so we have to manually set ASM_UPGRADE=false in the crsconfig_params file to work around it.
[oracle@exadb01]$ grep ASM_UPGRADE /u01/app/12.2.0.1/grid/crs/install/crsconfig_params
#    srisanka    04/14/08 - ASM_UPGRADE param
ASM_UPGRADE=true
[oracle@exadb01]$ vi /u01/app/12.2.0.1/grid/crs/install/crsconfig_params
[oracle@exadb01]$ grep ASM_UPGRADE /u01/app/12.2.0.1/grid/crs/install/crsconfig_params
#    srisanka    04/14/08 - ASM_UPGRADE param
ASM_UPGRADE=false
[oracle@exadb01]$
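If you prefer not to edit the file interactively, a sed one-liner achieves the same change; a small sketch (the .bak suffix keeps a backup of the original file):

sed -i.bak 's/^ASM_UPGRADE=.*/ASM_UPGRADE=false/' /u01/app/12.2.0.1/grid/crs/install/crsconfig_params
grep ^ASM_UPGRADE /u01/app/12.2.0.1/grid/crs/install/crsconfig_params     # verify the change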
10.5/ Add the node
We can now safely add the node to the cluster.
[oracle@exadb01]$ cd /u01/app/12.2.0.1/grid/addnode
[oracle@exadb01]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={newnode}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={newnode-vip}" "CLUSTER_NEW_NODE_ROLES={HUB}"
Prepare Configuration in progress.

Prepare Configuration successful.
..................................................   7% Done.

Copy Files to Remote Nodes in progress.
..................................................   12% Done.
..................................................   17% Done.
..............................
Copy Files to Remote Nodes successful.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2019-04-28_07-52-05-PM.log

Instantiate files in progress.

Instantiate files successful.
..................................................   49% Done.

Saving cluster inventory in progress.
..................................................   83% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.2.0.1/grid was successful.
Please check '/u01/app/12.2.0.1/grid/inventory/silentInstall2019-04-28_7-52-04-PM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   90% Done.

Update Inventory in progress.

Update Inventory successful.
..................................................   97% Done.

As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/12.2.0.1/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[newnode]
Execute /u01/app/12.2.0.1/grid/root.sh on the following nodes:
[newnode]

The scripts can be executed in parallel on all the nodes.

..................................................   100% Done.

Successfully Setup Software.
[oracle@exadb01]$
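Before running the root scripts, it does not hurt to verify that the grid home has indeed been copied to the new node; a quick sketch (assuming passwordless ssh as oracle, which RAC requires anyway):

ssh newnode "ls -ld /u01/app/12.2.0.1/grid"     # the copied grid home should exist
ssh newnode "df -h /u01"                        # and there should be space left on /u01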
10.6/ root.sh
Connect to newnode and execute the root scripts (orainstRoot.sh and root.sh).
[root@node1]# ssh newnode
[root@newnode]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@newnode]# /u01/app/12.2.0.1/grid/root.sh
Check /u01/app/12.2.0.1/grid/install/root_newnode.domain.com_2019-04-28_20-08-17-303344744.log for the output of root script
[root@newnode]#

Check the contents of the root.sh logfile to make sure everything has been successfully executed; you should see something like this if everything is OK:
Operation successful.
2019/04/28 20:14:11 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2019/04/28 20:14:23 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
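A quick grep on the root.sh logfile is enough to confirm that the configuration succeeded; the exact file name contains a timestamp, so adapt it to the one printed by root.sh:

grep "CLSRSC-325" /u01/app/12.2.0.1/grid/install/root_newnode.domain.com_*.log       # success message
grep -iE "error|fail" /u01/app/12.2.0.1/grid/install/root_newnode.domain.com_*.log   # ideally returns nothing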
10.7/ rac-status.sh
As one says, a picture is worth a thousand words, so let's check how our cluster now looks:
[oracle@exadb01] ./rac-status.sh

                Cluster exa02 is a X5-2 Elastic Rack HC 8TB

    Listener      |     Port     |   db01   |   db02   |   db03   |   db04   |  newnode  |     Type     |
----------------------------------------------------------------------------------------------------------------
    LISTENER      |   TCP:1521   |  Online  |  Online  |  Online  |  Online  |  Online   |   Listener   |
    LISTENER_SCAN1|   TCP:1521   |    -     |    -     |  Online  |    -     |    -      |     SCAN     |
    LISTENER_SCAN2|   TCP:1521   |    -     |    -     |    -     |  Online  |    -      |     SCAN     |
    LISTENER_SCAN3|   TCP:1521   |  Online  |    -     |    -     |    -     |    -      |     SCAN     |
----------------------------------------------------------------------------------------------------------------
       DB         |    Version   |   db01   |   db02   |   db03   |   db04   |  newnode  |    DB Type   |
----------------------------------------------------------------------------------------------------------------
    prod01        | 12.1.0.2 (1) |   Open   |   Open   |   Open   |   Open   |    -      |   RAC (P)    |
    prod02        | 12.1.0.2 (1) |    -     |    -     |   Open   |   Open   |    -      |   RAC (P)    |
    prod03        | 12.1.0.2 (1) |   Open   |   Open   |   Open   |   Open   |    -      |   RAC (P)    |
----------------------------------------------------------------------------------------------------------------

        ORACLE_HOME references listed in the Version column :

        1 : /u01/app/oracle/product/12.1.0.2/dbhome_1

[oracle@exadb01]

Your newnode is here, congratulations!
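You can also double check from the clusterware command line that the new node is an active member of the cluster, for instance:

olsnodes -n -s -t              # newnode should show up as Active and Unpinned
crsctl check cluster -all      # CRS, CSS and EVM should be online on every node, including newnode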
10.8/ Update the GI Inventory
A last step to finish properly is to add newnode to the GI inventory on all the other nodes.
[oracle@node1]$ cd /u01/app/12.2.0.1/grid/oui/bin
[oracle@node1]$ ./runInstaller -updatenodelist -ignoreSysPrereqs ORACLE_HOME=/u01/app/12.2.0.1/grid "CLUSTER_NODES={node1,node2,node3,node4,newnode}" LOCAL_NODE=node1 CRS=TRUE
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 24318 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@node1]$

Note that this has to be executed on every node of your cluster; remember that you need to adapt the LOCAL_NODE parameter on each node.
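A small loop can save some typing here; this is only a sketch, it assumes passwordless ssh between the nodes as oracle and you have to adapt the node list to your cluster (note that LOCAL_NODE is set to each target node, as required):

for node in node1 node2 node3 node4; do
  ssh ${node} "cd /u01/app/12.2.0.1/grid/oui/bin && \
    ./runInstaller -updatenodelist -ignoreSysPrereqs ORACLE_HOME=/u01/app/12.2.0.1/grid \
    \"CLUSTER_NODES={node1,node2,node3,node4,newnode}\" LOCAL_NODE=${node} CRS=TRUE"
done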
All is now successfully completed !