9/ OS configuration
9.1/ SSH keys
First of all, we have to create the SSH keys on the new node and set up passwordless connectivity in both directions: the new node must be able to reach every other node, and every other node must be able to reach the new node, all without a password.
[root@newnode ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
. . .
[root@newnode ~]#
If you are doing this kind of maintenance, deploying SSH keys across a cluster holds no secrets for you, so I won't go into the details here.
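For the record, it boils down to pushing the new node's public key to every existing node, then pushing each existing node's key back. A minimal sketch, assuming the other nodes are named node1 through node3 (adjust the list to your environment):
[root@newnode ~]# for host in node1 node2 node3; do
>   ssh-copy-id root@${host}                 # newnode -> existing nodes
> done
[root@node1 ~]# ssh-copy-id root@newnode     # existing nodes -> newnode; repeat on every node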
9.2/ ReclaimSpace
Just as when you install a new Exadata, you need to execute the disk reclaim script.
[root@newnode ~]# cd /opt/oracle.SupportTools/
[root@newnode oracle.SupportTools]# ./reclaimdisks.sh -free -reclaim
Model is ORACLE SERVER X7-2
Number of LSI controllers: 1
Physical disks found: 4 (252:0 252:1 252:2 252:3)
Logical drives found: 1
Linux logical drive: 0
RAID Level for the Linux logical drive: 5
Physical disks in the Linux logical drive: 4 (252:0 252:1 252:2 252:3)
Dedicated Hot Spares for the Linux logical drive: 0
Global Hot Spares: 0
[INFO ] Check for Linux with inactive DOM0 system disk
[INFO ] Valid Linux with inactive DOM0 system disk is detected
[INFO ] Number of partitions on the system device /dev/sda: 3
[INFO ] Higher partition number on the system device /dev/sda: 3
[INFO ] Last sector on the system device /dev/sda: 3509760000
[INFO ] End sector of the last partition on the system device /dev/sda: 3509759966
[INFO ] Remove inactive system logical volume /dev/VGExaDb/LVDbSys3
[INFO ] Remove xen files from /boot
[INFO ] Remove ocfs2 logical volume /dev/VGExaDb/LVDbExaVMImages
[root@newnode oracle.SupportTools]#
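If you want to double-check the resulting disk layout once the reclaim is done, the script also has a check mode (output not shown here):
[root@newnode oracle.SupportTools]# ./reclaimdisks.sh -check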
9.3/ Huge Pages
As you probably use HugePages on your other nodes, you will want to set the same configuration on this new node. Also, double-check that Transparent HugePages are disabled.
[root@newnode]# grep -i Huge /proc/meminfo
AnonHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
[root@newnode]# ssh node1 grep -i Huge /proc/meminfo
AnonHugePages:         0 kB
HugePages_Total:  200000
HugePages_Free:   162702
HugePages_Rsvd:       79
HugePages_Surp:        0
Hugepagesize:       2048 kB
[root@newnode]# echo "vm.nr_hugepages = 200000" >> /etc/sysctl.conf
[root@newnode]# sysctl -p
. . .
[root@newnode]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
[root@newnode]#
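Should Transparent HugePages not already show [never], you can disable them at runtime and make the change persistent through a kernel parameter. A sketch, assuming a grub2-based image (check how your image manages kernel parameters before applying):
[root@newnode]# echo never > /sys/kernel/mm/transparent_hugepage/enabled   # runtime only, lost at reboot
[root@newnode]# vi /etc/default/grub      # add transparent_hugepage=never to GRUB_CMDLINE_LINUX
[root@newnode]# grub2-mkconfig -o /boot/grub2/grub.cfg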
9.4/ limits.conf
Copy the same limits from another node to the new node.
[root@newnode]# grep -v ^# /etc/security/limits.conf
* hard maxlogins 1000
* soft stack 10240
* hard core 0
[root@newnode]# ssh node1 grep -v ^# /etc/security/limits.conf
* hard maxlogins 1000
* hard core 0
* soft stack 10240
oracle soft core unlimited
oracle hard core unlimited
oracle soft nproc 131072
oracle hard nproc 131072
oracle soft nofile 65536
oracle hard nofile 65536
oracle soft memlock 475963920
oracle hard memlock 475963920
[root@newnode]# scp root@node1:/etc/security/limits.conf /etc/security/limits.conf
limits.conf                               100% 2447     2.4KB/s   00:00
[root@newnode]#
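A quick sanity check that both files now match:
[root@newnode]# diff <(ssh node1 cat /etc/security/limits.conf) /etc/security/limits.conf && echo "limits.conf in sync"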
9.5/ /etc/hosts
On every existing node you have to add the new node's information, and on the new node you have to add the other nodes' information. Start by adding the new node's private IPs to the other nodes' /etc/hosts:
192.168.66.10   newnode-priv1.domain.com   newnode-priv1
192.168.66.11   newnode-priv2.domain.com   newnode-priv2
And fill the new node's /etc/hosts with its own entries plus the other nodes' information:
192.168.66.10   newnode-priv1.domain.com   newnode-priv1
192.168.66.11   newnode-priv2.domain.com   newnode-priv2
10.11.12.13     newnode.domain.com         newnode
10.22.33.44     newnodevip.domain.com      newnodevip
. . . other nodes info . . .
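A quick way to verify that the new entries resolve on the new node (a sketch; extend the list with your own hostnames):
[root@newnode]# for h in newnode-priv1 newnode-priv2 newnode newnodevip; do
>   getent hosts ${h} || echo "${h} NOT resolved"
> done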
9.6/ cellinit.ora
The cellinit.ora file has to contain the private IP addresses of the new node.
[root@newnode]# ibhosts | grep newnode
Ca : 0x506b4b030071c9f0 ports 2 "newnode S 192.168.66.10,192.168.66.11 HCA-1"
[root@newnode]# vi /etc/oracle/cell/network-config/cellinit.ora
[root@newnode]# cat /etc/oracle/cell/network-config/cellinit.ora
ipaddress1=192.168.66.10/22
ipaddress2=192.168.66.11/22
[root@newnode]#
9.7/ cellip.ora
The cellip.ora file contains the private IPs of the cells. This file is identical on all the nodes of a cluster, so you can simply copy it from another node.
[root@newnode]# ssh node1 cat /etc/oracle/cell/network-config/cellip.ora
cell="192.168.66.50;192.168.66.51"
cell="192.168.66.52;192.168.66.53"
cell="192.168.66.54;192.168.66.55"
cell="192.168.66.56;192.168.66.57"
cell="192.168.66.58;192.168.66.59"
[root@newnode]# scp root@node1:/etc/oracle/cell/network-config/cellip.ora /etc/oracle/cell/network-config/cellip.ora
cellip.ora                                100%  175     0.2KB/s   00:00
[root@newnode]# cat /etc/oracle/cell/network-config/cellip.ora
cell="192.168.66.50;192.168.66.51"
cell="192.168.66.52;192.168.66.53"
cell="192.168.66.54;192.168.66.55"
cell="192.168.66.56;192.168.66.57"
cell="192.168.66.58;192.168.66.59"
[root@newnode]#
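With the file in place, a small loop can verify that every cell IP it lists answers on the private network (a sketch, using only the file contents shown above):
[root@newnode]# grep -o '[0-9.]\+' /etc/oracle/cell/network-config/cellip.ora | while read ip; do
>   ping -c 1 -W 1 ${ip} > /dev/null && echo "${ip} OK" || echo "${ip} UNREACHABLE"
> done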
9.8/ cellroute.ora
This file is also identical across all nodes, so you can copy it from another node.
[root@newnode]# ls -ltr /etc/oracle/cell/network-config/cellroute.ora
ls: cannot access /etc/oracle/cell/network-config/cellroute.ora: No such file or directory
[root@newnode]# scp root@node1:/etc/oracle/cell/network-config/cellroute.ora /etc/oracle/cell/network-config/cellroute.ora
cellroute.ora
[root@newnode]#
9.9/ Privileges
Verify the owner:group of the files below and, if necessary, update them so they match the other nodes.
[root@newnode]# chown -R oracle:dbmusers /etc/oracle/cell/network-config/
[root@newnode]# chown oracle:dbmusers /etc/oracle/cell/network-config/cellinit.ora
[root@newnode]# chown oracle:oinstall /etc/oracle/cell/network-config/cellip.ora
[root@newnode]# chown oracle:oinstall /etc/oracle/cell/network-config/cellroute.ora
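You can then compare the result against a healthy node:
[root@newnode]# ls -l /etc/oracle/cell/network-config/
[root@newnode]# ssh node1 ls -l /etc/oracle/cell/network-config/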
9.10/ NTP and DNS
Verify the NTP and DNS configuration and options, and update them if needed.
[root@newnode]# ssh node1 grep -v ^# /etc/ntp.conf | grep -v ^$
[root@newnode]# ssh node1 grep -v ^# /etc/resolv.conf | grep -v ^$
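If the configurations differ, the simplest fix is usually to copy them from a healthy node and restart the time service. A sketch, assuming your image runs ntpd (adapt if it uses chrony):
[root@newnode]# scp root@node1:/etc/ntp.conf /etc/ntp.conf
[root@newnode]# scp root@node1:/etc/resolv.conf /etc/resolv.conf
[root@newnode]# service ntpd restart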
9.11/ Create groups
Create the same groups for the oracle user as on the other nodes. Note that here I had to create the asmadmin and asmdba groups, as they are a prerequisite for the GI extension we will see in part 4. Check your grid user as well if you use a different owner for ASM / GI.
[root@newnode]# ssh node1 id oracle
uid=1234(oracle) gid=1235(oinstall) groups=1235(oinstall),1234(dba)
[root@newnode]# groupadd -g 1235 oinstall
[root@newnode]# groupadd -g 1234 dba
[root@newnode]# groupadd -g 1236 asmadmin
[root@newnode]# groupadd -g 1237 asmdba
9.12/ Create the oracle user
Recreate the oracle user the same as on the other nodes; also recreate your grid user if your ASM / GI runs under a different user than oracle.
[root@newnode]# useradd -u 1234 -g 1235 -G 1235,1234,1236,1237 -m -d /home/oracle -s /bin/bash oracle
[root@newnode]# passwd oracle
. . .
[root@newnode]#
Also, set up passwordless SSH connectivity for the oracle (and grid, if needed) user, as sketched below.
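A minimal sketch of the oracle user key setup, assuming the other nodes are named node1 through node3 (push each node's oracle key back to the new node the same way):
[root@newnode]# su - oracle
[oracle@newnode]$ ssh-keygen -t rsa        # accept the defaults, empty passphrase
[oracle@newnode]$ for host in node1 node2 node3; do
>   ssh-copy-id oracle@${host}
> done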
9.13/ Directories
Create the directories needed to install GI (and the DB if you wish; I won't cover the DB part in this blog).
[root@newnode]# ssh node1 grep ASM /etc/oratab
+ASM1:/u01/app/12.2.0.1/grid:N
[root@newnode]# mkdir -p /u01/app/12.2.0.1/grid
[root@newnode]# mkdir -p /u01/app/oracle      # ORACLE_BASE
[root@newnode]# chown -R oracle:oinstall /u01/app
9.14/ Reboot
Let's reboot to make sure that all the modifications we made are correct and that the system comes back up properly.
[root@newnode]# reboot
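Once the node is back up, a quick pass over the settings from this section confirms that everything survived the reboot:
[root@newnode]# grep HugePages_Total /proc/meminfo                 # should show the value set in sysctl.conf
[root@newnode]# cat /sys/kernel/mm/transparent_hugepage/enabled    # should show [never]
[root@newnode]# ls /etc/oracle/cell/network-config/                # cellinit.ora, cellip.ora, cellroute.ora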
You are then almost done. Almost? Yes, almost! Indeed, you now need to extend your GI by adding this new node to your cluster, which is described in part 4!