Oracle 18c on-premises was released in July 2018; it is now time to upgrade your GI to 12.2!
Just to clarify the versions discussed here:
- 12.2 is 12.2.0.1
- 18c is 12.2.0.2
- 19c will be 12.2.0.3 (should be released in Feb 2019)
In this blog, I will present a well-honed procedure for this upgrade, applied on many Exadatas; it applies to non-Exadata systems as well.
0/ Preparation
Please find below a few things that are good to know and read before starting to upgrade a GI to 12.2:
- 12.2 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.3 and later on Oracle Linux (Doc ID 2111010.1)
- Patches to apply before upgrading Oracle GI and DB to 12.2.0.1 (Doc ID 2180188.1)
- Download the GI 12.2 gold image: V840012-01.zip
- Download the RU you want to apply like Patch 27850694: GI APR 2018 RELEASE UPDATE 12.2.0.1.180417 (FOR QFSDP)
- GI 12.2 will be installed on /u01/app/12.2.0.1/grid
- GI 12.1 is running from /u01/app/12.1.0.2/grid on the systems I work on
- This procedure has been successfully applied on many half rack and full rack Exadatas; it also applies to non-Exadata GI systems
- I use the rac-status.sh script to check the status of all the resources of my cluster before and after the maintenance to avoid any unpleasantness
- Check your oratab entries to avoid having them deleted during the upgrade, as explained in this post (a quick backup one-liner is shown below)
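For that oratab backup, a one-liner like the below saves a copy on every node before the maintenance (a simple sketch; the backup file name /tmp/oratab.before_GI_12.2 is an arbitrary choice):

[root@exadatadb01]# dcli -g ~/dbs_group -l root "cp /etc/oratab /tmp/oratab.before_GI_12.2"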
1/ Install GI 12.2 from the gold image
Let's enjoy this super new feature and quickly install GI 12.2 from a gold image:
-- Create the target directories for GI 12.2
sudo su -
dcli -g ~/dbs_group -l root "ls -ltr /u01/app/12.2.0.1/grid"
dcli -g ~/dbs_group -l root mkdir -p /u01/app/12.2.0.1/grid
dcli -g ~/dbs_group -l root chown oracle:oinstall /u01/app/12.2.0.1/grid
dcli -g ~/dbs_group -l root "ls -altr /u01/app/12.2.0.1/grid"

-- Install GI using this gold image: /patches/V840012-01.zip
sudo su - oracle
unzip -q /patches/V840012-01.zip -d /u01/app/12.2.0.1/grid
2/ Prerequisites
2.1/ Upgrade opatch
As usual, it is recommended to upgrade opatch before starting any patching activity. If you work with Exadata, you may have a look at this post where I show how to quickly upgrade opatch with dcli.
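A minimal sketch of that dcli method, assuming the latest 12.2 OPatch zip (patch 6880880) has been staged under /patches on node 1 (the file name and staging directories are examples, adapt them to your environment):

[oracle@exadatadb01]$ dcli -g ~/dbs_group -l oracle -f /patches/p6880880_122010_Linux-x86-64.zip -d /tmp
[oracle@exadatadb01]$ dcli -g ~/dbs_group -l oracle "mv /u01/app/12.2.0.1/grid/OPatch /u01/app/12.2.0.1/grid/OPatch.old"
[oracle@exadatadb01]$ dcli -g ~/dbs_group -l oracle "unzip -qo /tmp/p6880880_122010_Linux-x86-64.zip -d /u01/app/12.2.0.1/grid"
[oracle@exadatadb01]$ dcli -g ~/dbs_group -l oracle "/u01/app/12.2.0.1/grid/OPatch/opatch version"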
2.2/ ASM spfile and password file
Check that the ASM password file and the ASM spfile are located in ASM to avoid issues during the upgrade:
[oracle@exadatadb01]$ asmcmd spget
+DATA/mycluster/ASMPARAMETERFILE/registry.253.909449003
[oracle@exadatadb01]$ asmcmd pwget --asm
+DATA/orapwASM
[oracle@exadatadb01]$

If they are not stored in ASM, you may face the below error and ASM won't restart after being upgraded:
Verifying Verify that the ASM instance was configured using an existing ASM parameter file. ...FAILED
PRCT-1011 : Failed to run "asmcmd". Detailed error: ASMCMD-8001: diskgroup 'u01' does not exist or is not mounted

Please find below a quick procedure to move the ASM password file from a filesystem to ASM:
[oracle@exadatadb01]$ asmcmd pwcopy /u01/app/12.1.0.2/grid/dbs/orapw+ASM +DBFS_DG/orapwASM
[oracle@exadatadb01]$ asmcmd pwset --asm +DBFS_DG/orapwASM
[oracle@exadatadb01]$ asmcmd pwget --asm
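If the ASM spfile is not stored in ASM either, the move is similar (a hedged sketch, assuming the spfile currently sits under the 12.1 GI home; adapt the source path and target diskgroup to your environment):

[oracle@exadatadb01]$ asmcmd spcopy /u01/app/12.1.0.2/grid/dbs/spfile+ASM.ora +DATA/mycluster/spfileASM.ora
[oracle@exadatadb01]$ asmcmd spset +DATA/mycluster/spfileASM.ora
[oracle@exadatadb01]$ asmcmd spget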
2.3/ Prepare a response file such as this one:
[oracle@exadatadb01]$ egrep -v "^#|^$" /tmp/giresponse.rsp | head -10
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v12.2.0
INVENTORY_LOCATION=
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/oracle
oracle.install.asm.OSDBA=dba
oracle.install.asm.OSOPER=dba
oracle.install.asm.OSASM=dba
oracle.install.crs.config.gpnp.scanName=
oracle.install.crs.config.gpnp.scanPort=
oracle.install.crs.config.ClusterConfiguration=
[oracle@exadatadb01]$
2.4/ System prerequisites
Check these system prerequisites:
-- a 10240 limit for the "soft stack" (if not, set it and log off / log on; a one-liner is shown below)
[root@exadatadb01]# dcli -g ~/dbs_group -l root grep stack /etc/security/limits.conf | grep soft
exadatadb01: * soft stack 10240
exadatadb02: * soft stack 10240
exadatadb03: * soft stack 10240
exadatadb04: * soft stack 10240
[root@exadatadb01]#

-- at least 1500 huge pages free
[root@exadatadb01]# dcli -g ~/dbs_group -l root grep -i huge /proc/meminfo
....
AnonHugePages:         0 kB
HugePages_Total:  200000
HugePages_Free:   132171
HugePages_Rsvd:    38338
HugePages_Surp:        0
Hugepagesize:       2048 kB
....
[root@exadatadb01]#
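If the soft stack limit is missing on some nodes, a one-liner pushes it everywhere (a sketch; it blindly appends, so check first with the grep above, and remember to log off and log on for it to take effect):

[root@exadatadb01]# dcli -g ~/dbs_group -l root "echo '* soft stack 10240' >> /etc/security/limits.conf"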
2.5/ Run the prerequisites
This step is very important and the logs need to be checked closely for any error:
[oracle@exadatadb01]$ cd /u01/app/12.2.0.1/grid
[oracle@exadatadb01]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/12.1.0.2/grid -dest_crshome /u01/app/12.2.0.1/grid -dest_version 12.2.0.1 -fixup -verbose

You will find a summary of the errors at the end of the output; below is an example of successful prerequisites:
Pre-check for cluster services setup was successful.
and another example with some issues to fix before proceeding:
Failures were encountered during execution of CVU verification request "stage -pre crsinst".
Verifying Network Time Protocol (NTP) ...FAILED
Verifying NTP daemon is synchronized with at least one external time source
...FAILED
exadatadb02: PRVG-13602 : NTP daemon is not synchronized with any external
time source on node "exadatadb02".
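If you hit this NTP issue, forcing a resynchronization of the failing node usually clears it (a sketch for a classic ntpd setup; adapt to chrony if that is what your systems run, and note that <your_ntp_server> is a placeholder for your own time source):

[root@exadatadb02]# service ntpd stop
[root@exadatadb02]# ntpdate <your_ntp_server>
[root@exadatadb02]# service ntpd start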
3/ Upgrade to GI 12.2
Now that all the prerequisites are successful, we can upgrade GI to 12.2.
3.0/ A status before starting the upgrade
I strongly recommend keeping a status of all the resources across your cluster before starting the maintenance to avoid any unpleasantness after the maintenance:
[oracle@exadatadb01]$ ./rac-status.sh -a | tee -a status_before_GI_upgrade_to_12.2

                Cluster exadata is a X5-2 Elastic Rack HC 8TB

  Listener       |      Port     |  db01  |  db02  |  db03  |  db04  |   Type   |
-------------------------------------------------------------------------------------------------------------------
  LISTENER       | TCP:1551      | Online | Online | Online | Online | Listener |
  LISTENER_ABCD  | TCP:1561      | Online | Online | Online | Online | Listener |
  LISTENER_SCAN1 | TCP:1551,1561 |   -    |   -    | Online |   -    |   SCAN   |
  LISTENER_SCAN2 | TCP:1551,1561 |   -    | Online |   -    |   -    |   SCAN   |
  LISTENER_SCAN3 | TCP:1551,1561 | Online |   -    |   -    |   -    |   SCAN   |
-------------------------------------------------------------------------------------------------------------------

  DB    |     Service     |  db01  |  db02  |  db03  |  db04  |
----------------------------------------------------------------------------------------------------
  db01  | proddb_1_bkup   | Online |   -    |   -    |   -    |
        | proddb_2_bkup   |   -    | Online |   -    |   -    |
        | proddb_3_bkup   |   -    |   -    | Online |   -    |
        | proddb_4_bkup   |   -    |   -    |   -    | Online |
  db02  | db02svc1_bkup   |   -    |   -    | Online |   -    |
        | db02svc2_bkup   |   -    |   -    | Online |   -    |
  db03  | db03svc1_bkup   | Online |   -    |   -    |   -    |
        | db03svc2_bkup   | Online |   -    |   -    |   -    |
  db04  | db04svc1_bkup   | Online |   -    |   -    |   -    |
        | db04svc2_bkup   |   -    | Online |   -    |   -    |
        | db04svc3_bkup   |   -    |   -    | Online |   -    |
        | db04svc4_bkup   |   -    |   -    |   -    | Online |
----------------------------------------------------------------------------------------------------

  DB    |    Version   |   db01   |   db02   |   db03   |   db04   | DB Type |
-------------------------------------------------------------------------------------------------------------------
  db01  | 12.1.0.2 (1) | Readonly | Readonly | Readonly | Readonly | RAC (S) |
  db02  | 12.1.0.2 (1) |    -     |    -     |   Open   |   Open   | RAC (P) |
  db03  | 12.1.0.2 (1) |   Open   |   Open   |    -     |    -     | RAC (P) |
  db04  | 12.1.0.2 (1) | Readonly | Readonly | Readonly | Readonly | RAC (S) |
-------------------------------------------------------------------------------------------------------------------

        ORACLE_HOME references listed in the Version column :
                Primary : White and (P)
                Standby : Red and (S)
                1 : /u01/app/oracle/product/12.1.0.2/dbhome_1
[oracle@exadatadb01]$
3.1/ ASM memory setting
Some recommended memory settings have to be set at the ASM instance level:

[oracle@exadatadb01]$ sqlplus / as sysasm
SQL> alter system set sga_max_size = 3G scope=spfile sid='*';
SQL> alter system set sga_target = 3G scope=spfile sid='*';
SQL> alter system set memory_target=0 sid='*' scope=spfile;
SQL> alter system set memory_max_target=0 sid='*' scope=spfile /* required workaround */;
SQL> alter system reset memory_max_target sid='*' scope=spfile;
SQL> alter system set use_large_pages=true sid='*' scope=spfile /* 11.2.0.2 and later (Linux only) */;
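You can double check what has actually been written to the spfile before going further (a quick sanity query against v$spparameter):

SQL> select sid, name, value from v$spparameter where name in ('sga_max_size', 'sga_target', 'memory_target', 'memory_max_target', 'use_large_pages');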
3.2/ Reset misscount to default
The misscount parameter is the maximum time, in seconds, that a network heartbeat can be missed before a node eviction occurs. It needs to be reset to its default value before upgrading; this has to be done as the GI owner.
[oracle@exadatadb01]$ . oraenv <<< +ASM1
[oracle@exadatadb01]$ crsctl unset css misscount
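For reference, you can display the value before and after unsetting it (Exadata systems are usually configured with a non-default value, hence the need for the reset):

[oracle@exadatadb01]$ crsctl get css misscount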
3.3/ gridSetup.sh
We will be using the gridSetup.sh script to initiate the GI upgrade to 12.2; note the -applyPSU option which applies the April 2018 RU on the fly before the upgrade.
Please find below a whole output:
[oracle@exadatadb01]$ cd /u01/app/12.2.0.1/grid
[oracle@exadatadb01]$ ./gridSetup.sh -silent -responseFile /tmp/giresponse.rsp -J-Doracle.install.mgmtDB=false -J-Doracle.install.crs.enableRemoteGIMR=false -applyPSU /patches/12.2.0.1.180417GIRU/27850694
Preparing the home to patch...
Applying the patch /patches/12.2.0.1.180417GIRU/27850694...
Successfully applied the patch.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2018-11-18_05-11-43PM/installerPatchActions_2018-11-18_05-11-43PM.log
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-41808] Possible invalid choice for OSASM Group.
   CAUSE: The name of the group you selected for the OSASM group is commonly used to grant other system privileges (For example: asmdba, asmoper, dba, oper).
   ACTION: Oracle recommends that you designate asmadmin as the OSASM group.
[WARNING] [INS-41809] Possible invalid choice for OSDBA Group.
   CAUSE: The group name you selected as the OSDBA for ASM group is commonly used for Oracle Database administrator privileges.
   ACTION: Oracle recommends that you designate asmdba as the OSDBA for ASM group, and that the group should not be the same group as an Oracle Database OSDBA group.
[WARNING] [INS-41810] Possible invalid choice for OSOPER Group.
   CAUSE: The group name you selected as the OSOPER for ASM group is commonly used for Oracle Database administrator privileges.
   ACTION: Oracle recommends that you designate asmoper as the OSOPER for ASM group, and that the group should not be the same group as an Oracle Database OSOPER group.
[WARNING] [INS-41813] OSDBA for ASM, OSOPER for ASM, and OSASM are the same OS group.
   CAUSE: The group you selected for granting the OSDBA for ASM group for database access, and the OSOPER for ASM group for startup and shutdown of Oracle ASM, is the same group as the OSASM group, whose members have SYSASM privileges on Oracle ASM.
   ACTION: Choose different groups as the OSASM, OSDBA for ASM, and OSOPER for ASM groups.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2018-11-18_05-11-43PM/gridSetupActions2018-11-18_05-11-43PM.log

As a root user, execute the following script(s):
        1. /u01/app/12.2.0.1/grid/rootupgrade.sh

Execute /u01/app/12.2.0.1/grid/rootupgrade.sh on the following nodes:
[exadatadb01, exadatadb02, exadatadb04, exadatadb03]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /u01/app/12.2.0.1/grid/gridSetup.sh -executeConfigTools -responseFile /tmp/giresponse.rsp [-silent]
[oracle@exadatadb01]$

The OS group warnings above can safely be ignored. The output also describes the next step, which is to run rootupgrade.sh on each node.
3.4/ rootupgrade.sh
As specified by gridSetup.sh in the previous step, we now need to run rootupgrade.sh on each node. Note that rootupgrade.sh can be started in parallel on all nodes except the first and the last one; below is an example with a half rack (4 nodes), with the concrete command sequence shown right after the list:
- Start rootupgrade.sh on node 1
- Then start rootupgrade.sh in parallel on nodes 2 and 3
- Finally, start rootupgrade.sh on node 4
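To make this ordering concrete, here is the command sequence for this 4-node example (each wave must complete before the next one starts):

[root@exadatadb01]# /u01/app/12.2.0.1/grid/rootupgrade.sh    # node 1 first, alone
[root@exadatadb02]# /u01/app/12.2.0.1/grid/rootupgrade.sh    # nodes 2 and 3 can then
[root@exadatadb03]# /u01/app/12.2.0.1/grid/rootupgrade.sh    #   run in parallel
[root@exadatadb04]# /u01/app/12.2.0.1/grid/rootupgrade.sh    # last node, once 2 and 3 are done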
Here is a sample output; note that rootupgrade.sh is very silent as all the logs go to the specified log file:
[root@exadatadb01]# /u01/app/12.2.0.1/grid/rootupgrade.sh
Check /u01/app/12.2.0.1/grid/install/root_exadatadb01._2018-11-18_17-40-03-548575058.log for the output of root script
[root@exadatadb01]#
An interesting thing to note here: after a node has been upgraded, its softwareversion is the target one (12.2) but the activeversion is still the old one (12.1). Indeed, the activeversion is only changed to 12.2 when rootupgrade.sh completes on the last node.
[root@exadatadb01]# . oraenv <<< +ASM1
[root@exadatadb01]# crsctl query crs softwareversion
Oracle Clusterware version on node [exadatadb01] is [12.2.0.1.0]
[root@exadatadb01]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
[root@exadatadb01]#
3.5/ gridSetup.sh -executeConfigTools
Run the gridSetup.sh -executeConfigTools command:
[oracle@exadatadb01]$ /u01/app/12.2.0.1/grid/gridSetup.sh -executeConfigTools -responseFile /tmp/giresponse.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2018-11-18_07-11-22PM

Successfully Configured Software.
[oracle@exadatadb01]$
3.6/ Check that GI is relinked with RDS
It is worth double checking that the new GI Home is properly relinked with RDS to avoid future performance issues (you may want to read this pdf for more information on what RDS is):
[oracle@exadatadb01]$ dcli -g ~/dbs_group -l oracle /u01/app/12.2.0.1/grid/bin/skgxpinfo
exadatadb01: rds
exadatadb02: rds
exadatadb03: rds
exadatadb04: rds
[oracle@exadatadb01]$

If not, relink the GI home with RDS:
dcli -g ~/dbs_group -l oracle "ORACLE_HOME=/u01/app/12.2.0.1/grid; make -C /u01/app/12.2.0.1/grid/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle"
3.7/ Check the status of the cluster
Let's have a look at the status of the cluster and the activeversion:
[oracle@exadatadb01]$ /u01/app/12.2.0.1/grid/bin/crsctl check cluster -all
**************************************************************
exadatadb01:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
exadatadb02:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
exadatadb03:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
exadatadb04:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@exadatadb01]$ dcli -g ~/dbs_group -l oracle /u01/app/12.2.0.1/grid/bin/crsctl query crs activeversion
Oracle Clusterware version on node [exadatadb01] is [12.2.0.1.0]
Oracle Clusterware version on node [exadatadb02] is [12.2.0.1.0]
Oracle Clusterware version on node [exadatadb03] is [12.2.0.1.0]
Oracle Clusterware version on node [exadatadb04] is [12.2.0.1.0]
[oracle@exadatadb01]$

Let's now check the status of all the resources like we did in paragraph 3.0:
[oracle@exadatadb01]$ ./rac-status.sh -a | tee -a status_after_GI_upgrade_to_12.2

                Cluster exadata is a X5-2 Elastic Rack HC 8TB

  Listener       |      Port     |  db01  |  db02  |  db03  |  db04  |   Type   |
-------------------------------------------------------------------------------------------------------------------
  LISTENER       | TCP:1551      | Online | Online | Online | Online | Listener |
  LISTENER_ABCD  | TCP:1561      | Online | Online | Online | Online | Listener |
  LISTENER_SCAN1 | TCP:1551,1561 |   -    | Online |   -    |   -    |   SCAN   |
  LISTENER_SCAN2 | TCP:1551,1561 | Online |   -    |   -    |   -    |   SCAN   |
  LISTENER_SCAN3 | TCP:1551,1561 |   -    |   -    | Online |   -    |   SCAN   |
-------------------------------------------------------------------------------------------------------------------

  DB    |     Service     |  db01  |  db02  |  db03  |  db04  |
----------------------------------------------------------------------------------------------------
  db01  | proddb_1_bkup   | Online |   -    |   -    |   -    |
        | proddb_2_bkup   |   -    | Online |   -    |   -    |
        | proddb_3_bkup   |   -    |   -    | Online |   -    |
        | proddb_4_bkup   |   -    |   -    |   -    | Online |
  db02  | db02svc1_bkup   |   -    |   -    | Online |   -    |
        | db02svc2_bkup   |   -    |   -    | Online |   -    |
  db03  | db03svc1_bkup   | Online |   -    |   -    |   -    |
        | db03svc2_bkup   | Online |   -    |   -    |   -    |
  db04  | db04svc1_bkup   | Online |   -    |   -    |   -    |
        | db04svc2_bkup   |   -    | Online |   -    |   -    |
        | db04svc3_bkup   |   -    |   -    | Online |   -    |
        | db04svc4_bkup   |   -    |   -    |   -    | Online |
----------------------------------------------------------------------------------------------------

  DB    |    Version   |   db01   |   db02   |   db03   |   db04   | DB Type |
-------------------------------------------------------------------------------------------------------------------
  db01  | 12.1.0.2 (1) | Readonly | Readonly | Readonly | Readonly | RAC (S) |
  db02  | 12.1.0.2 (1) |    -     |    -     |   Open   |   Open   | RAC (P) |
  db03  | 12.1.0.2 (1) |   Open   |   Open   |    -     |    -     | RAC (P) |
  db04  | 12.1.0.2 (1) | Readonly | Readonly | Readonly | Readonly | RAC (S) |
-------------------------------------------------------------------------------------------------------------------

        ORACLE_HOME references listed in the Version column :
                Primary : White and (P)
                Standby : Red and (S)
                1 : /u01/app/oracle/product/12.1.0.2/dbhome_1
[oracle@exadatadb01]$

And check for differences:
[oracle@exadatadb01]$ diff status_before_GI_upgrade_to_12.2 status_after_GI_upgrade_to_12.2
8,10c8,10
<   LISTENER_SCAN1 | TCP:1551,1561 |   -    |   -    | Online |   -    |   SCAN   |
<   LISTENER_SCAN2 | TCP:1551,1561 |   -    | Online |   -    |   -    |   SCAN   |
<   LISTENER_SCAN3 | TCP:1551,1561 | Online |   -    |   -    |   -    |   SCAN   |
---
>   LISTENER_SCAN1 | TCP:1551,1561 |   -    | Online |   -    |   -    |   SCAN   |
>   LISTENER_SCAN2 | TCP:1551,1561 | Online |   -    |   -    |   -    |   SCAN   |
>   LISTENER_SCAN3 | TCP:1551,1561 |   -    |   -    | Online |   -    |   SCAN   |
[oracle@exadatadb01]$

We can see that only the SCAN listeners have been shuffled across the nodes by the maintenance, which does not matter (you can relocate them, but it has no impact whatsoever). It also means that all our instances and services are back exactly as they were before the maintenance; the upgrade was therefore completely transparent for the cluster resources.
3.8/ Set Flex ASM Cardinality to "ALL"
Starting with release 12.2, ASM is configured as "Flex ASM". By default, the Flex ASM cardinality is set to 3, which means that clusters with four or more database nodes may only see ASM instances on three of them; nodes without a local ASM instance will use an ASM instance on a remote node within the cluster. Only when the cardinality is set to "ALL" will ASM bring up the additional instances required to fulfill the cardinality setting.
[oracle@exadatadb01]$ srvctl modify asm -count ALL
[oracle@exadatadb01]$

Note that this command provides no output.
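You can verify the new cardinality with srvctl (output trimmed to the relevant line; the exact wording may vary slightly across versions):

[oracle@exadatadb01]$ srvctl config asm | grep -i count
ASM instance count: ALL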
3.9/ Update compatible.asm to 12.2
Now that ASM 12.2 is running, it is recommended to update compatible.asm to 12.2 to be able to enjoy the 12.2 new features. Keep in mind that this is a one-way change: a diskgroup compatibility attribute cannot be lowered once it has been raised.
-- Set the environment and connect
[oracle@exadatadb01]$ . oraenv <<< +ASM1
[oracle@exadatadb01]$ sqlplus / as sysasm

-- List the diskgroups
SQL> select name, compatibility from v$asm_diskgroup;

-- Set compatible.asm to 12.2 (examples here with some usual DGs)
SQL> ALTER DISKGROUP DATA    SET ATTRIBUTE 'compatible.asm' = '12.2.0.1.0';
SQL> ALTER DISKGROUP DBFS_DG SET ATTRIBUTE 'compatible.asm' = '12.2.0.1.0';
SQL> ALTER DISKGROUP RECO    SET ATTRIBUTE 'compatible.asm' = '12.2.0.1.0';

-- Verify the new settings
SQL> select name, compatibility from v$asm_diskgroup;
3.10/ Update the Inventory
To wrap this up, let's update the Inventory:
[oracle@exadatadb01]$ . oraenv <<< +ASM1
[oracle@exadatadb01]$ /u01/app/12.2.0.1/grid/oui/bin/runInstaller -ignoreSysPrereqs -updateNodeList ORACLE_HOME=/u01/app/12.2.0.1/grid "CLUSTER_NODES={exadatadb01,exadatadb02,exadatadb03,exadatadb04}" CRS=true LOCAL_NODE=exadatadb01

Note: you may also want to update the new GI home path in OEM or in any other monitoring tool that would require it.
3.11/ /etc/oratab entries
If some oratab entries have disappeared after the upgrade, you may have missed the warning in the 0/ Preparation paragraph; have a look at this post for an explanation of this behavior.
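If you took the oratab backup suggested in the preparation paragraph, a quick diff will show any entry that went missing:

[root@exadatadb01]# dcli -g ~/dbs_group -l root "diff /tmp/oratab.before_GI_12.2 /etc/oratab"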
And you're all done! Enjoy!