3.4/ Patching the Grid Infrastructure
3.4.0 - Information
- Patching a node takes around 30 - 45 minutes
- You have to manually patch each node (there is no patchmgr orchestrating this for you here), one node after the other (myclusterdb01 then myclusterdb02 then myclusterdb03 then myclusterdb04, etc.)
- You can apply the patch to several nodes in parallel, except for the first and the last node
- GI has to be patched as root
- This procedure will be patching the Grid Infrastructure (GI) only (and not the databases ORACLE_HOMEs)
- The below example is from a GI 12.2 patch; the same procedure applies to 12.1 and 18c
- As we have already upgraded OPatch for the GI and completed the pre-requisites, we can jump directly into the patching (a quick OPatch version check is sketched below)
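Before jumping in, a quick way to double-check that the grid home really uses the OPatch version you deployed during the pre-requisites (a minimal sketch, using the same +ASM1 environment as the rest of this post):
[oracle@myclusterdb01]$ . oraenv <<< +ASM1
[oracle@myclusterdb01]$ $ORACLE_HOME/OPatch/opatch version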
3.4.1 - Check lsinventory
[oracle@myclusterdb01]$ . oraenv <<< +ASM1
[oracle@myclusterdb01]$ $ORACLE_HOME/OPatch/opatch lsinventory -all_nodes
3.4.2 - Patch GI on a Node
[root@myclusterdb01 ~]# cd /Oct2018_Bundle/28689205/Database/12.2.0.1.0/12.2.0.1.181016GIRU/28714316
[root@myclusterdb01 28714316]# nohup /u01/app/12.2.0.1/grid/OPatch/opatchauto apply -oh /u01/app/12.2.0.1/grid &
OPatch will most likely finish with some warnings:
[Jun 5, 2016 5:50:47 PM] --------------------------------------------------------------------------------
[Jun 5, 2016 5:50:47 PM] The following warnings have occurred during OPatch execution:
[Jun 5, 2016 5:50:47 PM] 1) OUI-67303: Patches [ 20831113 20299018 19872484 ] will be rolled back.
[Jun 5, 2016 5:50:47 PM] --------------------------------------------------------------------------------
[Jun 5, 2016 5:50:47 PM] OUI-67008:OPatch Session completed with warnings.
Checking the logfiles, you will find that this is most likely due to superset patches:
Patch : 23006522 Bug Superset of 20831113
If you check the patch number, you will find that this is an old patch: Patch 20831113: OCW PATCH SET UPDATE 12.1.0.2.4. This is then safely ignorable as opatch rolls back the old patches after having applied the new ones.
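Once opatchauto has completed and restarted the stack on the node, it is worth verifying that the clusterware is back and healthy before moving to the next node. A minimal sanity check (using the same grid home path as above) could look like this:
[root@myclusterdb01 ~]# /u01/app/12.2.0.1/grid/bin/crsctl check cluster -all
[root@myclusterdb01 ~]# /u01/app/12.2.0.1/grid/bin/crsctl stat res -t
The first command reports the state of CRS, CSS and EVM on every node; the second one lists the cluster resources (ASM, listeners, database instances) so you can confirm that everything restarted on the node you just patched.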
3.4.3 - Check lsinventory
[oracle@myclusterdb01]$ . oraenv <<< +ASM1
[oracle@myclusterdb01]$ $ORACLE_HOME/OPatch/opatch lsinventory -all_nodes
3.4.4 - How to Mitigate the Downtime
A way to greatly mitigate this outage (the instances running on the node being patched are down while the GI is patched) is to use the power of Oracle services, knowing that these are most likely RAC databases running on Exadata:
- With load-balanced services: see the load-balanced example below
- With non load-balanced services: see the relocation example further below
- You don't use services? This is then the opportunity you were waiting for to deploy Oracle services! If you can't (or don't want to), you can always work around it by modifying the tnsnames.ora files on the application servers: remove the node you want to patch so that no new connection can reach it, wait for the current connections to finish, and you can patch the node with no downtime (a minimal tnsnames.ora sketch follows this list).
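For the tnsnames.ora workaround, a minimal sketch of what the entry could look like is shown below; the ADDRESS_LIST simply omits the node being patched. The APP service name and the VIP hostnames are illustrative only, adapt them to your environment:
APP =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (FAILOVER = ON)
      # myclusterdb01-vip removed while node 1 is being patched
      (ADDRESS = (PROTOCOL = TCP)(HOST = myclusterdb02-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = myclusterdb03-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = myclusterdb04-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = APP)
    )
  )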
Let's say you have a database running 4 instances on 4 nodes of the Exadata with a load-balanced APP service across the 4 nodes and you're about to patch node 1. Just stop the APP service on that node (no new connection will land on it), wait for the current connections to finish, and you are done: you can patch node 1 with no outage for the applications / users! Once done, rebalance the service back to node 1 and safely patch the other nodes the same way (a sketch of the srvctl commands follows).
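As an illustration, and assuming a hypothetical admin-managed database named MYDB with a load-balanced service named APP (both names are assumptions, adapt them to your environment), the service manipulation could look like this:
[oracle@myclusterdb01]$ srvctl status service -db MYDB -service APP
[oracle@myclusterdb01]$ srvctl stop service -db MYDB -service APP -instance MYDB1
# wait for the current connections on MYDB1 to finish, then patch node 1 as shown in 3.4.2
[oracle@myclusterdb01]$ srvctl start service -db MYDB -service APP -instance MYDB1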
You have a non load-balanced service? Not a problem: just relocate this service away from the node you want to patch, wait for the current connections to finish, and you achieve the same goal (see the relocation sketch below).
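Still with the hypothetical MYDB database, and a singleton service named BATCH currently running on instance MYDB1 (again, assumed names), a relocation sketch could be:
[oracle@myclusterdb01]$ srvctl relocate service -db MYDB -service BATCH -oldinst MYDB1 -newinst MYDB2
[oracle@myclusterdb01]$ srvctl status service -db MYDB -service BATCH
Note that relocating a service only redirects new connections; the existing sessions stay on MYDB1 until they disconnect, which is why you still wait for them to finish before patching the node.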
3.5/ Upgrading the Grid Infrastructure
If you reach this point, it means that you are done with your Exadata patching, congratulations!