
rac-status.sh: what's new?

This post is a simple rac-status.sh "what's new?". I always track new features and fixes in the code itself, and you can also check my github repo log, but a dedicated post allows me to be more verbose, as some topics need more explanation to be fully clear (I started this what's new in July 2021; previous information is only available in the code / git log).

This post comes on top of the main rac-status.sh feature description post and the rac-status.sh: FAQ.

- June 29th 2024: Thanks to Nikolay, who raised this issue, I could adapt rac-status to 2 node Flex clusters. Indeed, Flex clusters always have a minimum of 3 ASM instances, and on a 2 node cluster one of them stays OFFLINE forever as there are only 2 nodes to run them on; below is an example:
NAME=ora.ASMNET1LSNR_ASM.lsnr
TYPE=ora.asm_listener.type
LAST_SERVER=dbnode01            <== first node
STATE=ONLINE                    <== ONLINE
TARGET=ONLINE
CARDINALITY_ID=1
LAST_RESTART=0
LAST_STATE_CHANGE=0
INSTANCE_COUNT=3

TYPE=ora.asm_listener.type
LAST_SERVER=dbnode02
STATE=ONLINE
TARGET=ONLINE
CARDINALITY_ID=2
LAST_RESTART=1718602026
LAST_STATE_CHANGE=1718602026
INSTANCE_COUNT=3

TYPE=ora.asm_listener.type
LAST_SERVER=dbnode01            <== first node again, this is the 3rd ASM instance required by Flex
STATE=OFFLINE                   <== OFFLINE
TARGET=ONLINE
CARDINALITY_ID=3
LAST_RESTART=0
LAST_STATE_CHANGE=0
INSTANCE_COUNT=3
See how the dbnode01 information is shown a second time, with an OFFLINE status: this is the 3rd, forever-OFFLINE Flex ASM instance. What is confusing is that LAST_SERVER is one of the nodes (dbnode01 here) even though this resource will never be started (and has never been started?). Also note INSTANCE_COUNT=3 (which is greater than the number of nodes) and CARDINALITY_ID, which goes up to 3. Anyway, I have adapted rac-status to no longer consider this 3rd forever-OFFLINE Flex ASM instance.
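The idea can be sketched as follows (a minimal illustration, not the actual rac-status.sh code): when a server has already appeared for the resource and a later stanza for that same server is OFFLINE, it is the extra Flex ASM instance and can be skipped.

```shell
# Sample crsctl-style stanzas: dbnode01 appears twice; the second,
# OFFLINE stanza is the forever-OFFLINE 3rd Flex ASM instance
sample='NAME=ora.ASMNET1LSNR_ASM.lsnr
LAST_SERVER=dbnode01
STATE=ONLINE

LAST_SERVER=dbnode02
STATE=ONLINE

LAST_SERVER=dbnode01
STATE=OFFLINE'

filtered=$(printf '%s\n' "$sample" | awk -v RS= -F'\n' '
{
  server = ""; state = ""
  for (i = 1; i <= NF; i++) {              # parse each KEY=VALUE line
    split($i, kv, "=")
    if (kv[1] == "LAST_SERVER") server = kv[2]
    if (kv[1] == "STATE")       state  = kv[2]
  }
  # same server seen before and OFFLINE: the extra Flex ASM stanza, skip it
  if (server in seen && state == "OFFLINE") next
  seen[server] = 1
  print server, state
}')
printf '%s\n' "$filtered"
```

Only the two real instances survive the filter; the duplicate dbnode01/OFFLINE stanza is dropped.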

- Feb 22nd 2024:
  • rac-status is tested and validated against GI 23c, you can now upgrade your GI! :)
  • Fixed an old bug with the long names; it now works well without the -L option
  • You can now show only the services (-ns); before, you also had to show the databases; this is no longer the case
  • A big thanks to Fernando, Yoann, Angel and an anonymous contributor of the blog for testing all of this!
- March 7th 2023:
  • A better -C option and more details about how this option works in the FAQ
- January 31st 2022:
  • Fixed a bug where standby databases got a red color as if the TARGET did not match the STATE. I discovered that GI always has STATE=INTERMEDIATE for standby databases, so after digging into it, I moved to using USR_ORA_OPEN_MODE instead, which fixes the standby issue and also makes the coloring more accurate for databases. USR_ORA_OPEN_MODE does not seem to be implemented for PDBs in 21c (yet?). I'll change it for PDBs as well if Oracle implements it in the future.
  • Created a rac-status.sh: FAQ and this what's new page on top of the main rac-status.sh feature description post for more clarity.
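To illustrate the idea of coloring from the open mode rather than from STATE, here is a hypothetical sketch; only the USR_ORA_OPEN_MODE attribute name comes from the post, and the attribute values matched below are assumptions for illustration, not the script's actual logic:

```shell
# Hypothetical mapping from a USR_ORA_OPEN_MODE-style value to a color;
# the values matched here are illustrative assumptions
color_for_open_mode() {
  case "$1" in
    "open")      echo green ;;   # opened read write: what a PRIMARY should be
    "read only") echo blue  ;;   # a standby open read only: expected
    "mount")     echo blue  ;;   # a mounted standby: expected as well
    *)           echo red   ;;   # anything else deserves attention
  esac
}

color_for_open_mode "open"        # a healthy primary
color_for_open_mode "read only"   # a healthy standby
```

The point is that the color now reflects what the database is supposed to be doing, instead of a STATE=INTERMEDIATE that is normal for a standby.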
- November 11th 2021:
  • Performance improvement using the -attr option of crsctl: x10 faster on systems with hundreds of resources registered in the cluster (more details about how I did this in this post)
  • rac-status.sh is now under the GPLv3 license (same for all the scripts in my git repo); indeed, as rac-status.sh and my other scripts are widely used, this makes their use officially free and open source for everyone
- October 20th 2021:
  • RH 6 / OEL 6 (Exadata < 19) ships gawk 3.1.7 (released in 2009!), which does not support the code I wrote to show the PDB status, as it requires gawk 4 (released in 2012; not 2021, 2012!). As RH6/OEL6 is used less and less and should eventually be retired in the near future, I was not very keen on redeveloping this feature for a close-to-death system; instead, I kept a version for gawk < 4 (RH6/OEL6); you'll find it here.
- September 12th 2021:
  • Implemented the new GI 21c feature which allows managing the PDBs with srvctl, so rac-status can show the PDB status; also added a -p option to show/hide PDBs (default is to show them); for GI <= 21c, which is what everyone is still running in production at this time, nothing changes: no PDBs are shown in the Database table
  • The feature does not seem fully implemented by Oracle in terms of metadata yet: we can only know if a PDB is Online or Offline, but we do not have the information (STATE_DETAILS) telling whether the PDB is Read Write or Read Only. I assume this will be added eventually -- I'll add it once available.

- August 25th 2021:
  • Added the PDB associated with the services (the PDB status is not shown, as only GI 21c should have this information)
  • New -D option to specify a list of databases to show (and hide the others)
  • New -S option to specify a list of services to show (and hide the others)
  • The VIP IPs are now also shown on the right of the table
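To give an idea of how such a list option can work (a hypothetical sketch, not the actual -D / -S implementation), a comma-separated list can be matched in plain shell:

```shell
# Hypothetical sketch of a "-D db1,db3" style filter: show a resource
# only if its name is in the comma-separated list
show_list="db1,db3"

wanted() {
  case ",${show_list}," in
    *",$1,"*) return 0 ;;   # name found in the list: show it
    *)        return 1 ;;   # not in the list: hide it
  esac
}

wanted db1 && echo "db1 is shown"
wanted db2 || echo "db2 is hidden"
```

Wrapping the list in commas on both sides avoids false matches on substrings (db1 would otherwise match db10).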
- July 14th 2021:
  • On some rare occasions, the cluster seems to show the ORACLE_HOMEs in a random order, creating issues for rac-mon.sh users; the ORACLE_HOME list is now alphabetically sorted, which fixes this issue
  • For better visibility, all STANDBY resources (instances and services of STANDBY type) are now in blue; the PRIMARY resources stay in the default white color
  • On top of this, if a STANDBY service is Online on a PRIMARY instance, it now appears with a red Online, as a STANDBY service is not supposed to be running on a PRIMARY database; same for a PRIMARY service on a STANDBY. Also, an Offline STANDBY service on a PRIMARY database now appears with a green Offline, as this is how it is supposed to be
  • The CRS environment is now set by default using /etc/oracle/olr.loc (thanks OzZyHH!) and no longer /etc/oratab
  • ADVM devices are also shown on the right of the table (before, only the ACFS FS were)
  • The -k option also shows the ADVM devices on the same line as the ACFS FS, which is handy when you need to remount some, right Kosseila? :)
  • The -K option hides the ACFS FS and the ADVM devices if you are not interested in them
  • The cluster upgrade state is now shown to easily detect if your last GI patching has been successful or not
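For reference, deriving the GI home from /etc/oracle/olr.loc can be sketched like this (a minimal illustration using sample file content; the paths below are illustrative, not taken from a real system):

```shell
# Sample olr.loc content written to a temp file (the real file lives at
# /etc/oracle/olr.loc; the paths here are illustrative)
olr_sample=$(mktemp)
cat > "$olr_sample" <<'EOF'
olrconfig_loc=/u01/app/grid/cdata/dbnode01.olr
crs_home=/u01/app/19.0.0/grid
EOF

# Grab the GI home from the crs_home= entry instead of parsing /etc/oratab
GI_HOME=$(awk -F= '/^crs_home=/ {print $2}' "$olr_sample")
echo "ORACLE_HOME would be set to: $GI_HOME"

rm -f "$olr_sample"
```

The advantage over /etc/oratab is that olr.loc is maintained by the GI stack itself, so it stays correct even when oratab entries are missing or stale.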
