
X8M: about the dbs_group, cell_group and roce_group files

Exadata X8M has been released for a few months now, and one of the main architectural changes is that the InfiniBand switches have been replaced by RoCE switches. While this is good news on the performance side, let's see how it plays out on the day-to-day operations side.

Indeed, the InfiniBand switches came with discovery capabilities through the ibhosts and ibswitches commands, which obviously no longer work on X8M:
[root@x8m_01 ~]# ibhosts
ibwarn: [391160] mad_rpc_open_port: client_register for mgmt 1 failed
src/ibnetdisc.c:784; can't open MAD port ((null):0)
/usr/sbin/ibnetdiscover: iberror: failed: discover failed
[root@x8m_01 ~]# ibswitches
ibwarn: [391423] mad_rpc_open_port: client_register for mgmt 1 failed
src/ibnetdisc.c:784; can't open MAD port ((null):0)
/usr/sbin/ibnetdiscover: iberror: failed: discover failed
[root@x8m_01 ~]#
This is not really good news for us, as we were using these commands a lot to generate the dbs_group, cell_group and ib_group files. As shown in this post, we could easily generate these files like this:
[root@myclusterdb01 ~]# ibhosts | sed s'/"//' | grep db | awk '{print $6}' | sort > /root/dbs_group
[root@myclusterdb01 ~]# ibhosts | sed s'/"//' | grep cel | awk '{print $6}' | sort > /root/cell_group
[root@myclusterdb01 ~]# cat /root/dbs_group ~/cell_group > /root/all_group
[root@myclusterdb01 ~]# ibswitches | awk '{print $10}' | sort > /root/ib_group
[root@myclusterdb01 ~]#
This is also how the exa-versions.sh and cell-status.sh scripts were dynamically generating the list of nodes, cells and switches to connect to and gather information about.

So now that this is gone, how can we generate the dbs_group, cell_group and roce_group files on X8M? Oracle Support is pretty clear on how to achieve this:

What Support forgets to mention here is that the verify_roce_cables.py script needs the list of RoCE switches (and the nodes / cells) to be able to verify these cables (and you also need to run setup_switch_ssh_equiv.sh before using it against the RoCE switches):
[root@x8m_01 RoCE]# /opt/oracle.SupportTools/RoCE/verify_roce_cables.py
usage: verify_roce_cables.py [-h] -n NODES -s SWITCHES [-d]
verify_roce_cables.py: error: argument -n/--nodes is required
[root@x8m_01 RoCE]#
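For the record, here is a sketch of the kind of invocation it expects once the SSH equivalence is in place. The file names are mine, and it is my assumption that -n and -s take files listing one node / one switch per line and that setup_switch_ssh_equiv.sh sits in the same RoCE directory, so adapt this to your own rack:
cd /opt/oracle.SupportTools/RoCE
# SSH equivalence to the RoCE switches first (the setup_switch_ssh_equiv.sh script mentioned above)
./setup_switch_ssh_equiv.sh
# then the verification itself, feeding it the node and switch lists
./verify_roce_cables.py -n /root/all_group -s /root/roce_group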

So this is in no way an alternative to the ibhosts and ibswitches commands. And when pressing Support on this point, you finally face the truth (that truth you could feel but your mind was not ready to admit):

So yes, there is no way to dynamically get the RoCE switches, database nodes or cells: you have to rely on the XML provided by OEDA or on databasemachine.xml. If we look at the output of exa-racklayout.sh, which is based on databasemachine.xml, we can indeed see every node and switch (I will just have to adapt the color of the RoCE switches):
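A workaround is then to parse databasemachine.xml directly. The snippet below is only a sketch built on assumptions: that the file sits under /opt/oracle.SupportTools/onecommand/, and that each ITEM block exposes <TYPE> (computenode, cellnode, ...) and <ADMINNAME> elements; check the TYPE value your own file uses for the RoCE switches before adding a roce_group call:
DBM=/opt/oracle.SupportTools/onecommand/databasemachine.xml
# helper: print the ADMINNAME of every ITEM whose TYPE matches $1 (element names are assumptions)
list_type() {
  awk -F'[<>]' -v type="$1" '
    /<ITEM[ >]/   { t=""; n="" }             # a new component starts here
    /<TYPE>/      { t=$3 }                   # component type (computenode, cellnode, ...)
    /<ADMINNAME>/ { n=$3 }                   # admin hostname of the component
    /<\/ITEM>/    { if (t==type) print n }   # end of component: keep it if the type matches
  ' "$DBM" | sort
}
list_type computenode > /root/dbs_group
list_type cellnode    > /root/cell_group
cat /root/dbs_group /root/cell_group > /root/all_group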

It is indeed a way of getting the information, but I keep thinking that relying on hardcoded text files is awkward -- what if someone deletes these files? What if someone does not update them properly after a rack expansion?

To sum up, from X8M onwards we have no way to dynamically get the list of nodes, cells and switches -- which looks like an important regression to me:
  • From X8M (onwards?), you have to rely on hardcoded lists (databasemachine.xml, /etc/hosts, ...)
  • You can still get the list of your database nodes using olsnodes
  • You can get the IPs of the cells that a database node can access from /etc/oracle/cell/network-config/cellip.ora, but you cannot be sure that these are all the cells in your rack (a quick sketch of both follows below)
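To illustrate the last two points, here is a minimal sketch of these partial fallbacks; the cellip.ora parsing assumes the usual one cell="ip" (or cell="ip1;ip2") line per cell, so adapt it if your file differs:
# database nodes known to the cluster (olsnodes ships with Grid Infrastructure, run it from the grid home)
olsnodes | sort > /root/dbs_group
# cell IPs visible from this database node (not necessarily every cell in the rack)
grep -o '[0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+' /etc/oracle/cell/network-config/cellip.ora | sort -u > /root/cell_ips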

Feel free to leave a comment below if you have found an easy way to get this information instead of relying on hardcoded lists.

2 comments:

  1. So in X9 we don't use cell_group and the other files?

    1. Yes, you still use them, but you cannot generate them as easily as before X8M; the RoCE switches do not have the capability to dynamically list the components.

