Exadata: resize (increase and decrease) ASM diskgroups

Disk space is scarce -- always. True, when you buy an Exadata for example, you are told that there is 750 TB of disk space, which looks huge (almost infinite?) at first sight, but this quickly drops to only 250 TB with HIGH redundancy, and even less than that if you dedicate some of it to RECO; you then often have to juggle disk space between diskgroups. Fortunately, this juggling challenge is pretty easy and, most importantly, online!

Before jumping into the procedure describing how to increase or decrease a diskgroup's size, one needs to understand a few Exadata storage concepts and some vocabulary:
  • Each storage server (aka cell) has 12 disks (except Extreme Flash, which has 8) -- you can remember this one, it has been a question in every Exadata certification I have taken :)
  • This means that on a 5-cell configuration, you have 5*12 = 60 physical disks (of 12 TB each on an X8, for example, thus 60*12 = 720 TB)
  • Each of the 12 physical disks in a cell is called a cell disk (makes sense, it is a disk in a cell :)); cell disks are numbered from 0 to 11
  • When you allocate space on a cell disk for a diskgroup, a slice is carved out of each cell disk; this slice is called a grid disk
  • ASM then uses these grid disks as ASM disks in its diskgroups and manages the redundancy you want (or "the redundancy you have inherited from a default installation" may be more accurate here :)) -- the quick query below illustrates this mapping
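To make the cell disk / grid disk / ASM mapping concrete, here is one way to display it (a sketch; cellDisk and asmDiskGroupName are standard grid disk attributes, and the RECOPROD names match the diskgroup used later in this post):
[root@exa01db01 ]# dcli -g ~/cell_group -l root "cellcli -e list griddisk attributes name,cellDisk,asmDiskGroupName where name like \'RECOPROD.*\'"
exa01cel01: RECOPROD_CD_00_exa01cel01     CD_00_exa01cel01     RECOPROD
. . .
[root@exa01db01 ]#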

Following these explanations, let's start by having a look at the available disk space on the cell disks before increasing the RECOPROD diskgroup (this has to be done from a physical database node, or from a dom0/KVM host if you use virtualization):
[root@exa01db01 ]# dcli -g ~/cell_group -l root "cellcli -e list celldisk attributes name,size,status,freespace where name like \'CD_.*\'"
exa01cel01: CD_00_exa01cel01     12.4737091064453125T    normal  1.409210205078125T
exa01cel01: CD_01_exa01cel01     12.4737091064453125T    normal  1.409210205078125T
. . .
exa01cel05: CD_10_exa01cel05     12.4737091064453125T    normal  1.409210205078125T
exa01cel05: CD_11_exa01cel05     12.4737091064453125T    normal  1.409210205078125T
[root@exa01db01 ]# 
Very cool, 1.4 TB free on each cell disk: this is plenty of available space!

Note that I use the where name like syntax to show only the CD_* disks, which are the cell disks, and not the FD_* flash disks nor the PM_* persistent memory modules.
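If you prefer not to rely on naming conventions, filtering on the diskType attribute should return the same list of hard-disk based cell disks (a sketch, assuming the standard HardDisk value for this attribute):
[root@exa01db01 ]# dcli -g ~/cell_group -l root "cellcli -e list celldisk attributes name,size,freespace where diskType=\'HardDisk\'"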

Let's look at the size of the current grid disks for this diskgroup:
[root@exa01db01 ]#  dcli -g ~/cell_group -l root "cellcli -e list griddisk attributes name,size,status where name like \'RECOPROD.*\'"
exa01cel01: RECOPROD_CD_00_exa01cel01     32G    active
exa01cel01: RECOPROD_CD_01_exa01cel01     32G    active
. . .
exa01cel05: RECOPROD_CD_10_exa01cel05     32G    active
exa01cel05: RECOPROD_CD_11_exa01cel05     32G    active
[root@exa01db01 ]# 
RECOPROD currently has 32 GB grid disks, which means that RECOPROD (let's say it is NORMAL redundancy) is ... 32*12*5/2 = ... ? (let's use this opportunity to show a bit of bc here)
32 => each grid disk is 32 GB
12 => each cell has 12 disks
5  => our system has 5 cells
2  => we divide by 2 as the diskgroup is NORMAL redundancy
[root@exa01db01 ]#  echo "32*12*5/2" | bc
960
[root@exa01db01 ]# 
So the current size of our RECOPROD is 960 GB; it is indeed not that big. The question now is: how big should each grid disk be if we wanted a 15 TB NORMAL redundancy RECOPROD diskgroup?
[root@exa01db01 ]# echo "15*1024*2/5/12" | bc
512
[root@exa01db01 ]# 
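If you resize diskgroups regularly, the same arithmetic can be wrapped in shell variables to avoid mistakes (a sketch; the variable names are mine, and I assume the result divides evenly into GB):
[root@exa01db01 ]# target_tb=15 ; redundancy=2 ; cells=5 ; disks_per_cell=12
[root@exa01db01 ]# echo "$target_tb*1024*$redundancy/$cells/$disks_per_cell" | bc
512
[root@exa01db01 ]#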
512 GB it is; since we know we have enough space available on the cell disks, we can increase the size of all the grid disks:
[root@exa01db01 ]# dcli -g ~/cell_group -l root "cellcli -e alter griddisk size=512G where name like \'RECOPROD.*\'"
exa01cel01: GridDisk RECOPROD_CD_00_exa01cel01 successfully altered
exa01cel01: GridDisk RECOPROD_CD_01_exa01cel01 successfully altered
. . .
exa01cel05: GridDisk RECOPROD_CD_10_exa01cel05 successfully altered
exa01cel05: GridDisk RECOPROD_CD_11_exa01cel05 successfully altered
[root@exa01db01 ]#
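Before moving on to ASM, it is worth verifying that the cells report the new size (the 512G below being the expected output given the resize we just ran):
[root@exa01db01 ]# dcli -g ~/cell_group -l root "cellcli -e list griddisk attributes name,size,status where name like \'RECOPROD.*\'"
exa01cel01: RECOPROD_CD_00_exa01cel01     512G    active
. . .
exa01cel05: RECOPROD_CD_11_exa01cel05     512G    active
[root@exa01db01 ]#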
And then we can increase the ASM disks (this has to be done connected to the ASM instance, so from a VM or from a physical database node):
[oracle@exa01db01:~]$ sqlplus / as sysasm
SQL> alter diskgroup RECOPROD resize all size 512G rebalance power 64 ;
Diskgroup altered.
Elapsed: 00:00:02.62
SQL>
Note that this triggers a rebalance operation, which you will want to monitor using select * from gv$asm_operation. Also note that 64 is an aggressive rebalance power; you may want to reduce it to 8 or 16 if you are doing this live during heavy production hours. Once the rebalance is done, you are done!
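Here is the kind of monitoring query I would use while the rebalance runs (a sketch; these are standard gv$asm_operation columns) -- no rows selected means the rebalance is complete:
SQL> select inst_id, operation, state, power, sofar, est_work, est_minutes from gv$asm_operation ;

no rows selected
SQL>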
[root@exa01db01 ~]# ./asmdu.sh   <== asmdu.sh is a helper script that summarizes ASM diskgroup usage
Instances running on exa01db01 : +ASM1, PROD1, PROD2, BLABLA1, APP1, REPORT1
        DiskGroup      Redundancy        Total TB       Usable TB        % Free
        ---------     -----------        --------       ---------        ------
        DATAPROD        HIGH              150.01          100.03          33
        RECOPROD        NORMAL             15.00            0.50          99
[root@exa01db01 ~]#
That was easy; now, how can we decrease the size of this RECOPROD diskgroup? Simply by doing the opposite, in the reverse order: shrink the ASM disks first, wait for the rebalance to complete, and only then shrink the grid disks on the cells:
[oracle@exa01db01:~]$ sqlplus / as sysasm
SQL> alter diskgroup RECOPROD resize all size 32G rebalance power 64 ;
SQL> select * from gv$asm_operation ;     -- wait until this returns no rows before shrinking the grid disks
[root@exa01db01 ]# dcli -g ~/cell_group -l root "cellcli -e alter griddisk size=32G where name like \'RECOPROD.*\'"
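One caveat for the decrease: before running the resize, make sure the data already in the diskgroup fits in the target size, or the shrink will fail. A quick check from the ASM instance (a sketch using the standard v$asm_diskgroup view; USABLE_FILE_MB accounts for the redundancy):
SQL> select name, type, total_mb, free_mb, usable_file_mb from v$asm_diskgroup where name = 'RECOPROD' ;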
That's it for today -- one more string to our Exadata bow!
