Exadata: add extra disk and extend a FS on a VM

There's always a day when you need more space on a filesystem. If you run Exadata physical (no VM), it is pretty straightforward as it works like any Linux system and the disk space is already available in the Volume Group (as shown here). But when it comes to running Exadata in a virtual configuration, it is another story: the non-ASM physical storage is managed by the hypervisor (dom0, KVM host) while the disk space is used by the VMs. I'll show below an example of increasing a /u01 filesystem on a KVM guest VM.

Speaking of non-ASM physical space, we first need to understand that it is located in the /EXAVMIMAGES filesystem on the KVM host:
[root@kvmhost01 ~]# df -h /EXAVMIMAGES/
Filesystem                           Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbExaVMImages  1.5T  443G  1.1T  30% /EXAVMIMAGES
[root@kvmhost01 ~]#
It is then a good idea to start by checking how much space is left on this FS before thinking about adding more disk space to a VM (please have a look at this post if you need to increase the size of /EXAVMIMAGES).

It is also worth understanding that all the files related to the VMs are located under /EXAVMIMAGES/GuestImages; all the files of a prodvm VM in the domain.com domain therefore live in:
/EXAVMIMAGES/GuestImages/prodvm.domain.com
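A quick way to have a look at what is in there (the exact file names depend on your image versions; this is just an illustrative sketch):
# List the disk images backing the prodvm VM on the KVM host
ls -lh /EXAVMIMAGES/GuestImages/prodvm.domain.com/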
We will then use vm_maker to create a new file which will be presented to the VM as a physical volume (like a new disk). I will name it pv3_vgexadb.img as my VGExaDb volume group on the VM already has two physical volumes, and I would recommend keeping the naming convention clear and neat to avoid future confusion. Let's make this new disk 100 GB:
[root@kvmhost01 ~]# vm_maker --create --disk-image /EXAVMIMAGES/GuestImages/prodvm.domain.com/pv3_vgexadb.img --size 100G --attach --domain prodvm.domain.com
[INFO] Allocating an image for /EXAVMIMAGES/GuestImages/prodvm.domain.com/pv3_vgexadb.img, size 100.000000G...
. . .
[INFO] Created image /EXAVMIMAGES/GuestImages/prodvm.domain.com/pv3_vgexadb.img
[INFO] Running 'vgscan --cache'...
[INFO] -------- MANUAL STEPS TO BE COMPLETED FOR MOUNTING THE DISK WITHIN DOMU prodvm.domain.com --------
[INFO] 1. Check a disk with name /dev/VGExaDbDisk.pv3_vgexadb.img/LVDBDisk exists.
[INFO] -  Check for the existence of a disk named: /dev/VGExaDbDisk.pv3_vgexadb.img/LVDBDisk. Use the 'lvdisplay' command and check the output. 
[INFO] 2. Create a mount directory for the new disk
[INFO] 3. Add the following line to /etc/fstab: /dev/VGExaDbDisk.pv3_vgexadb.img/LVDBDisk   defaults 1 1
[INFO] 4. Mount the new disk. Use the 'mount -a' command.
[root@kvmhost01 ~]# 
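If you want to create a new FS on the VM, you would simply follow the manual steps listed at the end of the output above. Here is a hedged sketch of what they could look like on the guest; the /u02 mount point and the xfs filesystem type are assumptions for illustration:
# 1. Check the disk/LV exists
lvdisplay /dev/VGExaDbDisk.pv3_vgexadb.img/LVDBDisk
# 2. Create a mount directory for the new disk (hypothetical /u02)
mkdir /u02
# 3. Add the corresponding line to /etc/fstab
echo "/dev/VGExaDbDisk.pv3_vgexadb.img/LVDBDisk /u02 xfs defaults 1 1" >> /etc/fstab
# 4. Mount the new disk
mount -a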
In this blog's case, though, we want to extend /u01 and not create a new FS, so we will skip these instructions. Before continuing further, let's verify that our new disk is correctly attached to the VM:
[root@kvmhost01 ~]# vm_maker --list --disk-image --domain prodvm.domain.com
File /EXAVMIMAGES/GuestImages/prodvm.domain.com/System.img
File /EXAVMIMAGES/GuestImages/prodvm.domain.com/grid19.11.0.0.210420.img
File /EXAVMIMAGES/GuestImages/prodvm.domain.com/db19.11.0.0.210420_3.img
File /EXAVMIMAGES/GuestImages/prodvm.domain.com/pv1_vgexadb.img
File /EXAVMIMAGES/GuestImages/prodvm.domain.com/pv2_vgexadb.img
File /EXAVMIMAGES/GuestImages/prodvm.domain.com/pv3_vgexadb.img  <== this one
[root@kvmhost01 ~]# 
Great. Now that the new disk exists at the hypervisor level, let's check it at the VM level:
[root@kvmhost01 ~]# ssh prodvm
[root@prodvm ~]# lvdisplay /dev/VGExaDbDisk.pv3_vgexadb.img/LVDBDisk  
  --- Logical volume ---
  LV Path                /dev/VGExaDbDisk.pv3_vgexadb.img/LVDBDisk  <== LV
  LV Name                LVDBDisk
  VG Name                VGExaDbDisk.pv3_vgexadb.img                <== VG
  LV UUID                xxxxxxxxxxxxxxxxx
  LV Write Access        read/write
  LV Creation host, time kvmhost01.domain.com, 2022-01-21 14:51:22 +0100
  LV Status              available
  # open                 0
  LV Size                100.00 GiB    <== the size we want to add
  . . .
[root@prodvm ~]#
Everything is here as expected. Now it is important to note the names of the LV and the VG, as we will delete them: we are only interested in the physical volume, not in the LV or the VG that were created on top of it:
[root@prodvm ~]# lvremove /dev/VGExaDbDisk.pv3_vgexadb.img/LVDBDisk
Do you really want to remove active logical volume VGExaDbDisk.pv3_vgexadb.img/LVDBDisk? [y/n]: y
  Logical volume "LVDBDisk" successfully removed
[root@prodvm ~]# vgremove VGExaDbDisk.pv3_vgexadb.img
  Volume group "VGExaDbDisk.pv3_vgexadb.img" successfully removed
[root@prodvm ~]#
Note that you can use lvremove -f to bypass the "Do you really want..." confirmation when dropping the logical volume; just be careful not to drop the wrong one :)
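For the record, the non-interactive version of the removal above would be:
# Remove the LV without the confirmation prompt; double-check the name first
lvremove -f /dev/VGExaDbDisk.pv3_vgexadb.img/LVDBDisk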

Now we need to get the name of the disk which has been added; you'll find it at the end of the output of the pvdisplay command:
[root@prodvm ~]# pvdisplay
. . .
  "/dev/sdf1" is a new physical volume of "100.00 GiB" <== here
  --- NEW Physical volume ---
  PV Name               /dev/sdf1
  VG Name
  PV Size               100.00 GiB
  . . .
[root@prodvm ~]#
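As a side note, a more compact way to spot a PV that does not belong to any VG yet is pvs; a PV with an empty VG column is unassigned:
# Compact list of PVs: an empty VG column means the PV is not in any VG yet
pvs -o pv_name,vg_name,pv_size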
Great, we have our new disk on the VM and we can now add it to the VGExaDb volume group:
[root@prodvm ~]# vgdisplay VGExaDb -s
  "VGExaDb" <141.24 GiB [118.00 GiB used / <23.24 GiB free]  <== Before adding the 100 GB
[root@prodvm ~]# vgextend VGExaDb /dev/sdf1
  Volume group "VGExaDb" successfully extended
[root@prodvm ~]# vgdisplay VGExaDb -s
  "VGExaDb" 193.23 GiB [118.00 GiB used / 123.24 GiB free]   <== +100 GB added
[root@prodvm ~]#
We are now good to extend the FS:
[root@prodvm ~]# df -h /u01
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbOra1   50G   40G   10G  80% /u01       <== FS is 50 GB
[root@prodvm ~]# lvextend -L +100G /dev/VGExaDb/LVDbOra1
  Size of logical volume VGExaDb/LVDbOra1 changed from 50.00 GiB to 150.00 GiB.
  Logical volume VGExaDb/LVDbOra1 successfully resized.
[root@prodvm ~]# xfs_growfs /u01
meta-data=/dev/mapper/VGExaDb-LVDbOra1 isize=256    agcount=8, agsize=1638400 blks
. . .
data blocks changed from 13107200 to 39321600
[root@prodvm ~]# df -h /u01
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbOra1  150G   40G  110G  27% /u01      <== 100 GB added
[root@prodvm ~]#
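As a side note, lvextend -r (or --resizefs) would have grown the LV and the filesystem in a single command; and if the filesystem were ext4 instead of XFS, you would use resize2fs instead of xfs_growfs. A quick sketch reusing the names above:
# One-step alternative: grow the LV and the FS together
lvextend -r -L +100G /dev/VGExaDb/LVDbOra1
# ext4 equivalent of xfs_growfs (only if the FS were ext4)
resize2fs /dev/mapper/VGExaDb-LVDbOra1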
And keep in mind that all of this is obviously 100% online!

