Shortcuts

  • How to install a brand new Exadata (X2 to X7) / the blog is here
  • How to Patch Exadata / Upgrade Exadata to 18c and 19c / the blog is here
  • How to re-image an Exadata database server / the blog is here
  • How to re-image an Exadata cell storage server / the blog is here
  • asmdu.sh: A clear output of the ASM diskgroups sizes and usage / the code is here
  • rac-status.sh: A GI 12c resources status output in a glimpse / the code is here
  • rac-mon.sh: a GI 12c/18c monitoring tool based on rac-status.sh / the code is here
  • yal.sh: Yet Another Launcher ! / the code is here
  • exa-versions.sh: Exadata components versions in a glimpse / the code is here
  • cell-status.sh: An overview of your Exadata disks / the code is here
  • lspatches.sh: An Oracle patch reporting tool / the code is here
  • exa-racklayout.sh: Each component of an Exadata in a Rack Layout style / the code is here
  • rac-on_all_db.sh: Execute a SQL on all databases running on a cluster / the code is here
  • . . . more to come . . .

    How I beat the most modern Cloud with a command developed in . . . 1976

    42. For decades, we thought that 42 was the answer to the Ultimate Question of Life, the Universe, and Everything, but it seems that a subtle sed 's/42/Move to the Cloud !!/' has happened and we now have another answer to pretty much everything. And this is kind of cool, as plenty of new challenging projects come to us with new tools and modern technologies.

    Introduction

    Then came another project of a client moving to a top Cloud vendor where the need was to run DAGs with the Apache Airflow workflow manager. In simple words, a DAG is a graphical representation of a job containing steps to execute with dependencies and parallelism, as the very simple DAG shown below illustrates:
    Let's describe this one:
        1/ Job starts
        2/ Step 1 runs
        3/ Step 2 and Step 3 run in parallel after Step 1 finishes
        4/ Step 4 has to wait for Step 2 and Step 3 to finish before being run
        5/ Job ends
    This is indeed a very simple one; in my case, the jobs had hundreds of steps and dependencies to run, so big and complex that no monitor on the market could fully show one on a single screen (you may find some more complex examples here).

    Airflow being too slow at running these complex DAGs (from what I read here and there, complex dependencies and a large number of tasks seem to be a known Airflow limitation), the adventure started for me with the below requirements:
    • A JSON file contains some jobs with many steps with dependencies
    • Execute them as fast as possible in parallel when possible

    Challenge accepted !

    I then started to have a look at the JSON file, becoming familiar with the dependency structures, etc ... and started to think about how to code that "manage dependencies and parallelism" feature.

    make

    make is a command Stuart Feldman started to develop in April 1976, inspired by the experience of a coworker who futilely debugged a program whose executable was accidentally not being updated with his changes (more on the history here). He then created make which, driven by makefiles, is able to execute arbitrary code while respecting dependencies and using parallelism to speed up the process. This looked to be exactly what I was looking for. Also, make is shipped with any Unix system and is very light (compared to other tools which need GBs of software).

    A first makefile

    make executes makefiles, which are text files containing arbitrary code to execute and dependencies; let's have a look at a simple step syntax:
    step_name: step_1 step_2
             some_code_to_execute.sh
    
    Where:
    • step_name is the name of a step
    • step_1 step_2 are the steps which step_name depends on; in this example, step_name can only be executed after step_1 and step_2 have been executed (and were successful -- if you wish)
    • some_code_to_execute.sh: some random code to execute
    Easy, right ? Now let's have a look at what a makefile for the DAG shown above could look like:
    done: the_end
    step1:
         step1.sh
    step2: step1
         step2.sh
    step3: step1
         step3.sh
    step4: step2 step3
          step4.sh
    the_end: step4
    
    This short makefile represents the DAG shown earlier; a new done target appears here: being the first target of the file, it is the default goal, and it points to the label marking the end of the makefile, which I named the_end and which depends on step4.

    Parallelism

    -j tells make to run the steps with as much parallelism as possible (obviously respecting the dependencies); with no value, make does not limit the number of parallel jobs, and you can pass a number (-j 4 for example) to cap the parallelism. In my example, step2 and step3 can be executed in parallel, but not the other steps, as enforced by the dependency tree. And this has been managed automatically by make for more than 40 years -- awesome, right ?

    If you would like to modify the degree of parallelism (or even run everything serially), feel free to experiment with different values of the -j option, as shown below.
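    As a quick sketch (using the mymakefile we will build in the next section), the same DAG can be run with different degrees of parallelism:
    make -f mymakefile -j      # no limit: run as many steps in parallel as the dependencies allow
    make -f mymakefile -j 2    # cap the parallelism at 2 simultaneous steps
    make -f mymakefile -j 1    # degree 1: everything runs serially
    make -f mymakefile         # same as -j 1: make is serial by default
    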

    A first execution

    Let's execute this first makefile; as the step*.sh scripts don't exist and the purpose here is just to show how make works, I will replace them with an echo, print a date when each step starts and finishes, and sleep a few seconds during each step -- the makefile now looks like this:
    $ cat mymakefile
    done: the_end
    step1:
            @echo `date`": Executing step1.sh"
            sleep 1
            @echo `date`": End step 1"
    step2: step1
            @echo `date`": Executing step2.sh"
            sleep 4
            @echo `date`": End step 2"
    step3: step1
            @echo `date`": Executing step3.sh"
            sleep 10
            @echo `date`": End step 3"
    step4: step2 step3
            @echo `date`": Executing step4.sh"
            sleep 2
            @echo `date`": End step 4"
    the_end: step4
    $
    
    Now, execute it:
    $ make -f mymakefile -j
    Wed Nov 20 04:13:26 UTC 2019: Executing step1.sh
    sleep 1
    Wed Nov 20 04:13:27 UTC 2019: End step 1
    Wed Nov 20 04:13:27 UTC 2019: Executing step3.sh
    sleep 10
    Wed Nov 20 04:13:27 UTC 2019: Executing step2.sh
    sleep 4
    Wed Nov 20 04:13:31 UTC 2019: End step 2
    Wed Nov 20 04:13:37 UTC 2019: End step 3
    Wed Nov 20 04:13:37 UTC 2019: Executing step4.sh
    sleep 2
    Wed Nov 20 04:13:39 UTC 2019: End step 4
    $
    
    With this execution, you can validate the parallelism (thanks to the -j option discussed above) with step 2 and step 3, and the dependencies (step 2 and step 3 start after step 1 is done, and step 4 waits for step 3 to finish even though step 2 is faster).

    Error codes

    Like any Unix tool, make reacts to the return codes of the code it executes:
    • If the return code of a step*.sh script is 0, make continues with the next steps
    • If the return code of a step*.sh script is different from 0, make stops scheduling the steps that depend on it and stops there
    • This last behavior can be modified with the -k (keep going) option, as shown below
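    A quick, hedged illustration (assuming a hypothetical step2.sh that exits with a non-zero return code):
    make -f mymakefile -j      # make stops scheduling anything that depends on the failed step2
    make -f mymakefile -j -k   # -k: the branches of the DAG that do not depend on step2 still run
    echo $?                    # in both cases, GNU make exits with a non-zero status if a step failed
    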

    A first implementation

    This small first test confirmed that make was the tool I needed: easy, light, standard, robust and well documented.
    I then just had to read the JSON configuration file containing all the jobs, steps and dependencies, generate a makefile with these steps and dependencies for a specific job and ... execute it !
    Here is my first code after a few hours of work (it looks like a draft because it was a draft :)). Note that I added some random sleeps after each step to test and make sure that the parallelism and dependencies were working as expected -- and they were.
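    To give an idea of the approach, here is a minimal sketch of the "read a JSON job description and generate a makefile" logic. The JSON layout, the file names and the use of jq are assumptions made for this example (the real files are different and my implementation parses them with awk):
    #!/bin/bash
    # Hypothetical input (job.json):
    #   [ {"name": "step1", "deps": [],                 "cmd": "step1.sh"},
    #     {"name": "step2", "deps": ["step1"],          "cmd": "step2.sh"},
    #     {"name": "step3", "deps": ["step1"],          "cmd": "step3.sh"},
    #     {"name": "step4", "deps": ["step2","step3"],  "cmd": "step4.sh"} ]
    JOB_JSON=${1:-job.json}
    MAKEFILE=${2:-generated.mk}

    {
      # First target = default goal; make it depend on every step of the job
      printf 'all: %s\n' "$(jq -r '[.[].name] | join(" ")' "$JOB_JSON")"
      # One target per step: "name: dep1 dep2" followed by a tab-indented command
      jq -r '.[] | .name + ": " + (.deps | join(" ")) + "\n\t" + .cmd' "$JOB_JSON"
    } > "$MAKEFILE"

    make -f "$MAKEFILE" -j      # run the generated DAG with full parallelism
    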

    The final implementation

    The previous ~100-line script was a first implementation as a proof of concept; the final implementation is more complete (but still less than 1000 lines of code). It generates ~2000-line makefiles whose steps call a shell script to execute SQLs against BigQuery, with the below features:
    • There is not one JSON file but two: one with the main step dependencies and one with the scripts that each step contains; each step has many scripts to execute and these scripts also have dependencies amongst themselves

    • Each script has to be executed against BigQuery; they are usually hundreds of lines of generated SQL with parameters. The values of these parameters are in a YAML file, and the wrapper dynamically replaces the parameters with the correct values before each execution against BigQuery (see the sketch after this list)

    • Some date arithmetic is also done to execute the SQLs by time interval

    • The time interval can be customised so you can divide the work to be done, for example executing the SQLs from Jan 1st 2019 to the current date by month, week, etc ... obviously, the start date, end date and interval can be anything

    • I have implemented a "disruptor" mechanism so you can stop an execution from outside the script: from BigQuery, Airflow, etc ...

    • I have implemented a rerun mechanism to be able to rerun a failed job execution with the exact same parameters, skipping the steps or the whole DAGs which were previously successfully executed

    • The logs are shown on the screen, written to logfiles and also inserted row by row into a MySQL database (doing this in BigQuery is too slow and you would hit the table insertion quota very quickly)

    • The logs being inserted row by row as each step and/or substep is executed, the execution can be followed live; we have designed a nice dashboard with statistics on each step and the DAGs shown in a graphical manner: green for an executed step, grey for a step being executed, red for a failed step, etc ... so the client can follow the execution of any complex DAG live in a nice and graphical way

    • I had to implement a wait mechanism as BigQuery is eventually consistent, so I have to wait (thus slowing down what I wrote to execute jobs as fast as possible -- the world is crazy, right ? :)) 1 second (this is a parameter and can be changed) after some jobs to let the eventual consistency happen so that the next step finds the correct data to work with (the logs being very detailed, we can calculate how long we slept to accommodate this eventual consistency and thus how much it "costs" per job, DAG, etc ...)

    • As Airflow was the tool originally designed to do this job and is a scheduler, it is used to start the script and then shows a green light if it is successful and a red one if there is a failure. You can also follow the logs of the script from Airflow; this is a technical output for technicians, and the client will prefer the nice GUI showing graphical DAGs coloring themselves as steps are executing and executed

    • Everything previously cited is aaP ("as a Parameter", to mimic "as a Service" :)) and can be modified very easily in Airflow before any job execution if needed; everything also has a default value, and only the name of the job to execute is a mandatory parameter

    • We can exclude some DAGs from an execution

    • We can run prescripts and postscripts if needed

    • . . . and more . . .
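    To illustrate the parameter substitution and the eventual-consistency wait mentioned in the list above, here is a hedged sketch of what a per-step wrapper could look like. The file names, the placeholder syntax and the bq invocation are illustrative assumptions, not the actual implementation:
    #!/bin/bash
    # Hypothetical per-step wrapper: substitute placeholders in a generated SQL file
    # with values from a flat "key=value" parameter file, run it against BigQuery,
    # then pause briefly to let BigQuery's eventual consistency settle.
    SQL_FILE=$1                  # e.g. step42.sql, containing placeholders like {{start_date}}
    PARAM_FILE=${2:-params.env}  # assumed flat export of the YAML parameter file
    WAIT_SECONDS=${3:-1}         # the configurable eventual-consistency pause

    TMP_SQL=$(mktemp)
    cp "$SQL_FILE" "$TMP_SQL"
    while IFS='=' read -r key value; do
      [ -z "$key" ] && continue
      sed -i "s|{{${key}}}|${value}|g" "$TMP_SQL"   # replace each placeholder with its value
    done < "$PARAM_FILE"

    bq query --use_legacy_sql=false "$(cat "$TMP_SQL")"   # execute the SQL against BigQuery
    rc=$?
    sleep "$WAIT_SECONDS"                                 # give eventual consistency time to happen
    rm -f "$TMP_SQL"
    exit $rc                                              # make relies on this return code to continue or stop
    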

    It is also worth mentioning that many of the features implemented here were not present in the first (too slow) Airflow design, and my tool runs the DAGs at least 10 times faster than Airflow.


    Thanks for reading, I hope you enjoyed this blog as much as I enjoyed achieving this challenge with these old buddies make and awk, humbly continuing the Unix ethos: printable, debuggable, understandable stuff.


    ExaWatcher: Manage the archives destination

    ExaWatcher is a tool installed by default on Exadata (database nodes and storage cells) which replaced OSWatcher starting with release 11.2.3.3. This tool collects system information (ps, top, vmstat, etc ...) which is useful for troubleshooting when an issue occurs.

    The thing with ExaWatcher is that the information it collects, even compressed, can use a lot of space, especially as it writes under / by default, and one day you'll get the below situation -- your / is a bit too full:
    [root@exadatadb01]# df -h /
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/mapper/VGExaDb-LVDbSys1
                           30G   24G  4.0G  86% /
    [root@exadatadb01]# du -sh /opt/oracle.ExaWatcher/
    2.9G    /opt/oracle.ExaWatcher/
    [root@exadatadb01]#
    

    You may then want to purge the ExaWatcher archives and/or move these files somewhere else, to an NFS share for example.
    Let's start by having a look at the ExaWatcher configuration file which is located as shown below:
    [root@exadatadb01]# locate ExaWatcher.conf
    /opt/oracle.ExaWatcher/ExaWatcher.conf
    [root@exadatadb01]#
    

    And you'll see the ResultDir directive, which specifies where ExaWatcher writes its files:
    <ResultDir> /opt/oracle.ExaWatcher/archive
    

    We then just have to stop ExaWatcher, update the configuration file and restart ExaWatcher. Before this, we will create some new directories to hold the ExaWatcher files on an NFS share (/nfs), one directory per node:
    [root@exadatadb01]# for i in `cat ~/dbs_group`
    > do
    > mkdir -p /nfs/exawatcherlogs/$i
    > done
    [root@exadatadb01]# ls -l /nfs/exawa*/*
    /nfs/exawatcherlogs/exadatadb01:
    total 0
    /nfs/exawatcherlogs/exadatadb02:
    total 0
    /nfs/exawatcherlogs/exadatadb03:
    total 0
    /nfs/exawatcherlogs/exadatadb04:
    total 0
    [root@exadatadb01]#
    

    Now, stop ExaWatcher
    [root@exadatadb01]# pwd
    /opt/oracle.ExaWatcher
    [root@exadatadb01]# ./ExaWatcher.sh --stop
    [INFO     ] Stopping ExaWatcher: Post processing vmstat data file...
    [1572217463][2019-10-27 19:04:23][INFO][/opt/oracle.ExaWatcher/ExecutorExaWatcher.pl][exadataLogger::Logger][] VmstatPostProcessing for /oracle/nasdev_backup1/exawatcherlogs_/Vmstat.ExaWatcher/2019_10_27_18_58_00_VmstatExaWatcher_.mycompany.com.dat.
    
    [INFO     ] Stopping ExaWatcher: Zipping unzipped ExaWatcher data files...
    [INFO     ] Stopping ExaWatcher: All unzipped ExaWatcher results have been zipped accordingly.
    [root@exadatadb01]#
    

    Update ExaWatcher.conf with the below lines (node 1 as an example)
    # Fred Denis -- Oct 28th 2019 -- ticket 123456
    #<ResultDir> /opt/oracle.ExaWatcher/archive
    <ResultDir> /nfs/exawatcherlogs/exadatadb01
    

    Move the archives to the new directory (node 1 as an example)
    $ mv /opt/oracle.ExaWatcher/archive/* /nfs/exawatcherlogs/exadatadb01/.
    
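    The two previous steps were shown for node 1 only; to roll them out to every database node in one go, a sketch along these lines should work (assuming passwordless root ssh between the nodes, that ~/dbs_group lists the node names as used earlier, and that ExaWatcher has already been stopped on every node):
    # Point ResultDir to each node's NFS directory and move the existing archives there
    # (the stock ResultDir line is replaced in place; comment it out manually instead if you prefer)
    for node in $(cat ~/dbs_group); do
      ssh "$node" "sed -i 's|^<ResultDir>.*|<ResultDir> /nfs/exawatcherlogs/${node}|' /opt/oracle.ExaWatcher/ExaWatcher.conf"
      ssh "$node" "mv /opt/oracle.ExaWatcher/archive/* /nfs/exawatcherlogs/${node}/."
    done
    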

    Restart ExaWatcher (we can still see the oswatcher legacy here)
    [root@exadatadb01]# ps -ef | grep -i exawatch
    root     368051 148216  0 19:09 pts/0    00:00:00 grep -i exawatch
    [root@exadatadb01]# /opt/oracle.cellos/validations/bin/vldrun.pl -script oswatcher
    Logging started to /var/log/cellos/validations.log
    Command line is /opt/oracle.cellos/validations/bin/vldrun.pl -script oswatcher
    Run validation oswatcher - PASSED
    [root@exadatadb01]#
    

    Give it a few seconds to start and check the path where ExaWatcher is now writing
    [root@exadatadb01]# ps -ef | grep -i exawatch
    . . .
    5 -n 720 | sed 's/[ ?]*$//g' | grep --regexp "top - [0-9][0-9]:[0-9][0-9]:[0-9][0-9] up.*load average.*" -A 1000 2>/dev/null >> /nfs/exawatcherlogs/exdatadb01/Top.ExaWatcher/2019_10_27_22_16_23_TopExaWatcher_exadatadb01.mycompany.com.dat
    root     233057 232998  0 22:16 pts/0    00:00:00 sh -c /usr/bin/iostat -t -x -p  5  720 2>/dev/null >> /nfs/exawatcherlogs/exdatadb01/Iostat.ExaWatcher/2019_10_27_22_16_23_IostatExaWatcher_exadatadb01.mycompany.com.dat
    root     233081 232998  0 22:16 pts/0    00:00:00 sh -c
    . . .
    [root@exadatadb01]#
    

    Just because ExaWatcher is now writing to an NFS share does not mean you want to keep its archives on disk forever. If you look at ExaWatcher.conf, you will find the SpaceLimit directive, as shown below:
    <SpaceLimit> 6047
    # Hard limit: 600MB
    #       Exadata Cell node: 600MB
    #       Exadata DB node/non-Exadata:
    #           20% of the file system capacity if mounted on "/"
    #           80% of the file system capacity if mounted on other
    #   At anytime, the limit will be set to the lower of the specified
    #   or the hard limit.
    
    ExaWatcher will keep the archives within a limit of 3 GB on a database server and 600 MB on a storage server, so you can keep it like this if it suits your needs. I personally like a per-month file deletion, so I add a find to an /etc/cron.daily/oracle file which I use to complement my logrotate configurations and ensure an efficient purge. Also, I am sure that find has no bug and will work 100% of the time whereas, sometimes, the Oracle tools may be a bit buggy . . .
    find /nfs/exawatcherlogs/exadatadb* -type f -mtime +30 -delete
    
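    For completeness, a minimal sketch of what such an /etc/cron.daily/oracle file could look like (the path and the 30-day retention are just the values used above; adapt them to your environment):
    #!/bin/bash
    # Daily purge of ExaWatcher archives older than 30 days on the NFS destination
    find /nfs/exawatcherlogs/exadatadb* -type f -mtime +30 -delete
    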


    And you'll then ensure some free space for your precious / forever -- easy peasy !

    Exadata: Setting up FlashCache Mode to WriteThrough or WriteBack is an online operation

    I was about to change the FlashCache of some Exadatas to WriteBack (after some cells were added to them) when I found some resources on the Internet stating that this action required stopping the whole cluster. As this did not sound right to me, I wrote this blog to verify it and show below that setting the FlashCache to WriteBack is a 100% online operation (as is setting it to WriteThrough)!

    Before jumping into the procedure itself, let's recall that the storage cells must run version 11.2.3.2.1 or later to be able to set the FlashCache mode to WriteBack. This is most likely your case as 11.2 is very old and I guess you patch your Exadatas on a regular basis. If you are unsure of your storage cell versions, feel free to use the exa-versions.sh script, or check them quickly as shown below.
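    If you just want a quick one-liner instead of the script, something like the below should do the trick (cell_group being the usual dcli group file used throughout this post):
    # Quick check of the storage cell image versions across all the cells
    dcli -g ~/cell_group -l root "imageinfo -ver"
    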

    Let's start by having a look at our cells FlashCache mode:
    [root@exadatadb01 ~]# dcli -g ~/cell_group -l root "cellcli -e list cell attributes flashcachemode"
    exadatacel01: WriteBack
    exadatacel02: WriteBack
    exadatacel03: WriteBack
    exadatacel04: WriteBack
    exadatacel05: WriteBack
    exadatacel06: WriteBack
    exadatacel07: WriteThrough
    [root@exadatadb01 ~]#
    
    Here, clearly, our cel07 is still in WriteThrough mode; let's change it to WriteBack. As a first step, we need to verify that the FlashCache is in a normal state:
    [root@exadatadb01 ~]#  dcli -g cell_group -l root cellcli -e list flashcache detail | grep status
    exadatacel01: status:                 normal
    exadatacel02: status:                 normal
    exadatacel03: status:                 normal
    exadatacel04: status:                 normal
    exadatacel05: status:                 normal
    exadatacel06: status:                 normal
    exadatacel07: status:                 normal
    [root@exadatadb01 ~]#
    

    Now, verify that your cells are healthy and that asmdeactivationoutcome is YES; you can do this using the below command line:
    # dcli -g ~/cell_group -l root cellcli -e list griddisk attributes asmdeactivationoutcome, asmmodestatus
    
    I personally use cell-status.sh to get a nicer output:

    Let's now proceed and update the flashcachemode to WriteBack on cel07, dropping the flashcache first:
    [root@exadatadb01 ~]# ssh exadatacel07
    [root@exadatacel07 ~]# cellcli
    CellCLI: Release 19.2.2.0.0 - Production on Tue Oct 15 19:24:39 CDT 2019
    Copyright (c) 2007, 2016, Oracle and/or its affiliates. All rights reserved.
    CellCLI> list cell attributes flashcachemode
             WriteThrough
    CellCLI> drop flashcache
    Flash cache exadatacel07_FLASHCACHE successfully dropped
    CellCLI>
    
    Now, update the flashcache mode to WriteBack:
    CellCLI> alter cell flashCacheMode=writeback
    Cell exadatacel07 successfully altered
    CellCLI>
    
    Recreate the flashcache
    CellCLI> create flashcache all
    Flash cache exadatacel07_FLASHCACHE successfully created
    CellCLI>
    
    Verify
    CellCLI> list cell attributes flashcachemode
             WriteBack
    CellCLI>
    
    An overview of all the cells now:
    [root@exadatadb01 ~]# dcli -g ~/cell_group -l root "cellcli -e list cell attributes flashcachemode"
    exadatacel01: WriteBack
    exadatacel02: WriteBack
    exadatacel03: WriteBack
    exadatacel04: WriteBack
    exadatacel05: WriteBack
    exadatacel06: WriteBack
    exadatacel07: WriteBack
    [root@exadatadb01 ~]#
    

    We did it, and we did it 100% online ! Here you may want to use cell-status.sh to double-check that nothing has changed on your cells.


    Now, let's see how to do the same thing but on many cells in parallel:
    [root@anotherexadatadb01 ~]# dcli -g ~/cell_group -l root "cellcli -e list cell attributes flashcachemode"
    anotherexadatacel01: WriteBack
    anotherexadatacel02: WriteBack
    anotherexadatacel03: WriteBack
    anotherexadatacel04: WriteBack
    anotherexadatacel05: WriteBack
    anotherexadatacel06: WriteThrough
    anotherexadatacel07: WriteThrough
    anotherexadatacel08: WriteThrough
    anotherexadatacel09: WriteThrough
    anotherexadatacel10: WriteThrough
    anotherexadatacel11: WriteThrough
    anotherexadatacel12: WriteThrough
    [root@anotherexadatadb01 ~]#
    
    We need to set the FlashCache to WriteBack on cells 6 to 12 on this one, so we will create a file with these cells only:
    [root@anotherexadatadb01 ~]# cat ~/cell_group_6_to_12
    anotherexadatacel06
    anotherexadatacel07
    anotherexadatacel08
    anotherexadatacel09
    anotherexadatacel10
    anotherexadatacel11
    anotherexadatacel12
    [root@anotherexadatadb01 ~]#
    
    And verify the file
    [root@anotherexadatadb01 ~]# dcli -g ~/cell_group_6_to_12  -l root "cellcli -e list cell attributes flashcachemode"
    anotherexadatacel06: WriteThrough
    anotherexadatacel07: WriteThrough
    anotherexadatacel08: WriteThrough
    anotherexadatacel09: WriteThrough
    anotherexadatacel10: WriteThrough
    anotherexadatacel11: WriteThrough
    anotherexadatacel12: WriteThrough
    [root@anotherexadatadb01 ~]#
    
    Now check the flashcache status on cells 6 to 12
    [root@anotherexadatadb01 ~]#  dcli -g ~/cell_group_6_to_12 -l root cellcli -e list flashcache detail | grep status
    anotherexadatacel06: status:                 normal
    anotherexadatacel07: status:                 normal
    anotherexadatacel08: status:                 normal
    anotherexadatacel09: status:                 normal
    anotherexadatacel10: status:                 normal
    anotherexadatacel11: status:                 normal
    anotherexadatacel12: status:                 normal
    [root@anotherexadatadb01 ~]#
    
    Drop the flashcache on these cells
    [root@anotherexadatadb01 ~]#  dcli -g ~/cell_group_6_to_12 -l root cellcli -e drop flashcache
    anotherexadatacel06: Flash cache anotherexadatacel06_FLASHCACHE successfully dropped
    anotherexadatacel07: Flash cache anotherexadatacel07_FLASHCACHE successfully dropped
    anotherexadatacel08: Flash cache anotherexadatacel08_FLASHCACHE successfully dropped
    anotherexadatacel09: Flash cache anotherexadatacel09_FLASHCACHE successfully dropped
    anotherexadatacel10: Flash cache anotherexadatacel10_FLASHCACHE successfully dropped
    anotherexadatacel11: Flash cache anotherexadatacel11_FLASHCACHE successfully dropped
    anotherexadatacel12: Flash cache anotherexadatacel12_FLASHCACHE successfully dropped
    [root@anotherexadatadb01 ~]#
    
    Change the FlashCache mode to WriteBack
    [root@anotherexadatadb01 ~]#  dcli -g ~/cell_group_6_to_12 -l root cellcli -e "alter cell flashCacheMode=writeback"
    anotherexadatacel06: Cell anotherexadatacel06 successfully altered
    anotherexadatacel07: Cell anotherexadatacel07 successfully altered
    anotherexadatacel08: Cell anotherexadatacel08 successfully altered
    anotherexadatacel09: Cell anotherexadatacel09 successfully altered
    anotherexadatacel10: Cell anotherexadatacel10 successfully altered
    anotherexadatacel11: Cell anotherexadatacel11 successfully altered
    anotherexadatacel12: Cell anotherexadatacel12 successfully altered
    [root@anotherexadatadb01 ~]#
    
    Recreate the FlashCache
    [root@anotherexadatadb01 ~]#  dcli -g ~/cell_group_6_to_12 -l root cellcli -e create flashcache all
    anotherexadatacel06: Flash cache anotherexadatacel06_FLASHCACHE successfully created
    anotherexadatacel07: Flash cache anotherexadatacel07_FLASHCACHE successfully created
    anotherexadatacel08: Flash cache anotherexadatacel08_FLASHCACHE successfully created
    anotherexadatacel09: Flash cache anotherexadatacel09_FLASHCACHE successfully created
    anotherexadatacel10: Flash cache anotherexadatacel10_FLASHCACHE successfully created
    anotherexadatacel11: Flash cache anotherexadatacel11_FLASHCACHE successfully created
    anotherexadatacel12: Flash cache anotherexadatacel12_FLASHCACHE successfully created
    [root@anotherexadatadb01 ~]#
    
    Verify the flashcache mode
    [root@anotherexadatadb01 ~]# dcli -g ~/cell_group_6_to_12  -l root "cellcli -e list cell attributes flashcachemode"
    anotherexadatacel06: WriteBack
    anotherexadatacel07: WriteBack
    anotherexadatacel08: WriteBack
    anotherexadatacel09: WriteBack
    anotherexadatacel10: WriteBack
    anotherexadatacel11: WriteBack
    anotherexadatacel12: WriteBack
    [root@anotherexadatadb01 ~]#
    
    Check the FlashCache status
    [root@anotherexadatadb01 ~]# dcli -g ~/cell_group_6_to_12 -l root cellcli -e list flashcache detail | grep status
    anotherexadatacel06: status:                 normal
    anotherexadatacel07: status:                 normal
    anotherexadatacel08: status:                 normal
    anotherexadatacel09: status:                 normal
    anotherexadatacel10: status:                 normal
    anotherexadatacel11: status:                 normal
    anotherexadatacel12: status:                 normal
    [root@anotherexadatadb01 ~]#
    
    And we are all done on all these cells in parallel and 100% online !


    A last word about this: have a look at the cachingpolicy of your grid disks, as it may have an impact when you enable WriteBack; a quick way of checking it is shown below.
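    A hedged example of how to check it across all the cells (same dcli group file as used above):
    # Show the caching policy of every grid disk on every cell
    dcli -g ~/cell_group -l root "cellcli -e list griddisk attributes name, cachingpolicy"
    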
