Shortcuts

  • How to install a brand new Exadata (X2 to X7) / the blog is here
  • How to Patch Exadata / Upgrade Exadata to 18c and 19c / the blog is here
  • How to re-image an Exadata database server / the blog is here
  • How to re-image an Exadata cell storage server / the blog is here
  • asmdu.sh: A clear output of the ASM diskgroups sizes and usage / the code is here
  • rac-status.sh: A GI 12c resources status output in a glimpse / the code is here
  • rac-mon.sh: A GI 12c/18c monitoring tool based on rac-status.sh / the code is here
  • yal.sh: Yet Another Launcher ! / the code is here
  • exa-versions.sh: Exadata components versions in a glimpse / the code is here
  • cell-status.sh: An overview of your Exadata disks / the code is here
  • lspatches.sh: An Oracle patch reporting tool / the code is here
  • exa-racklayout.sh: Each component of an Exadata in a Rack Layout style / the code is here
  • rac-on_all_db.sh: Execute a SQL on all databases running on a cluster / the code is here
  • . . . more to come . . .
    YAL : Yet Another Launcher !

    Whatever your database technology and architecture, and whether you are with a Cloud vendor, multi-Cloud, on-premises or a mix of all of that, you most likely connect to your servers the following way:
    • 1/ ssh to a jump server as a non-privileged user (fdenis in my case, for example)
    • 2/ ssh to a server as a non-privileged user (opc, for example, in Oracle Cloud)
    • 3/ sudo to a more privileged user (oracle or grid in the Oracle world, for example)
    • 4/ do something
    . . . and you repeat this for every server, which leads to the workflow below:
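    Roughly, that workflow looks like the following transcript (an illustrative sketch only; laptop, jump and server1 are just example host names):
     [fdenis@laptop]$ ssh fdenis@jump           # 1/ jump server, non-privileged user
     [fdenis@jump]$ ssh opc@server1             # 2/ target server, non-privileged user
     [opc@server1]$ sudo su - oracle            # 3/ more privileged user
     [oracle@server1]$ ./rac-status.sh          # 4/ do something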

    This is indeed good from a security perspective, but it becomes more problematic when you want to:
    • Execute some commands on each server or a subset of those servers
    • Execute a script on each server or a subset of those servers
    • Copy a file on each server or a subset of those servers
    • . . . well . . . efficiently do your admin job . . .
    One could argue that tools already exist to achieve this: Ansible, Chef, Puppet, OEM, you name it. Indeed these tools exist, but first of all they are not standard (meaning not on all Unix boxes by default) and they are far too complicated for simple things like (and this was my starting point) executing asmdu.sh or rac-status.sh on all my servers.
    The closest simple and efficient tool I know of (maybe there are others) to achieve this kind of thing is dcli, but this tool is Exadata specific. Also, dcli does not allow you to execute a script through sudo on the remote server, nor a few other things I will describe below among yal's features.

    YAL

    So, once more, when something does not exist or does not fit my needs, I used my favorite and standard vi to write a standard shell script, yal.sh aka Yet Another Launcher, with the goal of easily achieving what I described above. The only requirement is to have SSH keys deployed from the jump server to the target servers (it is not strictly a requirement, but being prompted for a password at every server connection would be painful).
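    If the keys are not deployed yet, a minimal sketch using the standard OpenSSH tools (user and server names as in the examples below) would be:
     [fdenis@jump]$ ssh-keygen -t rsa -b 4096         # generate a key pair if you do not already have one
     [fdenis@jump]$ ssh-copy-id opc@server1           # push the public key to each target server
     [fdenis@jump]$ ssh-copy-id opc@server2
     [fdenis@jump]$ ssh opc@server1 uptime            # should now work without a password prompt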

    A first example

    Let's start with a first example and just execute the "uptime" and the "id" commands on a set of servers:
    [fdenis@jump]$ ./yal.sh -c server1,server2 -l opc -b "uptime;id"
    BEGIN on server1 : Thu Sep 12 02:56:01 UTC 2019
     02:56:01 up 125 days, 14:41,  0 users,  load average: 2.04, 2.36, 2.25
    uid=54322(opc) gid=54323(opc) groups=54323(opc)
    END   on server1 : Thu Sep 12 02:56:05 UTC 2019
    ELAPSED : 7 seconds
    BEGIN on server2 : Thu Sep 12 02:56:08 UTC 2019
     02:56:08 up 125 days, 14:38,  0 users,  load average: 3.30, 2.38, 2.09
    uid=54322(opc) gid=54323(opc) groups=54323(opc)
    END   on server2 : Thu Sep 12 02:56:11 UTC 2019
    ELAPSED : 6 seconds
    [fdenis@jump]$
    
    A first explanation of the options used:
    • -c: a comma separated list of target servers
    • -l: the user to use to login to the target servers
    • -b: commands to execute "before". Before what? Before copying or executing a script, as I will show below. There is also a -a option to execute a command "after" a script has been executed or copied on a target server
    Note that the output also shows the server name, the current date and the elapsed time of the operation for each server. These headers can be modified in the script (you may not want to modify them, though):
          HEADER="echo BEGIN on \`hostname -s\` : \`date\`"     # Header to print before execution on  target
          FOOTER="echo END\ \ \ on \`hostname -s\` : \`date\`"  # Footer to print after  execution on a target
    SHOW_ELAPSED="yes"                                          # Show the elapsed time
    

    Target servers

    The previous example shows that -c is used to select a comma separated list of target servers; you can also use the -g option to specify a file containing a list of servers:
    [fdenis@jump]$ cat alist 
    server1
    server2
    [fdenis@jump]$ ./yal.sh -g alist -l opc -b "uptime;id"
    BEGIN on server1 : Thu Sep 12 03:11:38 UTC 2019
     03:11:38 up 125 days, 14:57,  0 users,  load average: 3.89, 2.88, 2.48
    uid=54322(opc) gid=54323(opc) groups=54323(opc)
    END   on server1 : Thu Sep 12 03:11:41 UTC 2019
    ELAPSED : 6 seconds
    BEGIN on server2 : Thu Sep 12 03:11:44 UTC 2019
     03:11:44 up 125 days, 14:53,  0 users,  load average: 2.46, 2.21, 2.15
    uid=54322(opc) gid=54323(opc) groups=54323(opc)
    END   on server2 : Thu Sep 12 03:11:47 UTC 2019
    ELAPSED : 6 seconds
    [fdenis@jump]$
    

    sudo

    As described earlier, you may not want to run your commands, copy files or execute your scripts as the user you log in with, since that user is most likely non-privileged; you may want to sudo instead. yal can do that: specify the user you want to execute as with the -e option:
    [fdenis@jump]$ ./yal.sh  -g alist -l opc -b "uptime;id" -e oracle
    BEGIN on server1 : Thu Sep 12 03:15:55 UTC 2019
     03:15:55 up 125 days, 15:01,  0 users,  load average: 1.88, 2.26, 2.32
    uid=101(oracle) gid=1001(oinstall) groups=1001(oinstall),1002(dbaoper),1003(dba),1006(asmdba)
    END   on server1 : Thu Sep 12 03:15:58 UTC 2019
    ELAPSED : 6 seconds
    BEGIN on server2 : Thu Sep 12 03:16:02 UTC 2019
     03:16:02 up 125 days, 14:58,  0 users,  load average: 1.76, 1.92, 2.05
    uid=101(oracle) gid=1001(oinstall) groups=1001(oinstall),1002(dbaoper),1003(dba),1006(asmdba)
    END   on server2 : Thu Sep 12 03:16:05 UTC 2019
    ELAPSED : 6 seconds
    [fdenis@jump]$
    On older systems, you may not be able to sudo through a script, as no terminal is allocated, and you would then receive a "sudo: sorry, you must have a tty to run sudo" error. This behavior has been acknowledged and fixed by Red Hat, as very well explained in this post. You will basically face this issue if your /etc/sudoers still contains:
    Defaults requiretty
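    A common way to address it (a sketch only; always edit sudoers with visudo and follow your own security standards) is to relax requiretty just for the login user rather than globally:
     # in /etc/sudoers (via visudo) -- illustrative only
     # Defaults    requiretty              # either comment out the global directive . . .
     Defaults:opc  !requiretty             # . . . or disable it only for the user you log in with (opc here)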

    Execute a script

    yal can indeed execute a script (with the -x option) on each specified target server. The script has to be on your jump server, where you execute yal, not on the target server; yal will then:
    • Copy the script on the target server (in /tmp by default, this can be modified with the -d option)
    • Execute the script on the target server
    • Remove the script from the target server
    The example below will, on each target server listed in the alist file:
    • Connect as user opc
    • Sudo to user oracle
    • Execute what is specified in the -b option before executing the script
    • Execute the rac-status.sh script
    • Execute what is specified in the -a option after executing the script
    • Show everything server by server with the elapsed time on each server
    Let's have a look at this in action:
    [fdenis@jump]$ ./yal.sh  -g alist -l opc -b "echo before" -e oracle -a "echo after" -x rac-status.sh 
    BEGIN on server1 : Thu Sep 12 03:32:41 UTC 2019
    before
    Script rac-status.sh successfully copied to /tmp/rac-status.sh71743
    
                    Cluster blablacluster
    
          Listener   |      Port      |   server1   |     Type     |
      -------------------------------------------------------------
       LISTENER      | TCP:1521       |    Online   |   Listener   |
       LISTENER_SCAN1| TCP:1521       |      -      |     SCAN     |
      -------------------------------------------------------------
      
             DB      |     Version    |   server1   |    DB Type   |
      -------------------------------------------------------------
       a_database    | 11.2.0.4   (1) |     Open    |  SINGLE (P)  |
      -------------------------------------------------------------
      ORACLE_HOME references listed in the Version column 
      
             1 : /u01/app/oracle/product/11.2.0.4/dbhome_1  oracle oinstall
    
    Script /tmp/rac-status.sh71743 removed successfully
    after
    END   on server1 : Thu Sep 12 03:32:50 UTC 2019
    ELAPSED : 13 seconds
    BEGIN on server2 : Thu Sep 12 03:32:54 UTC 2019
    before
    Script rac-status.sh successfully copied to /tmp/rac-status.sh71743
    
                    Cluster anothercluster
    
          Listener   |      Port      |   server2   |     Type     |
      -------------------------------------------------------------
       LISTENER      | TCP:1521       |    Online   |   Listener   |
       LISTENER_SCAN1| TCP:1521       |      -      |     SCAN     |
      -------------------------------------------------------------
      
             DB      |     Version    |   server2   |    DB Type   |
      -------------------------------------------------------------
       myprod_db     | 12.1.0.2   (1) |     Open    |  SINGLE (P)  |
      -------------------------------------------------------------
      ORACLE_HOME references listed in the Version column 
      
             1 : /u01/app/oracle/product/12.1.0.2/dbhome_1  oracle oinstall
    
    Script /tmp/rac-status.sh71743 removed successfully
    after
    END   on server2 : Thu Sep 12 03:33:02 UTC 2019
    ELAPSED : 11 seconds
    
    Note that the copy and deletion of the temporary file used to execute the script are shown; you can deactivate these messages with the -q (quiet) option.

    Options to execute scripts

    As you may want to pass options to the scripts you execute, I made the -O option (upper case letter O, as lower case o is already taken):
    [fdenis@jump]$ ./yal.sh  -g alist -l opc -e oracle -x rac-status.sh -O "-n -d" -q
    
    Note that you'll have to double quote your script options so they do not get mixed up with the yal options.

    Copy a file

    You can also simply copy a file to target servers using the -f option and the -d option to specify the target directory:
    [fdenis@jump]$ ./yal.sh  -g alist -l opc -b "ls -ltr /tmp/asmdu.sh" -a "ls -ltr /tmp/asmdu.sh" -f asmdu.sh -d /tmp
    BEGIN on server1 : Thu Sep 12 03:46:57 UTC 2019
    ls: cannot access /tmp/asmdu.sh: No such file or directory
    Script asmdu.sh successfully copied to /tmp/asmdu.sh
    -rwxr--r-- 1 opc opc 12292 Sep 12 03:47 /tmp/asmdu.sh
    END   on server1 : Thu Sep 12 03:47:04 UTC 2019
    ELAPSED : 9 seconds
    BEGIN on server2 : Thu Sep 12 03:47:07 UTC 2019
    ls: cannot access /tmp/asmdu.sh: No such file or directory
    Script asmdu.sh successfully copied to /tmp/asmdu.sh
    -rwxr--r-- 1 opc opc 12292 Sep 12 03:47 /tmp/asmdu.sh
    END   on server2 : Thu Sep 12 03:47:14 UTC 2019
    ELAPSED : 10 seconds
    [fdenis@jump]$ 

    SSH options

    To best adapt to your environment and the SSH security in place in your environments, you can change the SSH and SCP options using the -s and -S options.
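    For example (the values are illustrative; -qT and -q are the defaults you can see in the -o output further below), you could add an SSH connection timeout:
     [fdenis@jump]$ ./yal.sh -c server1,server2 -l opc -b "uptime" -s "-qT -o ConnectTimeout=5" -S "-q"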

    A config file

    You can also specify your favorite options in a config file (named .yal.config by default) to avoid repeating them on every command line:
    [fdenis@jump]$ cat .yal.config 
    SSH_OPTIONS="-qT -i /tmp/fred.pub"
           USER_TO_EXEC = root             # comment
    [fdenis@jump]$ 
    
    Note that a value assigned to a parameter in the config file overrides the default but is overridden by a command line option. Consider the config file as your own defaults.

    Show the options

    As you may end up with many options in your command line, I made a -o option to show (and only show, not execute anything) the options of the current command line:
    [fdenis@jump]$ ./yal.sh  -g alist -l opc -b "ls -ltr /tmp/asmdu.sh" -e oracle -a "ls -ltr /tmp/asmdu.sh" -f asmdu.sh -d /tmp -o
    Option              | Value                  |
    ----------------------------------------------
    -a: After           | ls -ltr /tmp/asmdu.sh  |
    -b: Before          | ls -ltr /tmp/asmdu.sh  |
    -c: Servers List    | server1,server2        |
    -d: Dest            | /tmp                   |
    -e: User to exec    | oracle                 |
    -f: File to copy    | asmdu.sh               |
    -g: Group File      | alist                  |
    -l: User to login   | opc                    |
    -q: Quiet           | NO                     |
    -s: SSH Options     | -qT                    |
    -S: SCP Options     | -q                     |
    -x: Script to exec  |                        |
        Config File     | .yal.config            |
    ----------------------------------------------
    [fdenis@jump]$
    

    -h for help and -V for the script version

    Feel free to use -h to show the script help and -V to show the script version.

    yal instead of dcli ?

    I am a big fan of dcli, so I used the same option names for yal and added more that I found nice to have. I must admit that I sometimes use yal instead of dcli on Exadata, but to each his own :)

    Get the code



    I hope you'll enjoy yal as much as I do!

    out of place rolling patching aka mitigate the DB downtime when patching like a pro

    Patching is an important task in every DBA's life (and we patch quite often in the Oracle world :)) and the task is fairly easy (if well prepared). What is trickier, though, is how to mitigate the downtime and the risks, especially when you have no standby. Indeed, you do not want hours of downtime to apply a patch / PSU / RU, nor do you want to be unable to easily roll back your patch / PSU / RU if something goes wrong. Also, you do not want to spend your whole weekend applying patches, so the more you can do during weekdays the better.

    Fortunately, this is the purpose of the "out of place rolling patching" method, with which you can achieve a patch application with very close to zero downtime (as well as close to zero risk). Let's see how it works.

    0/ Introduction

    A few points to keep in mind to get the most out of this blog:
    • The source database home: /u01/app/oracle/product/12.2.0.1/dbhome_1
    • The target database home: /u01/app/oracle/product/12.2.0.1/dbhome_2
    • The example below is executed on a 2-node RAC (please have a look at the end of this blog if you have no RAC); it is exactly the same with more nodes
    • This method works for any patch application, whether you apply a one-off patch or a PSU/RU, and whatever your database version is
    • I may refer to the database ORACLE_HOME as "OH"
    • The example below has been executed on Exadata, but it works exactly the same way on non-Exadata systems

    1/ Out of place

    The first words in "out of place rolling" are "out of place", which means that we won't patch the source OH (dbhome_1) but a cloned OH (dbhome_2).

    1.1/ OH Clone

    First of all, we will be cloning the current OH to a new one.
    $ cp -r /u01/app/oracle/product/12.2.0.1/dbhome_1 /u01/app/oracle/product/12.2.0.1/dbhome_2
    $ cd /u01/app/oracle/product/12.2.0.1/dbhome_2/oui/bin
    $ ./runInstaller -clone -waitForCompletion ORACLE_HOME="/u01/app/oracle/product/12.2.0.1/dbhome_2" ORACLE_HOME_NAME="JULY2019" ORACLE_BASE="/u01/app/oracle" CLUSTER_NODES="{exadatadb01,exadatadb02}" LOCAL_NODE="exadatadb01" -silent
    
    You now have a new cloned OH (dbhome_2) which is exactly the same as your current one (dbhome_1).

    1.2/ Patch application against dbhome_2

    We will now apply the patch(es) we want against our target OH dbhome_2 (below is an example with the July 2019 RU, but it can be any patch):
    # Upgrade opatch
    . . .
    # Patches pre-requisites 
    . . .
    # Apply OJVM
    $ cd /patches/29698987/Database/12.2.0.1.0/12.2.0.1.190716OJVMRU/29774415
    $ $ORACLE_HOME/OPatch/opatch apply
    
    # Apply PSU (as root)
    # /u01/app/oracle/product/12.2.0.1/dbhome_2/OPatch/opatchauto apply /patches/29698987/Database/12.2.0.1.0/12.2.0.1.190716GIRU/29708720 -oh /u01/app/oracle/product/12.2.0.1/dbhome_2
    
    Your dbhome_2 OH is now ready and fully patched; all of this has been done with no disruption to the production database running on the same server, and you did all this work during your weekdays. Cool, right?
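    Before moving on, a quick optional check (a sketch, not part of the original procedure) is to list the patches now installed in dbhome_2:
     $ /u01/app/oracle/product/12.2.0.1/dbhome_2/OPatch/opatch lspatches
     . . .                                    # should list the July 2019 OJVM and RU patches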

    1.3/ Clone dbhome_2 on the other nodes

    Now that we have our dbhome_2 ready, it is time to clone it on the other nodes:
    $ cd /u01/app/oracle/product/12.2.0.1
    $ tar czf dbhome_2.tgz dbhome_2
    $ scp dbhome_2.tgz oracle@exadatadb02:/u01/app/oracle/product/12.2.0.1/.
    $ ssh exadatadb02
    $ cd /u01/app/oracle/product/12.2.0.1
    $ tar xf dbhome_2.tgz
    $ cd dbhome_2/oui/bin
    $ ./runInstaller -clone -waitForCompletion ORACLE_HOME="/u01/app/oracle/product/12.2.0.1/dbhome_2" ORACLE_HOME_NAME="JULY2019" ORACLE_BASE="/u01/app/oracle" CLUSTER_NODES="{exadatadb01,exadatadb02}" LOCAL_NODE="exadatadb02" -silent
    
    All right, we are now ready to patch our production database!

    2/ Rolling

    Now that we have taken care of the "out of place" part by pre-patching the dbhome_2 target OH, we will do the "rolling" part, meaning that we will restart the database from the source OH dbhome_1 to the target OH dbhome_2, instance by instance.

    2.1/ Starting position

    To illustrate this, I will use outputs from the rac-status.sh script to make it clear and visible. Here is the starting point with a JULY19 database I created using dbca (this procedure has also been successfully applied to "real" databases full of data, but as those names are confidential, I show the procedure with a database I created myself):

    We can see here a 2-node RAC with databases running on the dbhome_1 OH; the JULY19 database is the one we will migrate to our already patched target OH dbhome_2. Note that dbhome_2 is not shown by rac-status.sh; this is because the OHs are not registered in the cluster. It may be the case in the future, as discussed in this blog, but so far they are not.

    2.2/ Update database configuration

    As the goal here is to restart the JULY19 instances on dbhome_2, we first need to update the database configuration which resides in the cluster:
    $ srvctl config database -d july19                         # ORACLE_HOME should point to dbhome_1
    $ srvctl modify database -db july19 -oraclehome /u01/app/oracle/product/12.2.0.1/dbhome_2
    $ srvctl config database -d july19                         # ORACLE_HOME should now point to dbhome_2
    

    2.3/ Shutdown instance on node 1

    Let's now shutdown the instance 1 from dbhome_1:
    $ srvctl stop instance -d july19 -i july191
    
    This leads to the status below:
    We can see here that instance 1 is properly shut down and instance 2 has a very interesting status: "Open,Running from Old Oracle Home". Also, rac-status.sh now shows that JULY19 is running on dbhome_2; indeed, we have updated the cluster configuration, so JULY19 is now supposed to run on dbhome_2! Instance 2 is still open for connections and no session connected to instance 2 has been disconnected.
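    If you do not use rac-status.sh, you can also see this state directly from the clusterware (a sketch; the resource name assumes a db_unique_name of july19):
     $ crsctl stat res ora.july19.db -t       # STATE_DETAILS should show "Open,Running from Old Oracle Home" for instance 2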

    2.4/ Restart instance on node 1

    We can then restart the instance 1:
    $ srvctl start instance -d july19 -i july191
    
    And let's have a look at the result:
    As expected, both instances are now open: instance 1 running on dbhome_2 and instance 2 still on the old dbhome_1.

    2.5/ Restart instance 2 on dbhome_2

    You can guess the next actions now; let's stop instance 2 from dbhome_1:
    $ srvctl stop instance -d july19 -i july192
    
    To get the below status:

    And restart it on dbhome_2:
    $ srvctl start instance -d july19 -i july192
    
    To be in the final status:
    The "out of place rolling" has been succesfully executed !

    3/ Datapatch

    3.1/ Update oratab

    Now that our instances are running on dbhome_2, it is a good time to update oratab (on all nodes) so you do not end up using the old environment:
    $ dcli -g ~/dbs_group -l oracle grep -i july /etc/oratab
    exadatadb01: july191:/u01/app/oracle/product/12.2.0.1/dbhome_2:N
    exadatadb02: july192:/u01/app/oracle/product/12.2.0.1/dbhome_2:N
    $
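    The dcli output above shows the result after the change; to make the change itself, a small sed one-liner can help (a sketch only: it edits /etc/oratab in place keeping a .bak backup; run it on each node and double check the result):
     $ sed -i.bak '/^july19/ s|dbhome_1|dbhome_2|' /etc/oratab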
    

    3.2/ Datapatch

    If you look at dba_registry_sqlpatch, you will see that even though your instances have been restarted on the fully patched home dbhome_2, the patches are NOT yet applied against the database (below we still see the July 2018 patches and not the July 2019 patches we want to apply):
    SQL> @check_patches
    
    ACTION_TIME          ACTION     STATUS     DESCRIPTION                                                                      VERSION      PATCH_ID BUNDLE_SER
    -------------------- ---------- ---------- -------------------------------------------------------------------------------- ---------- ---------- ----------
    15-AUG-2019 03:30:44 APPLY      SUCCESS    OJVM RELEASE UPDATE: 12.2.0.1.170718 (25811364)                                  12.2.0.1     25811364
    15-AUG-2019 03:30:45 APPLY      SUCCESS    DATABASE JUL 2018 RELEASE UPDATE 12.2.0.1.180717                                 12.2.0.1     28163133 DBRU
    
    SQL> get check_patches.sql
      1  SET LINESIZE 400
      2  SET PAGESIZE 100
      3  COLUMN action_time FORMAT A20
      4  COLUMN action FORMAT A10
      5  COLUMN status FORMAT A10
      6  COLUMN description FORMAT A80
      7  COLUMN version FORMAT A10
      8  COLUMN bundle_series FORMAT A10
      9  SELECT TO_CHAR(action_time, 'DD-MON-YYYY HH24:MI:SS') AS action_time,
     10   action,
     11   status,
     12   description,
     13   version,
     14   patch_id,
     15   bundle_series
     16   FROM   sys.dba_registry_sqlpatch
     17*  ORDER by action_time;
     18  /
    
    This is expected and it is datapatch's job to apply these SQL patches:
    $ . oraenv <<< july191
    $ echo $ORACLE_HOME                         # This has to be /u01/app/oracle/product/12.2.0.1/dbhome_2
    $ cd $ORACLE_HOME/OPatch
    $ ./datapatch -verbose  
    . . .
    
    Once datapatch has been successfully executed, you will see the correct July 19 patches properly installed:
    SQL> @check_patches
    
    ACTION_TIME          ACTION     STATUS     DESCRIPTION                                                                      VERSION      PATCH_ID BUNDLE_SER
    -------------------- ---------- ---------- -------------------------------------------------------------------------------- ---------- ---------- ----------
    26-AUG-2019 21:40:26 APPLY      SUCCESS    OJVM RELEASE UPDATE: 12.2.0.1.170718 (25811364)                                  12.2.0.1     25811364
    26-AUG-2019 21:40:27 APPLY      SUCCESS    DATABASE JUL 2018 RELEASE UPDATE 12.2.0.1.180717                                 12.2.0.1     28163133 DBRU
    26-AUG-2019 22:04:46 APPLY      SUCCESS    OJVM RELEASE UPDATE: 12.2.0.1.190716 (29774415)                                  12.2.0.1     29774415
    26-AUG-2019 22:04:46 APPLY      SUCCESS    DATABASE JUL 2019 RELEASE UPDATE 12.2.0.1.190716                                 12.2.0.1     29757449 DBRU
    26-AUG-2019 22:04:46 ROLLBACK   SUCCESS    OJVM RELEASE UPDATE: 12.2.0.1.170718 (25811364)                                  12.2.0.1     25811364
    
    SQL>
    

    4/ Sum up / downtime and risk mitigation

    To sum up and to be exhaustive:
    • We have mitigated the risk to, I would say, zero by patching out of place. If something goes wrong, or if the patch(es) you applied cause more issues than they solve and you need to fall back, you just have to restart your instances on dbhome_1, execute datapatch from dbhome_1, and you are done!
    • The downtime has been mitigated by the rolling method, and we can do even better by using services, which I did not emphasize earlier in order to keep the focus on the rolling-instance method (a sketch of the service relocation commands follows this list). A fully downtime-mitigated patching sequence would be:
      • Stop services on node 1 / relocate to node 2
      • Stop / restart instance 1 from dbhome_1 to dbhome_2
      • Stop services on node 2 / relocate to node 1
      • Stop / restart instance 2 from dbhome_1 to dbhome_2
      • Datapatch
      • Relocate the services as you like
      Following this exact method, you can achieve zero downtime because:
      • Relocating a service does not kill any session by default
      • You can wait for any session to finish before restarting the instances; there is no time limit on staying in "Open,Running from Old Oracle Home" mode, so take your time, the users will thank you !
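    As an illustration of the service relocation steps above, below is a sketch with a hypothetical service named oltp_svc (drain timeouts and the final service placement are up to you):
     $ srvctl relocate service -db july19 -service oltp_svc -oldinst july191 -newinst july192   # move the service away from instance 1
     $ srvctl stop instance -d july19 -i july191
     $ srvctl start instance -d july19 -i july191
     $ srvctl relocate service -db july19 -service oltp_svc -oldinst july192 -newinst july191   # move the service away from instance 2
     $ srvctl stop instance -d july19 -i july192
     $ srvctl start instance -d july19 -i july192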

    If you have no RAC, you can still use the same method; the only difference is that you won't completely avoid downtime. You can still mitigate it seriously, though, as the only downtime you will suffer is when you stop / restart the instance on dbhome_2, which is what ... 1 minute?
    And as we are in a renewable world, I would suggest you reuse dbhome_1 the next time you do an out of place rolling patch (from dbhome_2 to dbhome_1)!
    A last thing: since 12.2 you can also do this using opatchauto. As I am a bit cautious with new features, and knowing that all of this is easy, I still prefer doing it this way!


    Enjoy the out of place rolling way of patching !

    dbca: ORA-20001: Statistics Advisor: Invalid task name for the current user

    It looks like using dbca is nowadays quite an adventure; indeed, after having to use a few workarounds to create a simple RAC database, my dbca-created 12.2 database had an alert.log that grew to around 200 GB in a week, spamming this error message:

    2019-09-01 18:53:20.438000 -04:00
    Errors in file /u01/app/oracle/diag/rdbms/july2019/july20191/trace/july20191_j000_29383.trc:
    ORA-12012: error on auto execute of job "SYS"."ORA$AT_OS_OPT_SY_1034"
    ORA-20001: Statistics Advisor: Invalid task name for the current user
    ORA-06512: at "SYS.DBMS_STATS", line 47207
    ORA-06512: at "SYS.DBMS_STATS_ADVISOR", line 882
    ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 20059
    ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 22201
    ORA-06512: at "SYS.DBMS_STATS", line 47197
    

    This comes from the known bug 25710407, caused by the Stats Advisor tasks not being created; the fix is fairly easy:
    SQL> select name, ctime, how_created
    from sys.wri$_adv_tasks
    where owner_name = 'SYS'
    and name in ('AUTO_STATS_ADVISOR_TASK','INDIVIDUAL_STATS_ADVISOR_TASK');
    
    no rows selected
    
    SQL> EXEC dbms_stats.init_package();
    
    PL/SQL procedure successfully completed.
    
    SQL> select name, ctime, how_created
    from sys.wri$_adv_tasks
    where owner_name = 'SYS'
    and name in('AUTO_STATS_ADVISOR_TASK','INDIVIDUAL_STATS_ADVISOR_TASK');
    
    NAME                              CTIME    HOW_CREATED
    -----------------------        ---------  -----------------
    AUTO_STATS_ADVISOR_TASK         01-SEP-19   CMD
    INDIVIDUAL_STATS_ADVISOR_TASK   01-SEP-19   CMD
    
    SQL>
    

    The key is to know about it before the database trace / log files fill up your filesystem!
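    If the filesystem has already taken a hit, the accumulated trace files can be purged with adrci (a sketch only; the ADR home matches the trace path shown above, and -age is expressed in minutes):
     $ adrci exec="set home diag/rdbms/july2019/july20191; purge -age 1440 -type trace"    # remove trace files older than 1 day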
