
Saturday 10 July 2021

Oracle Restart 19c: silent installation and ASM Filter Driver

 

How do I install ASMFD together with Oracle Restart 19c?

There are essentially 2 ways of installing Oracle Restart 19c with ASMFD:

  • Using UDEV to change permissions on all future ASM disks
  • Labeling future ASM disks using asmcmd to achieve the same goal

According to the certification matrix (MOS 1369107.1), it also matters which Oracle Linux 7/kernel combination you are using.

The easiest thing to do should be switching the Oracle Linux from UEK to the Red Hat Compatible Kernel, and I’m going to write about that first. The simplicity gained by using RHCK is slightly offset by operational caveats such as kernel upgrades etc. But this is a post about Oracle Restart, not the intricacies of switching from UEK to RHCK …

For quite a while now, UEK 5 has been the default kernel for Oracle Linux 7. If you’d like to install Oracle Restart 19c/ASMFD on UEK 5 you can’t do that out of the box, a little magic is necessary.
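Whether a given box runs UEK or RHCK is easy to tell from uname -r. The following is a minimal sketch (kernel_flavour is a hypothetical helper name; the match on the uek substring is an assumption based on Oracle's kernel naming convention):

```shell
# Sketch: classify the running kernel as UEK or RHCK based on the
# "uek" marker Oracle puts into its kernel release strings.
kernel_flavour() {
  # use the supplied string if given, otherwise the running kernel
  release="${1:-$(uname -r)}"
  case "$release" in
    *uek*) echo "UEK" ;;
    *)     echo "RHCK" ;;
  esac
}

kernel_flavour "4.14.35-1902.300.11.el7uek.x86_64"   # prints UEK
kernel_flavour "3.10.0-1062.18.1.el7.x86_64"         # prints RHCK
```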

The following is a list of things I hope to write in the upcoming days. It’s all about a silent installation of Oracle Restart 19c for use with ASMFD:

  • Part 1: RHCK and disk labeling using asmcmd afd_label
  • Part 2: UEK 5 and the use of UDEV rules

👉Part 1: Silent installation: Oracle Restart 19c, ASM Filter Driver, RHCK edition

This part covers installing Oracle Restart 19c on Oracle Linux 7.7 using the Red Hat Compatible Kernel (RHCK). Please consult the ACFS/ASMFD compatibility matrix (My Oracle Support DocID 1369107.1) for the latest information about ASMFD compatibility with the various kernels.

Why am I starting the series with a seemingly “odd” kernel, at least from the point of view of Oracle Linux? If you try to install the Oracle Restart base release with UEK 5, you get strange error messages back from gridSetup telling you about invalid ASM disks. While that’s probably true, it’s a secondary error. The main cause of the problem is this:

[root@server5 bin]# ./afddriverstate supported
AFD-620: AFD is not supported on this operating system version: '4.14.35-1902.300.11.el7uek.x86_64'
AFD-9201: Not Supported
AFD-9294: updating file /etc/sysconfig/oracledrivers.conf 

This is easy to run into, since gridSetup.sh doesn’t validate this for you when running in silent mode. The GUI version of the installer protects you from the mistake, though. Upgrading to the latest UEK 5 doesn’t change this message; you need to check the certification matrix to learn that Oracle Restart 19.4.0 or later is required for UEK 5 if you’d like to use ASMFD (or ACFS, for that matter). This scenario will be covered in a later post.

Using the Red Hat Compatible Kernel alleviates this problem for me. Just be aware of the usual caveats when using the Red Hat kernel on Oracle Linux, such as yum changing the default kernel during yum upgrade. I’d also like to reiterate that this post isn’t an endorsement of ASM Filter Driver, but since the documentation was a little unclear I thought I’d write up how I got to a working installation. It is up to you to ensure that ASMFD is a workable solution for your environment by following industry best practices.

Configuration Options

In the post introducing this series I claimed to have identified 2 options for installing Oracle Restart 19c using ASMFD: the first one is to use UDEV to prepare ASM block devices, the second one is to label the ASM disks using asmcmd afd_label.

Huh, UDEV? That hasn’t really been blogged about at all in the context of ASMFD, or at least I didn’t find anyone who did. I’m inferring the possibility of using UDEV from “Configuring Oracle ASM Filter Driver During Installation” (link to documentation):

“If you do not use udev on the system where the Oracle Grid Infrastructure is installed, then you can also complete the following procedure to provision disks for Oracle ASMFD before the installer is launched”

You actually only have to choose one of them. Let’s start with the more frequently covered approach of labelling disks using asmcmd.

My environment

I have applied all patches up to March 26th to my lab environment. The Oracle Linux release I’m using is 7.7:

[root@server4 ~]# cat /etc/oracle-release
Oracle Linux Server release 7.7 

The KVM VM I’m using for this blog post uses the latest Red Hat Compatible Kernel at the time of writing (kernel-3.10.0-1062.18.1.el7.x86_64). You will notice that I’m using the virtio driver, leading to “strange” device names. Instead of /dev/sd it’s /dev/vd. My first two block devices are reserved for the O/S and Oracle, the remaining ones are going to be used for ASM. I have an old (bad?) habit of partitioning block devices for ASM as you might notice. Most of the Oracle setup is done by the 19c preinstall RPM, which I used.

I created a grid owner – grid – to own the Oracle Restart installation. Quite a few blog posts I came across referenced group membership, and I’d like to do the same:

[root@server4 ~]# id -a grid 
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54328(asmadmin),54327(asmdba) 

The block devices I’m intending to use for ASM are /dev/vdc to /dev/vdf – the first 2 are intended for +DATA, the other 2 will become part of +RECO. As you can see they are partitioned:

[root@server4 ~]# lsblk --ascii
NAME                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vdf                   251:80   0   10G  0 disk 
`-vdf1                251:81   0   10G  0 part 
vdd                   251:48   0   10G  0 disk 
`-vdd1                251:49   0   10G  0 part 
vdb                   251:16   0   50G  0 disk 
`-vdb1                251:17   0   50G  0 part 
  `-oraclevg-orabinlv 252:2    0   50G  0 lvm  /u01
sr0                    11:0    1 1024M  0 rom  
vde                   251:64   0   10G  0 disk 
`-vde1                251:65   0   10G  0 part 
vdc                   251:32   0   10G  0 disk 
`-vdc1                251:33   0   10G  0 part 
vda                   251:0    0   12G  0 disk 
|-vda2                251:2    0 11.5G  0 part 
| |-rootvg-swaplv     252:1    0  768M  0 lvm  [SWAP]
| `-rootvg-rootlv     252:0    0 10.8G  0 lvm  /
`-vda1                251:1    0  500M  0 part /boot  

With all that out of the way it is time to cover the installation.

Labeling disks

I’m following the procedure documented in the 19c Administrator’s Guide chapter 20, section “Configuring Oracle ASM Filter Driver During Installation”. I have prepared my environment up to the step where I’d have to launch gridSetup.sh. This is a fairly well known process, and I won’t repeat it here.

Once the 19c install image has been extracted to my future Grid Home, the first step is to check if my system is supported:

[root@server4 ~]# cd /u01/app/grid/product/19.0.0/grid/bin
[root@server4 bin]# ./afddriverstate supported
AFD-9200: Supported 
[root@server4 bin]# uname -r
3.10.0-1062.18.1.el7.x86_64 

“AFD-9200: Supported” tells me that I can start labeling disks. This requires me to be root, and I have to set ORACLE_HOME and ORACLE_BASE. For some reason, the documentation suggests using /tmp as ORACLE_BASE, which I’ll use as well:

[root@server4 bin]# pwd
/u01/app/grid/product/19.0.0/grid/bin
[root@server4 bin]# export ORACLE_BASE=/tmp
[root@server4 bin]# export ORACLE_HOME=/u01/app/grid/product/19.0.0/grid
[root@server4 bin]# ./asmcmd afd_label DATA1 /dev/vdc1 --init
[root@server4 bin]# ./asmcmd afd_label DATA2 /dev/vdd1 --init 

[root@server4 bin]# ./asmcmd afd_lslbl /dev/vdc1
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA1                                 /dev/vdc1

[root@server4 bin]# ./asmcmd afd_lslbl /dev/vdd1
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA2                                 /dev/vdd1  

Note the use of the --init flag. This is only needed if Grid Infrastructure isn’t installed yet.

Labeling the disks did not have an effect on the block devices’ permissions. Right after finishing the 2 calls to label my 2 block devices, this is the output from my file system:

[root@server4 bin]# ls -l /dev/vd[c-d]*
brw-rw----. 1 root disk 252, 32 Mar 27 09:46 /dev/vdc
brw-rw----. 1 root disk 252, 33 Mar 27 12:55 /dev/vdc1
brw-rw----. 1 root disk 252, 48 Mar 27 09:46 /dev/vdd
brw-rw----. 1 root disk 252, 49 Mar 27 12:58 /dev/vdd1
[root@server4 bin]#  

The output of afd_lslbl indicated that both of my disks are ready to become part of an ASM disk group, so let’s start the installer.
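Before launching the installer it’s worth confirming who owns the candidate devices and with which mode. A small helper around stat can do that (a sketch: check_perms is a hypothetical name, and the device paths in the example comment are the ones from this lab):

```shell
# Sketch: report owner, group and octal mode for each candidate ASM device.
check_perms() {
  local dev
  for dev in "$@"; do
    # only query paths that actually exist on this host
    [ -e "$dev" ] && stat -c '%n %U:%G %a' "$dev"
  done
}

# Example (device names from this lab environment):
# check_perms /dev/vdc1 /dev/vdd1
```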

Call gridSetup.sh

I wasn’t able to make sense of the options in the response file until I started the installer in GUI mode and created a response file based on my choices. To cut a long story short, here is my call to gridSetup.sh:

[grid@server4 ~]$ /u01/app/grid/product/19.0.0/grid/gridSetup.sh -silent \
> INVENTORY_LOCATION=/u01/app/oraInventory \
> SELECTED_LANGUAGES=en \
> ORACLE_BASE=/u01/app/grid \
> ORACLE_HOME_NAME=ASMFD_RHCK \
> -waitforcompletion -ignorePrereqFailure -lenientInstallMode \
> oracle.install.option=HA_CONFIG \
> oracle.install.asm.OSDBA=asmdba \
> oracle.install.asm.OSASM=asmadmin \
> oracle.install.asm.diskGroup.name=DATA \
> oracle.install.asm.diskGroup.disks=/dev/vdc1,/dev/vdd1 \
> oracle.install.asm.diskGroup.diskDiscoveryString=/dev/vd* \
> oracle.install.asm.diskGroup.redundancy=EXTERNAL \
> oracle.install.asm.diskGroup.AUSize=4 \
> oracle.install.asm.configureAFD=true \
> oracle.install.crs.rootconfig.executeRootScript=false \
> oracle.install.asm.SYSASMPassword=thinkOfASuperSecretPassword \
> oracle.install.asm.monitorPassword=thinkOfASuperSecretPassword
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/grid/product/19.0.0/grid/install/response/grid_2020-03-27_01-06-14PM.rsp

You can find the log of this install session at:
 /tmp/GridSetupActions2020-03-27_01-06-14PM/gridSetupActions2020-03-27_01-06-14PM.log

As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/grid/product/19.0.0/grid/root.sh

Execute /u01/app/grid/product/19.0.0/grid/root.sh on the following nodes:
[server4]

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
/u01/app/grid/product/19.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/grid/product/19.0.0/grid/install/response/grid_2020-03-27_01-06-14PM.rsp [-silent]
Note: The required passwords need to be included in the response file.
Moved the install session logs to:
/u01/app/oraInventory/logs/GridSetupActions2020-03-27_01-06-14PM
[grid@server4 ~]$

It took a little while to work out that despite labeling the disks for ASMFD I didn’t have to put any reference to AFD into the call to gridSetup.sh. Have a look at the ASM disk string and the block devices: that’s exactly what I’d use if I were relying on UDEV rules for device name persistence. The syntax might appear counter-intuitive; the only AFD-specific part is the “configureAFD” flag, which needs to be set to true.

Since this is a lab environment I’m ok with external redundancy. Make sure you pick a redundancy level appropriate for your use case.
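For repeated installs it can be tidier to keep these options in a response file and pass it via -responseFile, the same flag the installer uses later for -executeConfigTools. A fragment along these lines should be equivalent to the command line above (the file name grid.rsp is my choice; the variable names are the ones from this very call):

```text
# grid.rsp -- fragment only, mirroring the command line used above
oracle.install.option=HA_CONFIG
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSASM=asmadmin
oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.disks=/dev/vdc1,/dev/vdd1
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/vd*
oracle.install.asm.diskGroup.redundancy=EXTERNAL
oracle.install.asm.diskGroup.AUSize=4
oracle.install.asm.configureAFD=true
oracle.install.crs.rootconfig.executeRootScript=false
```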

Running the configuration tools

The remaining steps are identical to a non-ASMFD setup. First you run orainstRoot.sh, followed by root.sh. The output of the latter showed this for me, indicating success:

[root@server4 ~]# /u01/app/grid/product/19.0.0/grid/root.sh
Check /u01/app/grid/product/19.0.0/grid/install/root_server4_2020-03-27_13-11-05-865019723.log for the output of root script

[root@server4 ~]#
[root@server4 ~]# cat /u01/app/grid/product/19.0.0/grid/install/root_server4_2020-03-27_13-11-05-865019723.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/grid/product/19.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/product/19.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/server4/crsconfig/roothas_2020-03-27_01-11-06PM.log
2020/03/27 13:11:13 CLSRSC-363: User ignored prerequisites during installation
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node server4 successfully pinned.
2020/03/27 13:13:55 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

server4     2020/03/27 13:16:59     /u01/app/grid/crsdata/server4/olr/backup_20200327_131659.olr     724960844
2020/03/27 13:17:54 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
[root@server4 ~]# 

Well that looks ok, now on to the final step: configuration! As indicated in the output, you need to update the response file (/u01/app/grid/product/19.0.0/grid/install/response/grid_2020-03-27_01-06-14PM.rsp) with the required passwords. For me that meant oracle.install.asm.monitorPassword and oracle.install.asm.SYSASMPassword. Once the response file was updated, I called gridSetup.sh once again:

[grid@server4 ~]$ /u01/app/grid/product/19.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/grid/product/19.0.0/grid/install/response/grid_2020-03-27_01-06-14PM.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2020-03-27_01-20-47PM

You can find the log of this install session at:
 /u01/app/oraInventory/logs/UpdateNodeList2020-03-27_01-20-47PM.log
Successfully Configured Software. 

And that’s it! The software has been configured successfully. Don’t forget to remove the passwords from the response file!
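One possible way to scrub the passwords once -executeConfigTools has completed is a pair of sed expressions. This is a sketch: scrub_passwords is a hypothetical helper, and sed -i edits the file in place, so keep a copy if unsure:

```shell
# Sketch: blank out the two password entries in a gridSetup response file.
scrub_passwords() {
  local rsp="$1"
  # keep the variable names, drop everything after the equals sign
  sed -i \
    -e 's/^\(oracle\.install\.asm\.SYSASMPassword=\).*/\1/' \
    -e 's/^\(oracle\.install\.asm\.monitorPassword=\).*/\1/' \
    "$rsp"
}

# Example (response file path from this session):
# scrub_passwords /u01/app/grid/product/19.0.0/grid/install/response/grid_2020-03-27_01-06-14PM.rsp
```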

Verification

After a little while I was able to configure Oracle Restart 19c/ASMFD on Oracle Linux 7.7/RHCK. Let’s take a look at the results.

I’ll first look at the status of ASM Filter Driver:

[grid@server4 ~]$ . oraenv
ORACLE_SID = [grid] ? +ASM
The Oracle base has been set to /u01/app/grid
[grid@server4 ~]$ afddriverstate installed
AFD-9203: AFD device driver installed status: 'true'
[grid@server4 ~]$ afddriverstate loaded
AFD-9205: AFD device driver loaded status: 'true'
[grid@server4 ~]$ afddriverstate version
AFD-9325:     Driver OS kernel version = 3.10.0-862.el7.x86_64.
AFD-9326:     Driver build number = 190222.
AFD-9212:     Driver build version = 19.0.0.0.0.
AFD-9547:     Driver available build number = 190222.
AFD-9548:     Driver available build version = 19.0.0.0.0.
[grid@server4 ~]$  

That’s encouraging: ASMFD is loaded and works on top of kernel 3.10 (RHCK).

I am indeed using the base release (and have to patch now!):

[grid@server4 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

OPatch succeeded. 

And … I’m also using ASMFD:

SQL> col name for a20
SQL> col path for a10
SQL> col library for a50
SQL> set lines 120
SQL> select name, path, library from v$asm_disk where group_number <> 0;

NAME                 PATH       LIBRARY
-------------------- ---------- --------------------------------------------------
DATA1                AFD:DATA1  AFD Library - Generic , version 3 (KABI_V3)
DATA2                AFD:DATA2  AFD Library - Generic , version 3 (KABI_V3)

SQL> show parameter asm_diskstring

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /dev/vd*, AFD:*
SQL>  

This concludes the setup of my lab environment.



👉Part 2: Silent installation: Oracle Restart 19c, ASM Filter Driver, UEK 5 edition

Installing Oracle Restart with ASM Filter Driver (ASMFD) 19c on Oracle Linux 7 using UEK 5.

Since the approach you are about to read isn’t explicitly covered in the documentation I suggest you ask Oracle Support whether it is supported before using this outside a playground/lab environment.

This post sat in my drafts folder longer than intended; it should have gone out in early April. Some of the components I used to put the post together aren’t the latest and greatest, so please adjust accordingly.

My environment

The environment hasn’t changed massively compared to the RHCK edition of this post, except of course for the kernel used:

  • My lab environment consists of a KVM VM using the virtio driver
  • Oracle Linux 7.7
  • Kernel UEK 5 (patched to 4.14.35-1902.300.11.el7uek.x86_64)
  • All other packages up to date as of April 1st 2020
  • The first partition of each of /dev/vd[c-f] is to be used as an ASM disk for +DATA
  • This is a fresh installation, no upgrade, no ASMLib has ever been in use

The challenge of using ASMFD with Oracle Restart and UEK5

The challenge when installing Oracle Restart 19c together with ASMFD is the lack of support in the base release:

[root@server5 bin]# ./acfsdriverstate supported
ACFS-9459: ADVM/ACFS is not supported on this OS version: '4.14.35-1902.300.11.el7uek.x86_64'
ACFS-9201: Not Supported
ACFS-9294: updating file /etc/sysconfig/oracledrivers.conf
[root@server5 bin]# uname -r
4.14.35-1902.300.11.el7uek.x86_64
[root@server5 bin]#  

This is easy to run into, since gridSetup.sh as shipped with 19.3 doesn’t validate this for you when running in silent mode. The GUI version of the installer protects you from the mistake, though. Upgrading to the latest UEK 5 doesn’t change this message, as you just saw, but it was a necessity in my case anyway, as you will see later. As per the My Oracle Support (MOS) certification matrix, DocID 1369107.1, Oracle 19.4.0 is the first release to support ASM Filter Driver (ASMFD) on UEK 5. The base release, 19.3.0, does not support ASMFD (or ACFS, for that matter) out of the box.

I’d like to reiterate that this post isn’t an endorsement of ASM Filter Driver, but since the documentation was a little unclear I thought I’d write up how I got to a working installation. It is up to you to ensure that ASMFD is a workable solution for your environment by following industry best practices.

So how do I get to a working Oracle Restart 19c/ASMFD installation when I’m using UEK 5?

As I said in the first part of the series there are 2 options available for installing Oracle Restart 19c using ASMFD, at least in theory: the first one is to use UDEV to prepare ASM block devices, the second one is to label the ASM disks using asmcmd afd_label.

UDEV

Huh, UDEV? That hasn’t really been blogged about at all in the context of ASMFD, or at least I didn’t find anyone who did. I’m inferring the possibility of using UDEV from “Configuring Oracle ASM Filter Driver During Installation” (link to documentation):

If you do not use udev on the system where the Oracle Grid Infrastructure is installed, then you can also complete the following procedure to provision disks for Oracle ASMFD before the installer is launched

The docs then continue with an explanation of using disk labeling. I was very interested in the first part of the statement quoted above (… not using UDEV …) as it implies using UDEV is a viable option.

So I went ahead and tried to use ASMFD with UDEV rules. I have previously used UDEV rules without ASMFD when installing Oracle products so that wasn’t too hard to do.

After changing ownership of the relevant block devices to grid:asmadmin and their mode to 0660 via UDEV, the permissions on the block devices to be used for ASM are correct:

[root@server5 ~]# ls -l /dev/vd[cd]*
brw-rw----. 1 root disk     251, 32 Apr  1 15:49 /dev/vdc
brw-rw----. 1 grid asmadmin 251, 33 Apr  1 15:49 /dev/vdc1
brw-rw----. 1 root disk     251, 48 Apr  1 15:49 /dev/vdd
brw-rw----. 1 grid asmadmin 251, 49 Apr  1 15:49 /dev/vdd1
[root@server5 ~]#  
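For reference, here is roughly what such a rules file can look like. This is a sketch: the file name is arbitrary, and matching on the kernel device name (vdc1 and so on) only works because these names are stable in this KVM lab; on real hardware you would typically match on a stable identifier such as a partition UUID or WWN instead:

```text
# /etc/udev/rules.d/99-oracle-asm.rules (hypothetical file name)
# Grant the grid owner access to the partitions intended for ASM.
KERNEL=="vdc1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="vdd1", OWNER="grid", GROUP="asmadmin", MODE="0660"
# Reload and replay the rules without a reboot:
#   udevadm control --reload-rules && udevadm trigger
```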

Preparing for patching and installing

Due to the issue of not being able to install Oracle Restart 19c/ASMFD with the base release I am using a slightly different approach this time. It’s rather subtle, but effective. There is only 1 difference in the call to gridSetup.sh compared to the last post. The goal is to merge the latest RU into the unzipped binaries prior to invoking the installer to create a supported configuration.

Since the release of Oracle Grid Infrastructure 12.2 it has been possible to merge a Release Update (RU) into the unzipped installation image. This process has changed quite a bit over time, as reported by Ludovico Caldara for example. You can’t simply execute gridSetup.sh -applyRU on its own: to merge the RU into the installation image in silent mode you also need to pass the parameters for an installation, otherwise you get an error. In GUI mode, the graphical user interface starts after the patch has been merged into the binaries instead.

Combining the -applyRU flag with the installation options used in the previous post, I can both patch and install the software. My system has the 19.6.0 RU available and I’m going to apply it. In preparation I have to update OPatch as the grid owner. Once that’s done I need to unzip the RU to a staging location, still logged in as the grid owner:

[grid@server5 ~]$ unzip -q /mnt/19.6.0/p6880880_190000_Linux-x86-64.zip -d /u01/app/grid/product/19.0.0/grid
replace /u01/app/grid/product/19.0.0/grid/OPatch/emdpatch.pl? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
[grid@server5 ~]$
[grid@server5 ~]$ unzip -q /mnt/19.6.0/p30501910_190000_Linux-x86-64.zip -d /u01/stage
[grid@server5 ~]$ 

Patch and install

With all preparations in place, it’s time to call gridSetup.sh with the -applyRU flag necessary for UEK5:

[grid@server5 ~]$ cd /u01/app/grid/product/19.0.0/grid/
[grid@server5 grid]$ ./gridSetup.sh -silent \
> INVENTORY_LOCATION=/u01/app/oraInventory \
> SELECTED_LANGUAGES=en \
> ORACLE_BASE=/u01/app/grid \
> ORACLE_HOME_NAME=RESTART_ASMFD_UEK \
> -waitforcompletion -ignorePrereqFailure -lenientInstallMode \
> -applyRU /u01/stage/30501910 \
> oracle.install.option=HA_CONFIG \
> oracle.install.asm.OSDBA=asmdba \
> oracle.install.asm.OSASM=asmadmin \
> oracle.install.asm.diskGroup.name=DATA \
> oracle.install.asm.diskGroup.disks=/dev/vdc1,/dev/vdd1 \
> oracle.install.asm.diskGroup.diskDiscoveryString=/dev/vd* \
> oracle.install.asm.diskGroup.redundancy=EXTERNAL \
> oracle.install.asm.diskGroup.AUSize=4 \
> oracle.install.asm.configureAFD=true \
> oracle.install.crs.rootconfig.executeRootScript=false \
> oracle.install.asm.SYSASMPassword=thinkOfASuperSecretPassword \
> oracle.install.asm.monitorPassword=thinkOfASuperSecretPassword
Preparing the home to patch...
Applying the patch /u01/stage/30501910...
Successfully applied the patch.
The log can be found at: /tmp/GridSetupActions2020-04-01_04-03-20PM/installerPatchActions_2020-04-01_04-03-20PM.log
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/grid/product/19.0.0/grid/install/response/grid_2020-04-01_04-03-20PM.rsp

You can find the log of this install session at:
 /tmp/GridSetupActions2020-04-01_04-03-20PM/gridSetupActions2020-04-01_04-03-20PM.log

As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/grid/product/19.0.0/grid/root.sh

Execute /u01/app/grid/product/19.0.0/grid/root.sh on the following nodes:
[server5]



Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /u01/app/grid/product/19.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/grid/product/19.0.0/grid/install/response/grid_2020-04-01_04-03-20PM.rsp [-silent]
Note: The required passwords need to be included in the response file.


Moved the install session logs to:
 /u01/app/oraInventory/logs/GridSetupActions2020-04-01_04-03-20PM
[grid@server5 grid]$ 

“Successfully Setup Software”, the message I was waiting for ;) From here on it’s the same as described in the previous blog post. Head over there to read more about the execution of orainstRoot.sh and root.sh, and about the configuration tools step that finishes the installation.

Verification

As with the previous post I’d like to share some of the results of the installation. First, what about the afddriverstate?

[grid@server5 ~]$ afddriverstate installed
AFD-9203: AFD device driver installed status: 'true'
[grid@server5 ~]$ afddriverstate loaded
AFD-9205: AFD device driver loaded status: 'true'
[grid@server5 ~]$ afddriverstate version
AFD-9325:     Driver OS kernel version = 4.14.35-1902.0.9.el7uek.x86_64.
AFD-9326:     Driver build number = 191219.
AFD-9212:     Driver build version = 19.0.0.0.0.
AFD-9547:     Driver available build number = 191219.
AFD-9548:     Driver available build version = 19.0.0.0.0.
[grid@server5 ~]$  

This looks all right. What about the patch level?

[grid@server5 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
30655595;TOMCAT RELEASE UPDATE 19.0.0.0.0 (30655595)
30557433;Database Release Update : 19.6.0.0.200114 (30557433)
30489632;ACFS RELEASE UPDATE 19.6.0.0.0 (30489632)
30489227;OCW RELEASE UPDATE 19.6.0.0.0 (30489227)

OPatch succeeded. 

And finally, what about ASM? Is it using ASM Filter Driver for its disks?

SQL> col name for a20
SQL> col path for a10
SQL> col library for a50
SQL> set lines 120
SQL> select name, path, library from v$asm_disk where group_number <> 0;

NAME                 PATH       LIBRARY
-------------------- ---------- --------------------------------------------------
DATA1                AFD:DATA1  AFD Library - Generic , version 3 (KABI_V3)
DATA2                AFD:DATA2  AFD Library - Generic , version 3 (KABI_V3)

SQL> 
SQL> show parameter asm_diskstring

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /dev/vd*, AFD:* 

So all in all, the status isn’t different from the previous post, except that my environment is already patched, saving me time.

Failed attempts

I am keeping these as a reminder to myself. Maybe they are of use to you as well when troubleshooting.

When trying to install Oracle Restart after labeling ASM disks as demonstrated in the previous post, the installation of the 19.3.0 (base release) fails:

[grid@server5 grid]$ ./gridSetup.sh -silent \
> INVENTORY_LOCATION=/u01/app/oraInventory \
> SELECTED_LANGUAGES=en \
> ORACLE_BASE=/u01/app/grid \
> ORACLE_HOME_NAME=RESTART_ASMFD_UEK \
> -waitforcompletion -ignorePrereqFailure -lenientInstallMode \
> -applyRU /u01/stage/30501910 \
> oracle.install.option=HA_CONFIG \
> oracle.install.asm.OSDBA=asmdba \
> oracle.install.asm.OSASM=asmadmin \
> oracle.install.asm.diskGroup.name=DATA \
> oracle.install.asm.diskGroup.disks=/dev/vdc1,/dev/vdd1 \
> oracle.install.asm.diskGroup.diskDiscoveryString=/dev/vd* \
> oracle.install.asm.diskGroup.redundancy=EXTERNAL \
> oracle.install.asm.diskGroup.AUSize=4 \
> oracle.install.asm.configureAFD=true \
> oracle.install.crs.rootconfig.executeRootScript=false \
> oracle.install.asm.SYSASMPassword=thinkOfASuperSecretPassword \
> oracle.install.asm.monitorPassword=thinkOfASuperSecretPassword
Preparing the home to patch...
Applying the patch /u01/stage/30501910...
Successfully applied the patch.
The log can be found at: /tmp/GridSetupActions2020-04-01_03-25-52PM/installerPatchActions_2020-04-01_03-25-52PM.log
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-30508] Invalid ASM disks.
   CAUSE: The disks [/dev/vdd1, /dev/vdc1] were not valid.
   ACTION: Please choose or enter valid ASM disks.
[FATAL] [INS-30515] Insufficient space available in the selected disks.
   CAUSE: Insufficient space available in the selected Disks. At least, 32 MB of free space is required.
   ACTION: Choose additional disks such that the total size should be at least 32 MB.
Moved the install session logs to:
 /u01/app/oraInventory/logs/GridSetupActions2020-04-01_03-25-52PM
[grid@server5 grid]$  

Similarly, if you don’t run with a very recent UEK 5 kernel patch level (4.14.35-1902.300.11.el7uek.x86_64 and later), the call to gridSetup.sh also fails, even when specifying -applyRU:

[grid@server5 grid]$ ./gridSetup.sh -silent \
> INVENTORY_LOCATION=/u01/app/oraInventory \
> SELECTED_LANGUAGES=en \
> ORACLE_BASE=/u01/app/grid \
> ORACLE_HOME_NAME=RESTART_ASMFD_UEK \
> -waitforcompletion -ignorePrereqFailure -lenientInstallMode \
> -applyRU /u01/stage/30501910 \
> oracle.install.option=HA_CONFIG \
> oracle.install.asm.OSDBA=asmdba \
> oracle.install.asm.OSASM=asmadmin \
> oracle.install.asm.diskGroup.name=DATA \
> oracle.install.asm.diskGroup.disks=/dev/vdc1,/dev/vdd1 \
> oracle.install.asm.diskGroup.diskDiscoveryString=/dev/vd* \
> oracle.install.asm.diskGroup.redundancy=EXTERNAL \
> oracle.install.asm.diskGroup.AUSize=4 \
> oracle.install.asm.configureAFD=true \
> oracle.install.crs.rootconfig.executeRootScript=false \
> oracle.install.asm.SYSASMPassword=thinkOfASuperSecretPassword \
> oracle.install.asm.monitorPassword=thinkOfASuperSecretPassword
Preparing the home to patch...
Applying the patch /u01/stage/30501910...
Successfully applied the patch.
The log can be found at: /tmp/GridSetupActions2020-04-01_03-55-32PM/installerPatchActions_2020-04-01_03-55-32PM.log
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-41223] ASM Filter Driver is not supported on this platform.
   ACTION: To proceed, do not specify or select the Oracle ASM Filter Driver option.
ADDITIONAL INFORMATION:
 - AFD-620: AFD is not supported on this operating system version: 'EL7'
 - AFD-9999: Cannot change to Not valid path:
 - 

Moved the install session logs to:
 /u01/app/oraInventory/logs/GridSetupActions2020-04-01_03-55-32PM 

The kernel I used in this case was the stock Oracle Linux 7.7 kernel:

[root@server5 ~]# uname -r
4.14.35-1818.3.3.el7uek.x86_64 

The nice touch is that it’s telling me the use of ASMFD on my old kernel isn’t supported.

Happy installing!

