
Monday 19 July 2021

ASM Filter Driver (ASMFD) - Oracle

1) If you are running Oracle RAC, then configuring the shared storage is one of the main preinstallation tasks.

2) You need to configure multipathing and make sure that the device name that will be used for ASM is always the same. 

3) And you must set permissions and ownership for these devices (a udev sketch illustrating this follows after this list).

4) On Linux you can use ASMLib for that.

5) It stamps the devices so that it can identify them, provides a unique and consistent name for ASM, and sets proper permissions on the devices.

6) But it is still possible for other processes to write to these devices, using “dd” for instance.

7) Now there is Oracle Grid Infrastructure 12c, which introduces a replacement for ASMLib called ASM Filter Driver (AFD).

8) Basically it does the same things as ASMLib, but in addition it is able to block write operations from processes other than Oracle’s own.

9) So that is a good thing, and I wanted to use it for a new cluster that I had to set up.

10) And that is where the trouble starts. Besides the fact that there were some bugs in the initial versions of AFD, most of which were fixed by the April 2016 PSU, AFD is installed as part of Grid Infrastructure.

11) You can read that in the Automatic Storage Management Docs. It states the following:

After installation of Oracle Grid Infrastructure, you can optionally configure Oracle ASMFD for your system. 😐

12) What? After installation?

But I need it right from the beginning to use it for my initial disk group. How about that? 

There is a MOS note, How to Install ASM Filter Driver in a Linux Environment Without Having Previously Installed ASMLIB (Doc ID 2060259.1), but this whitepaper also assumes that Grid Infrastructure is already installed.
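
To illustrate points 2) and 3) above: without ASMLib or AFD, device-name persistence and permissions on Linux are typically handled with udev rules. A minimal sketch, assuming a hypothetical multipath WWID and the usual oracle:dba ownership:

# /etc/udev/rules.d/99-oracle-asmdevices.rules (the WWID below is a placeholder)
ACTION=="add|change", KERNEL=="dm-*", ENV{DM_UUID}=="mpath-360a98000375033302b4a713048377170", SYMLINK+="oracleasm/asm-data01", OWNER="oracle", GROUP="dba", MODE="0660"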

ASM Filter Driver (or AFD) is a replacement for ASMLib and is described by Oracle as follows:

Oracle ASM Filter Driver (Oracle ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks.

The Oracle ASMFD simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.

The Oracle ASM Filter Driver rejects any I/O requests that are invalid. This action eliminates accidental overwrites of Oracle ASM disks that would cause corruption in the disks and files within the disk group. For example, the Oracle ASM Filter Driver filters out all non-Oracle I/Os which could cause accidental overwrites.

A real pain, as you need to have 12.1.0.2 installed before AFD is available to label your disks, yet the default OUI mode wants to create an ASM disk group, and you cannot do that without any labelled disks.

The only solution I could come up with was to perform a software-only install, which in itself is a pain.

What is Oracle ASM Filter Driver (ASMFD)

Oracle ASM Filter Driver (ASMFD) is an alternative to ASMLib. It is a kernel module that resides in the I/O path of the Oracle ASM disks. Starting with 12.1.0.2, ASMFD can be used instead of ASMLib to manage the ASM disks. Oracle ASMFD is installed with an Oracle Grid Infrastructure installation (RAC as well as standalone). I already have ASMLib configured on my server and I want to move to ASMFD. In this article I will explain how to migrate from ASMLib to ASMFD.
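
Before migrating, it is worth recording what ASMLib currently manages. A quick check with ASMLib’s own tooling (assuming the oracleasm package is installed; “DATA1” is a placeholder label):

# List the disks currently stamped by ASMLib
[root]$ /usr/sbin/oracleasm listdisks
# Show the block device behind a given label
[root]$ /usr/sbin/oracleasm querydisk -p DATA1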




1. Install Grid Infrastructure Software

First step is to install Grid Infrastructure as a software only installation. That implies that you have to do it on all nodes that should form the future cluster. I did that on the first node, saved the response file and did a silent install on the other nodes.


[oracle@vm140 ~]$ ./runInstaller -silent -responseFile /home/oracle/stage/grid/grid.rsp -ignorePrereq
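
The decisive entry in the response file is the software-only install option. A sketch of the relevant grid.rsp settings (paths are from this environment; adjust to yours):

oracle.install.option=CRS_SWONLY
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/12.1.0.2/grid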

At the end of the installation you need to run the “orainstRoot.sh” and “root.sh” scripts; the latter prints two further commands which configure either a cluster or a standalone server:

[root@vm140 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
 
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@vm140 ~]# /u01/app/12.1.0.2/grid/root.sh
Performing root user operation.
 
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.2/grid
 
Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...
 
 
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
 
To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:
/u01/app/12.1.0.2/grid/perl/bin/perl -I/u01/app/12.1.0.2/grid/perl/lib -I/u01/app/12.1.0.2/grid/crs/install /u01/app/12.1.0.2/grid/crs/install/roothas.pl
 
 
To configure Grid Infrastructure for a Cluster execute the following command as oracle user:
/u01/app/12.1.0.2/grid/crs/config/config.sh
This command launches the Grid Infrastructure Configuration Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.

For the moment, we do not run any of these scripts.

2. Patching Grid Infrastructure software

The next step is to patch the GI software to get the latest version of AFD. Simply update OPatch on all nodes and use “opatchauto” to patch the GI home. You need to specify the ORACLE_HOME path with the “-oh” parameter to patch an unconfigured Grid Infrastructure home.
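
Updating OPatch itself just means replacing the OPatch directory in the grid home with the contents of the latest p6880880 patch; a sketch, assuming the zip was staged under /home/oracle/stage (the exact file name depends on the version you download):

[oracle@vm140 ~]$ cd /u01/app/12.1.0.2/grid
[oracle@vm140 grid]$ mv OPatch OPatch.old
[oracle@vm140 grid]$ unzip -q /home/oracle/stage/p6880880_121010_Linux-x86-64.zip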

[root@vm140 ~]# export ORACLE_HOME=/u01/app/12.1.0.2/grid
[root@vm140 ~]# export PATH=$ORACLE_HOME/OPatch:$PATH
[root@vm140 ~]# opatch version
OPatch Version: 12.1.0.1.12
 
OPatch succeeded.
 
[root@vm140 ~]# opatchauto apply /home/oracle/stage/22646084 -oh $ORACLE_HOME
 
[...]
 
--------------------------------Summary--------------------------------
 
Patching is completed successfully. Please find the summary as follows:
 
Host:vm140
CRS Home:/u01/app/12.1.0.2/grid
Summary:
 
==Following patches were SUCCESSFULLY applied:
 
Patch: /home/oracle/stage/22646084/21436941
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-06-10_13-54-11PM_1.log
 
Patch: /home/oracle/stage/22646084/22291127
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-06-10_13-54-11PM_1.log
 
Patch: /home/oracle/stage/22646084/22502518
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-06-10_13-54-11PM_1.log
 
Patch: /home/oracle/stage/22646084/22502555
Log: /u01/app/12.1.0.2/grid/cfgtoollogs/opatchauto/core/opatch/opatch2016-06-10_13-54-11PM_1.log
 
 
OPatchAuto successful.

Note that with the latest OPatch version there is no need to create an “ocm.rsp” response file anymore.

3. Configure Restart

Configure Restart? Why? Because it sets up everything we need to use AFD but does not need any shared storage or other cluster-related things like virtual IPs, SCANs and so on.
Therefore you use the script that “root.sh” provided earlier. Do that on all nodes of the future cluster.

[root@vm140 ~]# /u01/app/12.1.0.2/grid/perl/bin/perl -I/u01/app/12.1.0.2/grid/perl/lib -I/u01/app/12.1.0.2/grid/crs/install /u01/app/12.1.0.2/grid/crs/install/roothas.pl
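
If you want to verify that Oracle Restart came up before moving on (a quick sanity check, not part of the original flow):

[root@vm140 ~]# /u01/app/12.1.0.2/grid/bin/crsctl check has
CRS-4638: Oracle High Availability Services is online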


4. Deconfigure Restart

After Restart was configured, you can deconfigure it right away. Everything that is needed for AFD is kept. This procedure is described in the Oracle documentation.

[root@vm140 ~]# cd /u01/app/12.1.0.2/grid/crs/install/
[root@vm140 install]# ./roothas.sh -deconfig -force

5. Configure ASM Filter Driver

Now you can finally start configuring AFD. The whitepaper from the MOS note mentioned at the beginning provides a good overview of what has to be done. Simply connect as “root”, set the environment and run the following:

[root@vm140 install]# $ORACLE_HOME/bin/asmcmd afd_configure
Connected to an idle instance.
AFD-627: AFD distribution files found.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.
ASMCMD-9524: AFD configuration failed 'ERROR: OHASD start failed'
[root@vm140 install]# $ORACLE_HOME/bin/asmcmd afd_state
Connected to an idle instance.
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'DISABLED' on host 'vm140'

Don’t worry about the error and the message telling you it failed. That is simply because there is no cluster at all at the moment.
As a final configuration step you need to set the discovery string for AFD so that it can find the disks you want to use. This is defined inside “/etc/afd.conf”:

[root@vm140 install]# cat /etc/afd.conf
afd_diskstring='/dev/xvd*'
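
Alternatively, the discovery string can be set and queried through ASMCMD instead of editing the file by hand (assuming the afd_dsset and afd_dsget commands of this release):

[root@vm140 install]# $ORACLE_HOME/bin/asmcmd afd_dsset '/dev/xvd*'
[root@vm140 install]# $ORACLE_HOME/bin/asmcmd afd_dsget
AFD discovery string: /dev/xvd*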

The above steps need to be done on all servers of the future cluster.
Now that AFD is configured, you can start labeling your disks. Do this on only one node:

[root@vm140 install]# $ORACLE_HOME/bin/asmcmd afd_label GI /dev/xvdb1
Connected to an idle instance.
[root@vm140 install]# $ORACLE_HOME/bin/asmcmd afd_label DATA /dev/xvdc1
Connected to an idle instance.
[root@vm140 install]# $ORACLE_HOME/bin/asmcmd afd_label FRA /dev/xvdd1
Connected to an idle instance.
 
[root@vm140 install]# $ORACLE_HOME/bin/asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
GI                         DISABLED   /dev/xvdb1
DATA                       DISABLED   /dev/xvdc1
FRA                        DISABLED   /dev/xvdd1

On all the other nodes just do a rescan of the disks:

[root@vm141 install]# $ORACLE_HOME/bin/asmcmd afd_scan
Connected to an idle instance.
[root@vm141 install]# $ORACLE_HOME/bin/asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
GI                         DISABLED   /dev/xvdb1
DATA                       DISABLED   /dev/xvdc1
FRA                        DISABLED   /dev/xvdd1

That’s it.

6. Configure cluster with AFD

Finally, you can start configuring your new cluster and use AFD disks right from the beginning. You can now use the Grid Infrastructure Configuration Wizard that was mentioned in the “root.sh” output to set up your cluster.

[oracle@vm140 ~]$ /u01/app/12.1.0.2/grid/crs/config/config.sh
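
As the root script output noted, the wizard also supports silent operation with a response file; presumably something along these lines (a sketch, flags assumed to match the installer):

[oracle@vm140 ~]$ /u01/app/12.1.0.2/grid/crs/config/config.sh -silent -responseFile /home/oracle/stage/grid/grid.rsp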

Follow the steps and you will see the well-known screens for setting up a cluster. At the point where you define the initial Grid Infrastructure disk group you can now specify the “Discovery String”:


And voilà, you see the previously labeled disks:



And after you run the root scripts on all nodes, you’ll get a running cluster:

[root@vm140 bin]# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       vm140                    STABLE
               ONLINE  ONLINE       vm141                    STABLE
               ONLINE  ONLINE       vm142                    STABLE
               ONLINE  ONLINE       vm143                    STABLE
ora.GI.dg
               ONLINE  ONLINE       vm140                    STABLE
               ONLINE  ONLINE       vm141                    STABLE
               ONLINE  ONLINE       vm142                    STABLE
               OFFLINE OFFLINE      vm143                    STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       vm140                    STABLE
               ONLINE  ONLINE       vm141                    STABLE
               ONLINE  ONLINE       vm142                    STABLE
               ONLINE  ONLINE       vm143                    STABLE
ora.net1.network
               ONLINE  ONLINE       vm140                    STABLE
               ONLINE  ONLINE       vm141                    STABLE
               ONLINE  ONLINE       vm142                    STABLE
               ONLINE  ONLINE       vm143                    STABLE
ora.ons
               ONLINE  ONLINE       vm140                    STABLE
               ONLINE  ONLINE       vm141                    STABLE
               ONLINE  ONLINE       vm142                    STABLE
               ONLINE  ONLINE       vm143                    STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       vm140                    STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       vm140                    169.254.231.166 192.
                                                             168.1.1,STABLE
ora.asm
      1        ONLINE  ONLINE       vm140                    Started,STABLE
      2        ONLINE  ONLINE       vm142                    Started,STABLE
      3        ONLINE  ONLINE       vm141                    Started,STABLE
ora.cvu
      1        ONLINE  ONLINE       vm140                    STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       vm140                    Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       vm140                    STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       vm140                    STABLE
ora.vm140.vip
      1        ONLINE  ONLINE       vm140                    STABLE
ora.vm141.vip
      1        ONLINE  ONLINE       vm141                    STABLE
ora.vm142.vip
      1        ONLINE  ONLINE       vm142                    STABLE
ora.vm143.vip
      1        ONLINE  ONLINE       vm143                    STABLE
--------------------------------------------------------------------------------

And that’s it. Nothing more to do, besides creating more disk groups and setting up databases. But that is simple compared to what we’ve done until now.
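
Creating those additional disk groups is then just a matter of referencing the AFD labels, for example from SQL*Plus on the ASM instance (a minimal sketch using the DATA and FRA labels from above):

SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'AFD:DATA';
SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK 'AFD:FRA';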



===========================================================

ASM Filter Driver (ASMFD)

 

ASM Filter Driver is a Linux kernel module introduced in 12c R1. It resides in the I/O path of the Oracle ASM disks, providing the following features:

  • Rejecting all non-Oracle I/O write requests to ASM Disks.
  • Device name persistency.
  • Node level fencing without reboot.

 

In 12c R2, ASMFD can be enabled from the GUI interface of the Grid Infrastructure installation, as shown in the post GI 12c R2 Installation at step #8 “Create ASM Disk Group”.

Once ASM Filter Driver is in use, the disks are managed using the ASMFD label name, similarly to ASMLib.

 

Here are a few examples of working with ASM Filter Driver.

--How to create an ASMFD label in SQL*Plus
SQL> Alter system label set 'DATA1' to '/dev/mapper/mpathak';

System altered.


--How to create an ASM Disk Group with ASMFD
CREATE DISKGROUP DATA_DG EXTERNAL REDUNDANCY DISK 'AFD:DATA1' SIZE 30720M
ATTRIBUTE 'SECTOR_SIZE'='512','LOGICAL_SECTOR_SIZE'='512','compatible.asm'='12.2.0.1',
'compatible.rdbms'='12.2.0.1','compatible.advm'='12.2.0.1','au_size'='4M';

Diskgroup created.
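
To confirm how ASM sees a labeled disk, you can query V$ASM_DISK, where the path is reported with the AFD: prefix (a quick check, not from the original example):

--Check the disk as seen by ASM
SQL> SELECT name, label, path FROM v$asm_disk;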

 

ASM Filter Driver can also be managed from the ASM command-line utility ASMCMD.

--Check ASMFD status
ASMCMD> afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'oel7node06.localdomain'


--List ASM Disks where ASMFD is enabled
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                    Filtering                Path
================================================================================
DATA1                      ENABLED                /dev/mapper/mpathak
DATA2                      ENABLED                /dev/mapper/mpathan
DATA3                      ENABLED                /dev/mapper/mpathw
DATA4                      ENABLED                /dev/mapper/mpathac
GIMR1                      ENABLED                /dev/mapper/mpatham
GIMR2                      ENABLED                /dev/mapper/mpathaj
GIMR3                      ENABLED                /dev/mapper/mpathal
GIMR4                      ENABLED                /dev/mapper/mpathaf
GIMR5                      ENABLED                /dev/mapper/mpathai
RECO3                      ENABLED                /dev/mapper/mpathy
RECO1                      ENABLED                /dev/mapper/mpathab
RECO2                      ENABLED                /dev/mapper/mpathx
ASMCMD>


--How to remove an ASMFD label in ASMCMD
ASMCMD> afd_unlabel DATA4
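
Filtering can also be toggled per device when needed (assuming the afd_filter command with its -d/-e options):

--Disable filtering on a device
ASMCMD> afd_filter -d /dev/mapper/mpathac

--Re-enable filtering on a device
ASMCMD> afd_filter -e /dev/mapper/mpathac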

 




Oracle ASM Filter Driver I/O Filtering Test



Before starting, you need Grid Infrastructure 12c already installed with ASMFD enabled.


Check if ASM Filtering is enabled

We need to make sure that ASMFD is loaded and that filtering is enabled on the disk we are going to try to corrupt (/dev/sda).

# Check if ASMFD is loaded
[grid]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'oralab01.uxora.com'
 
# List ASM Disk with filtering enabled
[grid]$ asmcmd afd_lsdsk
---------------------------------------------------------
Label                     Filtering   Path
=========================================================
DISK01                      ENABLED   /dev/sda
DISK02                      ENABLED   /dev/sdb
DISK03                      ENABLED   /dev/sdc
DISK04                      ENABLED   /dev/sdd

Disk manipulation



As the root user, we are going to manipulate /dev/sda, which is an ASM disk with filtering enabled.
First, we are going to read the first bytes of /dev/sda:



# Get header
[root]$ od -c -N 128 /dev/sda
0000000 001 202 001 001  \0  \0  \0  \0  \0  \0  \0 200 247   @ 203 220
0000020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000040   O   R   C   L   D   I   S   K   D   I   S   K   0   1  \0  \0
0000060  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000100  \0 001      \f  \0  \0 001 003   D   I   S   K   0   1  \0  \0
0000120  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000140  \0  \0  \0  \0  \0  \0  \0  \0   D   A   T   A  \0  \0  \0  \0
0000160  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000200

fdisk

# Try partition with fdisk
[root]$ fdisk /dev/sda
Welcome to fdisk (util-linux 2.23.2).
 
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
 
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xd2fcbd9d.
 
Command (m for help): n
Partition type:
p   primary (0 primary, 0 extended, 4 free)
e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-50331647, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-50331647, default 50331647):
Using default value 50331647
Partition 1 of type Linux and of size 24 GiB is set
 
Command (m for help): w
The partition table has been altered!
 
Calling ioctl() to re-read partition table.
 
Error closing file
 
# Check /var/log/messages for error messages
[root]$ tail -f /var/log/messages
...
Aug 19 22:05:28 oralab01 kernel: Buffer I/O error on dev sda, logical block 0, lost async page write
Aug 19 22:05:28 oralab01 kernel: F 4297671.556/170819200528 fdisk[13113] oracleafd:18:0894:Write IO to ASM managed device: [8] [0]
...

mkfs

# Try to create filesystem with mkfs
[root]$ mkfs.xfs -f /dev/sda
meta-data=/dev/sda               isize=256    agcount=4, agsize=1572864 blks
=                       sectsz=512   attr=2, projid32bit=1
=                       crc=0        finobt=0, sparse=0
data     =                       bsize=4096   blocks=6291456, imaxpct=25
=                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=3072, version=2
=                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
mkfs.xfs: pwrite64 failed: Input/output error
 
# Check /var/log/messages for error messages
[root]$ tail -f /var/log/messages
...
Aug 19 22:24:18 oralab01 kernel: F 4306001.982/170819222418 mkfs.xfs[11228] oracleafd:18:0894:Write IO to ASM managed device: [8] [0]
...

dd

# Try to erase header with dd
[root]$ dd if=/dev/zero of=/dev/sda bs=4096 count=1000
1000+0 records in
1000+0 records out
4096000 bytes (4.1 MB) copied, 0.00493748 s, 830 MB/s
 
# Check /var/log/messages for error messages
[root]$ tail -f /var/log/messages
...
Aug 19 21:56:46 oralab01 kernel: Buffer I/O error on dev sda, logical block 0, lost async page write
Aug 19 21:56:46 oralab01 kernel: Buffer I/O error on dev sda, logical block 1, lost async page write
Aug 19 21:56:46 oralab01 kernel: Buffer I/O error on dev sda, logical block 2, lost async page write
Aug 19 21:56:46 oralab01 kernel: Buffer I/O error on dev sda, logical block 3, lost async page write
Aug 19 21:56:46 oralab01 kernel: Buffer I/O error on dev sda, logical block 4, lost async page write
Aug 19 21:56:46 oralab01 kernel: Buffer I/O error on dev sda, logical block 5, lost async page write
Aug 19 21:56:46 oralab01 kernel: Buffer I/O error on dev sda, logical block 6, lost async page write
Aug 19 21:56:46 oralab01 kernel: Buffer I/O error on dev sda, logical block 7, lost async page write
Aug 19 21:56:46 oralab01 kernel: Buffer I/O error on dev sda, logical block 8, lost async page write
Aug 19 21:56:46 oralab01 kernel: Buffer I/O error on dev sda, logical block 9, lost async page write
...
 
# Try to erase header with dd direct io
[root]$ dd if=/dev/zero of=/dev/sda bs=4096 count=1000 oflag=direct
dd: error writing ‘/dev/sda’: Input/output error
1+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000186557 s, 0.0 kB/s
 
# Check /var/log/messages for error messages
[root]$ tail -f /var/log/messages
...
Aug 19 22:16:41 oralab01 kernel: F 4298344.799/170819201641 dd[20904] oracleafd:18:0894:Write IO to ASM managed device: [8] [0]
...

io redirection

# Try to erase header with io redirection
[root]$ echo HELLOWORLD > /dev/sda
 
# Check /var/log/messages for error messages
[root]$ tail -f /var/log/messages
...
Aug 19 21:55:17 oralab01 kernel: Buffer I/O error on dev sda, logical block 0, lost async page write
...

Check ASM Disk

After all these disk manipulations, let's check the /dev/sda disk.

Re-Read header with od

The header is still the same before and after the disk manipulations.

# Get header
[root]$ od -c -N 128 /dev/sda
0000000 001 202 001 001  \0  \0  \0  \0  \0  \0  \0 200 247   @ 203 220
0000020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000040   O   R   C   L   D   I   S   K   D   I   S   K   0   1  \0  \0
0000060  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000100  \0 001      \f  \0  \0 001 003   D   I   S   K   0   1  \0  \0
0000120  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000140  \0  \0  \0  \0  \0  \0  \0  \0   D   A   T   A  \0  \0  \0  \0
0000160  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000200

Read header with kfed

kfed can still read the ASM disk header.

# Header with kfed
[grid]$ kfed read /dev/sda
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  2424520871 ; 0x00c: 0x908340a7
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:   ORCLDISKDISK01 ; 0x000: length=14
kfdhdb.driver.reserved[0]:   1263749444 ; 0x008: 0x4b534944
kfdhdb.driver.reserved[1]:        12592 ; 0x00c: 0x00003130
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                203424000 ; 0x020: 0x0c200100
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                  DISK01 ; 0x028: length=6
kfdhdb.grpname:                    DATA ; 0x048: length=4
kfdhdb.fgname:                   DISK01 ; 0x068: length=6
kfdhdb.siteguid[0]:                   0 ; 0x088: 0x00
kfdhdb.siteguid[1]:                   0 ; 0x089: 0x00
kfdhdb.siteguid[2]:                   0 ; 0x08a: 0x00
kfdhdb.siteguid[3]:                   0 ; 0x08b: 0x00
kfdhdb.siteguid[4]:                   0 ; 0x08c: 0x00
kfdhdb.siteguid[5]:                   0 ; 0x08d: 0x00
kfdhdb.siteguid[6]:                   0 ; 0x08e: 0x00
kfdhdb.siteguid[7]:                   0 ; 0x08f: 0x00
kfdhdb.siteguid[8]:                   0 ; 0x090: 0x00
kfdhdb.siteguid[9]:                   0 ; 0x091: 0x00
kfdhdb.siteguid[10]:                  0 ; 0x092: 0x00
kfdhdb.siteguid[11]:                  0 ; 0x093: 0x00
kfdhdb.siteguid[12]:                  0 ; 0x094: 0x00
kfdhdb.siteguid[13]:                  0 ; 0x095: 0x00
kfdhdb.siteguid[14]:                  0 ; 0x096: 0x00
kfdhdb.siteguid[15]:                  0 ; 0x097: 0x00
kfdhdb.ub1spare[0]:                   0 ; 0x098: 0x00
kfdhdb.ub1spare[1]:                   0 ; 0x099: 0x00
kfdhdb.ub1spare[2]:                   0 ; 0x09a: 0x00
kfdhdb.ub1spare[3]:                   0 ; 0x09b: 0x00
kfdhdb.ub1spare[4]:                   0 ; 0x09c: 0x00
kfdhdb.ub1spare[5]:                   0 ; 0x09d: 0x00
kfdhdb.ub1spare[6]:                   0 ; 0x09e: 0x00
kfdhdb.ub1spare[7]:                   0 ; 0x09f: 0x00
kfdhdb.ub1spare[8]:                   0 ; 0x0a0: 0x00
kfdhdb.ub1spare[9]:                   0 ; 0x0a1: 0x00
kfdhdb.ub1spare[10]:                  0 ; 0x0a2: 0x00
kfdhdb.ub1spare[11]:                  0 ; 0x0a3: 0x00
kfdhdb.ub1spare[12]:                  0 ; 0x0a4: 0x00
kfdhdb.ub1spare[13]:                  0 ; 0x0a5: 0x00
kfdhdb.ub1spare[14]:                  0 ; 0x0a6: 0x00
kfdhdb.ub1spare[15]:                  0 ; 0x0a7: 0x00
kfdhdb.crestmp.hi:             33055335 ; 0x0a8: HOUR=0x7 DAYS=0x13 MNTH=0x8 YEAR=0x7e1
kfdhdb.crestmp.lo:            888763392 ; 0x0ac: USEC=0x0 MSEC=0x25d SECS=0xf MINS=0xd
kfdhdb.mntstmp.hi:             33055349 ; 0x0b0: HOUR=0x15 DAYS=0x13 MNTH=0x8 YEAR=0x7e1
kfdhdb.mntstmp.lo:           1119166464 ; 0x0b4: USEC=0x0 MSEC=0x148 SECS=0x2b MINS=0x10
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  4194304 ; 0x0bc: 0x00400000
kfdhdb.mfact:                    454272 ; 0x0c0: 0x0006ee80
kfdhdb.dsksize:                    6144 ; 0x0c4: 0x00001800
kfdhdb.pmcnt:                         3 ; 0x0c8: 0x00000003
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                     10 ; 0x0d4: 0x0000000a
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:             33055335 ; 0x0e4: HOUR=0x7 DAYS=0x13 MNTH=0x8 YEAR=0x7e1
kfdhdb.grpstmp.lo:            888013824 ; 0x0e8: USEC=0x0 MSEC=0x381 SECS=0xe MINS=0xd
kfdhdb.vfstart:                      24 ; 0x0ec: 0x00000018
kfdhdb.vfend:                        32 ; 0x0f0: 0x00000020
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000
kfdhdb.flags:                         1 ; 0x0fc: 0x00000001
kfdhdb.f1b1fcn.base:                  0 ; 0x100: 0x00000000
kfdhdb.f1b1fcn.wrap:                  0 ; 0x104: 0x00000000
kfdhdb.ip[0]:                       192 ; 0x108: 0xc0
kfdhdb.ip[1]:                       168 ; 0x109: 0xa8
kfdhdb.ip[2]:                         0 ; 0x10a: 0x00
kfdhdb.ip[3]:                        31 ; 0x10b: 0x1f
kfdhdb.modstmp:              1503170203 ; 0x10c: 0x59988e9b
kfdhdb.checklbl:                      0 ; 0x110: 0x00
kfdhdb.verlbl:                        0 ; 0x111: 0x00
kfdhdb.ub2spare:                      0 ; 0x112: 0x0000
kfdhdb.sitelbl:                         ; 0x114: length=0
kfdhdb.fglbl:                           ; 0x124: length=0
kfdhdb.vsnnum:                203424000 ; 0x144: 0x0c200100
kfdhdb.patchvsn:                      0 ; 0x148: 0x0000
kfdhdb.operation:                     0 ; 0x14a: 0x0000
kfdhdb.xtnd[0]:                       0 ; 0x14c: 0x0000
kfdhdb.xtnd[1]:                       0 ; 0x14e: 0x0000
kfdhdb.xtnd[2]:                       0 ; 0x150: 0x0000
kfdhdb.xtnd[3]:                       0 ; 0x152: 0x0000
kfdhdb.xtnd[4]:                       0 ; 0x154: 0x0000
kfdhdb.xtnd[5]:                       0 ; 0x156: 0x0000
kfdhdb.ub4spare[0]:                   0 ; 0x158: 0x00000000
kfdhdb.ub4spare[1]:                   0 ; 0x15c: 0x00000000
kfdhdb.ub4spare[2]:                   0 ; 0x160: 0x00000000
kfdhdb.ub4spare[3]:                   0 ; 0x164: 0x00000000
kfdhdb.ub4spare[4]:                   0 ; 0x168: 0x00000000
kfdhdb.ub4spare[5]:                   0 ; 0x16c: 0x00000000
kfdhdb.ub4spare[6]:                   0 ; 0x170: 0x00000000
kfdhdb.ub4spare[7]:                   0 ; 0x174: 0x00000000
kfdhdb.ub4spare[8]:                   0 ; 0x178: 0x00000000
kfdhdb.ub4spare[9]:                   0 ; 0x17c: 0x00000000
kfdhdb.ub4spare[10]:                  0 ; 0x180: 0x00000000
kfdhdb.ub4spare[11]:                  0 ; 0x184: 0x00000000
kfdhdb.ub4spare[12]:                  0 ; 0x188: 0x00000000
kfdhdb.ub4spare[13]:                  0 ; 0x18c: 0x00000000
kfdhdb.ub4spare[14]:                  0 ; 0x190: 0x00000000
kfdhdb.ub4spare[15]:                  0 ; 0x194: 0x00000000
kfdhdb.ub4spare[16]:                  0 ; 0x198: 0x00000000
kfdhdb.ub4spare[17]:                  0 ; 0x19c: 0x00000000
kfdhdb.ub4spare[18]:                  0 ; 0x1a0: 0x00000000
kfdhdb.ub4spare[19]:                  0 ; 0x1a4: 0x00000000
kfdhdb.ub4spare[20]:                  0 ; 0x1a8: 0x00000000
kfdhdb.ub4spare[21]:                  0 ; 0x1ac: 0x00000000
kfdhdb.ub4spare[22]:                  0 ; 0x1b0: 0x00000000
kfdhdb.ub4spare[23]:                  0 ; 0x1b4: 0x00000000
kfdhdb.ub4spare[24]:                  0 ; 0x1b8: 0x00000000
kfdhdb.ub4spare[25]:                  0 ; 0x1bc: 0x00000000
kfdhdb.ub4spare[26]:                  0 ; 0x1c0: 0x00000000
kfdhdb.ub4spare[27]:                  0 ; 0x1c4: 0x00000000
kfdhdb.ub4spare[28]:                  0 ; 0x1c8: 0x00000000
kfdhdb.ub4spare[29]:                  0 ; 0x1cc: 0x00000000
kfdhdb.ub4spare[30]:                  0 ; 0x1d0: 0x00000000
kfdhdb.acdb.aba.seq:                  0 ; 0x1d4: 0x00000000
kfdhdb.acdb.aba.blk:                  0 ; 0x1d8: 0x00000000
kfdhdb.acdb.ents:                     0 ; 0x1dc: 0x0000
kfdhdb.acdb.ub2spare:                 0 ; 0x1de: 0x0000

Check disk with ASM

# Check diskgroup with asmcmd
[grid]$ asmcmd chkdg data
Diskgroup altered.
 
[grid]$ tail -20 /u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
...
SQL> /* ASMCMD */ALTER DISKGROUP data CHECK  NOREPAIR
NOTE: starting check of diskgroup DATA
GMON querying group 1 at 24 for pid 38, osid 9849
GMON checking disk 0 for group 1 at 25 for pid 38, osid 9849
GMON querying group 1 at 26 for pid 38, osid 9849
GMON checking disk 1 for group 1 at 27 for pid 38, osid 9849
GMON querying group 1 at 28 for pid 38, osid 9849
GMON checking disk 2 for group 1 at 29 for pid 38, osid 9849
GMON querying group 1 at 30 for pid 38, osid 9849
GMON checking disk 3 for group 1 at 31 for pid 38, osid 9849
SUCCESS: check of diskgroup DATA found no errors
SUCCESS: /* ASMCMD */ALTER DISKGROUP data CHECK  NOREPAIR
...

