
Friday 25 February 2022

Step By Step Oracle 19c RAC Installation

 




Installation on Oracle Linux  Part-1 OS


1. PREPARING OPERATING SYSTEMS ON BOTH SERVERS

It is assumed that you have two servers running on Oracle Linux 


IP ADDRESSES AND HOSTNAMES USED




10.5.0.100 node1 node1.serhatcelik.local
10.5.0.101 node2 node2.serhatcelik.local

192.168.1.120 node1-priv node1-priv.serhatcelik.local
192.168.1.121 node2-priv node2-priv.serhatcelik.local

10.5.0.102 node1-vip node1-vip.serhatcelik.local
10.5.0.103 node2-vip node2-vip.serhatcelik.local

10.5.0.104 srhtdb-scan srhtdb-scan.serhatcelik.local
10.5.0.105 srhtdb-scan srhtdb-scan.serhatcelik.local
10.5.0.106 srhtdb-scan srhtdb-scan.serhatcelik.local

About SCAN Ping:-

 

https://community.oracle.com/tech/apps-infra/discussion/2394322/scan-ip-is-not-reachable

Note:- The SCAN addresses will not be pingable from node1 and node2 even after the SCAN entries are added to DNS; the SCAN VIPs come online only after the RAC installation. Before the installation we only need to confirm that the SCAN name resolves correctly: run nslookup several times and check that the order of the returned IPs changes each time (round-robin), which confirms that the SCAN is set up properly in DNS.
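
For example, a quick pre-installation check (a sketch; it assumes the bind-utils package is installed so that nslookup is available):

[root@node1 ~]# for i in 1 2 3; do nslookup srhtdb-scan.serhatcelik.local | awk '/^Address/ && !/#/'; echo "---"; done

The three SCAN addresses should be returned on every run, in a changing order.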


Public IP:   The public IP address is for the server itself. This is the same as any server IP address: a unique address which exists in /etc/hosts.

Private IP:  Oracle RAC requires "private IP" addresses to manage CRS, the Clusterware heartbeat process, and the Cache Fusion layer.

Virtual IP:   Oracle uses a Virtual IP (VIP) for database access. The VIP must be on the same subnet as the public IP address. The VIP is used for RAC failover (TAF).


SET NODE1 PUBLIC IP

IP Address : 10.5.0.100
Subnet : 255.255.0.0
Gateway : 10.5.XX

[root@node1 ~]# more /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=47592082-bffe-45cf
IPADDR=10.5.0.100
NETMASK=255.255.0.0
GATEWAY=10.5.XX
DOMAIN=xxx.com.tr
DNS1=10.5.XX
DNS2=10.5.XX


[root@node1 ~]#


SET NODE2 PUBLIC IP

IP Address : 10.5.0.101
Subnet : 255.255.0.0
Gateway : 10.5.XX

[root@node2 ~]# more /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth0
UUID=47592082-bffe-45cf
IPADDR=10.5.0.101
NETMASK=255.255.0.0
GATEWAY=10.5.XX
DOMAIN=xxx.com.tr
DNS1=10.5.XX
DNS2=10.5.XX


[root@node2 ~]#


CONFIGURATION OF SECOND NETWORK INTERFACE FOR INTERCONNECT CONNECTIONS

You should use a separate network interface for the private IP, distinct from the public interface. The public and private IP subnets must also be different from each other; otherwise the installation cannot continue (this is a prerequisite). I added a second network interface to both VM servers running on Oracle VM Server.


SET NODE1 PRIVATE IP

IP Address: 192.168.1.120
Subnet: 255.255.0.0

[root@node1 ~]# more /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth1
DEVICE=eth1
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=192.168.1.120
NETMASK=255.255.0.0

[root@node1 ~]# service network restart
Restarting network (via systemctl):  [ OK ]
[root@node1 ~]#




SET NODE2 PRIVATE IP

IP Address: 192.168.1.121
Subnet: 255.255.0.0

[root@node2 ~]# more /etc/sysconfig/network-scripts/ifcfg-eth1
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eth1
DEVICE=eth1
ONBOOT=yes
NM_CONTROLLED=no
IPADDR=192.168.1.121
NETMASK=255.255.0.0

[root@node2 ~]# service network restart
Restarting network (via systemctl):  [ OK ]
[root@node2 ~]#
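
With both interfaces configured on both nodes, a quick sanity check (a sketch; interface names match the ones used above):

[root@node1 ~]# ip -4 addr show eth0 | grep inet    # expect the 10.5.0.x public address
[root@node1 ~]# ip -4 addr show eth1 | grep inet    # expect the 192.168.1.x private address
[root@node1 ~]# ping -c 2 192.168.1.121             # node2's private address should answer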




👉 DISABLE SELINUX ON NODE1 AND NODE2
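
A minimal sketch of the typical way to do this, run on both nodes (SELINUX can be set to permissive instead of disabled if you prefer to keep SELinux loaded):

[root@node1 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
[root@node1 ~]# setenforce 0      # switches the running system to permissive; the config change applies after the next reboot
[root@node1 ~]# getenforce        # verify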








Stop avahi-daemon


[root@node1 ~]# systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
   Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2022-02-16 17:35:25 CET; 1 day 17h ago
 Main PID: 1454 (avahi-daemon)
   Status: "avahi-daemon 0.7 starting up."
    Tasks: 2 (limit: 1607258)
   Memory: 2.0M
   CGroup: /system.slice/avahi-daemon.service
           ├─1454 avahi-daemon: running [node1.local]
           └─1491 avahi-daemon: chroot helper
Feb 16 17:43:19 node1 avahi-daemon[1454]: Registering new address record for 172.20.1.26 on eth0.IPv4.
Feb 16 17:43:20 node1 avahi-daemon[1454]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::221:f6ff:fecb:2871.
Feb 16 17:43:20 node1 avahi-daemon[1454]: New relevant interface eth0.IPv6 for mDNS.
Feb 16 17:43:20 node1 avahi-daemon[1454]: Registering new address record for fe80::221:f6ff:fecb:2871 on eth0.*.
Feb 16 17:45:02 node1 avahi-daemon[1454]: Joining mDNS multicast group on interface eth1.IPv4 with address 192.168.128.3.
Feb 16 17:45:02 node1 avahi-daemon[1454]: New relevant interface eth1.IPv4 for mDNS.
Feb 16 17:45:02 node1 avahi-daemon[1454]: Registering new address record for 192.168.128.3 on eth1.IPv4.
Feb 16 17:45:04 node1 avahi-daemon[1454]: Joining mDNS multicast group on interface eth1.IPv6 with address fe80::221:f6ff:feeb:840d.
Feb 16 17:45:04 node1 avahi-daemon[1454]: New relevant interface eth1.IPv6 for mDNS.
Feb 16 17:45:04 node1 avahi-daemon[1454]: Registering new address record for fe80::221:f6ff:feeb:840d on eth1.*.
[root@node1 ~]#
[root@node1 ~]#
[root@node1 ~]#
[root@node1 ~]# systemctl stop avahi-daemon
Warning: Stopping avahi-daemon.service, but it can still be activated by:
  avahi-daemon.socket
[root@node1 ~]#
[root@node1 ~]#
[root@node1 ~]# systemctl disable avahi-daemon
Removed /etc/systemd/system/multi-user.target.wants/avahi-daemon.service.
Removed /etc/systemd/system/sockets.target.wants/avahi-daemon.socket.
Removed /etc/systemd/system/dbus-org.freedesktop.Avahi.service.
[root@node1 ~]#
[root@node1 ~]#

[root@node1 ~]# systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
   Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; disabled; vendor preset: enabled)
   Active: inactive (dead) since Fri 2022-02-18 10:36:04 CET; 1min 35s ago
 Main PID: 1454 (code=exited, status=0/SUCCESS)
   Status: "avahi-daemon 0.7 starting up."
Feb 18 10:36:04 node1 systemd[1]: Stopping Avahi mDNS/DNS-SD Stack...
Feb 18 10:36:04 node1 avahi-daemon[1454]: Got SIGTERM, quitting.
Feb 18 10:36:04 node1 avahi-daemon[1454]: Leaving mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
Feb 18 10:36:04 node1 avahi-daemon[1454]: Leaving mDNS multicast group on interface eth1.IPv6 with address fe80::221:f6ff:feeb:840d.
Feb 18 10:36:04 node1 avahi-daemon[1454]: Leaving mDNS multicast group on interface eth1.IPv4 with address 192.168.128.3.
Feb 18 10:36:04 node1 avahi-daemon[1454]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::221:f6ff:fecb:2871.
Feb 18 10:36:04 node1 avahi-daemon[1454]: Leaving mDNS multicast group on interface eth0.IPv4 with address 172.20.1.26.
Feb 18 10:36:04 node1 avahi-daemon[1454]: avahi-daemon 0.7 exiting.
Feb 18 10:36:04 node1 systemd[1]: avahi-daemon.service: Succeeded.
Feb 18 10:36:04 node1 systemd[1]: Stopped Avahi mDNS/DNS-SD Stack.
[root@node1 ~]# systemctl stop avahi-daemon
Warning: Stopping avahi-daemon.service, but it can still be activated by:
  avahi-daemon.socket
[root@node1 ~]#
[root@node1 ~]#
[root@node1 ~]# systemctl disable avahi-daemon
[root@node1 ~]#
[root@node1 ~]#
[root@node1 ~]# systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
   Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; disabled; vendor preset: enabled)
   Active: inactive (dead) since Fri 2022-02-18 10:36:04 CET; 2min 20s ago
 Main PID: 1454 (code=exited, status=0/SUCCESS)
   Status: "avahi-daemon 0.7 starting up."
Feb 18 10:36:04 node1 systemd[1]: Stopping Avahi mDNS/DNS-SD Stack...
Feb 18 10:36:04 node1 avahi-daemon[1454]: Got SIGTERM, quitting.
Feb 18 10:36:04 node1 avahi-daemon[1454]: Leaving mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
Feb 18 10:36:04 node1 avahi-daemon[1454]: Leaving mDNS multicast group on interface eth1.IPv6 with address fe80::221:f6ff:feeb:840d.
Feb 18 10:36:04 node1 avahi-daemon[1454]: Leaving mDNS multicast group on interface eth1.IPv4 with address 192.168.128.3.
Feb 18 10:36:04 node1 avahi-daemon[1454]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::221:f6ff:fecb:2871.
Feb 18 10:36:04 node1 avahi-daemon[1454]: Leaving mDNS multicast group on interface eth0.IPv4 with address 172.20.1.26.
Feb 18 10:36:04 node1 avahi-daemon[1454]: avahi-daemon 0.7 exiting.
Feb 18 10:36:04 node1 systemd[1]: avahi-daemon.service: Succeeded.
Feb 18 10:36:04 node1 systemd[1]: Stopped Avahi mDNS/DNS-SD Stack.
[root@node1 ~]#


[root@node2 ~]# systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
   Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2022-02-16 17:29:15 CET; 1 day 17h ago
 Main PID: 1445 (avahi-daemon)
   Status: "avahi-daemon 0.7 starting up."
    Tasks: 2 (limit: 1607258)
   Memory: 3.0M
   CGroup: /system.slice/avahi-daemon.service
           ├─1445 avahi-daemon: running [node2.local]
           └─1489 avahi-daemon: chroot helper
Feb 16 17:29:15 node2 avahi-daemon[1445]: Registering new address record for 192.168.128.4 on eth1.IPv4.
Feb 16 17:29:16 node2 avahi-daemon[1445]: Joining mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
Feb 16 17:29:16 node2 avahi-daemon[1445]: New relevant interface virbr0.IPv4 for mDNS.
Feb 16 17:29:16 node2 avahi-daemon[1445]: Registering new address record for 192.168.122.1 on virbr0.IPv4.
Feb 16 17:29:16 node2 avahi-daemon[1445]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::221:f6ff:fe96:60b4.
Feb 16 17:29:16 node2 avahi-daemon[1445]: New relevant interface eth0.IPv6 for mDNS.
Feb 16 17:29:16 node2 avahi-daemon[1445]: Registering new address record for fe80::221:f6ff:fe96:60b4 on eth0.*.
Feb 16 17:29:17 node2 avahi-daemon[1445]: Joining mDNS multicast group on interface eth1.IPv6 with address fe80::221:f6ff:fee6:8bf0.
Feb 16 17:29:17 node2 avahi-daemon[1445]: New relevant interface eth1.IPv6 for mDNS.
Feb 16 17:29:17 node2 avahi-daemon[1445]: Registering new address record for fe80::221:f6ff:fee6:8bf0 on eth1.*.
[root@node2 ~]#
[root@node2 ~]#
[root@node2 ~]#
[root@node2 ~]# systemctl stop avahi-daemon
Warning: Stopping avahi-daemon.service, but it can still be activated by:
  avahi-daemon.socket
[root@node2 ~]# systemctl disable avahi-daemon
Removed /etc/systemd/system/multi-user.target.wants/avahi-daemon.service.
Removed /etc/systemd/system/sockets.target.wants/avahi-daemon.socket.
Removed /etc/systemd/system/dbus-org.freedesktop.Avahi.service.
[root@node2 ~]#
[root@node2 ~]#
[root@node2 ~]#
[root@node2 ~]# systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
   Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; disabled; vendor preset: enabled)
   Active: inactive (dead) since Fri 2022-02-18 10:41:35 CET; 42s ago
 Main PID: 1445 (code=exited, status=0/SUCCESS)
   Status: "avahi-daemon 0.7 starting up."
Feb 18 10:41:35 node2 systemd[1]: Stopping Avahi mDNS/DNS-SD Stack...
Feb 18 10:41:35 node2 avahi-daemon[1445]: Got SIGTERM, quitting.
Feb 18 10:41:35 node2 avahi-daemon[1445]: Leaving mDNS multicast group on interface virbr0.IPv4 with address 192.168.122.1.
Feb 18 10:41:35 node2 avahi-daemon[1445]: Leaving mDNS multicast group on interface eth1.IPv6 with address fe80::221:f6ff:fee6:8bf0.
Feb 18 10:41:35 node2 avahi-daemon[1445]: Leaving mDNS multicast group on interface eth1.IPv4 with address 192.168.128.4.
Feb 18 10:41:35 node2 avahi-daemon[1445]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::221:f6ff:fe96:60b4.
Feb 18 10:41:35 node2 avahi-daemon[1445]: Leaving mDNS multicast group on interface eth0.IPv4 with address 172.20.1.27.
Feb 18 10:41:35 node2 avahi-daemon[1445]: avahi-daemon 0.7 exiting.
Feb 18 10:41:35 node2 systemd[1]: avahi-daemon.service: Succeeded.
Feb 18 10:41:35 node2 systemd[1]: Stopped Avahi mDNS/DNS-SD Stack.
[root@node2 ~]#


👉 STOP AND DISABLE FIREWALL & NETWORK MANAGER SERVICES ON NODE1 AND NODE2


[root@node1 ~]# systemctl stop firewalld.service

[root@node1 ~]#
[root@node1 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@node1 ~]#

[root@node1 ~]# systemctl stop NetworkManager
[root@node1 ~]#
[root@node1 ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.
[root@node1 ~]#

[root@node2 ~]# systemctl stop firewalld.service
[root@node2 ~]#
[root@node2 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@node2 ~]#

[root@node2 ~]# systemctl stop NetworkManager
[root@node2 ~]#
[root@node2 ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.
[root@node2 ~]#


INSTALLATION OF NEEDED PACKAGES ON NODE1 AND NODE2

Kernel parameter values before installing the preinstall package:


[root@node1 ~]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

[root@node2 ~]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).


Run the below command to install packages


[root@node1 ~]# dnf install oracle-database-preinstall-19c






[root@node2 ~]# dnf install oracle-database-preinstall-19c

[root@node2 ~]# dnf install oracle-database-preinstall-19c
Last metadata expiration check: 3:50:27 ago on Fri 18 Feb 2022 07:03:42 AM CET.
Dependencies resolved.
===========================================================================================================
 Package                           Architecture    Version                    Repository              Size
===========================================================================================================
Installing:
 oracle-database-preinstall-19c    x86_64          1.0-2.el8                  ol8_appstream           31 k
Installing dependencies:
 glibc-devel                       x86_64          2.28-164.0.3.el8           ol8_baseos_latest      1.0 M
 ksh                               x86_64          20120801-254.0.1.el8       ol8_appstream          927 k
 libaio-devel                      x86_64          0.3.112-1.el8              ol8_baseos_latest       19 k
 libnsl                            x86_64          2.28-164.0.3.el8           ol8_baseos_latest      104 k
 libstdc++-devel                   x86_64          8.5.0-4.0.2.el8_5          ol8_appstream          2.1 M
 libxcrypt-devel                   x86_64          4.1.1-6.el8                ol8_baseos_latest       25 k
 make                              x86_64          1:4.2.1-10.el8             ol8_baseos_latest      498 k

Transaction Summary
==================================================================================================
Install  8 Packages
Total download size: 4.7 M
Installed size: 17 M
Is this ok [y/N]: y
Downloading Packages:
(1/8): libaio-devel-0.3.112-1.el8.x86_64.rpm                      393 kB/s |  19 kB     00:00
(2/8): libxcrypt-devel-4.1.1-6.el8.x86_64.rpm                     1.3 MB/s |  25 kB     00:00
(3/8): libnsl-2.28-164.0.3.el8.x86_64.rpm                         1.4 MB/s | 104 kB     00:00
(4/8): glibc-devel-2.28-164.0.3.el8.x86_64.rpm                     11 MB/s | 1.0 MB     00:00
(5/8): make-4.2.1-10.el8.x86_64.rpm                               9.8 MB/s | 498 kB     00:00
(6/8): ksh-20120801-254.0.1.el8.x86_64.rpm                         18 MB/s | 927 kB     00:00
(7/8): oracle-database-preinstall-19c-1.0-2.el8.x86_64.rpm        2.3 MB/s |  31 kB     00:00
(8/8): libstdc++-devel-8.5.0-4.0.2.el8_5.x86_64.rpm                42 MB/s | 2.1 MB     00:00
--------------------------------------------------------------------------------------------------
Total                                                              31 MB/s | 4.7 MB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                  1/1
  Installing       : libxcrypt-devel-4.1.1-6.el8.x86_64                                                               1/8
  Installing       : glibc-devel-2.28-164.0.3.el8.x86_64                                                              2/8
  Running scriptlet: glibc-devel-2.28-164.0.3.el8.x86_64                                                              2/8
  Installing       : libstdc++-devel-8.5.0-4.0.2.el8_5.x86_64                                                         3/8
  Installing       : ksh-20120801-254.0.1.el8.x86_64                                                                  4/8
  Running scriptlet: ksh-20120801-254.0.1.el8.x86_64                                                                  4/8
  Installing       : make-1:4.2.1-10.el8.x86_64                                                                       5/8
  Running scriptlet: make-1:4.2.1-10.el8.x86_64                                                                       5/8
  Installing       : libnsl-2.28-164.0.3.el8.x86_64                                                                   6/8
  Installing       : libaio-devel-0.3.112-1.el8.x86_64                                                                7/8
  Running scriptlet: oracle-database-preinstall-19c-1.0-2.el8.x86_64                                                  8/8
  Installing       : oracle-database-preinstall-19c-1.0-2.el8.x86_64                                                  8/8
  Running scriptlet: oracle-database-preinstall-19c-1.0-2.el8.x86_64                                                  8/8
/sbin/ldconfig: /etc/ld.so.conf.d/kernel-5.4.17-2011.1.2.el8uek.x86_64.conf:6: hwcap directive ignored
  Verifying        : glibc-devel-2.28-164.0.3.el8.x86_64                                                              1/8
  Verifying        : libaio-devel-0.3.112-1.el8.x86_64                                                                2/8
  Verifying        : libnsl-2.28-164.0.3.el8.x86_64                                                                   3/8
  Verifying        : libxcrypt-devel-4.1.1-6.el8.x86_64                                                               4/8
  Verifying        : make-1:4.2.1-10.el8.x86_64                                                                       5/8
  Verifying        : ksh-20120801-254.0.1.el8.x86_64                                                                  6/8
  Verifying        : libstdc++-devel-8.5.0-4.0.2.el8_5.x86_64                                                         7/8
  Verifying        : oracle-database-preinstall-19c-1.0-2.el8.x86_64                                                  8/8
Installed:
  glibc-devel-2.28-164.0.3.el8.x86_64             ksh-20120801-254.0.1.el8.x86_64                             libaio-devel-0.3.112-1.el8.x86_64
  libnsl-2.28-164.0.3.el8.x86_64                  libstdc++-devel-8.5.0-4.0.2.el8_5.x86_64                    libxcrypt-devel-4.1.1-6.el8.x86_64
  make-1:4.2.1-10.el8.x86_64                      oracle-database-preinstall-19c-1.0-2.el8.x86_64
Complete!


Note1:- The oracle-database-preinstall-19c package creates only part of the required users and groups; the missing ones are created manually later in this guide.

Note2:- After installing the preinstall package, the required kernel parameter values are automatically appended to /etc/sysctl.conf.
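
To make the appended values take effect immediately without a reboot, they can be reloaded and spot-checked on both nodes, for example:

[root@node1 ~]# sysctl -p            # reload /etc/sysctl.conf
[root@node1 ~]# sysctl fs.file-max   # verify a single parameter
fs.file-max = 6815744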


[root@node1 ~]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
# oracle-database-preinstall-19c setting for fs.file-max is 6815744
fs.file-max = 6815744
# oracle-database-preinstall-19c setting for kernel.sem is '250 32000 100 128'
kernel.sem = 250 32000 100 128
# oracle-database-preinstall-19c setting for kernel.shmmni is 4096
kernel.shmmni = 4096
# oracle-database-preinstall-19c setting for kernel.shmall is 1073741824 on x86_64
kernel.shmall = 1073741824
# oracle-database-preinstall-19c setting for kernel.shmmax is 4398046511104 on x86_64
kernel.shmmax = 4398046511104
# oracle-database-preinstall-19c setting for kernel.panic_on_oops is 1 per Orabug 19212317
kernel.panic_on_oops = 1
# oracle-database-preinstall-19c setting for net.core.rmem_default is 262144
net.core.rmem_default = 262144
# oracle-database-preinstall-19c setting for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304
# oracle-database-preinstall-19c setting for net.core.wmem_default is 262144
net.core.wmem_default = 262144
# oracle-database-preinstall-19c setting for net.core.wmem_max is 1048576
net.core.wmem_max = 1048576
# oracle-database-preinstall-19c setting for net.ipv4.conf.all.rp_filter is 2
net.ipv4.conf.all.rp_filter = 2
# oracle-database-preinstall-19c setting for net.ipv4.conf.default.rp_filter is 2
net.ipv4.conf.default.rp_filter = 2
# oracle-database-preinstall-19c setting for fs.aio-max-nr is 1048576
fs.aio-max-nr = 1048576
# oracle-database-preinstall-19c setting for net.ipv4.ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500



CROSSCHECK KERNEL PARAMETER FILES ON NODE1 AND NODE2

[root@node1 ~]# more /etc/sysctl.conf
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500

[root@node1 ~]# more /etc/security/limits.conf
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768

oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
[root@node1 ~]#

[root@node2 ~]# more /etc/sysctl.conf
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
[root@node2 ~]#

[root@node2 ~]# more /etc/security/limits.conf
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768

oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768



Installing Bind Package:-

[root@node1 ~]# yum install bind
Last metadata expiration check: 2:27:32 ago on Mon 21 Feb 2022 10:06:42 AM CET.
Dependencies resolved.
==================================================================================================
 Package                        Architecture        Version               Repository         Size
==================================================================================================
Installing:
 bind                           x86_64              32:9.11.26-6.el8      ol8_appstream     2.1 M
Transaction Summary
=================================================
Install  1 Package
Total download size: 2.1 M
Installed size: 4.5 M
Is this ok [y/N]: y
Downloading Packages:
bind-9.11.26-6.el8.x86_64.rpm                       15 MB/s | 2.1 MB     00:00
-----------------------------------------------------------------------------------
Total                                               15 MB/s | 2.1 MB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                              1/1
  Running scriptlet: bind-32:9.11.26-6.el8.x86_64                 1/1
  Installing       : bind-32:9.11.26-6.el8.x86_64                 1/1
  Running scriptlet: bind-32:9.11.26-6.el8.x86_64                 1/1
/sbin/ldconfig: /etc/ld.so.conf.d/kernel-5.4.17-2011.1.2.el8uek.x86_64.conf:6: hwcap directive ignored
  Verifying        : bind-32:9.11.26-6.el8.x86_64                 1/1
Installed:
  bind-32:9.11.26-6.el8.x86_64
Complete!




[root@node2 ~]# yum install bind
Last metadata expiration check: 2:27:32 ago on Mon 21 Feb 2022 10:06:42 AM CET.
Dependencies resolved.
==================================================================================================
 Package                        Architecture        Version               Repository         Size
==================================================================================================
Installing:
 bind                           x86_64              32:9.11.26-6.el8      ol8_appstream     2.1 M
Transaction Summary
=================================================
Install  1 Package
Total download size: 2.1 M
Installed size: 4.5 M
Is this ok [y/N]: y
Downloading Packages:
bind-9.11.26-6.el8.x86_64.rpm                       15 MB/s | 2.1 MB     00:00
-----------------------------------------------------------------------------------
Total                                               15 MB/s | 2.1 MB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                              1/1
  Running scriptlet: bind-32:9.11.26-6.el8.x86_64                 1/1
  Installing       : bind-32:9.11.26-6.el8.x86_64                 1/1
  Running scriptlet: bind-32:9.11.26-6.el8.x86_64                 1/1
/sbin/ldconfig: /etc/ld.so.conf.d/kernel-5.4.17-2011.1.2.el8uek.x86_64.conf:6: hwcap directive ignored
  Verifying        : bind-32:9.11.26-6.el8.x86_64                 1/1
Installed:
  bind-32:9.11.26-6.el8.x86_64
Complete!


👉 CREATE NEEDED USERS & GROUPS ON NODE1 AND NODE2

Only the oracle user is created by the oracle-database-preinstall-19c package:
[root@node1 ~]# id oracle
uid=1000(oracle) gid=1000(oracle) groups=1000(oracle),10(wheel) 

[root@node1 ~]# useradd grid
[root@node1 ~]#

[root@node1 ~]# id grid
uid=1001(grid) gid=1001(grid) groups=1001(grid)


[root@node1 ~]# groupadd -g 54421 oinstall
[root@node1 ~]#
[root@node1 ~]# groupadd -g 54322 dba
[root@node1 ~]#
[root@node1 ~]# groupadd -g 54323 oper
[root@node1 ~]#
[root@node1 ~]# groupadd -g 54324 backupdba
[root@node1 ~]#
[root@node1 ~]# groupadd -g 54325 dgdba
[root@node1 ~]#
[root@node1 ~]# groupadd -g 54326 kmdba
[root@node1 ~]#
[root@node1 ~]# groupadd -g 54327 asmdba
[root@node1 ~]#
[root@node1 ~]# groupadd -g 54328 asmoper
[root@node1 ~]#
[root@node1 ~]# groupadd -g 54329 asmadmin
[root@node1 ~]#
[root@node1 ~]# groupadd -g 54330 racdba
[root@node1 ~]#
[root@node1 ~]# groupadd -g 54421 oinstall
groupadd: group 'oinstall' already exists
[root@node1 ~]#
https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/creating-users-groups-on-other-cluster-nodes.html#GUID-3F1C9069-66F6-4C60-B162-24BA1116F683
# groupadd -g 54421 oinstall
# groupadd -g 54322 dba
# groupadd -g 54323 oper
# groupadd -g 54324 backupdba
# groupadd -g 54325 dgdba
# groupadd -g 54326 kmdba
# groupadd -g 54327 asmdba
# groupadd -g 54328 asmoper
# groupadd -g 54329 asmadmin
# groupadd -g 54330 racdba

https://docs.oracle.com/en/database/oracle/oracle-database/19/cwlin/example-of-creating-minimal-users-roles-groups.html#GUID-103186A1-74E0-42A8-AC3D-15AF833DCB40


User Modification:-

--GRID user -------------------------------------------------------------------------
[root@node1 ~]# usermod -u 54331 -g oinstall -G dba,asmdba,asmoper,asmadmin,racdba grid


[root@node1 ~]#
[root@node1 ~]# id grid
uid=54331(grid) gid=54421(oinstall) groups=54421(oinstall),54322(dba),54327(asmdba),54328(asmoper),54329(asmadmin),54330(racdba)


---Oracle user-----------------------------------------------------------------------
[root@node1 ~]# usermod -u 54321 -g oinstall -G dba,oper,backupdba,dgdba,kmdba,asmdba,racdba oracle

[root@node1 ~]# id oracle
uid=54321(oracle) gid=54421(oinstall) groups=54421(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54327(asmdba),54330(racdba)

-------------------------------------------------------------------------------------
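
The same groups (with identical group IDs) and the same user modifications must also be applied on node2, as described in the Oracle documentation linked above. A sketch of the commands repeated on node2:

[root@node2 ~]# useradd grid
[root@node2 ~]# groupadd -g 54421 oinstall
[root@node2 ~]# groupadd -g 54322 dba
...                                  (remaining groupadd commands exactly as on node1)
[root@node2 ~]# usermod -u 54331 -g oinstall -G dba,asmdba,asmoper,asmadmin,racdba grid
[root@node2 ~]# usermod -u 54321 -g oinstall -G dba,oper,backupdba,dgdba,kmdba,asmdba,racdba oracle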









CREATE AND GIVE NEEDED PERMISSIONS FOR DIRECTORIES ON NODE1 AND NODE2

[root@node1 ~]# mkdir -p /u01/app/grid/19.3.0/gridhome_1
[root@node1 ~]# mkdir -p /u01/app/grid/gridbase/
[root@node1 ~]# mkdir -p /u01/app/oracle/database/19.3.0/dbhome_1
[root@node1 ~]# chown -R oracle:oinstall /u01/
[root@node1 ~]# chown -R grid:oinstall /u01/app/grid
[root@node1 ~]# chmod -R 775 /u01/
[root@node1 ~]#

[root@node2 ~]# mkdir -p /u01/app/grid/19.3.0/gridhome_1
[root@node2 ~]# mkdir -p /u01/app/grid/gridbase/
[root@node2 ~]# mkdir -p /u01/app/oracle/database/19.3.0/dbhome_1
[root@node2 ~]# chown -R oracle:oinstall /u01/
[root@node2 ~]# chown -R grid:oinstall /u01/app/grid
[root@node2 ~]# chmod -R 775 /u01/
[root@node2 ~]#


UPDATE ORACLE & GRID USERS PROFILE ON NODE1 AND NODE2

[root@node1 ~]# vi /home/oracle/.bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=node1.serhatcelik.local
export ORACLE_UNQNAME=SERHAT19C
export ORACLE_BASE=/u01/app/oracle/database/19.3.0/
export DB_HOME=$ORACLE_BASE/dbhome_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=SERHAT19C1
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib




[root@node1 ~]# vi /home/grid/.bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=node1.serhatcelik.local
export ORACLE_BASE=/u01/app/grid/gridbase/
export ORACLE_HOME=/u01/app/grid/19.3.0/gridhome_1
export GRID_BASE=/u01/app/grid/gridbase/
export GRID_HOME=/u01/app/grid/19.3.0/gridhome_1
export ORACLE_SID=+ASM1
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

[root@node2 ~]# vi /home/oracle/.bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=node2.serhatcelik.local
export ORACLE_UNQNAME=SERHAT19C
export ORACLE_BASE=/u01/app/oracle/database/19.3.0/
export DB_HOME=$ORACLE_BASE/dbhome_1
export ORACLE_HOME=$DB_HOME
export ORACLE_SID=SERHAT19C2
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib




[root@node2 ~]# vi /home/grid/.bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=node2.serhatcelik.local
export ORACLE_BASE=/u01/app/grid/gridbase/
export ORACLE_HOME=/u01/app/grid/19.3.0/gridhome_1
export GRID_BASE=/u01/app/grid/gridbase/
export GRID_HOME=/u01/app/grid/19.3.0/gridhome_1
export ORACLE_SID=+ASM2
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib


UPDATE /etc/hosts FILES ON NODE1 AND NODE2

[root@node1 ~]# vi /etc/hosts
10.5.0.100 node1 node1.serhatcelik.local
10.5.0.101 node2 node2.serhatcelik.local

10.5.0.102 node1-vip node1-vip.serhatcelik.local
10.5.0.103 node2-vip node2-vip.serhatcelik.local

#10.5.0.104 srhtdb-scan srhtdb-scan.serhatcelik.local
#10.5.0.105 srhtdb-scan srhtdb-scan.serhatcelik.local
#10.5.0.106 srhtdb-scan srhtdb-scan.serhatcelik.local

192.168.1.120 node1-priv node1-priv.serhatcelik.local
192.168.1.121 node2-priv node2-priv.serhatcelik.local

[root@node2 ~]# vi /etc/hosts
10.5.0.100 node1 node1.serhatcelik.local
10.5.0.101 node2 node2.serhatcelik.local

10.5.0.102 node1-vip node1-vip.serhatcelik.local
10.5.0.103 node2-vip node2-vip.serhatcelik.local

#10.5.0.104 srhtdb-scan srhtdb-scan.serhatcelik.local
#10.5.0.105 srhtdb-scan srhtdb-scan.serhatcelik.local
#10.5.0.106 srhtdb-scan srhtdb-scan.serhatcelik.local

192.168.1.120 node1-priv node1-priv.serhatcelik.local
192.168.1.121 node2-priv node2-priv.serhatcelik.local




DNS SETTINGS ON NODE1 AND NODE2

DNS is needed for RAC installation. It is another prerequisite. 
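
The bind package installed earlier can serve these records; for the SCAN name, three A records are defined so that the addresses are returned in round-robin order. A minimal sketch of the relevant zone-file entries (the zone-file layout is an assumption; adapt it to your named configuration):

; zone serhatcelik.local
node1        IN A 10.5.0.100
node2        IN A 10.5.0.101
node1-vip    IN A 10.5.0.102
node2-vip    IN A 10.5.0.103
srhtdb-scan  IN A 10.5.0.104
srhtdb-scan  IN A 10.5.0.105
srhtdb-scan  IN A 10.5.0.106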



Edit the /etc/resolv.conf file as shown below:


[root@node1 ~]# more /etc/resolv.conf
nameserver 127.0.0.1
search serhatcelik.local
[root@node1 ~]#


Test DNS

[root@node1 ~]# nslookup srhtdb-scan
Server: 127.0.0.1
Address: 127.0.0.1#53

Name: srhtdb-scan.serhatcelik.local
Address: 10.5.0.106
Name: srhtdb-scan.serhatcelik.local
Address: 10.5.0.104
Name: srhtdb-scan.serhatcelik.local
Address: 10.5.0.105
Name: srhtdb-scan.serhatcelik.local
Address: 10.5.0.106
Name: srhtdb-scan.serhatcelik.local
Address: 10.5.0.104
Name: srhtdb-scan.serhatcelik.local
Address: 10.5.0.105


NTP CONFIGURATION ON NODE1 AND NODE2

NTP is also needed for RAC installation. It is another prerequisite. 

Apply below steps on NODE-1 and NODE-2.



Specify Your NTP Server Address


Chronyd is a better choice than ntpd for keeping computers synchronized with the Network Time Protocol on most networks.


[root@node1 ~]# sudo vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#pool 2.pool.ntp.org iburst
server 172.17.136.10    --------- add this IP (ORAAM IP ADDRESS)

[root@node1 ~]# sudo systemctl stop chronyd
[root@node1 ~]#
[root@node1 ~]#
[root@node1 ~]# sudo systemctl start chronyd
[root@node1 ~]#
[root@node1 ~]# timedatectl status
               Local time: Thu 2022-02-17 14:58:05 CET
           Universal time: Thu 2022-02-17 13:58:05 UTC
                 RTC time: Thu 2022-02-17 13:58:06
                Time zone: Europe/Berlin (CET, +0100)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
[root@node1 ~]#



[root@node2 ~]# sudo vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#pool 2.pool.ntp.org iburst
server 172.17.136.10    --------- add this IP (ORAAM IP ADDRESS)

[root@node2 ~]# sudo systemctl stop chronyd
[root@node2 ~]#
[root@node2 ~]#
[root@node2 ~]# sudo systemctl start chronyd
[root@node2 ~]#
[root@node2 ~]# timedatectl status
               Local time: Thu 2022-02-17 14:58:05 CET
           Universal time: Thu 2022-02-17 13:58:05 UTC
                 RTC time: Thu 2022-02-17 13:58:06
                Time zone: Europe/Berlin (CET, +0100)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
[root@node2 ~]#
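
To confirm that chronyd is actually synchronizing against the configured server, the sources can be checked on both nodes, for example:

[root@node1 ~]# chronyc sources -v     # the configured server should be marked with '*'
[root@node1 ~]# chronyc tracking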








Step By Step Oracle 19c RAC Installation on Oracle Linux  Part-2 GRID


2. PREPARING AND SETUP OF GRID ENVIRONMENT

We need a shared disk for the RAC installation. First, I created the volume below on my Dell storage. Second, a server cluster consisting of Node1 and Node2 was created.

Lastly, the test volume was mapped to the server cluster using iSCSI. You should get help from your Linux system admin / storage admin for this part. I did all of these operations myself because the storage and Linux servers are managed by me 🙂
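
For reference, presenting an iSCSI volume to both nodes typically looks like the sketch below (the storage IP and target IQN are placeholders, and your storage team may add multipathing on top of this):

[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p <storage-ip>:3260        # discover the targets offered by the array
[root@node1 ~]# iscsiadm -m node -T <target-iqn> -p <storage-ip>:3260 --login    # log in so the LUNs appear as block devices
[root@node1 ~]# lsblk                                                            # verify; repeat the same steps on node2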


CHECK DISKS ON NODE1 AND NODE2

[root@node1 ~]# fdisk -l | grep /dev/mapper/asm
Disk /dev/mapper/asmvg01-ASM_New_1: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk /dev/mapper/asmvg03-ASM_New_3: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk /dev/mapper/asmvg06-ASM_New_6: 200 GiB, 214748364800 bytes, 419430400 sectors
Disk /dev/mapper/asmvg04-ASM_New_4: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk /dev/mapper/asmvg05-ASM_New_5: 200 GiB, 214748364800 bytes, 419430400 sectors
Disk /dev/mapper/asmvg02-ASM_New_2: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk /dev/mapper/asmvg07-ASM_New_7: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/mapper/asmvg07-ASM_New_8: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/mapper/asmvg07-ASM_New_9: 10 GiB, 10737418240 bytes, 20971520 sectors





[root@node2 ~]# fdisk -l | grep /dev/mapper/asm
Disk /dev/mapper/asmvg01-ASM_New_1: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk /dev/mapper/asmvg03-ASM_New_3: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk /dev/mapper/asmvg06-ASM_New_6: 200 GiB, 214748364800 bytes, 419430400 sectors
Disk /dev/mapper/asmvg04-ASM_New_4: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk /dev/mapper/asmvg05-ASM_New_5: 200 GiB, 214748364800 bytes, 419430400 sectors
Disk /dev/mapper/asmvg02-ASM_New_2: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk /dev/mapper/asmvg07-ASM_New_7: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/mapper/asmvg07-ASM_New_8: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/mapper/asmvg07-ASM_New_9: 10 GiB, 10737418240 bytes, 20971520 sectors




!!! APPLY BELOW STEPS ONLY ON NODE1 !!! 

NO OPERATION MUST BE DONE ON NODE2 !!!



!!! DISCS IN THE DISC GROUP WE WILL CREATE FOR ASM SHOULD BE THE SAME SIZE !!!



!!! FURTHER, OUR DISCS MUST BE UNFORMATTED AND UNUSED !!!





DISK PARTITIONING IS DONE ONLY ON NODE1

The following operations are repeated for every disk that will be used.

[root@node1 ~]# fdisk /dev/sda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x521f0f8c.

The device presents a logical sector size that is smaller than
the physical sector size. Aligning to a physical sector (or optimal
I/O) size boundary is recommended, or performance may be impacted.

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (4096-209715199, default 4096):
Using default value 4096
Last sector, +sectors or +size{K,M,G} (4096-209715199, default 209715199):
Using default value 209715199
Partition 1 of type Linux and of size 100 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.


WE ARE RESTARTING NODE1 AND NODE2 SERVERS. 

IN THE NEXT STEP, WE WILL PREPARE OUR DISCS FOR ASM.


The ASMLib is highly recommended for those systems that will be using ASM for shared storage within the cluster due to the performance and manageability benefits that it provides. Perform the following steps to install and configure ASMLib on the cluster nodes:

NOTE: ASMLib automatically provides LUN persistence, so when using ASMLib there is no need to manually configure LUN persistence for the ASM devices on the system.

Download the following packages from the ASMLib OTN page, if you are an Enterprise Linux customer you can obtain the software through the Unbreakable Linux network.

NOTE: The ASMLib kernel driver MUST match the kernel revision number, the kernel revision number of your system can be identified by running the "uname -r" command. Also, be sure to download the set of RPMs which pertain to your platform architecture, in our case this is x86_64.

 

  Enter the following command to determine the kernel version and architecture of the system:


# uname -rm


Depending on your operating system version, download the required Oracle Automatic Storage Management library driver packages and driver:

http://www.oracle.com/technetwork/server-storage/linux/asmlib/index-101839.html

See Also:

My Oracle Support note 1089399.1 for information about Oracle ASMLIB support with Red Hat distributions:

https://support.oracle.com/rs?type=doc&id=1089399.1


ORACLE ASM ON NODE1 AND NODE2

The oracleasm package needs to be installed:

[root@node1 ~]# yum install oracleasm  

Last metadata expiration check: 2:44:06 ago on Mon 21 Feb 2022 10:06:42 AM CET.
Package kernel-uek-5.4.17-2011.1.2.el8uek.x86_64 is already installed.
Package kernel-uek-5.4.17-2036.100.6.1.el8uek.x86_64 is already installed.
Dependencies resolved.
=====================================================================================
 Package                   Architecture      Version                       Repository       Size
=====================================================================================
Installing:
 kernel-uek                x86_64            5.4.17-2102.202.5.el8uek      ol8_UEKR6        66 M

Transaction Summary
======================================
Install  1 Package

Total download size: 66 M
Installed size: 88 M
Downloading Packages:
kernel-uek-5.4.17-2102.202.5.el8uek.x86_64.rpm       13 MB/s |  66 MB     00:04
------------------------------------------------------------------------------------
Total                                                13 MB/s |  66 MB     00:04
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                    1/1
  Running scriptlet: kernel-uek-5.4.17-2102.202.5.el8uek.x86_64         1/1
  Installing       : kernel-uek-5.4.17-2102.202.5.el8uek.x86_64         1/1
  Running scriptlet: kernel-uek-5.4.17-2102.202.5.el8uek.x86_64         1/1
ldconfig: /etc/ld.so.conf.d/kernel-5.4.17-2011.1.2.el8uek.x86_64.conf:6: hwcap directive ignored
ldconfig: /etc/ld.so.conf.d/kernel-5.4.17-2036.100.6.1.el8uek.x86_64.conf:6: hwcap directive ignored

/sbin/ldconfig: /etc/ld.so.conf.d/kernel-5.4.17-2011.1.2.el8uek.x86_64.conf:6: hwcap directive ignored
/sbin/ldconfig: /etc/ld.so.conf.d/kernel-5.4.17-2036.100.6.1.el8uek.x86_64.conf:6: hwcap directive ignored

  Verifying        : kernel-uek-5.4.17-2102.202.5.el8uek.x86_64         1/1

Installed:
  kernel-uek-5.4.17-2102.202.5.el8uek.x86_64

Complete!





The remaining ASMLib packages are installed as follows (the oracleasmlib and oracleasm-support RPMs are downloaded from the "Oracle ASMLib Downloads for Oracle Linux 8" page):

2) rpm -ivh --nodeps oracleasmlib-2.0.17-1.el8.x86_64.rpm

3) yum install -y oracleasm

4) yum install kmod-oracleasm

5) rpm -ivh --nodeps oracleasm-support-2.1.12-1.el8.x86_64.rpm

Installing the kmod package (refer to point 4.1.6 on the Oracle Linux 8 known-issues page):
https://docs.oracle.com/en/operating-systems/oracle-linux/8/relnotes8.3/ol8-issues-known.html#ol8-issues-31292199

[root@node1 oracle]# rpm -qa kmod-oracleasm




[root@node1 oracle]# yum install kmod-oracleasm
Last metadata expiration check: 2:44:06 ago on Mon 21 Feb 2022 10:06:42 AM CET.
Dependencies resolved.
=========================================================================
 Package                   Architecture    Version               Repository          Size
=========================================================================
Installing:
 kmod-redhat-oracleasm     x86_64          8:2.0.8-12.0.1.el8    ol8_baseos_latest   44 k

Transaction Summary
=========================================================================
Install  1 Package

Total download size: 44 k
Installed size: 138 k
Is this ok [y/N]: y
Downloading Packages:
kmod-redhat-oracleasm-2.0.8-12.0.1.el8.x86_64.rpm         575 kB/s |  44 kB     00:00
-------------------------------------------------------------------------------------
Total                                                     558 kB/s |  44 kB     00:00
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1
  Installing       : kmod-redhat-oracleasm-8:2.0.8-12.0.1.el8.x86_64        1/1
  Running scriptlet: kmod-redhat-oracleasm-8:2.0.8-12.0.1.el8.x86_64        1/1


realpath: extra/kmod-kvdo/vdo/kvdo.ko: No such file or directory
realpath: extra/kmod-kvdo/uds/uds.ko: No such file or directory
dracut: installkernel failed in module kernel-modules-extra
warning: %posttrans(kmod-redhat-oracleasm-8:2.0.8-12.0.1.el8.x86_64) scriptlet failed, exit status 1

Error in POSTTRANS scriptlet in rpm package kmod-redhat-oracleasm
/sbin/ldconfig: /etc/ld.so.conf.d/kernel-5.4.17-2011.1.2.el8uek.x86_64.conf:6: hwcap directive ignored

  Verifying        : kmod-redhat-oracleasm-8:2.0.8-12.0.1.el8.x86_64        1/1

Installed:
  kmod-redhat-oracleasm-8:2.0.8-12.0.1.el8.x86_64

Complete!

[root@node2 ]# rpm -ivh oracleasmlib-2.0.17-1.el8.x86_64.rpm --nodeps --force

Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:oracleasmlib-2.0.17-1.el8        ################################# [100%]
/sbin/ldconfig: /etc/ld.so.conf.d/kernel-5.4.17-2011.1.2.el8uek.x86_64.conf:6: hwcap directive ignored
[root@node2 /]#

[root@node2 /]# rpm -ivh --nodeps oracleasm-support-2.1.12-1.el8.x86_64.rpm

Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:oracleasm-support-2.1.12-1.el8   ################################# [100%]
Note: Forwarding request to 'systemctl enable oracleasm.service'.
Synchronizing state of oracleasm.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install enable oracleasm
Created symlink /etc/systemd/system/multi-user.target.wants/oracleasm.service → /usr/lib/systemd/system/oracleasm.service.
/sbin/ldconfig: /etc/ld.so.conf.d/kernel-5.4.17-2011.1.2.el8uek.x86_64.conf:6: hwcap directive ignored
[root@node2 ]#




CVU DISK PACKAGE :-


[root@node1 grid_home]# cd /applications/oracle/g19.3.0/grid_home/cv/rpm
[root@node1 rpm]#
[root@node1 rpm]#
[root@node1 rpm]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
[root@node1 rpm]#
[root@node1 rpm]#
[root@node1 rpm]# ls -lrt
total 12
-rw-r--r--. 1 grid oinstall 11412 Mar 13  2019 cvuqdisk-1.0.10-1.rpm
[root@node1 rpm]#
[root@node1 rpm]# rpm -iv cvuqdisk-1.0.10-1.rpm
Verifying packages...
Preparing packages...
cvuqdisk-1.0.10-1.x86_64
[root@node1 rpm]#
[root@node1 rpm]#
[root@node1 rpm]# rpm -qi cvuqdisk
Name        : cvuqdisk
Version     : 1.0.10
Release     : 1
Architecture: x86_64
Install Date: Mon 21 Feb 2022 02:07:27 PM CET
Group       : none
Size        : 22920
License     : Oracle Corp.
Signature   : (none)
Source RPM  : cvuqdisk-1.0.10-1.src.rpm
Build Date  : Wed 13 Mar 2019 10:25:43 AM CET
Build Host  : rpm-build-host
Relocations : (not relocatable)
Vendor      : Oracle Corp.
Summary     : RPM file for cvuqdisk
Description :
This package contains the cvuqdisk program required by CVU.
cvuqdisk is a binary that assists CVU in finding scsi disks.
To install this package, you must first become 'root' and then set the
environment variable 'CVUQDISK_GRP' to the group that will own cvuqdisk.
If the CVUQDISK_GRP is not set, by default "oinstall" will be the owner group
of cvuqdisk.



Note:- Copy the cvuqdisk package to node2 and install it there in the same way.
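
For example (a sketch; the paths match the ones used above):

[root@node1 rpm]# scp cvuqdisk-1.0.10-1.rpm node2:/tmp/
[root@node2 ~]# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
[root@node2 ~]# rpm -iv /tmp/cvuqdisk-1.0.10-1.rpm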



ORACLEASM CONFIGURATION ON NODE1 AND NODE2



[root@node1 ~]# oracleasm configure -I
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y

The next two configuration options take substrings to match device names.
The substring "sd" (without the quotes), for example, matches "sda", "sdb",
etc. You may enter more than one substring pattern, separated by spaces.
The special string "none" (again, without the quotes) will clear the value.

Device order to scan for ASM disks []:
Devices to exclude from scanning []:
Directories to scan []:
Use device logical block size for ASM (y/n) [n]: y
Writing Oracle ASM library driver configuration: done








[root@node2 ~]# oracleasm configure -I
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y

The next two configuration options take substrings to match device names.
The substring "sd" (without the quotes), for example, matches "sda", "sdb",
etc. You may enter more than one substring pattern, separated by spaces.
The special string "none" (again, without the quotes) will clear the value.

Device order to scan for ASM disks []:
Devices to exclude from scanning []:
Directories to scan []:
Use device logical block size for ASM (y/n) [n]: y
Writing Oracle ASM library driver configuration: done






THE DISKS ARE NOW PREPARED. NEXT WE LOAD THE ORACLEASM KERNEL MODULE AND MOUNT ITS FILESYSTEM WITH oracleasm init.

ACTIVATE KERNEL ON NODE1 AND NODE2


[root@node1 ~]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device logical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@node1 ~]#

[root@node2 ~]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device logical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
[root@node2 ~]#


[root@node1 ~]# fdisk -l | grep  /dev/mapper/asm
Disk /dev/mapper/asmvg04-ASM_New_4: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk /dev/mapper/asmvg02-ASM_New_2: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk /dev/mapper/asmvg01-ASM_New_1: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk /dev/mapper/asmvg05-ASM_New_5: 200 GiB, 214748364800 bytes, 419430400 sectors
Disk /dev/mapper/asmvg06-ASM_New_6: 200 GiB, 214748364800 bytes, 419430400 sectors
Disk /dev/mapper/asmvg07-ASM_New_7: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/mapper/asmvg07-ASM_New_8: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/mapper/asmvg07-ASM_New_9: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/mapper/asmvg03-ASM_New_3: 500 GiB, 536870912000 bytes, 1048576000 sectors

[root@node2 ~]#  fdisk -l | grep  /dev/mapper/asm
Disk /dev/mapper/asmvg02-ASM_New_2: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk /dev/mapper/asmvg06-ASM_New_6: 200 GiB, 214748364800 bytes, 419430400 sectors
Disk /dev/mapper/asmvg04-ASM_New_4: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk /dev/mapper/asmvg01-ASM_New_1: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk /dev/mapper/asmvg03-ASM_New_3: 500 GiB, 536870912000 bytes, 1048576000 sectors
Disk /dev/mapper/asmvg07-ASM_New_7: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/mapper/asmvg07-ASM_New_8: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk /dev/mapper/asmvg05-ASM_New_5: 200 GiB, 214748364800 bytes, 419430400 sectors
Disk /dev/mapper/asmvg07-ASM_New_9: 10 GiB, 10737418240 bytes, 20971520 sectors


!!! WE WILL NOW STAMP (LABEL) THE DISKS FOR ASM.

WE DO THIS OPERATION ONLY ON NODE1 !!!

The following operation is repeated for every disk that will be used.

We are going to create OCR and VOTE Disk for clusterware - 

Normal Redundancy - Three Disks


NODE1:-

[root@node1 ~]# oracleasm createdisk OCR_VD1 /dev/mapper/asmvg07-ASM_New_7

[root@node1 ~]# oracleasm listdisks
OCR_VD1
[root@node1 ~]#
[root@node1 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@node1 ~]#
[root@node1 ~]# oracleasm listdisks
OCR_VD1
[root@node1 ~]#
[root@node1 ~]# oracleasm createdisk OCR_VD2 /dev/mapper/asmvg07-ASM_New_8
Writing disk header: done
Instantiating disk: done
[root@node1 ~]#
[root@node1 ~]# oracleasm createdisk OCR_VD3 /dev/mapper/asmvg07-ASM_New_9
Writing disk header: done
Instantiating disk: done
[root@node1 ~]#
[root@node1 ~]# oracleasm listdisks
OCR_VD1
OCR_VD2
OCR_VD3







NODE2:-

[root@node2 ~]#  oracleasm listdisks
OCR_VD1
[root@node2 ~]#
[root@node2 ~]#  oracleasm listdisks
OCR_VD1
[root@node2 ~]#
[root@node2 ~]#  oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "OCR_VD2"
Instantiating disk "OCR_VD3"
[root@node2 ~]#
[root@node2 ~]#
[root@node2 ~]# oracleasm listdisks
OCR_VD1
OCR_VD2
OCR_VD3
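
The remaining LUNs (asmvg01 through asmvg06) are stamped the same way on node1 when the DATA and FRA disk groups are prepared, followed by a scandisks on node2; for example (the disk label below is only illustrative):

[root@node1 ~]# oracleasm createdisk DATA1 /dev/mapper/asmvg01-ASM_New_1
[root@node2 ~]# oracleasm scandisks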




CLUVFY UTILITY :-

The Cluster Verification Utility (CVU) performs system checks in preparation for installation, patch updates, or other system changes. Using CVU ensures that you have completed the required system configuration and preinstallation steps so that your Oracle Grid Infrastructure or Oracle Real Application Clusters (Oracle RAC) installation, update, or patch operation completes successfully.

  Cluvfy utility software download link (30839369):-




Patch 30839369: Standalone CVU version 19.11 April 2021

 

[grid@node1 bin]$ pwd
/home/grid/cuv/bin

[grid@node1 bin]$
[grid@node1 bin]$ ./cluvfy stage -pre crsinst -n node1,node2 -verbose


Performing following verification checks ...

  Physical Memory ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1         245.3025GB (2.5721834E8KB)  8GB (8388608.0KB)         passed
  node2         245.3025GB (2.5721834E8KB)  8GB (8388608.0KB)         passed
  Physical Memory ...PASSED
  Available Physical Memory ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1         244.3012GB (2.5616842E8KB)  50MB (51200.0KB)          passed
  node2         244.6284GB (2.56511424E8KB)  50MB (51200.0KB)          passed
  Available Physical Memory ...PASSED
  Swap Size ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1         20GB (2.0971516E7KB)      16GB (1.6777216E7KB)      passed
  node2         20GB (2.0971516E7KB)      16GB (1.6777216E7KB)      passed
  Swap Size ...PASSED
  Free Space: node1:/usr,node1:/sbin ...
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              node1  /usr          1.2822GB      25MB          passed
  /sbin             node1  /usr          1.2822GB      10MB          passed
  Free Space: node1:/usr,node1:/sbin ...PASSED
  Free Space: node1:/var ...
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /var              node1          /var          4.7939GB      5MB           passed
  Free Space: node1:/var ...PASSED
  Free Space: node1:/etc ...
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /etc              node1          /             9.6328GB      25MB          passed
  Free Space: node1:/etc ...PASSED
  Free Space: node1:/tmp ...
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /tmp              node1         /tmp          25.4434GB     1GB           passed
  Free Space: node1:/tmp ...PASSED
  Free Space: node2:/usr,node2:/sbin ...
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /usr              node2         /usr          1.373GB       25MB          passed
  /sbin             node2         /usr          1.373GB       10MB          passed
  Free Space: node2:/usr,node2:/sbin ...PASSED
  Free Space: node2:/var ...
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /var              node2         /var          4.8115GB      5MB           passed
  Free Space: node2:/var ...PASSED
  Free Space: node2:/etc ...
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /etc              node2         /             9.6338GB      25MB          passed
  Free Space: node2:/etc ...PASSED
  Free Space: node2:/tmp ...
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /tmp              node2         /tmp          25.4443GB     1GB           passed
  Free Space: node2:/tmp ...PASSED
  User Existence: grid ...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node1  passed                    exists(54331)
  node2  passed                    exists(54331)

    Users With Same UID: 54331 ...PASSED
  User Existence: grid ...PASSED
  Group Existence: asmadmin ...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node1         passed                    exists
  node2         passed                    exists
  Group Existence: asmadmin ...PASSED
  Group Existence: asmdba ...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node1         passed                    exists
  node2         passed                    exists
  Group Existence: asmdba ...PASSED
  Group Existence: oinstall ...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node1         passed                    exists
  node2         passed                    exists
  Group Existence: oinstall ...PASSED
  Group Membership: asmdba ...
  Node Name         User Exists   Group Exists  User in Group  Status
  ----------------  ------------  ------------  ------------  ----------------
  node1    yes           yes           yes           passed
  node2    yes           yes           yes           passed
  Group Membership: asmdba ...PASSED
  Group Membership: asmadmin ...
  Node Name         User Exists   Group Exists  User in Group  Status
  ----------------  ------------  ------------  ------------  ----------------
  node1              yes           yes           yes           passed
  node2              yes           yes           yes           passed
  Group Membership: asmadmin ...PASSED
  Group Membership: oinstall(Primary) ...
  Node Name         User Exists   Group Exists  User in Group  Primary       Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1              yes           yes           yes           yes           passed
  node2              yes           yes           yes           yes           passed
  Group Membership: oinstall(Primary) ...PASSED
  Run Level ...
  Node Name     run level                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1         5                         3,5                       passed
  node2         5                         3,5                       passed
  Run Level ...PASSED
  Architecture ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1          x86_64                    x86_64                    passed
  node2          x86_64                    x86_64                    passed
  Architecture ...PASSED
  OS Kernel Version ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1  5.4.17-2136.304.4.1.el8uek.x86_64  4.18.0                    passed
  node2  5.4.17-2136.304.4.1.el8uek.x86_64  4.18.0                    passed
  OS Kernel Version ...PASSED
  OS Kernel Parameter: semmsl ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1             250           250           250           passed
  node2             250           250           250           passed
  OS Kernel Parameter: semmsl ...PASSED
  OS Kernel Parameter: semmns ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1               32000         32000         32000         passed
  node2               32000         32000         32000         passed
  OS Kernel Parameter: semmns ...PASSED
  OS Kernel Parameter: semopm ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1               100           100           100           passed
  node2               100           100           100           passed
  OS Kernel Parameter: semopm ...PASSED
  OS Kernel Parameter: semmni ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1               128           128           128           passed
  node2               128           128           128           passed
  OS Kernel Parameter: semmni ...PASSED
  OS Kernel Parameter: shmmax ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1            4398046511104  4398046511104  131695790080  passed
  node2            4398046511104  4398046511104  131695790080  passed
  OS Kernel Parameter: shmmax ...PASSED
  OS Kernel Parameter: shmmni ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1             4096          4096          4096          passed
  node2             4096          4096          4096          passed
  OS Kernel Parameter: shmmni ...PASSED
  OS Kernel Parameter: shmall ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1              1073741824    1073741824    1073741824    passed
  node2              1073741824    1073741824    1073741824    passed
  OS Kernel Parameter: shmall ...PASSED
  OS Kernel Parameter: file-max ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1              6815744       6815744       6815744       passed
  node2              6815744       6815744       6815744       passed
  OS Kernel Parameter: file-max ...PASSED
  OS Kernel Parameter: ip_local_port_range ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1    between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed
  node2    between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed
  OS Kernel Parameter: ip_local_port_range ...PASSED
  OS Kernel Parameter: rmem_default ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1             262144        262144        262144        passed
  node2             262144        262144        262144        passed
  OS Kernel Parameter: rmem_default ...PASSED
  OS Kernel Parameter: rmem_max ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1              4194304       4194304       4194304       passed
  node2              4194304       4194304       4194304       passed
  OS Kernel Parameter: rmem_max ...PASSED
  OS Kernel Parameter: wmem_default ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1               262144        262144        262144        passed
  node2               262144        262144        262144        passed
  OS Kernel Parameter: wmem_default ...PASSED
  OS Kernel Parameter: wmem_max ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1               1048576       1048576       1048576       passed
  node2               1048576       1048576       1048576       passed
  OS Kernel Parameter: wmem_max ...PASSED
  OS Kernel Parameter: aio-max-nr ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1                1048576       1048576       1048576       passed
  node2                1048576       1048576       1048576       passed
  OS Kernel Parameter: aio-max-nr ...PASSED
  OS Kernel Parameter: panic_on_oops ...
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  node1             1             1             1             passed
  node2             1             1             1             passed
  OS Kernel Parameter: panic_on_oops ...PASSED
  Package: kmod-20-21 (x86_64) ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1       kmod(x86_64)-25-18.0.1.el8  kmod(x86_64)-20-21        passed
  node2       kmod(x86_64)-25-18.0.1.el8  kmod(x86_64)-20-21        passed
  Package: kmod-20-21 (x86_64) ...PASSED
  Package: kmod-libs-20-21 (x86_64) ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1  kmod-libs(x86_64)-25-18.0.1.el8  kmod-libs(x86_64)-20-21   passed
  node2  kmod-libs(x86_64)-25-18.0.1.el8  kmod-libs(x86_64)-20-21   passed
  Package: kmod-libs-20-21 (x86_64) ...PASSED
  Package: binutils-2.30-49.0.2 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1  binutils-2.30-108.0.2.el8_5.1  binutils-2.30-49.0.2      passed
  node2  binutils-2.30-108.0.2.el8_5.1  binutils-2.30-49.0.2      passed
  Package: binutils-2.30-49.0.2 ...PASSED
  Package: libgcc-8.2.1 (x86_64) ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1  libgcc(x86_64)-8.5.0-4.0.2.el8_5  libgcc(x86_64)-8.2.1      passed
  node2  libgcc(x86_64)-8.5.0-4.0.2.el8_5  libgcc(x86_64)-8.2.1      passed
  Package: libgcc-8.2.1 (x86_64) ...PASSED
  Package: libstdc++-8.2.1 (x86_64) ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1  libstdc++(x86_64)-8.5.0-4.0.2.el8_5  libstdc++(x86_64)-8.2.1   passed
  node2  libstdc++(x86_64)-8.5.0-4.0.2.el8_5  libstdc++(x86_64)-8.2.1   passed
  Package: libstdc++-8.2.1 (x86_64) ...PASSED
  Package: sysstat-10.1.5 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1        sysstat-11.7.3-6.0.1.el8  sysstat-10.1.5            passed
  node2        sysstat-11.7.3-6.0.1.el8  sysstat-10.1.5            passed
  Package: sysstat-10.1.5 ...PASSED
  Package: ksh ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1          ksh                       ksh                       passed
  node2          ksh                       ksh                       passed
  Package: ksh ...PASSED
  Package: make-4.2.1 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1         make-4.2.1-10.el8         make-4.2.1                passed
  node2         make-4.2.1-10.el8         make-4.2.1                passed
  Package    : make-4.2.1 ...PASSED
  Package: glibc-2.28 (x86_64) ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1  glibc(x86_64)-2.28-164.0.3.el8  glibc(x86_64)-2.28        passed
  node2  glibc(x86_64)-2.28-164.0.3.el8  glibc(x86_64)-2.28        passed
  Package: glibc-2.28 (x86_64) ...PASSED
  Package: glibc-devel-2.28 (x86_64) ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1  glibc-devel(x86_64)-2.28-164.0.3.el8  glibc-devel(x86_64)-2.28  passed
  node2  glibc-devel(x86_64)-2.28-164.0.3.el8  glibc-devel(x86_64)-2.28  passed
  Package: glibc-devel-2.28 (x86_64) ...PASSED
  Package: libaio-0.3.110 (x86_64) ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1  libaio(x86_64)-0.3.112-1.el8  libaio(x86_64)-0.3.110    passed
  node2  libaio(x86_64)-0.3.112-1.el8  libaio(x86_64)-0.3.110    passed
  Package: libaio-0.3.110 (x86_64) ...PASSED
  Package: nfs-utils-2.3.3-14 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1         nfs-utils-2.3.3-46.el8    nfs-utils-2.3.3-14        passed
  node2         nfs-utils-2.3.3-46.el8    nfs-utils-2.3.3-14        passed
  Package: nfs-utils-2.3.3-14 ...PASSED
  Package: smartmontools-6.6-3 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1           smartmontools-7.1-1.el8   smartmontools-6.6-3       passed
  node2           smartmontools-7.1-1.el8   smartmontools-6.6-3       passed
  Package: smartmontools-6.6-3 ...PASSED
  Package: net-tools-2.0-0.51 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1  net-tools-2.0-0.52.20160912git.el8  net-tools-2.0-0.51        passed
  node2  net-tools-2.0-0.52.20160912git.el8  net-tools-2.0-0.51        passed
  Package: net-tools-2.0-0.51 ...PASSED
  Package: policycoreutils-2.9-3 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1  policycoreutils-2.9-16.0.1.el8  policycoreutils-2.9-3     passed
  node2  policycoreutils-2.9-16.0.1.el8  policycoreutils-2.9-3     passed
  Package: policycoreutils-2.9-3 ...PASSED
  Package: policycoreutils-python-utils-2.9-3 ...
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  node1  policycoreutils-python-utils-2.9-16.0.1.el8  policycoreutils-python-utils-2.9-3  passed
  node2  policycoreutils-python-utils-2.9-16.0.1.el8  policycoreutils-python-utils-2.9-3  passed
  Package: policycoreutils-python-utils-2.9-3 ...PASSED
  Port Availability for component "Oracle Notification Service (ONS)" ...
  Node Name         Port Number   Protocol      Available     Status
  ----------------  ------------  ------------  ------------  ----------------
  Port Availability for component "Oracle Notification Service (ONS)" ...PASSED
  Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...
  Node Name         Port Number   Protocol      Available     Status
  ----------------  ------------  ------------  ------------  ----------------
  Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED
  Users With Same UID: 0 ...PASSED
  Current Group ID ...PASSED
  Root user consistency ...
  Node Name                             Status
  ------------------------------------  ------------------------
  node1                                passed
  node2                                passed
  Root user consistency ...PASSED
  Host name ...PASSED
  Node Connectivity ...
    Hosts File ...
  Node Name                             Status
  ------------------------------------  ------------------------
  node1                                 passed
  node2                                 passed
    Hosts File ...PASSED

Interface information for node "node1"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   10.5.0.100     172.20.1.0      0.0.0.0         172.20.1.126    00:21:F6:CB:28:71 1500
 eth1   192.168.1.120   192.168.128.0   0.0.0.0        172.20.1.126    00:21:F6:EB:84:0D 1500

Interface information for node "node2"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   10.5.0.101     172.20.1.0      0.0.0.0         172.20.1.126    00:21:F6:96:60:B4 1500
 eth1   192.168.1.121   192.168.128.0   0.0.0.0         172.20.1.126    00:21:F6:E6:8B:F0 1500

Check: MTU consistency of the subnet "192.168.128.0".

  Node              Name          IP Address    Subnet        MTU
  ----------------  ------------  ------------  ------------  ----------------
  node1            eth1          192.168.1.120  192.168.128.0  1500
  node2            eth1          192.168.1.121  192.168.128.0  1500

Check: MTU consistency of the subnet "172.20.1.0".

  Node              Name          IP Address    Subnet        MTU
  ----------------  ------------  ------------  ------------  ----------------
  node1                eth0          10.5.0.100   172.20.1.0    1500
  node2                eth0          10.5.0.101   172.20.1.0    1500

  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node1[eth1:192.168.1.120]  node2[eth1:192.168.1.121]  no

  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  node1[eth0:10.5.0.100]  node2[eth0:10.5.0.101]  yes


    Check that maximum (MTU) size packet goes through subnet ...PASSED
    subnet mask consistency for subnet "192.168.128.0" ...PASSED
    subnet mask consistency for subnet "172.20.1.0" ...PASSED
  Node Connectivity ...FAILED (PRVG-11891, PRVG-11078, PRVG-11094, PRVG-11067, PRVG-11095)
  Multicast or broadcast check ...
Checking subnet "192.168.128.0" for multicast communication with multicast group "224.0.0.251"

Checking subnet "172.20.1.0" for multicast communication with multicast group "224.0.0.251"
  Multicast or broadcast check ...PASSED
  ASMLib installation and configuration verification. ...
    '/etc/init.d/oracleasm' ...PASSED
    '/dev/oracleasm' ...PASSED
    '/etc/sysconfig/oracleasm' ...PASSED

  Node Name                             Status
  ------------------------------------  ------------------------
  node1                        passed
  node2                        passed
  ASMLib installation and configuration verification. ...PASSED
  Network Time Protocol (NTP) ...
    '/etc/chrony.conf' ...
  Node Name                             File exists?
  ------------------------------------  ------------------------
  node1                        yes
  node2                        yes

    '/etc/chrony.conf' ...PASSED
  Network Time Protocol (NTP) ...FAILED (PRVG-1017)
  Same core file name pattern ...PASSED
  User Mask ...
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  node1  0022                      0022                      passed
  node2  0022                      0022                      passed
  User Mask ...PASSED
  User Not In Group "root": grid ...
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  node1  passed                    does not exist
  node2  passed                    does not exist
  User Not In Group "root": grid ...PASSED
  Time zone consistency ...PASSED
  Path existence, ownership, permissions and attributes ...
    Path "/var" ...PASSED
    Path "/dev/shm" ...PASSED
  Path existence, ownership, permissions and attributes ...PASSED
  Time offset between nodes ...PASSED
  resolv.conf Integrity ...
  Node Name                             Status
  ------------------------------------  ------------------------
  node1                        passed
  node2                        passed

checking response for name "node2" from each of the name servers
specified in "/etc/resolv.conf"

  Node Name     Source                    Comment                   Status
  ------------  ------------------------  ------------------------  ----------
  node2         172.20.0.73               IPv4                      passed

checking response for name "node1" from each of the name servers
specified in "/etc/resolv.conf"

  Node Name     Source                    Comment                   Status
  ------------  ------------------------  ------------------------  ----------
  node1         172.20.0.73               IPv4                      passed
  resolv.conf Integrity ...PASSED
  DNS/NIS name service ...PASSED
  Domain Sockets ...PASSED
  Daemon "avahi-daemon" not configured and running ...
  Node Name     Configured                Status
  ------------  ------------------------  ------------------------
  node1         no                        passed
  node2         no                        passed

  Node Name     Running?                  Status
  ------------  ------------------------  ------------------------
  node1         no                        passed
  node2         no                        passed
  Daemon "avahi-daemon" not configured and running ...PASSED
  Daemon "proxyt" not configured and running ...
  Node Name     Configured                Status
  ------------  ------------------------  ------------------------
  node1         no                        passed
  node2         no                        passed

  Node Name     Running?                  Status
  ------------  ------------------------  ------------------------
  node1         no                        passed
  node2         no                        passed
  Daemon "proxyt" not configured and running ...PASSED
  User Equivalence ...PASSED
  RPM Package Manager database ...INFORMATION (PRVG-11250)
  /dev/shm mounted as temporary file system ...PASSED
  File system mount options for path /var ...PASSED
  DefaultTasksMax parameter ...PASSED
  zeroconf check ...PASSED
  ASM Filter Driver configuration ...PASSED
  Systemd login manager IPC parameter ...PASSED

Pre-check for cluster services setup was unsuccessful on all the nodes.


Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Node Connectivity ...FAILED
PRVG-11891 : Connectivity failed between interface "eth1" configured with IP
address "192.168.1.120" on node "node1" and interface "eth1"
configured with IP address "192.168.1.121" on node "node2" for the
subnet "192.168.128.0".
PRVG-11078 : node connectivity failed for subnet "192.168.128.0"

node1: PRVG-11094 : Ping connectivity check from the source IP address
                "192.168.1.120" to the destination IP address "192.168.1.121"
                failed.
                PING 192.168.1.121 (192.168.1.121) from 192.168.1.120 : 56(84)
                bytes of data.--- 192.168.1.121 ping statistics ---3 packets
                transmitted, 0 received, 100% packet loss, time 2073ms
node1: PRVG-11067 : TCP connectivity from node "node1":
                "192.168.1.120" to node "node2": "192.168.1.121"
                failed.
                PRVG-11095 : The TCP system call "connect" failed with error
                "110" while executing exectask on node "node1"
                Connection timed out

node2: PRVG-11067 : TCP connectivity from node "node2":
                "192.168.1.121" to node "node1": "192.168.1.120"
                failed.
                PRVG-11095 : The TCP system call "connect" failed with error
                "111" while executing exectask on node "node2"
                Connection refused

Refer to My Oracle Support notes "1357657.1" for more details regarding errors
PRVG-11067".

Network Time Protocol (NTP) ...FAILED
node1: PRVG-1017 : NTP configuration file "/etc/chrony.conf" is
                present on nodes "node1,node2" on which NTP
                daemon or service was not running

node2: PRVG-1017 : NTP configuration file "/etc/chrony.conf" is
                present on nodes "node1,node2" on which NTP
                daemon or service was not running

RPM Package Manager database ...INFORMATION
PRVG-11250 : The check "RPM Package Manager database" was not performed because
it needs 'root' user privileges.

Refer to My Oracle Support notes "2548970.1" for more details regarding errors
PRVG-11250".


CVU operation performed:      stage -pre crsinst
Date:                         Feb 22, 2022 12:43:36 PM
CVU home:                     /home/grid/cuv/bin/..
User:                         grid
Operating system:             Linux5.4.17-2136.304.4.1.el8uek.x86_64


Note: After rebooting the servers, all of the above errors were resolved.
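
If a reboot is not an option, both failures can also be checked and fixed by hand. A minimal sketch, assuming eth1 carries the 192.168.x interconnect addresses, firewalld is the active firewall and chronyd is the time service referenced by /etc/chrony.conf (run the equivalent commands on node2 as well):

[root@node1 ~]# ping -c 3 -I eth1 192.168.1.121        # raw reachability over the private interface
[root@node1 ~]# firewall-cmd --state                   # interconnect traffic must not be filtered
[root@node1 ~]# systemctl disable --now firewalld      # or open the required ports instead of disabling
[root@node1 ~]# systemctl enable --now chronyd         # clears PRVG-1017 (config file present, daemon not running)
[root@node1 ~]# chronyc tracking                       # confirm the node is actually synchronising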


OUR DISKS ARE READY. NOW WE CAN START SETUP. 

FIRST, WE WILL INSTALL GRID

ORACLE DATABASE 19c GRID INFRASTRUCTURE (19.3) FOR LINUX X86-64 IS DOWNLOADED AND UPLOADED TO NODE1 SERVER

[root@node1 ~]# cd /u01/app/grid/19.3.0/gridhome_1
[root@node1 gridhome_1]#

[root@node1 gridhome_1]# ls -lrt
total 2821476
-rwxr--r-- 1 root root 2889184573 Feb 14 00:13 LINUX.X64_193000_grid_home.zip

[root@node1 gridhome_1]#

[root@node1 gridhome_1]# chown grid:oinstall LINUX.X64_193000_grid_home.zip

[root@node1 gridhome_1]#

[root@node1 gridhome_1]# su - grid
[grid@node1 ~]$
[grid@node1 ~]$ cd /u01/app/grid/19.3.0/gridhome_1
[grid@node1 gridhome_1]$
[grid@node1 gridhome_1]$ unzip LINUX.X64_193000_grid_home.zip
[grid@node1 gridhome_1]$




[grid@node1 ~]$ cd /u01/app/grid/19.3.0/gridhome_1/cv/admin


[grid@node1 admin]$ pwd

/u01/app/grid/19.3.0/gridhome_1/cv/admin


[grid@node1 admin]$ vi cvu_config

CV_ASSUME_DISTID=OEL8.1
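
As an alternative to editing cvu_config, the same override can simply be exported in the grid user's environment before running gridSetup.sh; both approaches work, and the value below is the one used in this guide:

[grid@node1 ~]$ export CV_ASSUME_DISTID=OEL8.1
[grid@node1 ~]$ grep CV_ASSUME_DISTID /u01/app/grid/19.3.0/gridhome_1/cv/admin/cvu_config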




ORACLE AND GRID USERS ARE ADDED TO VISUDO

[root@node1 ~]# visudo
oracle ALL=(ALL) ALL
oracle ALL=NOPASSWD: ALL
grid ALL=(ALL) ALL
grid ALL=NOPASSWD: ALL
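
To confirm the entries took effect, the effective sudo rights of each user can be listed (a quick verification, not required by the installer):

[root@node1 ~]# sudo -l -U grid
[root@node1 ~]# sudo -l -U oracle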


WE HAVE COME TO THE MOST CRITICAL POINT. 

WE WILL START THE GRID SETUP.

THE MOST IMPORTANT CONSIDERATION HERE IS THE SSH CLIENT YOU USE. I WAS USING THE ZOC APPLICATION AND THE SETUP SCREEN DID NOT APPEAR EVEN THOUGH THE DISPLAY SETTINGS WERE DONE.

SWITCHING TO THE MOBAXTERM APPLICATION SOLVED IT: I RAN THE FOLLOWING COMMANDS AND THE SETUP SCREEN CAME UP. ANOTHER IMPORTANT POINT IS THAT YOU MUST SET DISPLAY TO THE IP OF THE WINDOWS MACHINE FROM WHICH YOU STARTED THE INSTALLATION.
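
Before launching the installer, it is worth confirming that X forwarding actually works from your MobaXterm session. A small sketch, assuming the xdpyinfo utility (xorg-x11-utils package) is installed and <your_workstation_ip> is the Windows machine running the X server:

[grid@node1 ~]$ export DISPLAY=<your_workstation_ip>:0.0
[grid@node1 ~]$ xdpyinfo | head -5        # should print display information, not an X11 error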

[root@node1 ~]# su - grid
Last login: Sun Feb 14 14:08:16 +03 2021 on pts/0
[grid@node1 ~]$
[grid@node1 ~]$ cd $GRID_HOME
[grid@node1 gridhome_1]$
[grid@node1 gridhome_1]$ sh gridSetup.sh
ERROR: Unable to verify the graphical display setup. This application requires X display. Make sure that xdpyinfo exist under PATH variable.No X11 DISPLAY variable was set, but this program performed an operation which requires it.

[grid@node1 gridhome_1]$
[grid@node1 gridhome_1]$ export DISPLAY=10.6.176.54:0.0
[grid@node1 gridhome_1]$
[grid@node1 gridhome_1]$ sh gridSetup.sh
Launching Oracle Grid Infrastructure Setup Wizard…

[grid@node1 gridhome_1]$


I WILL CONTINUE THE NEXT STEPS WITH THE SCREENSHOTS

Grid operations are completed.


Step By Step Oracle 19C RAC Installation on Oracle Linux Part-3 DATABASE


3. INSTALL ORACLE DATABASE 19C CDB & PDB

3.1. INSTALLATION OF ORACLE DATABASE 19C SOFTWARE

ORACLE DATABASE 19C (19.3) FOR LINUX X86-64 IS DOWNLOADED AND UPLOADED TO NODE1 SERVER

[root@node1 ~]# su - oracle
Last login: Sat Feb 20 13:30:57 +03 2021
[oracle@node1 ~]$
[oracle@node1 ~]$ cd $ORACLE_HOME/
[oracle@node1 dbhome_1]$
[oracle@node1 dbhome_1]$ ls -lrt
-rwxr--r-- 1 root root 3059705302 Feb 20 11:12 LINUX.X64_193000_db_home.zip
[oracle@node1 dbhome_1]$
[oracle@node1 dbhome_1]$ unzip LINUX.X64_193000_db_home.zip
[oracle@node1 dbhome_1]$

WE HAVE COME TO THE MOST CRITICAL POINT. 

WE WILL START THE SETUP.

THE MOST IMPORTANT CONSIDERATION HERE IS THE SSH CLIENT YOU USE. I WAS USING THE ZOC APPLICATION AND THE SETUP SCREEN DID NOT APPEAR EVEN THOUGH THE DISPLAY SETTINGS WERE DONE.

SWITCHING TO THE MOBAXTERM APPLICATION SOLVED IT: I RAN THE SAME DISPLAY COMMANDS AS FOR THE GRID SETUP AND THE SETUP SCREEN CAME UP. ANOTHER IMPORTANT POINT IS THAT YOU MUST SET DISPLAY TO THE IP OF THE WINDOWS MACHINE FROM WHICH YOU STARTED THE INSTALLATION.

I WILL CONTINUE THE NEXT STEPS WITH THE SCREENSHOTS

In order to get rid of the above errors, apply the steps below on Node1 and Node2.

CHANGE OPTIONS="-g" TO OPTIONS="-x", THEN RESTART THE ntpd SERVICE.

[root@node1 /]# more /etc/sysconfig/ntpd
OPTIONS="-x"
[root@node1 /]#
[root@node1 /]# systemctl restart ntpd.service
[root@node1 /]#

[root@node2 /]# more /etc/sysconfig/ntpd
OPTIONS="-x"
[root@node2 /]#
[root@node2 /]# systemctl restart ntpd.service
[root@node2 /]#
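
To verify that the daemon came back with the new option and is synchronising, the classic ntpd tools can be queried (this assumes ntpd rather than chronyd is the time service on these servers, as the file above suggests):

[root@node1 /]# ps -ef | grep [n]tpd      # the process should now show the -x option
[root@node1 /]# ntpq -p                   # at least one peer should be marked with '*' (selected)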


When you press Check Again, the failed status should normally disappear. If the check still fails, cancel and start the installation again.










3.2. INSTALLATION ORACLE DATABASE 19C AS CDB & PDB

ADD BELOW ENTRIES TO ORACLE BASH PROFILE ON NODE1 AND NODE2

[root@node1 ~]# su - oracle
Last login: Sat Feb 20 17:58:50 +03 2021 on pts/0
[oracle@node1 ~]$ vi .bash_profile


export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=node1.serhatcelik.local
export ORACLE_UNQNAME=CDBTEST
export ORACLE_SID=CDBTEST1
export ORACLE_BASE=/u01/app/oracle/database/19.3.0
export DB_HOME=$ORACLE_BASE/dbhome_1
export ORACLE_HOME=$DB_HOME
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib


[root@node2 ~]# su - oracle
Last login: Sat Feb 20 17:58:50 +03 2021 on pts/0
[oracle@node2 ~]$ vi .bash_profile

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_HOSTNAME=node2.serhatcelik.local
export ORACLE_UNQNAME=CDBTEST
export ORACLE_SID=CDBTEST2
export ORACLE_BASE=/u01/app/oracle/database/19.3.0
export DB_HOME=$ORACLE_BASE/dbhome_1
export ORACLE_HOME=$DB_HOME
export ORACLE_TERM=xterm
export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
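
After saving the profiles, a quick sanity check on each node confirms the environment resolves as expected (a verification sketch, not part of the original steps):

[oracle@node1 ~]$ source ~/.bash_profile
[oracle@node1 ~]$ echo $ORACLE_SID $ORACLE_HOME    # expect CDBTEST1 and .../19.3.0/dbhome_1 on node1
[oracle@node2 ~]$ source ~/.bash_profile
[oracle@node2 ~]$ echo $ORACLE_SID $ORACLE_HOME    # expect CDBTEST2 and the same home on node2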

CREATING A NEW DISK GROUP FOR FRA (IF YOU ALREADY HAVE AN FRA DISK GROUP, SKIP THIS STEP)

We need shared disks for the RAC installation. First, I created a 150GB volume for the fast recovery area on my Dell storage. Second, the server cluster consisting of Node1 and Node2 was already created while installing Grid. Lastly, the fast recovery volume is mapped to the server cluster over iSCSI. Get help from your Linux system admin / storage admin if needed; I did all of these operations myself because the storage and Linux servers are managed by us. A typical Linux-side sequence is sketched just below.
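
The storage-side steps depend entirely on your array, but on the Linux side a newly mapped LUN is usually picked up like this (a sketch assuming the iSCSI sessions to the storage already exist, as they do here because the DATA LUN is already in use):

[root@node1 ~]# iscsiadm -m session --rescan       # rescan existing iSCSI sessions for new LUNs
[root@node1 ~]# multipath -ll | grep -B1 150G      # the new 150GB volume should now be listed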

CHECK DISKS ON NODE1 AND NODE2 

The newly created Fast Recovery Area disk is seen below as 150GB. (The 100GB disk is the DATA disk that was created while installing Grid.)

[root@node1 /]# multipath -ll
mpathb (36000d3100cfea6000000000000000025) dm-3 COMPELNT,Compellent Vol
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 5:0:0:1 sda 8:0  active ready running
  |- 7:0:0:1 sdb 8:16 active ready running
  |- 8:0:0:1 sdd 8:48 active ready running
  `- 9:0:0:1 sdc 8:32 active ready running
mpatha (36000d3100cfea6000000000000000026) dm-2 COMPELNT,Compellent Vol
size=150G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 2:0:0:2 sde 8:64  active ready running
  |- 3:0:0:2 sdf 8:80  active ready running
  |- 4:0:0:2 sdg 8:96  active ready running
  `- 6:0:0:2 sdh 8:112 active ready running
[root@node1 /]#

[root@node2 ~]# multipath -ll
mpathb (36000d3100cfea6000000000000000025) dm-3 COMPELNT,Compellent Vol
size=100G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 5:0:0:1 sda 8:0  active ready running
  |- 7:0:0:1 sdc 8:32 active ready running
  |- 8:0:0:1 sdd 8:48 active ready running
  `- 9:0:0:1 sdb 8:16 active ready running
mpatha (36000d3100cfea6000000000000000026) dm-2 COMPELNT,Compellent Vol
size=150G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  |- 2:0:0:2 sde 8:64  active ready running
  |- 3:0:0:2 sdf 8:80  active ready running
  |- 4:0:0:2 sdg 8:96  active ready running
  `- 6:0:0:2 sdh 8:112 active ready running

[root@node2 ~]#

!!! APPLY THE STEPS BELOW ONLY ON NODE1 !!! DO NOT RUN ANY OF THEM ON NODE2 !!!

!!! THE DISKS IN THE ASM DISK GROUP WE WILL CREATE SHOULD ALL BE THE SAME SIZE !!!


!!! FURTHERMORE, THE DISKS MUST BE UNFORMATTED AND UNUSED !!!

THE DISK PARTITIONING PROCESS IS DONE ONLY ON NODE1

Repeat the following operations for every disk that will be used. I did it once because I presented only one disk.

[root@node1 /]# fdisk /dev/mapper/mpatha
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xbb6f9348.

The device presents a logical sector size that is smaller than
the physical sector size. Aligning to a physical sector (or optimal
I/O) size boundary is recommended, or performance may be impacted.

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (4096-314572799, default 4096):
Using default value 4096
Last sector, +sectors or +size{K,M,G} (4096-314572799, default 314572799):
Using default value 314572799
Partition 1 of type Linux and of size 150 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
[root@node1 /]#
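
As the fdisk warning says, the kernel keeps using the old partition table until a reboot or an explicit rescan. A minimal sketch of forcing the rescan without rebooting (run on node1; node2 picks the change up later via its own oracleasm scandisks):

[root@node1 /]# partprobe /dev/mapper/mpatha
[root@node1 /]# kpartx -a /dev/mapper/mpatha       # creates /dev/mapper/mpatha1 if it is not there yet
[root@node1 /]# ls -l /dev/mapper/ | grep mpatha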

!!! WE WILL STAMP THE DISKS. WE DO THIS OPERATION ONLY ON NODE1 !!!

Repeat the following operation for every disk you have presented. I did it once because I presented only one disk.

[root@node1 /]# oracleasm createdisk fra1 /dev/mapper/mpatha
Writing disk header: done
Instantiating disk: done
[root@node1 /]#

CHECK ASM DISCS

[root@node1 /]# ll /dev/oracleasm/disks/
total 0
brw-rw---- 1 grid asmadmin 8, 17 Feb 20 20:45 DATA1
brw-rw---- 1 grid asmadmin 252, 2 Feb 20 20:45 FRA1
[root@node1 /]#

WE RUN THE FOLLOWING COMMAND ON NODE2 SO THAT THE NEW DISK BECOMES VISIBLE THERE

[root@node2 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "FRA1"
[root@node2 ~]#
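
To double-check that both nodes resolve the new label to a backing device, oracleasm querydisk can be run on each node (optional verification):

[root@node1 ~]# oracleasm querydisk -p FRA1       # shows the matching block device path(s)
[root@node2 ~]# oracleasm querydisk -p FRA1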

CREATE ASM DISK GROUP VIA ASMCA (AUTOMATIC STORAGE MANAGEMENT CONFIGURATION ASSISTANT)
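
If you prefer the command line to the ASMCA GUI, the same disk group can be created from the ASM instance with SQL*Plus. A sketch assuming external redundancy (adjust to your redundancy standard) and the FRA1 ASMLib disk stamped above; ASMCA does the equivalent and also mounts the group cluster-wide for you, which is why the GUI route is shown in this guide:

[grid@node1 ~]$ . oraenv                 # enter +ASM1 when prompted
[grid@node1 ~]$ sqlplus / as sysasm
SQL> CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK 'ORCL:FRA1';
SQL> SELECT name, state FROM v$asm_diskgroup;

If you go this way, remember that the second ASM instance does not mount the new group automatically; mount it there with ALTER DISKGROUP FRA MOUNT, or let ASMCA/srvctl handle it.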
