
Saturday, 31 May 2025

Exadata Architecture

 

Non-Exadata Architecture:



Exadata - SmartScan




EXADATA SERVER SQL PROCESSING

With Exadata storage, SQL processing is handled much more efficiently because the Exadata storage software has database logic built into it. Exadata SQL processing comprises the following steps:


1. A client submits a query.

2. The database server constructs an Intelligent Database (iDB) message that includes the query criteria and sends it to all storage servers in the rack.

3. The CELLSRV component of the Exadata Storage Server (ESS) software scans the data blocks to identify the matching rows and columns that satisfy the request.

4. Every storage server executes the query criteria in parallel and sends only the relevant rows, the net result, to the database server over the interconnect.

5. The database server consolidates the results and returns the rows to the client.
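
Offload is visible in the execution plan: an offloadable full scan appears as TABLE ACCESS STORAGE FULL with a storage() filter. A minimal sketch (not from the original post), assuming a hypothetical emp table:

-- emp and the deptno filter are hypothetical examples
EXPLAIN PLAN FOR
  SELECT ename FROM emp WHERE deptno = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- On Exadata the plan typically shows:
--   TABLE ACCESS STORAGE FULL | EMP
--   storage("DEPTNO"=10)   -- predicate evaluated on the storage cells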

The iDB (Intelligent Database) protocol is a custom protocol used in Oracle Exadata for communication between the database servers and the storage cells.
It facilitates smart I/O operations, allowing the storage cells to perform tasks like Smart Scan (SQL offload) and fast file initialization, thereby improving performance and reducing data transfer.
Here's a more detailed breakdown:
  • iDB protocol: An InfiniBand-aware network protocol designed by Oracle, implemented on Reliable Datagram Sockets (RDS) v3. It is used for communication between the database servers (where the Oracle Database and ASM processes reside) and the storage cells (where the Exadata Storage Server software resides).

  • Functionality: iDB messages direct smart I/O operations on the storage servers. This includes Smart Scan, which offloads portions of SQL queries to the storage servers, and fast file initialization, which offloads the formatting of new data-file blocks.

  • Smart I/O: Exadata leverages iDB to offload I/O-intensive tasks to the storage servers, reducing the amount of data that must be transferred between the database servers and storage.

  • Performance benefits: By performing certain operations on the storage servers, Exadata can significantly improve performance, especially for analytical workloads and large datasets.

  • Implementation: iDB is implemented in the database kernel and maps database operations to Exadata-enhanced operations, making it transparent to the user.

  • Underlying technology: iDB uses the high-speed InfiniBand network fabric (or RDMA over Converged Ethernet, RoCE, on newer generations) to transmit data between the database servers and storage cells, ensuring efficient, low-latency communication. A quick way to observe this traffic from the database side is shown below.
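
One hedged way to measure smart I/O traffic is through the cell statistics in V$SYSSTAT (a sketch; exact statistic names can vary slightly across versions):

SELECT name, value
FROM   v$sysstat
WHERE  name LIKE 'cell physical IO%'
ORDER  BY name;

-- Key statistics include
-- 'cell physical IO bytes eligible for predicate offload' and
-- 'cell physical IO interconnect bytes returned by smart scan'.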






High Capacity (HC) and Extreme Flash (EF)

Exadata storage servers come in two configurations: High Capacity (HC) cells, which combine hard disks with a flash cache, and Extreme Flash (EF) cells, which use all-flash storage.


Smart Scan/Cell Offloading:

Smart Scan, or cell offloading, is the feature by which the database offloads resource-intensive operations, such as filtering a particular dataset or handling backup I/O, to its storage nodes instead of performing them on the DB nodes.

Consider a query against an employee table holding terabytes of data that requests only a few records.

In traditional systems, the terabytes of data are fetched from storage, and DB resources such as the buffer cache and temp segments, along with server resources such as CPU and memory, are used to filter out the required data. Because of this, other DB and OS operations face long wait times.

In Exadata, the DB node passes the SQL operation to the storage nodes through iDB.

The storage servers perform a Smart Scan on the requested data and return only the query output, or the requested blocks, to the database: just those few records.

The DB node simply receives the result and passes it on to the user.

Because all of these operations happen on the storage nodes, which have ample capacity, the performance impact is near zero even for complex query operations.
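
To check whether a particular statement actually benefited from offloading, V$SQL exposes per-cursor offload columns. A minimal sketch (the &sql_id substitution variable is just a placeholder):

SELECT sql_id,
       io_cell_offload_eligible_bytes AS offload_eligible,
       io_interconnect_bytes          AS interconnect_bytes
FROM   v$sql
WHERE  sql_id = '&sql_id';

-- A large gap between eligible bytes and interconnect bytes means
-- Smart Scan filtered most of the data on the storage cells.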









Hybrid Columnar Compression (HCC)

HCC is a unique compression method that groups column data together and compresses it.

Traditionally, rows are stored row-by-row in data blocks, so compression can only remove the redundancy found within each individual block.

In Hybrid Columnar Compression, a set of rows is grouped into a compression unit, organized by column values, and then compressed.

For example, even though an employee table may have 1M records, its Department column might contain only a few unique values.

HCC stores those distinct values once and keeps only pointers to them for the rest of the rows. This can achieve a 10x to 15x compression ratio, reducing both I/O operations and storage requirements.
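
HCC is declared per table (or partition) at DDL time and requires Exadata (or other supported Oracle storage). A minimal sketch, assuming hypothetical emp and emp_hist tables:

-- QUERY LOW/HIGH favor scan performance; ARCHIVE LOW/HIGH favor ratio
CREATE TABLE emp_hist
  COMPRESS FOR QUERY HIGH
  AS SELECT * FROM emp;

-- An existing table can be rebuilt at a different HCC level
ALTER TABLE emp_hist MOVE COMPRESS FOR ARCHIVE LOW;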







Storage Indexes

Storage indexes are created automatically on the storage servers, built from the queries and the data being requested.

A storage index holds the min and max column values for each region of data on disk, which helps the storage server skip regions that cannot contain the required data and go straight to the relevant blocks.

Storage indexes are not persistent: they are wiped out during a storage node reboot and are rebuilt automatically as queries run.

A query must undergo a full table scan (and hence a Smart Scan) for storage indexes to be used. In many cases this reduces the need for database indexes and their maintenance.
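
The I/O avoided this way is tracked as a cumulative statistic; a quick, hedged check from the database:

SELECT name, value
FROM   v$sysstat
WHERE  name = 'cell physical IO bytes saved by storage index';

-- A non-zero, growing value means regions were skipped without
-- ever being read from disk.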









Smart Flash Cache

Every storage server is provided with four flash cards, which make up the Flash Cache. The Flash Cache is similar to the DB buffer cache: it holds frequently used data that exceeds the size of the DB buffer cache in its flash memory, so the storage server does not need to fetch that data from disk.

Because reading from flash is much faster, an Exadata storage system can perform on the order of 4M I/O operations per second, which is enormous.

So frequently used data is served from the Flash Cache; data that is not cached can still be read efficiently through Smart Scan.
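
Flash cache effectiveness can be gauged from system statistics; a minimal sketch:

SELECT name, value
FROM   v$sysstat
WHERE  name IN ('cell flash cache read hits',
                'physical read total IO requests');

-- The ratio of flash cache read hits to total read requests
-- approximates how often reads were served from flash.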











Resource Manager (IORM/DBRM)

Now the storage servers scan only the needed blocks using Smart Scan, the data is compressed using HCC, and storage indexes let us go straight to the relevant data.

Frequently used data is also available in the Flash Cache.

All of these features reduce IOPS and I/O wait times to improve performance.

But how do I prioritize when my application users and batch users run queries on the database at the same time?

That is the purpose of Resource Manager in Exadata.

Resource management on the storage servers is based on I/O Resource Manager (IORM). Its functionality is similar to Database Resource Manager (DBRM), and it works with DBRM to prioritize queries on the storage nodes.
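
On the cells, IORM plans are administered with CellCLI (ALTER IORMPLAN). On the database side, a minimal DBRM sketch using DBMS_RESOURCE_MANAGER, with hypothetical plan and group names:

BEGIN
  -- Creates a resource plan plus two consumer groups in one call;
  -- DAY_PLAN, APP_USERS and BATCH_JOBS are hypothetical names.
  DBMS_RESOURCE_MANAGER.CREATE_SIMPLE_PLAN(
    simple_plan     => 'DAY_PLAN',
    consumer_group1 => 'APP_USERS',
    group1_percent  => 80,
    consumer_group2 => 'BATCH_JOBS',
    group2_percent  => 20);
END;
/

IORM then honors the database's resource plan when it orders I/O requests on the storage servers.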









Thursday, 29 May 2025

ExaCS - Screenshots - OCI

 

OCI 


Creating and mounting a new LVM volume on Linux - /arch

 

  • Create a new partition.

  • Initialize it as a physical volume.

  • Create a volume group and a logical volume.

  • Format it with EXT4.

  • Mount it to /arch.

  • Add it to /etc/fstab for persistent mounting.



[root@rac10-p ~]# df -h
[root@rac10-p ~]# df -hP | column -t
[root@rac10-p ~]# parted
[root@rac10-p ~]# lvs -a
[root@rac10-p ~]# df -a
[root@rac10-p ~]# lsblk
[root@rac10-p ~]# vgs
[root@rac10-p ~]# fdisk /dev/nvme0n1
[root@rac10-p ~]# lsblk
[root@rac10-p ~]# pvcreate /dev/nvme0n1p3
[root@rac10-p ~]# lsblk
[root@rac10-p ~]# vgcreate ol1 /dev/nvme0n1p3
[root@rac10-p ~]# vgs
[root@rac10-p ~]# lvcreate -L +1G -n arch_lv ol1
[root@rac10-p ~]# df -h
[root@rac10-p ~]# mkfs.ext4 /dev/ol1/arch_lv
[root@rac10-p ~]# mkdir /arch
[root@rac10-p ~]# mount /dev/ol1/arch_lv /arch
[root@rac10-p ~]# cat /etc/fstab
[root@rac10-p ~]# mount -a
[root@rac10-p ~]# cat /etc/fstab







[root@rac10-p ~]#
[root@rac10-p ~]#
[root@rac10-p ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               5.8G     0  5.8G   0% /dev
tmpfs                  5.8G  2.4G  3.4G  42% /dev/shm
tmpfs                  5.8G  9.4M  5.8G   1% /run
tmpfs                  5.8G     0  5.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root     98G  9.9G   84G  11% /
/dev/nvme0n1p2         9.8G  339M  9.0G   4% /boot
/dev/mapper/ol-data02   92G  109M   87G   1% /data02
/dev/mapper/ol-u01      92G  7.1G   80G   9% /u01
/dev/mapper/ol-data01   92G  961M   86G   2% /data01
/dev/mapper/ol-u02      92G  564K   87G   1% /u02
/dev/mapper/ol-backup  196G  200M  186G   1% /backup
tmpfs                  1.2G   12K  1.2G   1% /run/user/42
tmpfs                  1.2G     0  1.2G   0% /run/user/0
tmpfs                  1.2G  4.0K  1.2G   1% /run/user/54321
[root@rac10-p ~]#


df -hP | column -t
Same as above, but with POSIX output format (-P), neatly aligned into columns using column -t.


[root@rac10-p ~]# df -hP | column -t
Filesystem             Size  Used  Avail  Use%  Mounted          on
devtmpfs               5.8G  0     5.8G   0%    /dev
tmpfs                  5.8G  2.4G  3.4G   42%   /dev/shm
tmpfs                  5.8G  9.4M  5.8G   1%    /run
tmpfs                  5.8G  0     5.8G   0%    /sys/fs/cgroup
/dev/mapper/ol-root    98G   9.9G  84G    11%   /
/dev/nvme0n1p2         9.8G  339M  9.0G   4%    /boot
/dev/mapper/ol-data02  92G   109M  87G    1%    /data02
/dev/mapper/ol-u01     92G   7.1G  80G    9%    /u01
/dev/mapper/ol-data01  92G   961M  86G    2%    /data01
/dev/mapper/ol-u02     92G   564K  87G    1%    /u02
/dev/mapper/ol-backup  196G  200M  186G   1%    /backup
tmpfs                  1.2G  12K   1.2G   1%    /run/user/42
tmpfs                  1.2G  0     1.2G   0%    /run/user/0
tmpfs                  1.2G  4.0K  1.2G   1%    /run/user/54321



parted
Opens the GNU parted interactive utility used for partitioning disks (especially GPT-based). Here it was only opened to inspect the disk and then exited; the new partition /dev/nvme0n1p3 is created with fdisk further below.


[root@rac10-p ~]#
[root@rac10-p ~]# parted
GNU Parted 3.2
Using /dev/nvme0n1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)
(parted)
(parted)
(parted)
(parted) ^C

[root@rac10-p ~]#
[root@rac10-p ~]#



lvs -a
Lists all logical volumes (LVs), including hidden or inactive ones (due to -a).

 
[root@rac10-p ~]#
[root@rac10-p ~]# lvs -a
  LV     VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  backup ol -wi-ao---- 200.00g
  data01 ol -wi-ao---- <93.13g
  data02 ol -wi-ao---- <93.13g
  root   ol -wi-ao---- 100.00g
  swap   ol -wi-ao----  46.56g
  u01    ol -wi-ao---- <93.13g
  u02    ol -wi-ao---- <93.13g
  
  
 
df -a
Shows disk usage including filesystems with zero size or pseudo filesystems (like /proc, /sys).

[root@rac10-p ~]# df -a
Filesystem            1K-blocks     Used Available Use% Mounted on
sysfs                         0        0         0    - /sys
proc                          0        0         0    - /proc
devtmpfs                6004212        0   6004212   0% /dev
securityfs                    0        0         0    - /sys/kernel/security
tmpfs                   6035340  2506752   3528588  42% /dev/shm
devpts                        0        0         0    - /dev/pts
tmpfs                   6035340     9576   6025764   1% /run
tmpfs                   6035340        0   6035340   0% /sys/fs/cgroup
cgroup                        0        0         0    - /sys/fs/cgroup/systemd
pstore                        0        0         0    - /sys/fs/pstore
bpf                           0        0         0    - /sys/fs/bpf
cgroup                        0        0         0    - /sys/fs/cgroup/cpu,cpuacct
cgroup                        0        0         0    - /sys/fs/cgroup/blkio
cgroup                        0        0         0    - /sys/fs/cgroup/misc
cgroup                        0        0         0    - /sys/fs/cgroup/freezer
cgroup                        0        0         0    - /sys/fs/cgroup/net_cls,net_prio
cgroup                        0        0         0    - /sys/fs/cgroup/pids
cgroup                        0        0         0    - /sys/fs/cgroup/hugetlb
cgroup                        0        0         0    - /sys/fs/cgroup/rdma
cgroup                        0        0         0    - /sys/fs/cgroup/perf_event
cgroup                        0        0         0    - /sys/fs/cgroup/devices
cgroup                        0        0         0    - /sys/fs/cgroup/memory
cgroup                        0        0         0    - /sys/fs/cgroup/cpuset
none                          0        0         0    - /sys/kernel/tracing
configfs                      0        0         0    - /sys/kernel/config
/dev/mapper/ol-root   102627012 10345656  87038476  11% /
selinuxfs                     0        0         0    - /sys/fs/selinux
systemd-1                     -        -         -    - /proc/sys/fs/binfmt_misc
debugfs                       0        0         0    - /sys/kernel/debug
hugetlbfs                     0        0         0    - /dev/hugepages
mqueue                        0        0         0    - /dev/mqueue
fusectl                       0        0         0    - /sys/fs/fuse/connections
binfmt_misc                   0        0         0    - /proc/sys/fs/binfmt_misc
vmware-vmblock                0        0         0    - /run/vmblock-fuse
/dev/nvme0n1p2         10232668   346136   9362244   4% /boot
/dev/mapper/ol-data02  95533172   111516  90539020   1% /data02
/dev/mapper/ol-u01     95533172  7429996  83220540   9% /u01
/dev/mapper/ol-data01  95533172   983836  89666700   2% /data01
/dev/mapper/ol-u02     95533172      564  90649972   1% /u02
/dev/mapper/ol-backup 205315524   204528 194625236   1% /backup
sunrpc                        0        0         0    - /var/lib/nfs/rpc_pipefs
tmpfs                   1207068       12   1207056   1% /run/user/42
tmpfs                   1207068        0   1207068   0% /run/user/0
tmpfs                   1207068        4   1207064   1% /run/user/54321









lsblk
Lists all block devices (disks, partitions, LVMs) in a tree view. Very useful to view disks, partitions, and LVM structure.

[root@rac10-p ~]# lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sr0            11:0    1  11.6G  0 rom
nvme0n1       259:0    0   800G  0 disk
├─nvme0n1p1   259:1    0 719.1G  0 part
│ ├─ol-root   252:0    0   100G  0 lvm  /
│ ├─ol-swap   252:1    0  46.6G  0 lvm  [SWAP]
│ ├─ol-u02    252:2    0  93.1G  0 lvm  /u02
│ ├─ol-u01    252:3    0  93.1G  0 lvm  /u01
│ ├─ol-data01 252:4    0  93.1G  0 lvm  /data01
│ ├─ol-data02 252:5    0  93.1G  0 lvm  /data02
│ └─ol-backup 252:6    0   200G  0 lvm  /backup
└─nvme0n1p2   259:2    0    10G  0 part /boot



vgs
Lists information about all Volume Groups (VGs) on the system, like size, number of LVs, etc.

[root@rac10-p ~]# vgs
  VG #PV #LV #SN Attr   VSize   VFree
  ol   1   7   0 wz--n- 719.08g 4.00m
  
  


fdisk /dev/nvme0n1
Starts the fdisk utility to manage partitions on the specified disk; this disk carries an MBR (dos) disklabel.

Partition /dev/nvme0n1p3 is created here.

[root@rac10-p ~]# fdisk /dev/nvme0n1

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/nvme0n1: 800 GiB, 858993459200 bytes, 1677721600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x441c82c3

Device         Boot      Start        End    Sectors   Size Id Type
/dev/nvme0n1p1            2048 1508034559 1508032512 719.1G 8e Linux LVM
/dev/nvme0n1p2 *    1508034560 1529006079   20971520    10G 83 Linux

Command (m for help): n
Partition type
   p   primary (2 primary, 0 extended, 2 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (3,4, default 3):
First sector (1529006080-1677721599, default 1529006080):
Last sector, +sectors or +size{K,M,G,T,P} (1529006080-1677721599, default 1677721599):

Created a new partition 3 of type 'Linux' and of size 70.9 GiB.

Command (m for help): t
Partition number (1-3, default 3):
Hex code (type L to list all codes): 8e

Changed type of partition 'Linux' to 'Linux LVM'.

Command (m for help): p
Disk /dev/nvme0n1: 800 GiB, 858993459200 bytes, 1677721600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x441c82c3

Device         Boot      Start        End    Sectors   Size Id Type
/dev/nvme0n1p1            2048 1508034559 1508032512 719.1G 8e Linux LVM
/dev/nvme0n1p2 *    1508034560 1529006079   20971520    10G 83 Linux
/dev/nvme0n1p3      1529006080 1677721599  148715520  70.9G 8e Linux LVM

Command (m for help): w
The partition table has been altered.
Syncing disks.


[root@rac10-p ~]# lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sr0            11:0    1  11.6G  0 rom
nvme0n1       259:0    0   800G  0 disk
├─nvme0n1p1   259:1    0 719.1G  0 part
│ ├─ol-root   252:0    0   100G  0 lvm  /
│ ├─ol-swap   252:1    0  46.6G  0 lvm  [SWAP]
│ ├─ol-u02    252:2    0  93.1G  0 lvm  /u02
│ ├─ol-u01    252:3    0  93.1G  0 lvm  /u01
│ ├─ol-data01 252:4    0  93.1G  0 lvm  /data01
│ ├─ol-data02 252:5    0  93.1G  0 lvm  /data02
│ └─ol-backup 252:6    0   200G  0 lvm  /backup
├─nvme0n1p2   259:2    0    10G  0 part /boot
└─nvme0n1p3   259:3    0  70.9G  0 part



pvcreate /dev/nvme0n1p3
Initializes the partition as a Physical Volume (PV) for LVM.


[root@rac10-p ~]# pvcreate /dev/nvme0n1p3
  Physical volume "/dev/nvme0n1p3" successfully created.





[root@rac10-p ~]# lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sr0            11:0    1  11.6G  0 rom
nvme0n1       259:0    0   800G  0 disk
├─nvme0n1p1   259:1    0 719.1G  0 part
│ ├─ol-root   252:0    0   100G  0 lvm  /
│ ├─ol-swap   252:1    0  46.6G  0 lvm  [SWAP]
│ ├─ol-u02    252:2    0  93.1G  0 lvm  /u02
│ ├─ol-u01    252:3    0  93.1G  0 lvm  /u01
│ ├─ol-data01 252:4    0  93.1G  0 lvm  /data01
│ ├─ol-data02 252:5    0  93.1G  0 lvm  /data02
│ └─ol-backup 252:6    0   200G  0 lvm  /backup
├─nvme0n1p2   259:2    0    10G  0 part /boot
└─nvme0n1p3   259:3    0  70.9G  0 part



vgcreate ol1 /dev/nvme0n1p3
Creates a Volume Group (VG) named ol1 using the physical volume /dev/nvme0n1p3.

[root@rac10-p ~]# vgcreate ol1 /dev/nvme0n1p3
  Volume group "ol1" successfully created



vgs
Verifies the volume group was created and shows its size, free space, etc.

[root@rac10-p ~]# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  ol    1   7   0 wz--n- 719.08g  4.00m
  ol1   1   0   0 wz--n-  70.91g 70.91g
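
lvcreate -L +1G -n arch_lv ol1
Creates a new 1 GB logical volume named arch_lv inside the ol1 volume group.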

[root@rac10-p ~]# lvcreate -L +1G -n arch_lv ol1
  Logical volume "arch_lv" created.
  
  
  

[root@rac10-p ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               5.8G     0  5.8G   0% /dev
tmpfs                  5.8G  2.4G  3.4G  42% /dev/shm
tmpfs                  5.8G  9.4M  5.8G   1% /run
tmpfs                  5.8G     0  5.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root     98G  9.9G   83G  11% /
/dev/nvme0n1p2         9.8G  339M  9.0G   4% /boot
/dev/mapper/ol-data02   92G  109M   87G   1% /data02
/dev/mapper/ol-u01      92G  7.1G   80G   9% /u01
/dev/mapper/ol-data01   92G  961M   86G   2% /data01
/dev/mapper/ol-u02      92G  564K   87G   1% /u02
/dev/mapper/ol-backup  196G  200M  186G   1% /backup
tmpfs                  1.2G   12K  1.2G   1% /run/user/42
tmpfs                  1.2G     0  1.2G   0% /run/user/0
tmpfs                  1.2G  4.0K  1.2G   1% /run/user/54321
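
df -Th
Same as df -h, but also prints each filesystem's type (-T). Note that the new arch_lv volume does not appear yet: it has no filesystem and is not mounted.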



[root@rac10-p ~]# df -Th
Filesystem            Type      Size  Used Avail Use% Mounted on
devtmpfs              devtmpfs  5.8G     0  5.8G   0% /dev
tmpfs                 tmpfs     5.8G  2.4G  3.4G  42% /dev/shm
tmpfs                 tmpfs     5.8G  9.4M  5.8G   1% /run
tmpfs                 tmpfs     5.8G     0  5.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root   ext3       98G  9.9G   83G  11% /
/dev/nvme0n1p2        ext3      9.8G  339M  9.0G   4% /boot
/dev/mapper/ol-data02 ext3       92G  109M   87G   1% /data02
/dev/mapper/ol-u01    ext3       92G  7.1G   80G   9% /u01
/dev/mapper/ol-data01 ext3       92G  961M   86G   2% /data01
/dev/mapper/ol-u02    ext3       92G  564K   87G   1% /u02
/dev/mapper/ol-backup ext3      196G  200M  186G   1% /backup
tmpfs                 tmpfs     1.2G   12K  1.2G   1% /run/user/42
tmpfs                 tmpfs     1.2G     0  1.2G   0% /run/user/0
tmpfs                 tmpfs     1.2G  4.0K  1.2G   1% /run/user/54321
[root@rac10-p ~]#



mkfs.ext4 /dev/ol1/arch_lv
Formats the logical volume with the EXT4 filesystem.

[root@rac10-p ~]# mkfs.ext4 /dev/ol1/arch_lv
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: eacb7ee0-cedc-483c-a0c1-cfdce6c73b06
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done



mkdir /arch
Creates a directory to use as the mount point; the mount command that follows attaches the new filesystem to it.

[root@rac10-p ~]# mkdir /arch
[root@rac10-p ~]#
[root@rac10-p ~]# mount /dev/ol1/arch_lv /arch


[root@rac10-p ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
devtmpfs                devtmpfs  5.8G     0  5.8G   0% /dev
tmpfs                   tmpfs     5.8G  2.4G  3.4G  42% /dev/shm
tmpfs                   tmpfs     5.8G  9.4M  5.8G   1% /run
tmpfs                   tmpfs     5.8G     0  5.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root     ext3       98G  9.9G   83G  11% /
/dev/nvme0n1p2          ext3      9.8G  339M  9.0G   4% /boot
/dev/mapper/ol-data02   ext3       92G  109M   87G   1% /data02
/dev/mapper/ol-u01      ext3       92G  7.1G   80G   9% /u01
/dev/mapper/ol-data01   ext3       92G  961M   86G   2% /data01
/dev/mapper/ol-u02      ext3       92G  564K   87G   1% /u02
/dev/mapper/ol-backup   ext3      196G  200M  186G   1% /backup
tmpfs                   tmpfs     1.2G   12K  1.2G   1% /run/user/42
tmpfs                   tmpfs     1.2G     0  1.2G   0% /run/user/0
tmpfs                   tmpfs     1.2G  4.0K  1.2G   1% /run/user/54321
/dev/mapper/ol1-arch_lv ext4      974M   24K  907M   1% /arch





cat /etc/fstab
Displays the file that controls persistent mounts at boot. An entry is added here (via vi below) so the new mount survives a reboot.
[root@rac10-p ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Jul  8 05:53:07 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/ol-root     /                       ext3    defaults        1 1
/dev/mapper/ol-backup   /backup                 ext3    defaults        1 2
UUID=6f3446f6-c314-438e-ac81-6a009e92a03f /boot                   ext3    defaults        1 2
/dev/mapper/ol-data01   /data01                 ext3    defaults        1 2
/dev/mapper/ol-data02   /data02                 ext3    defaults        1 2
/dev/mapper/ol-u01      /u01                    ext3    defaults        1 2
/dev/mapper/ol-u02      /u02                    ext3    defaults        1 2
/dev/mapper/ol-swap     none                    swap    defaults        0 0



[root@rac10-p ~]# vi /etc/fstab




mount -a
Mounts every filesystem listed in /etc/fstab that is not already mounted; running it after editing the file also validates the new entry.


[root@rac10-p ~]# mount -a
[root@rac10-p ~]#
[root@rac10-p ~]#
[root@rac10-p ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Mon Jul  8 05:53:07 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/ol-root     /                       ext3    defaults        1 1
/dev/mapper/ol-backup   /backup                 ext3    defaults        1 2
UUID=6f3446f6-c314-438e-ac81-6a009e92a03f /boot                   ext3    defaults        1 2
/dev/mapper/ol-data01   /data01                 ext3    defaults        1 2
/dev/mapper/ol-data02   /data02                 ext3    defaults        1 2
/dev/mapper/ol-u01      /u01                    ext3    defaults        1 2
/dev/mapper/ol-u02      /u02                    ext3    defaults        1 2
/dev/mapper/ol-swap     none                    swap    defaults        0 0
/dev/ol1/arch_lv        /arch                   ext4    defaults        0 0



[root@rac10-p ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
devtmpfs                devtmpfs  5.8G     0  5.8G   0% /dev
tmpfs                   tmpfs     5.8G  2.4G  3.4G  42% /dev/shm
tmpfs                   tmpfs     5.8G  9.4M  5.8G   1% /run
tmpfs                   tmpfs     5.8G     0  5.8G   0% /sys/fs/cgroup
/dev/mapper/ol-root     ext3       98G  9.9G   83G  11% /
/dev/nvme0n1p2          ext3      9.8G  339M  9.0G   4% /boot
/dev/mapper/ol-data02   ext3       92G  109M   87G   1% /data02
/dev/mapper/ol-u01      ext3       92G  7.1G   80G   9% /u01
/dev/mapper/ol-data01   ext3       92G  961M   86G   2% /data01
/dev/mapper/ol-u02      ext3       92G  564K   87G   1% /u02
/dev/mapper/ol-backup   ext3      196G  200M  186G   1% /backup
tmpfs                   tmpfs     1.2G   12K  1.2G   1% /run/user/42
tmpfs                   tmpfs     1.2G     0  1.2G   0% /run/user/0
tmpfs                   tmpfs     1.2G  4.0K  1.2G   1% /run/user/54321
/dev/mapper/ol1-arch_lv ext4      974M   24K  907M   1% /arch


