
Monday, 31 March 2025

Understanding CSSD Heartbeat Mechanisms in Oracle RAC

 


Understanding CSSD Heartbeat Mechanisms in Oracle RAC

The Cluster Synchronization Services Daemon (CSSD) is a critical process in Oracle RAC that continuously monitors the health of cluster nodes using two independent heartbeat mechanisms:

  1. Network Heartbeat

  2. Disk Heartbeat




🔹 Network Heartbeat

  • Sent every 1 second over the interconnect (private network) using TCP.

  • A sending thread of CSSD sends the heartbeat to all other nodes and itself.

  • A receiving thread on each node listens for heartbeats from others.

✅ TCP handles error correction, but Oracle does not rely on TCP retransmissions for heartbeat monitoring. Heartbeat loss is interpreted at the Oracle level.

Heartbeat Loss Monitoring (Misscount Logic):

  • If a node does not receive a heartbeat from another node:

    • At 15 seconds (50% of misscount) → WARNING logged.

    • At 22 seconds (75%) → Another WARNING logged.

    • At 27 seconds (90%) → Additional warning.

    • At 30 seconds (100%, the default misscount) → Node is evicted from the cluster.




🔹 Disk Heartbeat

  • Occurs between each node and the voting disk.

  • CSSD maintains a heartbeat block (one OS block in size) at a specific offset on the voting disk, accessed with pread()/pwrite() system calls.

  • CSSD:

    • Writes its own heartbeat (with a counter and node name in the block header).

    • Reads/Monitors the heartbeat blocks of all other nodes.

⚠️ If a node fails to write its heartbeat within the disk I/O timeout period, it is considered dead.
If its status is unknown and it's not part of the "survivor" node group, the node is evicted (via a "kill block" update in the voting disk).

 


🔸 Summary of Heartbeat Requirements

Heartbeat Type | Frequency | Timeout Condition            | Consequence
Network        | 1 second  | css_misscount (default: 30s) | Node eviction
Disk           | 1 second  | disktimeout (default: 200s)  | Node eviction




🔸 Failure Matrix for Heartbeat Scenarios

Network Ping                       | Disk Ping                                    | Reboot?
Completes within misscount seconds | Completes within disktimeout                 | No
Completes within misscount seconds | Takes more than misscount but < disktimeout  | No
Completes within misscount seconds | Takes more than disktimeout                  | Yes
Takes more than misscount seconds  | Completes within disktimeout                 | Yes




🔧 Understanding Voting Disk and Its Role in Oracle RAC Clusterware

The Voting Disk is a vital component in Oracle RAC that helps determine node membership, resolve split-brain conditions, and enforce I/O fencing. It plays a key role alongside the CSSD process, which uses both network and disk heartbeats for node health monitoring.




🧠 What is Stored in the Voting Disk?

  • Information about cluster node membership.

  • Disk-based heartbeat blocks for each node.

  • Kill blocks to mark evicted nodes.

Voting Disks are written using pwrite() and read using pread() system calls by the CSSD process.

Each node writes to a specific offset (its own heartbeat block) and reads others’ blocks to check their liveness.

Although the OCR and OLR also store node information, Voting Disk heartbeat plays a runtime role in eviction decisions. There’s no persistent user or application data, so if a Voting Disk is lost, it can be re-added without data loss—but only after stopping CRS.




🔁 Why Voting Disks Are Crucial

While it’s technically true that data in voting disks can be recreated, they’re instrumental in avoiding split-brain and enforcing evictions when:

  • Heartbeat failures occur.

  • Nodes lose contact with others.

  • Shared storage needs to be protected (I/O fencing).




💥 Split Brain Syndrome

A split-brain situation arises when cluster nodes lose communication via the private interconnect but continue running independently. Each node assumes others are down and may attempt to access and modify shared data blocks.

❌ Risk:

This leads to data integrity violations, such as concurrent conflicting updates to the same data block.




🧱 I/O Fencing

After a node failure or eviction, it’s possible that leftover I/O from that node reaches storage out of order, corrupting data. To prevent this:

  • Oracle performs I/O fencing by removing failed nodes' access to shared storage.

  • This ensures only surviving nodes can read/write to the disk.




⚖️ Simple Majority Rule

Oracle Clusterware requires a simple majority of voting disks to be accessible at all times:

"More than half" of the voting disks must be online for the cluster to operate.

📌 Formula:

To tolerate loss of N disks → Configure at least 2N+1 voting disks.
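
For example, with 3 voting disks the cluster tolerates the loss of 1 (2 of 3 is still a majority), and with 5 it tolerates the loss of 2; with only 2 disks, losing either one leaves exactly half, which is not a majority.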




🔍 Examples – Voting Disk in Action

✅ Example 1: Network Heartbeat Failure in 3-node Cluster

  • Setup: 3 nodes (Node 1, Node 2, Node 3) and 3 Voting Disks.

  • Issue: Node 3 loses network heartbeat with Node 1 and Node 2, but disk heartbeat still working.

  • Action: Node 1 and 2 can still see each other and determine via the Voting Disk that Node 3 is isolated.

  • They mark Node 3’s kill block in Voting Disk.

  • During next pread(), Node 3 sees the self-kill flag and evicts itself.

  • I/O fencing ensures safe disk access. OHASD then gracefully shuts down and restarts the stack on Node 3.



✅ Example 2: Disk Heartbeat Split in 2-node Cluster

  • Setup: 2 nodes and 3 Voting Disks.

  • Issue: Node 1 sees 2 voting disks; Node 2 sees only 1.

  • Based on Simple Majority Rule:

    • Node 1 (majority access) is the survivor.

    • CSSD of Node 1 marks Node 2’s kill block.

  • Node 2 reads the kill flag and evicts itself.

  • I/O fencing is applied, and OHASD restarts the stack on Node 2.

🧠 Without an odd number of disks, both nodes could think they're healthy, leading to potential split-brain.


 



📌 Summary

Component         | Purpose
Voting Disk       | Maintains disk heartbeats, kill blocks, and node membership info.
Network Heartbeat | Checks interconnect communication every second via TCP.
Disk Heartbeat    | Checks I/O access health via shared storage every second.
Split-Brain       | Scenario where isolated nodes continue operating independently.
I/O Fencing       | Prevents failed nodes from sending stale writes to shared storage.
Simple Majority   | Ensures more than half of voting disks are accessible to avoid eviction.


AWS S3 import into Oracle RDS

 

1) 
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
           p_bucket_name    =>  'orcl-oracle-backup-bucket', 
           p_s3_prefix      =>  '20250305_ORCLV52/', 
           p_directory_name =>  'DATA_PUMP_DIR') 
           AS TASK_ID FROM DUAL;


Note:
This command is used to download files from an Amazon S3 bucket into an Oracle RDS instance using the rdsadmin.rdsadmin_s3_tasks.download_from_s3 procedure.


Explanation of Parameters:
p_bucket_name → 'orcl-oracle-backup-bucket'

The name of the S3 bucket where the files are stored.

p_s3_prefix → '20250305_ORCLV52/'

The prefix (folder path) inside the S3 bucket containing the files to download.

p_directory_name → 'DATA_PUMP_DIR'

The Oracle directory where the files will be placed after downloading.

AS TASK_ID FROM DUAL;

Returns a TASK_ID, which can be used to monitor the progress of the task.

Checking Task Status

After executing this command, you can check the status of the task by reading its log file from the BDUMP directory:


SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-<TASK_ID>.log'));
Common Use Case
This is typically used to restore Oracle database backups (such as Data Pump export .dmp files) that were previously uploaded to S3.
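
For the reverse direction (copying dump or log files from the RDS instance back to S3), the companion rdsadmin.rdsadmin_s3_tasks.upload_to_s3 procedure works the same way; the bucket name and S3 prefix below are illustrative:

SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
           p_bucket_name    => 'orcl-oracle-backup-bucket',
           p_prefix         => '',                    -- upload only files whose names start with this prefix ('' = all files)
           p_s3_prefix      => 'exports/20250331/',   -- destination "folder" inside the bucket (illustrative)
           p_directory_name => 'DATA_PUMP_DIR')
           AS TASK_ID FROM DUAL;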


----------------------------------

SELECT text from table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-<TASK_ID>.log'));


Note:
This SQL command reads the log file for the S3 download task from the BDUMP directory in an Oracle RDS instance.

Breakdown of the Query:
rdsadmin.rds_file_util.read_text_file

This function is used to read the contents of a file stored in an Oracle directory.

'BDUMP'

The Oracle directory where the log file is stored.

BDUMP (Background Dump Destination) is used for logging diagnostic messages.

'dbtask-<TASK_ID>.log'

The specific log file associated with the S3 download task.

<TASK_ID> should be replaced with the actual TASK_ID returned from the download_from_s3 command.

How to Use This?
First, execute:


SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
    p_bucket_name    => 'orcl-oracle-backup-bucket', 
    p_s3_prefix      => '20250305_ORCLV52/', 
    p_directory_name => 'DATA_PUMP_DIR') AS TASK_ID 
FROM DUAL;

This returns a TASK_ID (for example, 1742397812960-2284).

Then, check the status of the task by reading its log file:


SELECT text FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1742397812960-2284.log'));
This shows the detailed logs of the task, including errors, progress, and completion status.

Why Use This?
Helps in troubleshooting if the S3 download fails.
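
If you don't have the TASK_ID handy, you can also list the BDUMP directory first to find the dbtask log files (a quick helper query, assuming LISTDIR works on the BDUMP directory object the same way it does on DATA_PUMP_DIR):

SELECT filename, filesize, mtime
  FROM TABLE(rdsadmin.rds_file_util.listdir('BDUMP'))
 WHERE filename LIKE 'dbtask%'
 ORDER BY mtime DESC;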



------------------------------------------

DECLARE
  v_hdnl NUMBER;
BEGIN
  v_hdnl := DBMS_DATAPUMP.OPEN( 
    operation => 'IMPORT', 
    job_mode  => 'SCHEMA', 
    job_name  => null);
  DBMS_DATAPUMP.ADD_FILE( 
    handle    => v_hdnl, 
    filename  => 'EMP1.DMP', 
    directory => 'DATA_PUMP_DIR', 
    filetype  => dbms_datapump.ku$_file_type_dump_file);
  DBMS_DATAPUMP.ADD_FILE( 
    handle    => v_hdnl, 
    filename  => 'EMP1_to_SAMIK1_Import_24032025.log', 
    directory => 'DATA_PUMP_DIR', 
    filetype  => dbms_datapump.ku$_file_type_log_file);
  DBMS_DATAPUMP.METADATA_REMAP(v_hdnl, 'REMAP_SCHEMA', 'EMP1', 'SAMIK1');
  DBMS_DATAPUMP.METADATA_REMAP(v_hdnl, 'REMAP_TABLESPACE', 'MERMS', 'SAMIK1_DATA');
  DBMS_DATAPUMP.SET_PARALLEL(v_hdnl, degree => 2);
  DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/


Note:
This PL/SQL block performs an Oracle Data Pump Import (IMPDP) operation, importing schema-level objects from the EMP1.DMP dump file while remapping the schema and tablespace.

Step-by-Step Breakdown:
DBMS_DATAPUMP.OPEN

Opens a Data Pump Import job (IMPORT mode) at the SCHEMA level.

job_name => null → Oracle assigns a system-generated job name.

DBMS_DATAPUMP.ADD_FILE

Adds the dump file EMP1.DMP stored in DATA_PUMP_DIR.

Adds the log file EMP1_to_SAMIK1_Import_24032025.log for tracking progress.

DBMS_DATAPUMP.METADATA_REMAP

Schema Remap: Changes all objects from schema EMP1 to SAMIK1.

Tablespace Remap: Moves objects from MERMS tablespace to SAMIK1_DATA.

DBMS_DATAPUMP.SET_PARALLEL

Sets parallel execution to 2 for improved performance.

DBMS_DATAPUMP.START_JOB

Starts the import job.

Use Case
This script is useful when you need to:
✅ Restore a schema backup from an exported dump file.
✅ Move schema objects from one user (EMP1) to another (SAMIK1).
✅ Change the storage location (tablespace) of imported objects.


Monitoring the Import Job
After execution, you can monitor the job using:

SELECT * FROM DBA_DATAPUMP_JOBS WHERE job_name LIKE 'SYS_IMPORT%';
To check real-time logs:


SELECT text FROM TABLE(RDSADMIN.RDS_FILE_UTIL.READ_TEXT_FILE('DATA_PUMP_DIR', 'EMP1_to_SAMIK1_Import_24032025.log'));


Possible Enhancements
🔹 Specify job_name to manage the job explicitly:


v_hdnl := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA', job_name => 'EMP1_TO_SAMIK1_JOB');

🔹 Add filters to import only specific objects (for example, a single table):

DBMS_DATAPUMP.METADATA_FILTER(v_hdnl, 'NAME_EXPR', 'IN (''EMPLOYEES'')', 'TABLE');
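
🔹 Make the session wait for the job to finish instead of returning immediately (add v_state VARCHAR2(30); to the DECLARE section of the block above and place these calls after START_JOB):

DBMS_DATAPUMP.WAIT_FOR_JOB(v_hdnl, v_state);          -- blocks until the job completes
DBMS_OUTPUT.PUT_LINE('Import finished with state: ' || v_state);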



-------------------------------------------------------------------------------------------------------------

SELECT * FROM TABLE 
    (rdsadmin.rds_file_util.read_text_file(
        p_directory => 'DATA_PUMP_DIR',
        p_filename  => 'EMP1_to_SAMIK1_Import_24032025.log'));


Note:
This SQL command reads the log file generated by the Data Pump Import job to monitor its progress and identify any errors or warnings.

🔍 Breakdown of the Query:
rdsadmin.rds_file_util.read_text_file

This function reads the contents of a text file stored in an Oracle directory.

Parameters:

p_directory => 'DATA_PUMP_DIR'

Specifies the Oracle directory object where the log file is located.

p_filename => 'EMP1_to_SAMIK1_Import_24032025.log'

The name of the import log file generated by DBMS_DATAPUMP.

TABLE() Function

Converts the function output into a readable table format.

📌 When to Use This?
✅ To monitor an ongoing or completed Data Pump Import job.
✅ To debug errors such as missing objects, permissions, or remapping issues.
✅ To verify successful completion of the schema import (EMP1 → SAMIK1).


🚀 Next Steps:
If the log file shows errors like ORA-xxxx, you can:


🔹 Check Data Pump job status:

SELECT * FROM DBA_DATAPUMP_JOBS WHERE job_name LIKE 'SYS_IMPORT%';

🔹 Restart a stopped or failed job (after re-attaching to it with DBMS_DATAPUMP.ATTACH):

DBMS_DATAPUMP.START_JOB(v_hdnl, skip_current => 1);

🔹 Investigate log errors and re-run the import with necessary fixes.



-----------------------------------------------------------------------------------------------------------

select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by filename, mtime;


Note:
This SQL command lists all files present in the DATA_PUMP_DIR directory within an Oracle RDS instance.

📌 Breakdown of the Query:
RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')

Retrieves file details (filename, size, and modification time) from the DATA_PUMP_DIR directory.

TABLE() Function

Converts the function's output into a readable table format.

ORDER BY filename, mtime

Sorts files alphabetically by name and chronologically by modification time (mtime).

📄 Example Output:
FILENAME                            FILESIZE  MTIME
EMP1.DMP                            2GB       2025-03-24 10:15:30
EMP1_to_SAMIK1_Import_24032025.log  15KB      2025-03-24 10:16:10
EXPDP_METADATA.LOG                  8KB       2025-03-24 10:18:45


📌 When to Use This?
✅ To verify if a dump file (.DMP) or log file (.LOG) exists before performing import/export.
✅ To monitor file sizes and timestamps in Oracle RDS.
✅ To troubleshoot missing or outdated files.

🚀 Next Steps:
🔹 Read a specific log file:

SELECT * FROM TABLE(RDSADMIN.RDS_FILE_UTIL.READ_TEXT_FILE('DATA_PUMP_DIR', 'EMP1_to_SAMIK1_Import_24032025.log'));


🔹 Check active Data Pump jobs:

SELECT * FROM DBA_DATAPUMP_JOBS WHERE job_name LIKE 'SYS_IMPORT%';

🔹 Delete an unwanted file (if required):

EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR', 'OLD_BACKUP.DMP');

--------------


EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','filename');
EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','EMP3_to_SAMIK3_Import_19032025.log'); 


Note:
🗑️ Deleting a File in Oracle RDS Using UTL_FILE.FREMOVE
These commands use UTL_FILE.FREMOVE to delete files from the DATA_PUMP_DIR directory in Oracle RDS.

📌 Explanation
UTL_FILE.FREMOVE('DATA_PUMP_DIR', 'filename')

Deletes the specified file from the DATA_PUMP_DIR directory.

You need to replace 'filename' with the actual file you want to remove.

UTL_FILE.FREMOVE('DATA_PUMP_DIR', 'EMP3_to_SAMIK3_Import_19032025.log')

Deletes the import log file 'EMP3_to_SAMIK3_Import_19032025.log'.

⚠️ Important Considerations
✅ Ensure the file exists before attempting to delete it. Use:


SELECT * FROM TABLE(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) 
WHERE filename = 'EMP3_to_SAMIK3_Import_19032025.log';
✅ You must have proper privileges to delete files.
✅ This operation is irreversible, so be careful before running it.

🔍 Note for AWS RDS
In Amazon RDS for Oracle, UTL_FILE.FREMOVE is the documented way to remove files from DATA_PUMP_DIR. The rdsadmin.rds_file_util package provides LISTDIR and READ_TEXT_FILE for listing and reading files, but not a delete procedure.

🚀 Next Steps
🔹 Verify file deletion:

SELECT * FROM TABLE(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) 
ORDER BY filename;

🔹 Check Data Pump log files before deletion:

SELECT * FROM TABLE(RDSADMIN.RDS_FILE_UTIL.READ_TEXT_FILE('DATA_PUMP_DIR', 'EMP3_to_SAMIK3_Import_19032025.log'));


















How to import dump from RDS S3 bucket to AWS Oracle RDS instance


Below are the step-by-step details on how to import a dump from an RDS S3 bucket to an AWS Oracle RDS instance.


Step 1: Gather the DB Connection details from AWS Secrets Manager
Step 2: How to establish the AWS Session Manager for Port forwarding (Oracle RDS machine connectivity purpose)
Step 3: Connect to Oracle RDS database using SQL Developer tool and invoke worksheet (SQL Query execution)
Step 4: Creation of empty schema structure along with necessary permissions, tablespaces, etc
Step 5: Import your dump file into your target DB instance using DBMS_DATAPUMP package
Step 6: Once the import job has completed, read the log file to verify for any errors
Step 7: Verification of imported DB Objects & its Storage size information
Step 8: Clean-up of files
Step 9: Import "SAMIK2" in orcl database
Step 10: Import "SAMIK3" in orcl database
Step 11: Remove the source database dump files from DATA_PUMP_DIR which are copied from S3 Bucket



Step 1: Gather the DB Connection details from AWS Secrets Manager
===================================================
E.x.

DB_User_Account: admin
DB_User_Account_Password: Welcome123 (This password might change due to rotation. 
So, please check secrets manager for updated password.)
DB_Name: ORCL
Engine: Oracle
Port: 1521
Host: orcl.eu-central-1.rds.amazonaws.com
End_Point: orcl.eu-central-1.rds.amazonaws.aws.glpoly.net




Step 2: How to establish the AWS Session Manager for Port forwarding (Oracle RDS machine connectivity purpose)
============================================================================================


Open a PowerShell prompt on your local machine and execute the below commands.
Basically, you need to copy the temporary credential environment variables from the AWS access keys webpage, under the PowerShell tab.

DevOps-abcdefgh
982534357203 | aws-root-17c0820db8@oracle.com

AWSAdministratorAccess|Access keys 

E.x.

Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

Install the latest PowerShell for new features and improvements! https://aka.ms/PSWindows

PS C:\Users\sam>
PS C:\Users\sam> $Env:AWS_ACCESS_KEY_ID="ASIA6JQ44QDJ2RE65ITL"
PS C:\Users\sam> $Env:AWS_SECRET_ACCESS_KEY="3jk8TKMPz7f3"
PS C:\Users\sam> $Env:AWS_SESSION_TOKEN="sHpSvWCCw1++evj4qK1dzwbzzziFAvjsCs5ncWx+VY/xqA0IHrYR+DuT2fNISVm6yiuNBzgG6P"


After the environment variables are set, use the below AWS SSM command to start the session.

PS C:\Users\sam> aws ssm start-session  --region eu-central-1 --target i-8e0ae0085 --document-name AWS-StartPortForwardingSessionToRemoteHost --parameters host="orcl-oraclerds.982534357203.aws.glpoly.net",portNumber="1521",localPortNumber="1521"

Starting session with SessionId: scott@oracle.com-frs7fgk7yfe
Port 1521 opened for sessionId scott@oracle.com-frs7fgk7yfe
Waiting for connections...

Connection accepted for session [scott@oracle.com-frs7fgk7yfe]




Step 3: Connect to Oracle RDS database using SQL Developer tool and invoke worksheet (SQL Query execution)
==============================================================================================
Now you can run SQL commands here and perform the required DBA actions.
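
Because the Step 2 session forwards the database port to your local machine, point SQL Developer at localhost with the forwarded port and the credentials gathered in Step 1 (for this example: host localhost, port 1521, service name ORCL, user admin); adjust these values if you chose a different local port.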


Q) How to download the export dump from the S3 bucket to the RDS DATA_PUMP_DIR location?

First, we need to download the Oracle DB dump files onto the RDS instance from the S3 bucket. To do that, execute the below commands.
-----------------------------------------------------------------------------------------------------------------------------
Note: The "rdsadmin.rdsadmin_s3_tasks.download_from_s3" procedure below will download the dump files and place them in the DATA_PUMP_DIR location on the RDS instance.


SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(
           p_bucket_name    =>  'orcl-oracle-backup-bucket', 
           p_s3_prefix      =>  '20250305_ORCLV52/', 
           p_directory_name =>  'DATA_PUMP_DIR') 
           AS TASK_ID FROM DUAL;

O/p will be like -> 1742397812960-2284 -> Task_ID

Here, replace the bucket name with your own and, in the prefix option, specify any sub-directories.

Our S3 bucket structure is like below,

Amazon S3 -> Buckets -> "orcl-oracle-backup-bucket/20250305_ORCLV52/"

Once the above SELECT statement has completed, verify the log file to confirm the dump was downloaded successfully from the S3 bucket.

Syntax:-
SELECT text from table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-<TASK_ID>.log'));

Actual Query:-
SELECT text from table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1742397812960-2284.log'));

O/p will be like below,

2025-03-19 15:25:43.937 UTC [INFO ] This task is about to list the Amazon S3 objects for AWS Region eu-central-1, bucket name orcl-oracle-backup-bucket, and prefix 20250305_ORCLV52/.
2025-03-19 15:25:43.976 UTC [INFO ] The task successfully listed the Amazon S3 objects for AWS Region eu-central-1, bucket name orcl-oracle-backup-bucket, and prefix 20250305_ORCLV52/.
2025-03-19 15:25:43.992 UTC [INFO ] This task is about to download the Amazon S3 object or objects in /rdsdbdata/datapump from bucket name orcl-oracle-backup-bucket and key 20250305_ORCLV52/EMP2.DMP.
2025-03-19 15:25:44.582 UTC [INFO ] The task successfully downloaded the Amazon S3 object or objects from bucket name orcl-oracle-backup-bucket with key 20250305_ORCLV52/EMP2.DMP to the location /rdsdbdata/datapump.
2025-03-19 15:25:44.582 UTC [INFO ] This task is about to download the Amazon S3 object or objects in /rdsdbdata/datapump from bucket name orcl-oracle-backup-bucket and key 20250305_ORCLV52/EMP2.log.
2025-03-19 15:25:44.706 UTC [INFO ] The task successfully downloaded the Amazon S3 object or objects from bucket name orcl-oracle-backup-bucket with key 20250305_ORCLV52/EMP2.log to the location /rdsdbdata/datapump.
2025-03-19 15:25:44.706 UTC [INFO ] This task is about to download the Amazon S3 object or objects in /rdsdbdata/datapump from bucket name orcl-oracle-backup-bucket and key 20250305_ORCLV52/EMP3.DMP.
2025-03-19 15:25:44.880 UTC [INFO ] The task successfully downloaded the Amazon S3 object or objects from bucket name orcl-oracle-backup-bucket with key 20250305_ORCLV52/EMP3.DMP to the location /rdsdbdata/datapump.
2025-03-19 15:25:44.880 UTC [INFO ] The task skipped downloading already existing key orcl-oracle-backup-bucket to the location 20250305_ORCLV52/EMP3.log.
2025-03-19 15:25:44.880 UTC [INFO ] This task is about to download the Amazon S3 object or objects in /rdsdbdata/datapump from bucket name orcl-oracle-backup-bucket and key 20250305_ORCLV52/EMP1.DMP.
2025-03-19 15:28:13.968 UTC [INFO ] The task successfully downloaded the Amazon S3 object or objects from bucket name orcl-oracle-backup-bucket with key 20250305_ORCLV52/EMP1.DMP to the location /rdsdbdata/datapump.
2025-03-19 15:28:13.999 UTC [INFO ] This task is about to download the Amazon S3 object or objects in /rdsdbdata/datapump from bucket name orcl-oracle-backup-bucket and key 20250305_ORCLV52/EMP1.log.
2025-03-19 15:28:14.216 UTC [INFO ] The task successfully downloaded the Amazon S3 object or objects from bucket name orcl-oracle-backup-bucket with key 20250305_ORCLV52/EMP1.log to the location /rdsdbdata/datapump.
2025-03-19 15:28:14.217 UTC [INFO ] The task finished successfully.


Now, please verify the downloaded files available in DATA_PUMP_DIR directory on RDS server,

SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by filename, mtime;

O/p will be like below,

File_Name Type FileSize MTIME

EMP2.DMP file 28131328 19-03-25
EMP2.log file 1982 19-03-25
EMP3.DMP file 479232 19-03-25
EMP3.log file 3036 19-03-25
EMP1.DMP file 15426408448 19-03-25
EMP1.log file 195909 19-03-25
datapump/ directory 4096 19-03-25



Step 4: Creation of empty schema structure along with necessary permissions, tablespaces, etc.,
===============================================================================================
SQL> select * from dba_users where oracle_maintained='N' and username in ('SAMIK1','SAMIK2','SAMIK3') order by created;

no rows selected.

Now, start creating all three empty schemas. To do that, first execute the below SQL commands in the SQL Developer worksheet to create the respective tablespaces.

E.x.

SQL> select tablespace_name from dba_tablespaces;

SYSTEM
SYSAUX
UNDO_T1
TEMP
USERS
RDSADMIN

SQL> select tablespace_name, file_name from dba_data_files order by tablespace_name;

RDSADMIN /rdsdbdata/db/ORCL_A/datafile/o1_mf_rdsadmin_mw07j5xz_.dbf
SYSAUX /rdsdbdata/db/ORCL_A/datafile/o1_mf_sysaux_mw0648yh_.dbf
SYSTEM /rdsdbdata/db/ORCL_A/datafile/o1_mf_system_mw06475h_.dbf
UNDO_T1 /rdsdbdata/db/ORCL_A/datafile/o1_mf_undo_t1_mw0649fs_.dbf
USERS /rdsdbdata/db/ORCL_A/datafile/o1_mf_users_mw0649pl_.dbf


Now, review the above output to identify the naming convention and existing tablespace details.

In AWS RDS, you don't have to specify the datafile location explicitly.
Hence, use the below SQL commands to create the required three tablespaces and verify them once created.

SQL> CREATE BIGFILE TABLESPACE SAMIK1_DATA DATAFILE SIZE 1G AUTOEXTEND ON MAXSIZE 33554431M;

TABLESPACE SAMIK1_DATA created.

SQL> CREATE BIGFILE TABLESPACE SAMIK2_DATA DATAFILE SIZE 1G AUTOEXTEND ON MAXSIZE 33554431M;

TABLESPACE SAMIK2_DATA created.

SQL> CREATE BIGFILE TABLESPACE SAMIK3_DATA DATAFILE SIZE 1G AUTOEXTEND ON MAXSIZE 33554431M;

TABLESPACE SAMIK3_DATA created.


Verification:
------------

SQL> select tablespace_name from dba_tablespaces;

SYSTEM
SYSAUX
UNDO_T1
TEMP
USERS
RDSADMIN
SAMIK3_DATA
SAMIK1_DATA
SAMIK2_DATA

SQL> select tablespace_name, file_name from dba_data_files order by tablespace_name;

RDSADMIN /rdsdbdata/db/ORCL_A/datafile/o1_mf_rdsadmin_mw07j5xz_.dbf
SYSAUX /rdsdbdata/db/ORCL_A/datafile/o1_mf_sysaux_mw0648yh_.dbf
SYSTEM /rdsdbdata/db/ORCL_A/datafile/o1_mf_system_mw06475h_.dbf
UNDO_T1 /rdsdbdata/db/ORCL_A/datafile/o1_mf_undo_t1_mw0649fs_.dbf
USERS /rdsdbdata/db/ORCL_A/datafile/o1_mf_users_mw0649pl_.dbf
SAMIK1_DATA /rdsdbdata/db/ORCL_A/datafile/o1_mf_apm_dev__mxp3q4x5_.dbf
SAMIK2_DATA /rdsdbdata/db/ORCL_A/datafile/o1_mf_apm_dev__mxp3r06r_.dbf
SAMIK3_DATA /rdsdbdata/db/ORCL_A/datafile/o1_mf_apm_dev__mxp0s461_.dbf


After all three tablespaces are created, start creating the empty user accounts and grant the necessary privileges, mapping source schemas to target schemas like below,

1.) EMP1 -> SAMIK1
2.) EMP2 -> SAMIK2
3.) EMP3 -> SAMIK3


First, create the custom role "MI_CONNECT_ROLE" (as exported from the source database server) and assign the necessary privileges.
------------------------------------------------------------------------------------------------------------------

SQL> CREATE ROLE MI_CONNECT_ROLE;

Role MI_CONNECT_ROLE created.

SQL> grant CREATE PROCEDURE TO MI_CONNECT_ROLE;

Grant succeeded.

SQL> grant CREATE SEQUENCE TO MI_CONNECT_ROLE;

Grant succeeded.

SQL> grant CREATE SESSION TO MI_CONNECT_ROLE;

Grant succeeded.

SQL> grant CREATE TABLE TO MI_CONNECT_ROLE;

Grant succeeded.

SQL> grant CREATE TRIGGER TO MI_CONNECT_ROLE;

Grant succeeded.

SQL> grant CREATE VIEW TO MI_CONNECT_ROLE;

Grant succeeded.

SQL> grant QUERY REWRITE TO MI_CONNECT_ROLE;

Grant succeeded.

SQL> grant CREATE TYPE TO MI_CONNECT_ROLE;

Grant succeeded.

SQL> grant EXECUTE ON SYS.DBMS_CRYPTO TO MI_CONNECT_ROLE;

Grant succeeded.



Now, start creating the empty user accounts.
--------------------------------------------

User Account: SAMIK1
------------------------

SQL> CREATE USER SAMIK1 IDENTIFIED BY SAMIK1#032025 default tablespace SAMIK1_DATA;

User SAMIK1 created.

SQL> alter user SAMIK1 quota UNLIMITED on SAMIK1_DATA;

User SAMIK1 altered.

SQL> grant DBA to SAMIK1 WITH ADMIN OPTION;

Grant succeeded.

SQL> grant MI_CONNECT_ROLE to SAMIK1;

Grant succeeded.

SQL> grant CREATE VIEW to SAMIK1;

Grant succeeded.

SQL> grant CREATE TABLE to SAMIK1;

Grant succeeded.

SQL> grant CREATE DATABASE LINK to SAMIK1;

Grant succeeded.

SQL> grant EXECUTE on CTXSYS.CTX_DDL to SAMIK1;

Grant succeeded.

SQL> grant CREATE TYPE to SAMIK1;

Grant succeeded.

SQL> grant CREATE MATERIALIZED VIEW to SAMIK1;

Grant succeeded.

SQL> grant CREATE SEQUENCE to SAMIK1;

Grant succeeded.

SQL> grant CREATE TRIGGER to SAMIK1;

Grant succeeded.

SQL> grant EXECUTE on SYS.DBMS_CRYPTO to SAMIK1;

Grant succeeded.



User Account: SAMIK2
-------------------------

SQL> CREATE USER SAMIK2 IDENTIFIED BY SAMIK2#032025 default tablespace SAMIK2_DATA;

User SAMIK2 created.

SQL> alter user SAMIK2 quota UNLIMITED on SAMIK2_DATA;

User SAMIK2 altered.

SQL> grant DBA to SAMIK2 WITH ADMIN OPTION;

Grant succeeded.

SQL> grant MI_CONNECT_ROLE to SAMIK2;

Grant succeeded.

SQL> grant CREATE VIEW to SAMIK2;

Grant succeeded.

SQL> grant CREATE TABLE to SAMIK2;

Grant succeeded.

SQL> grant CREATE DATABASE LINK to SAMIK2;

Grant succeeded.

SQL> grant EXECUTE on CTXSYS.CTX_DDL to SAMIK2;

Grant succeeded.

SQL> grant CREATE TYPE to SAMIK2;

Grant succeeded.

SQL> grant CREATE MATERIALIZED VIEW to SAMIK2;

Grant succeeded.

SQL> grant CREATE SEQUENCE to SAMIK2;

Grant succeeded.

SQL> grant CREATE TRIGGER to SAMIK2;

Grant succeeded.

SQL> grant EXECUTE on SYS.DBMS_CRYPTO to SAMIK2;

Grant succeeded.


User Account: SAMIK3
-------------------------

SQL> CREATE USER SAMIK3 IDENTIFIED BY SAMIK3#032025 default tablespace SAMIK3_DATA;

User SAMIK3 created.

SQL> alter user SAMIK3 quota UNLIMITED on SAMIK3_DATA;

User SAMIK3 altered.

SQL> grant DBA to SAMIK3 WITH ADMIN OPTION;

Grant succeeded.

SQL> grant CONNECT to SAMIK3;

Grant succeeded.

SQL> grant RESOURCE to SAMIK3;

Grant succeeded.

SQL> grant CREATE VIEW to SAMIK3;

Grant succeeded.

SQL> grant CREATE TABLE to SAMIK3;

Grant succeeded.

SQL> grant CREATE DATABASE LINK to SAMIK3;

Grant succeeded.

SQL> grant EXECUTE on CTXSYS.CTX_DDL to SAMIK3;

Grant succeeded.

SQL> grant CREATE TYPE to SAMIK3;

Grant succeeded.

SQL> grant CREATE MATERIALIZED VIEW to SAMIK3;

Grant succeeded.

SQL> grant CREATE SEQUENCE to SAMIK3;

Grant succeeded.

SQL> grant CREATE TRIGGER to SAMIK3;

Grant succeeded.

SQL> grant EXECUTE on SYS.DBMS_CRYPTO to SAMIK3;

Grant succeeded.
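

Optionally, re-run the earlier DBA_USERS check to confirm all three schemas now exist with the intended default tablespaces:

SQL> select username, default_tablespace, account_status, created from dba_users where username in ('SAMIK1','SAMIK2','SAMIK3') order by created;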



Step 5: Import your dump file into your target DB instance using DBMS_DATAPUMP package
======================================================================================

For schema "SAMIK1", use the below PL/SQL block and run it in SQL Developer (or SQL*Plus) as the master user (admin) on the RDS Oracle DB instance.


DECLARE
  v_hdnl NUMBER;
BEGIN
  v_hdnl := DBMS_DATAPUMP.OPEN( 
    operation => 'IMPORT', 
    job_mode  => 'SCHEMA', 
    job_name  => null);
  DBMS_DATAPUMP.ADD_FILE( 
    handle    => v_hdnl, 
    filename  => 'EMP1.DMP', 
    directory => 'DATA_PUMP_DIR', 
    filetype  => dbms_datapump.ku$_file_type_dump_file);
  DBMS_DATAPUMP.ADD_FILE( 
    handle    => v_hdnl, 
    filename  => 'EMP1_to_SAMIK1_Import_24032025.log', 
    directory => 'DATA_PUMP_DIR', 
    filetype  => dbms_datapump.ku$_file_type_log_file);
  DBMS_DATAPUMP.METADATA_REMAP(v_hdnl, 'REMAP_SCHEMA', 'EMP1', 'SAMIK1');
  DBMS_DATAPUMP.METADATA_REMAP(v_hdnl, 'REMAP_TABLESPACE', 'MERMS', 'SAMIK1_DATA');
  DBMS_DATAPUMP.SET_PARALLEL(v_hdnl, degree => 2);
  DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/


O/p:-
-----

PL/SQL procedure successfully completed.
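

The PL/SQL block returns immediately while the import continues in the background. While it is running, you can monitor the job (the job name is system-generated, e.g. SYS_IMPORT_SCHEMA_01) with:

SQL> select owner_name, job_name, operation, job_mode, state, degree from dba_datapump_jobs where job_name like 'SYS_IMPORT%';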


Step 6: Once the import job has completed, read the log file to verify for any errors
===============================================================================




SELECT * FROM TABLE 
    (rdsadmin.rds_file_util.read_text_file(
        p_directory => 'DATA_PUMP_DIR',
        p_filename  => 'EMP1_to_SAMIK1_Import_24032025.log'));


O/p will be like below,

Master table "ADMIN"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "ADMIN"."SYS_IMPORT_SCHEMA_01":  
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"SAMIK1" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SAMIK1"."MI_INDX_COLUMNS"                  0 KB       0 rows
. . imported "SAMIK1"."MI_LUBRICANT"                     0 KB       0 rows
. . imported "SAMIK1"."MI_STG_QUERIES_BKUP_4030007"  16.89 KB     154 rows
. . imported "SAMIK1"."MI_ROLE_RES"                  55.06 KB    2190 rows
. . imported "SAMIK1"."MI_TMCAWS00"                      0 KB       0 rows
. . 
. . 
. . 
. . 
. . 
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/VIEW/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/INDEX/DOMAIN_INDEX/INDEX
ORA-39082: Object type PROCEDURE:"SAMIK1"."TEMP_SEG_USAGE_INSERT" created with compilation warnings
ORA-39082: Object type VIEW:"SAMIK1"."BYV_TECH_DATA" created with compilation warnings
Job "ADMIN"."SYS_IMPORT_SCHEMA_01" completed with 5 error(s) at Mon Mar 24 07:56:10 2025 elapsed 0 01:48:51


Now analyze the errors (Post import steps):-
==========================================

ORA-31684: Object type USER:"SAMIK1" already exists ----> Please ignore this error since we created this schema manually in the target database.

ORA-39083: Object type INDEX failed to create with error:

This index creation step failed because the schema "DBOP" does not exist in the target database. So, re-create the indexes manually under the "SAMIK1" schema like below,


SQL> CREATE INDEX "SAMIK1"."N_MI_EVWKHIST_OID_RID_SS" ON "SAMIK1"."MI_EVWKHIST" ("MI_EVWKHIST_ORDR_ID_C", "MI_EVWKHIST_RQST_ID_C", "MI_EVWKHIST_SAP_SYSTEM_C") PCTFREE 10 INITRANS 2 MAXTRANS 255  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "SAMIK1_DATA" PARALLEL 1
/

Index "SAMIK1"."N_MI_EVWKHIST_OID_RID_SS" created.


SQL> CREATE INDEX "SAMIK1"."N_MI_EVWKHIST_RID_OID_SS" ON "SAMIK1"."MI_EVWKHIST" ("MI_EVWKHIST_RQST_ID_C", "MI_EVWKHIST_ORDR_ID_C", "MI_EVWKHIST_SAP_SYSTEM_C") PCTFREE 10 INITRANS 2 MAXTRANS 255  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "SAMIK1_DATA" PARALLEL 1 
/

Index "SAMIK1"."N_MI_EVWKHIST_RID_OID_SS" created.


ORA-39082: Object type PROCEDURE:"SAMIK1"."TEMP_SEG_USAGE_INSERT" created with compilation warnings
ORA-39082: Object type VIEW:"SAMIK1"."BYV_TECH_DATA" created with compilation warnings

Please ignore the above two errors since these objects were already in an INVALID state in the source database.
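
If you want to try recompiling these objects anyway after the import, one simple approach (assuming the master user is allowed to run UTL_RECOMP) is to recompile the whole schema and then re-check for invalid objects:

SQL> EXEC UTL_RECOMP.RECOMP_SERIAL('SAMIK1');

SQL> select object_name, object_type, status from dba_objects where owner='SAMIK1' and status='INVALID';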


Step 7: Verification of imported DB Objects & its Storage size information
==========================================================================


SQL> select distinct owner, segment_type, sum(bytes)/1024/1024/1024 from dba_segments where owner='SAMIK1' group by owner, segment_type;

Owner Segment_Type Size (GB)
---------- ------------ -----------------
SAMIK1 LOBINDEX 0.03961181640625
SAMIK1 TABLE 46.92852783203125
SAMIK1 LOBSEGMENT 28.79962158203125
SAMIK1 INDEX 79.62249755859375

SQL> select distinct owner, object_type, status, count(*) from dba_objects where owner='SAMIK1' group by owner, object_type, status order by status, object_type;

Owner Object_Type Status Count(*)
---------- ----------- ------- --------
SAMIK1 PROCEDURE INVALID 1
SAMIK1 VIEW INVALID 1
SAMIK1 FUNCTION VALID 15
SAMIK1 INDEX VALID 6951
SAMIK1 LOB VALID 1099
SAMIK1 PACKAGE VALID 21
SAMIK1 PACKAGE BODY VALID 19
SAMIK1 PROCEDURE VALID 67
SAMIK1 SEQUENCE VALID 57
SAMIK1 TABLE VALID 2421
SAMIK1 TRIGGER VALID 33
SAMIK1 TYPE VALID 3
SAMIK1 TYPE BODY VALID 2
SAMIK1 VIEW VALID 1688




Step 8: Clean-up of files
==========================



SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by filename, mtime;

Filename Type Size Mtime
--------------- ---- ---------- --------
EMP2.DMP file 28131328 19-03-25
EMP2.log file 1982 19-03-25
EMP3.DMP file 479232 19-03-25
EMP3.log file 3036 19-03-25
EMP3_to_SAMIK3_Import_19032025.log file 5788 19-03-25
EMP1.DMP file 15426408448 19-03-25
EMP1.log file 195909 19-03-25
EMP1_to_SAMIK1_Import_24032025.log file 194293 24-03-25
datapump/ directory 4096 24-03-25


Now delete the unwanted files from the DATA_PUMP_DIR using below command,

EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','filename');
EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','EMP3_to_SAMIK3_Import_19032025.log'); 






Step 9: Import "SAMIK2" in orcl database
================================================
Please create the below sequences and empty tables before executing the Data Pump process.


SQL> CREATE SEQUENCE  "SAMIK2"."CUSTOM_ID_SEQ"  MINVALUE 1 MAXVALUE 9223372036854775807 INCREMENT BY 1 START WITH 1 NOCACHE  NOORDER  NOCYCLE  NOKEEP  NOSCALE  GLOBAL
/

Sequence "SAMIK2"."CUSTOM_ID_SEQ" created.


SQL> CREATE SEQUENCE  "SAMIK2"."BASELINE_ID_SEQ"  MINVALUE 1 MAXVALUE 9223372036854775807 INCREMENT BY 1 START WITH 725903 NOCACHE  NOORDER  NOCYCLE  NOKEEP  NOSCALE  GLOBAL
/

Sequence "SAMIK2"."BASELINE_ID_SEQ" created.


SQL> CREATE SEQUENCE  "SAMIK2"."TEMPORARY_ID_SEQ"  MINVALUE 1 MAXVALUE 9223372036854775807 INCREMENT BY 1 START WITH 1 NOCACHE  NOORDER  NOCYCLE  NOKEEP  NOSCALE  GLOBAL
/

Sequence "SAMIK2"."TEMPORARY_ID_SEQ" created.


SQL> CREATE TABLE "SAMIK2"."CUSTOM" ("ID" NUMBER(19,0) DEFAULT "SAMIK2"."CUSTOM_ID_SEQ"."NEXTVAL" NOT NULL ENABLE, "LANGUAGEID" VARCHAR2(10 BYTE) NOT NULL ENABLE, "NAME" VARCHAR2(250 BYTE) NOT NULL ENABLE, "TEXT" VARCHAR2(1000 BYTE) NOT NULL ENABLE, "TRANSLATION" VARCHAR2(1500 BYTE) NOT NULL ENABLE, "PATH" VARCHAR2(350 BYTE), "TYPE" VARCHAR2(1 BYTE) DEFAULT '0', "SUBTYPE" VARCHAR2(1 BYTE) DEFAULT '0', "SUBMODULE" VARCHAR2(50 BYTE), "MODULEID" VARCHAR2(40 BYTE) NOT NULL ENABLE, "TENANTID" VARCHAR2(50 BYTE) NOT NULL ENABLE, "NEEDTRANSLATION" VARCHAR2(2 BYTE) NOT NULL ENABLE, "LASTUPDATEDATE" TIMESTAMP (2) DEFAULT LOCALTIMESTAMP(2)) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  NOCOMPRESS LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "SAMIK2_DATA"
/

Table "SAMIK2"."CUSTOM" created.


SQL> CREATE TABLE "SAMIK2"."BASELINE" ("ID" NUMBER(19,0) DEFAULT "SAMIK2"."BASELINE_ID_SEQ"."NEXTVAL" NOT NULL ENABLE, "LANGUAGEID" VARCHAR2(10 BYTE) NOT NULL ENABLE, "NAME" VARCHAR2(250 BYTE) NOT NULL ENABLE, "TEXT" VARCHAR2(1000 BYTE) NOT NULL ENABLE, "TRANSLATION" VARCHAR2(1500 BYTE) NOT NULL ENABLE, "PATH" VARCHAR2(350 BYTE), "TYPE" VARCHAR2(1 BYTE) DEFAULT '0', "SUBTYPE" VARCHAR2(1 BYTE) DEFAULT '0', "SUBMODULE" VARCHAR2(50 BYTE), "MODULEID" VARCHAR2(40 BYTE) NOT NULL ENABLE, "NEEDTRANSLATION" VARCHAR2(2 BYTE) NOT NULL ENABLE, "LASTUPDATEDATE" TIMESTAMP (2) DEFAULT LOCALTIMESTAMP(2)) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  NOCOMPRESS LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "SAMIK2_DATA" 
/

Table "SAMIK2"."BASELINE" created.


SQL> CREATE TABLE "SAMIK2"."TEMPORARY" ("ID" NUMBER(19,0) DEFAULT "SAMIK2"."TEMPORARY_ID_SEQ"."NEXTVAL" NOT NULL ENABLE, "LANGUAGEID" VARCHAR2(10 BYTE) NOT NULL ENABLE, "NAME" VARCHAR2(250 BYTE) NOT NULL ENABLE, "TEXT" VARCHAR2(1000 BYTE) NOT NULL ENABLE, "TRANSLATION" VARCHAR2(1500 BYTE) NOT NULL ENABLE, "PATH" VARCHAR2(350 BYTE), "TYPE" VARCHAR2(1 BYTE) DEFAULT '0', "SUBTYPE" VARCHAR2(1 BYTE) DEFAULT '0', "SUBMODULE" VARCHAR2(50 BYTE), "MODULEID" VARCHAR2(40 BYTE) NOT NULL ENABLE, "TENANTID" VARCHAR2(50 BYTE) NOT NULL ENABLE, "NEEDTRANSLATION" VARCHAR2(2 BYTE) NOT NULL ENABLE, "LASTUPDATEDATE" TIMESTAMP (2) DEFAULT LOCALTIMESTAMP(2)) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  NOCOMPRESS LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "SAMIK2_DATA" 
/

Table "SAMIK2"."TEMPORARY" created.


Now, to import the schema "SAMIK2", use the below PL/SQL block and run it in SQL Developer (or SQL*Plus) as the master user (admin) on the RDS Oracle DB instance.


DECLARE
  v_hdnl NUMBER;
BEGIN
  v_hdnl := DBMS_DATAPUMP.OPEN( 
    operation => 'IMPORT', 
    job_mode  => 'SCHEMA', 
    job_name  => null);
  DBMS_DATAPUMP.ADD_FILE( 
    handle    => v_hdnl, 
    filename  => 'EMP2.DMP', 
    directory => 'DATA_PUMP_DIR', 
    filetype  => dbms_datapump.ku$_file_type_dump_file);
  DBMS_DATAPUMP.ADD_FILE( 
    handle    => v_hdnl, 
    filename  => 'EMP2_to_SAMIK2_Import_24032025.log', 
    directory => 'DATA_PUMP_DIR', 
    filetype  => dbms_datapump.ku$_file_type_log_file);
  DBMS_DATAPUMP.SET_PARAMETER (v_hdnl, 'TABLE_EXISTS_ACTION','TRUNCATE');
  DBMS_DATAPUMP.METADATA_REMAP(v_hdnl, 'REMAP_SCHEMA', 'EMP2', 'SAMIK2');
  DBMS_DATAPUMP.METADATA_REMAP(v_hdnl, 'REMAP_TABLESPACE', 'MERMS', 'SAMIK2_DATA');
  DBMS_DATAPUMP.SET_PARALLEL(v_hdnl, degree => 2);
  DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/
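
Note: the DBMS_DATAPUMP.SET_PARAMETER call with TABLE_EXISTS_ACTION set to TRUNCATE tells Data Pump to truncate and reload tables that already exist (such as the ones pre-created above) instead of failing or skipping them; as the log below shows, dependent metadata for those tables is then skipped.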


O/p:-
-----

PL/SQL procedure successfully completed.


Now, verify the import logfile for any errors and take necessary actions.


SELECT * FROM TABLE 
    (rdsadmin.rds_file_util.read_text_file(
        p_directory => 'DATA_PUMP_DIR',
        p_filename  => 'EMP2_to_SAMIK2_Import_24032025.log'));


O/p:-
-----

Master table "ADMIN"."SYS_IMPORT_SCHEMA_03" successfully loaded/unloaded
Starting "ADMIN"."SYS_IMPORT_SCHEMA_03":  
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"SAMIK2" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
ORA-31684: Object type SEQUENCE:"SAMIK2"."BASELINE_ID_SEQ" already exists
ORA-31684: Object type SEQUENCE:"SAMIK2"."CUSTOM_ID_SEQ" already exists
ORA-31684: Object type SEQUENCE:"SAMIK2"."TEMPORARY_ID_SEQ" already exists
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Table "SAMIK2"."CUSTOM" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Table "SAMIK2"."BASELINE" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Table "SAMIK2"."TEMPORARY" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SAMIK2"."UPGRADE_HISTORY"             5.468 KB      13 rows
. . imported "SAMIK2"."LANGUAGES"                   5.289 KB      14 rows
. . imported "SAMIK2"."MODULES"                     4.937 KB       1 rows
. . imported "SAMIK2"."CUSTOM"                          0 KB       0 rows
. . imported "SAMIK2"."TEMPORARY"                       0 KB       0 rows
. . imported "SAMIK2"."BASELINE"                    26.66 MB  725912 rows
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Job "ADMIN"."SYS_IMPORT_SCHEMA_03" completed with 4 error(s) at Mon Mar 24 11:58:47 2025 elapsed 0 00:00:19





Step 10: Import "SAMIK3" in orcl database
================================================

Please create the below sequences and empty tables before executing the Data Pump process.


SQL> CREATE SEQUENCE "SAMIK3"."JOB_EXECUTION_ID_SEQ" MINVALUE 1 MAXVALUE 9223372036854775807 INCREMENT BY 1 START WITH 1 NOCACHE  NOORDER  NOCYCLE  NOKEEP  NOSCALE  GLOBAL
/

Sequence "SAMIK3"."JOB_EXECUTION_ID_SEQ" created.


SQL> CREATE SEQUENCE "SAMIK3"."UPGRADE_HISTORY_SEQUENCE" MINVALUE 1 MAXVALUE 9223372036854775807 INCREMENT BY 1 START WITH 1 NOCACHE  NOORDER  NOCYCLE  NOKEEP  NOSCALE  GLOBAL
/

Sequence "SAMIK3"."UPGRADE_HISTORY_SEQUENCE" created.


SQL> CREATE TABLE "SAMIK3"."JOB_EXECUTION" ("JOB_EXECUTION_ID" NUMBER(19,0) DEFAULT "SAMIK3"."JOB_EXECUTION_ID_SEQ"."NEXTVAL" NOT NULL ENABLE, "TENANT_ID" VARCHAR2(120 BYTE) NOT NULL ENABLE, "SCHEDULER_NAME" VARCHAR2(120 BYTE) NOT NULL ENABLE, "SCHEDULER_INSTANCE_ID" VARCHAR2(255 BYTE) NOT NULL ENABLE, "TRIGGER_GROUP" VARCHAR2(150 BYTE) NOT NULL ENABLE, "TRIGGER_NAME" VARCHAR2(150 BYTE) NOT NULL ENABLE, "TRIGGER_DESCRIPTION" VARCHAR2(250 BYTE), "JOB_GROUP" VARCHAR2(200 BYTE) NOT NULL ENABLE, "JOB_NAME" VARCHAR2(200 BYTE) NOT NULL ENABLE, "JOB_DESCRIPTION" VARCHAR2(4000 BYTE), "JOB_TYPE" VARCHAR2(250 BYTE), "JOB_CLASS_NAME" VARCHAR2(250 BYTE), "JOB_PARAMETER" VARCHAR2(4000 BYTE), "EXECUTION_STATUS" NUMBER(*,0), "STATUS_DESCRIPTION" VARCHAR2(4000 BYTE), "IS_CANCEL_REQUESTED" NUMBER(3,0) DEFAULT 0, "SCHEDULED_TIME" TIMESTAMP (6), "SCHEDULED_TIMEZONE" VARCHAR2(250 BYTE), "FIRE_INSTANCE_ID" VARCHAR2(255 BYTE), "FIRED_TIME" TIMESTAMP (6) NOT NULL ENABLE, "NEXT_FIRE_TIME" TIMESTAMP (6), "PREVIOUS_FIRE_TIME" TIMESTAMP (6), "PROGRESS" FLOAT(49), "COMPLETED_TIME" TIMESTAMP (6), "RUN_TIME" NUMBER(19,0), "RECOVERY_TRIGGER_GROUP" VARCHAR2(150 BYTE), "RECOVERY_TRIGGER_NAME" VARCHAR2(150 BYTE), "RECOVERY_TRIGGER_FIRE_TIME" TIMESTAMP (6), "MODIFIED_DT" TIMESTAMP (6)) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  NOCOMPRESS LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "SAMIK3_DATA" 
/

Table "SAMIK3"."JOB_EXECUTION" created.


SQL> CREATE TABLE "SAMIK3"."UPGRADE_HISTORY" ("SCHEMAVERSIONID" NUMBER(10,0) DEFAULT "SAMIK3"."UPGRADE_HISTORY_SEQUENCE"."NEXTVAL" NOT NULL ENABLE, "SCRIPTNAME" VARCHAR2(255 BYTE), "APPLIED" TIMESTAMP (6)) SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  NOCOMPRESS LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "SAMIK3_DATA"
/

Table "SAMIK3"."UPGRADE_HISTORY" created.


Now, to import the schema "SAMIK3", use the below PL/SQL block and run it in SQL Developer (or SQL*Plus) as the master user (admin) on the RDS Oracle DB instance.


DECLARE
  v_hdnl NUMBER;
BEGIN
  v_hdnl := DBMS_DATAPUMP.OPEN( 
    operation => 'IMPORT', 
    job_mode  => 'SCHEMA', 
    job_name  => null);
  DBMS_DATAPUMP.ADD_FILE( 
    handle    => v_hdnl, 
    filename  => 'EMP3.DMP', 
    directory => 'DATA_PUMP_DIR', 
    filetype  => dbms_datapump.ku$_file_type_dump_file);
  DBMS_DATAPUMP.ADD_FILE( 
    handle    => v_hdnl, 
    filename  => 'EMP3_to_SAMIK3_Import_24032025.log', 
    directory => 'DATA_PUMP_DIR', 
    filetype  => dbms_datapump.ku$_file_type_log_file);
  DBMS_DATAPUMP.SET_PARAMETER (v_hdnl, 'TABLE_EXISTS_ACTION','TRUNCATE');
  DBMS_DATAPUMP.METADATA_REMAP(v_hdnl, 'REMAP_SCHEMA', 'EMP3', 'SAMIK3');
  DBMS_DATAPUMP.METADATA_REMAP(v_hdnl, 'REMAP_TABLESPACE', 'MERMS', 'SAMIK3_DATA');
  DBMS_DATAPUMP.SET_PARALLEL(v_hdnl, degree => 2);
  DBMS_DATAPUMP.START_JOB(v_hdnl);
END;
/


O/p:-
-----

PL/SQL procedure successfully completed.


Now, verify the import logfile for any errors and take necessary actions.


SELECT * FROM TABLE 
    (rdsadmin.rds_file_util.read_text_file(
        p_directory => 'DATA_PUMP_DIR',
        p_filename  => 'EMP3_to_SAMIK3_Import_24032025.log'));


O/p:-
-----

Master table "ADMIN"."SYS_IMPORT_SCHEMA_04" successfully loaded/unloaded
Starting "ADMIN"."SYS_IMPORT_SCHEMA_04":  
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"SAMIK3" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
ORA-31684: Object type SEQUENCE:"SAMIK3"."UPGRADE_HISTORY_SEQUENCE" already exists
ORA-31684: Object type SEQUENCE:"SAMIK3"."JOB_EXECUTION_ID_SEQ" already exists
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Table "SAMIK3"."JOB_EXECUTION" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Table "SAMIK3"."UPGRADE_HISTORY" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SAMIK3"."JOB_EXECUTION"               145.2 KB    2415 rows
. . imported "SAMIK3"."QRTZ_TRIGGERS"               6.187 KB       4 rows
. . imported "SAMIK3"."JOB_EXECUTION_MSG"           6.085 KB      38 rows
. . imported "SAMIK3"."QRTZ_JOB_DETAILS"            6.039 KB       4 rows
. . imported "SAMIK3"."QRTZ_BLOB_TRIGGERS"          5.992 KB       2 rows
. . imported "SAMIK3"."SCHEDULES"                   5.656 KB       2 rows
. . imported "SAMIK3"."QRTZ_CRON_TRIGGERS"          5.257 KB       2 rows
. . imported "SAMIK3"."QRTZ_SCHEDULER_STATE"        5.203 KB       2 rows
. . imported "SAMIK3"."UPGRADE_HISTORY"             5.164 KB       3 rows
. . imported "SAMIK3"."DBUP_MIGRATION_HISTORY"      5.125 KB       2 rows
. . imported "SAMIK3"."QRTZ_LOCKS"                  4.953 KB       4 rows
. . imported "SAMIK3"."QRTZ_CALENDARS"                  0 KB       0 rows
. . imported "SAMIK3"."QRTZ_FIRED_TRIGGERS"             0 KB       0 rows
. . imported "SAMIK3"."QRTZ_PAUSED_TRIGGER_GRPS"        0 KB       0 rows
. . imported "SAMIK3"."QRTZ_SIMPLE_TRIGGERS"            0 KB       0 rows
. . imported "SAMIK3"."QRTZ_SIMPROP_TRIGGERS"           0 KB       0 rows
. . imported "SAMIK3"."SCHEDULER_SETTINGS"              0 KB       0 rows
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
ORA-39083: Object type REF_CONSTRAINT:"SAMIK3"."JOB_EXECUTION_MSG_JOB_EXECUTION" failed to create with error:
ORA-02270: no matching unique or primary key for this column-list

Failing sql is:
ALTER TABLE "SAMIK3"."JOB_EXECUTION_MSG" ADD CONSTRAINT "JOB_EXECUTION_MSG_JOB_EXECUTION" FOREIGN KEY ("JOB_EXECUTION_ID") REFERENCES "SAMIK3"."JOB_EXECUTION" ("JOB_EXECUTION_ID") ON DELETE CASCADE ENABLE
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Job "ADMIN"."SYS_IMPORT_SCHEMA_04" completed with 4 error(s) at Mon Mar 24 12:16:33 2025 elapsed 0 00:00:20
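
The ORA-02270 above occurs because the pre-created "SAMIK3"."JOB_EXECUTION" table has no primary key (its dependent metadata was skipped due to table_exists_action of truncate), so the foreign key has nothing to reference. One way to resolve it, assuming JOB_EXECUTION_ID is the primary key column in the source database as the exported constraint implies, is to add the primary key manually (the constraint name JOB_EXECUTION_PK is illustrative) and then re-run the failing ALTER TABLE from the log:

SQL> ALTER TABLE "SAMIK3"."JOB_EXECUTION" ADD CONSTRAINT "JOB_EXECUTION_PK" PRIMARY KEY ("JOB_EXECUTION_ID");

SQL> ALTER TABLE "SAMIK3"."JOB_EXECUTION_MSG" ADD CONSTRAINT "JOB_EXECUTION_MSG_JOB_EXECUTION" FOREIGN KEY ("JOB_EXECUTION_ID") REFERENCES "SAMIK3"."JOB_EXECUTION" ("JOB_EXECUTION_ID") ON DELETE CASCADE ENABLE;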





Step 11: Remove the source database dump files from DATA_PUMP_DIR which are copied from S3 Bucket
=================================================================================================


SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by filename, mtime;

Filename Type Size Mtime
--------------- ---- ---------- --------
EMP2.DMP file 28131328 19-03-25
EMP2.log file 1982 19-03-25
EMP2_to_SAMIK2_Import_24032025.log file 2132 24-03-25
EMP3.DMP file 479232 19-03-25
EMP3.log file 3036 19-03-25
EMP3_to_SAMIK3_Import_24032025.log file 3370 24-03-25
EMP1.DMP file 15426408448 19-03-25
EMP1.log file 195909 19-03-25
EMP1_to_SAMIK1_Import_24032025.log file 194293 24-03-25
datapump/ directory 4096 24-03-25


Now, delete the unwanted files from the DATA_PUMP_DIR using below command,

E.x.

EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','filename');

SQL> EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','EMP2.DMP'); 

PL/SQL procedure successfully completed.

SQL> EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','EMP3.DMP'); 

PL/SQL procedure successfully completed.

SQL> EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','EMP1.DMP'); 

PL/SQL procedure successfully completed.


Now, check the current files available in DATA_PUMP_DIR directory using below command,


SQL> select * from table(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR')) order by filename, mtime;

Filename Type Size Mtime
--------------- ---- ---------- --------
EMP2.log file 1982 19-03-25
EMP3.log file 3036 19-03-25
EMP1.log file 195909 19-03-25
EMP2_to_SAMIK2_Import_24032025.log file 2132 24-03-25
EMP3_to_SAMIK3_Import_24032025.log file 3370 24-03-25
EMP1_to_SAMIK1_Import_24032025.log file 194293 24-03-25
datapump/ directory 4096 28-03-25















