
Sunday 23 January 2022

Startup NOMOUNT and DB creation script run - alert log

[oracle@rac5 trace]$ cat alert_CATDB.log


2022-01-23T18:23:51.929754+05:30
Starting ORACLE instance (normal) (OS id: 25545)
2022-01-23T18:23:51.995350+05:30
****************************************************
 /dev/shm will be used for creating SGA
Large pages will not be used. Only standard 4K pages will be used
****************************************************
2022-01-23T18:23:52.008448+05:30
**********************************************************************
2022-01-23T18:23:52.008533+05:30
Dump of system resources acquired for SHARED GLOBAL AREA (SGA)

2022-01-23T18:23:52.008749+05:30
 Domain name: user.slice
2022-01-23T18:23:52.008828+05:30
 Per process system memlock (soft) limit = 128G
2022-01-23T18:23:52.008903+05:30
 Expected per process system memlock (soft) limit to lock
 instance MAX SHARED GLOBAL AREA (SGA) into memory: 1024M
2022-01-23T18:23:52.009048+05:30
 Available system pagesizes:
  4K, 2048K
2022-01-23T18:23:52.009191+05:30
 Supported system pagesize(s):
2022-01-23T18:23:52.009264+05:30
  PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES  ERROR(s)
2022-01-23T18:23:52.009339+05:30
        4K       Configured          262150          262150        NONE
2022-01-23T18:23:52.009422+05:30
 Reason for not supporting certain system pagesizes:
2022-01-23T18:23:52.009497+05:30
  2048K - Dynamic allocate and free memory regions
2022-01-23T18:23:52.009570+05:30
**********************************************************************
2022-01-23T18:24:12.013644+05:30
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
2022-01-23T18:24:12.057139+05:30
Initial number of CPU is 1
Number of processor cores in the system is 1
Number of processor sockets in the system is 1
Shared memory segment for instance monitoring created
Capability Type : Network
capabilities requested : 7 detected : 0 Simulated : 0
Capability Type : Runtime Environment
capabilities requested : 400000FF detected : 40000000 Simulated : 0
Capability Type : Engineered Systems
capabilities requested : 7 detected : 0 Simulated : 0
Capability Type : Database Test
capabilities requested : 3 detected : 0 Simulated : 0
Using LOG_ARCHIVE_DEST_1 parameter default value as /u01/app/oracle/product/19.0.0/dbhome_1/dbs/arch
Autotune of undo retention is turned on.
IMODE=BR
ILAT =22
LICENSE_MAX_USERS = 0
SYS auditing is enabled
NOTE: remote asm mode is local (mode 0x1; from cluster type)
2022-01-23T18:24:13.951075+05:30
NOTE: Using default ASM root directory ASM
NOTE: remote asm mode is local (mode 0x1; from cluster type)
NOTE: Cluster configuration type = NONE [2]
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0.
ORACLE_HOME:    /u01/app/oracle/product/19.0.0/dbhome_1
System name:    Linux
Node name:      rac5.samik.com
Release:        4.14.35-1818.3.3.el7uek.x86_64
Version:        #2 SMP Mon Sep 24 14:45:01 PDT 2018
Machine:        x86_64
Using parameter settings in server-side pfile /u01/app/oracle/product/19.0.0/dbhome_1/dbs/initCATDB.ora
System parameters with non-default values:
  memory_target            = 1G
  memory_max_target        = 1G
  control_files            = "/data1/CATDB/control01.ctl"
  compatible               = "19.0.0"
  undo_tablespace          = "UNDOTBS1"
  remote_login_passwordfile= "EXCLUSIVE"
  db_name                  = "CATDB"
  diagnostic_dest          = "/u01/app/oracle"
2022-01-23T18:24:14.532024+05:30
============================================================
NOTE: PatchLevel of this instance 0
============================================================
2022-01-23T18:24:15.169676+05:30
Starting background process PMON
2022-01-23T18:24:15.212486+05:30
PMON started with pid=2, OS id=25670
Starting background process CLMN
2022-01-23T18:24:15.459233+05:30
CLMN started with pid=3, OS id=25672
Starting background process PSP0
2022-01-23T18:24:15.567595+05:30
PSP0 started with pid=4, OS id=25674
Starting background process VKTM
2022-01-23T18:24:15.727296+05:30
VKTM started with pid=5, OS id=25676 at elevated (RT) priority
Starting background process GEN0
2022-01-23T18:24:15.895158+05:30
VKTM running at (1)millisec precision with DBRM quantum (100)ms
Starting background process MMAN
2022-01-23T18:24:15.996149+05:30
GEN0 started with pid=6, OS id=25681
2022-01-23T18:24:16.175964+05:30
MMAN started with pid=7, OS id=25683
2022-01-23T18:24:16.736287+05:30
Starting background process GEN1
2022-01-23T18:24:17.435801+05:30
GEN1 started with pid=9, OS id=25688_25691
Starting background process DIAG
Starting background process OFSD
2022-01-23T18:24:17.623628+05:30
DIAG started with pid=8, OS id=25693
2022-01-23T18:24:17.749960+05:30
Starting background process DBRM
2022-01-23T18:24:17.751319+05:30
OFSD started with pid=11, OS id=25695_25696
2022-01-23T18:24:17.754882+05:30
Oracle running with ofslib:'Oracle File Server Library' version=2
2022-01-23T18:24:17.857259+05:30
DBRM started with pid=13, OS id=25698
Starting background process VKRM
2022-01-23T18:24:17.960437+05:30
VKRM started with pid=14, OS id=25700
Starting background process SVCB
2022-01-23T18:24:18.094831+05:30
SVCB started with pid=15, OS id=25702
Starting background process PMAN
2022-01-23T18:24:18.201252+05:30
PMAN started with pid=16, OS id=25704
Starting background process DIA0
2022-01-23T18:24:18.331715+05:30
DIA0 started with pid=17, OS id=25706
Starting background process DBW0
2022-01-23T18:24:19.111114+05:30
Starting background process LGWR
2022-01-23T18:24:19.116202+05:30
DBW0 started with pid=18, OS id=25709
2022-01-23T18:24:19.378838+05:30
LGWR started with pid=19, OS id=25712
Starting background process CKPT
2022-01-23T18:24:19.513616+05:30
CKPT started with pid=20, OS id=25714
Starting background process SMON
2022-01-23T18:24:19.683503+05:30
SMON started with pid=21, OS id=25716
Starting background process SMCO
2022-01-23T18:24:19.784261+05:30
SMCO started with pid=22, OS id=25718
Starting background process RECO
2022-01-23T18:24:19.898217+05:30
RECO started with pid=23, OS id=25720
Starting background process LREG
2022-01-23T18:24:20.100922+05:30
LREG started with pid=25, OS id=25726
2022-01-23T18:24:20.694354+05:30
Starting background process PXMN
2022-01-23T18:24:21.127645+05:30
Using default pga_aggregate_limit of 2048 MB
2022-01-23T18:24:21.211443+05:30
PXMN started with pid=27, OS id=25730
Starting background process FENC
2022-01-23T18:24:21.500815+05:30
FENC started with pid=28, OS id=25733
Starting background process MMON
2022-01-23T18:24:21.831116+05:30
MMON started with pid=29, OS id=25735
2022-01-23T18:24:21.832099+05:30
Starting background process MMNL
Starting background process TMON
2022-01-23T18:24:22.036743+05:30
MMNL started with pid=28, OS id=25738
2022-01-23T18:24:22.386257+05:30
Setting CPU count to 1
ORACLE_BASE from environment = /u01/app/oracle
2022-01-23T18:24:22.393851+05:30
TMON started with pid=30, OS id=25740

==================================================

When the DB creation script is run, the following entries are captured in the alert log file.



2022-01-23T18:27:50.682378+05:30
create database CATDB
LOGFILE
group 1 ('/data1/CATDB/redo01.log') SIZE 10M,
group 2 ('/data1/CATDB/redo02.log') SIZE 10M,
group 3 ('/data1/CATDB/redo03.log') SIZE 10M
datafile '/data1/CATDB/system01.dbf' SIZE 500M
SYSAUX datafile '/data1/CATDB/sysaux01.dbf' SIZE 300M
DEFAULT TABLESPACE USERS datafile '/data1/CATDBusers01.dbf' size 50M
DEFAULT TEMPORARY TABLESPACE temp
tempfile '/data1/CATDB/temp01.dbf' SIZE 50M
UNDO TABLESPACE undotbs1
datafile '/data1/CATDB/undotbs01.dbf' SIZE 50M
CHARACTER SET AL32UTF8
2022-01-23T18:27:51.944936+05:30
Expanded controlfile section 32 from 31 to 128 records
Requested to grow by 97 records; added 3 blocks of records
.... (PID:25924): Redo network throttle feature is disabled at mount time
2022-01-23T18:27:51.963907+05:30
Database mounted in Exclusive Mode
Lost write protection disabled
.... (PID:25924): Using STANDBY_ARCHIVE_DEST parameter default value as /u01/app/oracle/product/19.0.0/dbhome_1/dbs/arch [krsd.c:18141]
2022-01-23T18:27:53.021106+05:30
Successful mount of redo thread 1, with mount id 2640461351
2022-01-23T18:27:53.022650+05:30
Ping without log force is disabled:
  instance mounted in exclusive mode.
Buffer Cache Full DB Caching mode changing from FULL CACHING DISABLED to FULL CACHING ENABLED
Database SCN compatibility initialized to 3
2022-01-23T18:27:53.310352+05:30
Assigning activation ID 2640461351 (0x9d623e27)
Redo log for group 1, sequence 1 is not located on DAX storage
Thread 1 opened at log sequence 1
  Current log# 1 seq# 1 mem# 0: /data1/CATDB/redo01.log
Successful open of redo thread 1
2022-01-23T18:27:53.380372+05:30
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
stopping change tracking
processing ?/rdbms/admin/dcore.bsq
2022-01-23T18:27:54.067957+05:30
create tablespace SYSTEM datafile  '/data1/CATDB/system01.dbf' SIZE 500M

  default storage (initial 10K next 10K) EXTENT MANAGEMENT DICTIONARY online
2022-01-23T18:27:54.252459+05:30
TT00 (PID:25957): Gap Manager starting
2022-01-23T18:28:08.393125+05:30
Endian type of dictionary set to little
2022-01-23T18:28:10.394423+05:30
Completed: create tablespace SYSTEM datafile  '/data1/CATDB/system01.dbf' SIZE 500M

  default storage (initial 10K next 10K) EXTENT MANAGEMENT DICTIONARY online
create rollback segment SYSTEM tablespace SYSTEM
  storage (initial 50K next 50K)
Completed: create rollback segment SYSTEM tablespace SYSTEM
  storage (initial 50K next 50K)
2022-01-23T18:28:13.165648+05:30
Buffer Cache Full DB Caching mode changing from FULL CACHING ENABLED to FULL CACHING DISABLED
Full DB Caching disabled: DEFAULT_CACHE_SIZE should be at least 95 MBs bigger than current size.
2022-01-23T18:28:14.442046+05:30
Undo initialization recovery: err:0 start: 4609113 end: 4609116 diff: 3 ms (0.0 seconds)
Undo initialization online undo segments: err:0 start: 4609116 end: 4609116 diff: 0 ms (0.0 seconds)
Caching undo dictionary info: used hint for object index=0
Caching undo dictionary info: used hint for object index=1
Caching undo dictionary info: used hint for object index=2
Caching undo dictionary info: used hint for object index=3
Caching undo dictionary info: used hint for object index=4
Undo initialization finished serial:0 start:4609113 end:4609117 diff:4 ms (0.0 seconds)
2022-01-23T18:28:19.859916+05:30
alter tablespace system force logging
Completed: alter tablespace system force logging
2022-01-23T18:28:19.980303+05:30
processing ?/rdbms/admin/dsqlddl.bsq
processing ?/rdbms/admin/dmanage.bsq
CREATE TABLESPACE sysaux DATAFILE  '/data1/CATDB/sysaux01.dbf' SIZE 300M

  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ONLINE FORCE LOGGING
2022-01-23T18:28:31.709126+05:30
Completed: CREATE TABLESPACE sysaux DATAFILE  '/data1/CATDB/sysaux01.dbf' SIZE 300M

  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ONLINE FORCE LOGGING
alter tablespace system default compress for all operations
Completed: alter tablespace system default compress for all operations
alter tablespace sysaux default compress for all operations
Completed: alter tablespace sysaux default compress for all operations
2022-01-23T18:28:31.805575+05:30
processing ?/rdbms/admin/dplsql.bsq
processing ?/rdbms/admin/dtxnspc.bsq
CREATE UNDO TABLESPACE UNDOTBS1 DATAFILE  '/data1/CATDB/undotbs01.dbf' SIZE 50M

2022-01-23T18:28:36.143642+05:30
[25924] Successfully onlined Undo Tablespace 2.
Completed: CREATE UNDO TABLESPACE UNDOTBS1 DATAFILE  '/data1/CATDB/undotbs01.dbf' SIZE 50M

CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE  '/data1/CATDB/temp01.dbf' SIZE 50M

Completed: CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE  '/data1/CATDB/temp01.dbf' SIZE 50M

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE "TEMP"
Completed: ALTER DATABASE DEFAULT TEMPORARY TABLESPACE "TEMP"
CREATE  TABLESPACE "USERS" DATAFILE  '/data1/CATDBusers01.dbf' size 50M
 SEGMENT SPACE MANAGEMENT AUTO
2022-01-23T18:28:39.295989+05:30
Completed: CREATE  TABLESPACE "USERS" DATAFILE  '/data1/CATDBusers01.dbf' size 50M
 SEGMENT SPACE MANAGEMENT AUTO
ALTER DATABASE DEFAULT TABLESPACE "USERS"
Completed: ALTER DATABASE DEFAULT TABLESPACE "USERS"
2022-01-23T18:28:39.352528+05:30
processing ?/rdbms/admin/dfmap.bsq
processing ?/rdbms/admin/denv.bsq
processing ?/rdbms/admin/drac.bsq
2022-01-23T18:28:40.016979+05:30
Thread 1 advanced to log sequence 2 (LGWR switch)
  Current log# 2 seq# 2 mem# 0: /data1/CATDB/redo02.log
2022-01-23T18:28:40.172831+05:30
processing ?/rdbms/admin/dsec.bsq
2022-01-23T18:28:41.495780+05:30
processing ?/rdbms/admin/doptim.bsq
processing ?/rdbms/admin/dobj.bsq
processing ?/rdbms/admin/djava.bsq
processing ?/rdbms/admin/dpart.bsq
2022-01-23T18:28:42.577083+05:30
processing ?/rdbms/admin/drep.bsq
2022-01-23T18:28:42.812220+05:30
Thread 1 advanced to log sequence 3 (LGWR switch)
  Current log# 3 seq# 3 mem# 0: /data1/CATDB/redo03.log
2022-01-23T18:28:43.507486+05:30
processing ?/rdbms/admin/daw.bsq
2022-01-23T18:28:44.239221+05:30
processing ?/rdbms/admin/dsummgt.bsq
2022-01-23T18:28:44.637710+05:30
processing ?/rdbms/admin/dtools.bsq
2022-01-23T18:28:44.975799+05:30
Thread 1 advanced to log sequence 4 (LGWR switch)
  Current log# 1 seq# 4 mem# 0: /data1/CATDB/redo01.log
2022-01-23T18:28:45.008331+05:30
processing ?/rdbms/admin/dexttab.bsq
processing ?/rdbms/admin/ddm.bsq
processing ?/rdbms/admin/dlmnr.bsq
2022-01-23T18:28:45.560910+05:30
processing ?/rdbms/admin/ddst.bsq
processing ?/rdbms/admin/dfba.bsq
processing ?/rdbms/admin/dpstdy.bsq
processing ?/rdbms/admin/drupg.bsq
processing ?/rdbms/admin/dtlog.bsq
processing ?/rdbms/admin/dmisc.bsq
2022-01-23T18:28:46.119188+05:30
processing ?/rdbms/admin/dhcs.bsq
2022-01-23T18:28:47.207387+05:30
Successfully created internal service SYS$BACKGROUND at open
Successfully created internal service SYS$USERS at open
joxcsys_required_dirobj_exists: directory object does not exist, pid 25924 cid 0
joxcsys_ensure_directory_object: created directory object with path /u01/app/oracle/product/19.0.0/dbhome_1/javavm/admin/, pid 25924 cid 0
replication_dependency_tracking turned off (no async multimaster replication found)
Completed: create database CATDB
LOGFILE
group 1 ('/data1/CATDB/redo01.log') SIZE 10M,
group 2 ('/data1/CATDB/redo02.log') SIZE 10M,
group 3 ('/data1/CATDB/redo03.log') SIZE 10M
datafile '/data1/CATDB/system01.dbf' SIZE 500M
SYSAUX datafile '/data1/CATDB/sysaux01.dbf' SIZE 300M
DEFAULT TABLESPACE USERS datafile '/data1/CATDBusers01.dbf' size 50M
DEFAULT TEMPORARY TABLESPACE temp
tempfile '/data1/CATDB/temp01.dbf' SIZE 50M
UNDO TABLESPACE undotbs1
datafile '/data1/CATDB/undotbs01.dbf' SIZE 50M
CHARACTER SET AL32UTF8
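
As an aside: if you are not sure where the alert log for an instance lives, the location can be confirmed from the database itself. A minimal check (V$DIAG_INFO is available once the instance is started, even at NOMOUNT):

SQL> select name, value from v$diag_info where name in ('Diag Trace', 'Diag Alert');

The 'Diag Trace' directory is where the text alert_<SID>.log shown above is written.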










RU and RUR patches Jan-2022 in Oracle

 

Patch details - Jan-2022



The following database patches were released in Jan-2022:

1) RU - 19.14
2) RUR - 19.13.1
3) RUR - 19.12.2


Please note that:
– Both RUs and RURs are released quarterly (Jan, Apr, Jul & Oct).
– An RU includes new features, optimizer changes and security fixes.
– An RUR includes security and regression fixes only.
– Each quarter, two RURs are released, one for each of the two previous RUs.
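
To check which RU or RUR is currently applied to a database, DBA_REGISTRY_SQLPATCH can be queried (a minimal sketch; the PATCH_TYPE column distinguishes RU from RUR):

SQL> select patch_id, patch_type, action, status, description
     from dba_registry_sqlpatch
     order by action_time;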





19c DataPump Features

1. MAX_DATAPUMP_JOBS_PER_PDB database parameter

The MAX_DATAPUMP_JOBS_PER_PDB database parameter is introduced to give DBAs more control over the number of DataPump jobs that can be started in a database environment.

This parameter is valid for both Multitenant and non-Multitenant environments.

In a database where MAX_DATAPUMP_JOBS_PER_PDB is set to N and N DataPump jobs are already running, an attempt to start an (N+1)th DataPump job will report ORA-39391, as shown below:

SQL> show parameter MAX_DATAPUMP_JOBS_PER_PDB

NAME                       TYPE    VALUE
-------------------------- ------- ------
max_datapump_jobs_per_pdb  string  100



– for Multitenant, in CDB$ROOT:
SQL> alter system set MAX_DATAPUMP_JOBS_PER_PDB=2 container=all;
System altered.



– for non-Multitenant:
SQL> alter system set MAX_DATAPUMP_JOBS_PER_PDB=2;
System altered.
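
Before raising or lowering this limit, it helps to know how many DataPump jobs currently exist and their state; a quick check (using the standard DBA_DATAPUMP_JOBS view):

SQL> select owner_name, job_name, operation, job_mode, state
     from dba_datapump_jobs;

Jobs in the EXECUTING state are the ones that count toward the limit.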




– two DataPump jobs are started and running:

$] expdp system/oracle@<PDB_service_name> directory=dptest dumpfile=test1.dmp full=yes

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/********@<PDB_service_name> directory=dptest dumpfile=test1.dmp full=yes
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA

-----> still running





$] expdp system/oracle@<PDB_service_name> directory=dptest dumpfile=test2.dmp full=yes

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Starting "SYSTEM"."SYS_EXPORT_FULL_02":  system/********@<PDB_service_name> directory=dptest dumpfile=test2.dmp full=yes
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
-----> still running



– launching a third DataPump job in a database with MAX_DATAPUMP_JOBS_PER_PDB=2 will report:

$] expdp system/oracle@<PDB_service_name> directory=dptest dumpfile=test3.dmp full=yes

Export: Release 19.0.0.0.0 - Development on Tue Oct 9 07:37:24 2018
Version 19.1.0.0.0

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
ORA-31626: job does not exist
ORA-31637: cannot create job SYS_EXPORT_FULL_03 for user SYSTEM
ORA-06512: at "SYS.KUPV$FT", line 1142
ORA-06512: at "SYS.KUPV$FT", line 1744
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 1099
ORA-39391: maximum number of Data Pump jobs (2) exceeded
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPV$FT_INT", line 969
ORA-06512: at "SYS.KUPV$FT", line 1646
ORA-06512: at "SYS.KUPV$FT", line 1103


==================================================




2. MAX_DATAPUMP_PARALLEL_PER_JOB database parameter

MAX_DATAPUMP_PARALLEL_PER_JOB is introduced to control the number of parallel workers that can be used for an individual DataPump job. This is an improvement that helps implement resource usage policies for DataPump.

In a database with MAX_DATAPUMP_PARALLEL_PER_JOB set to N, even if parallel>N is specified for a DataPump job, only N Worker processes will be started, as shown below:




EXAMPLE 1
===========

Here, MAX_DATAPUMP_PARALLEL_PER_JOB=1 is set:

SQL> alter system set MAX_DATAPUMP_PARALLEL_PER_JOB=1 container=all;
System altered.

 
Now running with parallel=7:

$] expdp system/password@<PDB_service_name> directory=dptest dumpfile=test2.dmp full=yes parallel=7 logtime=all metrics=yes

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
09-OCT-18 07:47:14.242: Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/********@<PDB_service_name> directory=dptest dumpfile=test2.dmp full=yes parallel=7 logtime=all metrics=yes
09-OCT-18 07:47:15.402: W-1 Startup took 1 seconds
09-OCT-18 07:47:28.374: W-1 Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
09-OCT-18 07:47:31.271: W-1 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
09-OCT-18 07:47:35.128: W-1 Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
09-OCT-18 07:47:38.190: W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
09-OCT-18 07:47:40.581: W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
09-OCT-18 07:47:40.874: W-1      Completed 72 INDEX_STATISTICS objects in 0 seconds
09-OCT-18 07:47:41.082: W-1 Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
09-OCT-18 07:47:41.119: W-1      Completed 3 INDEX_STATISTICS objects in 1 seconds
………
09-OCT-18 07:53:43.432: W-1 . . exported "TC_USER5"."TAB9"                           5.046 KB       1 rows in 0 seconds using direct_path
09-OCT-18 07:53:45.494: W-1      Completed 1 DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 2 seconds
09-OCT-18 07:53:45.504: W-1      Completed 62 DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA objects in 1 seconds
09-OCT-18 07:53:45.510: W-1      Completed 18 DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 16 seconds
09-OCT-18 07:53:45.516: W-1      Completed 120 DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA objects in 9 seconds
09-OCT-18 07:53:49.806: W-1 Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
09-OCT-18 07:53:49.866: *****************************************************************
09-OCT-18 07:53:49.867: Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
09-OCT-18 07:53:49.872:   <PATH>/test2.dmp
09-OCT-18 07:53:49.942: Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Tue Oct 9 07:53:49 2018 elapsed 0 00:06:47

– v$session shows only one worker process started:

SQL> select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid
     from   v$session s, v$process p, dba_datapump_sessions d
     where  p.addr=s.paddr and s.saddr=d.saddr
     and    (UPPER(s.program) LIKE '%DM0%' or UPPER(s.program) LIKE '%DW0%');

DATE                PROGRAM                  SID
------------------- ----------------------- ----
2018-10-09 07:52:27 oracle@den01znf (DW00)    65





EXAMPLE 2
===========

– in a database with MAX_DATAPUMP_PARALLEL_PER_JOB set to 4:

SQL> alter system set MAX_DATAPUMP_PARALLEL_PER_JOB=4 container=all;
System altered.

– any DataPump job can have at most 4 Workers, even if the job is started with parallel>4:

> expdp system/oracle@<PDB_service_name> directory=dptest dumpfile=test2.dmp full=yes parallel=7 logtime=all metrics=yes
Export: Release 19.0.0.0.0 - Development on Tue Oct 9 08:19:49 2018
Version 19.1.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
09-OCT-18 08:20:02.544: Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/********@<PDB_service_name> directory=dptest dumpfile=test2.dmp full=yes parallel=7 logtime=all metrics=yes
09-OCT-18 08:20:03.709: W-1 Startup took 1 seconds
09-OCT-18 08:20:07.888: W-2 Startup took 0 seconds
09-OCT-18 08:20:07.951: W-3 Startup took 0 seconds
09-OCT-18 08:20:08.006: W-2 Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
09-OCT-18 08:20:08.013: W-2      Completed 1 MARKER objects in 0 seconds
09-OCT-18 08:20:08.062: W-3 Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
09-OCT-18 08:20:08.063: W-3      Completed 1 MARKER objects in 0 seconds
09-OCT-18 08:20:09.914: W-4 Startup took 0 seconds
…………………
09-OCT-18 08:20:33.919: W-1 Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
09-OCT-18 08:20:35.033: W-2      Completed 4 INC_TYPE objects in 4 seconds
09-OCT-18 08:20:35.194: W-1      Completed 22 TYPE objects in 3 seconds
09-OCT-18 08:20:35.566: W-1 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
09-OCT-18 08:20:35.867: W-1      Completed 2 PROCACT_SYSTEM objects in 0 seconds
09-OCT-18 08:20:37.741: W-3 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
09-OCT-18 08:20:38.376: W-3      Completed 23 PROCOBJ objects in 3 seconds
09-OCT-18 08:20:39.165: W-3 Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
09-OCT-18 08:20:40.948: W-3      Completed 2 PROCACT_SYSTEM objects in 2 seconds
09-OCT-18 08:20:42.752: W-1 Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
………..
09-OCT-18 08:23:30.475: W-3 . . exported "TC_USER5"."TAB8"                           5.046 KB       1 rows in 0 seconds using direct_path
09-OCT-18 08:23:30.592: W-3 . . exported "TC_USER5"."TAB9"                           5.046 KB       1 rows in 0 seconds using direct_path
09-OCT-18 08:23:31.715: W-1 . . exported "WMSYS"."WM$METADATA_MAP"                       0 KB       0 rows in 0 seconds using external_table
09-OCT-18 08:23:32.737: W-4 Processing object type DATABASE_EXPORT/SCHEMA/MATERIALIZED_VIEW
09-OCT-18 08:23:32.829: W-4      Completed 2 MATERIALIZED_VIEW objects in 52 seconds
09-OCT-18 08:23:34.962: W-4      Completed 1 DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 3 seconds
09-OCT-18 08:23:34.974: W-4      Completed 62 DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA objects in 3 seconds
09-OCT-18 08:23:34.980: W-4      Completed 18 DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA objects in 31 seconds
09-OCT-18 08:23:34.989: W-4      Completed 120 DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA objects in 24 seconds
09-OCT-18 08:23:39.255: W-4 Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
09-OCT-18 08:23:39.303: *****************************************************************
09-OCT-18 08:23:39.304: Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
09-OCT-18 08:23:39.307:   <PATH>/test2.dmp
09-OCT-18 08:23:39.424: Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at Tue Oct 9 08:23:39 2018 elapsed 0 00:03:48

– v$session shows only four DataPump worker processes started:
select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid
from   v$session s, v$process p, dba_datapump_sessions d
where  p.addr=s.paddr and s.saddr=d.saddr
and    (UPPER(s.program) LIKE '%DM0%' or UPPER(s.program) LIKE '%DW0%');

DATE                PROGRAM                  SID
------------------- ----------------------- ----
2018-10-09 08:20:21 oracle@den01znf (DW00)   168
2018-10-09 08:20:21 oracle@den01znf (DW01)    50
2018-10-09 08:20:21 oracle@den01znf (DW02)   164
2018-10-09 08:20:21 oracle@den01znf (DW03)    41




Note: There is no warning or error reported when a DataPump job is started with a parallel value higher than the value defined by MAX_DATAPUMP_PARALLEL_PER_JOB.
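
To confirm how many worker processes a job actually received, the worker sessions can also be counted per job; a minimal sketch using the standard DBA_DATAPUMP_SESSIONS view:

SQL> select owner_name, job_name, count(*) as workers
     from dba_datapump_sessions
     where session_type = 'WORKER'
     group by owner_name, job_name;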



==================================================



3. Exclude ENCRYPTION Clause on Import With DataPump Parameter: OMIT_ENCRYPTION_CLAUSE

19c introduces a new transform parameter, OMIT_ENCRYPTION_CLAUSE, which directs Data Pump to suppress any encryption clauses associated with objects using encrypted columns. 

This new transform parameter is only allowed for non-transportable import jobs. OMIT_ENCRYPTION_CLAUSE applies to materialized view, table, and tablespace objects and will enable objects which were utilizing encrypted columns in the source to get created in a target database environment where encryption attributes are not supported.

The DataPump TRANSFORM parameter has been enhanced with this new option to facilitate migrations from a source database with TDE to a target database without TDE.

For example, in the Oracle Public Cloud environment data is encrypted by default using Transparent Data Encryption (TDE) and the encrypted tablespace feature. However, this environment does not support the encrypted column feature. This presents a problem when using Data Pump to migrate a non-cloud database into the Cloud when the source database has objects which have encrypted columns. During import DataPump will always include any associated encryption attribute syntax when creating objects in the target database. Consequently these objects fail to create in the target database due to the invalid/unsupported encryption syntax. To facilitate the creation of these objects (tables and materialized views), the new transform parameter allows users to suppress encryption related syntax during import. For this kind of migrations, OMIT_ENCRYPTION_CLAUSE instructs DataPump to suppress any encryption clauses associated with objects using encrypted columns.

The command line for the Data Pump Import utility (impdp) is enhanced to expose the new transform parameter, OMIT_ENCRYPTION_CLAUSE. 
This is accomplished by adding a new keyword to the existing command line parameter TRANSFORM:

TRANSFORM=OMIT_ENCRYPTION_CLAUSE:[Y|N]
Y – The generated CREATE object DDL will NOT include any encryption attribute clauses.
N – The generated CREATE object DDL will include the related encryption attribute clauses (default).

If the value is specified as Y, then any encryption attribute clause is omitted from the generated DDL. The default is N.

This new transform parameter can additionally be set through the programmatic interface, DBMS_METADATA.SET_TRANSFORM_PARAM:

DBMS_METADATA.SET_TRANSFORM_PARAM(job_handle, 'OMIT_ENCRYPTION_CLAUSE', TRUE);
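
Before such a migration it can also be useful to inventory which columns are actually encrypted in the source database, since those are the objects whose DDL is affected by this transform; a quick check (using the standard DBA_ENCRYPTED_COLUMNS view):

SQL> select owner, table_name, column_name, encryption_alg
     from dba_encrypted_columns;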




Example:

– source 12.1.0.2 with TDE enabled and 19.1 target database
– in the source database, create an encrypted table and populate it:
SQL> administer key management set key identified by Oracle_123 with backup;
keystore altered.

SQL> create table scott.t1 (name varchar2(100) encrypt using 'AES256');
Table created.

SQL> insert into scott.t1 values('AAA');
1 row created.

SQL> commit;

SQL> select dbms_metadata.get_ddl('TABLE','T1','SCOTT') from dual;

DBMS_METADATA.GET_DDL('TABLE','T1','SCOTT')
--------------------------------------------------------------------------------
CREATE TABLE "SCOTT"."T1"
(    "NAME" VARCHAR2(100) ENCRYPT USING 'AES256' 'SHA-1'
) SEGMENT CREATION IMMEDIATE
……

– generate the export dump file for the table with the encrypted clause in its definition:

$] expdp system/<password> dumpfile=test.dmp directory=DATA_PUMP_DIR tables=scott.t1


Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** dumpfile=test.dmp directory=DATA_PUMP_DIR tables=scott.t1
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
........
. . exported "SCOTT"."T1"                                5.054 KB       1 rows
ORA-39173: Encrypted data has been stored unencrypted in dump file set.
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded



– run impdp with transform=OMIT_ENCRYPTION_CLAUSE:N



$] impdp system/oracle directory=dptest dumpfile=test.dmp remap_schema=scott:test transform=OMIT_ENCRYPTION_CLAUSE:N

Import: Release 19.0.0.0.0 - Development on Tue Oct 9 02:45:56 2018
Version 19.1.0.0.0
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
import done in WE8DEC character set and AL16UTF16 NCHAR character set
export done in AL32UTF8 character set and AL16UTF16 NCHAR character set
Warning: possible data loss in character set conversions
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** directory=dptest dumpfile=test.dmp remap_schema=scott:test transform=OMIT_ENCRYPTION_CLAUSE:N
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE:"TEST"."T1" failed to create with error:
ORA-28365: wallet is not open

Failing sql is:
CREATE TABLE "TEST"."T1" ("NAME" VARCHAR2(100 BYTE) ENCRYPT USING 'AES256' 'SHA-1') SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255  NOCOMPRESS LOGGING STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST ...) TABLESPACE "USERS"

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at Tue Oct 9 02:46:09 2018 elapsed 0 00:00:06





– run impdp with transform=OMIT_ENCRYPTION_CLAUSE:Y

$] impdp system/<password> directory=dptest dumpfile=test.dmp remap_schema=scott:test transform=OMIT_ENCRYPTION_CLAUSE:Y


Import: Release 19.0.0.0.0 - Development on Tue Oct 9 02:46:45 2018
Version 19.1.0.0.0
Username: system/oracle
Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Warning: possible data loss in character set conversions
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** directory=dptest dumpfile=test.dmp remap_schema=scott:test transform=OMIT_ENCRYPTION_CLAUSE:Y
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."T1"                                 5.054 KB       1 rows
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at Tue Oct 9 02:47:36 2018 elapsed 0 00:00:35


Now connect to the database and check the table DDL:

bash-4.2$ sqlplus

SQL*Plus: Release 19.0.0.0.0 - Development on Tue Oct 9 03:58:25 2018
Version 19.1.0.0.0
Enter user-name: / as sysdba
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Version 19.1.0.0.0

SQL> select dbms_metadata.get_ddl('TABLE','T1','TEST') from dual;

DBMS_METADATA.GET_DDL('TABLE','T1','TEST')
--------------------------------------------------------------------------------
CREATE TABLE "TEST"."T1"
(    "NAME" VARCHAR2(100)   <---- the encryption attribute clause is gone after impdp with transform=OMIT_ENCRYPTION_CLAUSE:Y
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 .....



==================================================



4. Test Mode for Transportable Tablespaces With DataPump Parameter: TTS_CLOSURE_CHECK


With 19c, DBAs can more easily determine how long a transportable export will take and discover unforeseen issues not reported by the closure check. The DataPump Export command-line parameter TTS_CLOSURE_CHECK can be used to specify that tablespaces are not required to be read-only during the transportable tablespace operation; the resulting dump file is flagged as "unavailable for import".
Test mode for transportable tablespaces performs a metadata-only export test using Transportable Tablespaces or Full Transportable Export/Import, and removes the requirement that the source tablespaces be in read-only mode. Normally, the DataPump utility requires that tablespaces be set read-only prior to a transportable tablespaces export; this ensures that the data is in a consistent state. In addition, a transportable tablespace export conducts a closure check of the transportable tablespace set. However, there are cases when the following information is needed before starting a transportable operation with DataPump:
– how long it will take, and
– whether there might be unforeseen issues not caught by the closure check.

The ability to limit the length of time that the tablespaces are in read-only mode, and to obtain time estimates of the DataPump transportable export operation, is now supported with the command-line DataPump parameter TTS_CLOSURE_CHECK.

TTS_CLOSURE_CHECK

Purpose :  Specifies the level of closure check to be performed as part of the transportable export operation.

Syntax and Description 

TTS_CLOSURE_CHECK = [ ON | OFF | FULL | TEST_MODE ]

ON – Indicates that a closure check is performed to ensure that the transportable tablespace set contains no references outside the set.

OFF – Indicates that no closure check is performed. The user is responsible for verifying that the transportable tablespace set is self-contained.

FULL – Indicates that a full multi-directional closure check is performed to ensure that there are no remote references out of or into the transportable tablespace set.

TEST_MODE – Indicates that the tablespaces are not required to be in read-only mode. This option is for testing purposes only, to provide the timing requirements of a transportable tablespace export operation. The resulting export dump file is not available for use by Data Pump Import.

Notes:
1/ ON, OFF, and FULL are mutually exclusive.  TEST_MODE is a Data Pump Export option only.

2/ Running a transportable operation with the TTS_CLOSURE_CHECK TEST_MODE option allows the tablespaces to remain read-write and provides the requested timing information. The resulting Data Pump Export dump file is not available for use by Data Pump Import.

3/ The time DataPump takes to conduct the closure check can be long, and at times unnecessary, especially when it is already known that the transportable tablespace set is self-contained. Running a Data Pump transportable operation with TTS_CLOSURE_CHECK=OFF decreases the time required for the operation to complete; the main benefit of this option is bypassing a time-consuming step.

4/ Skipping the closure check decreases the time for the transportable export operation to complete, which increases availability. The ability to obtain timing requirements for the transportable export operation with the tablespaces in read-write mode also increases availability.

5/ The TTS_CLOSURE_CHECK parameter value can be set using the existing procedure DBMS_DATAPUMP.SET_PARAMETER. The following disables all closure checks and enables test mode:
SYS.DBMS_DATAPUMP.SET_PARAMETER(jobhdl, 'TTS_CLOSURE_CHECK', DBMS_DATAPUMP.KU$_TTS_CLOSURE_CHECK_OFF+DBMS_DATAPUMP.KU$_TTS_CLOSURE_CHECK_TEST);
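
For context, a minimal PL/SQL sketch of note 5/ in use. The OPEN/STOP_JOB scaffolding here is illustrative only (a real job would also need ADD_FILE and START_JOB calls); the SET_PARAMETER call is the one documented above:

DECLARE
  jobhdl NUMBER;
BEGIN
  -- open a transportable-mode export job (illustrative; no files added here)
  jobhdl := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'TRANSPORTABLE');
  -- disable the closure check and enable test mode, per note 5/
  DBMS_DATAPUMP.SET_PARAMETER(jobhdl, 'TTS_CLOSURE_CHECK',
      DBMS_DATAPUMP.KU$_TTS_CLOSURE_CHECK_OFF + DBMS_DATAPUMP.KU$_TTS_CLOSURE_CHECK_TEST);
  -- ... ADD_FILE / START_JOB calls would follow in a real job ...
  DBMS_DATAPUMP.STOP_JOB(jobhdl);
END;
/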


Example:

– running a transportable operation for a read-write tablespace is allowed with TTS_CLOSURE_CHECK=test_mode:

SQL> select status, tablespace_name from dba_tablespaces where tablespace_name='USERS';

STATUS    TABLESPACE_NAME
--------- ------------------------------
ONLINE    USERS

$] expdp system/<password> directory=dptest dumpfile=test_tts.dmp transport_tablespaces=users TTS_CLOSURE_CHECK=test_mode

Export: Release 19.0.0.0.0 - Development on Thu Oct 11 02:55:00 2018
Version 19.1.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01":  system/******** directory=dptest dumpfile=test_tts.dmp transport_tablespaces=users TTS_CLOSURE_CHECK=test_mode
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
*****************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
<path>/test_tts.dmp
Dump file set is unusable. TEST_MODE requested.
*****************************************************************
Datafiles required for transportable tablespace USERS:
<OS_path>/t_user1.f
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Thu Oct 11 02:57:06 2018 elapsed 0 00:01:57

Note : The log file will contain the informational message: “Dump file set is unusable. TEST_MODE requested.”


==================================================


5. New values for TRANSPORTABLE DataPump Import parameter: KEEP_READ_ONLY and NO_BITMAP_REBUILD

The method used by DataPump to handle transportable tablespace imports changed starting with 12.1.

Before version 12, DataPump would always leave the tablespaces read-only, but there were two problems with this.

First, if the time zone file version of the export database did not match that of the import database, all tables with time zone data types in the transportable set would be dropped, because they could have incorrect values under the new time zone file.
Second, the way DataPump tracked free space in a transportable tablespace did not scale for large tablespaces (for some customers an export would take hours just to gather free-space information). To fix these performance issues, DataPump changed to rebuilding the bitmap of free space at import time.
However, some customers have processes that require the tablespace to remain read-only during and after the import. If the data files for the tablespace were read-only, DataPump would not attempt the update, but that is an inconvenience for many customers. For other customers, the time spent rebuilding the tablespace bitmaps was too long, and they would rather skip it or do it at another time.

The TRANSPORTABLE parameter for impdp has two new options:

KEEP_READ_ONLY – this option never changes the tablespace to read/write.

The bitmaps for free space are not updated and, if the export and import time zone files are different versions, any table with time zone columns in the transportable tablespace is dropped. Note that the free-space information is only used when the database needs to create a new storage segment in the tablespace, which cannot happen until the tablespace is set to read/write. If the bitmaps are not updated, then space used by tables or indexes that were not part of the transportable operation will not be reclaimed.

NO_BITMAP_REBUILD – the data files are set to read/write so that time zone data can be updated, but the bitmap of free space is not rebuilt. Again, this means that unused space in the tablespace will not be reclaimed until the bitmaps are rebuilt.



The following Data Pump Import utility (impdp) command-line parameter is modified in 19c:

TRANSPORTABLE

Purpose : Specifies whether the transportable option should be used during a table mode import or a full mode import. KEEP_READ_ONLY and NO_BITMAP_REBUILD are added as TRANSPORTABLE parameter values and are valid for transportable mode import operations.

Syntax and Description 

TRANSPORTABLE = [ NEVER |  ALWAYS | KEEP_READ_ONLY | NO_BITMAP_REBUILD ]

KEEP_READ_ONLY – Indicates that the transportable data files are to remain in READ ONLY mode. KEEP_READ_ONLY allows the data files to be plugged into additional compatible databases for read-only access.

NO_BITMAP_REBUILD – Indicates that the transportable data file header bitmaps are not to be rebuilt. Not reclaiming unused data segments reduces the time of the import operation. Bitmaps can be rebuilt later using DBMS_SPACE_ADMIN.TABLESPACE_REBUILD_BITMAPS.

Notes:
1/ When TRANSPORTABLE=KEEP_READ_ONLY is specified, because the data files remain in READ ONLY mode, tables containing TSTZ column data cannot be updated and are dropped from the import. In addition, data file bitmaps cannot be rebuilt.
2/ When TRANSPORTABLE=NO_BITMAP_REBUILD is specified, the data files may be placed into READ WRITE mode, if necessary, in order to update tables containing TSTZ column data.
3/ The TRANSPORTABLE parameter value can be set using the existing procedure DBMS_DATAPUMP.SET_PARAMETER. Setting TRANSPORTABLE with string values is limited to 'NEVER' or 'ALWAYS':
SYS.DBMS_DATAPUMP.SET_PARAMETER(jobhdl, 'TRANSPORTABLE', 'ALWAYS');
SYS.DBMS_DATAPUMP.SET_PARAMETER(jobhdl, 'TRANSPORTABLE', 'NEVER');
The new TRANSPORTABLE options are set using the new numeric bitmask values:
DBMS_DATAPUMP.KU$_TTS_NEVER is the value 1
DBMS_DATAPUMP.KU$_TTS_ALWAYS is the value 2
DBMS_DATAPUMP.KU$_TTS_KEEP_READ_ONLY is the value 4
DBMS_DATAPUMP.KU$_TTS_NO_BITMAP_REBUILD is the value 8
The following enables KEEP_READ_ONLY for a transportable job:
SYS.DBMS_DATAPUMP.SET_PARAMETER(jobhdl, 'TRANSPORTABLE', DBMS_DATAPUMP.KU$_TTS_ALWAYS+DBMS_DATAPUMP.KU$_TTS_KEEP_READ_ONLY);
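
If NO_BITMAP_REBUILD (or KEEP_READ_ONLY) was used, the bitmaps can be rebuilt later, once the tablespace is read/write, with the procedure mentioned above (the tablespace name here is just an example):

SQL> exec dbms_space_admin.tablespace_rebuild_bitmaps('TEST1');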

 

Example 1:
$] impdp system/<password> directory=dptest dumpfile=test_tctab transport_datafiles=<OS_path>/tc_tctab1.f TRANSPORTABLE=KEEP_READ_ONLY




Import: Release 19.0.0.0.0 - Development on Fri Oct 12 03:29:23 2018
Version 19.1.0.0.0

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** directory=dptest dumpfile=test_tctab transport_datafiles=<OS_path>/tc_tctab1.f TRANSPORTABLE=KEEP_READ_ONLY
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Fri Oct 12 03:29:59 2018 elapsed 0 00:00:34





Example 2:
$] impdp system/<password> directory=dptest dumpfile=test_tctab transport_datafiles=<OS_path>/tc_tctab1.f TRANSPORTABLE=NO_BITMAP_REBUILD

Import: Release 19.0.0.0.0 - Development on Fri Oct 12 05:04:31 2018
Version 19.1.0.0.0

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Development
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01":  system/******** directory=dptest dumpfile=test_tctab transport_datafiles=<OS_path>/tc_tctab1.f TRANSPORTABLE=NO_BITMAP_REBUILD
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Fri Oct 12 05:05:08 2018 elapsed 0 00:00:33








Detach Home from Oracle Inventory - 19c Oracle Database

Go to the following location:

/u01/app/oracle/product/19.0.0/dbhome/oui/bin




[oracle@rac1.com- bin]$ ./runInstaller -silent -detachhome -invPtrLoc /etc/oraInst.loc oracle_home="/u01/app/oracle/product/19.0.0/dbhome" ORACLE_HOME_NAME="OraDB19Home1"

Starting Oracle Universal Installer...



Checking swap space: must be greater than 500 MB.   Actual 699 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

You can find the log of this install session at:

 /u01/app/oraInventory/logs/DetachHome2022-01-23_05-40-07PM.log

'DetachHome' was successful.


[oracle@rac1.com- bin]$ pwd

/u01/app/oracle/product/19.0.0/dbhome/oui/bin

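The reverse operation, re-attaching the home to the inventory, uses the corresponding -attachHome option of runInstaller (shown here with the same paths and home name as above; verify against your environment before running):

[oracle@rac1.com- bin]$ ./runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc oracle_home="/u01/app/oracle/product/19.0.0/dbhome" ORACLE_HOME_NAME="OraDB19Home1"
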

12c: How to Perform Cross-Platform Database Transport to a Different Endian Platform with RMAN Backup Sets (Doc ID 2013271.1)

 In this Document

Goal
Solution
 Step 1: Connect to the source database (AIX) from which you need to transport tablespaces as TARGET.
 Step 2:  Place the tablespaces to be transported in read-only mode.
 Step 3:  Back up the tablespace on the source database using the BACKUP command with the TO PLATFORM or FOR TRANSPORT clause.
 Step 4: Move the backupsets to Destination server
 Step 5: Connect to destination database (Linux x86 64-bit)

 Step 6: Restore the tablespace from backupset.
 Step 7: Alter the tablespace to read write
References

APPLIES TO:

Oracle Database - Enterprise Edition - Version 12.1.0.1 and later
Oracle Database Cloud Schema Service - Version N/A and later
Gen 1 Exadata Cloud at Customer (Oracle Exadata Database Cloud Machine) - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
Information in this document applies to any platform.

GOAL

This document explains the new 12c feature: cross-platform data transport using RMAN backup sets across different endian platforms.

RMAN can transport databases, data files, and tablespaces across platforms using backup sets. Performing cross-platform data transport with backup sets enables you to use block compression to reduce the size of backups. This improves backup performance and reduces the time taken to transport backups over the network.


A cross-platform backup is an RMAN backup that can be restored on a destination platform that is different from the source platform.

SOLUTION

NOTE: In the images and/or the document content below, the user information and environment data used represents fictitious data from the Oracle sample schema(s), Public Documentation delivered with an Oracle database product or other training material. Any similarity to actual environments, actual persons, living or dead, is purely coincidental and not intended in any manner.

For the purposes of this document, the following fictitious environment is used as an example to describe the procedure:

Source database: AIX-Based Systems (64-bit) (Big Endian format)
Destination database: Linux x86 64-bit (Little Endian format)
User RMAN_TEST is granted SYSBACKUP privilege in both source and destination database
Tablespace been transported is TEST1
Source database name is SOURCE
Destination database name is DEST

*******************


Prerequisites


  • To perform cross-platform data transport using backup sets, the version of the destination database must be Oracle Database 12c Release 1 (12.1) or later.
  • The COMPATIBLE parameter in the server parameter file of the source database and the destination database must be set to 12.0.0 or higher.
  • The tablespaces to be transported are in read-only mode, unless the ALLOW INCONSISTENT clause is used in the BACKUP command.
  • The tablespaces to be transported are self-contained.
  • Execute the DBMS_TTS.TRANSPORT_SET_CHECK procedure to check for dependencies. If the TRANSPORT_SET_VIOLATIONS view contains rows corresponding to the specified tablespaces, then you must resolve the dependencies before creating the cross-platform backup. 
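
For the closure check in the last prerequisite, a minimal example (the tablespace name is illustrative):

SQL> exec dbms_tts.transport_set_check('TEST1', TRUE);
SQL> select * from transport_set_violations;

If the second query returns no rows, the tablespace set is self-contained.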

Terms you need to be aware of

Foreign Data File

Data files that do not belong to the destination database are called foreign data files. These data files are being plugged in to the destination database as part of a data transfer to the destination database. In the source database, this data file is identified by its original data file number.

Foreign Tablespace

A foreign tablespace is a set of foreign data files that comprise a tablespace in the source database. These foreign data files do not belong to the destination database, but are being transported into the destination database and are identified by the original tablespace name in the source database.

Foreign Data File Copy

A foreign data file copy is a data file that was restored from a cross-platform backup. It cannot be directly plugged in to the destination database because it is inconsistent. You must apply a cross-platform incremental backup to this data file and recover it before you can plug it in to the destination database.

Data Pump Destination

A Data Pump destination is a location on the disk of the server host of the destination database on which the Data Pump export dump file and the Data Pump log files are stored.

 

NOTE: If a tablespace that is part of the TTS/migration contains an object owned by a schema which does not exist in the destination database, the following error may occur:

ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'DMSTEAM' does not exist
Failing sql is:GRANT DELETE ON "ADVTVS"."FT_ENQUIRY_PLAN" TO "DMSTEAM"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'DMSTEAM' does not exist

To avoid this issue, identify all such schemas on the source and create them on the destination, if necessary, before executing the plugin process. The following query may be of help:

select distinct OWNER from dba_segments where tablespace_name in ('<tablespace name which is part of TTS/migration>');

Steps to Transport Read-Only Tablespaces to a Different Platform Using Backup Sets

Step 1: Connect to the source database (AIX) from which you need to transport tablespaces as TARGET.

$ rman

Recovery Manager: Release 12.1.0.2.0 - Production on Thu May 28 14:38:06 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

RMAN> Connect target "rman_test@source AS SYSBACKUP";

Enter the password for the RMAN_TEST user in source when prompted.

Step 2:  Place the tablespaces to be transported in read-only mode.

RMAN> alter tablespace test1 read only;

using target database control file instead of recovery catalog
Statement processed

Step 3:  Back up the tablespace on the source database using the BACKUP command with the TO PLATFORM or FOR TRANSPORT clause.

Use the DATAPUMP clause to indicate that an export dump file for the tablespaces must be created:

RMAN> BACKUP
TO PLATFORM 'Linux x86 64-bit'
FORMAT '/tmp/xplat_backups/trans_ts.bck'
DATAPUMP FORMAT '/tmp/xplat_backups/trans_ts_dmp.bck'
TABLESPACE test1;


This command results in the following output:

RMAN>
Starting backup at 22-MAY-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=184 device type=DISK
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces...
   EXPDP> Starting "SYSBACKUP"."TRANSPORT_EXP_SOURCE_asbF":  
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Master table "SYSBACKUP"."TRANSPORT_EXP_SOURCE_asbF" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYSBACKUP.TRANSPORT_EXP_SOURCE_asbF is:
   EXPDP>   /<oracle_home path>/dbs/backup_tts_SOURCE_93693.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace TEST1:
   EXPDP>   /<path>/oradata/SOURCE/test01.dbf
   EXPDP> Job "SYSBACKUP"."TRANSPORT_EXP_SOURCE_asbF" successfully completed at Fri May 22 11:54:53 2015 elapsed 0 00:02:06
Export completed

channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/<path>/oradata/SOURCE/test01.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAY-15
channel ORA_DISK_1: finished piece 1 at 22-MAY-15
piece handle=/<path>/oradata/trans_ts.bck tag=TAG20150522T115158 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting full datafile backup set
input Data Pump dump file=/<oracle_home path>/dbs/backup_tts_SOURCE_93693.dmp
channel ORA_DISK_1: starting piece 1 at 22-MAY-15
channel ORA_DISK_1: finished piece 1 at 22-MAY-15
piece handle=/<path>/oradata/trans_ts_dmp.bck tag=TAG20150522T115158 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 22-MAY-15

When the 'TO PLATFORM' clause is used, conversion to the endian format of the destination database is performed on the source database.

Exit this RMAN session connected to the source.

Please note: for Step 3, the export file is created in $ORACLE_HOME/dbs until the export completes, and is then moved to the location specified by DATAPUMP FORMAT.

Ensure this location has enough space.

Enhancement request 25909942 - RMAN'S BACKUP TO PLATFORM SHOULD NOT CREATE FILE IN $ORACLE_HOME/DBS - has been logged for this issue, so that the file is written directly to the DATAPUMP FORMAT location.
Currently it is still in 'Initial review' status.

Step 4: Move the backupsets to Destination server


Move the backup sets created by the BACKUP command and the Data Pump export dump file to the destination host. An operating system utility is used to move
the backup sets from the source host to the destination host.

Step 5: Connect to destination database (Linux x86 64-bit)

Connect to the destination database, into which the tablespaces must be transported, as TARGET. The destination database must be open in read/write mode.

In this example, the RMAN_TEST user is granted the SYSBACKUP privilege in the destination database:

$ rman

Recovery Manager: Release 12.1.0.2.0 - Production on Thu May 28 14:38:06 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

RMAN> connect target "rman_test@dest AS SYSBACKUP";

Enter the password for the RMAN_TEST user in destination when prompted.

Step 6: Restore the tablespace from backupset.

Restore the datafiles from the backup sets transported from the source database using the RESTORE command. Use the DUMP FILE clause to import the export dump file containing the tablespace metadata and plug the tablespaces in to the destination database.

In this example the backup piece and dump file were copied to /u01/app/oracle/oradata/test/.

If db_create_file_dest is set in the destination database, execute:

RMAN> RESTORE
FOREIGN TABLESPACE TEST1 to NEW
FROM BACKUPSET '/u01/app/oracle/oradata/test/trans_ts.bck'
DUMP FILE FROM BACKUPSET '/u01/app/oracle/oradata/test/trans_ts_dmp.bck';

NOTE: The use of "to NEW" restores the datafiles in the db_create_file_dest specification.

If using multiple backupsets, specify each like:

RMAN> RESTORE
ALL FOREIGN DATAFILES TO NEW
FROM BACKUPSET '/mnt/staging_clt_target/tts/all_3tbs_uhrdm8f5_1_1.bck' BACKUPSET '/mnt/staging_clt_target/tts/all_3tbs_uirdm8f6_1_1.bck'
DUMP FILE FROM BACKUPSET '/mnt/staging_clt_target/tts/all_3tbs_uhrdm8f5_1_1.bck' ;

If using multiple backup pieces for each backupset, specify the backup pieces names within each backupset like:

RMAN> RESTORE
ALL FOREIGN DATAFILES TO NEW
FROM BACKUPSET '/mnt/staging_clt_target/tts/RTODP-a0r8utgu_1_1','/mnt/staging_clt_target/tts/RTODP-a0r8utgu_2_1',
BACKUPSET '/mnt/staging_clt_target/tts/RTODP-a1r8utgu_1_1','/mnt/staging_clt_target/tts/RTODP-a1r8utgu_2_1'
DUMP FILE FROM BACKUPSET '/mnt/staging_clt_target/tts/RTODP_metadata_1.bck';

If db_create_file_dest is not set you can specify the location of the datafile using:

RMAN> RESTORE
FOREIGN TABLESPACE TEST1 format '/u01/app/oracle/oradata/%U'
FROM BACKUPSET '/u01/app/oracle/oradata/dest/trans_ts.bck'
DUMP FILE FROM BACKUPSET '/u01/app/oracle/oradata/dest/trmp.bck';

The command results in the following output:

Starting restore at 22-MAY-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring all files in foreign tablespace TEST1
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/oradata/dest/trans_ts.bck
channel ORA_DISK_1: restoring foreign file 2 to /u01/app/oracle/oradata/dest/data_D-dest_I-2390668590_TS-TEST1_FNO-2_jvq7is5b
channel ORA_DISK_1: foreign piece handle=/u01/app/oracle/oradata/dest/trans_ts.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring Data Pump dump file to /u01/app/oracle/product/c12101/dbs/backup_tts_dest_31167.dmp
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/oradata/dest/trans_ts_dmp.bck
channel ORA_DISK_1: foreign piece handle=/u01/app/oracle/oradata/dest/trans_ts_dmp.bck
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

Performing import of metadata...
IMPDP> Master table "SYSBACKUP"."TSPITR_IMP_dest_jykD" successfully loaded/unloaded
IMPDP> import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
export done in US7ASCII character set and AL16UTF16 NCHAR character set
IMPDP> Starting "SYSBACKUP"."TSPITR_IMP_dest_jykD":
IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
IMPDP> Job "SYSBACKUP"."TSPITR_IMP_dest_jykD" successfully completed at Fri May 22 12:19:08 2015 elapsed 0 00:00:05
Import completed

Finished restore at 22-MAY-15

Exit this RMAN session connected to the destination.

Step 7: Alter the tablespace to read write

SQL> select status,tablespace_name from dba_tablespaces;

STATUS TABLESPACE_NAME
--------- ------------------------------
ONLINE SYSTEM
ONLINE SYSAUX
ONLINE UNDOTBS1
ONLINE TEMP
ONLINE USERS
READ ONLY TEST1

SQL> alter tablespace test1 read write;

Tablespace altered.

SQL> select status,tablespace_name from dba_tablespaces;

STATUS TABLESPACE_NAME
--------- ------------------------------
ONLINE SYSTEM
ONLINE SYSAUX
ONLINE UNDOTBS1
ONLINE TEMP
ONLINE USERS
ONLINE TEST1

Additional information (difference between TO PLATFORM and FOR TRANSPORT)

You can use the BACKUP command to create backup sets for cross-platform data transportation. To indicate that a backup is for cross-platform transportation, use either the FOR TRANSPORT or TO PLATFORM clause.

When you use TO PLATFORM, the endian format conversion is performed on the source database. Therefore, you can restore this cross-platform backup only on the specified platform. If you omit TO PLATFORM, you can restore this cross-platform backup on any supported platform. The target platform specified by the TO PLATFORM clause must be a supported platform for cross-platform transport; the V$TRANSPORTABLE_PLATFORM view contains the list of supported platforms.

When you use FOR TRANSPORT, the backup set that is created can be transported to any destination database. If the destination database uses an endian format that is different from that of the source database, the required endian format conversion is performed on the destination database (this applies to tablespace-level transport). You can transport an entire database only if the source and destination use the same endian format. The benefit of this method is that the processing overhead of the conversion operation is offloaded to the destination database.

REFERENCES

NOTE:2005729.1 - 12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup
