
Thursday 2 September 2021

Systematic Ways Of Testing Oracle Database For Memory, Space And CPU Processing

In this post, we shall discuss systematic ways of testing the database with respect to Memory, Space and CPU processing. In the next and final part of the series, we shall continue our focus on testing with Oracle Real Application Testing.

1) MEMORY Testing

Let’s begin with testing your memory requirements. To tune memory, we first need to familiarize ourselves with Oracle's memory structures. We can classify them into three major structures, namely:

a) System or Shared Global Area (SGA) – This is a shared memory segment which every Oracle process will access.

b) Process or Program Global Area (PGA) – This is a private memory which is used by an Oracle process.

c) User Global Area (UGA) – This is memory linked to a user session. Depending on the connection mechanism used, the UGA is allocated within the PGA (with a dedicated server) or within the SGA (with a shared server).

To test application performance and arrive at optimal memory sizing, we need to tune these memory segments, which in turn requires understanding how much memory the application needs to function properly.

A simple representation of the memory architecture is shown below.

[Figure: Oracle memory architecture, showing the SGA, PGA and UGA]
To check your current SGA values, log in to the SQL*Plus command prompt and run:

SQL> show sga
 
Total System Global Area 521936896 bytes
Fixed Size                 2177328 bytes
Variable Size             465569488 bytes
Database Buffers           46137344 bytes
Redo Buffers               8052736 bytes

The components shown above make up the total SGA size. Suppose we have about 8 GB of RAM (physical memory) on the server to start with: set the SGA size to half of the available memory, i.e. 4 GB, and test whether that SGA size suits your application by running the application with users logged in and executing SQL queries against this database.
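As a minimal sketch, on an instance using Automatic Shared Memory Management and an spfile, a starting SGA size could be applied like this (a restart is needed if the new target exceeds sga_max_size):

SQL> -- let Oracle distribute 4 GB among the SGA components
SQL> ALTER SYSTEM SET sga_target = 4G SCOPE = SPFILE;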

Another important factor to consider when sizing memory is paging. When the operating system doesn't have enough physical memory available, it writes less recently used pages out to disk so that new pages can be loaded into memory. A high rate of paging usually indicates performance degradation.

A command commonly used to check paging activity is the “sar” command (the -g option shown below is from Solaris; on Linux, the equivalent report is sar -B).

$ sar -g
 
00:00:00 pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
01:00:00     0.00     0.00     0.00     0.00     0.00

This is an idle system, and hence no paging has been observed. If you observe consistently high values for pgfree/s and pgscan/s, review your memory configuration again, as this indicates that the memory configured isn't sufficient to meet the application's needs.
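You can also cross-check memory from inside the database. A sketch using V$OSSTAT (the exact statistics exposed vary by platform and version):

SQL> -- physical and free memory as reported by the OS to the instance
SQL> SELECT stat_name, ROUND(value/1024/1024) AS mb
FROM v$osstat
WHERE stat_name IN ('PHYSICAL_MEMORY_BYTES', 'FREE_MEMORY_BYTES');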

To tune the Program Global Area (PGA), we need to understand the complexity of the SQL queries that users execute. The PGA contains work areas in which in-memory sorting is performed when sort-based operations such as GROUP BY, ORDER BY, hash joins and bitmap merges are used.

Usually, the PGA doesn't need an enormous amount of memory; a common starting point is to reserve about 20% of the SGA size for it. We also need to consider whether we are running an OLTP (Online Transaction Processing) system or a DSS (Decision Support System).

For an OLTP system, considering the example stated before (8 GB total physical memory, 4 GB dedicated to the SGA), PGA = 20% of SGA = 0.8 GB.

For a DSS system, which usually runs more memory-consuming queries, we could allocate 50% of the SGA, so the PGA would now be 50% of SGA = 2 GB.

The above values aren't realistic examples; we have used them only to get a basic understanding of the calculations. Many production systems have 100 to 500 GB of SGA allocated, and we even see systems with more than 1 TB of physical memory configured.
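To apply and then validate a PGA budget like the ones computed above, a minimal sketch (assuming automatic PGA memory management) is to set the target and watch V$PGASTAT while the test workload runs:

SQL> -- OLTP example above: roughly 0.8 GB across all sessions
SQL> ALTER SYSTEM SET pga_aggregate_target = 800M;
 
SQL> -- a low 'over allocation count' and a high 'cache hit percentage'
SQL> -- suggest the target is adequate for the workload
SQL> SELECT name, value FROM v$pgastat
WHERE name IN ('total PGA allocated', 'over allocation count', 'cache hit percentage');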

2) CPU Processing Testing

Any SQL statement run against the database that performs significant work, such as sorting, querying or writing data, will consume some CPU cycles on your processors. How much is reasonable, and how do we test whether we have enough processing power to fulfil our application's needs?

How do we determine if there is a bottleneck on CPU?

Most of the time, having your CPU operating at 100% is not a bad sign; it only means that your processors are executing at their maximum limit, which is usually what we desire. After all, we have incurred huge expenditure on them.

A SQL statement usually goes through three phases when the Oracle instance processes it:

  1. Parse phase
  2. Execute phase
  3. Fetch phase

Parsing performs two main checks. The syntax check verifies that the SQL follows all the rules of the language and is a valid statement that the Oracle database engine can comprehend. The semantic check verifies that the objects referenced in the SQL are valid and that the user running the statement has the privileges to access them. Parsing is done to make sure that the statement can execute properly.

The execute phase comprises building an execution plan and traversing each of its steps to access the desired objects.

In the fetch phase, rows are fetched from database blocks based on the execution plan computed in the previous step.
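Each phase leaves a trace in V$SYSSTAT. As a quick sketch, the following standard statistics give a feel for how much parsing (and in particular hard parsing) the instance is doing relative to executions:

SQL> SELECT name, value FROM v$sysstat
WHERE name IN ('parse count (total)', 'parse count (hard)', 'execute count');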

If you need to understand more about SQL statement processing, you can refer to the Oracle documentation, which is freely available.

Each of these phases consumes some amount of CPU. We usually observe high CPU utilization during the parse phase. We have a couple of performance views which examine the processes and sessions consuming CPU:

V$SYSSTAT shows CPU usage across all Oracle user sessions
V$SESSTAT shows CPU usage per Oracle user session

For example:

SQL> select (a.value / b.value)*100 "% CPU for parsing"
from V$SYSSTAT a, V$SYSSTAT b
where a.name = 'parse time cpu'
and b.name = 'CPU used by this session';
 
% CPU for parsing
-----------------
7.70263467

We have just about 7.7% of the CPU used for parsing our SQL statements. This figure is only meaningful if we test the application with users actually running SQL queries against it. If the % CPU for parsing is high, check whether literals are being used in the SQL queries; literals cause hard parses, which are costly operations that can consume high amounts of CPU. You could ask your application developers to change their code to use bind variables, so that available cursors are reused and hard parses are avoided.
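One rough way to spot literal-heavy SQL is to group statements in V$SQLAREA by their leading text: statements that differ only in literal values show up as many near-identical entries. A sketch (the 40-character prefix and the threshold of 10 are arbitrary choices):

SQL> SELECT SUBSTR(sql_text, 1, 40) AS sql_prefix, COUNT(*) AS copies
FROM v$sqlarea
GROUP BY SUBSTR(sql_text, 1, 40)
HAVING COUNT(*) > 10
ORDER BY copies DESC;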

To know the amount of CPU being used by the users accessing the database via their sessions, we can use the following query:

SQL> SELECT n.username, s.sid, s.value FROM
v$sesstat s,v$statname t, v$session n WHERE
s.statistic# = t.statistic# AND n.sid = s.sid
AND t.name='CPU used by this session' ORDER BY s.value desc;
 
USERNAME                             SID     VALUE
-------------------------      ---------- ----------
SYS                                 191       125
                                    190        64
                                    134        50
                                     66        45
                                    192         4
                                      4         3
                                    133         3
                                    126         2
                                     72         1
                                     67         0
                                    125         0

We have just one user, SYS, logged in; the rest are background processes, which are hardly consuming any CPU.

From the OS perspective, among the many commands at our disposal, we shall use the “sar” command to determine CPU usage. If you need more details about this command, you can issue $ man sar on any Unix console (OS command prompt) or search the Linux documentation.

$ sar -u 10 5
 
---- Reports CPU utilization every 10 seconds, five times.
Linux 3.8.13-26.2.1.el6uek.x86_64 (abcdefg)   08/07/2014     _x86_64_       (6 CPU)
 
05:17:58 PM     CPU     %user    %nice    %system  %iowait  %steal   %idle
05:18:08 PM     all     0.65     0.00     0.53     0.00     0.03     98.78
05:18:18 PM     all     0.00     0.00     0.00     0.00     0.00     100.00
05:18:28 PM     all     0.02     0.00     0.00     0.00     0.00     99.98
05:18:38 PM     all     0.02     0.00     0.02     0.00     0.00     99.97
05:18:48 PM     all     0.02     0.00     0.00     0.00     0.00     99.98
Average:        all     0.14     0.00     0.11     0.00     0.01     99.74

We can see that CPU consumption is nearly zero: this is a test system with hardly anything running at the moment, so it is almost idle.

Having tested our application with respect to memory and CPU, we now need to test the amount of space the database would require to fulfil user needs.


3) SPACE/Storage Testing

To test storage needs, we first need to understand the storage structures in Oracle. We have logical database storage structures as well as physical database storage structures.

By physical structures, we mean structures which are visible at the operating system level, such as data files, control files and online redo log files. We shall discuss data files in more detail in this post.

To explore these structures further, you can refer to the Oracle documentation.

By logical structures, we mean structures which are recognized and handled only by the Oracle database; nothing outside Oracle knows about their existence. These include data blocks, extents, segments and tablespaces.

To explore these structures further, you can refer to the Oracle documentation.


A high-level overview: data files (physical structures) belong to a logical structure, the tablespace. A single tablespace can have several data files, so to determine the total size of an existing, fully functional database, you sum up all the tablespaces, which effectively means summing up all the data files that belong to the Oracle database on the operating system.
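To see this mapping for yourself, a simple sketch against DBA_DATA_FILES:

SQL> SELECT tablespace_name, COUNT(*) AS datafiles,
ROUND(SUM(bytes)/1024/1024) AS mb
FROM dba_data_files
GROUP BY tablespace_name;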

Each application user would have his or her own schema to work with, and these schemas are essentially collections of segments which are part of a tablespace.
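To see how much space each schema currently occupies, a sketch against DBA_SEGMENTS:

SQL> SELECT owner, ROUND(SUM(bytes)/1024/1024) AS mb
FROM dba_segments
GROUP BY owner
ORDER BY mb DESC;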

With this basic understanding of the logical and physical structures, to test an application's needs we need to know how many users the database is intended to support, how many of those users will be inserting data, and the volume, frequency and duration of that activity. As we know, in any database system we can also truncate and delete data, not just add it. All such transactions contribute to data growth in the organization's database. Growth also depends on the types of applications that connect to the database.

For example, consider a banking application where thousands of users perform transactions and every piece of information about each banking user has to be stored in the database; we would require an enormously large storage system to handle all of this cumulative information.

If we have developed an application and would like it to be deployed on a production server, we first need to know how many users the application will support, how many transactions each of those users will perform, and how much data we will deal with. Since we first test the application on a test or UAT instance, we have a couple of procedures that help monitor the growth trend and determine how big an object has eventually grown.
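One simple, do-it-yourself way to track growth on a test instance is to snapshot DBA_SEGMENTS into a tracking table at regular intervals and compare snapshots over time. A minimal sketch (the table name SEG_GROWTH_HIST is our own choice):

SQL> -- one-time setup: empty history table with a snapshot timestamp
SQL> CREATE TABLE seg_growth_hist AS
SELECT SYSDATE AS snap_time, owner, segment_name, segment_type, bytes
FROM dba_segments WHERE 1 = 0;
 
SQL> -- run periodically (e.g. daily) while the test workload executes
SQL> INSERT INTO seg_growth_hist
SELECT SYSDATE, owner, segment_name, segment_type, bytes FROM dba_segments;
SQL> COMMIT;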

We have dictionary views which help you determine the size of objects (logical) and of physical files such as data files, which hold user data physically on the operating system.

DBA_SEGMENTS, DBA_EXTENTS, DBA_TABLESPACES – To obtain information on logical structures
DBA_DATA_FILES – To obtain information on the size of data files (physical structures).

For example, to know the size of our tablespaces, we execute:

SQL> select tablespace_name "Tablespace", sum(bytes)/1024/1024 "Size in MB"
from dba_data_files group by tablespace_name;
 
Tablespace                     Size in MB
-------------------------      ----------
UNDOTBS1                         100
SYSAUX                           680
USERS                             45
SYSTEM                           830
EXAMPLE                          100

Summing up each of these tablespaces gives you the total size of your database.
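Or, as a single sketch directly against DBA_DATA_FILES (note that temp files, visible in DBA_TEMP_FILES, are excluded here):

SQL> SELECT ROUND(SUM(bytes)/1024/1024/1024, 2) AS total_size_gb
FROM dba_data_files;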

For the best application performance, there are several storage products and engineered storage solutions to choose from, such as NAS, SAN, flash and solid-state devices.

Conclusion:

To summarize, we have dealt with testing our application on the robust Oracle database system with respect to the memory, CPU and storage constraints that application developers, architects and administrators have to consolidate prior to deploying to a production environment.



