Tuesday, March 3, 2009

Sql Tuning advisor

This article discusses the new features that automate the tuning of SQL statements in Oracle 10g:

Overview

SQL Tuning Advisor

Managing SQL Profiles

SQL Tuning Sets

Useful Views

Overview

In its normal mode the query optimizer needs to make decisions about execution plans in a very short time. As a result it may not always be able to obtain enough information to make the best decision. Oracle 10g allows the optimizer to run in tuning mode where it can gather additional information and make recommendations about how specific statements can be tuned further. This process may take several minutes for a single statement so it is intended to be used on high-load resource-intensive statements.

In tuning mode the optimizer performs the following analysis:

Statistics Analysis - The optimizer recommends the gathering of statistics on objects with missing or stale statistics. Additional statistics for these objects are stored in an SQL profile.

SQL Profiling - The optimizer may be able to improve performance by gathering additional statistics and altering session specific parameters such as the OPTIMIZER_MODE. If such improvements are possible the information is stored in an SQL profile. If accepted, this information can then be used by the optimizer when running in normal mode. Unlike a stored outline, which fixes the execution plan, an SQL profile may still be of benefit when the contents of the table alter drastically. Even so, it's sensible to update profiles periodically. SQL profiling is not performed when the tuning optimizer is run in limited mode.

Access Path Analysis - The optimizer investigates the effect of new or modified indexes on the access path. Its index recommendations relate to a specific statement, so where necessary it will also suggest the use of the SQL Access Advisor to check the impact of these indexes on a representative SQL workload.

SQL Structure Analysis - The optimizer suggests alternatives for SQL statements that contain structures that may impact on performance. The implementation of these suggestions requires human intervention to check their validity.

The automatic SQL tuning features are accessible from Enterprise Manager on the "Advisor Central" page or from PL/SQL using the DBMS_SQLTUNE package. This article will focus on the PL/SQL API, as the Enterprise Manager interface is reasonably intuitive.

SQL Tuning Advisor

In order to access the SQL tuning advisor API a user must be granted the ADVISOR privilege:

CONN sys/password AS SYSDBA
GRANT ADVISOR TO scott;
CONN scott/tiger

The first step when using the SQL tuning advisor is to create a new tuning task using the CREATE_TUNING_TASK function. The statements to be analyzed can be retrieved from the Automatic Workload Repository (AWR), the cursor cache, an SQL tuning set or specified manually:

SET SERVEROUTPUT ON

-- Tuning task created for a specific statement from the AWR.
DECLARE
l_sql_tune_task_id VARCHAR2(100);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
begin_snap => 764,
end_snap => 938,
sql_id => '19v5guvsgcd1v',
scope => DBMS_SQLTUNE.scope_comprehensive,
time_limit => 60,
task_name => '19v5guvsgcd1v_AWR_tuning_task',
description => 'Tuning task for statement 19v5guvsgcd1v in AWR.');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

-- Tuning task created for a specific statement from the cursor cache.
DECLARE
l_sql_tune_task_id VARCHAR2(100);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
sql_id => '19v5guvsgcd1v',
scope => DBMS_SQLTUNE.scope_comprehensive,
time_limit => 60,
task_name => '19v5guvsgcd1v_tuning_task',
description => 'Tuning task for statement 19v5guvsgcd1v.');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

-- Tuning task created from an SQL tuning set.
DECLARE
l_sql_tune_task_id VARCHAR2(100);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
sqlset_name => 'test_sql_tuning_set',
scope => DBMS_SQLTUNE.scope_comprehensive,
time_limit => 60,
task_name => 'sqlset_tuning_task',
description => 'Tuning task for an SQL tuning set.');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

-- Tuning task created for a manually specified statement.
DECLARE
l_sql VARCHAR2(500);
l_sql_tune_task_id VARCHAR2(100);
BEGIN
l_sql := 'SELECT e.*, d.* ' ||
'FROM emp e JOIN dept d ON e.deptno = d.deptno ' ||
'WHERE NVL(empno, ''0'') = :empno';

l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
sql_text => l_sql,
bind_list => sql_binds(anydata.ConvertNumber(100)),
user_name => 'scott',
scope => DBMS_SQLTUNE.scope_comprehensive,
time_limit => 60,
task_name => 'emp_dept_tuning_task',
description => 'Tuning task for an EMP to DEPT join query.');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

If the TASK_NAME parameter is specified, its value is returned as the SQL tune task identifier. If omitted, a system generated name like "TASK_1478" is returned. If the SCOPE parameter is set to scope_limited the SQL profiling analysis is omitted. The TIME_LIMIT parameter simply restricts the time the optimizer can spend compiling the recommendations.

The following examples will reference the last tuning task, as it has no external dependencies other than the SCOTT schema. The NVL in the SQL statement was put in to provoke a reaction from the optimizer. In addition we can delete the statistics from one of the tables to provoke it even more:

EXEC DBMS_STATS.delete_table_stats('SCOTT','EMP');

With the tuning task defined the next step is to execute it using the EXECUTE_TUNING_TASK procedure:

EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => 'emp_dept_tuning_task');

During the execution phase you may wish to pause and restart the task, cancel it or reset the task to allow it to be re-executed:

-- Interrupt and resume a tuning task.
EXEC DBMS_SQLTUNE.interrupt_tuning_task (task_name => 'emp_dept_tuning_task');
EXEC DBMS_SQLTUNE.resume_tuning_task (task_name => 'emp_dept_tuning_task');

-- Cancel a tuning task.
EXEC DBMS_SQLTUNE.cancel_tuning_task (task_name => 'emp_dept_tuning_task');

-- Reset a tuning task allowing it to be re-executed.
EXEC DBMS_SQLTUNE.reset_tuning_task (task_name => 'emp_dept_tuning_task');

The status of the tuning task can be monitored using the DBA_ADVISOR_LOG view:

SELECT task_name, status FROM dba_advisor_log WHERE owner = 'SCOTT';

TASK_NAME STATUS
------------------------------ -----------
emp_dept_tuning_task COMPLETED

1 row selected.

Once the tuning task has executed successfully the recommendations can be displayed using the REPORT_TUNING_TASK function:

SET LONG 10000;
SET PAGESIZE 1000
SET LINESIZE 200
SELECT DBMS_SQLTUNE.report_tuning_task('emp_dept_tuning_task') AS recommendations FROM dual;
SET PAGESIZE 24

In this case the output looks like this:

RECOMMENDATIONS
--------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name : emp_dept_tuning_task
Scope : COMPREHENSIVE
Time Limit(seconds): 60
Completion Status : COMPLETED
Started at : 05/06/2004 09:29:13
Completed at : 05/06/2004 09:29:15

-------------------------------------------------------------------------------
SQL ID : 0wrmfv2yvswx1
SQL Text: SELECT e.*, d.* FROM emp e JOIN dept d ON e.deptno = d.deptno
WHERE NVL(empno, '0') = :empno

-------------------------------------------------------------------------------
FINDINGS SECTION (2 findings)
-------------------------------------------------------------------------------

1- Statistics Finding
---------------------
Table "SCOTT"."EMP" and its indices were not analyzed.

Recommendation
--------------
Consider collecting optimizer statistics for this table and its indices.
execute dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname =>
'EMP', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE)

Rationale
---------
The optimizer requires up-to-date statistics for the table and its indices
in order to select a good execution plan.

2- Restructure SQL finding (see plan 1 in explain plans section)
----------------------------------------------------------------
The predicate NVL("E"."EMPNO",0)=:B1 used at line ID 2 of the execution plan
contains an expression on indexed column "EMPNO". This expression prevents
the optimizer from selecting indices on table "SCOTT"."EMP".

Recommendation
--------------
Rewrite the predicate into an equivalent form to take advantage of
indices. Alternatively, create a function-based index on the expression.

Rationale
---------
The optimizer is unable to use an index if the predicate is an inequality
condition or if there is an expression or an implicit data type conversion
on the indexed column.

-------------------------------------------------------------------------------
EXPLAIN PLANS SECTION
-------------------------------------------------------------------------------

1- Original
-----------
Plan hash value: 1863486531

----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 107 | 4 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 1 | 107 | 4 (0)| 00:00:01 |
| 2 | TABLE ACCESS FULL | EMP | 1 | 87 | 3 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 20 | 1 (0)| 00:00:01 |
| 4 | INDEX UNIQUE SCAN | PK_DEPT | 1 | | 0 (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Note
-----
- dynamic sampling used for this statement

-------------------------------------------------------------------------------


1 row selected.

Once the tuning session is over the tuning task can be dropped using the DROP_TUNING_TASK procedure:

BEGIN
DBMS_SQLTUNE.drop_tuning_task (task_name => '19v5guvsgcd1v_AWR_tuning_task');
DBMS_SQLTUNE.drop_tuning_task (task_name => '19v5guvsgcd1v_tuning_task');
DBMS_SQLTUNE.drop_tuning_task (task_name => 'sqlset_tuning_task');
DBMS_SQLTUNE.drop_tuning_task (task_name => 'emp_dept_tuning_task');
END;
/

Managing SQL Profiles

To manage SQL profiles a user needs the following privileges:

CONN sys/password AS SYSDBA
GRANT CREATE ANY SQL PROFILE TO scott;
GRANT DROP ANY SQL PROFILE TO scott;
GRANT ALTER ANY SQL PROFILE TO scott;
CONN scott/tiger

If the recommendations of the SQL tuning advisor include a suggested profile you can choose to accept it using the ACCEPT_SQL_PROFILE procedure:

SET SERVEROUTPUT ON
DECLARE
l_sql_tune_task_id VARCHAR2(20);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.accept_sql_profile (
task_name => 'emp_dept_tuning_task',
name => 'emp_dept_profile');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

The NAME parameter is used to specify a name for the profile. If it is not specified a system generated name will be used.

The STATUS, NAME, DESCRIPTION, and CATEGORY attributes of an SQL profile can be altered using the ALTER_SQL_PROFILE procedure:

BEGIN
DBMS_SQLTUNE.alter_sql_profile (
name => 'emp_dept_profile',
attribute_name => 'STATUS',
value => 'DISABLED');
END;
/

Existing SQL profiles can be dropped using the DROP_SQL_PROFILE procedure:

BEGIN
DBMS_SQLTUNE.drop_sql_profile (
name => 'emp_dept_profile',
ignore => TRUE);
END;
/

The IGNORE parameter prevents errors being reported if the specified profile does not exist.

SQL Tuning Sets

An SQL tuning set is a group of statements along with their execution context. These can be created automatically via Enterprise Manager or manually provided you have the necessary privileges:

CONN sys/password AS SYSDBA
GRANT ADMINISTER ANY SQL TUNING SET TO scott;
CONN scott/tiger

An SQL tuning set is created using the CREATE_SQLSET procedure:

BEGIN
DBMS_SQLTUNE.create_sqlset (
sqlset_name => 'test_sql_tuning_set',
description => 'A test SQL tuning set.');
END;
/

Statements are added to the set using the LOAD_SQLSET procedure which accepts a REF CURSOR of statements retrieved using one of the following pipelined functions:

SELECT_WORKLOAD_REPOSITORY - Retrieves statements from the Automatic Workload Repository (AWR).

SELECT_CURSOR_CACHE - Retrieves statements from the cursor cache.

SELECT_SQLSET - Retrieves statements from another SQL tuning set.

The following are examples of their usage:

-- Load the SQL set from the Automatic Workload Repository (AWR).
DECLARE
l_cursor DBMS_SQLTUNE.sqlset_cursor;
BEGIN
OPEN l_cursor FOR
SELECT VALUE(p)
FROM TABLE (DBMS_SQLTUNE.select_workload_repository (
765, -- begin_snap
766, -- end_snap
NULL, -- basic_filter
NULL, -- object_filter
NULL, -- ranking_measure1
NULL, -- ranking_measure2
NULL, -- ranking_measure3
NULL, -- result_percentage
10) -- result_limit
) p;

DBMS_SQLTUNE.load_sqlset (
sqlset_name => 'test_sql_tuning_set',
populate_cursor => l_cursor);
END;
/

-- Load the SQL set from the cursor cache.
DECLARE
l_cursor DBMS_SQLTUNE.sqlset_cursor;
BEGIN
OPEN l_cursor FOR
SELECT VALUE(p)
FROM TABLE (DBMS_SQLTUNE.select_cursor_cache (
NULL, -- basic_filter
NULL, -- object_filter
NULL, -- ranking_measure1
NULL, -- ranking_measure2
NULL, -- ranking_measure3
NULL, -- result_percentage
1) -- result_limit
) p;

DBMS_SQLTUNE.load_sqlset (
sqlset_name => 'test_sql_tuning_set',
populate_cursor => l_cursor);
END;
/

-- Create a new set and load it from the existing one.
DECLARE
l_cursor DBMS_SQLTUNE.sqlset_cursor;
BEGIN
DBMS_SQLTUNE.create_sqlset(
sqlset_name => 'test_sql_tuning_set_2',
description => 'Another test SQL tuning set.');

OPEN l_cursor FOR
SELECT VALUE(p)
FROM TABLE (DBMS_SQLTUNE.select_sqlset (
'test_sql_tuning_set', -- sqlset_name
NULL, -- basic_filter
NULL, -- object_filter
NULL, -- ranking_measure1
NULL, -- ranking_measure2
NULL, -- ranking_measure3
NULL, -- result_percentage
NULL) -- result_limit
) p;

DBMS_SQLTUNE.load_sqlset (
sqlset_name => 'test_sql_tuning_set_2',
populate_cursor => l_cursor);
END;
/

The contents of an SQL tuning set can be displayed using the SELECT_SQLSET function:

SELECT *
FROM TABLE(DBMS_SQLTUNE.select_sqlset ('test_sql_tuning_set'));

References can be added to a set to indicate its usage by a client using the ADD_SQLSET_REFERENCE function. The resulting reference ID can be used to remove it using the REMOVE_SQLSET_REFERENCE procedure:

DECLARE
l_ref_id NUMBER;
BEGIN
-- Add a reference to a set.
l_ref_id := DBMS_SQLTUNE.add_sqlset_reference (
sqlset_name => 'test_sql_tuning_set',
reference => 'Used for manual tuning by SQL*Plus.');

-- Delete the reference.
DBMS_SQLTUNE.remove_sqlset_reference (
sqlset_name => 'test_sql_tuning_set',
reference_id => l_ref_id);
END;
/

The UPDATE_SQLSET procedure is used to update specific string (MODULE and ACTION) and number (PRIORITY and PARSING_SCHEMA_ID) attributes of specific statements within a set:

BEGIN
DBMS_SQLTUNE.update_sqlset (
sqlset_name => 'test_sql_tuning_set',
sql_id => '19v5guvsgcd1v',
attribute_name => 'ACTION',
attribute_value => 'INSERT');
END;
/

The contents of a set can be trimmed down or deleted completely using the DELETE_SQLSET procedure:

BEGIN
-- Delete statements with less than 50 executions.
DBMS_SQLTUNE.delete_sqlset (
sqlset_name => 'test_sql_tuning_set',
basic_filter => 'executions < 50');

-- Delete all statements.
DBMS_SQLTUNE.delete_sqlset (
sqlset_name => 'test_sql_tuning_set');
END;
/

Tuning sets can be dropped using the DROP_SQLSET procedure:

BEGIN
DBMS_SQLTUNE.drop_sqlset (sqlset_name => 'test_sql_tuning_set');
DBMS_SQLTUNE.drop_sqlset (sqlset_name => 'test_sql_tuning_set_2');
END;
/

Useful Views

Useful views related to automatic SQL tuning include the following (an example query against the advisor views appears after this list):

DBA_ADVISOR_TASKS

DBA_ADVISOR_FINDINGS

DBA_ADVISOR_RECOMMENDATIONS

DBA_ADVISOR_RATIONALE

DBA_SQLTUNE_STATISTICS

DBA_SQLTUNE_BINDS

DBA_SQLTUNE_PLANS

DBA_SQLSET

DBA_SQLSET_BINDS

DBA_SQLSET_STATEMENTS

DBA_SQLSET_REFERENCES

DBA_SQL_PROFILES

V$SQL

V$SQLAREA

V$ACTIVE_SESSION_HISTORY
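As a quick illustration of the advisor views listed above, the following query (a sketch, assuming the emp_dept_tuning_task created earlier in this article still exists) pulls the findings recorded for a tuning task:

SELECT finding_id, type, impact, message
FROM dba_advisor_findings
WHERE task_name = 'emp_dept_tuning_task'
ORDER BY finding_id;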

Monday, March 2, 2009

OCR Manual Backup option available with 11g

11G: OCR Manual Backups

OCR is a file that manages cluster and Oracle Real Application Clusters (Oracle RAC) database configuration information. The OCR contains information about the cluster node list, instance-to-node mapping information, and information about Oracle Clusterware resource profiles for applications

This is a very important component, along with the Voting Disk, and loss of it will require you to re-install the Clusterware software.

Oracle 10g provides automatic backup of the OCR along with manual export of OCR files.

Starting from Oracle 11g, we can also take manual backups of the OCR. So in 11g, we can have the following backups:

1) Automatic backups—Oracle Clusterware automatically creates OCR backups every four hours. At any one time, Oracle always retains the last three backup copies of the OCR. The CRSD process that creates the backups also creates and retains an OCR backup for each full day and at the end of each week. You cannot customize the backup frequencies or the number of files that Oracle retains.

We can view the backups using ocrconfig -showbackup auto command.

[oracle@prod01]/home/oracle>ocrconfig -showbackup auto
prod01 2008/05/14 19:23:22 /u01/app/11.1.0/crs/cdata/prod_cluster/backup00.ocr
prod01 2008/05/14 15:23:21 /u01/app/11.1.0/crs/cdata/prod_cluster/backup01.ocr
prod01 2008/05/14 11:23:21 /u01/app/11.1.0/crs/cdata/prod_cluster/backup02.ocr
prod01 2008/05/13 15:23:20 /u01/app/11.1.0/crs/cdata/prod_cluster/day.ocr
prod01 2008/05/04 01:52:33 /u01/app/11.1.0/crs/cdata/prod_cluster/week.ocr

2) Manual backups—You can use the ocrconfig -manualbackup command to force Oracle Clusterware to perform a backup of the OCR at any time, rather than wait for the automatic backup that occurs at 4-hour intervals. The -manualbackup option is especially useful when you need to obtain a binary backup on demand, such as before you make changes to the OCR.

You need to be logged in as the root user to take manual backups. Use the ocrconfig -manualbackup command. To view the backups, use the ocrconfig -showbackup manual command.

[oracle@prod01]/home/oracle>ocrconfig -showbackup manual
prod01 2008/05/01 12:21:29 /u01/app/11.1.0/crs/cdata/prod_cluster/backup_20080501_122129.ocr
prod02 2008/04/11 11:32:50 /u01/app/11.1.0/crs/cdata/prod_cluster/backup_20080411_113250.ocr
prod02 2008/03/27 17:25:23 /u01/app/11.1.0/crs/cdata/prod_cluster/backup_20080327_172523.ocr

3) Manual Export—You can use the ocrconfig -export file_name command to export the OCR content to a file.
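For example (typically run as a privileged/root user; the file name and path here are illustrative only):

ocrconfig -export /u01/app/11.1.0/crs/cdata/ocr_exp_20080514.dmp

The exported file can later be restored with the corresponding ocrconfig -import command.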

For more details refer to 11g Documentation

Tracing Sessions with Event 10046

Tracing Sessions with Event 10046

The 10046 event is used to trace SQL statements (similar to sql_trace=true), with additional detail depending on the tracing level specified.

10046 EVENT levels:

1 - Enable standard SQL_TRACE functionality (Default)
4 - As Level 1 PLUS trace bind values
8 - As Level 1 PLUS trace waits
12 - As Level 1 PLUS both trace bind values and waits

There are various ways to enable the tracing

1) Using Alter session command

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 12';

To turn it off
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';

2) Using DBMS_SESSION

EXECUTE DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);


The TRUE argument specifies that wait information will be present in the trace. The FALSE argument specifies that bind information will not be present in the trace.
The SESSION_TRACE_DISABLE procedure disables the trace for the invoking session.

For example:
EXECUTE DBMS_SESSION.SESSION_TRACE_DISABLE();

3) Using DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION

EXEC SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(SID,SERIAL#,TRUE);

Where SID and SERIAL# points to the session being traced and can be retrieved from V$SESSION

To turn it off
EXEC SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(SID,SERIAL#,FALSE);

4) Using DBMS_MONITOR

Starting from Oracle 10g, there is a new package which can be used to enable tracing.

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(SESSION_ID=>sid,SERIAL_NUM=>serial#,WAITS=>TRUE,BINDS=>TRUE)

To turn it off

EXECUTE DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => sid, serial_num => serial#);

Please refer to following documentation link for more details

http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/sqltrace.htm#i8759

5) Using ORADEBUG

connect / as sysdba
Sql>oradebug setospid 1234
Sql>oradebug unlimit
Sql>oradebug event 10046 trace name context forever,level 12

Here 1234 is the OS process Id for the session to be traced.

Sql>oradebug tracefile_name

Above command will show the location of the trace file

To disable oradebug tracing

Sql> oradebug event 10046 trace name context off

Whichever method you use, Oracle will create the trace files in the server’s user_dump_dest directory.

Database Replay: 11g

11g New Feature: Database Replay

Database Replay:

In critical production systems, before making changes to the environment (hardware or software related), a lot of testing is required on test systems to determine the effect of those changes, and this usually takes a lot of time.

In Oracle 11g this is simplified: a new feature, Database Replay, can be used to simulate the production workload on a test system.

Following are the three stages of DATABASE REPLAY:
1. Workload Capture
2. Workload Preprocessing
3. Workload Replay

1. Workload Capture:

Before starting the capture process you need to take care of following things:

- Login as sys user

- It is always recommended (not mandatory) to shut down the database and then start it in restricted mode. By doing so, you will be making sure that no transactions are running on the database before the capture process is actually started. This guarantees a complete workload capture, and you will be replaying full transactions during database replay on the test system.
If you do not follow this step, then you have to be prepared for some divergence of results during database replay on the test system.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.


SQL> startup restrict
ORACLE instance started.


Total System Global Area 313860096 bytes
Fixed Size 1299624 bytes
Variable Size 184552280 bytes
Database Buffers 121634816 bytes
Redo Buffers 6373376 bytes
Database mounted.
Database opened.
SQL > select LOGINS from v$instance;
LOGINS
----------
RESTRICTED

An important thing to note here is that after starting the capture process, the database will automatically be switched into UNRESTRICTED mode and users will be able to connect normally.

After capturing the initial database information, the capture process will internally fire a statement to disable restricted sessions; this can be verified by setting a 10046 trace before starting the capture process:

SQL > alter session set events '10046 trace name context forever, level 4';

- Now start the capture process.

SQL > alter session set events '10046 trace name context off'

You will see following information:

PARSING IN CURSOR #21 len=39 dep=1 uid=0 oct=49 lid=0 tim=121695902847 hv=14656808 ad='0' sqlid='7shphjbg23h8'
alter system disable restricted session
END OF STMT
PARSE #21:c=0,e=6,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=121695384843

Now check the information from database:

SQL > select LOGINS from v$instance;
LOGINS
----------
ALLOWED

- Create a directory to hold the workload information.

SQL > create directory DIR as '/tmp/ss';

Check the permission of directory ss at OS level and make sure that oracle is able to write to this directory.
Also make sure that the directory is empty, before starting the capture process.

- Define Workload Filters
By adding filters we can restrict the Workload Capture process to be specific to certain user sessions. By default all the sessions will be captured.

There are two types of filters i.e INCLUDE and EXCLUDE.

If INCLUDE is used, the workload for the objects mentioned in the filter will not be captured; all other users' workload will be captured, i.e. the objects defined in the filter are FILTERED OUT of the capture process.

If EXCLUDE is used, only the workload done by objects defined in the filter will be captured, no other client request to database will be captured.

EXAMPLE:
========

SQL > exec dbms_workload_capture.ADD_FILTER( fname  IN VARCHAR2, fattribute  IN VARCHAR2,fvalue IN VARCHAR2);

fname=Name of the filter.
fattribute=Attribute on which the filter will be applied i.e USER, PROGRAM, INSTANCE_NUMBER etc.
fvalue= value for the corresponding attribute.

SQL > exec dbms_workload_capture.ADD_FILTER( fname =>'FILTER_SCOTT',fattribute => 'USER',fvalue => 'SCOTT');

- Starting workload capture:

To start the workload capture START_CAPTURE procedure is used as:

SQL > exec dbms_workload_capture.start_capture(NAME => 'PEAK_LOAD_TEST',DIR => 'DIR', DURATION => 600);

NAME and DIR are mandatory parameters, while DURATION (in seconds) is optional. Here the capture will run for 600 seconds; if the duration is not specified, then you have to finish the capture manually as:

SQL > exec dbms_workload_capture.finish_capture();

- While the capture process is running it will generate two files: wcr_scapture.wmd and wcr_cap_000xx.start

- After using the finish_capture procedure you will get the following files as well: wcr_cr.html and wcr_cr.text

wcr_cr.html is similar to the AWR report generated by awrrpt.sql

During the capture process, the database initiates an AWR report for the duration of the capture period.

SQL>  select id,AWR_BEGIN_SNAP,AWR_END_SNAP from dba_workload_captures;

ID AWR_BEGIN_SNAP AWR_END_SNAP
---------- -------------- ------------
1 20 21
2 29 30

- Exporting AWR data for the workload capture:

It will enable detailed analysis of the workload:

SQL> exec DBMS_WORKLOAD_CAPTURE.EXPORT_AWR (capture_id => 2);

After this you can see two more files generated under the specified directory:

wcr_ca.dmp and wcr_ca.log

2. Workload Preprocessing:

Once the workload is captured it can be replayed on the test system. For this you need to copy the capture files to the test system. Preprocessing a workload converts the captured files into replay files and creates the necessary metadata for this. Preprocessing puts load on the system, is resource intensive, and should be performed only on test systems and on the same version of the Oracle database.

- Create a new directory for the capture files placed on the test system.
Let us suppose that the capture files are placed under /tmp/ss on test system, now create a directory as:

SQL > create directory REPLAY as '/tmp/ss';

SQL> EXEC DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'REPLAY');

After preprocessing, the following files will be generated:

wcr_process.wmd, wcr_login.pp, wcr_seq_data.extb, wcr_scn_order.extb , wcr_conn_data.extb

3. Workload Replay:

You have to use an instance which is logically identical to the instance whose workload was captured, and you need to set up the Replay Client (wrc).

- The wrc executable has to be run in calibrate mode to estimate the number of replay clients.

$  wrc mode=calibrate replaydir=/tmp/sood

Workload Replay Client: Release 11.1.0.6.0 - Production on Sat Jul 26 18:49:29 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Report for Workload in: /tmp/sood
-----------------------


Recommendation:
Consider using at least 1 clients divided among 1 CPU(s).


Workload Characteristics:
- max concurrency: 1 sessions
- total number of sessions: 5


Assumptions:
- 1 client process per 50 concurrent sessions
- 4 client process per CPU
- think time scale = 100
- connect time scale = 100
- synchronization = TRUE

- Initialize Replay Data:


Initializing replay loads the metadata into tables required for database replay.

SQL> exec DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY (replay_name => 'REPLAY_1', REPLAY_DIR=> 'REPLAY');

- Put the database in “PREPARE REPLAY” mode as:

SQL > exec DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY()

To check the status of replay use:

SQL > select name,status from  dba_workload_replays;
NAME STATUS
-------------------- ----------------------------------------
REPLAY_1 PREPARE

– Start the replay client:

$ wrc  replaydir=/tmp/sood

Workload Replay Client: Release 11.1.0.6.0 - Production on Sat Jul 26 22:12:35 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Wait for the replay to start (22:12:35)

- Start the replay as:

SQL> exec DBMS_WORKLOAD_REPLAY.START_REPLAY ();

Check the status of the replay as:

SQL> select name,status from  dba_workload_replays;

Initially it will show :

NAME STATUS
-------------------- ----------------------------------------
REPLAY_1 IN PROGRESS

And after that it will be shown as completed.

- Once the status is completed finish the replay as:

SQL > exec DBMS_WORKLOAD_REPLAY.CANCEL_REPLAY ();

To generate the report of the replay in HTML format, you need to run:

DECLARE
My_Report CLOB;
BEGIN
My_Report := DBMS_WORKLOAD_REPLAY.REPORT(replay_id => 1, format => DBMS_WORKLOAD_REPLAY.TYPE_HTML);
END;
/
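Since REPORT returns a CLOB, the report can also be captured into a SQL*Plus bind variable and spooled to a file. A minimal sketch, assuming replay_id 1 from DBA_WORKLOAD_REPLAYS; the spool file name is arbitrary:

VARIABLE v_report CLOB
BEGIN
-- capture the HTML report into the bind variable
:v_report := DBMS_WORKLOAD_REPLAY.report(replay_id => 1, format => DBMS_WORKLOAD_REPLAY.TYPE_HTML);
END;
/
SET LONG 1000000 PAGESIZE 0 LINESIZE 200
SPOOL replay_report.html
PRINT v_report
SPOOL OFF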

Wednesday, February 18, 2009

Approach to Resolve ORA-4031

Simplified Approach to Resolve ORA-4031

A few case studies and other related articles describe an approach towards resolving the ORA-4031 error. First we will see what ORA-4031 actually means.

04031, 00000, "unable to allocate %s bytes of shared memory (\"%s\",\"%s\",\"%s\",\"%s\")"

// *Cause: More shared memory is needed than was allocated in the shared pool.
// *Action: If the shared pool is out of memory, either use the
//          dbms_shared_pool package to pin large packages,
//          reduce your use of shared memory, or increase the amount of
//          available shared memory by increasing the value of the
//          INIT.ORA parameters "shared_pool_reserved_size" and
//          "shared_pool_size".
//          If the large pool is out of memory, increase the INIT.ORA
//          parameter "large_pool_size".

The ORA-4031 error is encountered when we do not have sufficient memory available in the shared pool/large pool to service a memory request. But in practice ORA-4031 can be encountered in any of these areas:

1) Shared pool
2) Large Pool
3) Java Pool
4) Streams pool (new in 10g)

This brings us to the first step in our pursuit of finding the cause of ORA-4031.

Step1: Identify the Pool associated with error

Like any other Oracle error, we first need to check the database alert log file and also any trace files generated around that time in user_dump_dest/background_dump_dest, though there are cases when the ORA-4031 error is not recorded in alert.log. Starting from 9.2.0.5, you should be able to see trace files generated in the udump/bdump location (depending on whether a background process or user process encountered the error).

ORA-4031 basically has three arguments:

1) Size requested
2) Area
3) Comment

ORA-4031: unable to allocate <size> bytes of shared memory ("area","comment")

e.g. ORA-4031: unable to allocate 2196 bytes of shared memory

("shared pool","JOB$","KGLS heap","KGLS MEM BLOCK")

So we see from the above that the error has occurred in the shared pool. This is a very important step, as in the case of the other pools ORA-4031 errors are resolved by increasing JAVA_POOL_SIZE and STREAMS_POOL_SIZE.

In this article I will mostly be discussing errors encountered in the shared pool, with a small section on the large pool.

Step2: What is the value of SHARED_POOL_SIZE?

Current settings for shared pool related parameters can be found using the query below:

SQL> col name for a50
SQL> col value for a10
SQL> select nam.ksppinm NAME, val.KSPPSTVL VALUE
     from x$ksppi nam, x$ksppsv val
     where nam.indx = val.indx and nam.ksppinm like '%shared_pool%' order by 1;

NAME                                               VALUE
-------------------------------------------------- ----------
__shared_pool_size                                 654311424
_dm_max_shared_pool_pct                            1
_enable_shared_pool_durations                      TRUE
_io_shared_pool_size                               4194304
_shared_pool_max_size                              0
_shared_pool_minsize_on                            FALSE
_shared_pool_reserved_min_alloc                    4400
_shared_pool_reserved_pct                          5
shared_pool_reserved_size                          19293798
shared_pool_size                                   0

You can use the following note for checking the minimum shared pool size:

Note 105813.1 - SCRIPT TO SUGGEST MINIMUM SHARED POOL SIZE

In the case of 10g, you can use the SGA_TARGET parameter for managing the values of the Shared Pool, Large Pool, Streams Pool, Java Pool and Buffer Cache (DB_CACHE_SIZE). The following note can be used for 10g:

Note 270935.1 - Shared pool sizing in 10g

It is recommended to set a lower limit for SHARED_POOL_SIZE parameter.

You can also use the V$LIBRARYCACHE view (the AWR/Statspack report also has this section) and check if there were a lot of reloads happening for the SQL AREA and TABLE/PROCEDURE namespaces. This gives an indication that the shared pool is not appropriately sized. In case you see a high value for invalidations, this could be due to executing DDL against the objects, gathering stats (DBMS_STATS), or granting/revoking privileges.
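A quick way to eyeball this outside of AWR/Statspack is a direct query against V$LIBRARYCACHE (a sketch; interpret RELOADS and INVALIDATIONS relative to GETS/PINS over time, not as absolute numbers):

SELECT namespace, gets, pins, reloads, invalidations
FROM v$librarycache
ORDER BY reloads DESC;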

A high value for hard parses in the AWR/Statspack report can also be caused by shared pool sizing issues, but it cannot be used as the sole criterion, as high hard parse counts can also be caused by the use of literals and the presence of version counts/child cursors. This is discussed in the sections Using Literals Instead of Bind Variables and Multiple Child Cursors/High Version Count.

Some more key points related to Shared pool Sizing

-Shared pool memory consumption varies from release to release

-10g might fail with shared pool of 300 Mb though 8i was working fine

-Some part of memory is allocated to fixed structures. Parameters like db_files, open_cursors and processes contribute to this overhead. When you use the "show sga" command, you will see that "Variable Size" will be more than the sum of "Shared Pool + Large Pool + Java Pool". This is attributed to the value of these parameters.

Please note that in case you specify a low value for SGA_MAX_SIZE, you will see Oracle bumping the value up to a higher value so as to accommodate the high value of overhead memory.

Starting from 10g, overhead memory is accommodated in shared_pool_size.

e.g. if you specify SHARED_POOL_SIZE as 200 MB and your internal overhead is 100 MB, then the actual shared pool available to the instance is only 100 MB.

You can read Note:351018.1 - Minimum for SHARED_POOL_SIZE Parameter in 10.2 Version for more information.

Shared Pool Fragmentation

Shared pool fragmentation can also cause ORA-4031. This happens when your queries are not being shared and you are seeing a lot of reloads and hard parses in the Statspack report. In this case check the request failure size:

ORA-4031: unable to allocate 16400 bytes of shared memory

We see that the failure size is 16K. In this case check whether you are using the shared_pool_reserved_size parameter for defining a shared pool reserved area. The algorithm for memory allocation is such that it will first try to get memory from the shared pool, and then, if the requested memory size is greater than _shared_pool_reserved_min_alloc, it will get the memory from the shared pool reserved area. By default this value is set to 4400 bytes. In case the failure value is, say, 4200, you can try reducing the value of this parameter to reduce the occurrences, though this is not the complete solution. Read Tweaking _Shared_pool_reserved_min_alloc and ORA-4031 for more details.
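The reserved area itself can be checked with V$SHARED_POOL_RESERVED (a sketch; REQUEST_FAILURES and LAST_FAILURE_SIZE show requests that could not be satisfied from the reserved list):

SELECT free_space, request_misses, request_failures, last_failure_size
FROM v$shared_pool_reserved;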

You can also identify shared pool fragmentation by querying X$KSMSP

select 'sga heap('||KSMCHIDX||',0)' sga_heap, ksmchcom ChunkComment,
       decode(round(ksmchsiz/1000),0,'0-1K', 1,'1-2K', 2,'2-3K', 3,'3-4K',
              4,'4-5K', 5,'5-6k', 6,'6-7k', 7,'7-8k', 8,'8-9k', 9,'9-10k', '> 10K') "Size",
       count(*), ksmchcls "Status", sum(ksmchsiz) "Bytes"
from x$ksmsp
where KSMCHCOM = 'free memory'
group by 'sga heap('||KSMCHIDX||',0)', ksmchcom, ksmchcls,
       decode(round(ksmchsiz/1000),0,'0-1K', 1,'1-2K', 2,'2-3K', 3,'3-4K',
              4,'4-5K', 5,'5-6k', 6,'6-7k', 7,'7-8k', 8,'8-9k', 9,'9-10k', '> 10K');



SGA_HEAP        CHUNKCOMMENT     Size    COUNT(*) Status      Bytes
--------------- ---------------- ----- ---------- ------ ----------
sga heap(1,0)   free memory      > 10K        393 free     11296600
sga heap(1,0)   free memory      3-4K         256 free       781928
sga heap(1,0)   free memory      8-9k          63 free       510656
sga heap(1,0)   free memory      6-7k          60 free       367076
sga heap(1,0)   free memory      2-3K         555 free      1071448
sga heap(1,0)   free memory      1-2K        1818 free      1397244
sga heap(1,0)   free memory      0-1K        3418 free       348344
sga heap(1,0)   free memory      9-10k         30 free       269820
sga heap(1,0)   free memory      4-5K         154 free       640332
sga heap(1,0)   free memory      5-6k          75 free       381920
sga heap(1,0)   free memory      > 10K         39 R-free    8302632
sga heap(1,0)   free memory      7-8k          22 free       152328

If you see a lot of memory chunks in the 1-4K buckets and very few in the buckets above 5K, it indicates shared pool fragmentation. In this case you also need to look at hard parses (Statspack/AWR report). This is discussed in the sections Using Literals Instead of Bind Variables and Multiple Child Cursors/High Version Count.

Note: It is not recommended to run queries on X$KSMSP as this can lead to latching issues. Do not run them frequently (I have seen people scheduling them as part of hourly jobs; this should be avoided).

Step3: Is it MTS? If Yes, then are you using LARGE_POOL_SIZE?

LARGE_POOL_SIZE is recommended for many Oracle features which are designed to utilize large shared memory chunks, such as:

- Recovery Manager (RMAN)

- parallel processing/IO slave processing. e.g px msg pool consuming more memory

- Shared Server Configuration

The UGA will be allocated from the shared pool in case the large pool is not configured, so this can cause issues while using shared server mode (MTS). Ensure that you are using the LARGE_POOL_SIZE parameter or SGA_TARGET.
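To see how the large pool is currently carved up, a simple V$SGASTAT query can be used (a sketch; the component names you see vary by version and configuration):

SELECT pool, name, bytes
FROM v$sgastat
WHERE pool = 'large pool'
ORDER BY bytes DESC;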

Step4: Are you having Multiple Subpools?

The subpool concept was introduced in 9i R2. Instead of one big shared pool, memory is divided into multiple subpools. To determine the number of subpools, you can use the query below:

SQL> select nam.ksppinm NAME, val.KSPPSTVL VALUE
     from x$ksppi nam, x$ksppsv val
     where nam.indx = val.indx and nam.ksppinm like '%kghdsidx%' order by 1;

NAME                           VALUE
------------------------------ --------------------
_kghdsidx_count                4

The above query indicates that there are 4 subpools.

In case you get ORA-4031 and a trace file gets generated, the trace file can also be used to find the number of subpools configured. To do this, search for "Memory Utilization of Subpool", e.g.:
Memory Utilization of Subpool 1
========================
free memory 10485760
Memory Utilization of Subpool 2
========================

free memory 20971520

This means that there are two subpools configured for your database.

Oracle suggests having 500M as the minimum subpool size. I would say that in case you are not facing serious shared pool latch contention, 2 subpools should be sufficient (though I believe most contention issues can be solved by tuning the application). To change the number of subpools, we need to set the parameter _kghdsidx_count in the pfile or spfile and restart the database.

In case of Spfile

alter system set "_kghdsidx_count"=1 scope=spfile;

A restart of the database is required as it is a static parameter. Please note that the large pool has the same number of subpools as the shared pool, so you might be required to change the number of subpools in case you are observing ORA-4031 in the large pool.

You can read more about Shared Subpools in my earlier post

Step5: Is Sqlarea consuming lot of Memory?

Actually this can also be categorized as "Bad Application Design", as most of the cases are caused by the way applications have been designed. A high value for sqlarea in V$SGASTAT (or the AWR/Statspack report), which can be checked with the query sketched below, can be attributed to the following causes.
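A minimal sketch for checking the top shared pool consumers (look for the SQL area component near the top; component names such as 'sqla' vary slightly between versions):

SELECT *
FROM (SELECT name, round(bytes/1024/1024) mb
      FROM v$sgastat
      WHERE pool = 'shared pool'
      ORDER BY bytes DESC)
WHERE rownum <= 10;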

Using Literals Instead of Bind Variables

This is the most common cause of ORA-4031. Tom Kyte explains this in one of his posts on the consequences of not using bind variables:

If you do not use bind variables and you flood the server with
hundreds/thousands of unique queries you will
-run dog slow
-consume a ton of RAM (and maybe run out)
-not scale beyond a handful of users, if that
among other really bad side effects.

The above statement is true, and you can find a lot of cases where not using bind variables caused excessive parsing issues (leading to CPU contention) and ORA-4031 issues. One way to locate such statements is by running the following query:

SELECT substr(sql_text,1,90) "SQL", count(*) "SQL Copies",
       sum(executions) "TotExecs", sum(sharable_mem) "TotMemory"
FROM v$sqlarea
WHERE executions < 5
GROUP BY substr(sql_text,1,90)
HAVING count(*) > 30
ORDER BY 2;

I personally try to use a script from the AskTom website to find these statements. You can find more information by clicking here.

create table t1 as select sql_text from v$sqlarea;

alter table t1 add sql_text_wo_constants varchar2(1000);

create or replace function
remove_constants( p_query in varchar2 ) return varchar2
as
    l_query     long;
    l_char      varchar2(1);
    l_in_quotes boolean default FALSE;
begin
    for i in 1 .. length( p_query )
    loop
        l_char := substr(p_query,i,1);
        if ( l_char = '''' and l_in_quotes )
        then
            l_in_quotes := FALSE;
        elsif ( l_char = '''' and NOT l_in_quotes )
        then
            l_in_quotes := TRUE;
            l_query := l_query || '''#';
        end if;
        if ( NOT l_in_quotes ) then
            l_query := l_query || l_char;
        end if;
    end loop;
    l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
    for i in 0 .. 8 loop
        l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
        l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
    end loop;
    return upper(l_query);
end;
/

update t1 set sql_text_wo_constants = remove_constants(sql_text);

select sql_text_wo_constants, count(*)
from t1
group by sql_text_wo_constants
having count(*) > 100
order by 2
/

The above query will give you the queries which are using literals and should be modified to use bind variables. Sometimes it is not possible to modify the application; in that case you can use CURSOR_SHARING=SIMILAR/FORCE to force the application to use bind variables. Please note that this can cause issues (especially CURSOR_SHARING=SIMILAR), so it is recommended to test the application in a test environment before implementing it in production. Applications like Oracle Apps do not certify the use of this parameter, so also check with your application vendor whether it can be used.

You can refer to the following articles where I have discussed similar issues:

ORA-4031 - A Case Study

Application Design and ORA-4031

Multiple Child Cursors/High Version Count

This is also one of the causes of high memory usage in the SQLAREA region. Child cursors are generated in the shared pool when the SQL text is the same but Oracle cannot share the cursor because the underlying objects are different, the optimizer settings differ, etc. To know more about child cursors, refer to the following Metalink note:

Note 296377.1 - Handling and resolving unshared cursors/large version_counts

In the case of Oracle 10g, you can use the Statspack/AWR report to find the child cursors under the category "SQL ordered by Version Count". The following statements can also be run to identify whether child cursors are being generated in your database:


For 10g

SQL> select sa.sql_text, sa.version_count, ss.*
     from v$sqlarea sa, v$sql_shared_cursor ss
     where sa.address = ss.address and sa.version_count > 50 order by sa.version_count;

For 8i/9i

select sa.sql_text, sa.version_count, ss.*
from v$sqlarea sa, v$sql_shared_cursor ss
where sa.address = ss.KGLHDPAR and sa.version_count > 50 order by sa.version_count;

The results returned by the above query report SQL which is not being shared for some reason. Look for the column with value Y to find the cause. Most of these issues are encountered while using CURSOR_SHARING=SIMILAR; in case you are using this parameter with columns having histograms, it is expected behavior. Read more about cursor issues related to histograms in Note:261020.1 - High Version Count with CURSOR_SHARING = SIMILAR or FORCE

There are cases where none of the columns returns a Y value. In most of these cases, you need to work with Oracle Support to find the cause, as this could be a bug.

Child cursors are problematic as they increase shared pool memory consumption and parsing. Also, as the number of child cursors increases, Oracle takes more time to scan all the child cursors to check whether it can reuse them; if not, it spawns a new child cursor. This results in high parse times and CPU contention.

High Sharable Memory per SQL

One more cause of a high value for SQLAREA in V$SGASTAT is high memory consumption per SQL statement. This can be due to poorly written SQL statements or due to Oracle bugs.

In the case of Oracle 10g, you can use the Statspack/AWR report to find the statements with a high value of sharable memory. You can also use the SHARABLE_MEM column in V$SQLAREA to find these queries.
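A minimal sketch of such a check (the 1 MB threshold is arbitrary; adjust it to your system):

SELECT sql_id, sharable_mem, version_count, substr(sql_text,1,60) sql_text
FROM v$sqlarea
WHERE sharable_mem > 1048576
ORDER BY sharable_mem DESC;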

Step6:What Next?

You have followed all the above steps and find everything is ok. Now what do we check next?

We can look for any trace file generated at the time of the error and see which component was taking more memory. You can try searching Metalink for that component. Otherwise you can take a heapdump at the time of the error and upload the file to Oracle Support.

Heapdump event
The Heapdump event is used to dump memory from different subheaps. Errors ora-4030 are associated with problems in the pga, uga or cga heaps, and error ora-4031 is related only to problems with the shared pool/large pool/Java Pool/Streams Pool.

command -> alter system set events '4031 trace name heapdump level 2';

init.ora -> events='4031 trace name heapdump, level 2'

SQL>oradebug setmypid

SQL>oradebug dump heapdump 2

SQL>oradebug tracefile_name

Starting from 9.2.0.5, level 536870914 can be used for generating a heapdump, which will gather more diagnostic information for support to diagnose the cause.

Also, it is not recommended to set the heapdump event in the init.ora or spfile, since it will force multiple dumps at the time of shared pool memory issues. Oracle requires the shared pool latch for dumping a heapdump, so this can worsen the latching situation. You can set the errorstack event to generate a trace file at the time of the ORA-4031 error:

alter system set events '4031 trace name errorstack level 3'; 

Use the immediate trace option or the oradebug command at the time of the error:

SQL> connect / as sysdba

SQL> alter session set events 'immediate trace name heapdump level 536870914';



OR



sqlplus "/ as sysdba"

oradebug setmypid

oradebug unlimit

oradebug dump heapdump 536870914

oradebug tracefile_name

exit

Upload the tracefile to Oracle support.

Using the above approach will help you to resolve ORA-4031 in Shared Pool.

Large Pool

While working on ORA-4031 in the large pool, you need to follow the approach below:

1) Check the size of LARGE_POOL_SIZE. If possible, increase it.

2) Check the number of subpools. Ensure that you have sufficient memory in each subpool. _kghdsidx_count also controls the number of subpools in the large pool, so you would have to either increase the memory available in each subpool or decrease the count.

3) In case of MTS, check if any session is consuming a lot of memory (see the query sketched after this list). It's a case where, instead of getting ORA-4030, you get ORA-4031 in the large pool (in MTS, the UGA is part of the large pool).

4) If all the above suggestions have been tried, then capture a heapdump and upload the file to Oracle Support. You can use level 32 or 536870944, i.e.

SQL> connect / as sysdba

SQL> alter session set events 'immediate trace name heapdump level 32';

or

SQL> alter session set events 'immediate trace name heapdump level 536870944';
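For point 3 above, a sketch of a per-session UGA check (joins standard V$ views; the sorting and any threshold are up to you):

SELECT s.sid, s.username, round(st.value/1024/1024) uga_mb
FROM v$sesstat st, v$statname n, v$session s
WHERE st.statistic# = n.statistic#
AND n.name = 'session uga memory'
AND s.sid = st.sid
ORDER BY st.value DESC;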

I hope this article helps in following a methodology for resolving ORA-4031. At present this is not an exhaustive article on this error, and it will be more useful as an approach after you have gone through the Metalink notes below.

Note:62143.1 - Understanding and Tuning the Shared Pool

Note:396940.1 - Troubleshooting and Diagnosing ORA-4031 Error

Note:146599.1 - Diagnosing and Resolving Error ORA-04031

Sunday, February 8, 2009

R M A N

Active Database Duplication or Network-Aware database duplication
No need to backup for duplication. Doesn't copy and apply online redo log, only up to most recent archive. Both password file required with same password.
Flash Recovery Area is not copied. Archive log are copied if necessary.
Source may be in mount or open(archivelog mode). Can automatically copy if specify the clause.
duplicate database to dupdb from active database db_file_name_convert '/u01', '/v01' SPFILE parameter_value_convert '/u01', '/v01'
set log_file_name_convert '/u01', '/v01' SET sga_max_size '3000m' set sga_target '2000m'
SET=> SET are processed later, does not specify db_file_name_convert
SPFILE => SPFILE must be in use in source instance, it will be copied on destination and used to start auxiliary instance. Destination database must be
started using a text init.ora containing only DB_NAME parameter.
db_file_name_convert=> replace pattern for datafiles and tempfiles
NOFILENAMECHECK=> databases are on different hosts, no need to check for different filenames
PASSWORD FILE=> if specified will copy source password file to destination
1)-Create an init.ora for auxiliary instance
2)-Create a passwordfile for auxiliary instance with same SYSDBA password as the source(target)
3)-startup auxiliary instance using startup nomount
4)-Connect RMAN to target and to auxiliary instance, and issue the active database duplication command

Parallel Backup or Multisection backup
Parallelizes datafile backup into up to 256 sections (manually customizable for each datafile); multiple channels recommended.
All sections have the same size, except possibly the last. SECTION SIZE is mandatory and a value must be specified. Can't use MAXPIECESIZE with SECTION SIZE.
SECTION SIZE may be specified using VALIDATE DATAFILE command. Each section is a backup piece in the backup set.
backup SECTION SIZE 200m tablespace pippo;
* Views *
V$BACKUP_DATAFILE and RC_BACKUP_DATAFILE (piece, section_size, multi_section[yes, no])

Archival(Long-Term) Backup - KEEP option
Control file, spfile, datafiles and archived redo logs are backed up.
LOGS/NOLOGS clause can't be specified with KEEP. Self-contained archival backup immune from any retention. Redo logs are switched, current needed.
Store archival backup in non flash recovery area, may fill up quickly. Recovery catalog required if using KEEP FOREVER.
CHANGE not allowed for backup in flash recovery area.
An archived log is never made obsolete if needed for a guaranteed restore point, may be obsolete if needed for a normal restore point.
backup database format 'c:\arch\db_%U.bkp' tag quarterly KEEP until time 'sysdate + 365' RESTORE POINT firstq08;
#KEEP=>[FOREVER, UNTIL TIME], recovery catalog required if using KEEP FOREVER.
#RESTORE POINT=>SCN is captured when backup finish. The time until which RMAN can restore an archival backup
CHANGE backup tag 'pippo_bck' forever;
change copy of database controlfile keep;

Fast Incremental Backup
Flashback Database feature is required, searches flashback logs before backup files. Only corrupted blocks are restored and recovered in a datafile.
Archivelog mode always required. Db mounted or open. No proxy copies allowed. Block change tracking may be enabled on physical standby db.
10g blockrecover (no longer available) -> 11g recover...block
recover datafile 2 block 24 datafile 10 block 21 from tag=sundaynight;
RMAN>recover corruption list;
#recover all corrupted blocks from listed in V$DATABASE_BLOCK_CORRUPTION, view data is then deleted
* Identified Corrupt Blocks *
analyze table, analyze index, list failure, validate(populates V$DATABASE_BLOCK_CORRUPTION), backup...validate, export to /dev/null, dbverify
* Views *
V$DATABASE_BLOCK_CORRUPTION list block corrupted, populated by VALIDATE DATABASE

RMAN Compression
BZIP2(good compression) and ZLIB(def. faster 45%)
RMAN>configure compression algorithm 'bzip2';

Archived Redo Log Deletion
configure archivelog deletion policy=NONE(def.)(applies to all archived logs stored on disk including the Flash Recovery area)(Archived log will be marked
for deletion after being transferred to all log_archive_dest_n and backed up or being obsolete)
configure archivelog deletion policy to backed up 2 times to sbt

RMAN Catalog merging - Export/Import RMAN catalog
source will be deleted and deregistered(unless you use ), source db version same as RMAN client version,
fails if same db is already registered in destination
import catalog rman/rman@reporman dbid=122,123 db_name=pippo,pluto dbid=121,31;
#you are connected to destination, rman@reporman will be imported

Virtual private catalog or Virtual catalog
1) SYSDBA> create the db user
2) SYSDBA> grant recovery_catalog_owner to user
3) real CATALOG OWNER> grant [REGISTER_DATABASE | REGISTER] to user;
#permits to register unknown databases, does not grant recovery catalog access
and / or
real CATALOG OWNER>grant catalog for database t1 to user;
#grant access to a target existing database to a user even if db is not yet registered, use DBID if db not yet registered.
#User may un/register databases. User creates local scripts and read global scripts
5) VIRTUAL USER - RMAN> create virtual catalog;
#use for 11g or later database
or
VIRTUAL USER - RMAN> DBMS_RCVCAT.create_virtual_catalog
#use for 10g database
6) VIRTUAL USER - RMAN> register database;
RMAN>revoke all privileges from user;
* Packages *
DBMS_RCVCAT.create_virtual_catalog
#must execute before working with any pre-11.1 database. Does not create a virtual private catalog
Data Recovery Advisor
<- RMAN, EM Support Workbench & Database Control, Health Monitor (following errors), VALIDATE or BACKUP command
Quickly detects and analyzes data failures; repair is manual (mandatory, optional) or automatic, with or without data loss
Automatically repairs some network I/O errors, control file and datafile inconsistencies, accidental renaming of datafiles
No RAC. No standby database. No ORA-600, must be in Health Monitor
status(open, closed), priority(critical, high, low), grouping
1) list failure #failures are listed by priority [1:CRITICAL, 2:HIGH, 3:LOW], failures with same priority are ordered chronologically
2) advise failure #generates manual and automated options for repair, rman scripts are generated for automated options
3) repair failure #by default high and critical failures will be repaired and closed. Generates warning if new failures are detected since last advise failure. May not automatically close a fixed failure
* Views *
V$IR_FAILURE or list failure; change failure 7 closed; change failure priority low;
#failures are listed by priority [1:CRITICAL, 2:HIGH, 3:LOW], failures with same priority are ordered chronologically
V$IR_MANUAL_CHECKLIST or advise failure; #summary and close fixed
V$IR_REPAIR or repair failure; #requires advise failure
validate database; #checks only for intrablock(no interblock) corruption in all datafiles, control files and the server parameter file
validate database; #proactive check
Health Monitor -> run Data Recovery Advisor checks
View Health Monitor checker reports using: EM, ADRCI, DBMS_HM, V$HM_RUN
Health check mode can be:
[Reactive]: Health check run automatically when a critical error occurred
[Manual]
Available checks:
[Data Block Integrity]: checks disk image block corruptions, checksum failures and intrablock corruptions. Does not check for interblock corruption
[Db Structure Integrity]: verify accessibility, consistency and corruption of all database files
[Undo Segment Integrity]: check for logical Undo corruptions and attempt to repair them
[Redo Integrity check], [Dictionary Integrity check]
* Packages *
DBMS_HM.run_check(check_name, run_name, input_params)
DBMS_HM.GET_RUN_REPORT(run_name mandatory, type optional, level optional) (usage sketch after this section)
* Views *
V$HM_CHECK(all possible checks), V$HM_RUN(view HM checker reports), V$HM_CHECK_PARAM, V$HM_FINDING, V$HM_RECOMMENDATION
show hm_run;
#show all checks
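A minimal DBMS_HM usage sketch (the check name must match an entry in V$HM_CHECK; the run name here is illustrative):

BEGIN
-- run a reactive-style check on demand
DBMS_HM.run_check(check_name => 'Dictionary Integrity Check', run_name => 'dict_check_01');
END;
/
SET LONG 100000 PAGESIZE 0
SELECT DBMS_HM.get_run_report('dict_check_01') FROM dual;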
Partitioning
Interval partitioning
The high value of the range partitions is called the Transition Point
Extends range partitioning by automatically creating new partitions when inserted data falls beyond the existing bounds. Only a single partitioning column of type DATE or NUMBER.
Not supported for index-organized tables. You can't create domain indexes
create table pippo(time_id date) partition by range(time_id)
interval(numtoyminterval(1, 'month'))
(partition p0 values less than (to_date('3/12/1976', 'dd/mm/yyyy')),
 partition p1 values less than (to_date('3/01/1977', 'dd/mm/yyyy')));
alter table pippo merge partitions for (to_date('3/12/1976', 'dd/mm/yyyy')), for (to_date('3/01/1977', 'dd/mm/yyyy'));
#the partitions to merge must be adjacent
* Views *
DBA_TAB_PARTITIONS(partitioning_type[range, interval(new automatically created)])

System partitioning
The application must explicitly name the partition in insert/merge statements; this is not required for update/delete.
Can't create as CTAS. Hurts performance. No partitioning key. Partitions can have different physical attributes. Can't create unique local indexes. Can't split partitions
create table pippo(c1 number) partition by system (partition p1, partition p2)
insert into pippo partition(p1) values(1)
alter table pippo merge partitions p1, p2 into partition p1

Virtual Column-Based partitioning
A virtual column is computed on the fly. Its datatype is optional. Optimizer statistics can be collected. You can index a virtual column.
A virtual column can be used in all DML/DDL and as a partitioning key. Virtual columns can be created only on normal heap tables.
Can't reference another virtual column. Output must always be scalar: no LOB, LONG, RAW, ... Can't use a PL/SQL function if the column is used for partitioning
create table pippo (c1 number, c2 number generated always as (c1*2))
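A hedged sketch combining the notes above, partitioning on the virtual column (names and bounds are illustrative):
create table pippo2
( c1 number,
  c2 number generated always as (c1*2) virtual   -- virtual column, computed on the fly
)
partition by range (c2)                           -- partitioning key is the virtual column
( partition p_low  values less than (100),
  partition p_high values less than (maxvalue)
);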

Reference partitioning
The master table is a partitioned table. Can't use interval partitioning. Requires a foreign key relationship.
If no tablespace is specified, partitions are placed in the same tablespace as the parent table's partition. Can't specify partition bounds.
Partition names must be unique between master and reference table. Can't disable the foreign key.
Can't add or drop partitions from the child table, while it's possible for the parent table.
Pruning and partition-wise joins work even if query predicates differ from the partitioning key
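A minimal sketch of a parent/child pair (tables are illustrative); the child inherits the parent's partitioning through the NOT NULL foreign key:
create table orders
( order_id   number primary key,
  order_date date )
partition by range (order_date)
( partition p2009 values less than (to_date('01/01/2010','dd/mm/yyyy')) );

create table order_items
( item_id  number,
  order_id number not null,
  constraint fk_oi foreign key (order_id) references orders(order_id) )
partition by reference (fk_oi);   -- child partitions follow the parent's partitions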

Composite Partitioning
Top Level partitioning: Range, List, Hash
Range-list, Range-hash, List-list, list-hash, list-range, range-range, interval-range, interval-list, interval-hash
Directory structure
$ORACLE_BASE (recommended only)
$ORACLE_HOME (do not explicitly set)
dbs
oranfstab first default location
Flash recovery area (different disk than datafiles)
Datafiles (different disk than Flash Recovery Area)
Inventory
/diag Automatic Diagnostic Repository (ADR base=diagnostic_dest; multiple ADR homes allowed but only one ADR base)
Db name
Instance name (use v$DIAG_INFO to view these paths) ADR_HOME (multiple ADR homes allowed but only one ADR base)
alert (xml alert format)
hm (checker reports)
incident
trace (V$DIAG_INFO.name='Diag Trace')
text alert.log
background process trace files
foreground process trace files
wallet
ewallet.p12
cdump
bdump
udump
...
init.ora new params
# A S M
asm_preferred_read_failure_groups=data.locationA, data.locationB #Dynamic. disk_group_name.failure_group only disks local to the instance!!
asm_diskgroups
asm_diskstring
# M E M O R Y
sga_target=0 #Static. Enable Automatic Memory Management
pga_aggregate_target=0 #Static. Enable Automatic Memory Management
memory_target=900M #dynamic, default 0 and when upgrading. Total memory for SGA and PGA. With sga_target=0 and pga_aggregate_target=0 it defaults 60% to SGA and 40% to PGA
#With sga_target!=0 and pga_aggregate_target!=0 they indicate minimum values
memory_max_target=2000M; #if unset defaults to memory_target(if greater than 0) or 0(if memory_target is not set)
# R E S U L T C A C H E
result_cache_max_size #Static. Sets the high limit. 0 disables the cache. Maximum is 0.25% of memory_target, 0.5% of sga_target or 1% of shared_pool_size
# O T H E R
plsql_optimize_level=3
plsql_code_type=INTERPRETED(def.)
sec_case_sensitive_logon=TRUE(def.)
sec_max_failed_login_attempts=5
sec_protocol_error_further_action=[CONTINUE | DELAY | DROP]
sec_protocol_error_trace_action=[NONE | TRACE | LOG | ALERT]
audit_trail=DB #default in 11g for databases created with DBCA
ldap_directory_sysauth=[YES | NO] #YES: allows SYSDBA/SYSOPER authentication through OID
optimizer_capture_sql_plan_baselines=FALSE(def.)
optimizer_use_sql_plan_baselines=TRUE(def.)
optimizer_use_pending_statistics=FALSE(def.)
optimizer_use_invisible_indexes=FALSE(def.)
java_jit_enabled=true
control_management_pack_access=[NONE | DIAGNOSTIC | DIAGNOSTIC+TUNING(def.)]
diagnostic_dest #ADR base, defaults to $ORACLE_BASE
ddl_lock_timeout=0(def.)
db_securefile=[ALWAYS | PERMITTED(def.) | NEVER | IGNORE]
db_ultra_safe=OFF(def.) #controls db_block_checking, db_block_checksum, db_lost_write_protect
statistics_level=[TYPICAL(def.) | BASIC | ALL]
filesystemio_options=[ASYNCH | SETALL]
timed_statistics=true
encryption_wallet_location=(source=(method=file)(method_data=(directory=/yourdir)))
Support Workbench
<- EM(Incident Package Configuration)(Collect diagnostic data, generate incident reports and transmit), ADRCI
Critical Error=Problem=ORA-04031, composed of many Incidents=ORA-04031
Incident(Collecting, Ready, Tracking, Closed, Data Purged)
Retention: 1 year for metadata(may be changed with EM), 1 month for dumps. An incident is closed after twice the retention period elapses, and is purged 30 days after it is opened
1 Generate logical pack, 2 finalize, 3 gen. physical
Incident thresholds cannot be modified.
No edit and customizing when Quick Packaging
Oracle Support Workbench may be enabled to automatically upload files to Oracle Support by providing the required information during the installation of
Oracle Configuration Manager
Flood-controlled incidents
are incidents reported multiple times. Alert log is updated and incident recorded in ADR but no dumps are generated.
SQL Performance Analyzer
1)First you capture SQL workload in a STS(SQL Tuning Set)
2)Move SQL workload to test environment
3)Execute the workload to capture baseline execution data
4)you make your changes
5)Execute the workload to capture execution data
6)Analyze differences in SQL performances
7)Tune regressed SQL statements
Identify plan that might have regressed after a database upgrade, hardware reconfiguration, schema changes and any change may affect SQL.
The SQL workload captured in the STS is what gets analyzed; whether statements are actually executed depends on EXECUTION_TYPE (see the DBMS_SQLPA sketch below)
Cannot use SQL Performance Analyzer to identify SQL statements that are part of the SQL plan baseline
You can set a time limit within which all SQL statement in the workload must execute
DML and DDL are NOT taken in consideration
* Parameters *
TIME_LIMIT parameter for all SQL
LOCAL_TIME_LIMIT for single SQL
EXECUTION_TYPE[EXPLAIN PLAN | TEST EXECUTE(def.)]
#TEST EXECUTE: Each SQL Statement in the workload is executed one time and execution plans and statistics are collected.
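A minimal DBMS_SQLPA sketch of the workflow above (STS, task and execution names are illustrative; the change under test happens between the two TEST EXECUTE runs):
declare
  l_task varchar2(64);
begin
  -- 1) create an analysis task on a previously captured SQL Tuning Set
  l_task := dbms_sqlpa.create_analysis_task(sqlset_name => 'MY_STS', task_name => 'SPA_TASK');
  -- 2) run the workload before the change
  dbms_sqlpa.execute_analysis_task(task_name => l_task, execution_type => 'TEST EXECUTE', execution_name => 'before_change');
  -- ... make the change being evaluated (parameter, index, upgrade, ...) ...
  -- 3) run the workload after the change
  dbms_sqlpa.execute_analysis_task(task_name => l_task, execution_type => 'TEST EXECUTE', execution_name => 'after_change');
  -- 4) compare the two executions
  dbms_sqlpa.execute_analysis_task(task_name => l_task, execution_type => 'COMPARE PERFORMANCE', execution_name => 'compare');
end;
/
select dbms_sqlpa.report_analysis_task('SPA_TASK', 'TEXT') from dual;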
SQL Repair Advisor
<- Support Workbench(Diagnostic Summary->Active Incidents link)
(often provides patch to fix)
The SQL plan will change after patching. The SQL Repair Advisor is not automatically launched after a SQL statement crash
* Export patches * DBMS_SQLDIAG.pack_stgtab_sqlpatch(staging_table), export, unpack_stgtab_sqlpatch(staging_table)
* Packages *
DBMS_SQLDIAG.create_diagnosis_task(sql_text, task_name)
DBMS_SQLDIAG.set_tuning_task_parameter(task_id, filter)
DBMS_SQLDIAG.execute_diagnosis_task(task_name)
DBMS_SQLDIAG.report_diagnosis_task(task_name, out_type)
DBMS_SQLDIAG.accept_sql_patch(task_name, task_owner)
* Views *
DBA_SQL_PATCHES(recommended patches), drop_sql_patch(procedure)
SQL Test Case Builder
<- EM Support Workbench. The following information is gathered for a problematic SQL statement: the text, involved table definitions, optimizer statistics. No dump files
DBMS_SQLDIAG.export_sql_testcase_dir_by_inc
Automatic Workload Repository
<- EM(Adaptive Metric Threshold link -> Baseline Metric Thresholds)
Requires AWR to have at least the same retention time.
A baseline is any set of snapshots taken over a period of time.
Static baseline
Repeating static baseline
#Will be automatically created
Moving window baseline
#Will not be automatically created. SYSTEM_MOVING_WINDOW is system provided
Retention rises from 7 days(10g) to 8 days(11g) for the default baseline SYSTEM_MOVING_WINDOW containing Adaptive Thresholds
Thresholds types:
Significance Level
#if set to 0.99, alerts fire on values more extreme than 99% of the baseline observations. Based on statistical relevance. Only basic metrics support significance level thresholds
Percentage of maximum
#calculated using the highest value captured by the baseline
Fixed values
When a baseline ends in the future a new baseline with the same name is created.
* Package *
DBMS_WORKLOAD_REPOSITORY.modify_baseline_window_size(window_size)
#to change the default retention; if increased, increase AWR retention to match
DBMS_WORKLOAD_REPOSITORY.create_baseline
#should be used for past time periods
DBMS_WORKLOAD_REPOSITORY.create_baseline_template
single: (start_time, end_time, baseline_name, template_name, expiration(O)=>null, dbid(O))
#EXPIRATION: if not specified or null, the baseline created from the template will never expire
repeating: (day_of_week, hour_in_day, duration, expiration(O)=>null, start_time, end_time, baseline_name_prefix, template_name, dbid(O))
#should be used when part or all of the time period is in the future
DBMS_WORKLOAD_REPOSITORY.rename_baseline(old, new, dbid(O))
DBMS_WORKLOAD_REPOSITORY.drop_baseline_template(template_name, dbid(O))
DBMS_WORKLOAD_REPOSITORY.select_baseline_metrics(baseline_name) #display metric thresholds
DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings
#resize the AWR retention period
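A small sketch: create a static baseline over two existing snapshots and widen the moving window (snapshot ids and the name are illustrative; AWR retention must be at least as long as the window):
begin
  dbms_workload_repository.create_baseline(
    start_snap_id => 100,
    end_snap_id   => 110,
    baseline_name => 'PEAK_MONDAY');
  dbms_workload_repository.modify_baseline_window_size(window_size => 30);  -- days
end;
/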
* View *
DBA_HIST_BASELINE_TEMPLATE (template_type[single | repeated], repeat_interval[timing string as for DBMS_SCHEDULER])
DBA_HIST_BASELINE_DETAILS (shutdown[yes | no | null], pct_total_time[total snapshot time / total possible time], error_count[errors during snapshot])
DBA_HIST_BASELINE (baseline_type[static | moving | window | generated], creation_type, expiration[how long keep the baseline], template_name, last_computed,
moving_window_size[if baseline_type is null then AWR retention period else number of days of moving window])
Controlling Automated Maintenance Tasks
New ABP(Autotask Background Process)
- converts automatic tasks into Scheduler jobs
- determines the jobs that need to be created for each maintenance task
- stores task execution history in the SYSAUX tablespace
- does NOT execute maintenance tasks
A task may be enabled/disabled and have its resource percentage changed in one or all maintenance windows. The duration of maintenance windows may be changed.
Database Resource Manager is automatically enabled during maintenance window
By default AutoTask schedules: optimizer statistics collection, SQL Tuning Advisor and Automated Segment Advisor.
* MAINTENANCE_WINDOW_GROUP *
DEFAULT_MAINTENANCE_PLAN(belongs to a resource allocation)
10g: weeknight_window, weekend_window
11g: [10PM-02AM, 4h]: monday_window, tuesday_window, wednesday_window, thursday_window, friday_window. [06AM-02AM, 20h]saturday_window, sunday_window
* Managing Automatic Maintenance Tasks *
<- EM
10g: DBMS_SCHEDULER can only enable or disable automatic maintenance tasks
11g: DBMS_AUTO_TASK_ADMIN gives more fine-grained control
DBMS_AUTO_TASK_ADMIN.enable(client_name, operation, window_name)
DBMS_AUTO_TASK_ADMIN.disable(client_name, operation, window_name)
* Views *
DBA_AUTOTASK_OPERATION lists only automatic task(client) operation names
DBA_AUTOTASK_CLIENT
DBA_AUTOTASK_TASK #view of the ABP(Autotask Background Process) repository, stores execution history for automated maintenance tasks
I/O Calibration
<- EM(Database Control -> Performance -> I/O Calibration)
Requires 10 minutes to run, sysdba, filesystemio_options=[ASYNCH | SETALL], timed_statistics=true
recommended during low activity, no concurrent calibrations allowed
DBMS_RESOURCE_MANAGER.calibrate_io(num_physical_disks=>1, max_latency=>10, max_iops(OUT), max_mbps(OUT), actual_latency(OUT))
V$IO_CALIBRATION_STATUS, DBA_RSRC_IO_CALIBRATE, V$IOSTAT_FILE (all show calibration results)
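A hedged example run as SYSDBA; the disk count and latency are assumptions, the OUT arguments return the measured maxima:
set serveroutput on
declare
  l_max_iops   pls_integer;
  l_max_mbps   pls_integer;
  l_actual_lat pls_integer;
begin
  dbms_resource_manager.calibrate_io(
    num_physical_disks => 1,    -- assumed storage layout
    max_latency        => 10,   -- acceptable latency in ms
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_lat);
  dbms_output.put_line('iops='||l_max_iops||' mbps='||l_max_mbps||' latency='||l_actual_lat);
end;
/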
Database Resource Manager
Automatic actions based on current session single call usage: switch of Resource Consumer Group(switch grant needed), session kill, statement kill
Database Resource Manager is automatically enabled during maintenance window
CREATE_RESOURCE_PLAN(switch_group[null | CANCEL_SQL | KILL_SESSION | a group], switch_time[UNLIMITED], switch_estimate[false | true],
switch_io_megabytes[null | mb], switch_io_reqs[null | io#], switch_for_call[null | true])
CREATE_PLAN_DIRECTIVE(plan, group_or_subplan, mgmt_p1, switch_group[null | CANCEL_SQL | KILL_SESSION | a group], switch_io_reqs, switch_io_megabytes,
switch_for_call, switch_time)
MIXED_WORKLOAD_PLAN predefined resource plan
Level 1: 100% sys_group
Level 2: [85% interactive_group], [15% ORA$AUTOTASK_SUB_PLAN, ORA$DIAGNOSTICS, OTHER_GROUPS]
Level 3: batch_group
Oracle Scheduler
Minimal metadata required at creation; recommended when creating a large number of jobs
Create Lightweight jobs using: job array, named programs, schedules, DBMS_SCHEDULER.create_job
Lightweight jobs are created only from job templates(stored procedure or Scheduler program); privileges are inherited only from the parent job,
may discriminate DataGuard nodes
* Package *
DBMS_SCHEDULER.create_program(program_name, program_action, program_type[plsql_block | stored_procedure], enabled)
DBMS_SCHEDULER.create_job(job_name, program_name, repeat_interval(opt.), end_time(opt.), schedule_name(opt.), job_style[LIGHTWEIGHT], comments)
* Create a Job Array and submit using a single transaction *
declare
  testjob    sys.job;
  testjobarr sys.job_array;
begin
  testjobarr := sys.job_array();          -- initialize the array
  testjobarr.extend(500);                 -- room for 500 lightweight jobs
  for i in 1..500 loop
    -- 'PROG1' is an assumed existing Scheduler program (lightweight jobs need a template)
    testjob := sys.job(job_name     => 'LWJOB'||i,
                       job_style    => 'LIGHTWEIGHT',
                       job_template => 'PROG1',
                       enabled      => true);
    testjobarr(i) := testjob;
  end loop;
  dbms_scheduler.create_jobs(testjobarr, 'transactional');  -- all jobs created in a single transaction
end;
* Remote External Jobs *
On source db: XML DB(default), run as SYS prvtrsch.plb, DBMS_SCHEDULER.set_agent_registration_pass(registration_password, expiration_date(O), max_uses(O))
On Dest host: install Scheduler agent(not default), configure schagent.conf, schagent.exe -registerdatabase hostdb 1521, schagent.exe -start
* Enable a remote job *
DBMS_SCHEDULER.create_credential(cred_name, username, pwd)
DBMS_SCHEDULER.create_job(...)
DBMS_SCHEDULER.set_attribute(job_name, 'credential_name', cred_name)
DBMS_SCHEDULER.set_attribute(job_name, 'destination', host:port), .enable(job_name)
Security Enhancements
sec* parameters
Passwords are case sensitive(default when NOT upgrading; reset the password otherwise); the delay after wrong passwords grows up to 10 secs
orapwd file=... entries=... ignorecase=n(def.)
alter profile ... password_verify_function verify_function_11g
utlpwdmg.sql enforces password verification
OS authentication is checked before the password file
Authentication using OID can be configured by granting SYSOPER/SYSDBA enterprise role in OID
Administrators can be authenticated using the local OS if they are in the OSDBA or OSOPER group
* ACL Fine Grained Access to network system supplied packages*
An ACL is used to restrict access to certain hosts and is stored in XML format. The connect privilege comprises resolve.
An ACL(Access Control List) is a list of users and privileges; only one ACL per host
DBMS_NETWORK_ACL_ADMIN.create_acl(acl[name of XML DB priv file], description, principal, is_grant, privilege[connect | resolve])
#one privilege must be specified on creation
DBMS_NETWORK_ACL_ADMIN.add_privilege(acl[name of XML DB priv file], principal, is_grant, privilege[connect | resolve])
DBMS_NETWORK_ACL_ADMIN.assign_acl(acl[name of XML DB priv file], host[ex. *.fadalti.com], lower_port, upper_port)
DBMS_NETWORK_ACL_ADMIN.check_privilege(acl[name of XML DB priv file], username, privilege[connect | resolve])
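A hedged end-to-end sketch (ACL file name, user and host are illustrative):
begin
  dbms_network_acl_admin.create_acl(
    acl         => 'scott_www.xml',            -- XML file kept by XML DB under /sys/acls
    description => 'HTTP access for SCOTT',
    principal   => 'SCOTT',
    is_grant    => true,
    privilege   => 'connect');
  dbms_network_acl_admin.add_privilege(
    acl => 'scott_www.xml', principal => 'SCOTT', is_grant => true, privilege => 'resolve');
  dbms_network_acl_admin.assign_acl(
    acl => 'scott_www.xml', host => '*.fadalti.com', lower_port => 80, upper_port => 80);
  commit;
end;
/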

Oracle Wallet
<- Oracle Wallet Manager(owm.exe), see sqlnet.ora for the location. Encryption Wallet(manually opened after db startup) and Auto-Open Wallet
Wallet required, configured and opened to encrypt tablespaces, [AES128 | 3DES168 | AES192 | AES256], I/O overhead
Attempting to create a table in an encrypted tablespace with the wallet closed raises an error. You cannot change the key of an encrypted tablespace.
A normal tablespace cannot be converted to an encrypted tablespace.
Creation method 1: mkstore -wrl $ORACLE_BASE/admin/$ORACLE_SID/wallet -create
Creation method 2: alter system set encryption key identified by "password"
create tablespace pippo datafile 'c:\pippo.dbf' size 100m encryption using 'AES128' default storage(encrypt)
Automatic SQL Tuning Advisor
<- EM. statistics_level=TYPICAL; BASIC will disable AWR, which provides the SQL sources. AWR retention of less than 7 days disables ASTA. STA may evolve SQL plans from non-accepted to accepted
10g Automatic Tuning Optimizer(never auto tunes) ->upgraded-> 11g Automatic SQL Tuning Advisor
job name SYS_AUTO_SQL_TUNING_TASK runs for 1 hour, runs only once during the nightly maintenance window as part of MAINTENANCE_WINDOW_GROUP
Statistical analysis, SQL profiling, Access path analysis(add indexes), SQL structure analysis(modifying SQL statements)
STA tests and automatically implements only SQL profile creations with at least a 66% benefit,
chooses statements from AWR Top SQL with poor performance in the past week, or you run it manually providing SQL Tuning Sets.
If run in Tuning Mode then the optimizer might run statistics and store in SQL Profiles
* Limitations * No recursive SQL, no ad hoc SQL, no parallel SQL, no long run profiled SQL, no insert/delete, no ddl
* Report *
Shows if statistics need to be collected, whether a profile was recommended, and index recommendations.
Shows a benefit percentage of implementing the recommendation for SQL profiles and indexes
* Package *
DBMS_SQLTUNE.set_tuning_task_parameter(... see params below ...)
#for automatic tuning only, no manual
task_name=>'SYS_AUTO_SQL_TUNING_TASK'
parameter=>accept_sql_profile: When TRUE auto accept SQL profiles. Profiles are always generated
parameter=>replace_user_sql_profiles: allowed to replace user created SQL profiles
parameter=>max_sql_profiles_per_exec: maximum number of SQL profile changes accepted per tuning task
parameter=>max_auto_sql_profiles: maximum total number of SQL profile changes for the database
parameter=>execution_days_to_expire: default 30, task history expiration
parameter=>time_limit: to change default duration of 1 hour in seconds
parameter=>test_execute: only use plan costs
DBMS_SQLTUNE.report_auto_tuning_task(type[text], level[typical], section[all]) return varchar2
DBMS_AUTO_TASK_ADMIN.enable(client_name=>'sql tuning advisor', operation=>'NULL', window_name=>['NULL'(all) | (a name)])
DBMS_AUTO_TASK_ADMIN.disable
#disable the Automatic Sql Tuning process
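For example, the parameters above can be set on the automatic task like this (values are illustrative):
begin
  dbms_sqltune.set_tuning_task_parameter('SYS_AUTO_SQL_TUNING_TASK', 'ACCEPT_SQL_PROFILES', 'TRUE');
  dbms_sqltune.set_tuning_task_parameter('SYS_AUTO_SQL_TUNING_TASK', 'TIME_LIMIT', 7200);  -- seconds
end;
/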
* View *
DBA_SQL_PROFILES[type], if type=AUTO then auto implemented SQL profile
DBA_ADVISOR_EXECUTIONS shows metadata information
DBA_ADVISOR_SQLSTATS shows a list of all SQL compilation and execution statistics
DBA_ADVISOR_SQLPLANS a list of all SQL execution plans
SQL Plan Management
Preserve SQL performance across major system changes
A SQL Plan may be:
non accepted #may be evolved to accepted
accepted #required for use; manually loaded plans are accepted automatically
enabled #required for use
fixed #A fixed plan has precedence over a non-fixed plan even with a higher cost; a baseline is FIXED if at least one enabled plan inside it is FIXED=yes,
#the optimizer will not add new plans to a fixed baseline. If you accept a profile recommended by STA, that profile will not be FIXED
SQL Tuning Advisor may evolve plans from non-accepted to accepted.
* Plan history * Contains both accepted and non-accepted plans for a repeatable SQL statement; the best plan may not yet be accepted into the SQL plan baseline
* SQL Plan Baseline * Contains plan history but only accepted
Automatic plan capture: see optimizer_capture_sql_plan_baselines parameter
Manual plan capture: performance is not verified
* Package *
DBMS_SQLTUNE.create_sqlset(sqlset_name, description) creates an empty SQL set
select value(p) from table(DBMS_SQLTUNE.select_workload_repository('peak baseline', null, null, 'elapsed time', null, null, null, 20)) p
DBMS_SQLTUNE.load_sqlset(sqlset_name, populate_cursor)
DBMS_SPM.load_plans_from_sqlset(sqlset_name, fixed=>no, enabled=>yes)
#plans are loaded into the SQL plan baseline as non-fixed, accepted and enabled plans, so they may be used immediately
DBMS_SPM.load_plans_from_cursor_cache(sql_id, plan_hash_value[null ], sql_text, fixed[no | yes ], enabled[yes],
attribute_name[sql_text | parsing_schema_name | module | action], attribute_value) return integer
DBMS_SPM.evolve_sql_plan_baseline(sql_handle, plan_list, plan_name, time_limit[DBMS_SPM.auto_limit | DBMS_SPM.no_limit],
verify[YES], commit[YES]) return CLOB
#shows all non-accepted plans whose status changed to accepted.
#VERIFY=YES: the database executes the unaccepted plans and compares their performance against the SQL plan baseline
#VERIFY=NO: all unaccepted plans become accepted
DBMS_SPM.alter_sql_plan_baseline(sql_handle, plan_name, attribute_name[accepted_status | enabled], attribute_value=>'yes')
DBMS_XPLAN.display_sql_plan_baseline
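A hedged sketch of loading a cursor-cache plan and later evolving non-accepted plans (the sql_id is illustrative):
declare
  l_loaded pls_integer;
  l_report clob;
begin
  -- load the current plan as an accepted, enabled, non-fixed baseline plan
  l_loaded := dbms_spm.load_plans_from_cursor_cache(sql_id => 'abcd1234efgh5');
  -- later: try to promote non-accepted plans that verify as faster
  l_report := dbms_spm.evolve_sql_plan_baseline(verify => 'YES', commit => 'YES');
  dbms_output.put_line(dbms_lob.substr(l_report, 4000, 1));
end;
/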
* Views *
DBA_SQL_PLAN_BASELINES(sql_handle, sql_text, plan_name, origin[manual-load | manual-sqltune | auto-capture | auto-sqltune], enabled, accepted, fixed, autopurge)

SMB SQL Management Base
By default uses Automatic Segment Space Management ASSM
contains plan history information, the statement log, SQL profiles and plan baselines. Is part of data dictionary and stored in sysaux tbs.
sysaux tbs must be online to use SQL Plan Management features that access the SMB
DBMS_SPM.configure([SPACE_BUDGET_PERCENT | PLAN_RETENTION_WEEKS], value)
#SPACE_BUDGET_PERCENT: if the size of the SMB exceeds 10% of the SYSAUX tablespace a weekly alert is generated
#PLAN_RETENTION_WEEKS: plans not used for longer than the retention period (default 53 weeks, about 1 year) are purged
* Views *
DBA_SQL_MANAGEMENT_CONFIG(SPACE_BUDGET_PERCENT, PLAN_RETENTION_WEEKS)
SQL Access Advisor
May be interrupted and resumed. By default it is not scheduled as a maintenance task by AutoTask.
Partitioning recommendations for tables, indexes and materialized views; the default partitioning mode recommended is interval.
Partitioning recommendations require at least 10000 rows in the table, a predicate or join on a DATE or NUMBER column, enough space, no bitmap indexes,
and no interrupted SQL Access Advisor task.
Advice on tuning Materialized Views, Materialized View Logs, indexes(B-tree, bitmap, function based)
* Packages *
DBMS_ADVISOR.add_sts_ref(task_name, sts_owner, sts_name)
#add an existing SQL tuning set
DBMS_ADVISOR.add_sqlwkld_ref(task_name, workload_name)
#add an existing SQL Workload object (older API)
DBMS_ADVISOR.delete_sts_ref(task_name, sts_owner, sts_name) #removes the link between the SQL Access Advisor task and the workload
DBMS_ADVISOR.create_task(advisor_name=>'SQL Access Advisor', task_name)
DBMS_ADVISOR.set_task_parameter(task_name, param, value)[partition | segment][def_partition_tablespace, tbs][max_number_partitions,...]
[valid_table_list,...][mode,comprehensive][journaling,4][analysis_scope,all]
DBMS_ADVISOR.execute_task(task_name)
DBMS_ADVISOR.get_task_report(task_name, type[text], level[typical], section[all], owner_name, execution_name, object_id)
DBMS_ADVISOR.cancel_task(task_name) #won't see any recommendation
DBMS_ADVISOR.interrupt_task(task_name) #may see partial recommendations; can't resume or restart
DBMS_ADVISOR.create_file(DBMS_ADVISOR.get_task_script(task_name), directory_name, file_name)
DBMS_ADVISOR.copy_sqlwkld_to_sts()
DBMS_ADVISOR.quick_tune(DBMS_ADVISOR.SQLACCESS_ADVISOR, task_name, sql_text)
#task is automatically created
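For a single statement the advisor can be driven in one call, roughly like this (task name and SQL are illustrative):
begin
  dbms_advisor.quick_tune(
    dbms_advisor.sqlaccess_advisor,
    'QUICK_TUNE_TASK',
    'select * from scott.emp where deptno = 10');
end;
/
select dbms_advisor.get_task_script('QUICK_TUNE_TASK') from dual;  -- recommendation script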
ADDM Automatic Database Diagnostic Monitor
control_management_pack_access=DIAGNOSTIC+TUNING and statistics_level=TYPICAL or ALL
Analyzes AWR data, presents problems and recommendations. Runs automatically by default when a new AWR snapshot is taken. Enabled by default.
You can use directives to limit or filter ADDM findings
* ADDM for RAC Real Application Cluster *
Database ADDM(analyzes all RAC instances), Instance ADDM(analyzes one RAC instance), Partial ADDM(analyzes some RAC instances)
Analyze: [Global resources and global I/O], [High-Load SQL], [Contention across the instances], [Global cache interconnect traffic], [network latency issues],
[skews in instance report times]
* Packages *
DBMS_ADDM.analyze_db(task_name, begin_snapshot, end_snapshot)
#Creates an ADDM task global for all RAC instances
DBMS_ADDM.analyze_inst
#Creates an ADDM task for a single instance
DBMS_ADDM.analyze_partial
#Creates an ADDM task for some instances
DBMS_ADDM.delete
#delete an ADDM task
DBMS_ADDM.get_report(task_name)
#Gets a report of an ADDM task
DBMS_ADDM.delete_finding_directive;
DBMS_ADDM.insert_finding_directive(task=>null, directive=>'SGA Directive', finding=>'Undersized SGA', minactivesessions=>5, minpctimpact=>50)
#TASK : null means all subsequently created tasks
#FINDING: the finding in DBA_ADVISOR_FINDING_NAMES
DBMS_ADDM.delete_sql_directive; DBMS_ADDM.insert_sql_directive
#Limit action
DBMS_ADDM.delete_segment_directive;
#Limit action
DBMS_ADDM.insert_segment_directive
#Prevent ADDM from reporting a schema, segment, subsegment or object
DBMS_ADDM.delete_parameter_directive; DBMS_ADDM.insert_parameter_directive(param_name)
#Prevent ADDM from altering specified parameter
DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER (advisor_name, parameter, value)
#Specify the mode in which to run ADDM [Database | Instance | Partial]
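A hedged sketch of a database-wide ADDM run over a snapshot range (task name and snapshot ids are illustrative):
declare
  l_task varchar2(100) := 'ADDM_DB_100_101';
begin
  dbms_addm.analyze_db(l_task, 100, 101);   -- begin/end snapshot ids
end;
/
set long 1000000
select dbms_addm.get_report('ADDM_DB_100_101') from dual;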
* Views *
DBA_ADVISOR_FINDINGS(finding_name) #USER_... available. Shows which findings occur most frequently in the db
DBA_ADDM_TASKS #USER_... available. Show all executed ADDM tasks
DBA_ADDM_INSTANCES #USER_... available. Instance level info for completed ADDM tasks
DBA_ADDM_FINDINGS #USER_... available. Extends DBA_ADVISOR_FINDINGS
DBA_ADVISOR_FINDING_NAMES #USER_... available. Lists all registered finding names
DBA_ADVISOR_RECOMMENDATIONS(filtered[Y | N])
DBA_ADVISOR_ACTIONS(filtered[Y | N])
Automatic Memory Management
sga_target=0 #static; pga_aggregate_target=0 #static; memory_target=900M #dynamic; memory_max_target=2000M;
show parameter target; #archive_lag_target=0, db_flashback_retention_target=1440, fast_start_io_target=0, fast_start_mttr_target=0, memory_max_target=900M,
memory_target=900M, pga_aggregate_target=0; sga_target=0
sga_target | pga_aggregate_target | memory_target | memory_max_target | result
0          | 0                    | 900M          | 999M              | AMM enabled, defaults 60% to SGA and 40% to PGA
50M        | 100M                 | 900M          | 999M              | AMM enabled, SGA minimum 50M, PGA minimum 100M
50M        | 0                    | 900M          | 999M              | AMM enabled, SGA minimum 50M, PGA minimum is memory_target-50M
0          | 100M                 | 900M          | 999M              | AMM enabled, SGA minimum is memory_target-100M, PGA minimum 100M
memory_target=900M, memory_max_target=0 or null: AMM enabled, memory_max_target defaults to memory_target (SGA is not autotuned, PGA is)
sga_target=50M only: only SGA subcomponents are automatically tuned, PGA is autotuned, memory_max_target=0
memory_target=0 or null, memory_max_target=999M: AMM disabled, memory_target=0 is the default value
* Views *
V$MEMORY_DYNAMIC_COMPONENTS(component, current_size, user_specified_size)
V$MEMORY_CURRENT_RESIZE_OPS #current resizes in progress
Optimizer Statistics Collection
Gathered automatically during the nightly maintenance window, with precedence for the objects that most need statistics. Preferences can be set at table, schema, database and global level.
Both global(incremental, but not with histograms) and local statistics for partitioned tables. The statistics job is atomic at schema level
Statistics are collected for user-defined functions and function-based indexes
A pending statistic is not yet available
* Options, Statistics preferences *
publish[true | false]: def. true. Stores statistics in the data dictionary as current statistics
stale_percent: def. 10%. Threshold level for an object to be stale
* Granularity * auto, global, global and partition, all, partition, subpartition
* Extended Statistics * multi column or expression. Collects number of distinct values, density, number of nulls, frequency histograms

Capture statistics, Test and Publish
1) alter session set optimizer_use_pending_statistics=false;
2) set PUBLISH option to false using SET_TABLE_PREFS procedure
#subsequent statistics will be gathered as pending
3) gather the statistics for table;
4) alter session set optimizer_use_pending_statistics=true;
5) test the statistics
6) execute the PUBLISH_PENDING_STATS procedure
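A compact sketch of the same flow for one table (SCOTT.EMP is illustrative):
begin
  dbms_stats.set_table_prefs('SCOTT', 'EMP', 'PUBLISH', 'FALSE');  -- new stats become pending
  dbms_stats.gather_table_stats('SCOTT', 'EMP');
end;
/
alter session set optimizer_use_pending_statistics = true;
-- ... test the workload, then publish ...
exec dbms_stats.publish_pending_stats('SCOTT', 'EMP')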

Capture global statistics for a partitioned table INCREMENTALLY, only on last partition
Incremental works if granularity=global and table marked both incremental and publish, estimate_percent=>auto_sample_size[always recommended]
1) set INCREMENTAL value to true use SET_TABLE_PREFS procedure
2) DBMS_STATS.gather_table_stats(ownname=>'ALDO', tabname=>'persone', granularity=>'GLOBAL')
* Packages *
DBMS_STATS.gather_database_stats_job_proc
DBMS_STATS.set_prefs
#set preferences at various levels
DBMS_STATS.set_table_prefs(schema, table, pref_name, pref_value)
#PUBLISH=[true(def.) | false] FALSE: statistics gathered will be pending
#INCREMENTAL=[true | false(def.)] TRUE: statistics will not be collected for partitions that have not changed
DBMS_STATS.set_schema_prefs
DBMS_STATS.resume_gather_stats
DBMS_STATS.get_prefs(['estimate_percent' | 'stale_percent' | 'publish'], schema, table)
DBMS_STATS.publish_pending_stats(schema[null(def.)], table[null(def.)])
#pending stats became public
DBMS_STATS.delete_pending_stats(schema[null(def.)], table[null(def.)])
DBMS_STATS.export_pending_stats(schema[null(def.)], table[null(def.)])
DBMS_STATS.create_extended_stats(ownname, tabname, extension=>'(name, surname)')
# select DBMS_STATS.create_extended_stats(null, 'persone', extension=>'(name, surname)') from dual;
#collect extended statistics. A column group is created. A virtual hidden column is created with a system-defined name
DBMS_STATS.drop_extended_stats(ownname, tabname, extension=>'name, surname')
DBMS_STATS.show_extended_stats(ownname, tabname, extension=>'name, surname') return varchar2
#returns the name of the statistics entries
DBMS_STATS.gather_table_stats(ownname=>null, tabname=>'pippe', method_opt=>'for all columns size skewonly for columns(lower(a),b) skewonly')
#coll. ext. stats
* Views *
DBA_TAB_STATISTICS, DBA_IND_STATISTICS store published statistics
DBA_TAB_STAT_PREFS(owner, table_name, preference_name, preference_value) #shows current settings for statistics preferences
DBA_TAB_PENDING_STATS
DBA_STAT_EXTENSIONS(extension_name, extension)
Result Cache
In the SGA(shared pool). Affected by memory_target, sga_target, shared_pool_size. Least-recently-used algorithm to age out results.
Can cache flashback SQL and query fragments. Used both for SQL queries and deterministic PL/SQL functions.
/*+ Hint */: result_cache and no_result_cache can be used in an inline view
Result Cache Memory pool
SQL Query Result Cache
Pl/SQL Function Result Cache
client side
* Restrictions *
can't use the SQL Query Result Cache on: temporary tables, dictionary tables, non-deterministic PL/SQL functions(use the deterministic keyword),
[currval, nextval, sysdate, systimestamp, current_date, current_timestamp, localtimestamp, userenv, sys_context, sys_guid], data undergoing modification, subqueries,
pipelined functions; functions can't have OUT or IN OUT parameters, no anonymous blocks, not in packages with invoker's rights, no IN params of type LOB, REF CURSOR,
RECORD, COLLECTION, OBJECT
create or replace function pippo(id number) return pp_record RESULT_CACHE RELIES_ON (persone) is ...
#RELIES_ON means that cached results will become invalid when the structure of any database object on which they depend is changed
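A small illustration of the query-level hint (table is illustrative); the first execution populates the cache, later executions read the cached result:
select /*+ result_cache */ department_id, count(*)
from   hr.employees
group  by department_id;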
* Packages *
DBMS_RESULT_CACHE.memory_report
#Reports cache enabled, block size, max cache size, max result size, memory [total | fixed | dynamic]
DBMS_RESULT_CACHE.status
#enabled/disabled, in RAC shows if synchronizing
DBMS_RESULT_CACHE.flush return boolean
#returns true on successful removal; disable/close the cache with DBMS_RESULT_CACHE.bypass(true) before flushing
DBMS_RESULT_CACHE.bypass([true | false])
#bypass the cache
* Views *
V$RESULT_CACHE_STATISTICS #list cache settings and memory usage stats
V$RESULT_CACHE_OBJECTS(type, status[new | published | bypass | expired | invalid], name) #list cached objects and attributes
V$RESULT_CACHE_DEPENDENCY
V$RESULT_CACHE_MEMORY #show all memory blocks and statistics

Client Result Cache
OCI based, caches only top-level SQL statements, no query fragments; excellent for lookup tables. Independent from the server result cache
client-side sqlnet.ora parameters override the db server init params
* Restrictions *
no views, no remote objects, no complex types, no flashback, no SQL with pl/sql, no VPD policies on the tables
* Views *
CLIENT_RESULT_CACHE_STAT$ #show client settings and usage stats
Adaptive Cursor Sharing
Automatic feature, can't be turned off, no init params. Works the same for bind vars and literals.
10g: bind variables not good for plan generation, the first execution determines the plan forever(bind peeking)
11g: generates new child cursors
Bind-sensitive cursor: bind values are so different that a new plan may be needed. New plans are added to the plan history but not used until the db has proved better
performance. See optimizer_capture_sql_plan_baselines. Once the plan is changed the cursor becomes bind-aware, and more plans will be used depending on the bind values
* Views *
V$SQL(is_bind_sensitive, is_bind_aware)
V$SQL_CS_HISTOGRAM #shows the distribution of execution counts across the execution history histogram
V$SQL_CS_SELECTIVITY #shows selectivity ranges stored in cursor for predicates with bind variables
V$SQL_CS_STATISTICS #stats for different bind sets, buffer gets and CPU time
Flashback Data Archive FDA - Oracle Total Recall
Requires Automatic Undo Management. Archivelog is not required. Data archive tablespaces must use ASSM
Store data changes compressed for any length of time you want in one or more tbs in an internal history table for each table tracked.
Only for the tables you choose. You can't modify data in the flashback archive, only purge it.
Table/s must be enabled for tracking(flashback archiving).
Can set different retention for different tables, when in different archives, assigning different flashback data archives.
Retention at table level, must be specified for data archive. An internal historical table is created with few more columns and range partitioned.
FBDA process, sleep time automatically adjusted. Can use flashback SQL(as of).
Old data is automatically purged. May create default archive(not required)
* Differences from Flashback Database FD * FD may take the db back in time, while FDA is read only. FDA is online, FD is offline.
FD works at the db level while FDA works at the table level.
* Limitations *
no [drop | rename | modify] column, yes add column. no partition operations. no LONG to LOB conversion. no [drop | truncate] table
SQL>grant flashback archive ADMINISTER to user;
#ADMINISTER allows [create flashback archive, alter flashback archive, drop flashback archive]
SQL>grant flashback archive on t1 to user;
#user also need select privs, execute on DBMS_FLASHBACK
SQL>create flashback archive f1 tablespace tbs1 quota 2000m RETENTION 4 year;
#RETENTION must be specified for the flashback archive
alter flashback archive f1 purge all; alter flashback archive f1 purge before timestamp(systimestamp - interval '2' day);
alter flashback archive f1 purge before scn 1233
alter table t1 no flashback archive;
#disables archiving for the table, all its flashback data is removed
alter flashback archive fla1 modify retention 1 year
select * from t1 versions between timestamp to_timestamp(...) and maxvalue
* Views *
DBA_FLASHBACK_ARCHIVE(flashback_archive_name, retention_in_days, status), DBA_FLASHBACK_ARCHIVE_TS(quota_in_mb), DBA_FLASHBACK_ARCHIVE_TABLES

Flashback Transaction Backout
database in archivelog, undo and redo data are needed. supplemental logging and primary key supplemental logging at db level must be enabled.
Transaction dependencies:
write-after-write: dependent transaction modifies data modified by parent transaction
primary key constraint: dependent transaction reinserts the primary key deleted by parent transaction
alter database add supplemental log data;
alter database add supplemental log data (primary key) columns;
grant execute on dbms_flashback to hr;
grant select any transaction to hr;
grant insert, update, delete on pippo to hr;
#grant DML on specific involved tables
* Packages *
DBMS_FLASHBACK.transaction_backout(numtxns, xids(array), options[see below...], scnhint(SCN at the start of the transaction) or timehint)
#no commit inside!
nocascade: (def.) no dependent transactions expected
cascade: dependent transactions are backed out before the parent
nocascade_force: dependent transactions are ignored
noconflict_only: only those rows with no conflict in parent transactions
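A hedged sketch backing out a single transaction by XID (the XID value is illustrative); commit afterwards to make the backout permanent:
declare
  l_xids sys.xid_array := sys.xid_array('0A001200B4050000');  -- hypothetical transaction id
begin
  dbms_flashback.transaction_backout(
    numtxns => 1,
    xids    => l_xids,
    options => dbms_flashback.nocascade);  -- default: fail if dependent transactions exist
end;
/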
* Views *
DBA_FLASHBACK_TXN_STATE #backed-out transactions
DBA_FLASHBACK_TXN_REPORT
Oracle SecureFiles
10g lob->11g BasicFiles
transparent encryption. variable chunk size max 64M stored next to one another. redo generated only for changed data.
new client/server network for faster data transfer.
Every option may be changed later with ALTER TABLE. The LONG API can't be used to configure SecureFile settings.
COMPATIBLE must be at least 11.0. An error is raised if you use SecureFile capabilities on a BasicFile.
1) DEDUPLICATION detects and avoids duplicated data; Advanced Compression is required, only one copy is saved, deduplication is disabled by default
2) COMPRESSION [HIGH | MEDIUM(def.)], compression is disabled by default
3) ENCRYPTION requires the Advanced Security Option [AES128 | 3DES168 | AES192 | AES256], encryption is disabled by default
Older storage clauses(chunk, pctversion, freelists) are not necessary with SecureFiles; instead use these:
MAXSIZE: maximum LOB segment size
RETENTION: version control policy
MAX: old versions are kept until MAXSIZE is reached
MIN: old versions are retained for at least the time specified
AUTO: default.
NONE: old version space is reused as much as possible
create table pippo(l clob) LOB(l)STORE AS [securefile | lob] (COMPRESS [HIGH | MEDIUM] deduplicate lob [CACHE | NOCACHE | CACHE READS] nologging);
#STORE AS is optional; the default is the traditional LOB (BasicFile).
#CACHE will place lob pages in the buffer cache for speedier access, default is NOCACHE, CACHE READS cache only during reads not for writes.
#NOLOGGING will not generate any redo
#COMPRESS [HIGH | MEDIUM(def.)]
#(NO) KEEP DUPLICATES #specify whether the database will store duplicates for the LOB column
create table pippo (doc clob)lob(doc) store as securefile(ENCRYPT);
create table pippo (doc clob encrypt using 'AES256')lob(doc) store as securefile;

Migrating to SecureFiles
* Partition Exchange *
Ensure you have enough space for largest partition, long maintenance window, segment offline
* Online Redefinition (recommended) *
No segment offline, migration in parallel, indexes must be rebuilt, double the space is required.
Materialized view must be refreshed if the source table is redefined.
create table tab1(id number, c clob) lob(c) store as basicfile;
create table tab2(id number, c clob) lob(c) store as securefile;
declare
  error_count pls_integer := 0;
begin
  dbms_redefinition.start_redef_table('scott', 'tab1', 'tab2', 'id id, c c');
  dbms_redefinition.copy_table_dependents('scott', 'tab1', 'tab2', 1, true, true, true, false, error_count);
  dbms_redefinition.finish_redef_table('scott', 'tab1', 'tab2');
end;
* Packages *
DBMS_LOB.getoptions
#return lob settings
DBMS_LOB.setoptions
DBMS_LOB.get_deduplicated_regions
DBMS_SPACE.space_usage
#determine disk space used by all lob in segment, only for ASSM segments
* Views *
DBA_SPACE, DBA_LOB
Locking Enhancements
By default a DDL statement will fail if it can't get an immediate DML lock on the table.
lock table pippo in exclusive mode [nowait | wait(def.) (optional)]
#if the mode is omitted it will wait indefinitely
The following statements will not require a lock:
create index online;
create materialized view log;
alter table enable constraint novalidate;
Minimal invalidation - Fine Grained Dependency Management
Only logically affected objects such as views and synonyms are invalidated during online redefinition; triggers have not been enhanced and will still be invalidated.
Parameter file creation from Memory
create [spfile | pfile] from memory;
#will have values for about 150 init parameters even if not previously set
Hot Patching
Supported on RAC
opatch enables, disables and installs patches.
Currently only for Linux and Solaris. Consumes extra memory depending on the number of currently running Oracle processes.
At least one OS page of memory(4-8kb) per Oracle process is required.
opatch query -is_online_patch
#determine if patch is hot
Invisible Indexes
Invisible to the optimizer, excluded from plans. Can be changed between invisible/visible at any time.
create index inv1 on pippo(nome) invisible;
alter index i1 in/visible;
* Views *
DBA_INDEXES(visibility)
Shrinking Temporary Tablespaces
May shrink online temporary tablespaces and single tempfiles. Shrink to a minimum of 1MB(def.)
alter tablespace temp shrink space [keep 100m];
#Not equally distributed along tempfiles
alter tablespace temp shrink tempfile '/pippo.dbf' [keep 100m]
* Views *
DBA_TEMP_FREE_SPACE(tablespace_name, free_space) #FREE_SPACE in bytes. Space unallocated plus allocated space available to be reused
Tablespace Option for Temporary tables
You may manually specify a temporary tablespace for a global temporary table; its indexes are created in the same tablespace
PL/SQL and Java Automatic Native Compilation
see parameter plsql_code_type and plsql_optimize_level
alter procedure pioppo compile plsql_code_type=native;
* Recompiling a Database for Pl/Sql Native Compilation *
1) shutdown immediate; 2) plsql_code_type=native, plsql_optimize_level=3 3) startup upgrade 4) @?/rdbms/admin/dbsupgnv.sql 5) restart and utlrp.sql
* Views *
DBA_PLSQL_OBJECT_SETTINGS(plsql_code_type)
OLTP Table Compression
Writes are not degraded and reads are improved because data is read directly in compressed form. Disk space and memory use are minimized, CPU use is increased.
May compress data during DML, not only when bulk loading(10g) or when creating with CTAS(10g). Holes made by deletes are eliminated
Compression may be enabled for tables, partitions and tablespaces.
A block is compressed when reaches PCTFREE.
create table pippo(...) compress; #normal compress
create table pippo(...) compress for direct_load operations #bulk-load compression for data warehouse workload tables, not for OLTP
create table pippo(...) compress for all operations #compress for dml operations
alter table pippo compress for all operations #An existing table may be altered to compress but existing data remain uncompressed.
* Views *
DBA_TABLES(compression, compress_for)
Direct NFS Client
Simplifies manageability across multiple platforms; performance is increased but kernel NFS is not completely bypassed since it is still used for network communication
Avoids the kernel NFS layer. Load balancing. You may control the I/O path. The OS must perform the mount. NFS version 3 protocol implemented in the Oracle RDBMS kernel.
If there is a mismatch between oranfstab and the OS mount points, the NFS mount is stopped. To remove an NFS path in use, restart the database.
Direct NFS client will be disabled if you delete oranfstab, modify its settings or replace the ODM NFS library with the stub libodm11.so
If the database can't open the NFS server using Direct NFS it will fall back to OS kernel NFS
* Configuring *
1) Specify the mount point in /etc/mtab (first default location) and be sure the OS performs the mount
2) (optional) edit oranfstab for Oracle specific options such as additional paths to a mount point.
Mount point search order:
(optional) $ORACLE_HOME/dbs/oranfstab
(optional) /etc/oranfstab
(when no oranfstab) /etc/mtab
3) Enable Direct NFS by replacing the standard ODM library $ORACLE_HOME/lib/libodm11.so with the ODM NFS library libnfsodm11.so
oranfstab format:
server: TestServer1 #nfs server name
path: 130.33.34.11 #up to 4 network paths if the first fail, load balancing
export: /vol/oradata1 #exported path from NFS server
mount: /mnt/oradata1 #local mount point
* Views *
V$DNFS_STATS #show performance stats for Direct NFS
V$DNFS_SERVERS #show servers accessed by Direct NFS
V$DNFS_FILES #files currently open through Direct NFS
V$DNFS_CHANNEL #open network paths