Tuesday, March 3, 2009

SQL Tuning Advisor

This article discusses the new features that automate the tuning of SQL statements in Oracle 10g:

Overview

SQL Tuning Advisor

Managing SQL Profiles

SQL Tuning Sets

Useful Views

Overview

In its normal mode the query optimizer needs to make decisions about execution plans in a very short time. As a result it may not always be able to obtain enough information to make the best decision. Oracle 10g allows the optimizer to run in tuning mode where it can gather additional information and make recommendations about how specific statements can be tuned further. This process may take several minutes for a single statement so it is intended to be used on high-load resource-intensive statements.

In tuning mode the optimizer performs the following analysis:

Statistics Analysis - The optimizer recommends the gathering of statistics on objects with missing or stale statistics. Additional statistics for these objects are stored in an SQL profile.

SQL Profiling - The optimizer may be able to improve performance by gathering additional statistics and altering session-specific parameters such as the OPTIMIZER_MODE. If such improvements are possible, the information is stored in an SQL profile. If accepted, this information can then be used by the optimizer when running in normal mode. Unlike a stored outline, which fixes the execution plan, an SQL profile may still be of benefit when the contents of the table alter drastically. Even so, it's sensible to update profiles periodically. SQL profiling is not performed when the tuning optimizer is run in limited mode.

Access Path Analysis - The optimizer investigates the effect of new or modified indexes on the access path. Its index recommendations relate to a specific statement, so where necessary it will also suggest the use of the SQL Access Advisor to check the impact of these indexes on a representative SQL workload.

SQL Structure Analysis - The optimizer suggests alternatives for SQL statements that contain structures that may impact performance. Implementing these suggestions requires human intervention to check their validity.

The automatic SQL tuning features are accessible from Enterprise Manager on the "Advisor Central" page or from PL/SQL using the DBMS_SQLTUNE package. This article will focus on the PL/SQL API, as the Enterprise Manager interface is reasonably intuitive.

SQL Tuning Advisor

In order to access the SQL tuning advisor API a user must be granted the ADVISOR privilege:

CONN sys/password AS SYSDBA
GRANT ADVISOR TO scott;
CONN scott/tiger

The first step when using the SQL tuning advisor is to create a new tuning task using the CREATE_TUNING_TASK function. The statements to be analyzed can be retrieved from the Automatic Workload Repository (AWR), the cursor cache, an SQL tuning set or specified manually:

SET SERVEROUTPUT ON

-- Tuning task created for a specific statement from the AWR.
DECLARE
l_sql_tune_task_id VARCHAR2(100);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
begin_snap => 764,
end_snap => 938,
sql_id => '19v5guvsgcd1v',
scope => DBMS_SQLTUNE.scope_comprehensive,
time_limit => 60,
task_name => '19v5guvsgcd1v_AWR_tuning_task',
description => 'Tuning task for statement 19v5guvsgcd1v in AWR.');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

-- Tuning task created for a specific statement from the cursor cache.
DECLARE
l_sql_tune_task_id VARCHAR2(100);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
sql_id => '19v5guvsgcd1v',
scope => DBMS_SQLTUNE.scope_comprehensive,
time_limit => 60,
task_name => '19v5guvsgcd1v_tuning_task',
description => 'Tuning task for statement 19v5guvsgcd1v.');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

-- Tuning task created from an SQL tuning set.
DECLARE
l_sql_tune_task_id VARCHAR2(100);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
sqlset_name => 'test_sql_tuning_set',
scope => DBMS_SQLTUNE.scope_comprehensive,
time_limit => 60,
task_name => 'sqlset_tuning_task',
description => 'Tuning task for an SQL tuning set.');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

-- Tuning task created for a manually specified statement.
DECLARE
l_sql VARCHAR2(500);
l_sql_tune_task_id VARCHAR2(100);
BEGIN
l_sql := 'SELECT e.*, d.* ' ||
'FROM emp e JOIN dept d ON e.deptno = d.deptno ' ||
'WHERE NVL(empno, ''0'') = :empno';

l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
sql_text => l_sql,
bind_list => sql_binds(anydata.ConvertNumber(100)),
user_name => 'scott',
scope => DBMS_SQLTUNE.scope_comprehensive,
time_limit => 60,
task_name => 'emp_dept_tuning_task',
description => 'Tuning task for an EMP to DEPT join query.');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

If the TASK_NAME parameter is specified, its value is returned as the SQL tune task identifier. If omitted, a system-generated name like "TASK_1478" is returned. If the SCOPE parameter is set to scope_limited, the SQL profiling analysis is omitted. The TIME_LIMIT parameter simply restricts the time the optimizer can spend compiling the recommendations.
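As an illustration of these parameters (a sketch, not part of the original set of examples), the statement from the cursor cache could be registered with SCOPE set to scope_limited and no TASK_NAME, in which case SQL profiling is skipped and the system-generated name is returned:

SET SERVEROUTPUT ON
DECLARE
l_task_name VARCHAR2(100);
BEGIN
-- Limited scope: statistics, access path and structure checks only; no SQL profiling.
l_task_name := DBMS_SQLTUNE.create_tuning_task (
sql_id => '19v5guvsgcd1v',
scope => DBMS_SQLTUNE.scope_limited,
time_limit => 30);
DBMS_OUTPUT.put_line('Generated task name: ' || l_task_name);
END;
/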

The remaining examples will reference the tuning task for the manually specified statement (emp_dept_tuning_task), as it has no external dependencies other than the SCOTT schema. The NVL in the SQL statement was put in to provoke a reaction from the optimizer. In addition, we can delete the statistics from one of the tables to provoke it even more:

EXEC DBMS_STATS.delete_table_stats('SCOTT','EMP');

With the tuning task defined the next step is to execute it using the EXECUTE_TUNING_TASK procedure:

EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => 'emp_dept_tuning_task');

During the execution phase you may wish to pause and restart the task, cancel it or reset the task to allow it to be re-executed:

-- Interrupt and resume a tuning task.
EXEC DBMS_SQLTUNE.interrupt_tuning_task (task_name => 'emp_dept_tuning_task');
EXEC DBMS_SQLTUNE.resume_tuning_task (task_name => 'emp_dept_tuning_task');

-- Cancel a tuning task.
EXEC DBMS_SQLTUNE.cancel_tuning_task (task_name => 'emp_dept_tuning_task');

-- Reset a tuning task allowing it to be re-executed.
EXEC DBMS_SQLTUNE.reset_tuning_task (task_name => 'emp_dept_tuning_task');

The status of the tuning task can be monitored using the DBA_ADVISOR_LOG view:

SELECT task_name, status FROM dba_advisor_log WHERE owner = 'SCOTT';

TASK_NAME STATUS
------------------------------ -----------
emp_dept_tuning_task COMPLETED

1 row selected.

Once the tuning task has executed successfully the recommendations can be displayed using the REPORT_TUNING_TASK function:

SET LONG 10000;
SET PAGESIZE 1000
SET LINESIZE 200
SELECT DBMS_SQLTUNE.report_tuning_task('emp_dept_tuning_task') AS recommendations FROM dual;
SET PAGESIZE 24

In this case the output looks like this:

RECOMMENDATIONS
--------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name : emp_dept_tuning_task
Scope : COMPREHENSIVE
Time Limit(seconds): 60
Completion Status : COMPLETED
Started at : 05/06/2004 09:29:13
Completed at : 05/06/2004 09:29:15

-------------------------------------------------------------------------------
SQL ID : 0wrmfv2yvswx1
SQL Text: SELECT e.*, d.* FROM emp e JOIN dept d ON e.deptno = d.deptno
WHERE NVL(empno, '0') = :empno

-------------------------------------------------------------------------------
FINDINGS SECTION (2 findings)
-------------------------------------------------------------------------------

1- Statistics Finding
---------------------
Table "SCOTT"."EMP" and its indices were not analyzed.

Recommendation
--------------
Consider collecting optimizer statistics for this table and its indices.
execute dbms_stats.gather_table_stats(ownname => 'SCOTT', tabname =>
'EMP', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE)

Rationale
---------
The optimizer requires up-to-date statistics for the table and its indices
in order to select a good execution plan.

2- Restructure SQL finding (see plan 1 in explain plans section)
----------------------------------------------------------------
The predicate NVL("E"."EMPNO",0)=:B1 used at line ID 2 of the execution plan
contains an expression on indexed column "EMPNO". This expression prevents
the optimizer from selecting indices on table "SCOTT"."EMP".

Recommendation
--------------
Rewrite the predicate into an equivalent form to take advantage of
indices. Alternatively, create a function-based index on the expression.

Rationale
---------
The optimizer is unable to use an index if the predicate is an inequality
condition or if there is an expression or an implicit data type conversion
on the indexed column.

-------------------------------------------------------------------------------
EXPLAIN PLANS SECTION
-------------------------------------------------------------------------------

1- Original
-----------
Plan hash value: 1863486531

----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 107 | 4 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 1 | 107 | 4 (0)| 00:00:01 |
| 2 | TABLE ACCESS FULL | EMP | 1 | 87 | 3 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DEPT | 1 | 20 | 1 (0)| 00:00:01 |
| 4 | INDEX UNIQUE SCAN | PK_DEPT | 1 | | 0 (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Note
-----
- dynamic sampling used for this statement

-------------------------------------------------------------------------------


1 row selected.

Once the tuning session is over the tuning task can be dropped using the DROP_TUNING_TASK procedure:

BEGIN
DBMS_SQLTUNE.drop_tuning_task (task_name => '19v5guvsgcd1v_AWR_tuning_task');
DBMS_SQLTUNE.drop_tuning_task (task_name => '19v5guvsgcd1v_tuning_task');
DBMS_SQLTUNE.drop_tuning_task (task_name => 'sqlset_tuning_task');
DBMS_SQLTUNE.drop_tuning_task (task_name => 'emp_dept_tuning_task');
END;
/

Managing SQL Profiles

To manage SQL profiles a user needs the following privileges:

CONN sys/password AS SYSDBA
GRANT CREATE ANY SQL PROFILE TO scott;
GRANT DROP ANY SQL PROFILE TO scott;
GRANT ALTER ANY SQL PROFILE TO scott;
CONN scott/tiger

If the recommendations of the SQL tuning advisor include a suggested profile you can choose to accept it using the ACCEPT_SQL_PROFILE procedure:

SET SERVEROUTPUT ON
DECLARE
l_sql_tune_task_id VARCHAR2(20);
BEGIN
l_sql_tune_task_id := DBMS_SQLTUNE.accept_sql_profile (
task_name => 'emp_dept_tuning_task',
name => 'emp_dept_profile');
DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

The NAME parameter is used to specify a name for the profile. If it is not specified a system generated name will be used.

The STATUS, NAME, DESCRIPTION, and CATEGORY attributes of an SQL profile can be altered using the ALTER_SQL_PROFILE procedure:

BEGIN
DBMS_SQLTUNE.alter_sql_profile (
name => 'emp_dept_profile',
attribute_name => 'STATUS',
value => 'DISABLED');
END;
/

Existing SQL profiles can be dropped using the DROP_SQL_PROFILE procedure:

BEGIN
DBMS_SQLTUNE.drop_sql_profile (
name => 'emp_dept_profile',
ignore => TRUE);
END;
/

The IGNORE parameter prevents errors being reported if the specified profile does not exist.

SQL Tuning Sets

An SQL tuning set is a group of statements along with their execution context. These can be created automatically via Enterprise Manager or manually provided you have the necessary privileges:

CONN sys/password AS SYSDBA
GRANT ADMINISTER ANY SQL TUNING SET TO scott;
CONN scott/tiger

An SQL tuning set is created using the CREATE_SQLSET procedure:

BEGIN
DBMS_SQLTUNE.create_sqlset (
sqlset_name => 'test_sql_tuning_set',
description => 'A test SQL tuning set.');
END;
/

Statements are added to the set using the LOAD_SQLSET procedure which accepts a REF CURSOR of statements retrieved using one of the following pipelined functions:

SELECT_WORKLOAD_REPOSITORY - Retrieves statements from the Automatic Workload Repository (AWR).

SELECT_CURSOR_CACHE - Retrieves statements from the cursor cache.

SELECT_SQLSET - Retrieves statements from another SQL tuning set.

The following are examples of their usage:

-- Load the SQL set from the Automatic Workload Repository (AWR).
DECLARE
l_cursor DBMS_SQLTUNE.sqlset_cursor;
BEGIN
OPEN l_cursor FOR
SELECT VALUE(p)
FROM TABLE (DBMS_SQLTUNE.select_workload_repository (
765, -- begin_snap
766, -- end_snap
NULL, -- basic_filter
NULL, -- object_filter
NULL, -- ranking_measure1
NULL, -- ranking_measure2
NULL, -- ranking_measure3
NULL, -- result_percentage
10) -- result_limit
) p;

DBMS_SQLTUNE.load_sqlset (
sqlset_name => 'test_sql_tuning_set',
populate_cursor => l_cursor);
END;
/

-- Load the SQL set from the cursor cache.
DECLARE
l_cursor DBMS_SQLTUNE.sqlset_cursor;
BEGIN
OPEN l_cursor FOR
SELECT VALUE(p)
FROM TABLE (DBMS_SQLTUNE.select_cursor_cache (
NULL, -- basic_filter
NULL, -- object_filter
NULL, -- ranking_measure1
NULL, -- ranking_measure2
NULL, -- ranking_measure3
NULL, -- result_percentage
1) -- result_limit
) p;

DBMS_SQLTUNE.load_sqlset (
sqlset_name => 'test_sql_tuning_set',
populate_cursor => l_cursor);
END;
/

-- Create a new set and load it from the existing one.
DECLARE
l_cursor DBMS_SQLTUNE.sqlset_cursor;
BEGIN
DBMS_SQLTUNE.create_sqlset(
sqlset_name => 'test_sql_tuning_set_2',
description => 'Another test SQL tuning set.');

OPEN l_cursor FOR
SELECT VALUE(p)
FROM TABLE (DBMS_SQLTUNE.select_sqlset (
'test_sql_tuning_set', -- sqlset_name
NULL, -- basic_filter
NULL, -- object_filter
NULL, -- ranking_measure1
NULL, -- ranking_measure2
NULL, -- ranking_measure3
NULL, -- result_percentage
NULL) -- result_limit
) p;

DBMS_SQLTUNE.load_sqlset (
sqlset_name => 'test_sql_tuning_set_2',
populate_cursor => l_cursor);
END;
/

The contents of an SQL tuning set can be displayed using the SELECT_SQLSET function:

SELECT *
FROM TABLE(DBMS_SQLTUNE.select_sqlset ('test_sql_tuning_set'));

References can be added to a set to indicate its usage by a client using the ADD_SQLSET_REFERENCE function. The resulting reference ID can be used to remove it using the REMOVE_SQLSET_REFERENCE procedure:

DECLARE
l_ref_id NUMBER;
BEGIN
-- Add a reference to a set.
l_ref_id := DBMS_SQLTUNE.add_sqlset_reference (
sqlset_name => 'test_sql_tuning_set',
reference => 'Used for manual tuning by SQL*Plus.');

-- Delete the reference.
DBMS_SQLTUNE.remove_sqlset_reference (
sqlset_name => 'test_sql_tuning_set',
reference_id => l_ref_id);
END;
/

The UPDATE_SQLSET procedure is used to update specific string (MODULE and ACTION) and number (PRIORITY and PARSING_SCHEMA_ID) attributes of specific statements within a set:

BEGIN
DBMS_SQLTUNE.update_sqlset (
sqlset_name => 'test_sql_tuning_set',
sql_id => '19v5guvsgcd1v',
attribute_name => 'ACTION',
attribute_value => 'INSERT');
END;
/

The contents of a set can be trimmed down or deleted completely using the DELETE_SQLSET procedure:

BEGIN
-- Delete statements with less than 50 executions.
DBMS_SQLTUNE.delete_sqlset (
sqlset_name => 'test_sql_tuning_set',
basic_filter => 'executions < 50');

-- Delete all statements.
DBMS_SQLTUNE.delete_sqlset (
sqlset_name => 'test_sql_tuning_set');
END;
/

Tuning sets can be dropped using the DROP_SQLSET procedure:

BEGIN
DBMS_SQLTUNE.drop_sqlset (sqlset_name => 'test_sql_tuning_set');
DBMS_SQLTUNE.drop_sqlset (sqlset_name => 'test_sql_tuning_set_2');
END;
/

Useful Views

Useful views related to automatic SQL tuning include the following (a sample query against one of them is sketched after the list):

DBA_ADVISOR_TASKS

DBA_ADVISOR_FINDINGS

DBA_ADVISOR_RECOMMENDATIONS

DBA_ADVISOR_RATIONALE

DBA_SQLTUNE_STATISTICS

DBA_SQLTUNE_BINDS

DBA_SQLTUNE_PLANS

DBA_SQLSET

DBA_SQLSET_BINDS

DBA_SQLSET_STATEMENTS

DBA_SQLSET_REFERENCES

DBA_SQL_PROFILES

V$SQL

V$SQLAREA

V$ACTIVE_SESSION_HISTORY
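As a quick illustration (a sketch, not from the original examples), the findings recorded for the earlier tuning task can be listed from DBA_ADVISOR_FINDINGS using the task name from the examples above:

SELECT type, message
FROM dba_advisor_findings
WHERE task_name = 'emp_dept_tuning_task';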

Monday, March 2, 2009

OCR Manual Backup option available with 11g

11G: OCR Manual Backups

The OCR is a file that manages cluster and Oracle Real Application Clusters (Oracle RAC) database configuration information. The OCR contains information about the cluster node list, instance-to-node mapping information, and Oracle Clusterware resource profiles for applications.

This is a very important component, along with the voting disk, and loss of it will require you to re-install the Clusterware software.

Oracle 10g provides automatic backups of the OCR along with manual export of OCR files.

Starting from Oracle 11g, we can also take manual backups of the OCR. So in 11g we can have the following backups:

1) Automatic backups—Oracle Clusterware automatically creates OCR backups every four hours. At any one time, Oracle always retains the last three backup copies of the OCR. The CRSD process that creates the backups also creates and retains an OCR backup for each full day and at the end of each week. You cannot customize the backup frequencies or the number of files that Oracle retains.

We can view the backups using the ocrconfig -showbackup auto command.

[oracle@prod01]/home/oracle>ocrconfig -showbackup auto
prod01 2008/05/14 19:23:22 /u01/app/11.1.0/crs/cdata/prod_cluster/backup00.ocr
prod01 2008/05/14 15:23:21 /u01/app/11.1.0/crs/cdata/prod_cluster/backup01.ocr
prod01 2008/05/14 11:23:21 /u01/app/11.1.0/crs/cdata/prod_cluster/backup02.ocr
prod01 2008/05/13 15:23:20 /u01/app/11.1.0/crs/cdata/prod_cluster/day.ocr
prod01 2008/05/04 01:52:33 /u01/app/11.1.0/crs/cdata/prod_cluster/week.ocr

2) Manual backups—You can use the ocrconfig -manualbackup command to force Oracle Clusterware to perform a backup of the OCR at any time, rather than wait for the automatic backup that occurs at 4-hour intervals. The -manualbackup option is especially useful when you need to obtain a binary backup on demand, such as before you make changes to the OCR.

You need to be logged in as the root user to take manual backups, using the ocrconfig -manualbackup command (a sample invocation is sketched below). To view the manual backups, use the ocrconfig -showbackup manual command, as in the listing that follows the sketch.
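A minimal sketch of forcing the backup as root (the prompt shown is illustrative, not from the original post):

[root@prod01]/root>ocrconfig -manualbackup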

[oracle@prod01]/home/oracle>ocrconfig -showbackup manual
prod01 2008/05/01 12:21:29 /u01/app/11.1.0/crs/cdata/prod_cluster/backup_20080501_122129.ocr
prod02 2008/04/11 11:32:50 /u01/app/11.1.0/crs/cdata/prod_cluster/backup_20080411_113250.ocr
prod02 2008/03/27 17:25:23 /u01/app/11.1.0/crs/cdata/prod_cluster/backup_20080327_172523.ocr

3) Manual export—You can use the ocrconfig -export file_name command to export the OCR content to a file.
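For example, a hedged sketch of an export (the target path is illustrative); the resulting file can later be restored with the ocrconfig -import command:

[root@prod01]/root>ocrconfig -export /u01/app/oracle/ocr_exports/ocr_backup.dmp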

For more details, refer to the 11g documentation.

Tracing Sessions with Event 10046


Event 10046 is used to trace SQL statements (similar to sql_trace=true) with additional detail depending on the tracing level specified.

10046 EVENT levels:

1 - Enable standard SQL_TRACE functionality (Default)
4 - As Level 1 PLUS trace bind values
8 - As Level 1 PLUS trace waits
12 - As Level 1 PLUS both trace bind values and waits

There are various ways to enable the tracing:

1) Using Alter session command

ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 12';

To Turn it off
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';

2) Using DBMS_SESSION

EXECUTE DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);


The TRUE argument specifies that wait information will be present in the trace. The FALSE argument specifies that bind information will not be present in the trace.
The SESSION_TRACE_DISABLE procedure disables the trace for the invoking session.

For example:
EXECUTE DBMS_SESSION.SESSION_TRACE_DISABLE();

3) Using DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION

EXEC SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(SID,SERIAL#,TRUE);

Where SID and SERIAL# points to the session being traced and can be retrieved from V$SESSION
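For example, a minimal sketch to look up those values for a given user's session (the SCOTT username is just an assumption for illustration):

SELECT sid, serial# FROM v$session WHERE username = 'SCOTT';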

To turn it off
EXEC SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(SID,SERIAL#,FALSE);

4) Using DBMS_MONITOR

Starting from Oracle 10g, there is a new package which can be used to enable tracing.

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(SESSION_ID=>sid,SERIAL_NUM=>serial#,WAITS=>TRUE,BINDS=>TRUE)

To turn it off

EXECUTE DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => sid, serial_num => serial#);

Please refer to following documentation link for more details

http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/sqltrace.htm#i8759

5) Using ORADEBUG

connect / as sysdba
Sql>oradebug setospid 1234
Sql>oradebug unlimit
Sql>oradebug event 10046 trace name context forever,level 12

Here 1234 is the OS process Id for the session to be traced.
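If only the database session is known, the OS process id can be found with a sketch like this (the &sid substitution variable stands for the session you want to trace):

SELECT p.spid
FROM v$session s, v$process p
WHERE s.paddr = p.addr
AND s.sid = &sid;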

Sql>oradebug tracefile_name

Above command will show the location of the trace file

To disable oradebug tracing

Sql> oradebug event 10046 trace name context off

Whichever method you use, Oracle will create the trace files in the server’s user_dump_dest directory.
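To locate that directory, a quick check (a simple sketch) is:

SELECT value FROM v$parameter WHERE name = 'user_dump_dest';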

Database Replay: 11g

11g New Feature: Database Replay


In critical production systems, before making changes to the environment (hardware or software related), extensive testing is required on test systems to understand the effect of those changes, and this usually takes a lot of time.

In Oracle 11g this is simplified by a new feature, Database Replay, which can be used to simulate the production workload on a test system.

Following are the three stages of DATABASE REPLAY:
1. Workload Capture
2. Workload Preprocessing
3. Workload Replay

1. Workload Capture:

Before starting the capture process you need to take care of following things:

- Log in as the SYS user.

- It is always recommended (though not mandatory) to shut down the database and then start it in restricted mode. By doing so, you make sure that no transactions are running on the database before the capture process is actually started. This guarantees a complete workload capture, and you will be replaying full transactions during database replay on the test system.
If you do not follow this step, be prepared for some divergence of results during database replay on the test system.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.


SQL> startup restrict
ORACLE instance started.


Total System Global Area 313860096 bytes
Fixed Size 1299624 bytes
Variable Size 184552280 bytes
Database Buffers 121634816 bytes
Redo Buffers 6373376 bytes
Database mounted.
Database opened.
SQL > select LOGINS from v$instance;
LOGINS
----------
RESTRICTED

An important thing to note here is that after starting the capture process the database will automatically be switched into UNRESTRICTED mode and users will be able to connect normally.

After capturing the initial database information, the capture process will internally fire a statement to disable restricted sessions. This can be verified by setting a 10046 trace before starting the capture process:

SQL > alter session set events '10046 trace name context forever, level 4';

- Now start the capture process.

SQL > alter session set events '10046 trace name context off';

You will see the following information in the trace file:

PARSING IN CURSOR #21 len=39 dep=1 uid=0 oct=49 lid=0 tim=121695902847 hv=14656808 ad='0' sqlid='7shphjbg23h8'
alter system disable restricted session
END OF STMT
PARSE #21:c=0,e=6,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=0,tim=121695384843

Now check the information from the database:

SQL > select LOGINS from v$instance;
LOGINS
----------
ALLOWED

- Create a directory to hold the workload information.

SQL > create directory DIR as '/tmp/ss';

Check the permissions of the ss directory at the OS level and make sure that oracle is able to write to it.
Also make sure that the directory is empty before starting the capture process.

- Define Workload Filters
By adding filters we can restrict the Workload Capture process to be specific to certain user sessions. By default all the sessions will be captured.

There are two types of filters, i.e. INCLUDE and EXCLUDE.

If INCLUDE is used, the filter will be active and the workload for the objects mentioned in the filter will not be captured; rather, all other users' workload will be captured, i.e. the objects defined in the filter will be FILTERED OUT from the capture process.

If EXCLUDE is used, only the workload generated by the objects defined in the filter will be captured; no other client requests to the database will be captured.

EXAMPLE:
========

DBMS_WORKLOAD_CAPTURE.ADD_FILTER(fname IN VARCHAR2, fattribute IN VARCHAR2, fvalue IN VARCHAR2);

fname = Name of the filter.
fattribute = Attribute on which the filter will be applied, e.g. USER, PROGRAM, INSTANCE_NUMBER, etc.
fvalue = Value for the corresponding attribute.

SQL > exec dbms_workload_capture.ADD_FILTER( fname =>'FILTER_SCOTT',fattribute => 'USER',fvalue => 'SCOTT');

- Starting workload capture:

To start the workload capture START_CAPTURE procedure is used as:

SQL > exec dbms_workload_capture.start_capture(NAME => 'PEAK_LOAD_TEST',DIR => 'DIR', DURATION => 600);

NAME and DIR are mandatory parameters, while DURATION (in seconds) is optional. Here the capture will run for 600 seconds; if the duration is not specified, you have to finish the capture manually as:

SQL > exec dbms_workload_capture.finish_capture();

- While the capture process is running it will generate two files: wcr_scapture.wmd and wcr_cap_000xx.start

- After using the finish_capture procedure you will also get the following files: wcr_cr.html and wcr_cr.text

wcr_cr.html is similar to the AWR report generated by awrrpt.sql

During the capture process the database records AWR snapshots covering the capture period:

SQL>  select id,AWR_BEGIN_SNAP,AWR_END_SNAP from dba_workload_captures;

ID AWR_BEGIN_SNAP AWR_END_SNAP
---------- -------------- ------------
1 20 21
2 29 30

- Exporting AWR data for the workload capture:

It will enable detailed analysis of the workload:

SQL> exec DBMS_WORKLOAD_CAPTURE.EXPORT_AWR (capture_id => 2);

After this you can see two more files generated under the specified directory:

wcr_ca.dmp and wcr_ca.log

2. Workload Preprocessing:

Once the workload is captured it can be replayed on the test system. For this you need to copy the capture files to the test system. Preprocessing a workload converts the captured files into replay files and creates the necessary metadata. Preprocessing puts load on the system, is resource intensive, and should be performed only on test systems and on the same version of the Oracle database.

- Create a new directory for the capture files placed on the test system.
Let us suppose that the capture files are placed under /tmp/ss on the test system; now create a directory as:

SQL > create directory REPLAY as '/tmp/ss';

SQL> EXEC DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'REPLAY');

After preprocessing, the following files will be generated:

wcr_process.wmd, wcr_login.pp, wcr_seq_data.extb, wcr_scn_order.extb , wcr_conn_data.extb

3. Workload Replay:

You have to use an instance which is logically identical to the instance whose workload was captured, and you need to set up the Replay Client (wrc).

- The wrc executable has to be run in calibrate mode to estimate the number of replay clients required.

$  wrc mode=calibrate replaydir=/tmp/sood

Workload Replay Client: Release 11.1.0.6.0 - Production on Sat Jul 26 18:49:29 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Report for Workload in: /tmp/sood
-----------------------


Recommendation:
Consider using at least 1 clients divided among 1 CPU(s).


Workload Characteristics:
- max concurrency: 1 sessions
- total number of sessions: 5


Assumptions:
- 1 client process per 50 concurrent sessions
- 4 client process per CPU
- think time scale = 100
- connect time scale = 100
- synchronization = TRUE

- Initialize Replay Data:


Initializing replay loads the metadata into tables required for database replay.

SQL> exec DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY (replay_name => 'REPLAY_1', REPLAY_DIR=> 'REPLAY');

- Put the database in "PREPARE REPLAY" mode as:

SQL > exec DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY()

To check the status of replay use:

SQL > select name,status from  dba_workload_replays;
NAME STATUS
-------------------- ----------------------------------------
REPLAY_1 PREPARE

- Start the replay client:

$ wrc  replaydir=/tmp/sood

Workload Replay Client: Release 11.1.0.6.0 - Production on Sat Jul 26 22:12:35 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Wait for the replay to start (22:12:35)

- Start the replay as:

SQL> exec DBMS_WORKLOAD_REPLAY.START_REPLAY ();

Check the status of the replay as:

SQL> select name,status from  dba_workload_replays;

Initially it will show:

NAME STATUS
-------------------- ----------------------------------------
REPLAY_1 IN PROGRESS

After that it will be shown as COMPLETED.

- Once the status is COMPLETED, finish the replay as:

SQL > exec DBMS_WORKLOAD_REPLAY.CANCEL_REPLAY ();

To generate the report of the replay in HTML format, you need to run:

DECLARE
My_Report CLOB;
BEGIN
My_Report := DBMS_WORKLOAD_REPLAY.REPORT(replay_id => 1, format => DBMS_WORKLOAD_REPLAY.TYPE_HTML);
END;
/
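The block above only places the report into a CLOB variable. A minimal sketch (assuming SQL*Plus and the replay ID 1 used above) to actually write it out is to print the CLOB in chunks and spool the result to a file:

SET SERVEROUTPUT ON SIZE UNLIMITED
SPOOL replay_report.html

DECLARE
l_report CLOB;
l_offset PLS_INTEGER := 1;
BEGIN
l_report := DBMS_WORKLOAD_REPLAY.report(replay_id => 1, format => DBMS_WORKLOAD_REPLAY.TYPE_HTML);
-- Print the CLOB in chunks so the spool file captures the whole report.
WHILE l_offset <= DBMS_LOB.getlength(l_report) LOOP
DBMS_OUTPUT.put_line(DBMS_LOB.substr(l_report, 10000, l_offset));
l_offset := l_offset + 10000;
END LOOP;
END;
/

SPOOL OFF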