Wednesday, February 18, 2009

Approach to Resolve ORA-4031

Simplified Approach to Resolve ORA-4031

A few case studies and related articles about an approach towards resolving the ORA-4031 error. First we will see what ORA-4031 actually means.

04031, 00000, "unable to allocate %s bytes of shared memory (\"%s\",\"%s\",\"%s\",\"%s\")"

// *Cause: More shared memory is needed than was allocated in the shared pool.

// *Action: If the shared pool is out of memory, either use the

// dbms_shared_pool package to pin large packages,

// reduce your use of shared memory, or increase the amount of

// available shared memory by increasing the value of the

// INIT.ORA parameters "shared_pool_reserved_size" and

// "shared_pool_size".

// If the large pool is out of memory, increase the INIT.ORA

// parameter "large_pool_size".

The ORA-4031 error is encountered when we do not have sufficient memory available in the shared pool/large pool to service a memory request. In fact, ORA-4031 can be encountered in any of these areas:

1) Shared Pool
2) Large Pool
3) Java Pool
4) Streams Pool (new in 10g)

This brings us to the first step in our pursuit of finding the cause of ORA-4031.

Step 1: Identify the pool associated with the error

Like any other Oracle error, we first need to check the database alert log and any trace files generated during that time in user_dump_dest or background_dump_dest, though there are cases when the ORA-4031 error is not recorded in alert.log. Starting from 9.2.0.5, you should be able to see trace files generated in the udump/bdump location (depending on whether a background or user process encountered the error).
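A quick way to locate these files is to query the dump destinations from the instance itself (a minimal sketch):

SQL> select name, value from v$parameter
     where name in ('background_dump_dest','user_dump_dest');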

ORA-4031 basically has three arguments:

1) Size requested
2) Area
3) Comment

ORA-4031: unable to allocate <size> bytes of shared memory ("area","comment")

e.g. ORA-4031: unable to allocate 2196 bytes of shared memory

("shared pool","JOB$","KGLS heap","KGLS MEM BLOCK")

So we see from the above that the error has occurred in the Shared Pool. This is a very important step, since in case of the other pools ORA-4031 errors are resolved by increasing JAVA_POOL_SIZE and STREAMS_POOL_SIZE.

In this article I will mostly discuss errors encountered in the Shared Pool, with a small section on the Large Pool.

Step 2: What is the value of SHARED_POOL_SIZE?

Current settings for shared pool related parameters can be found using the query below:

SQL> col name for a50
SQL> col value for a10
SQL> select nam.ksppinm NAME, val.KSPPSTVL VALUE
     from x$ksppi nam, x$ksppsv val
     where nam.indx = val.indx and nam.ksppinm like '%shared_pool%'
     order by 1;



NAME                                               VALUE
-------------------------------------------------- ----------
__shared_pool_size                                 654311424
_dm_max_shared_pool_pct                            1
_enable_shared_pool_durations                      TRUE
_io_shared_pool_size                               4194304
_shared_pool_max_size                              0
_shared_pool_minsize_on                            FALSE
_shared_pool_reserved_min_alloc                    4400
_shared_pool_reserved_pct                          5
shared_pool_reserved_size                          19293798
shared_pool_size                                   0

You can use the following note for checking the minimum shared pool size:

Note 105813.1 - SCRIPT TO SUGGEST MINIMUM SHARED POOL SIZE

In case of 10g, you can use the SGA_TARGET parameter for managing the values of Shared Pool, Large Pool, Streams Pool, Java Pool and Buffer Cache (DB_CACHE_SIZE). The following note can be used for 10g:

Note 270935.1 - Shared pool sizing in 10g

It is recommended to set a lower limit for the SHARED_POOL_SIZE parameter even when SGA_TARGET is in use.

You can also use the V$LIBRARYCACHE view (the AWR/Statspack report also has this section) and check whether a lot of Reloads are happening for the SQL AREA and TABLE/PROCEDURE namespaces. This gives an indication that the Shared Pool is not appropriately sized. In case you see a high value for Invalidations, this could be due to executing DDL against the objects, gathering stats (DBMS_STATS), or granting/revoking privileges.
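For example, reloads and invalidations per namespace can be checked directly (a minimal sketch):

SQL> select namespace, gets, pins, reloads, invalidations
     from v$librarycache
     order by reloads desc;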

A high value for Hard Parses in the AWR/Statspack report can also be caused by shared pool sizing issues, but it cannot be used as the sole criterion, as high hard parses can also be caused by the use of literals and the presence of version counts/child cursors. This is discussed in the sections Using Literals Instead of Bind Variables and Multiple Child Cursors/High Version Count.

Some more key points related to Shared pool Sizing

- Shared pool memory consumption varies from release to release.

- 10g might fail with a shared pool of 300 MB though 8i was working fine.

- Some part of the memory is allocated to fixed structures. Parameters like db_files, open_cursors and processes contribute to this overhead. When you use the "show sga" command, you will see that "Variable Size" is more than the sum of "Shared Pool + Large Pool + Java Pool". This is attributed to the value of these parameters.

Please note that in case you specify a low value for SGA_MAX_SIZE, you will see Oracle bumping it up to a higher value so as to accommodate the high overhead memory.

Starting from 10g, overhead memory is accommodated in SHARED_POOL_SIZE.

e.g. if you specify SHARED_POOL_SIZE as 200 MB and your internal overhead is 100 MB, then the actual shared pool available to the instance is only 100 MB.
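You can see where the SGA memory actually goes by summing V$SGASTAT per pool (a minimal sketch; rows with a NULL pool belong to the fixed SGA, buffer cache and log buffer):

SQL> select pool, round(sum(bytes)/1024/1024) "SIZE_MB"
     from v$sgastat
     group by pool;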

You can read Note:351018.1 - Minimum for SHARED_POOL_SIZE Parameter in 10.2 Version for more information.

Shared Pool Fragmentation

Shared Pool fragmentation can also cause ORA-4031. It occurs when your queries are not being shared and you see a lot of Reloads and Hard Parses in the Statspack report. In this case, check the request failure size:

ORA-4031: unable to allocate 16400 bytes of shared memory

We see that the failure size is 16K. In this case, check whether you are using the SHARED_POOL_RESERVED_SIZE parameter for defining a shared pool reserved area. The memory allocation algorithm first tries to get memory from the shared pool; then, if the requested memory size is greater than _shared_pool_reserved_min_alloc, it will get the memory from the shared pool reserved area. By default this value is set to 4400 bytes. In case the failure value is, say, 4200, you can try reducing the value of this parameter to reduce the occurrences, though this is not a complete solution. Read Tweaking _shared_pool_reserved_min_alloc and ORA-4031 for more details.
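The reserved area usage and failures can be checked from V$SHARED_POOL_RESERVED (a minimal sketch; REQUEST_FAILURES counts allocations that could not be satisfied):

SQL> select free_space, requests, request_misses,
            request_failures, last_failure_size
     from v$shared_pool_reserved;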

You can also identify shared pool fragmentation by querying X$KSMSP:

select 'sga heap('||KSMCHIDX||',0)' sga_heap, ksmchcom ChunkComment,
       decode(round(ksmchsiz/1000),0,'0-1K',1,'1-2K',2,'2-3K',3,'3-4K',
              4,'4-5K',5,'5-6k',6,'6-7k',7,'7-8k',8,'8-9k',9,'9-10k','> 10K') "Size",
       count(*), ksmchcls "Status", sum(ksmchsiz) "Bytes"
from x$ksmsp
where KSMCHCOM = 'free memory'
group by 'sga heap('||KSMCHIDX||',0)', ksmchcom, ksmchcls,
         decode(round(ksmchsiz/1000),0,'0-1K',1,'1-2K',2,'2-3K',3,'3-4K',
                4,'4-5K',5,'5-6k',6,'6-7k',7,'7-8k',8,'8-9k',9,'9-10k','> 10K');



SGA_HEAP       CHUNKCOMMENT     Size    COUNT(*) Status          Bytes
-------------- ---------------- ----- ---------- ---------- ----------
sga heap(1,0)  free memory      > 10K        393 free         11296600
sga heap(1,0)  free memory      3-4K         256 free           781928
sga heap(1,0)  free memory      8-9k          63 free           510656
sga heap(1,0)  free memory      6-7k          60 free           367076
sga heap(1,0)  free memory      2-3K         555 free          1071448
sga heap(1,0)  free memory      1-2K        1818 free          1397244
sga heap(1,0)  free memory      0-1K        3418 free           348344
sga heap(1,0)  free memory      9-10k         30 free           269820
sga heap(1,0)  free memory      4-5K         154 free           640332
sga heap(1,0)  free memory      5-6k          75 free           381920
sga heap(1,0)  free memory      > 10K         39 R-free        8302632
sga heap(1,0)  free memory      7-8k          22 free           152328

If you see a lot of memory chunks in the 1-4K buckets and very few in buckets > 5K, it indicates Shared Pool fragmentation. In this case you also need to look at Hard Parses (Statspack/AWR report). This is discussed in the sections Using Literals Instead of Bind Variables and Multiple Child Cursors/High Version Count.

Note: It is not recommended to run queries on X$KSMSP as they can lead to latching issues. Do not run them frequently (I have seen people scheduling them as part of hourly jobs; this should be avoided).

Step 3: Is it MTS? If yes, are you using LARGE_POOL_SIZE?

LARGE_POOL_SIZE is recommended for many Oracle features which are designed to utilize large shared memory chunks, like:

- Recovery Manager (RMAN)

- Parallel processing/IO slave processing, e.g. the px msg pool consuming more memory

- Shared Server Configuration

If the large pool is not configured, the UGA will be allocated from the shared pool, and this can cause issues while using Shared Server mode (MTS). Ensure that you are using the LARGE_POOL_SIZE parameter or SGA_TARGET.
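You can quickly check whether a large pool is configured and how its memory is being used from V$SGASTAT (a minimal sketch):

SQL> select pool, name, bytes
     from v$sgastat
     where pool = 'large pool';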

Step 4: Are you using multiple subpools?

The subpool concept was introduced in 9i R2. Instead of one big shared pool, memory is divided into many subpools. To determine the number of subpools, you can use the query below:

SQL> select nam.ksppinm NAME, val.KSPPSTVL VALUE
     from x$ksppi nam, x$ksppsv val
     where nam.indx = val.indx and nam.ksppinm like '%kghdsidx%'
     order by 1;



NAME                           VALUE
------------------------------ --------------------
_kghdsidx_count                4

The above output indicates that there are 4 subpools.

In case you get ORA-4031 and a trace file gets generated, the trace file can also be used to find the number of subpools configured. To do this, search for "Memory Utilization of Subpool".
e.g
Memory Utilization of Subpool 1
========================
free memory 10485760
Memory Utilization of Subpool 2
========================

free memory 20971520

This means that there are two subpools configured for your database.

Oracle suggests having 500 MB as the minimum subpool size. I will say that in case you are not facing serious shared pool latch contention, 2 subpools should be sufficient (though I believe most contention issues can be solved by tuning the application). To change the number of subpools, we need to set the parameter _kghdsidx_count in the pfile or spfile and restart the database.

In case of Spfile

alter system set "_kghdsidx_count"=1 scope=spfile;

A restart of the database is required as it is a static parameter. Please note that the large pool has the same number of subpools as the shared pool, so you might be required to change the number of subpools in case you are observing ORA-4031 in the large pool.

You can read more about shared subpools in my earlier post.

Step 5: Is SQLAREA consuming a lot of memory?

Actually this can also be categorized under "Bad Application Design", as most of these cases are caused by the way applications have been designed. A high value for sqlarea in V$SGASTAT (or the AWR/Statspack report) can be attributed to the following causes:

Using Literals Instead of Bind Variables

This is the most common cause of ORA-4031. Tom Kyte explains the consequences of not using bind variables in one of his posts:

If you do not use bind variables and you flood the server with
hundreds/thousands of unique queries you will
- run dog slow
- consume a ton of RAM (and maybe run out)
- not scale beyond a handful of users, if that
...among other really bad side effects.

The above statement is true, and you can find a lot of cases where not using bind variables caused excessive parsing (leading to CPU contention) and ORA-4031 errors. One way to locate such statements is by running the following query, which groups statements that are identical in their first 90 characters:

SELECT substr(sql_text,1,90) "SQL", count(*) "SQL Copies",
       sum(executions) "TotExecs", sum(sharable_mem) "TotMemory"
FROM v$sqlarea
GROUP BY substr(sql_text,1,90)
HAVING count(*) > 30
ORDER BY 2;

I personally try to use the script from the AskTom website to find these statements. You can find more information by clicking here.

create table t1 as select sql_text from v$sqlarea;

alter table t1 add sql_text_wo_constants varchar2(1000);

create or replace function
remove_constants( p_query in varchar2 ) return varchar2
as
    l_query     long;
    l_char      varchar2(1);
    l_in_quotes boolean default FALSE;
begin
    for i in 1 .. length( p_query )
    loop
        l_char := substr(p_query,i,1);
        if ( l_char = '''' and l_in_quotes )
        then
            l_in_quotes := FALSE;
        elsif ( l_char = '''' and NOT l_in_quotes )
        then
            l_in_quotes := TRUE;
            l_query := l_query || '''#';
        end if;
        if ( NOT l_in_quotes ) then
            l_query := l_query || l_char;
        end if;
    end loop;
    l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
    for i in 0 .. 8 loop
        l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
        l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
    end loop;
    return upper(l_query);
end;
/

update t1 set sql_text_wo_constants = remove_constants(sql_text);

select sql_text_wo_constants, count(*)
  from t1
 group by sql_text_wo_constants
having count(*) > 100
 order by 2
/

The above query will give you the queries which are using literals and should be modified to use bind variables. Sometimes it is not possible to modify the application; in that case you can use CURSOR_SHARING=SIMILAR/FORCE to force the application to use bind variables. Please note that this can cause issues (especially CURSOR_SHARING=SIMILAR), so it is recommended to test the application in a test environment before implementing it in production. Applications like Oracle Apps do not certify the use of this parameter, so also check with your application vendor whether it can be used.
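For a quick test you can change the parameter at session level before touching the whole instance (a minimal sketch; SCOPE=BOTH assumes an spfile is in use):

SQL> alter session set cursor_sharing = FORCE;
SQL> -- re-run the offending module, then re-check v$sqlarea for sharing
SQL> alter system set cursor_sharing = FORCE scope=both;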

You can refer to following articles where I have discussed similar issue

ORA-4031 - A Case Study

Application Design and ORA-4031

Multiple Child Cursors/High Version Count

This is also one of the causes of high memory usage in the SQLAREA region. Child cursors are generated in the shared pool when the SQL text is the same but Oracle cannot share the cursor because the underlying objects are different, optimizer settings differ, etc. To know more about child cursors, refer to the following Metalink note:

Note 296377.1 - Handling and resolving unshared cursors/large version_counts

In case of Oracle 10g, you can use the Statspack/AWR report for finding the child cursors under the category "SQL ordered by Version Counts". The following statements can also be run to identify whether child cursors are being generated in your database:


For 10g

SQL> select sa.sql_text, sa.version_count, ss.*
     from v$sqlarea sa, v$sql_shared_cursor ss
     where sa.address = ss.address and sa.version_count > 50
     order by sa.version_count;

For 8i/9i

select sa.sql_text, sa.version_count, ss.*
from v$sqlarea sa, v$sql_shared_cursor ss
where sa.address = ss.KGLHDPAR and sa.version_count > 50
order by sa.version_count;

The above query reports SQL which is not being shared for some reason. Look for columns with the value Y to find the cause. Most of these issues are encountered while using CURSOR_SHARING=SIMILAR; in case you are using this parameter with columns having histograms, then it is expected behavior. Read more about cursor issues related to histograms in Note:261020.1 - High Version Count with CURSOR_SHARING = SIMILAR or FORCE.

There are cases where none of the columns returns a Y value. In most of these cases you need to work with Oracle Support to find the cause, as this could be a bug.

Child cursors are problematic as they increase shared pool memory consumption and parsing; also, as the number of child cursors increases, Oracle takes more time to scan the list of child cursors to check whether one can be reused, and if not it spawns a new child cursor. This results in high parsing time and CPU contention.

High Sharable Memory per SQL

One more cause of a high value of SQLAREA in V$SGASTAT is high memory consumption per SQL statement. This can be due to a poorly written SQL statement or due to Oracle bugs.

In case of Oracle 10g, you can use the Statspack/AWR report for finding the statements with a high value of sharable memory. You can also use the SHARABLE_MEM column in V$SQLAREA to find these queries.
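For example (a minimal sketch; the 1 MB threshold is an arbitrary starting point):

SQL> select sql_id, sharable_mem, version_count, substr(sql_text,1,60) sql_text
     from v$sqlarea
     where sharable_mem > 1048576
     order by sharable_mem desc;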

Step 6: What next?

You have followed all the above steps and found everything to be OK. Now what do we check next?

We can look for any trace file which was generated at the time of the error and see which component was taking more memory. You can try searching Metalink for that component. Else, you can take a heapdump at the time of the error and upload the file to Support.

Heapdump event
The Heapdump event is used to dump memory from different subheaps. ORA-4030 errors are associated with problems in the PGA, UGA or CGA heaps, while ORA-4031 is related only to problems with the shared pool/large pool/Java pool/Streams pool.

command -> alter system set events '4031 trace name heapdump level 2';

init.ora -> events='4031 trace name heapdump, level 2'

SQL>oradebug setmypid

SQL>oradebug dump heapdump 2

SQL>oradebug tracefile_name

Starting from 9.2.0.5, level 536870914 can be used for generating a heapdump, which will gather more diagnostic information for Support to diagnose the cause.

Also, it is not recommended to set the Heapdump event in init.ora or the spfile, since it will force multiple dumps at the time of shared pool memory issues. Oracle requires the shared pool latch for dumping a heapdump, so this can worsen the latching situation. You can set the Errorstack event instead to generate a trace file at the time of the ORA-4031 error:

alter system set events '4031 trace name errorstack level 3'; 

Use the immediate trace option or the oradebug command at the time of the error:

SQL> connect / as sysdba

SQL> alter session set events 'immediate trace name heapdump level 536870914';



OR



sqlplus "/ as sysdba"

oradebug setmypid

oradebug unlimit

oradebug dump heapdump 536870914

oradebug tracefile_name

exit

Upload the tracefile to Oracle support.

Using the above approach will help you resolve ORA-4031 errors in the Shared Pool.

Large Pool

While working on ORA-4031 in the large pool, you need to follow the approach below:

1) Check the size of LARGE_POOL_SIZE. If possible, increase it.

2) Check the number of subpools. Ensure that you have sufficient memory in each subpool. _kghdsidx_count also controls the number of subpools in the large pool, so you would have to either increase the memory available in each subpool or decrease the count.

3) In case of MTS, check if any session is consuming a lot of memory. There are cases where, instead of getting ORA-4030, you get ORA-4031 in the large pool (in MTS, the UGA is part of the large pool); see the sketch after this list.

4) If all the above suggestions have been tried, then capture a heapdump and upload the file to Oracle Support. You can use level 32 or 536870944, i.e.

SQL> connect / as sysdba

SQL> alter session set events 'immediate trace name heapdump level 32';

or

SQL> alter session set events 'immediate trace name heapdump level 536870944';
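Per-session UGA consumption mentioned in point 3 can be checked like this (a minimal sketch):

SQL> select s.sid, s.username, st.value uga_bytes
     from v$sesstat st, v$statname n, v$session s
     where st.statistic# = n.statistic#
       and n.name = 'session uga memory'
       and s.sid = st.sid
     order by st.value desc;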

I hope this article helps in following a methodology for resolving ORA-4031. At present this is not an exhaustive article on this error, and it will be most useful as an approach after you have gone through the Metalink notes below.

Note:62143.1 - Understanding and Tuning the Shared Pool

Note:396940.1 - Troubleshooting and Diagnosing ORA-4031 Error

Note:146599.1 - Diagnosing and Resolving Error ORA-04031

Sunday, February 8, 2009

R M A N

Active Database Duplication or Network-Aware database duplication
No need for a backup for duplication. Doesn't copy and apply the online redo log, only up to the most recent archive. Password files required on both sides, with the same password.
The Flash Recovery Area is not copied. Archive logs are copied if necessary.
Source may be in mount or open(archivelog mode). Can automatically copy the password file if you specify the clause.
duplicate database to dupdb from active database
  db_file_name_convert '/u01','/v01'
  SPFILE parameter_value_convert '/u01','/v01'
  set log_file_name_convert '/u01','/v01'
  SET sga_max_size '3000m' SET sga_target '2000m'
SET => SET clauses are processed later; db_file_name_convert cannot be specified via SET
SPFILE => the SPFILE must be in use in the source instance; it will be copied to the destination and used to start the auxiliary instance. The destination database must be
started using a text init.ora containing only the DB_NAME parameter.
db_file_name_convert=> replace pattern for datafiles and tempfiles
NOFILENAMECHECK=> databases are on different hosts, no need to check for different filenames
PASSWORD FILE=> if specified will copy source password file to destination
1)-Create an init.ora for the auxiliary instance
2)-Create a password file for the auxiliary instance with the same SYSDBA password as the source(target)
3)-Startup the auxiliary instance using startup nomount
4)-Connect RMAN to the target and to the auxiliary instance, and issue the active database duplication command (see the sketch below)
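A minimal end-to-end sketch of the duplication session (connect strings, passwords and paths are illustrative):

$ rman target sys/oracle@srcdb auxiliary sys/oracle@dupdb
RMAN> duplicate target database to dupdb
        from active database
        db_file_name_convert '/u01','/v01'
        nofilenamecheck;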

Parallel Backup or Multisection backup
Parallelizes a datafile backup into up to 265 parts(manually customizable for each datafile), multiple channels recommended.
All sections have the same size, except maybe the last. SECTION SIZE is mandatory and a value must be specified. Can't use MAXPIECESIZE with SECTION SIZE.
SECTION SIZE may be specified using the VALIDATE DATAFILE command. Each section is a backup piece in the backup set.
backup SECTION SIZE 200m tablespace pippo;
* Views *
V$BACKUP_DATAFILE and RC_BACKUP_DATAFILE (piece, section_size, multi_section[yes, no])

Archival(Long-Term) Backup - KEEP option
Control file, spfile, datafiles and archived redo logs are backed up.
The LOGS/NOLOGS clause can't be specified with KEEP. A self-contained archival backup is immune from any retention. Redo logs are switched, as the current log is needed.
Store archival backups outside the flash recovery area, which may fill up quickly. Recovery catalog required if using KEEP FOREVER.
CHANGE not allowed for backup in flash recovery area.
An archived log is never made obsolete if needed for a guaranteed restore point, may be obsolete if needed for a normal restore point.
backup database format 'c:\arch\db_%U.bkp' tag quarterly KEEP until time 'sysdate + 365' RESTORE POINT firstq08;
#KEEP=>[FOREVER, UNTIL TIME], recovery catalog required if using KEEP FOREVER.
#RESTORE POINT=>SCN is captured when backup finish. The time until which RMAN can restore an archival backup
change backup tag 'pippo_bck' keep forever;
change copy of database controlfile keep;

Fast Incremental Backup
Flashback Database feature is required, searches flashback logs before backup files. Only corrupted blocks are restored and recovered in a datafile.
Archivelog mode always required. Db mounted or open. No proxy copies allowed. Block change tracking may be enabled on physical standby db.
10g blockrecover(no more available)-> 11g recover...block
recover datafile 2 block 24 datafile 10 block 21 from tag=sundaynight;
RMAN>recover corruption list;
#recovers all corrupted blocks listed in V$DATABASE_BLOCK_CORRUPTION; the view data is then deleted
* Identified Corrupt Blocks *
analyze table, analyze index, list failure, validate(populates V$DATABASE_BLOCK_CORRUPTION), backup...validate, export to /dev/null, dbverify
* Views *
V$DATABASE_BLOCK_CORRUPTION list block corrupted, populated by VALIDATE DATABASE

RMAN Compression
BZIP2(good compression) and ZLIB(def. faster 45%)
RMAN>configure compression algorithm 'bzip2';

Archived Redo Log Deletion
configure archivelog deletion policy=NONE(def.) (applies to all archived logs stored on disk including the Flash Recovery Area) (archived logs will be marked
for deletion after being transferred to all log_archive_dest_n and backed up or obsolete)
configure archivelog deletion policy to backed up 2 times to sbt

RMAN Catalog merging - Export/Import RMAN catalog
source will be deleted and deregistered(unless you use NO UNREGISTER), source db version same as RMAN client version,
fails if same db is already registered in destination
import catalog rman/rman@reporman dbid=122,123 db_name=pippo,pluto;
#you are connected to destination, rman@reporman will be imported

Virtual private catalog or Virtual catalog
1) SYSDBA> create the db user
2) SYSDBA> grant recovery_catalog_owner to user
3) real CATALOG OWNER> grant [REGISTER_DATABASE | REGISTER] to user;
#permits to register unknown databases, does not grant recovery catalog access
and / or
4) real CATALOG OWNER> grant catalog for database t1 to user;
#grant access to a target existing database to a user even if db is not yet registered, use DBID if db not yet registered.
#User may un/register databases. User creates local scripts and read global scripts
5) VIRTUAL USER - RMAN> create virtual catalog;
#use for 11g or later database
or
VIRTUAL USER - RMAN> DBMS_RCVCAT.create_virtual_catalog
#use for 10g database
6) VIRTUAL USER - RMAN> register database;
RMAN>revoke all privileges from user;
* Packages *
DBMS_RCVCAT.create_virtual_catalog
#must execute before working with any pre-11.1 database. Does not create a virtual private catalog
Data Recovery Advisor
<- RMAN, EM Support Workbench & Database Control, Health Monitor(following errors), VALIDATE or BACKUP command
Quickly detects and analyzes data failures; repair manual(mandatory, optional) or automatic, with data loss or without
Automatically repairs some network I/O errors, control file and datafile inconsistencies, accidental renaming of datafiles. No RAC. No standby database. No ORA-600, must be in Health Monitor
status(open, closed), priority(critical, high, low), grouping
1) list failure
#failures are listed by priority [1:CRITICAL, 2:HIGH, 3:LOW], failures with the same priority are ordered chronologically
2) advise failure
#generates manual and automated options for repair, rman scripts are generated for automated options
3) repair failure
#by default high and critical failures will be repaired and closed. Generates a warning if new failures are detected since the last advise
May not automatically close a fixed failure
* Views *
V$IR_FAILURE or list failure;
change failure 7 closed; change failure priority low;
V$IR_MANUAL_CHECKLIST or advise failure; #summary and close fixed
V$IR_REPAIR or repair failure; #requires advise failure
validate database;
#proactive check; checks only for intrablock(no interblock) corruption in all datafiles, control files and the server parameter file (populates V$DATABASE_BLOCK_CORRUPTION)
Health Monitor -> run Data Recovery Advisor checks
View Health Monitor checker reports using: EM, ADRCI, DBMS_HM, V$HM_RUN
Health check mode can be:
[Reactive]: Health check run automatically when a critical error occurred
[Manual]
Available checks:
[Data Block Integrity]: checks disk image block corruptions, checksum failures and intrablock corruptions. Does not check for interblock corruption
[Db Structure Integrity]: verifies accessibility, consistency and corruption of all database files
[Undo Segment Integrity]: checks for logical Undo corruptions and attempts to repair them
[Redo Integrity check], [Dictionary Integrity check]
* Packages *
DBMS_HM.run_check(check_name, run_name, input_params)
DBMS_HM.(GET)RUN_REPORT(run_name M, type O , level O )
* Views *
V$HM_CHECK(all possible checks), V$HM_RUN(view HM checker reports), V$HM_CHECK_PARAM, V$HM_FINDING, V$HM_RECOMMENDATION
show hm_run;
#show all checks
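A quick usage sketch of DBMS_HM (the run name is illustrative; 'Dictionary Integrity Check' is one of the checks listed in V$HM_CHECK):
SQL> exec DBMS_HM.run_check('Dictionary Integrity Check', 'my_run');
SQL> set long 100000
SQL> select DBMS_HM.get_run_report('my_run') from dual;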
Partitioning
Interval partitioning
The high value of the range partitions is called the Transition Point
Extends range partitioning by automatically creating new partitions when inserted data falls out of bounds. Only a single partitioning column, of type DATE or NUMBER.
Does not work on index-organized tables. You can't create domain indexes.
create table pippo(time_id date)
partition by range(time_id)
interval(numtoyminterval(1,'month'))
(partition p0 values less than (to_date('3/12/1976','dd/mm/yyyy')),
 partition p1 values less than (to_date('3/01/1977','dd/mm/yyyy')));
alter table pippo merge partitions
  for (to_date('3/12/1976','dd/mm/yyyy')), for (to_date('3/01/1977','dd/mm/yyyy'));
#the partitions to merge must be adjacent
* Views *
DBA_TAB_PARTITIONS(partitioning_type[range, interval(new automatically created)])

System partitioning
The application must explicitly name the partition in the insert/merge statements, not required for update/delete.
Can't create as CTAS, hurts performance. No partitioning key. Partitions can have different physical attributes. Can't create unique local indexes. Can't split partitions
create table pippo(c1 number) partition by system (partition p1, partition p2)
insert into pippo partition(p1) values(1)
alter table pippo merge partitions p1, p2 into partition p1

Virtual Column-Based partitioning
Virtual column is computed on the fly. Datatype is optional. Can collect optimizer statistics. You can index a virtual column.
Can use a virtual column in all dml/ddl. Can partition over a virtual column. Can create virtual columns only on normal heap table.
Can't refer to another virtual column. Output must always be scalar: no lob, long, raw, ... Can't use a pl/sql function if used for partitioning
create table pippo (c1 number, c2 number generated always as (c1*2))

Reference partitioning
Master table is a partitioned table. Can't use interval partitioning. Requires foreign key relationship.
If no tablespace specified then partitions are in same tbs of parent table's partition. Can't specify partitions bounds.
Partition names must be unique between master and reference table. Can't disable the foreign key.
Can't add or drop partitions from the child table, while it is possible for the parent table.
Pruning and partitionwise joins works even if query predicates are different from the partitioning key

Composite Partitioning
Top Level partitioning: Range, List, Hash
Range-list, Range-hash, List-list, list-hash, list-range, range-range, interval-range, interval-list, interval-hash
Directory structure
$ORACLE_BASE (recommended only)
    $ORACLE_HOME (do not explicitly set)
        dbs
            oranfstab first default location
    Flash recovery area (different disk than datafiles)
    Datafiles (different disk than Flash Recovery Area)
    Inventory
    /diag Advanced Diagnostic Repository (ADR base=diagnostic_dest; multiple ADR homes allowed but only one ADR base)
        Db name
            Instance name = ADR_HOME (use V$DIAG_INFO to view these paths)
                alert (xml alert format)
                hm (checker reports)
                incident
                trace (V$DIAG_INFO name='Diag Trace')
                    text alert.log
                    background process trace files
                    foreground process trace files
                wallet
                    ewallet.p12
                cdump
                bdump
                udump
                ...
init.ora new params
# A S M
asm_preferred_read_failure_groups=data.locationA, data.locationB #Dynamic. disk_group_name.failure_group only disks local to the instance!!
asm_diskgroups
asm_diskstring
# M E M O R Y
sga_target=0 #Static. Enable Automatic Memory Management
pga_aggregate_target=0 #Static. Enable Automatic Memory Management
memory_target=900M #dynamic default 0 and when upgrading. Total memory for SGA and PGA. With sga_target=0 and pga_target=0 then defaults 60% to SGA and 40% to PGA
#With sga_target!=0 and pga_target!=0 they indicate minimum values
memory_max_target=2000M; #if unset defaults to memory_target(if greater than 0) or 0(if memory_target is not set)
# R E S U L T C A C H E
result_cache_max_size #Static. Set the high limit. 0 disables. Maximum is 0.25% of memory_target, 0.5% of sga_target, 1% of shared_pool_size (never more than 75% of the shared pool)
# O T H E R  N E W  P A R A M S
plsql_optimize_level=3
plsql_code_type=INTERPRETED(def.)
sec_case_sensitive_logon=TRUE(def.)
sec_max_failed_login_attempts=5
sec_protocol_error_further_action=[CONTINUE | ...]
sec_protocol_error_trace_action=none
audit_trail=default
ldap_directory_sysauth #YES: enables SYSDBA/SYSOPER authentication via OID
optimizer_capture_sql_plan_baselines=FALSE(def.)
optimizer_use_sql_baselines=FALSE(def.)
optimizer_use_pending_statistics=FALSE(def.)
optimizer_use_invisible_indexes=FALSE(def.)
java_jit_enabled=true
control_management_pack_access=[NONE | DIAGNOSTIC+TUNING(def.)]
ddl_lock_timeout=0(def.)
db_securefile=ALWAYS
db_ultra_safe=[OFF(def.) | ...]
db_block_checking=TYPICAL
db_block_checksum=FALSE(def.)
db_lost_write_protect=TYPICAL
statistic_level=[TYPICAL | ...]
filesystemio_options=[ASYNCH | ...]
timed_statistics=true
encryption_wallet_location=(source=(method=file)(method_data=(directory=/yourdir))) #sqlnet.ora
diagnostic_dest=%ORACLE_BASE #ADR base
Support Workbench
<- EM(... -> Incident Package Configuration)(Collect diagnostic data, Generates incident reports and transmits), ADRCI
Critical Error=Problem=ORA-04031, composed of many Incidents=ORA-04031
Incident(Collecting, Ready, Tracking, Closed, Data Purged)
Retention: 1 year metadata(retention may be changed with EM), 1 month dumps. An incident is closed after twice the retention elapses, and is purged 30 days after open
1 Generate logical pack, 2 finalize, 3 gen. physical
Incident thresholds cannot be modified.
No edit and customizing when Quick Packaging
Oracle Support Workbench may be enabled to automatically upload files to Oracle Support by providing info during the installation of Oracle Configuration Manager
Flood-controlled incidents
are incidents reported multiple times. The alert log is updated and the incident recorded in the ADR, but no dumps are generated.
SQL Performance Analyzer
1)First you capture SQL workload in a STS(SQL Tuning Set)
2)Move SQL workload to test environment
3)Execute the workload to capture baseline execution data
4)you make your changes
5)Execute the workload to capture execution data
6)Analyze differences in SQL performances
7)Tune regressed SQL statements
Identify plan that might have regressed after a database upgrade, hardware reconfiguration, schema changes and any change may affect SQL.
Only the net SQL workload is identified. (Are the SQL executed? See EXECUTION_TYPE below.)
Cannot use SQL Performance Analyzer to identify sql statements that are part of the SQL plan baseline
You can set a time limit within which all SQL statement in the workload must execute
DML and DDL are NOT taken in consideration
* Parameters *
TIME_LIMIT parameter for all SQL
LOCAL_TIME_LIMIT for single SQL
EXECUTION_TYPE[EXPLAIN PLAN | TEST EXECUTE(def.)]
#TEST EXECUTE: Each SQL Statement in the workload is executed one time and execution plans and statistics are collected.
SQL Repair Advisor
<- Support Workbench(Diagnostic Summary->Active Incidents link)
(often provides patch to fix)
The SQL plan will change after patching. SQL Repair Advisor is not automatically launched after a SQL statement crash
* Export patches * DBMS_SQLDIAG.pack_stgtab_sqlpatch(staging_table), export, unpack_stgtab_sqlpatch(staging_table)
* Packages *
DBMS_SQLDIAG.create_diagnosis_task(sql_text, task_name)
DBMS_SQLDIAG.set_tuning_task_parameter(task_id, filter)
DBMS_SQLDIAG.execute_diagnosis_task(task_name)
DBMS_SQLDIAG.report_diagnosis_task(task_name, out_type)
DBMS_SQLDIAG.accept_sql_patch(task_name, task_owner)
* Views *
DBA_SQL_PATCHES(recommended patches), drop_sql_patch(procedure)
SQL Test Case Builder
<- EM Support Workbench
This information is gathered for a problematic SQL statement: the text, involved table definitions, optimizer statistics. No dump files
DBMS_SQLDIAG.export_sql_testcase_dir_by_inc
Automatic Workload Repository
<- EM(Adaptive Metric Threshold link -> Baseline Metric Thresholds)
Requires AWR to have at least the same retention time.
A baseline is any set of snapshots taken over a period of time.
Static baseline
Repeating static baseline
#Will be automatically created
Moving window baseline
#Will not be automatically created. SYSTEM_MOVING_WINDOW is system provided
Retention raised from 7 days(10g) to 8 days(11g) for the default baseline SYSTEM_MOVING_WINDOW containing Adaptive Thresholds
Thresholds types:
Significance Level
#if set to 0.99 alert on 1% change. Based on statistical relevance. Only basic metrics support significance level thresholds
Percentage of maximum
#calculated using the highest value captured by the baseline
Fixed values
When a baseline ends in the future a new baseline with the same name is created.
* Package *
DBMS_WORKLOAD_REPOSITORY.modify_baseline_window_size(wind_size)
#to change def. retention; if increased then increase AWR retention to match the time
DBMS_WORKLOAD_REPOSITORY.create_baseline
#should be used for past time periods
DBMS_WORKLOAD_REPOSITORY.create_baseline_template
single: (start_time, end_time, bas_name, templ_name, EXPIRATION(O)=>null, dbid(O))
#EXPIRATION if not specified or null then the baseline template created will never expire
repeating: (day_of_week, hour_in_day, duration, expiration(O)=>null, start_time, end_time, baseline_name_prefix, template_name, dbid(O))
#should be used when part or all of the time period is in the future
DBMS_WORKLOAD_REPOSITORY.rename_baseline(old, new, dbid(O))
DBMS_WORKLOAD_REPOSITORY.drop_baseline_template(templ_name, dbid(O))
DBMS_WORKLOAD_REPOSITORY.select_baseline_metrics(baseline_name) (display metric thresholds)
#DBMS_WORKLOAD_REPOSITORY also resizes the AWR retention period
* View *
DBA_HIST_BASELINE_TEMPLATE (template_type[single | repeated], repeat_interval[timing string as for DBMS_SCHEDULER])
DBA_HIST_BASELINE_DETAILS (shutdown[yes | no | null], pct_total_time[total snapshot time / total possible time], error_count[errors during snapshot])
DBA_HIST_BASELINE (baseline_type[static | moving | window | generated], creation_type, expiration[how long keep the baseline], template_name, last_computed,
moving_window_size[if baseline_type is null then AWR retention period else number of days of moving window])
Controlling Automated Maintenace Tasks
New ABP(Autotask Background Process)
- converts automatic tasks into Scheduler jobs
- determines the jobs that need to be created for each maintenance task
- stores task execution history in the SYSAUX tablespace
- does NOT execute maintenance tasks
A task may be enabled/disabled and its resource percentage changed in one or all maintenance windows. The duration of maintenance windows may be changed.
Database Resource Manager is automatically enabled during maintenance window
By default AutoTask schedules: optimizer statistics collection, SQL Tuning Advisor and Automated Segment Advisor.
* MAINTENANCE_WINDOW_GROUP *
DEFAULT_MAINTENANCE_PLAN(belongs to a resource allocation)
10g: weeknight_window, weekend_window
11g: [10PM-02AM, 4h]: monday_window, tuesday_window, wednesday_window, thursday_window, friday_window. [06AM-02AM, 20h]saturday_window, sunday_window
* Managing Automatic Maintenance Tasks *
<- EM
10g: DBMS_SCHEDULER only enables or disables automatic maintenance tasks
11g: DBMS_AUTO_TASK_ADMIN more fine grained control
DBMS_AUTO_TASK_ADMIN.enable(client_name, operation, window_name)
DBMS_AUTO_TASK_ADMIN.disable(client_name, operation, window_name)
DBA_AUTOTASK_OPERATION lists only automatic task(clients) operation names
DBA_AUTOTASK_CLIENT
DBA_AUTOTASK_TASK
#view of the ABP(Autotask Background Process) repository, stores execution history for automated maintenance tasks
I/O Calibration
<- EM(Database Control -> Performance -> I/O Calibration)
Requires 10 minutes to run, sysdba, filesystemio_options=[ASYNCH | SETALL], timed_statistics=true
recommended during low activity, no concurrent calibrations allowed
DBMS_RESOURCE_MANAGER.calibrate_io(num_disks=>1, max_latency=>10, max_iops, max_mbps, actual_latency)
V$IO_CALIBRATION_STATUS, DBA_RSRC_IO_CALIBRATE, V$IOSTAT_FILE (all show calibration results)
Database Resource Manager
Automatic actions based on current session single call usage: switch of Resource Consumer Group(switch grant needed), session kill, statement kill
Database Resource Manager is automatically enabled during maintenance window
CREATE_RESOURCE_PLAN(switch_group[null | CANCEL_SQL | KILL_SESSION | a group], switch_time[UNLIMITED], switch_estimate[false | true],
switch_io_megabytes[null | mb], switch_io_reqs[null | io#], switch_for_call[null | true])
CREATE_PLAN_DIRECTIVE(plan, group_or_subplan, mgmt_p1, switch_group[null | CANCEL_SQL | KILL_SESSION | a group], switch_io_reqs, switch_io_megabytes,
switch_for_call, switch_time)
MIXED_WORKLOAD_PLAN predefined resource plan
Level 1: 100% sys_group
Level 2: [85% interactive_group], [15% ORA$AUTOTASK_SUB_PLAN, ORA$DIAGNOSTIC, OTHER_GROUPS]
Level 3: batch_group
Oracle Scheduler
Minimum amount of metadata required for creation, recommended to create a big number of job
Create Lightweight jobs using: job array, named programs, schedules, DBMS_SCHEDULER.create_job
Lightweight jobs are created only from job templates(stored proc or Scheduler program), privileges are only inherited from parent job,
may discriminate DataGuard nodes
* Package *
DBMS_SCHEDULER.create_program(program_name, program_action, program_type[plsql_block | stored_procedure], enabled)
DBMS_SCHEDULER.create_job(job_name, program_name, repeat_interval(opt.), end_time(opt.), schedule_name(opt.), job_style[LIGHTWEIGHT], comments)
* Create a Job Array and submit using single transaction *
declare
  testjob sys.job;
  testjobarr sys.job_array;
begin
  testjobarr := sys.job_array();
  testjobarr.extend(500);
  for i in 1 .. 500 loop
    testjob := sys.job(...);
    testjobarr(i) := testjob;
  end loop;
  dbms_scheduler.create_jobs(testjobarr, 'transactional');
end;
* Remote External Jobs *
On source db: XML DB(default), run as SYS prvtrsch.plb, DBMS_SCHEDULER.set_agent_registration_pass(registration_password, expiration_date(O), max_uses(O))
On Dest host: install Scheduler agent(not default), configure schagent.conf, schagent.exe -registerdatabase hostdb 1521, schagent.exe -start
* Enable a remote job *
DBMS_SCHEDULER.create_credential(cred_name, username, pwd)
DBMS_SCHEDULER.create_job(...)
DBMS_SCHEDULER.set_attribute(job_name, 'credential_name', cred_name)
DBMS_SCHEDULER.set_attribute(job_name, 'destination', host:port), .enable(job_name)
Security Enhancements
sec* parameters
Pwds are case sensitive(default when NOT upgrading, reset the pwd otherwise); delay after wrong pwd up to 10 secs
orapwd file=... entries=... ignorecase=n(def.)
alter profile ... password_verify_function verify_function_11g
utlpwdmg.sql enforce pwd verification
OS authentication is checked before the password file
Authentication using OID can be configured by granting SYSOPER/SYSDBA enterprise role in OID
Administrators can be authenticated using local OS if they are in the OSDBA or OSPER group
* ACL Fine Grained Access to network system supplied packages*
An ACL, used to restrict access to certain hosts, is stored in XML format. Connect comprises resolve.
An ACL (Access Control List) is a users/privileges list; only one ACL per host
DBMS_NETWORK_ACL_ADMIN.create_acl(acl[name of XML DB priv file], description, principal, is_grant, privilege[connect | resolve])
#one privilege must be specified on creation
DBMS_NETWORK_ACL_ADMIN.add_privilege(acl[name of XML DB priv file], principal, is_grant, privilege[connect | resolve])
DBMS_NETWORK_ACL_ADMIN.assign_acl(acl[name of XML DB priv file], host[ex. *.fadalti.com], lower_port, upper_port)
DBMS_NETWORK_ACL_ADMIN.check_privilege(acl[name of XML DB priv file], username, privilege[connect | resolve])
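A worked sequence under assumed names (ACL file 'scott-acl.xml', user SCOTT; host '*.fadalti.com' as in the notes above):
begin
  DBMS_NETWORK_ACL_ADMIN.create_acl(
    acl         => 'scott-acl.xml',
    description => 'network access for scott',
    principal   => 'SCOTT',
    is_grant    => true,
    privilege   => 'connect');
  DBMS_NETWORK_ACL_ADMIN.add_privilege(
    acl       => 'scott-acl.xml',
    principal => 'SCOTT',
    is_grant  => true,
    privilege => 'resolve');
  DBMS_NETWORK_ACL_ADMIN.assign_acl(
    acl        => 'scott-acl.xml',
    host       => '*.fadalti.com',
    lower_port => 80,
    upper_port => 80);
  commit;
end;
/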

Oracle Wallet
<- Oracle Wallet Manager(owm.exe), see sqlnet.ora for location. Encryption Wallet(manually open after db startup) and Auto-Open Wallet
Wallet required, configured and opened to encrypt tablespaces, [3DES168, AES128, AES192, AES256], I/O overhead
Attempting to create a table in an encrypted tablespace with the wallet closed raises an error. You cannot change the key of an encrypted tablespace. A normal tablespace cannot be converted to an encrypted tablespace.
Creation method 1: mkstore -wrl $ORACLE_BASE/admin/$ORACLE_SID/wallet -create
Creation method 2: alter system set encryption key identified by "password"
create tablespace pippo datafile 'c:\pippo.dbf' size 100m encryption using 'AES128' default storage(encrypt)
Automatic SQL Tuning Advisor
<- EM
statistic_level=TYPICAL; BASIC disables AWR, which provides the SQL sources. AWR retention less than 7dd disables ASTA. STA may evolve SQL Plans from non-accepted to accepted
10g Automatic Tuning Optimizer(never auto tune) ->upgraded-> 11g Automatic SQL Tuning Advisor
job name SYS_AUTO_SQL_TUNING_TASK runs for 1 hour, runs only once during the nightly maintenance window as part of MAINTENANCE_WINDOW_GROUP
Statistical analysis, SQL profiling, Access path analysis(add indexes), SQL structure analysis(Modifying SQL statements)
STA tests and automatically implements only SQL profile creations with at least 66% benefit,
chooses statements from AWR Top SQL with poor performance in the past week, or you manually run it providing SQL Tuning Sets.
If run in Tuning Mode then the optimizer might run statistics and store in SQL Profiles
* Limitations * No recursive SQL, no ad hoc SQL, no parallel SQL, no long run profiled SQL, no insert/delete, no ddl
* Report *
Shows if statistics need to be collected, a profile was recommended, index recommendations.
Shows a benefit percentage of implementing the recommendation for SQL profiles and indexes
* Package *
DBMS_SQLTUNE.set_tuning_task_parameter(... see params below ...)
#for automatic tuning only, no manual
task_name=>'SYS_AUTO_SQL_TUNING_TASK'
parameter=>accept_sql_profile: When TRUE auto accept SQL profiles. Profiles are always generated
parameter=>replace_user_sql_profiles: allowed to replace user created SQL profiles
parameter=>max_sql_profiles_per_exec: maximum number of allowed SQL profiles changes accepted per tuning task
parameter=>max_auto_sql_profiles: maximum total number of allowed SQL profiles changes by db
parameter=>execution_days_to_expire: default 30, task history expiration
parameter=>time_limit: to change default duration of 1 hour in seconds
parameter=>test_execute: only use plan costs
DBMS_SQLTUNE.report_auto_tuning_task(type[text], level[typical], section[all]) return varchar2
DBMS_AUTO_TASK_ADMIN.enable(client_name=>'sql tuning advisor', operation=>'NULL', window_name=>['NULL'(all) | (a name)])
DBMS_AUTO_TASK_ADMIN.disable
#disable the Automatic Sql Tuning process
* View *
DBA_SQL_PROFILES[type], if type=AUTO then auto implemented SQL profile
DBA_ADVISOR_EXECUTIONS shows metadata information
DBA_ADVISOR_SQLSTATS shows a list of all SQL compilation and execution statistics
DBA_ADVISOR_SQLPLANS a list of all SQL execution plans
SQL Plan Management
Preserve SQL performance across major system changes
A SQL Plan may be:
non accepted #may be evolved to accepted
accepted #required for use, all manually loaded
enabled #required for use
fixed #A fixed plan has precedence over a non-fixed even if with higher cost, is FIXED if at least an enabled plan inside is FIXED=yes,
#the optimizer will not add new plans. If you accept a profile recommended by STA that profile will not be FIXED
SQL Tuning Advisor may evolve plans from non-accepted to accepted.
* Plan history * Contains both accepted and not-accepted plans for a repeatable SQL, best plan not accepted yet in SQL plan baseline
* SQL Plan Baseline * Contains plan history but only accepted
Automatic plan capture: see optimizer_capture_sql_plan_baselines parameter
Manual plan capture: performances are not verified
* Package *
DBMS_SQLTUNE.create_sqlset(sqlset_name, description) creates an empty SQL set
select value(p) from table(DBMS_SQLTUNE.select_workload_repository('peak baseline', null, null, 'elapsed time', null, null, null, 20)) p
DBMS_SQLTUNE.load_sqlset(sqlset_name, populate_cursor)
DBMS_SPM.load_plans_from_sqlset(sqlset_name, fixed=>no, enabled=>yes)
#plans are loaded into the SQL plan baseline as non-fixed accepted and enabled plans. May be used
DBMS_SPM.load_plans_from_cursor_cache(sql_id, plan_hash_value[null ], sql_text, fixed[no | yes ], enabled[yes],
attribute_name[sql_text | parsing_schema_name | module | action], attribute_value) return integer
DBMS_SPM.evolve_sql_plan_baseline(sql_handle, plan_list, plan_name, time_limit[DBMS_SPM.auto_limit | DBMS_SPM.no_limit],
verify[YES], commit[YES]) return CLOB
#show all non-accepted plans with status changed to accepted.
#VERIFY=YES the database executes the unaccepted plans and compares performance against the SQL plan baseline
#VERIFY=NO all unaccepted plans become accepted
DBMS_SPM.alter_sql_plan_baseline(sql_handle, plan_name, attribute_name[accepted_status | enabled], attribute_value=>'yes')
DBMS_XPLAN.display_sql_plan_baseline
* Views *
DBA_SQL_PLAN_BASELINES(sql_handle, sql_text, plan_name, origin[manual-load | manual-sqltune | auto-capture | auto-sqltune], enabled, accepted, fixed, autopurge)

SMB SQL Management Base
By default uses Automatic Segment Space Management ASSM
contains plan history information, the statement log, SQL profiles and plan baselines. Is part of data dictionary and stored in sysaux tbs.
sysaux tbs must be online to use SQL Plan Management features that access the SMB
DBMS_SPM.configure([SPACE_BUDGET_PERCENT | PLAN_RETENTION_WEEKS], value)
#SPACE_BUDGET_PERCENT: if the size of the SMB exceeds 10% of the sysaux tablespace a weekly alert is generated
#PLAN_RETENTION_WEEKS: plans not used for more than 1 year are purged
* Views *
DBA_SQL_MANAGEMENT_CONFIG(SPACE_BUDGET_PERCENT, PLAN_RETENTION_WEEKS)
SQL Access Advisor
May be interrupted and resumed. By default is not scheduled as a maintenance task by the AutoTask.
Partitioning Recommendations for tables, indexes and materialized views, default mode is interval.
Partitioning recommendations require at least 10000 rows in the table, the table must have some predicate or join with a date or number column, space, no bitmap indexes,
no interrupted SQL Access Advisor.
Advice on tuning Materialized Views, Materialized View Logs, indexes(B-tree, bitmap, function based)
* Packages *
DBMS_ADVISOR.add_sts_ref(task_name, sts_owner, sts_name)
#add an existing SQL tuning set
DBMS_ADVISOR.add_sqlwkld_ref(task_name, sts_name)
#add an existing SQL tuning set
DBMS_ADVISOR.delete_sts_ref(task_name, sts_owner, sts_name) link the SQL Access Advisor and the Workload
DBMS_ADVISOR.create_task(advisor_name=>'SQL Access Advisor', task_name)
DBMS_ADVISOR.set_task_parameter(task_name, param, value)[partition | segment][def_partition_tablespace, tbs][max_number_partitions,...]
[valid_table_list,...][mode,comprehensive][journaling,4][analysis_scope,all]
DBMS_ADVISOR.execute_task(task_name)
DBMS_ADVISOR.get_task_report(task_name, type[text], level[typical], section[all], owner_name, execution_name, object_id)
DBMS_ADVISOR.cancel_task(task_name) won't see any recommendation
DBMS_ADVISOR.interrupt_task(task_name) sometimes see recommendations, can't resume or restart
DBMS_ADVISOR.create_file(DBMS_ADVISOR.get_task_script(task_name), directory_name, file_name)
DBMS_ADVISOR.copy_sqlwkld_to_sts()
DBMS_ADVISOR.quick_tune(DBMS_ADVISOR.SQLACCESS_ADVISOR, task_name, sql_text)
#task is automatically created
ADDM Automatic Database Diagnostic Monitor
control_management_pack_access=DIAGNOSTIC+TUNING and statistic_level=TYPICAL or ALL
Analyzes AWR data, presents problems and recommendations. Runs automatically by default when a new AWR snapshot is taken. Enabled by default.
You can use directives to limit or filter the ADDM findings
* ADDM for RAC Real Application Cluster *
Database ADDM(analyze all RAC instances), Instance ADDM(analyze one RAC instances), Partial ADDM(analyze some RAC instances)
Analyze: [Global resources and global I/O], [High-Load SQL], [Contention across the instances], [Global cache interconnect traffic], [network latency issues],
[skews in instance report times]
* Packages *
DBMS_ADDM.analyze_db(task_name, begin_snapshot, end_snapshot)
#Creates an ADDM task global for all RAC instances
DBMS_ADDM.analyze_inst
#Creates an ADDM task for a single instance
DBMS_ADDM.analyze_partial
#Creates an ADDM task for some instances
DBMS_ADDM.delete
#delete an ADDM task
DBMS_ADDM.get_report(task_name)
#Gets a report of an ADDM task
DBMS_ADDM.delete_finding_directive;
DBMS_ADDM.insert_finding_directive(task=>null, directive=>'SGA Directive', finding=>'Undersized SGA', minactivesessions=>5, minpctimpact=>50)
#TASK : null means all subsequently created tasks
#FINDING: the finding in DBA_ADVISOR_FINDING_NAMES
DBMS_ADDM.delete_sql_directive; DBMS_ADDM.insert_sql_directive
#Limit action
DBMS_ADDM.delete_segment_directive;
#Limit action
DBMS_ADDM.insert_segment_directive
#Prevent ADDM from reporting a schema, segment, subsegment or object
DBMS_ADDM.delete_parameter_directive; DBMS_ADDM.insert_parameter_directive(param_name)
#Prevent ADDM from altering specified parameter
DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER (advisor_name, parameter, value)
#Specify the mode in wich to run ADDM [Database | Instance | Partial]
* Views *
DBA_ADVISOR_FINDINGS(finding_name) #USER_... available. Shows which findings occur most frequently in the db
DBA_ADDM_TASKS #USER_... available. Show all executed ADDM tasks
DBA_ADDM_INSTANCES #USER_... available. Instance level info for completed ADDM tasks
DBA_ADD_FINDINGS #USER_... available. Extends DBA_ADVISOR_FINDINGS
DBA_ADVISOR_FINDING_NAMES #USER_... available. Lists all registered finding names
DBA_ADVISOR_RECOMMENDATIONS(filtered[Y | N])
DBA_ADVISOR_ACTIONS(filtered[Y | N])
Automatic Memory Management
sga_target=0 #static; pga_target=0 #static; memory_target=900M #dynamic; memory_max_target=2000M;
show parameter target; #archive_lag_target=0, db_flashback_retention_target=1440, fast_start_io_target=0, fast_start_mttr_target=0, memory_max_target=900M,
memory_target=900M, pga_aggregate_target=0; sga_target=0
sga_target  pga_aggregate_target  memory_target  memory_max_target  result
0           0                     900M           999M               AMM enabled, defaults 60% to SGA and 40% to PGA
50M         100M                  900M           999M               AMM enabled, SGA minimum 50M, PGA minimum 100M
50M         0                     900M           999M               AMM enabled, SGA minimum 50M, PGA minimum is memory_target-50M
0           100M                  900M           999M               AMM enabled, SGA minimum is memory_target-100M, PGA minimum 100M
900M                              0 or null                         AMM enabled, max_memory_target is memory_target (SGA is not autotuned, PGA is)
50M                                                                 Automatically tuned only SGA subcomponents. PGA is autotuned. max_memory_target=0
                                  0 or null      999M               AMM disabled. memory_target=0 is the default value
* Views *
V$MEMORY_DYNAMIC_COMPONENTS(component, current_size, user_specified_size)
V$MEMORY_CURRENT_RESIZE_OPS #current resizes in progress
Optimizer Statistics Collection
Automatically during the nightly maintenance window, with precedence for the objects most in need. Can set preferences at table, schema, database and global level.
Both global(incremental, but not with histograms) and local statistics for partitioned tables. The statistics job is atomic at schema level
Statistics are collected for user-defined function and function-based indexes
A pending statistic is not yet available
* Options, Statistics preferences *
publish[true | false]: def. true. Store statistics in data dictionary, current statistics
stale_percent: def. 10%. Threshold level for an object to be stale
* Granularity * auto, global, global and partition, all, partition, subpartition
* Extended Statistics * multi column or expression. Collects number of distinct values, density, number of nulls, frequency histograms

Capture statistics, Test and Publish
1) alter session set optimizer_use_pending_statistics=false;
2) set PUBLISH option to false using SET_TABLE_PREFS procedure
#subsequent statistics will be gathered as pending
3) gather the statistics for table;
4) alter session set optimizer_use_pending_statistics=true;
5) test the statistics
6) execute the PUBLISH_PENDING_STATS procedure

Capture global statistics for a partitioned table INCREMENTALLY, only on last partition
Incremental works if granularity=global and table marked both incremental and publish, estimate_percent=>auto_sample_size[always recommended]
1) set INCREMENTAL value to true use SET_TABLE_PREFS procedure
2) DBMS_STATS.gather_table_stats(ownname=>'ALDO', tabname=>'persone', granularity=>'GLOBAL')
* Packages *
DBMS_STATS.gather_database_stats_job_proc
DBMS_STATS.set_prefs
#set preferences at various levels
DBMS_STATS.set_table_prefs(schema, table, pref_name, pref_value)
#PUBLISH =[true(def.) | false] FALSE: statistics taken will be pending
#INCREMENTAL=[true | false(def.)] TRUE: statistics will no be collected for partition that have not been changed
DBMS_STATS.set_schema_prefs
DBMS_STATS.resume_gather_stats
DBMS_STATS.get_prefs(['estimate_percents' | 'stale_percent' | 'publish'], schema, table)
DBMS_STATS.publish_pending_stats(schema[null(def.)], table[null(def.)])
#pending stats became public
DBMS_STATS.delete_pending_stats(schema[null(def.)], table[null(def.)])
DBMS_STATS.export_pending_stats(schema[null(def.)], table[null(def.)])
DBMS_STATS.create_extended_stats(ownname, tabname, extension=>'(name, surname)')
# select DBMS_STATS.create_extended_stats(null, 'persone', extension=>'(name, surname)') from dual;
#collect extended statistics. A column group is created. A virtual hidden column is created with a system-defined name
DBMS_STATS.drop_extended_stats(ownname, tabname, extension=>'name, surname')
DBMS_STATS.show_extended_stats(ownname, tabname, extension=>'name, surname') return varchar2
#return the name of the statistics entrys
DBMS_STATS.gather_table_stats(ownname=>null, tabname=>'pippe', method_opt=>'for all columns size skewonly for columns(lower(a),b) skewonly')
#coll. ext. stats
* Views *
DBA_TAB_STATISTICS, DBA_IND_STATISTICS store published statistics
DBA_TAB_STAT_PREFS(owner, table_name, preference_name, preference_value) #shows current settings for statistics preferences
DBA_TAB_PENDING_STATS
DBA_STAT_EXTENSIONS(extension_name, extension)
Result Cache
In SGA(shared pool). Affected by memory_target, sga_target, shared_pool_size. Recently-used algorithm to age out results.
Can cache flashback SQL, can cache query fragments. Used both for SQL queries and deterministic functions.
/*Hint*/: result_cache and no_result_cache; can be used in an inline view
Result Cache Memory pool
SQL Query Result Cache
Pl/SQL Function Result Cache
client side
* Restrictions *
can't use the SQL Query Result Cache on: Temporary tables, Dictionary tables, non-deterministic Pl/Sql functions(use the deterministic keyword),
[currval, nextval, sysdate, sys_timestamp, current_date, current_timestamp, local_timestamp, userenv, sys_context, sys_guid], data undergoing modification, subqueries,
pipelined functions; functions can't have OUT or IN OUT parameters, no anonymous blocks, not in packages with invoker's rights, no IN params as LOB, REF CURSOR,
RECORD, COLLECTION, OBJECT
create or replace function pippo(id number) return pp_record RESULT_CACHE RELIES_ON (persone) is ...
#RELIES_ON means that cached results become invalid when any database object on which they depend is changed
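A complete version of the sketch above, assuming a PERSONE table with an ID column (the function and variable names are illustrative):
create or replace function conta_persone(p_id number) return number
  result_cache relies_on (persone)
is
  n number;
begin
  -- the result for each p_id is cached; the cache entry is invalidated when PERSONE changes
  select count(*) into n from persone where id = p_id;
  return n;
end;
/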
* Packages *
DBMS_RESULT_CACHE.memory_report
#Reports cache enabled, block size, max cache size, max result size, memory [total | fixed | dynamic]
DBMS_RESULT_CACHE.status
#enabled/disabled, in RAC shows if synchronizing
DBMS_RESULT_CACHE.flush return boolean
#returns true on successful removal; disable/close the cache with DBMS_RESULT_CACHE.bypass(true) before flushing
DBMS_RESULT_CACHE.bypass([true | false])
#bypass the cache
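A minimal flush sequence following the notes above:
SQL>exec DBMS_RESULT_CACHE.bypass(true);
#cache is now bypassed and can be flushed safely
SQL>exec DBMS_RESULT_CACHE.flush;
SQL>exec DBMS_RESULT_CACHE.bypass(false);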
* Views *
V$RESULT_CACHE_STATISTICS #list cache settings and memory usage stats
V$RESULT_CACHE_OBJECTS(type, status[new | published | bypass | expired | invalid], name) #list cached objects and attributes
V$RESULT_CACHE_DEPENDENCY
V$RESULT_CACHE_MEMORY #show all memory blocks and statistics

Client Result Cache
OCI-based; caches only top-level SQL statements, not query fragments; excellent for lookup tables. Independent of the server result cache
Client-side sqlnet.ora parameters override the db server init params
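A configuration sketch; the parameter names below are my recollection of the 11g settings, treat them as assumptions to verify:
#server side (init.ora), enables the client cache for all OCI clients
CLIENT_RESULT_CACHE_SIZE=32768
CLIENT_RESULT_CACHE_LAG=3000
#client side (sqlnet.ora), overrides the server values
OCI_RESULT_CACHE_MAX_SIZE=65536
OCI_RESULT_CACHE_MAX_RSET_SIZE=16384
OCI_RESULT_CACHE_MAX_RSET_ROWS=100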
* Restrictions *
no views, no remote objects, no complex types, no flashback, no SQL with pl/sql, no VPD policies on the tables
* Views *
CLIENT_RESULT_CACHE_STATS$ #shows client settings and usage stats
Adaptive Cursor Sharing
An automatic feature that can't be turned off; no init params. Works whether binds are supplied by the application or introduced by literal replacement (cursor_sharing).
10g: bind peeking: the bind values seen at the first hard parse determine the plan for every subsequent execution
11g: new child cursors are generated when different bind values warrant different plans
Bind-sensitive cursor: bind values differ enough that a new plan may be needed. New plans are added to the plan history but not used until the db has verified better
performance (see optimizer_capture_sql_plan_baselines). Once the plan changes, the cursor becomes bind-aware and multiple plans are used depending on the binds
* Views *
V$SQL(is_bind_sensitive, is_bind_aware)
V$SQL_CS_HISTOGRAM #shows the distribution of execution counts across the execution history histogram
V$SQL_CS_SELECTIVITY #shows selectivity ranges stored in cursor for predicates with bind variables
V$SQL_CS_STATISTICS #stats for different bind sets, buffer gets and CPU time
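A quick way to check how a cursor is classified; &sql_id is a SQL*Plus substitution placeholder:
select sql_id, child_number, is_bind_sensitive, is_bind_aware from v$sql where sql_id = '&sql_id';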
Flashback Data Archive FDA - Oracle Total Recall
Requires Automatic Undo Management. Archivelog mode is not required. Flashback archive tablespaces must use ASSM
Stores data changes, compressed, for any length of time you want, in one or more tablespaces; an internal history table is kept for each tracked table.
Only for the tables you choose. You can't modify archived data, only purge it.
Tables must be enabled for tracking (flashback archiving).
Can set different retention for different tables by assigning them to different flashback data archives.
Retention must be specified when the data archive is created; a table inherits the retention of its archive. The internal history table has a few extra columns and is range partitioned.
FBDA process; its sleep time is automatically adjusted. Can use flashback SQL (AS OF).
Old data is automatically purged. You may create a default archive (not required)
* Differences from Flashback Database (FD) * FD takes the whole database back in time, while FDA gives read-only access to history. FDA works online, FD works offline.
FD at db level while FDA at table level.
* Limitations *
no [drop | rename | modify] column, but add column is allowed. No partition operations, no LONG-to-LOB conversion, no [drop | truncate] table
SQL>grant flashback archive ADMINISTER to user;
#ADMINISTER allows [create flashback archive, alter flashback archive, drop flashback archive]
SQL>grant flashback archive on f1 to user;
#allows the user to enable flashback archiving on tables using archive f1; the user also needs SELECT privs and EXECUTE on DBMS_FLASHBACK for queries
SQL>create flashback archive f1 tablespace tbs1 quota 2000m RETENTION 4 year;
#RETENTION must be specified for data archive
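To enable tracking (flashback archiving) on a table, the step the notes above require, something like:
SQL>alter table t1 flashback archive f1;
#omit the archive name to use the default flashback archive
SQL>alter table t2 flashback archive;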
alter flashback archive f1 purge all; alter flashback archive f1 purge before timestamp(systimestamp - interval '2' day);
alter flashback archive f1 purge before scn 1233;
alter table t1 no flashback archive;
#all flashback data is removed
alter flashback archive fla1 modify retention 1 year
select * from t1 versions between timestamp to_timestamp(...) and maxvalue
* Views *
DBA_FLASHBACK_ARCHIVE(flashback_archive_name, retention_in_days, status), DBA_FLASHBACK_ARCHIVE_TS(quota_in_mb), DBA_FLASHBACK_ARCHIVE_TABLES

Flashback Transaction Backout
Database in archivelog mode; undo and redo data are needed. Supplemental logging and primary key supplemental logging must be enabled at the db level.
Transaction dependencies:
write-after-write: dependent transaction modifies data modified by parent transaction
primary key constraint: dependent transaction reinserts the primary key deleted by parent transaction
alter database add supplemental log data;
alter database add supplemental log data (primary key) columns;
grant execute on dbms_flashback to hr;
grant select any transaction to hr;
grant insert, update, delete on pippo to hr;
#grant DML on specific involved tables
* Packages *
DBMS_FLASHBACK.transaction_backout(numtxns, xids(array), options[see below...], scnhint(SCN at the start of the transaction) or timehint)
#no commit inside! (see the sketch after the options list)
nocascade: (def.) no dependent transactions expected
cascade: dependent transactions are backed out before the parent
nocascade_force: dependent transactions are ignored
noconflict_only: only those rows with no conflict in parent transactions
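A minimal PL/SQL sketch; the XID value is illustrative (take real ones from FLASHBACK_TRANSACTION_QUERY or V$TRANSACTION):
declare
  l_xids sys.xid_array := sys.xid_array('0500120041BA0000');
begin
  -- back out the transaction and its dependents
  dbms_flashback.transaction_backout(numtxns => 1,
                                     xids    => l_xids,
                                     options => dbms_flashback.cascade);
end;
/
#the procedure does not commit: inspect the result, then commit manually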
* Views *
DBA_FLASHBACK_TRANSACTION_STATE #backed out transactions
DBA_FLASHBACK_TRANSACTION_REPORT
Oracle SecureFiles
10g LOBs -> 11g BasicFiles
Transparent encryption. Variable chunk size up to 64M, chunks stored next to one another. Redo is generated only for changed data.
New client/server network layer for faster data transfer.
Every option may be changed later. The LONG API can't be used to configure SecureFiles settings.
COMPATIBLE must be at least 11.0. An error is raised if you use SecureFiles capabilities on a BasicFiles LOB.
1) DEDUPLICATION detects and avoids duplicated data, Advanced Compression is required, only one copy is saved; deduplication is disabled by default
2) COMPRESSION [HIGH | MEDIUM(def.)], compression is disabled by default
3) ENCRYPTION must use the Advanced Security Option [3DES168 | AES128 | AES192 | AES256], encryption is disabled by default
Older storage clauses (chunk, pctversion, freelists) are not necessary with SecureFiles; use these instead:
MAXSIZE: maximum LOB segment size
RETENTION: version control policy
MAX: old versions are reclaimed once the segment reaches MAXSIZE
MIN: old versions are retained for at least the time specified
AUTO: default.
NONE: old versions are reused as much as possible
create table pippo(l clob) LOB(l) STORE AS [securefile | basicfile] (COMPRESS [HIGH | MEDIUM] DEDUPLICATE [CACHE | NOCACHE | CACHE READS] NOLOGGING);
#STORE AS is optional, the default is a traditional BasicFiles LOB.
#CACHE will place lob pages in the buffer cache for speedier access, default is NOCACHE, CACHE READS cache only during reads not for writes.
#NOLOGGING will not generate any redo
#COMPRESS [HIGH | MEDIUM(def.)]
#(NO) KEEP DUPLICATES #specify whether the database will store duplicates for the LOB column
create table pippo (doc clob)lob(doc) store as securefile(ENCRYPT);
create table pippo (doc clob encrypt using 'AES256')lob(doc) store as securefile;

Migrating to SecureFiles
* Partition Exchange *
Ensure you have enough space for the largest partition; needs a long maintenance window; the segment goes offline
* Online Redefinition (recommended) *
No segment offline, migration in parallel, indexes must be rebuilt, double the space is required.
Materialized views must be refreshed if the source table is redefined.
create table tab1(id number, c clob) lob(c) store as basicfile;
create table tab2(id number, c clob) lob(c) store as securefile;
DBMS_REDEFINITION.start_redef_table('scott', 'tab1', 'tab2', 'id id, c c');
DBMS_REDEFINITION.copy_table_dependents('scott', 'tab1', 'tab2', 1, true, true, true, false, error_count);
#error_count is a PL/SQL OUT variable (number of errors)
DBMS_REDEFINITION.finish_redef_table('scott', 'tab1', 'tab2');
* Packages *
DBMS_LOB.getoptions
#return lob settings
DBMS_LOB.setoptions
DBMS_LOB.get_deduplicated_regions
DBMS_SPACE.space_usage
#determines the disk space used by all LOBs in a segment, only for ASSM segments
* Views *
DBA_SPACE, DBA_LOBS
Locking Enhancements
By default a DDL statement fails if it can't get an immediate DML lock on the table (see the ddl_lock_timeout sketch below).
lock table pippo in exclusive mode [nowait | wait <seconds>] (optional)
#if nowait/wait is omitted the session waits indefinitely
The following statements no longer require an exclusive lock:
create index online;
create materialized view log;
alter table enable constraint novalidate;
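The related 11g session parameter, a minimal sketch:
SQL>alter session set ddl_lock_timeout=30;
#subsequent DDL in this session waits up to 30 seconds for the DML lock instead of failing immediately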
Minimal invalidation - Fine Grained Dependency Management
Only logically affected objects, such as views and synonyms, are invalidated during online redefinition; triggers have not been enhanced and are still invalidated.
Parameter file creation from Memory
create [spfile | pfile] from memory;
#will contain values for all ~150 init parameters, even those not previously set
Hot Patching
Supported on RAC
opatch enables, disables, and installs patches.
Currently only for Linux and Solaris. Consumes extra memory depending on the number of currently running Oracle processes.
At least one OS page of memory (4-8 KB) per Oracle process is required.
opatch query -is_online_patch
#determine if patch is hot
Invisible Indexes
Invisible to the optimizer, so discarded from plans. Can be changed between invisible and visible at any time.
create index inv1 on pippo(nome) invisible;
alter index i1 in/visible;
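To let the optimizer consider invisible indexes again, the 11g parameter below can be set at session or system level:
SQL>alter session set optimizer_use_invisible_indexes=true;
#the optimizer now considers invisible indexes in this session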
* Views *
DBA_INDEXES(visibility)
Shrinking Temporary Tablespaces
May shrink online temporary tablespaces and single tempfiles. Shrinks to a minimum of 1MB (def.)
alter tablespace temp shrink space [keep 100m];
#the released space is not distributed equally across the tempfiles
alter tablespace temp shrink tempfile '/pippo.dbf' [keep 100m];
* Views *
DBA_TEMP_FREE_SPACE(tablespace_name, free_space) #FREE_SPACE in bytes: space unallocated plus allocated space available to be reused
Tablespace Option for Temporary tables
You may manually specify a temporary tablespace for a global temporary table; its indexes are created in the same tablespace
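A minimal sketch; the table and tablespace names are illustrative:
create global temporary table gtt1(id number) on commit delete rows tablespace temp2;
#indexes created on gtt1 are also placed in temp2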
PL/SQL and Java Automatic Native Compilation
see parameter plsql_code_type and plsql_optimize_level
alter procedure pioppo compile plsql_code_type=native;
* Recompiling a Database for Pl/Sql Native Compilation *
1) shutdown immediate; 2) set plsql_code_type=native and plsql_optimize_level=3; 3) startup upgrade; 4) run @?/rdbms/admin/dbsupgnv.sql; 5) restart and run utlrp.sql
* Views *
DBA_PLSQL_OBJECT_SETTINGS(plsql_code_type)
OLTP Table Compression
Writes are not degraded and reads are improved because data is read directly in compressed form. Disk space and memory use are reduced; CPU use increases.
May compress data during DML, not only when bulk loading (10g) or in CTAS (10g). Holes made by deletes are eliminated
Compression may be enabled for tables, partitions and tablespaces.
A block is compressed when it reaches PCTFREE.
create table pippo(...) compress; #normal compress
create table pippo(...) compress for direct_load_operations; #normal compress, for data warehouse workloads rather than OLTP
create table pippo(...) compress for all operations; #compress for DML operations
alter table pippo compress for all operations; #an existing table may be altered to compress, but existing data remains uncompressed
* Views *
DBA_TABLES(compression, compress_for)
Direct NFS Client
Simplifies manageability across multiple platforms and increases performance; the kernel NFS layer is avoided for I/O, though the OS kernel is still used for network communication.
Load balancing. You may control the I/O paths. The OS must perform the mount. The NFS version 3 protocol is implemented in the Oracle RDBMS kernel.
If oranfstab and the OS mount points mismatch, Direct NFS is stopped. To remove an NFS path that is in use, restart the database.
The Direct NFS client is disabled by deleting oranfstab, modifying its settings, or replacing the NFS ODM library with the stub libodm11.so
If the database can't open the NFS mount through Direct NFS, it falls back to the OS kernel NFS
* Configuring *
1) Specify the mount point in /etc/mtab (the first default location) and be sure the OS performs the mount
2) (optional) edit oranfstab for Oracle-specific options such as additional paths to a mount point.
Mount point search order:
(optional) $ORACLE_HOME/dbs/oranfstab
(optional) /etc/oranfstab
(when no oranfstab) /etc/mtab
3) Enable Direct NFS by replacing the standard ODM library $ORACLE_HOME/lib/libodm11.so with the NFS ODM library libnfsodm11.so
oranfstab format:
server: TestServer1 #nfs server name
path: 130.33.34.11 #up to 4 network paths if the first fail, load balancing
export: /vol/oradata1 #exported path from NFS server
mount: /mnt/oradata1 #local mount point
* Views *
V$DNFS_STATS #show performance stats for Direct NFS
V$DNFS_SERVERS #show servers accessed by Direct NFS
V$DNFS_FILES #files currently open through Direct NFS
V$DNFS_CHANNEL #open network paths

Basic RAC 10g administration

This is just a draft of basic RAC 10g administration

RAC benefits and characteristics
- does not protect from human errors
- increased availability on node/instance failure
- speeds up parallel DSS queries
- does not speed up parallel OLTP processes
- no availability increase on data failures
- no availability increase on network failures
- no availability increase on release upgrades
- no scalability increase for all application workloads

RAC tuning - after migrating to RAC, test:
- Interconnect latency
- Instance recovery time
- Applications relying heavily on table truncates, full table scans, sequence and non-sequence key generation, global context variables

RAC-specific background processes for the database instance
Cluster Synchronization Service (CSS): ocssd daemon, manages cluster configuration
Cluster Ready Services (CRS): crsd daemon, manages resources (listeners, VIPs, Global Service Daemon GSD, Oracle Notification Service ONS); backs up the OCR every four hours; configuration is stored in the OCR
Event Manager (EVM): evmd daemon, publishes events
LMSn: coordinates block updates
LMON: global enqueue monitor for shared locks
LMDn: manages requests for global enqueues
LCK0: handles resources not requiring Cache Fusion
DIAG: collects diagnostic info

FAN Fast Application Notification
Must connect using a service. Logged to: $ORA_CRS_HOME/racg/dump and $ORA_CRS_HOME/log//racg
Event format: VERSION= service= [database= [instance=]] [host=] status= reason= [card=] timestamp=
event_type Description
SERVICE Primary application service event
SRV_PRECONNECT Preconnect application service event (TAF)
SERVICEMEMBER Application service on a specific instance event
DATABASE Database event
INSTANCE Instance event
ASM ASM instance event
NODE Cluster node event

Oracle Notification Service ONS - transmits FAN events
For every FAN event status change, all executables in $ORA_CRS_HOME/racg/usrco are launched (callout scripts).
The ONS process is $ORA_CRS_HOME/opmn/bin/ons
Arguments: -d: run in daemon mode; -a <command>: one of [ping, shutdown, reload, debug]
[$ORA_CRS_HOME/opmn/conf/ons.config]
localport=6100
remoteport=6200
loglevel=3
useocr=on
onsctl start/stop/ping/reconfig/debug/detailed

FCF Fast Connection Failover
- A JDBC application configured to use FCF automatically subscribes to FAN events
- A JDBC application must use service names to connect
- A JDBC application must use the implicit connection cache
- $ORACLE_HOME/opmn/lib/ons.jar must be in the classpath
- Pass -Doracle.ons.oraclehome=<home> or call System.setProperty("oracle.ons.oraclehome", "/u01/app/oracle/product/10.2.0/db_1");
OracleDataSource ods = new OracleDataSource();
ods.setUser("USER1");
ods.setPassword("USER1");
ods.setConnectionCachingEnabled(true);
ods.setFastConnectionFailoverEnabled(true);
ods.setConnectionCacheName("MyCache");
ods.setConnectionCacheProperties(cp);
ods.setURL("jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=london1-vip)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=london2-vip)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=SERVICE1)))");

Check that the main Clusterware services are up
#check Event Manager is up
ps -ef | grep evmd
#check Cluster Synchronization Services is up
ps -ef | grep ocssd
#check Cluster Ready Services is up
ps -ef | grep crsd
#check Oracle Notification Service
ps -ef | grep ons
[/etc/inittab]
...
h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
crsctl
#tested, as oracle
$ORA_CRS_HOME/bin/crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
#add a new voting disk online (10.2), -force if Oracle Clusterware is not started
crsctl add css votedisk 'new votedisk path' -force
crsctl start/stop/enable/disable crs
#set/unset parameters on the OCR
crsctl set/unset
#list the currently configured voting disks
crsctl query css votedisk
0. 0 /u02/oradata/RAC/CSSFile1
1. 1 /u03/oradata/RAC/CSSFile2
2. 2 /u04/oradata/RAC/CSSFile3
#dynamically add and remove voting disks in an existing Oracle Clusterware installation
crsctl add/delete css votedisk <path> -force
#as root, enable extra debug for the running CRS daemons as well as those running in the future
#enable to inspect system reboots
crsctl debug log crs

OCR - Oracle Cluster Registry
[/etc/oracle/ocr.loc] (10g) or [/etc/oracle/srvConfig.loc] (9i, still exists in 10g for compatibility)
ocrconfig_loc=/dev/raw/raw1
ocrmirrorconfig_loc=/dev/raw/raw2
local_only=FALSE
OCRCONFIG - command-line tool for managing the Oracle Cluster Registry
#recover the OCR logically, must be done on all nodes
ocrconfig -import exp.dmp
#export the OCR content logically
ocrconfig -export exp.dmp
#recover the OCR from an OCR backup
ocrconfig -restore bck.ocr
#show backup status; the crsd daemon backs up the OCR every four hours, the most recent backup is backup00.ocr
ocrconfig -showbackup
london1 2005/08/04 11:15:29 /u01/app/oracle/product/10.2.0/crs/cdata/crs
london1 2005/08/03 22:24:32 /u01/app/oracle/product/10.2.0/crs/cdata/crs
#change the OCR autobackup location
ocrconfig -backuploc <path>
#must be run on each affected node
ocrconfig -repair ocr
ocrconfig -repair ocrmirror
#force Oracle Clusterware to restart on a node, may lose recent OCR updates
ocrconfig -overwrite
OCRDUMP - identify the interconnect being used
$ORA_CRS_HOME/bin/ocrdump.bin -stdout -keyname SYSTEM.css.misscount -xml
CVU - Cluster Verification Utility, gets the status of CRS resources
dd: use it to safely back up voting disks when nodes are added/removed
#verify a restore
cluvfy comp ocr -n all
ocrcheck
#OCR integrity check, validates the accessibility of the device and its block integrity
#logs to the current dir or to $OCR_HOME/log//client

Pre-install prerequisites
(./run)cluvfy: run from the install media or CRS_HOME, verifies prerequisites on all nodes
Post installation
- Backup root.sh
- Set up other user accounts
- Verify Enterprise Manager / Cluster Registry by running srvctl config database -d db_name

SRVCTL
Stores info in the OCR; manages: database, instance, service, node applications, ASM, listener
#verify Enterprise Manager / Cluster Registry
srvctl config database -d <db_name>
#set the SRVM_TRACE=TRUE environment var to create a Java tool trace/debug file for srvctl
#-v to check services
srvctl status database -d RAC -v SERVICE1
srvctl start database -d <db> [-o mount]
srvctl stop database -d <db> [-o stop_options]
#move the parameter file
srvctl modify database -d name -p /u03/oradata/RAC/spfileRAC.ora
srvctl remove database -d TEST
#verify the OCR configuration
srvctl config database -d TEST
srvctl start instance -d RACDB -i "RAC3,RAC4"
srvctl stop instance -d <db> -i "orcl3,orcl4" -o immediate
srvctl add instance -d RACDB -i RAC3 -n london3
#move the instance to node london4
srvctl modify instance -d RAC -i RAC3 -n london4
#set a dependency of instance RAC3 on +ASM3
srvctl modify instance -d RAC -i RAC3 -s +ASM3
#remove an ASM dependency
srvctl modify instance -d RAC -i RAC3 -r
#stop all applications on a node
srvctl stop nodeapps -n london1
#-a displays the VIP configuration
srvctl config nodeapps -n london1 -a
srvctl add nodeapps -n london3 -o $ORACLE_HOME -A london3-vip/255.255.0.0/eth0

Services
Changes are recorded in the OCR only! Must use DBMS_SERVICE to update the dictionary
srvctl start service -d RAC -s "SERVICE1,SERVICE2"
srvctl status service -d RAC -s "SERVICE1,SERVICE2"
srvctl stop service -d RAC -s "SERVICE1,SERVICE2" -f
srvctl disable service -d RAC -s "SERVICE2" -i RAC4
srvctl remove service -d RAC -s "SERVICE2"
#relocate from RAC2 to RAC4
srvctl relocate service -d RAC -s "SERVICE2" -i RAC2 -t RAC4
#preferred RAC1,RAC2 and available RAC3,RAC4
#-P PRECONNECT automatically creates an ERP_PRECONNECT service alongside ERP, to use as BACKUP in tnsnames
#see the TNS names configuration
#the service is NOT started automatically, it must be started manually (dbca does it automatically)
srvctl add service -d ERP -s SERVICE2 -i "RAC1,RAC2" -a "RAC3,RAC4" -P PRECONNECT
#show the configuration, -a shows the TAF configuration
srvctl config service -d RAC -a
#modify an existing service
srvctl modify service -d RACDB -s "SERVICE1" -i "RAC1,RAC2" -a "RAC3,RAC4"
srvctl stop service -d RACDB -s "SERVICE1"
srvctl start service -d RACDB -s "SERVICE1"
Views
GV$SERVICES, GV$ACTIVE_SERVICES, GV$SERVICEMETRIC, GV$SERVICEMETRIC_HISTORY, GV$SERVICE_WAIT_CLASS, GV$SERVICE_EVENT, GV$SERVICE_STATS, GV$SERV_MOD_ACT_STATS

SQL for RAC
select * from V$ACTIVE_INSTANCES;
Cache Fusion - GRD Global Resource Directory
GES (Global Enqueue Service)
GCS (Global Cache Service)
Data Guard & RAC
- Configuration files at the primary location can be stored in any shared ASM diskgroup, on shared raw devices, or on any shared cluster file system. They simply have to be shared
Adding a new node
- Configure hardware and OS
- Reconfigure listeners with NETCA
- $ORA_CRS_HOME/bin/addnode.sh

VIP virtual IP
- Both application and RAC VIPs fail over if the related application fails, and accept new connections
- RAC VIP sharing among database instances is recommended, but not among different applications because...
- ...the VIP fails over if the application fails over
- A failed-over VIP application accepts new connections
- Each VIP requires an unused and resolvable IP address
- VIP addresses should be registered in DNS
- VIP addresses should be on the same subnet as the public network
- VIPs are used to prevent connection request timeouts during client connection attempts
Changing a VIP
1. Stop VIP-dependent cluster components on one node
2. Make the changes in DNS
3. Change the VIP using SRVCTL
4. Restart the VIP-dependent components
5. Repeat the above on the remaining nodes

oifcfg
Allocates and deallocates network interfaces; gets values from the OCR
#display a list of networks
oifcfg getif
eth1 192.168.1.0 global cluster_interconnect
eth0 192.168.0.0 global public
#display a list of current subnets
oifcfg iflist
eth0 147.43.1.0
eth1 192.168.1.0
#include a description of the subnet with the -p option
oifcfg iflist -p
eth0 147.43.1.0 UNKNOWN
eth1 192.168.1.0 PRIVATE
#in 10.2 public interfaces show as UNKNOWN
#include the subnet mask by appending -n to -p
oifcfg iflist -p -n
eth0 147.43.1.0 UNKNOWN 255.255.255.0
eth1 192.168.1.0 PRIVATE 255.255.255.0

Db parameters with the SAME VALUE across all instances
active_instance_count
archive_lag_target
compatible
cluster_database #RAC param
cluster_database_instances #RAC param
#define the network interfaces used for the interconnect
#it is not a failover but a redistribution: if one address does not work then all stop
#overrides the OCR
cluster_interconnects #RAC param = 192.168.0.10; 192.168.0.11; ...
control_files
db_block_size
db_domain
db_files
db_name
db_recovery_file_dest
db_recovery_file_dest_size
db_unique_name
dml_locks (when 0)
instance_type (rdbms or asm)
max_commit_propagation_delay #RAC param
parallel_max_servers
remote_login_passwordfile
trace_enabled
undo_management #AUTO and MANUAL cannot be mixed in a RAC
Db parameters with an INSTANCE-specific VALUE
instance_name
instance_number
thread
undo_tablespace #system param
Listener parameters
local_listener='(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.0.13) (PORT = 1521)))' #allows pmon to register with the local listener when not using port 1521
remote_listener='(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.2.9) (PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.2.10)(PORT = 1521)))' #makes the listener aware of the load on the listeners of the other nodes
Important RAC parameters
gc_files_to_locks #any setting other than the default disables Cache Fusion
recovery_parallelism #number of redo application server processes in instance or media recovery
RAC and Standby parameters
dg_broker_config_file1 #shared between primary and standby instances
dg_broker_config_file2 #different from dg_broker_config_file1, shared between primary and standby instances

Shared contents: datafiles, controlfiles, spfiles, redo logs
Shared or local? (columns: RAW_Dev File_Syst ASM NFS OCFS)
- Datafiles: shared mandatory
- Control files: shared mandatory
- Redo log: shared mandatory
- SPfile: shared mandatory
- OCR and vote: shared mandatory (Y Y N)
- Archived log: shared not mandatory (N Y N Y)
- Undo: local
- Flash Recovery: shared (Y Y Y)
- Data Guard broker conf.: shared (primary & standby) (Y Y)

Adding logfile thread groups for a new instance
#to support a new instance on your RAC
1) alter database add logfile thread 3 group 7;
2) alter database add logfile thread 3 group 8;
3) alter database enable thread 3;
#if you want an instance to use the new thread
4) alter system set thread=3 scope=spfile sid='RAC01';
5) srvctl stop instance -d RACDB -i RAC01 (then restart it)

Views and queries
select * from GV$CACHE_TRANSFER;

An instance failed to start, what do we do?
1) Check the instance alert.log
2) Check the Oracle Clusterware software alert.log
3) Check the resource state using CRS_STAT
Install
See official Note 239998.1 for removing crs installation
See http://startoracle.com/2007/09/30/so-you-want-to-play-with-oracle-11gs-rac-heres-how/ to install 11g RAC on VMware
See http://www.oracle.com/technology/pub/articles/hunter_rac10gr2_iscsi.html to install on Linux with iSCSI disks
See http://www.oracle-base.com/articles/10g/OracleDB10gR2RACInstallationOnCentos4UsingVMware.php to install on VMware

#If using VMware remember to allow shared disks disabling locking
#Obviously this is not required if you use OCFS or ASM
disk.locking = "FALSE"
See OCFS Oracle Cluster Filesystem
Prerequisites check
#check node connectivity and Clusterware integrity
./runcluvfy.sh stage -pre dbinst -n all
./runcluvfy.sh stage -post hwos -n "linuxes,linuxes1" -verbose
WARNING: Package cvuqdisk not installed.
rpm -Uvh clusterware/rpm/cvuqdisk-1.0.1-1.rpm
WARNING: Unable to determine the sharedness of /dev/sdf on nodes: linuxes1,linuxes1,linuxes1,linuxes1,linuxes1,linuxes1,linuxes,linuxes,linuxes,linuxes,linuxes,linuxes
#safely ignore this error
./runcluvfy.sh comp peer -n "linuxes,linuxes1" -verbose
./runcluvfy.sh comp nodecon -n "linuxes,linuxes1" -verbose
./runcluvfy.sh comp sys -n "linuxes,linuxes1" -p crs -verbose
./runcluvfy.sh comp admprv -n "linuxes,linuxes1" -verbose -o user_equiv
./runcluvfy.sh stage -pre crsinst -n "linuxes,linuxes1" -r 10gR2

Restart installation - remove from each node
su -c "$ORA_CRS_HOME/install/rootdelete.sh; $ORA_CRS_HOME/install/rootdeinstall.sh"
#oracle user
export DISPLAY=192.168.0.1:0.0
/app/crs/oui/bin/runInstaller -removeHome -noClusterEnabled ORACLE_HOME=/app/crs LOCAL_NODE=linuxes
rm -rf $ORA_CRS_HOME/*
#root
su -c "chown oracle:dba /dev/raw/*; chmod 660 /dev/raw/*; rm -rf /var/tmp/.oracle; rm -rf /tmp/.oracle"
#Format rawdevices using
dd if=/dev/zero of=/dev/raw/raw6 bs=1M count=250

#If related error message appears during installation, manually launch on related node
/app/crs/oui/bin/runInstaller -attachHome -noClusterEnabled ORACLE_HOME=/app/crs ORACLE_HOME_NAME=OraCrsHome CLUSTER_NODES=linuxes,linuxes1 CRS=true "INVENTORY_LOCATION=/app/oracle/oraInventory" LOCAL_NODE=linuxes

runcluvfy.sh stage -pre crsinst -n linuxes -verbose

/etc/hosts example
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost
147.43.1.101 london1
147.43.1.102 london2
#VIP is usable only after the VIPCA utility runs,
#should be created on the public interface. Remember that VIPCA is a GUI tool
147.43.1.201 london1-vip
147.43.1.202 london2-vip
192.168.1.1 london1-priv
192.168.1.2 london2-priv

Kernel Parameters (/etc/sysctl.conf)   Recommended Values
kernel.sem (semmsl)                    250
kernel.sem (semmns)                    32000
kernel.sem (semopm)                    100
kernel.sem (semmni)                    128
kernel.shmall                          2097152
kernel.shmmax                          Half the size of physical memory
kernel.shmmni                          4096
fs.file-max                            65536
net.core.rmem_default                  262144
net.core.rmem_max                      262144
net.core.wmem_default                  262144
net.core.wmem_max                      262144
net.ipv4.ip_local_port_range           1024 to 65000

RAC restrictions
- dbms_alert: both publisher and subscriber must be on the same instance, AQ is the workaround
- dbms_pipe: only works on the same instance, AQ is the workaround
- UTL_FILE, directories, external tables and BFILEs need to be on shared storage

Implementing the HA High Availability Framework
Use srvctl to start/stop applications
#manually create a script that the OCR will use to start/stop/status the application
1. Create an application VIP.
#this command generates an application profile called hafdemovip.cap in the $ORA_CRS_HOME/crs/public directory
$ORA_CRS_HOME/bin/crs_profile -create hafdemovip -t application -a $ORA_CRS_HOME/bin/usrvip -o oi=eth0,ov=147.43.1.200,on=255.255.0.0
#as the oracle user, register the VIP with Oracle Clusterware
$ORA_CRS_HOME/bin/crs_register hafdemovip
#as the root user, set the owner of the application VIP to root
$ORA_CRS_HOME/bin/crs_setperm hafdemovip -o root
#as the root user, grant the oracle user permission to run the script
$ORA_CRS_HOME/bin/crs_setperm hafdemovip -u user:oracle:r-x
#as the oracle user, start the application VIP
$ORA_CRS_HOME/bin/crs_start hafdemovip
2. Create an application profile.
$ORA_CRS_HOME/bin/crs_profile -create hafdemo -t application -d "HAF Demo" -r hafdemovip -a /tmp/HAFDemoAction -o ci=5,ra=60
3. Register the application profile with Oracle Clusterware.
$ORA_CRS_HOME/bin/crs_register hafdemo
$ORA_CRS_HOME/bin/crs_start hafdemo

CRS commands
crs_profile, crs_register, crs_unregister, crs_getperm, crs_setperm, crs_start, crs_stop, crs_stat, crs_relocate

Server-side callouts
Oracle instance up (/down?)
Service member down (/up?)
Shadow application service up (/down?)

Removing a node from a cluster
- Remove the node from clusterware
- Check that the ONS configuration has been updated on the other nodes
- Check that the database and instances are terminated on the node to remove
- Check that the node has been removed from the database and ASM repository
- Check that the software has been removed from the database and ASM homes on the node to remove

$ srvctl status database -d RAC
Instance RAC1 is running on node orarac1
Instance RAC2 is running on node orarac2

$srvctl status instance -d RAC -i RAC1
Instance RAC1 is running on node orarac1

$srvctl status asm -n orarac1
ASM instance +ASM1 is running on node orarac1

$srvctl status nodeapps -n orarac1
VIP is running on node: orarac1
GSD is running on node: orarac1
ONS daemon is running on node: orarac1
PRKO-2016 : Error in checking condition of listener on node: orarac1

PRKO-2016 refers to Oracle Bug No. 4625482,
"CHECKING/STARTING/STOPPING NAMED LISTENER WITH SRVCTL NODEAPPS FAILS /W
PRKO-2016". It's a known issue on Oracle 10.2.0.1. Ignore it if the listener works fine.

Stop/Start Oracle RAC
1. Stop Oracle 10g on one of RAC nodes.
$ export ORACLE_SID=RAC1
$ srvctl stop instance -d RAC -i RAC1
$ srvctl stop asm -n orarac1
$ srvctl stop nodeapps -n orarac1

2. Start Oracle 10g on one of RAC nodes.
$ export ORACLE_SID=RAC1
$ srvctl start nodeapps -n orarac1
$ srvctl start asm -n orarac1
$ srvctl start instance -d RAC -i RAC1

3. Stop/start Oracle 10g on all RAC nodes.
$ srvctl stop database -d RAC
$ srvctl start database -d RAC

Check Oracle Listener on nodes
$ srvctl config listener -n orarac1
orarac1 LISTENER_ORARAC1
$ lsnrctl start LISTENER_ORARAC1


Check, backup, restore OCR
$ocrconfig -showbackup

$ocrconfig -export /data/backup/rac/ocrdisk.bak

!To restore the OCR, Clusterware must be stopped on all nodes first.
$ocrconfig -import /data/backup/rac/ocrdisk.bak

$cluvfy comp ocr -n all //verification

$ocrcheck //check OCR disk usage


Check configuration of ORACLE RAC
$srvctl config database
RAC

$srvctl config database -d RAC
orarac1 RAC1 /space/oracle/product/10.2.0/db_2
orarac2 RAC2 /space/oracle/product/10.2.0/db_2

$srvctl config asm -n orarac2
+ASM2 /space/oracle/product/10.2.0/db_2

$srvctl config nodeapps -n orarac1 -a -g -s -l
VIP exists.: /orarac1-vip.banctecmtl.lan/10.21.51.13/255.255.224.0/Public
GSD exists.
ONS daemon exists.
Listener exists.

SQL> column HOST_NAME format a10;
SQL> select INSTANCE_NAME, HOST_NAME,STARTUP_TIME,STATUS,PARALLEL,DATABASE_STATUS,ACTIVE_STATE from gv$instance;

INSTANCE_NAME HOST_NAME STARTUP_T STATUS PAR DATABASE_STATUS ACTIVE_ST
---------------- ---------- --------- ------------ --- ----------------- ---------
rac2 ORARAC2 16-SEP-08 OPEN YES ACTIVE NORMAL
rac1 ORARAC1 19-SEP-08 OPEN YES ACTIVE NORMAL

