R M A N
Active Database Duplication or Network-Aware database duplication
No backup is needed for duplication. Online redo logs are not copied or applied; recovery goes only up to the most recent archived log. A password file with the same password is required on both source and destination.
The Flash Recovery Area is not copied. Archived logs are copied if necessary.
The source may be mounted or open (archivelog mode). The password file can be copied automatically if you specify the PASSWORD FILE clause.
duplicate database to dupdb from active database db_file_name_convert '/u01', '/v01' SPFILE parameter_value_convert '/u01', '/v01'
set log_file_name_convert '/u01', '/v01' SET sga_max_size '3000m' set sga_target '2000m'
SET=> SET values are processed later (they override PARAMETER_VALUE_CONVERT); db_file_name_convert is not specified with SET
SPFILE => the SPFILE must be in use in the source instance; it is copied to the destination and used to start the auxiliary instance. The auxiliary instance must first be
started using a text init.ora containing only the DB_NAME parameter.
db_file_name_convert=> replace pattern for datafiles and tempfiles
NOFILENAMECHECK=> databases are on different hosts, no need to check for different filenames
PASSWORD FILE=> if specified will copy source password file to destination
1)-Create an init.ora for auxiliary instance
2)-Create a password file for the auxiliary instance with the same SYSDBA password as the source(target)
3)-startup auxiliary instance using startup nomount
4)-Connect RMAN to target and to auxiliary instance, and issue the active database duplication command
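A minimal session sketch of the four steps above (connection strings and service names are made up):
rman target sys/pwd@srcdb auxiliary sys/pwd@dupdb
RMAN> duplicate target database to dupdb from active database
      spfile parameter_value_convert '/u01', '/v01'
      db_file_name_convert '/u01', '/v01'
      nofilenamecheck;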
Parallel Backup or Multisection backup
Parallelizes the backup of a single datafile into up to 256 sections (section size customizable per datafile); multiple channels recommended.
All sections have the same size, except possibly the last. SECTION SIZE is mandatory and a value must be specified. Can't use MAXPIECESIZE with SECTION SIZE.
SECTION SIZE may be specified using VALIDATE DATAFILE command. Each section is a backup piece in the backup set.
backup SECTION SIZE 200m tablespace pippo;
* Views *
V$BACKUP_DATAFILE and RC_BACKUP_DATAFILE (piece, section_size, multi_section[yes, no])
Archival(Long-Term) Backup - KEEP option
The control file, spfile, datafiles and archived redo logs are backed up.
The LOGS/NOLOGS clauses can't be specified with KEEP. A self-contained archival backup is immune from any retention policy. Redo logs are switched because the current log is needed.
Store archival backups outside the flash recovery area, which would otherwise fill up quickly. A recovery catalog is required if using KEEP FOREVER.
CHANGE not allowed for backup in flash recovery area.
An archived log is never made obsolete if needed for a guaranteed restore point, may be obsolete if needed for a normal restore point.
backup database format 'c:\arch\db_%U.bkp' tag quarterly KEEP until time 'sysdate + 365' RESTORE POINT firstq08;
#KEEP=>[FOREVER, UNTIL TIME], recovery catalog required if using KEEP FOREVER.
#RESTORE POINT=>the SCN is captured when the backup finishes; it is the time until which RMAN can restore the archival backup
CHANGE backup tag 'pippo_bck' keep forever;
change copy of database controlfile keep;
Fast Incremental Backup
If the Flashback Database feature is enabled, flashback logs are searched for good block copies before backup files. Only the corrupted blocks of a datafile are restored and recovered.
Archivelog mode is always required. Db mounted or open. No proxy copies allowed. Block change tracking may be enabled on a physical standby db.
10g BLOCKRECOVER (no longer available) -> 11g RECOVER ... BLOCK
recover datafile 2 block 24 datafile 10 block 21 from tag=sundaynight;
RMAN>recover corruption list;
#recovers all corrupted blocks listed in V$DATABASE_BLOCK_CORRUPTION; the view data is then deleted
* Identified Corrupt Blocks *
analyze table, analyze index, list failure, validate(populates V$DATABASE_BLOCK_CORRUPTION), backup...validate, export to /dev/null, dbverify
* Views *
V$DATABASE_BLOCK_CORRUPTION list block corrupted, populated by VALIDATE DATABASE
RMAN Compression
BZIP2 (default, better compression ratio) and ZLIB (~45% faster, requires the Advanced Compression option)
RMAN>configure compression algorithm 'bzip2';
Archived Redo Log Deletion
configure archivelog deletion policy to NONE (def.) #applies to all archived logs stored on disk, including the flash recovery area. Archived logs are marked
for deletion after being transferred to all required log_archive_dest_n destinations and backed up (or made obsolete).
configure archivelog deletion policy to backed up 2 times to sbt
RMAN Catalog merging - Export/Import RMAN catalog
the source catalog will be deleted and deregistered (unless you use the NO UNREGISTER clause); the source catalog schema version must match the RMAN client version;
fails if the same db is already registered in the destination catalog
import catalog rman/rman@reporman dbid=122, 123;   or   import catalog rman/rman@reporman db_name=pippo, pluto;
#you are connected to destination, rman@reporman will be imported
Virtual private catalog or Virtual catalog
1) SYSDBA> create the db user
2) SYSDBA> grant recovery_catalog_owner to user
3) real CATALOG OWNER> grant [REGISTER_DATABASE | REGISTER] to user;
#permits to register unknown databases, does not grant recovery catalog access
and / or
real CATALOG OWNER>grant catalog for database t1 to user;
#grants access to an existing target database to the user even if the db is not yet registered; use the DBID if the db is not yet registered.
#The user may register/unregister those databases. The user creates local scripts and reads global scripts
4) VIRTUAL USER - RMAN> create virtual catalog;
#use for 11g or later database
or
VIRTUAL USER - SQL> exec base_catalog_owner.DBMS_RCVCAT.create_virtual_catalog;
#use for 10g database
5) VIRTUAL USER - RMAN> register database;
RMAN>revoke all privileges from user;
* Packages *
DBMS_RCVCAT.create_virtual_catalog
#must execute before working with any pre-11.1 database. Does not create a virtual private catalog
Data Recovery Advisor
<- RMAN, EM Support Workbench & Database Control, Health Monitor (following errors), VALIDATE or BACKUP command
Quickly detects and analyzes data failures; repairs are manual (mandatory or optional) or automatic, with or without data loss.
Automatically repairs some network I/O errors, control file and datafile inconsistencies, accidental renaming of datafiles. No RAC. No standby database.
No ORA-600; the failure must be recorded by the Health Monitor. Failure attributes: status(open, closed), priority(critical, high, low), grouping
1) list failure
#failures are listed by priority [1:CRITICAL, 2:HIGH, 3:LOW], failures with the same priority are ordered chronologically
2) advise failure
#generates manual and automated repair options, RMAN scripts are generated for the automated options; shows a summary and closes fixed failures
3) repair failure
#by default high and critical failures are repaired and closed. Generates a warning if new failures have been detected since the last ADVISE FAILURE.
#May not automatically close a fixed failure
* Views *
V$IR_FAILURE or list failure; change failure 7 closed; change failure priority low;
V$IR_MANUAL_CHECKLIST or advise failure;
V$IR_REPAIR or repair failure; #requires a prior advise failure
validate database; #proactive check; checks only for intrablock (not interblock) corruption in all datafiles, control files and the server parameter file
Health Monitor
Can also run Data Recovery Advisor checks
View Health Monitor checker reports using: EM, ADRCI, DBMS_HM, V$HM_RUN
Health check mode can be:
[Reactive]: health checks run automatically when a critical error occurs
[Manual]
Available checks:
[Data Block Integrity]: checks disk image block corruptions, checksum failures and intrablock corruptions. Does not check for interblock corruption
[Db Structure Integrity]: verify accessibility, consistency and corruption of all database files
[Undo Segment Integrity]: check for logical Undo corruptions and attempt to repair them
[Redo Integrity check], [Dictionary Integrity check]
* Packages *
DBMS_HM.run_check(check_name, run_name, input_params)
DBMS_HM.get_run_report(run_name (M), type (O), level (O))
* Views *
V$HM_CHECK(all possible checks), V$HM_RUN(view HM checker reports), V$HM_CHECK_PARAM, V$HM_FINDING, V$HM_RECOMMENDATION
show hm_run;
#ADRCI command, shows all Health Monitor runs
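A quick sketch of running a single check and reading its report with DBMS_HM (the run name is made up; 'Dictionary Integrity Check' is one of the checks listed in V$HM_CHECK):
SQL> exec DBMS_HM.run_check('Dictionary Integrity Check', 'my_dict_run');
SQL> set long 100000
SQL> select DBMS_HM.get_run_report('my_dict_run') from dual;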
Partitioning
Interval partitioning
The high value of the range partitions is called the Transition Point
Extends range partitioning by automatically creating new partitions when inserted data falls outside existing bounds. Only a single partitioning column of type DATE or NUMBER.
Does not work on index-organized tables. You can't create domain indexes.
create table pippo(time_id date) partition by range(time_id)
interval(numtoyminterval(1, 'month'))
(partition p0 values less than (to_date('3/12/1976', 'dd/mm/yyyy')),
 partition p1 values less than (to_date('3/01/1977', 'dd/mm/yyyy')));
alter table pippo merge partitions for (to_date('3/12/1976', 'dd/mm/yyyy')), for (to_date('3/01/1977', 'dd/mm/yyyy'));
#the partitions to merge must be adjacent
* Views *
DBA_PART_TABLES(partitioning_type[range | interval]), DBA_TAB_PARTITIONS(interval[yes if the partition was automatically created])
System partitioning
The application must explicitly name the partition in the insert/merge statements, not required for update/delete.
Can't create with CTAS. Hurts performance (no pruning). No partitioning key. Partitions can have different physical attributes. Can't create unique local indexes. Can't split partitions
create table pippo(c1 number) partition by system (partition p1, partition p2)
insert into pippo partition(p1) values(1)
alter table pippo merge partitions p1, p2 into partition p1
Virtual Column-Based partitioning
A virtual column is computed on the fly. The datatype is optional. Can collect optimizer statistics. You can index a virtual column.
Can use a virtual column in all DML/DDL. Can partition on a virtual column. Can create virtual columns only on normal heap tables.
Can't reference another virtual column. The result must always be scalar: no lob, long, raw, ... Can't use a PL/SQL function in the expression if the column is used for partitioning
create table pippo (c1 number, c2 number generated always as (c1*2))
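A sketch of partitioning on a virtual column (table and column names are made up):
create table sales
( amount number,
  tax    number generated always as (amount * 0.2) virtual
)
partition by range (tax)
( partition p_low  values less than (100),
  partition p_high values less than (maxvalue)
);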
Reference partitioning
Master table is a partitioned table. Can't use interval partitioning. Requires foreign key relationship.
If no tablespace specified then partitions are in same tbs of parent table's partition. Can't specify partitions bounds.
Partition names must be unique between master and reference table. Can't disable the foreign key.
Can't add or drop partitions on the child table, while it is possible on the parent table.
Pruning and partitionwise joins works even if query predicates are different from the partitioning key
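A minimal reference-partitioning sketch (made-up parent/child tables; note the NOT NULL, non-deferrable foreign key):
create table orders
( order_id number primary key,
  order_dt date not null
)
partition by range (order_dt)
( partition p2008 values less than (to_date('01/01/2009', 'dd/mm/yyyy')) );

create table order_items
( item_id  number,
  order_id number not null,
  constraint fk_items_orders foreign key (order_id) references orders
)
partition by reference (fk_items_orders);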
Composite Partitioning
Top Level partitioning: Range, List, Hash
Range-list, Range-hash, List-list, list-hash, list-range, range-range, interval-range, interval-list, interval-hash
Directory structure
$ORACLE_BASE (recommended only)
$ORACLE_HOME (do not explicitly set)
dbs
oranfstab first default location
Flash recovery area (different disk than datafiles)
Datafiles (different disk than Flash Recovery Area)
Inventory
/diag Advanced Diagnostic Repository (ADR base=diagnostic_dest multiple ADR homes allowed but only one ADR base)
Db name
Instance name (use v$DIAG_INFO to view these paths) ADR_HOME (multiple ADR homes allowed but only one ADR base)
alert (xml alert format)
hm (checker reports)
incident
trace(V$DIAG_INFO.name='Diag Trace')
text alert.log
background process trace files
foreground process trace files
wallet
ewallet.p12
cdump
bdump
udump
...
init.ora new params
# A S M
asm_preferred_read_failure_groups=data.locationA, data.locationB #Dynamic. disk_group_name.failure_group only disks local to the instance!!
asm_diskgroups
asm_diskstring
# M E M O R Y
sga_target=0 #Dynamic. Set to 0 to enable Automatic Memory Management
pga_aggregate_target=0 #Dynamic. Set to 0 to enable Automatic Memory Management
memory_target=900M #Dynamic, defaults to 0 (also when upgrading). Total memory for SGA and PGA. With sga_target=0 and pga_aggregate_target=0 it defaults 60% to SGA and 40% to PGA
#With sga_target!=0 and pga_aggregate_target!=0 they indicate minimum values
memory_max_target=2000M; #if unset defaults to memory_target(if greater than 0) or 0(if memory_target is not set)
# R E S U L T C A C H E
result_cache_max_size #Static. Sets the upper limit; 0 disables the result cache. Default is 0.25% of memory_target, 0.5% of sga_target or 1% of shared_pool_size, never more than 75% of the shared pool
# P L S Q L
plsql_optimize_level=2(def.), 3 enables automatic inlining
plsql_code_type=[INTERPRETED(def.) | NATIVE]
# S E C U R I T Y
sec_case_sensitive_logon=TRUE(def.)
sec_max_failed_login_attempts=10(def.)
sec_protocol_error_further_action=[CONTINUE(def.) | DELAY,n | DROP,n]
sec_protocol_error_trace_action=[NONE | TRACE(def.) | LOG | ALERT]
audit_trail=DB #new default when the db is created with DBCA
ldap_directory_sysauth=[YES | NO] #YES: SYSDBA/SYSOPER may authenticate through OID
# O P T I M I Z E R
optimizer_capture_sql_plan_baselines=FALSE(def.)
optimizer_use_sql_plan_baselines=TRUE(def.)
optimizer_use_pending_statistics=FALSE(def.)
optimizer_use_invisible_indexes=FALSE(def.)
# O T H E R
control_management_pack_access=[NONE | DIAGNOSTIC | DIAGNOSTIC+TUNING(def.)]
java_jit_enabled=true
diagnostic_dest=$ORACLE_BASE(def.) #ADR base
ddl_lock_timeout=0(def.)
db_securefile=[ALWAYS | FORCE | PERMITTED(def.) | NEVER | IGNORE]
db_ultra_safe=[OFF(def.) | DATA_ONLY | DATA_AND_INDEX] #raises db_block_checking, db_block_checksum, db_lost_write_protect
statistics_level=[BASIC | TYPICAL(def.) | ALL]
filesystemio_options=[ASYNCH | SETALL | ...]
timed_statistics=true
encryption_wallet_location=(source=(method=file)(method_data=(directory=/yourdir))) #sqlnet.ora parameter
Support Workbench
<- EM(Support Workbench -> Incident Package Configuration)(Collects diagnostic data, generates incident reports and transmits them), ADRCI
Critical Error = Problem (e.g. ORA-04031), composed of one or more Incidents (single occurrences of ORA-04031)
Incident(Collecting, Ready, Tracking, Closed, Data Purged)
Retention: 1 year for incident metadata (may be changed with EM), 1 month for dumps. An incident is closed after twice the retention elapses and is purged 30 days after being opened
1) Generate logical package, 2) finalize, 3) generate physical package
Incident thresholds cannot be modified.
No edit and customizing when Quick Packaging
Oracle Support Workbench may be enabled to automatically upload files to Oracle Support by providing the information during the installation of
Oracle Configuration Manager
Flood-controlled incidents
are incidents reported multiple times. Alert log is updated and incident recorded in ADR but no dumps are generated.
SQL Performance Analyzer
1)First you capture SQL workload in a STS(SQL Tuning Set)
2)Move SQL workload to test environment
3)Execute the workload to capture baseline execution data
4)you make your changes
5)Execute the workload to capture execution data
6)Analyze differences in SQL performances
7)Tune regressed SQL statements
Identifies plans that might have regressed after a database upgrade, hardware reconfiguration, schema changes or any other change that may affect SQL.
New SQL workload is not identified; the captured SQL statements are executed (test-execute) to collect execution statistics.
Cannot use SQL Performance Analyzer to identify SQL statements that are part of a SQL plan baseline
You can set a time limit within which all SQL statement in the workload must execute
DML and DDL are NOT taken in consideration
* Parameters *
TIME_LIMIT: time limit for the entire SQL workload
LOCAL_TIME_LIMIT: time limit for a single SQL statement
EXECUTION_TYPE[EXPLAIN PLAN | TEST EXECUTE(def.)]
#TEST EXECUTE: Each SQL Statement in the workload is executed one time and execution plans and statistics are collected.
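A sketch of the SPA flow with DBMS_SQLPA (STS, task and execution names are made up):
variable tname varchar2(64)
exec :tname := DBMS_SQLPA.create_analysis_task(sqlset_name => 'my_sts', task_name => 'spa_task1');
exec DBMS_SQLPA.execute_analysis_task('spa_task1', 'TEST EXECUTE', 'before_change');
-- make the change being tested (upgrade, parameter, schema, ...), then:
exec DBMS_SQLPA.execute_analysis_task('spa_task1', 'TEST EXECUTE', 'after_change');
exec DBMS_SQLPA.execute_analysis_task('spa_task1', 'COMPARE PERFORMANCE', 'compare');
select DBMS_SQLPA.report_analysis_task('spa_task1', 'TEXT') from dual;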
SQL Repair Advisor
<- Support Workbench(Diagnostic Summary->Active Incidents link)
(often provides patch to fix)
The SQL plan will change after patching. SQL Repair Advisor is not automatically launched after a SQL statement crash
* Export patches * DBMS_SQLDIAG.pack_stgtab_sqlpatch(staging_table), export, unpack_stgtab_sqlpatch(staging_table)
* Packages *
DBMS_SQLDIAG.create_diagnosis_task(sql_text, task_name)
DBMS_SQLDIAG.set_tuning_task_parameter(task_id, filter)
DBMS_SQLDIAG.execute_diagnosis_task(task_name)
DBMS_SQLDIAG.report_diagnosis_task(task_name, out_type)
DBMS_SQLDIAG.accept_sql_patch(task_name, task_owner)
* Views *
DBA_SQL_PATCHES(recommended patches), drop_sql_patch(procedure)
SQL Test Case Builder
<- EM Support Workbench
This information is gathered for a problematic SQL statement: the SQL text, involved table definitions, optimizer statistics. No dump files
* Packages *
DBMS_SQLDIAG.export_sql_testcase_dir_by_inc
Automatic Workload Repository
<- EM(Adaptive Metric Threshold link -> Baseline Metric Thresholds)
Requires AWR to have at least the same retention time.
A baseline is any set of snapshots taken over a period of time.
Static baseline
Repeating static baseline
#Will be automatically created
Moving window baseline
#Will not be automatically created. SYSTEM_MOVING_WINDOW is system provided
Retention raised from 7 days (10g) to 8 days (11g) for the default baseline SYSTEM_MOVING_WINDOW, which supports Adaptive Thresholds
Thresholds types:
Significance Level
#if set to 0.99, an alert is raised when the observed value is statistically unusual (expected in only 1% of cases). Based on statistical relevance. Only basic metrics support significance-level thresholds
Percentage of maximum
#calculated using the highest value captured by the baseline
Fixed values
When a baseline ends in the future a new baseline with the same name is created.
* Package *
DBMS_WORKLOAD_REPOSITORY.modify_baseline_window_size(window_size)
#changes the default retention of the moving window baseline; if you increase it, increase the AWR retention to match
DBMS_WORKLOAD_REPOSITORY.create_baseline
#should be used for past time periods
DBMS_WORKLOAD_REPOSITORY.create_baseline_template
single: (start_time, end_time, baseline_name, template_name, expiration(O)=>null, dbid(O))
#EXPIRATION: if not specified or null, the baseline created from the template will never expire
repeating: (day_of_week, hour_in_day, duration, expiration(O)=>null, start_time, end_time, baseline_name_prefix, template_name, dbid(O))
#should be used when part or all of the time period is in the future
DBMS_WORKLOAD_REPOSITORY.rename_baseline(old, new, dbid(O))
DBMS_WORKLOAD_REPOSITORY.drop_baseline_template(template_name, dbid(O))
DBMS_WORKLOAD_REPOSITORY.select_baseline_metrics(baseline_name) #displays metric thresholds
DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(retention, interval)
#resizes the AWR retention period
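A sketch of creating a static baseline from two existing snapshots (snapshot ids and name are made up):
begin
  DBMS_WORKLOAD_REPOSITORY.create_baseline(
    start_snap_id => 100,
    end_snap_id   => 110,
    baseline_name => 'peak_bl',
    expiration    => 30);   -- days; null = never expires
end;
/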
* View *
DBA_HIST_BASELINE_TEMPLATE (template_type[single | repeated], repeat_interval[timing string as for DBMS_SCHEDULER])
DBA_HIST_BASELINE_DETAILS (shutdown[yes | no | null], pct_total_time[total snapshot time / total possible time], error_count[errors during snapshot])
DBA_HIST_BASELINE (baseline_type[static | moving_window | generated], creation_type, expiration[how long to keep the baseline], template_name, last_computed,
moving_window_size[if null then the AWR retention period, else the number of days of the moving window])
Controlling Automated Maintenance Tasks
New ABP(Autotask Background Process)
- converts automatic tasks into Scheduler jobs
- determine the jobs that need to be created for each maintenance task
- stores task execution history in the SYSAUX tablespace
- does NOT execute maintenance tasks
A task may be enabled/disabled, and its resource percentage changed, in one or all maintenance windows. The duration of maintenance windows may be changed.
Database Resource Manager is automatically enabled during maintenance window
By default AutoTask schedules: optimizer statistics collection, SQL Tuning Advisor and Automated Segment Advisor.
* MAINTENANCE_WINDOW_GROUP *
DEFAULT_MAINTENANCE_PLAN (the resource plan active during maintenance windows)
10g: weeknight_window, weekend_window
11g: [10PM-02AM, 4h]: monday_window, tuesday_window, wednesday_window, thursday_window, friday_window. [06AM-02AM, 20h]saturday_window, sunday_window
* Managing Automatic Maintenance Tasks *
<- EM
10g: DBMS_SCHEDULER, could only enable or disable automatic maintenance tasks
11g: DBMS_AUTO_TASK_ADMIN, more fine grained control
DBMS_AUTO_TASK_ADMIN.enable(client_name, operation, window_name)
DBMS_AUTO_TASK_ADMIN.disable(client_name, operation, window_name)
* Views *
DBA_AUTOTASK_OPERATION #lists only automatic task (client) operation names
DBA_AUTOTASK_CLIENT
DBA_AUTOTASK_TASK
#views on the ABP(Autotask Background Process) repository, which stores execution history for automated maintenance tasks
I/O Calibration
<- EM(Database Control -> Performance -> I/O Calibration)
Requires 10 minutes to run, sysdba, filesystemio_options=[ASYNCH | SETALL], timed_statistics=true
recommended during low activity, no concurrent calibrations allowed
DBMS_RESOURCE_MANAGER.calibrate_io(num_physical_disks=>1, max_latency=>10, max_iops, max_mbps, actual_latency) #the last three are OUT parameters
V$IO_CALIBRATION_STATUS, DBA_RSRC_IO_CALIBRATE, V$IOSTAT_FILE (all show calibration results)
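A sketch of a calibration run (the number of disks and the latency are assumptions for the example):
declare
  l_max_iops pls_integer;
  l_max_mbps pls_integer;
  l_latency  pls_integer;
begin
  DBMS_RESOURCE_MANAGER.calibrate_io(
    num_physical_disks => 4,
    max_latency        => 10,
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  dbms_output.put_line(l_max_iops || ' iops, ' || l_max_mbps || ' MB/s, ' || l_latency || ' ms');
end;
/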
Database Resource Manager
Automatic actions based on current session single call usage: switch of Resource Consumer Group(switch grant needed), session kill, statement kill
Database Resource Manager is automatically enabled during maintenance window
CREATE_RESOURCE_PLAN(plan, comment)
CREATE_PLAN_DIRECTIVE(plan, group_or_subplan, mgmt_p1, switch_group[null | CANCEL_SQL | KILL_SESSION | a group], switch_time[UNLIMITED],
switch_estimate[false | true], switch_io_megabytes[null | mb], switch_io_reqs[null | io#], switch_for_call[null | true])
#the switch_* directives implement the automatic single-call actions above (new in 11g: switch_io_megabytes, switch_io_reqs, switch_for_call)
MIXED_WORKLOAD_PLAN predefined resource plan
Level 1: 100% sys_group
Level 2: [85% interactive_group], [15% ORA$AUTOTASK_SUB_PLAN, ORA$DIAGNOSTIC, OTHER_GROUPS]
Level 3: batch_group
Oracle Scheduler
Minimum amount of metadata required for creation; recommended when creating a large number of jobs
Create Lightweight jobs using: job array, named programs, schedules, DBMS_SCHEDULER.create_job
Lightweight jobs are created only from job templates (a stored procedure or a Scheduler program); privileges are inherited only from the parent job;
may discriminate between Data Guard nodes
* Package *
DBMS_SCHEDULER.create_program(program_name, program_action, program_type[plsql_block | stored_procedure], enabled)
DBMS_SCHEDULER.create_job(job_name, program_name, repeat_interval(opt.), end_time(opt.), schedule_name(opt.), job_style[LIGHTWEIGHT], comments)
* Create a Job Array and submit using single transaction *
declare
  testjob sys.job;
  testjobarr sys.job_array;
begin
  testjobarr := sys.job_array();
  testjobarr.extend(500);
  for i in 1..500 loop
    testjob := sys.job(...);   -- job_name, job_style=>'LIGHTWEIGHT', job_template, etc.
    testjobarr(i) := testjob;
  end loop;
  dbms_scheduler.create_jobs(testjobarr, 'transactional');
end;
* Remote External Jobs *
On the source db: XML DB must be installed (default), run prvtrsch.plb as SYS, DBMS_SCHEDULER.set_agent_registration_pass(registration_password, expiration_date(O), max_uses(O))
On the destination host: install the Scheduler agent (not installed by default), configure schagent.conf, schagent.exe -registerdatabase hostdb 1521, schagent.exe -start
* Enable a remote job *
DBMS_SCHEDULER.create_credential(cred_name, username, pwd)
DBMS_SCHEDULER.create_job(...)
DBMS_SCHEDULER.set_attribute(job_name, 'credential_name', cred_name)
DBMS_SCHEDULER.set_attribute(job_name, 'destination', host:port), .enable(job_name)
Security Enhancements
sec* parameters
Passwords are case sensitive (default; after an upgrade the passwords stay case-insensitive until reset); the delay after wrong passwords grows up to 10 seconds
orapwd file=... entries=... ignorecase=n(def.)
alter profile ... password_verify_function verify_function_11g
utlpwdmg.sql enforces password verification
OS authentication is checked before the password file
Authentication using OID can be configured by granting SYSOPER/SYSDBA enterprise role in OID
Administrators can be authenticated using the local OS if they are in the OSDBA or OSOPER group
* ACL Fine Grained Access to network system supplied packages*
An ACL, used to restrict access to certain hosts, is stored in XML format. The connect privilege comprises (implies) resolve.
An ACL (Access Control List) is a list of users and privileges; only one ACL can be assigned per host
DBMS_NETWORK_ACL_ADMIN.create_acl(acl[name of XML DB priv file], description, principal, is_grant, privilege[connect | resolve])
#one privilege must be specified on creation
DBMS_NETWORK_ACL_ADMIN.add_privilege(acl[name of XML DB priv file], principal, is_grant, privilege[connect | resolve])
DBMS_NETWORK_ACL_ADMIN.assign_acl(acl[name of XML DB priv file], host[ex. *.fadalti.com], lower_port, upper_port)
DBMS_NETWORK_ACL_ADMIN.check_privilege(acl[name of XML DB priv file], username, privilege[connect | resolve])
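A sketch creating an ACL, granting connect to a user and assigning it to a host (file name, user and port range are made up):
begin
  DBMS_NETWORK_ACL_ADMIN.create_acl(
    acl         => 'web_hosts.xml',
    description => 'connect to web hosts',
    principal   => 'SCOTT',
    is_grant    => true,
    privilege   => 'connect');
  DBMS_NETWORK_ACL_ADMIN.assign_acl(
    acl        => 'web_hosts.xml',
    host       => '*.fadalti.com',
    lower_port => 80,
    upper_port => 443);
  commit;
end;
/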
Oracle Wallet
<- Oracle Wallet Manager(owm.exe), see sqlnet.ora for the location. Encryption Wallet(manually opened after db startup) and Auto-Open Wallet
A wallet is required, configured and open to encrypt tablespaces, [3DES168 | AES128(def.) | AES192 | AES256], I/O overhead
Attempting to create a table in an encrypted tablespace with the wallet closed raises an error. You cannot change the key of an encrypted tablespace.
A normal tablespace cannot be converted to an encrypted tablespace.
Creation method 1: mkstore -wrl $ORACLE_BASE/admin/$ORACLE_SID/wallet -create
Creation method 2: alter system set encryption key identified by "password"
create tablespace pippo datafile 'c:\pippo.dbf' size 100m encryption using 'AES128' default storage(encrypt)
Automatic SQL Tuning Advisor
<- EM
statistics_level=TYPICAL; BASIC disables the AWR, which provides the SQL sources. AWR retention of less than 7 days disables ASTA.
STA may evolve SQL plans from non-accepted to accepted
10g Automatic Tuning Optimizer(never tuned automatically) ->upgraded-> 11g Automatic SQL Tuning Advisor
task name SYS_AUTO_SQL_TUNING_TASK, runs for at most 1 hour, runs only once during the nightly maintenance window as part of MAINTENANCE_WINDOW_GROUP
Statistical analysis, SQL profiling, Access path analysis(add indexes), SQL structure analysis(Modifying SQL statements)
STA tests and automatically implements only SQL profile creations with at least a 66% benefit,
chooses statements from the AWR Top SQL with poor performance in the past week, or you run it manually providing SQL Tuning Sets.
If run in tuning mode the optimizer might gather auxiliary statistics and store them in SQL profiles
* Limitations * No recursive SQL, no ad hoc SQL, no parallel SQL, no long-running profiled SQL, no insert/delete, no DDL
* Report *
Shows whether statistics need to be collected, whether a profile was recommended, index recommendations.
Show a benefit percentage of implementing recommendation for SQL profile and indexes
* Package *
DBMS_SQLTUNE.set_tuning_task_parameter(... see params below ...)
#for automatic tuning only, no manual
task_name=>'SYS_AUTO_SQL_TUNING_TASK'
parameter=>accept_sql_profile: When TRUE auto accept SQL profiles. Profiles are always generated
parameter=>replace_user_sql_profiles: allowed to replace user created SQL profiles
parameter=>max_sql_profiles_per_exec: maximum number of allowed SQL profiles changes accepted per tuning task
parameter=>max_auto_sql_profiles: maximum total number of allowed SQL profiles changes by db
parameter=>execution_days_to_expire: default 30, task history expiration
parameter=>time_limit: to change default duration of 1 hour in seconds
parameter=>test_execute: only use plan costs
DBMS_SQLTUNE.report_auto_tuning_task(type[text], level[typical], section[all]) return varchar2
DBMS_AUTO_TASK_ADMIN.enable(client_name=>'sql tuning advisor', operation=>'NULL', window_name=>['NULL'(all) | (a name)])
DBMS_AUTO_TASK_ADMIN.disable
#disable the Automatic Sql Tuning process
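A sketch of changing one of the task parameters listed above (auto-accept SQL profiles):
begin
  DBMS_SQLTUNE.set_tuning_task_parameter(
    task_name => 'SYS_AUTO_SQL_TUNING_TASK',
    parameter => 'ACCEPT_SQL_PROFILES',
    value     => 'TRUE');
end;
/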
* View *
DBA_SQL_PROFILES[type], if type=AUTO then auto implemented SQL profile
DBA_ADVISOR_EXECUTIONS shows metadata information
DBA_ADVISOR_SQLSTATS shows a list of all SQL compilation and execution statistics
DBA_ADVISOR_SQLPLANS a list of all SQL execution plans
SQL Plan Management
Preserve SQL performance across major system changes
A SQL Plan may be:
non accepted #may be evolved to accepted
accepted #required for use, all manually loaded
enabled #required for use
fixed #A fixed plan has precedence over a non-fixed plan even with a higher cost; a baseline is FIXED if at least one enabled plan in it is FIXED=yes,
#the optimizer will not add new plans to a fixed baseline. If you accept a profile recommended by STA, that plan will not be FIXED
SQL Tuning Advisor may evolve plans from non-accepted to accepted.
* Plan history * Contains both accepted and non-accepted plans for a repeatable SQL statement; the best plan may not yet be accepted into the SQL plan baseline
* SQL Plan Baseline * Contains plan history but only accepted
Automatic plan capture: see optimizer_capture_sql_plan_baselines parameter
Manual plan capture: performances are not verified
* Package *
DBMS_SQLTUNE.create_sqlset(sqlset_name, description) creates an empty SQL set
select value(p) from table(DBMS_SQLTUNE.select_workload_repository('peak baseline', null, null, 'elapsed time', null, null, null, 20)) p
DBMS_SQLTUNE.load_sqlset(sqlset_name, populate_cursor)
DBMS_SPM.load_plans_from_sqlset(sqlset_name, fixed=>no, enabled=>yes)
#plans are loaded into the SQL plan baseline as non-fixed, accepted and enabled plans, and may be used immediately
DBMS_SPM.load_plans_from_cursor_cache(sql_id, plan_hash_value[null ], sql_text, fixed[no | yes ], enabled[yes],
attribute_name[sql_text | parsing_schema_name | module | action], attribute_value) return integer
DBMS_SPM.evolve_sql_plan_baseline(sql_handle, plan_list, plan_name, time_limit[DBMS_SPM.auto_limit | DBMS_SPM.no_limit],
verify[YES], commit[YES]) return CLOB
#shows all non-accepted plans whose status changed to accepted.
#VERIFY=YES: the database executes the unaccepted plans and compares their performance against the SQL plan baseline
#VERIFY=NO: all unaccepted plans become accepted
DBMS_SPM.alter_sql_plan_baseline(sql_handle, plan_name, attribute_name[accepted_status | enabled], attribute_value=>'yes')
DBMS_XPLAN.display_sql_plan_baseline
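A sketch of loading the plans of one cursor from the cursor cache into the SQL plan baseline (the sql_id is made up):
declare
  n pls_integer;
begin
  n := DBMS_SPM.load_plans_from_cursor_cache(sql_id => '1b2c3d4e5f6a7');
  dbms_output.put_line(n || ' plans loaded');
end;
/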
* Views *
DBA_SQL_PLAN_BASELINES(sql_handle, sql_text, plan_name, origin[manual-load | manual-sqltune | auto-capture | auto-sqltune], enabled, accepted, fixed, autopurge)
SMB SQL Management Base
By default uses Automatic Segment Space Management ASSM
contains plan history information, the statement log, SQL profiles and plan baselines. Is part of data dictionary and stored in sysaux tbs.
sysaux tbs must be online to use SQL Plan Management features that access the SMB
DBMS_SPM.configure([SPACE_BUDGET_PERCENT | PLAN_RETENTION_WEEKS], value)
#SPACE_BUDGET_PERCENT: if the size of the SMB exceeds 10% (default) of the SYSAUX tablespace, a weekly alert is generated
#PLAN_RETENTION_WEEKS: plans not used for more than 1 year (53 weeks, default) are purged
* Views *
DBA_SQL_MANAGEMENT_CONFIG(SPACE_BUDGET_PERCENT, PLAN_RETENTION_WEEKS)
SQL Access Advisor
May be interrupted and resumed. By default is not scheduled as a maintenance task by the AutoTask.
Partitioning Recommendations for tables, indexes and materialized views, default mode is interval.
Partitioning recommendations require at least 10,000 rows in the table, a predicate or join on a DATE or NUMBER column, sufficient space, no bitmap indexes,
and a SQL Access Advisor run that was not interrupted.
Advice on tuning Materialized Views, Materialized View Logs, indexes(B-tree, bitmap, function based)
* Packages *
DBMS_ADVISOR.add_sts_ref(task_name, sts_owner, sts_name)
#add an existing SQL tuning set
DBMS_ADVISOR.add_sqlwkld_ref(task_name, sts_name)
#add an existing SQL workload
DBMS_ADVISOR.delete_sts_ref(task_name, sts_owner, sts_name) #removes the link between the SQL Access Advisor task and the workload
DBMS_ADVISOR.create_task(advisor_name=>'SQL Access Advisor', task_name)
DBMS_ADVISOR.set_task_parameter(task_name, param, value)[partition | segment][def_partition_tablespace, tbs][max_number_partitions,...]
[valid_table_list,...][mode,comprehensive][journaling,4][analysis_scope,all]
DBMS_ADVISOR.execute_task(task_name)
DBMS_ADVISOR.get_task_report(task_name, type[text], level[typical], section[all], owner_name, execution_name, object_id)
DBMS_ADVISOR.cancel_task(task_name) #you won't see any recommendations
DBMS_ADVISOR.interrupt_task(task_name) #you may see partial recommendations; the task can't be resumed or restarted
DBMS_ADVISOR.create_file(DBMS_ADVISOR.get_task_script(task_name), directory_name, file_name)
DBMS_ADVISOR.copy_sqlwkld_to_sts()
DBMS_ADVISOR.quick_tune(DBMS_ADVISOR.SQLACCESS_ADVISOR, task_name, sql_text)
#task is automatically created
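A quick_tune sketch (the task name and statement are made up):
begin
  DBMS_ADVISOR.quick_tune(
    DBMS_ADVISOR.sqlaccess_advisor,
    'quick_task1',
    'select * from sh.sales where amount_sold > 100');
end;
/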
ADDM Automatic Database Diagnostic Monitor
control_management_pack_access=DIAGNOSTIC+TUNING and statistics_level=TYPICAL or ALL
Analyzes AWR data, presents problems and recommendations. Runs automatically by default when a new AWR snapshot is taken. Enabled by default.
You can use directives to limit or filter ADDM findings
* ADDM for RAC Real Application Cluster *
Database ADDM(analyzes all RAC instances), Instance ADDM(analyzes one RAC instance), Partial ADDM(analyzes some RAC instances)
Analyze: [Global resources and global I/O], [High-Load SQL], [Contention across the instances], [Global cache interconnect traffic], [network latency issues],
[skews in instance report times]
* Packages *
DBMS_ADDM.analyze_db(task_name, begin_snapshot, end_snapshot)
#Creates an ADDM task global for all RAC instances
DBMS_ADDM.analyze_inst
#Creates an ADDM task for a single instance
DBMS_ADDM.analyze_partial
#Creates an ADDM task for some instances
DBMS_ADDM.delete
#delete an ADDM task
DBMS_ADDM.get_report(task_name)
#Gets a report of an ADDM task
DBMS_ADDM.delete_finding_directive;
DBMS_ADDM.insert_finding_directive(task=>null, directive=>'SGA Directive', finding=>'Undersized SGA', minactivesessions=>5, minpctimpact=>50)
#TASK : null means all subsequently created tasks
#FINDING: the finding in DBA_ADVISOR_FINDING_NAMES
DBMS_ADDM.delete_sql_directive; DBMS_ADDM.insert_sql_directive
#Limit action
DBMS_ADDM.delete_segment_directive;
#Limit action
DBMS_ADDM.insert_segment_directive
#Prevent ADDM from reporting a schema, segment, subsegment or object
DBMS_ADDM.delete_parameter_directive; DBMS_ADDM.insert_parameter_directive(param_name)
#Prevent ADDM from altering specified parameter
DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER (advisor_name, parameter, value)
#Specify the mode in which to run ADDM [Database | Instance | Partial]
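A sketch of a database-wide ADDM run over an AWR snapshot range (task name and snapshot ids are made up):
var tname varchar2(60)
exec :tname := 'addm_db_task1'
exec DBMS_ADDM.analyze_db(:tname, 100, 110)
select DBMS_ADDM.get_report(:tname) from dual;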
* Views *
DBA_ADVISOR_FINDINGS(finding_name) #USER_... available. Shows which findings occur most frequently in the db
DBA_ADDM_TASKS #USER_... available. Show all executed ADDM tasks
DBA_ADDM_INSTANCES #USER_... available. Instance level info for completed ADDM tasks
DBA_ADDM_FINDINGS #USER_... available. Extends DBA_ADVISOR_FINDINGS
DBA_ADVISOR_FINDING_NAMES #USER_... available. Lists all registered finding names
DBA_ADVISOR_RECOMMENDATIONS(filtered[Y | N])
DBA_ADVISOR_ACTIONS(filtered[Y | N])
Automatic Memory Management
sga_target=0 #dynamic; pga_aggregate_target=0 #dynamic; memory_target=900M #dynamic; memory_max_target=2000M #static
show parameter target; #archive_lag_target=0, db_flashback_retention_target=1440, fast_start_io_target=0, fast_start_mttr_target=0, memory_max_target=900M,
memory_target=900M, pga_aggregate_target=0; sga_target=0
sga_target  pga_aggregate_target  memory_target  memory_max_target  result
0           0                     900M           999M               AMM enabled, defaults 60% to SGA and 40% to PGA
50M         100M                  900M           999M               AMM enabled, SGA minimum 50M, PGA minimum 100M
50M         0                     900M           999M               AMM enabled, SGA minimum 50M, PGA minimum is memory_target-50M
0           100M                  900M           999M               AMM enabled, SGA minimum is memory_target-100M, PGA minimum 100M
memory_target=900M, memory_max_target=0 or null: AMM enabled, memory_max_target defaults to memory_target (SGA is not autotuned, PGA is)
sga_target=50M, memory_target not set: only SGA subcomponents are automatically tuned, PGA is autotuned, memory_max_target=0
memory_target=0 or null, memory_max_target=999M: AMM disabled; memory_target=0 is the default value
* Views *
V$MEMORY_DYNAMIC_COMPONENTS(component, current_size, user_specified_size)
V$MEMORY_CURRENT_RESIZE_OPS #current resizes in progress
Optimizer Statistics Collection
Runs automatically during the nightly maintenance window, giving precedence to the objects most in need of statistics. Preferences can be set at table, schema, database and global level.
Both global (gathered incrementally, but not with histograms) and local statistics for partitioned tables. The statistics job is atomic at schema level
Statistics are collected for user-defined function and function-based indexes
Pending statistics are gathered but not yet published (not used by the optimizer by default)
* Options, Statistics preferences *
publish[true | false]: def. true. Store statistics in data dictionary, current statistics
stale_percent: def. 10%. Threshold level for an object to be stale
* Granularity * auto, global, global and partition, all, partition, subpartition
* Extended Statistics * multi column or expression. Collects number of distinct values, density, number of nulls, frequency histograms
Capture statistics, Test and Publish (see the sketch after these steps)
1) alter session set optimizer_use_pending_statistics=false;
2) set PUBLISH option to false using SET_TABLE_PREFS procedure
#subsequent statistics will be gathered as pending
3) gather the statistics for table;
4) alter session set optimizer_use_pending_statistics=true;
5) test the statistics
6) execute the PUBLISH_PENDING_STATS procedure
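A sketch of the steps above, reusing the schema/table names from the incremental example below:
exec DBMS_STATS.set_table_prefs('ALDO', 'PERSONE', 'PUBLISH', 'FALSE');
exec DBMS_STATS.gather_table_stats('ALDO', 'PERSONE');   -- gathered as pending statistics
alter session set optimizer_use_pending_statistics = true;
-- ...run the test queries...
exec DBMS_STATS.publish_pending_stats('ALDO', 'PERSONE');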
Capture global statistics for a partitioned table INCREMENTALLY, only on last partition
Incremental works if granularity=global, the table is marked both INCREMENTAL and PUBLISH, and estimate_percent=>auto_sample_size (always recommended)
1) set INCREMENTAL value to true use SET_TABLE_PREFS procedure
2) DBMS_STATS.gather_table_stats(ownname=>'ALDO', tabname=>'persone', granularity=>'GLOBAL')
* Packages *
DBMS_STATS.gather_database_stats_job_proc
DBMS_STATS.set_prefs
#set preferences at various levels
DBMS_STATS.set_table_prefs(schema, table, pref_name, pref_value)
#PUBLISH =[true(def.) | false] FALSE: statistics taken will be pending
#INCREMENTAL=[true | false(def.)] TRUE: statistics will not be re-collected for partitions that have not changed
DBMS_STATS.set_schema_prefs
DBMS_STATS.resume_gather_stats
DBMS_STATS.get_prefs(['estimate_percent' | 'stale_percent' | 'publish'], schema, table)
DBMS_STATS.publish_pending_stats(schema[null(def.)], table[null(def.)])
#pending stats became public
DBMS_STATS.delete_pending_stats(schema[null(def.)], table[null(def.)])
DBMS_STATS.export_pending_stats(schema[null(def.)], table[null(def.)])
DBMS_STATS.create_extended_stats(ownname, tabname, extension=>'(name, surname)')
# select DBMS_STATS.create_extended_stats(null, 'persone', extension=>'(name, surname)') from dual;
#collect extended statistics. A column group is created. A virtual hidden column is created with a system-defined name
DBMS_STATS.drop_extended_stats(ownname, tabname, extension=>'name, surname')
DBMS_STATS.show_extended_stats(ownname, tabname, extension=>'name, surname') return varchar2
#return the name of the statistics entrys
DBMS_STATS.gather_table_stats(ownname=>null, tabname=>'pippe', method_opt=>'for all columns size skewonly for columns(lower(a),b) skewonly')
#coll. ext. stats
* Views *
DBA_TAB_STATISTICS, DBA_IND_STATISTICS store published statistics
DBA_TAB_STAT_PREFS(owner, table_name, preference_name, preference_value) #shows current settings for statistics preferences
DBA_TAB_PENDING_STATS
DBA_STAT_EXTENSIONS(extension_name, extension)
Result Cache
In SGA(shared pool). Affected by memory_target, sga_target, shared_pool_size. Recently-used algorithm to age out results.
Can cache flashback SQL, can cache query fragments. Used both for SQL queries and deterministic functions.
/*Hint*/: result_cache and no_result_cache can be used in an inline view
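Hint usage sketch (the table name is made up):
select /*+ result_cache */ count(*), avg(amount) from sales;
select /*+ no_result_cache */ count(*), avg(amount) from sales;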
Result Cache Memory pool
SQL Query Result Cache
Pl/SQL Function Result Cache
client side
* Restrictions *
can't use the SQL Query Result Cache on: temporary tables, dictionary tables, non-deterministic PL/SQL functions (use the deterministic keyword),
[currval, nextval, sysdate, systimestamp, current_date, current_timestamp, localtimestamp, userenv, sys_context, sys_guid], data undergoing modification, subqueries,
pipelined functions; functions can't have OUT or IN OUT parameters, no anonymous blocks, not in a package with invoker's rights, no IN params of type LOB, REF CURSOR,
RECORD, COLLECTION, OBJECT
create or replace function pippo(id number) return pp_record RESULT_CACHE RELIES_ON (persone) is ...
#RELIES_ON means that cached results will become invalid when any database object on which they depend is changed
* Packages *
DBMS_RESULT_CACHE.memory_report
#Reports cache enabled, block size, max cache size, max result size, memory [total | fixed | dynamic]
DBMS_RESULT_CACHE.status
#enabled/disabled, in RAC shows if synchronizing
DBMS_RESULT_CACHE.flush return boolean
#returns true on successful removal; disable/close the cache with DBMS_RESULT_CACHE.bypass(true) before flushing
DBMS_RESULT_CACHE.bypass([true | false])
#bypass the cache
* Views *
V$RESULT_CACHE_STATISTICS #list cache settings and memory usage stats
V$RESULT_CACHE_OBJECTS(type, status[new | published | bypass | expired | invalid], name) #list cached objects and attributes
V$RESULT_CACHE_DEPENDENCY
V$RESULT_CACHE_MEMORY #show all memory blocks and statistics
Client Result Cache
OCI based, cache only top-level SQL statements no query fragments, excellent on lookup tables. Independent from server result cache
client sqlnet.ora init params override the db server init params
* Restrictions *
no views, no remote objects, no complex types, no flashback, no SQL with pl/sql, no VPD policies on the tables
* Views *
CLIENT_RESULT_CACHE_STAT$ #show client settings and usage stats
Adaptive Cursor Sharing
Automatic feature can't be turned off, no init params. Works the same for bind vars and literals.
10g: Bind variable not good for plan generation, first time will determine the plan forever(bind peeking)
11g: Generate new child cursors
Bind-sensitive cursor: bind values are so different that a new plan may be needed. New plans are added to the plan history but not used until the db has proved better
performance. See optimizer_capture_sql_plan_baselines. Once the plan has changed, the cursor becomes bind-aware and more plans will be used depending on the bind values
* Views *
V$SQL(is_bind_sensitive, is_bind_aware)
V$SQL_CS_HISTOGRAM #shows the distribution of execution counts across the execution history histogram
V$SQL_CS_SELECTIVITY #shows selectivity ranges stored in cursor for predicates with bind variables
V$SQL_CS_STATISTICS #stats for different bind sets, buffer gets and CPU time
Flashback Data Archive FDA - Oracle Total Recall
Requires Automatic Undo Management. Archivelog is not required. Data archive tablespaces must use ASSM
Stores data changes, compressed, for any length of time you want, in one or more tablespaces, in an internal history table for each tracked table.
Only for the tables you choose. You can't modify data in a flashback archive, only purge it.
Table/s must be enabled for tracking(flashback archiving).
Can set different retention for different tables, when in different archives, assigning different flashback data archives.
Retention is effectively per table (via the archive assigned) and must be specified when creating the data archive. An internal history table is created with a few more columns and is range partitioned.
FBDA process, sleep time automatically adjusted. Can use flashback SQL(as of).
Old data is automatically purged. May create default archive(not required)
* Differences from Flashback Database FD * FD takes the whole db back in time, while FDA gives read-only access to past data. FDA works online, FD requires the db to be offline (mounted).
FD works at db level while FDA works at table level.
* Limitations *
no [drop | rename | modify] column, yes add column. no partition operations. no LONG to LOB conversion. no [drop | truncate] table
SQL>grant flashback archive ADMINISTER to user;
#ADMINISTER allows [create flashback archive, alter flashback archive, drop flashback archive]
SQL>grant flashback archive on t1 to user;
#user also need select privs, execute on DBMS_FLASHBACK
SQL>create flashback archive f1 tablespace tbs1 quota 2000m RETENTION 4 year;
#RETENTION must be specified for data archive
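Enabling history tracking on a table (the counterpart of the disable command below):
SQL>alter table t1 flashback archive f1;
#enables flashback archiving for t1 using archive f1 (omit the archive name to use the default archive)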
alter flashback archive f1 purge all; alter flashback archive f1 purge before timestamp(systimestamp - interval '2' day);
alter flashback archive f1 purge before scn 1233
alter table t1 no flashback archive;
#all flashback data is removed
alter flashback archive fla1 modify retention 1 year
select * from t1 versions between timestamp to_timestamp(...) and maxvalue
* Views *
DBA_FLASHBACK_ARCHIVE(flashback_archive_name, retention_in_days, status), DBA_FLASHBACK_ARCHIVE_TS(quota_in_mb), DBA_FLASHBACK_ARCHIVE_TABLES
Flashback Transaction Backout
database in archivelog, undo and redo data are needed. supplemental logging and primary key supplemental logging at db level must be enabled.
Transaction dependencies:
write-after-write: dependent transaction modifies data modified by parent transaction
primary key constraint: dependent transaction reinserts the primary key deleted by parent transaction
alter database add supplemental log data;
alter database add supplemental log data (primary key) columns;
grant execute on dbms_flashback to hr;
grant select any transaction to hr;
grant insert, update, delete on pippo to hr;
#grant DML on specific involved tables
* Packages *
DBMS_FLASHBACK.transaction_backout(numberofxids, xids(array), options[see below...], scnhint(SCN at the start of transaction) or timehint)
#no commit inside! A usage sketch follows the options below
nocascade: (def.) no dependent transactions expected
cascade: dependent transactions are backed out before the parent
nocascade_force: dependent transactions are ignored
noconflict_only: only those rows with no conflict in parent transactions
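A sketch of backing out a single transaction with no dependents (the XID is made up):
declare
  xids sys.xid_array := sys.xid_array('03001800BF0D0000');
begin
  DBMS_FLASHBACK.transaction_backout(1, xids, DBMS_FLASHBACK.nocascade);
  -- review DBA_FLASHBACK_TRANSACTION_REPORT, then commit (no commit is issued inside the procedure)
end;
/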
* Views *
DBA_FLASHBACK_TRANSACTION_STATE #backed out transactions
DBA_FLASHBACK_TRANSACTION_REPORT
Oracle SecureFiles
10g lob->11g BasicFiles
transparent encryption. variable chunk size max 64M stored next to one another. redo generated only for changed data.
new client/server network for faster data transfer.
Every option may be changed later (ALTER TABLE ... MODIFY LOB). The LONG API can't be used to configure SecureFile settings.
COMPATIBLE must be at least 11.0. An error is raised if you use SecureFile capabilities on a BasicFile.
1) DEDUPLICATION detect and avoid duplicated data, Advanced Compression is required, only one copy is saved, deduplication is disabled by default
2) COMPRESSION [HIGH | MEDIUM(def.)], compression is disabled by default
3) ENCRYPTION requires the Advanced Security Option [3DES168 | AES128 | AES192 | AES256], encryption is disabled by default
Older storage clauses (chunk, pctversion, freelists) are not necessary with SecureFiles; use these instead:
MAXSIZE: maximum LOB segment size
RETENTION: version control policy
MAX: after MAXSIZE old version is used
MIN: old version is retained for at least the time specified
AUTO: default.
NONE: old versions are reused as much as possible
create table pippo(l clob) LOB(l) STORE AS [securefile | basicfile] (COMPRESS [HIGH | MEDIUM] deduplicate [CACHE | NOCACHE | CACHE READS] nologging);
#STORE AS is optional; the default is the traditional LOB (BasicFile) unless db_securefile says otherwise
#CACHE places lob pages in the buffer cache for speedier access, default is NOCACHE, CACHE READS caches only during reads not for writes
#NOLOGGING will not generate any redo
#COMPRESS [HIGH | MEDIUM(def.)]
#DEDUPLICATE / KEEP_DUPLICATES specify whether the database will store duplicates for the LOB column
create table pippo (doc clob)lob(doc) store as securefile(ENCRYPT);
create table pippo (doc clob encrypt using 'AES256')lob(doc) store as securefile;
Migrating to SecureFiles
* Partition Exchange *
Ensure you have enough space for largest partition, long maintenance window, segment offline
* Online Redefinition (recommended) *
No segment offline, migration in parallel, index must be rebuild, double space required.
Materialized view must be refreshed if the source table is redefined.
create table tab1(id number, c lob) lob(c) store as basicfile;
create table tab2(id number, c lob) lob(c) store as securefile;
DBMS_REDEFINITION.start_redef_table('scott', 'tab1', 'tab2', 'id id c c');
DBMS_REDEFINITION.copy_table_dependents('scott', 'tab1', 'tab2', 1, true, true,true,true,false, error_count);
DBMS_REDEFINITION.finish_redef_table('scott', 'tab1', 'tab2');
* Packages *
DBMS_LOB.getoptions
#return lob settings
DBMS_LOB.setoptions
DBMS_LOB.get_deduplicated_regions
DBMS_SPACE.space_usage
#determine disk space used by all lob in segment, only for ASSM segments
* Views *
DBA_SPACE, DBA_LOBS
Locking Enhancements
By default a DDL will fail if can't get an immediate DML lock on the table.
lock table pippo in exclusive mode [nowait | wait [n]] (clause optional)
#if the clause is omitted the statement waits indefinitely
The following statements will not require a lock:
create index online;
create materialized view log;
alter table enable constraint novalidate;
Minimal invalidation - Fine Grained Dependency Management
Only logically affected objects, such as views and synonyms, are invalidated during online redefinition; triggers have not been enhanced and will still be invalidated.
Parameter file creation from Memory
create [spfile | pfile] from memory;
#will contain values for all (around 150) init parameters, even if they were never explicitly set
Hot Patching
Supported on RAC
opatch enables, disables and installs online patches.
Currently only for Linux and Solaris. Consumes extra memory depending on the number of currently running Oracle processes.
At least one OS page of memory (4-8KB) per Oracle process is required.
opatch query -is_online_patch
#determine if patch is hot
Invisible Indexes
Invisible to the optimizer, discarded from plans. Can be changed from invisible to visible (and back) at any time.
create index inv1 on pippo(nome) invisible;
alter index i1 in/visible;
* Views *
DBA_INDEXES(visibility)
Shrinking Temporary Tablespaces
May shrink online temporary tablespaces and single tempfiles. Shrink to a minimum of 1MB(def.)
alter tablespace temp shrink space [keep 100m];
#Not equally distributed along tempfiles
alter tablespace temp shrink tempfile '/pippo.dbf' [keep 100m]
* Views *
DBA_TEMP_FREE_SPACE(tablespace_name, free_space) #FREE_SPACE in bytes. Space unallocated plus allocated space available to be reused
Tablespace Option for Temporary tables
May manually specify a temporary tablespace for a global temporary table; its indexes are created in the same tablespace
PL/SQL and Java Automatic Native Compilation
see parameter plsql_code_type and plsql_optimize_level
alter procedure pioppo compile plsql_code_type=native;
* Recompiling a Database for Pl/Sql Native Compilation *
1) shutdown immediate; 2) plsql_code_type=native, plsql_optimize_level=3 3) startup upgrade 4) @?/rdbms/admin/dbsupgnv.sql 5) restart and utlrp.sql
* Views *
DBA_PLSQL_OBJECT_SETTINGS(plsql_code_type)
OLTP Table Compression
Writes are not degraded and reads are improved because data is read directly in compressed form. Disk space and memory usage are minimized, CPU usage increases.
May compress data during DML, not only when bulk loading (10g) or in a CTAS (10g). Holes made by deletes are eliminated
Compression may be enabled for tables, partitions and tablespaces.
A block is compressed when reaches PCTFREE.
create table pippo(...) compress; #normal compress
create table pippo(...) compress for direct_load_operations #normal compress for a data warehouse workloads table not for OLTP
create table pippo(...) compress for all operations #compress for dml operations
alter table pippo compress for all operations #an existing table may be altered to compressed, but existing data remains uncompressed
* Views *
DBA_TABLES(compression, compress_for)
Direct NFS Client
Simplifies manageability across multiple platforms; performance is increased, but kernel NFS is not completely bypassed since it is still used for network communication.
Avoids the kernel NFS layer. Load balancing. You may control the I/O path. The OS must perform the mount. NFS version 3 protocol implemented in the Oracle RDBMS kernel.
If there is a mismatch between oranfstab and the OS mount points, Direct NFS is stopped. To remove an NFS path in use, restart the database.
The Direct NFS client will be disabled if you delete oranfstab, modify its settings, or replace the ODM NFS library with the stub libodm11.so
If the database can't open the NFS server using Direct NFS it falls back to the OS kernel NFS
* Configuring *
1) Specify mount point in /etc/mtab first default location and be sure the OS perform the mount
2) (optional) edit oranfstab for Oracle specfic options such as additional paths to a mount point.
3) Mount point search order:
(optional) $ORACLE_HOME/dbs/oranfstab
(optional) /etc/oranfstab
(when no oranfstab) /etc/mtab
4) Enable Direct NFS by replacing the standard ODM library $ORACLE_HOME/lib/libodm11.so with the ODM NFS library libnfsodm11.so
oranfstab format:
server: TestServer1 #nfs server name
path: 130.33.34.11 #up to 4 network paths if the first fail, load balancing
export: /vol/oradata1 #exported path from NFS server
mount: /mnt/oradata1 #local mount point
* Views *
V$DNFS_STATS #show performance stats for Direct NFS
V$DNFS_SERVERS #show servers accessed by Direct NFS
V$DNFS_FILES #files currently using Direct NFS
V$DNFS_CHANNEL #open network paths