Friday, November 30, 2007

Steps in the Emergency Performance Method


(Copied from: http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96532/ch3.htm#9798 )

1. Survey the performance problem and collect the symptoms of the performance problem.

This process should include the following:

* User feedback on how the system is underperforming. Is the problem throughput or response time?
* Ask the question, "What has changed since we last had good performance?" This answer can give clues to the problem; however, getting unbiased answers in an escalated situation can be difficult.


2. Sanity-check the hardware utilization of all components of the application system.

Check where the highest CPU utilization is, and check the disk, memory usage, and network performance on all the system components. This quick process identifies which tier is causing the problem. If the problem is in the application, then shift analysis to application debugging. Otherwise, move on to database server analysis.

3. Determine if the database server is constrained on CPU or if it is spending time waiting on wait events.

(a) If the database server is CPU-constrained, then investigate the following:

* Sessions that are consuming large amounts of CPU at the operating system level
* Sessions or statements that perform many buffer gets at the database level (check V$SESSTAT, V$SQL)
* Execution plan changes causing sub-optimal SQL execution (these can be difficult to locate)
* Incorrect setting of initialization parameters
* Algorithmic issues as a result of code changes or upgrades of all components

(b) If the database sessions are waiting on events, then follow the wait events listed in V$SESSION_WAIT to determine what is causing serialization.

In cases of massive contention for the library cache, it might not be possible to log on or submit SQL to the database. In this case, use historical data to determine why there is suddenly contention on this latch. If most waits are for I/O, then sample the SQL being run by the sessions that are performing all of the I/Os; queries for both checks are sketched below.
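
A minimal sketch of both checks (exact columns vary slightly by release):

-- 3(a): statements doing the most logical I/O, a reasonable proxy for database CPU
SELECT sql_text, buffer_gets, executions
  FROM v$sql
 ORDER BY buffer_gets DESC;        -- examine the top few rows

-- 3(b): what sessions are currently waiting on
SELECT event, COUNT(*)
  FROM v$session_wait
 GROUP BY event
 ORDER BY COUNT(*) DESC;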


4. Apply emergency action to stabilize the system.

This could involve actions that take parts of the application off-line or restrict the workload that can be applied to the system. It could also involve a system restart or the termination of jobs in progress. These naturally have service level implications.

5. Validate that the system is stable.

Having made changes and restrictions to the system, validate that the system is now stable, and collect a reference set of statistics for the database. Now follow the rigorous performance method described earlier in this book to bring back all functionality and users to the system. This process may require significant application re-engineering before it is complete.

Top Ten Mistakes Found in Oracle Systems

Copied from Oracle9i Database Performance Planning Release 2 (9.2)



  1. Bad Connection Management

    The application connects and disconnects for each database interaction. This problem is common with stateless middleware in application servers. It has over two orders of magnitude impact on performance, and it is totally unscalable.

  2. Bad Use of Cursors and the Shared Pool

    Not using cursors results in repeated parses. If bind variables are not used, then all SQL statements are hard parsed. This has an order of magnitude impact on performance, and it is totally unscalable. Use cursors with bind variables: open the cursor once and execute it many times. Be suspicious of applications generating dynamic SQL.
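
    A minimal before/after sketch (the emp lookup is hypothetical):

    -- a hard parse for every distinct literal:
    SELECT ename FROM emp WHERE empno = 7369;

    -- one shared cursor, reused across executions (SQL*Plus bind syntax):
    VARIABLE n NUMBER
    EXEC :n := 7369
    SELECT ename FROM emp WHERE empno = :n;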

  3. Getting Database I/O Wrong

    Many sites lay out their databases poorly over the available disks. Other sites specify the number of disks incorrectly, because they configure disks by disk space and not I/O bandwidth.

  4. Redo Log Setup Problems

    Many sites run with too few redo logs that are too small. Small redo logs cause system checkpoints to continuously put a high load on the buffer cache and I/O system. If there are too few redo logs, then the archive cannot keep up, and the database will wait for the archive process to catch up.
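
    A quick sanity check of the current log configuration (a sketch):

    SELECT group#, bytes/1024/1024 AS mb, members, status
      FROM v$log;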

  5. Serialization of data blocks in the buffer cache due to lack of free lists, free list groups, transaction slots (INITRANS), or shortage of rollback segments.

    This is particularly common on INSERT-heavy applications, in applications that have raised the block size to 8K or 16K, or in applications with large numbers of active users and few rollback segments.
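
    A sketch of setting these attributes at table creation (the table and values are illustrative only):

    CREATE TABLE hot_inserts (id NUMBER, payload VARCHAR2(100))
      INITRANS 4
      STORAGE (FREELISTS 4 FREELIST GROUPS 2);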

  6. Long Full Table Scans

    Long full table scans for high-volume or interactive online operations could indicate poor transaction design, missing indexes, or poor SQL optimization. Long table scans, by nature, are I/O intensive and unscalable.
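
    Statements currently doing full scans can be spotted in the shared pool (a sketch joining v$sql_plan to v$sql):

    SELECT p.object_owner, p.object_name, s.sql_text
      FROM v$sql_plan p, v$sql s
     WHERE p.address = s.address
       AND p.hash_value = s.hash_value
       AND p.operation = 'TABLE ACCESS'
       AND p.options = 'FULL';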

  7. On-Disk Sorting

    Sorts that spill to disk during online operations could indicate poor transaction design, missing indexes, or poor SQL optimization. Disk sorts, by nature, are I/O-intensive and unscalable.

  8. High Amounts of Recursive (SYS) SQL

    Large amounts of recursive SQL executed by SYS could indicate space management activities, such as extent allocations, taking place. This is unscalable and impacts user response time. Recursive SQL executed under another user ID is probably SQL and PL/SQL, and this is not a problem.

  9. Schema Errors and Optimizer Problems

    In many cases, an application uses too many resources because the schema owning the tables has not been successfully migrated from the development environment or from an older implementation. Examples of this are missing indexes or incorrect statistics. These errors can lead to sub-optimal execution plans and poor interactive user performance. When migrating applications of known performance, export the schema statistics to maintain plan stability using the DBMS_STATS package.

    Likewise, optimizer parameters set in the initialization parameter file can override proven optimal execution plans. For these reasons, schemas, schema statistics, and optimizer settings should be managed together as a group to ensure consistency of performance.
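
    A sketch of exporting schema statistics with DBMS_STATS (SCOTT and STATS_TAB are hypothetical names):

    EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'STATS_TAB');
    EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'STATS_TAB');
    -- after moving the statistics table to the target database:
    EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'STATS_TAB');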

  10. Use of Nonstandard Initialization Parameters

    These might have been implemented based on poor advice or incorrect assumptions. In particular, parameters associated with SPIN_COUNT on latches and undocumented optimizer features can cause a great deal of problems that can require considerable investigation.

Performance Diagnosis - Understanding the core SQL statements

From:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96532/ch2.htm


Understanding the core SQL statements in an application is key to performance diagnosis.


For example, a core business transaction is required to run in a subsecond response time. Initial investigation of the transaction shows that this transaction performs 200 logical reads, of which 40 are always obtained from disk. Taking a disk response time of 20 milliseconds, the likely I/O time is 40 x 0.02 = 0.8 seconds, which probably fails the response time target. The DBA requests that the transaction be rewritten, and the number of logical I/Os is reduced to 80, with an average of five coming from disk.
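
The per-execution read counts in an example like this would typically come from v$sql (the sql_text filter is hypothetical):

SELECT sql_text,
       buffer_gets / executions AS logical_reads_per_exec,
       disk_reads  / executions AS physical_reads_per_exec
  FROM v$sql
 WHERE executions > 0
   AND sql_text LIKE 'SELECT /* core_txn */%';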

Thursday, November 29, 2007

Manage ASM - Some tests

Test with ASM
================


1. Log in to the ASM instance

NTBK-/home/Denis/e_oracle/Practise_Oracle/asm_practice>export ORACLE_SID=+ASM
NTBK-/home/Denis/e_oracle/Practise_Oracle/asm_practice>sqlplus / as sysdba


2. Disk discovery

select name, header_status, path from v$asm_disk;

NAME HEADER_STATU PATH
------------------------------ ------------ ------------------------------
ORCL_DG1_0000 MEMBER E:\ASMDISKS\_FILE_DISK1
ORCL_DG1_0001 MEMBER E:\ASMDISKS\_FILE_DISK2
ORCL_DG1_0002 MEMBER E:\ASMDISKS\_FILE_DISK3
ORCL_DG1_0003 MEMBER E:\ASMDISKS\_FILE_DISK4

3. Create a diskgroup

idle> ;
1 CREATE DISKGROUP orcl_dg2 external REDUNDANCY
2 DISK 'e:\asmdisks\_file_disk10',
3* 'e:\asmdisks\_file_disk11'

idle> @disk_discover

NAME MOUNT_S HEADER_STATU STATE PATH
------------------------------ ------- ------------ -------- ------------------------------
ORCL_DG1_0000 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK1
ORCL_DG2_0000 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK10
ORCL_DG1_0001 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK2
ORCL_DG1_0002 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK3
ORCL_DG1_0003 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK4
ORCL_DG2_0001 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK11
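
Notes: judging from the output columns, disk_discover.sql is presumably just:

col path format a30
select name, mount_status, header_status, state, path from v$asm_disk;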


4. Add a disk

ALTER DISKGROUP orcl_dg2 ADD DISK
'e:\asmdisks\_file_disk12';

idle> @disk_discover.sql

NAME MOUNT_S HEADER_STATU STATE PATH
------------------------------ ------- ------------ -------- -----------------------------
ORCL_DG1_0000 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK1
ORCL_DG1_0001 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK2
ORCL_DG1_0002 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK3
ORCL_DG1_0003 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK4
ORCL_DG2_0000 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK10
ORCL_DG2_0001 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK11
ORCL_DG2_0002 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK12


5. Drop a disk

idle> select name, total_mb - free_mb "USED_MB" from v$asm_disk order by 1;

NAME USED_MB
------------------------------ ----------
ORCL_DG1_0000 82
ORCL_DG1_0001 74
ORCL_DG1_0002 79
ORCL_DG1_0003 77
ORCL_DG2_0000 18
ORCL_DG2_0001 18
ORCL_DG2_0002 18

7 rows selected.

idle> alter diskgroup orcl_dg2 drop disk orcl_dg2_0002;

Diskgroup altered.
idle> select name, total_mb - free_mb "USED_MB" from v$asm_disk order by 1;

NAME USED_MB
------------------------------ ----------
ORCL_DG1_0000 82
ORCL_DG1_0001 74
ORCL_DG1_0002 79
ORCL_DG1_0003 77
ORCL_DG2_0000 26
ORCL_DG2_0001 26
100

Notes: the unnamed row is presumably the disk just dropped; once a disk leaves the disk group its NAME column is cleared (its header status becomes FORMER), so only the size column shows.

6. Resize a diskgroup

idle> select name, total_mb from v$asm_diskgroup;

NAME TOTAL_MB
------------------------------ ----------
ORCL_DG1 396
ORCL_DG2 200


idle> alter diskgroup orcl_dg2 resize all size 80M;

Diskgroup altered.

idle> select name, total_mb from v$asm_diskgroup;

NAME TOTAL_MB
------------------------------ ----------
ORCL_DG1 396
ORCL_DG2 160


7. Mount/Dismount ASM diskgroup

idle> select name, state from v$asm_diskgroup;

NAME STATE
------------------------------ -----------
ORCL_DG1 MOUNTED
ORCL_DG2 MOUNTED


idle> alter diskgroup orcl_dg2 dismount;

Diskgroup altered.

idle> alter diskgroup orcl_dg1 dismount;
alter diskgroup orcl_dg1 dismount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "ORCL_DG1" precludes its dismount

Notes: If you try to dismount a disk group that contains open files,
the statement will fail, unless you also specify the FORCE clause.

idle> select name, state from v$asm_diskgroup;

NAME STATE
------------------------------ -----------
ORCL_DG1 MOUNTED
ORCL_DG2 DISMOUNTED

idle> alter diskgroup orcl_dg2 mount;



8. Observing rebalancing when adding a disk

idle> select name, total_mb, free_mb from v$asm_diskgroup;

NAME TOTAL_MB FREE_MB
------------------------------ ---------- ----------
ORCL_DG1 496 131
ORCL_DG2 160 107

SQL> create tablespace asm_users datafile '+ORCL_DG2' size 100m;

idle> select name, total_mb, free_mb from v$asm_diskgroup;

NAME TOTAL_MB FREE_MB
------------------------------ ---------- ----------
ORCL_DG1 496 131
ORCL_DG2 160 5

idle> select name, total_mb - free_mb "USED_MB" from v$asm_disk order by 1;

NAME USED_MB
------------------------------ ----------
ORCL_DG2_0000 78
ORCL_DG2_0001 77


alter diskgroup orcl_dg2 add disk 'e:\asmdisks\_file_disk13';
alter diskgroup orcl_dg2 add disk 'e:\asmdisks\_file_disk14';

idle> select name, total_mb - free_mb "USED_MB" from v$asm_disk order by 1;

NAME USED_MB
------------------------------ ----------
ORCL_DG2_0000 43
ORCL_DG2_0001 44
ORCL_DG2_0002 20
ORCL_DG2_0003 52
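
While the rebalance is running, its progress can be watched from the ASM instance (a sketch):

select group_number, operation, state, power, est_minutes
  from v$asm_operation;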

Wednesday, November 28, 2007

Turn on and off tracing in another session


Turn on:
exec sys.dbms_system.set_ev(34, 20716, 10046, 12, '');

Turn off:
exec sys.dbms_system.set_ev(34, 20716, 10046, 0, '');

dbms_system.set_ev

PROCEDURE SET_EV
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
SI BINARY_INTEGER IN
SE BINARY_INTEGER IN
EV BINARY_INTEGER IN
LE BINARY_INTEGER IN
NM VARCHAR2 IN

si = sid
se = serial#
ev = event = 10046
le = level = 12
nm = name = ''
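
The SID and SERIAL# (34 and 20716 above) come from v$session, for example (SCOTT is a hypothetical target user):

select sid, serial#, username, program
  from v$session
 where username = 'SCOTT';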

Sunday, November 25, 2007

Usage Scenario - Transporting and Attaching Partitions for Data Warehousing

Goal: Employ the transportable tablespace feature to load monthly sales data into a data warehouse

Ref: This usage scenario is taken from Oracle9i Database Administrator's Guide

1. Suppose every month you would like to load one month's worth of data into
the data warehouse. There is a large fact table in the data warehouse
called sales, which has the following columns:

CREATE TABLE sales (invoice_no NUMBER,
                    sale_year  INT NOT NULL,
                    sale_month INT NOT NULL,
                    sale_day   INT NOT NULL)
PARTITION BY RANGE (sale_year, sale_month, sale_day)
  (partition jan98 VALUES LESS THAN (1998, 2, 1),
   partition feb98 VALUES LESS THAN (1998, 3, 1),
   partition mar98 VALUES LESS THAN (1998, 4, 1),
   partition apr98 VALUES LESS THAN (1998, 5, 1),
   partition may98 VALUES LESS THAN (1998, 6, 1),
   partition jun98 VALUES LESS THAN (1998, 7, 1));

2. You create a local nonprefixed index:
CREATE INDEX sales_index ON sales(invoice_no) LOCAL;

3. Initially, all partitions are empty, and are in the same default tablespace.
Each month, you want to create one partition and attach it to the
partitioned sales table.

4. Suppose it is July 1998, and you would like to load the July sales data into
the partitioned table.

5. In a staging database, you create a new tablespace, ts_jul. You also create
a table, jul_sales, in that tablespace with exactly the same column types as
the sales table. You can create the table jul_sales using the
CREATE TABLE ... AS SELECT statement.
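
One way to do this (a sketch, assuming the staging database has a sales definition to copy; the WHERE clause produces an empty copy that is then populated with the July rows):

CREATE TABLE jul_sales TABLESPACE ts_jul
  AS SELECT * FROM sales WHERE 1 = 0;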

6. After creating and populating jul_sales,
you can also create an index, jul_sale_index, for the table, indexing the same
column as the local index in the sales table.

7. After building the index, transport the tablespace ts_jul to the data warehouse.
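
A sketch of the transport with the exp/imp utilities (passwords, file names, and paths are hypothetical):

ALTER TABLESPACE ts_jul READ ONLY;

-- from the OS shell on the staging host:
exp userid="sys/password as sysdba" transport_tablespace=y tablespaces=ts_jul file=jul.dmp

-- copy jul.dmp and the ts_jul datafile(s) to the warehouse host, then:
imp userid="sys/password as sysdba" transport_tablespace=y file=jul.dmp datafiles='/u01/oradata/ts_jul01.dbf'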

8. In the data warehouse, add a partition to the sales table for the July sales data.
This also creates another partition for the local nonprefixed index:

ALTER TABLE sales ADD PARTITION jul98 VALUES LESS THAN (1998, 8, 1);

9. Attach the transported table jul_sales to the table sales by exchanging it
with the new partition:

ALTER TABLE sales EXCHANGE PARTITION jul98 WITH TABLE jul_sales
INCLUDING INDEXES
WITHOUT VALIDATION;

Notes: This statement places the July sales data into the new partition jul98,
attaching the new data to the partitioned table. This statement also converts
the index jul_sale_index into a partition of the local index for the sales table.
This statement should return immediately, because it only operates on the structural
information and it simply switches database pointers.
If you know that the data in the new partition does not overlap with data in
previous partitions, you are advised to specify the WITHOUT VALIDATION option.
Otherwise, the statement goes through all the new data in the new partition in
an attempt to validate the range of that partition.

If all partitions of the sales table came from the same staging database
(the staging database is never destroyed), the exchange statement always succeeds.
In general, however, if data in a partitioned table comes from different databases,
it's possible that the exchange operation may fail. For example, if the jan98 partition
of sales did not come from the same staging database, the above exchange operation
can fail, returning the following error:

ORA-19728: data object number conflict between table JUL_SALES and partition
JAN98 in table SALES

To resolve this conflict, move the offending partition by issuing the following statement:

ALTER TABLE sales MOVE PARTITION jan98;

Then retry the exchange operation.

10. After the exchange succeeds, you can safely drop jul_sales and jul_sale_index
(both are now empty). Thus you have successfully loaded the July sales data
into your data warehouse.

Tuesday, November 20, 2007

Display Session Waiting for Locks Using utllockt.sql script

The utllockt.sql script displays, in tree-structured fashion, the sessions in the system that are waiting for locks and the locks that they are waiting for.
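
Notes: utllockt.sql lives in $ORACLE_HOME/rdbms/admin and depends on the lock views created by catblock.sql; if it complains about missing views, run catblock.sql as SYS first.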

The following test is done with Oracle 10gR2 database:


In session 1:

scott@ORCL> update emp set job=lower(job) where empno=1000;

1 row updated.


In session 2:

scott@ORCL> update emp set ename=upper(ename) where empno = 1000;

In session 3:

scott@ORCL> delete from emp where empno=1000;




cd /cygdrive/c/oraclexe/app/oracle/product/10.2.0/server/RDBMS/ADMIN
sqlplus / as sysdba


sys@ORCL> @utllockt.sql

...

WAITING_SESSION   LOCK_TYPE         MODE_REQUESTED MODE_HELD    LOCK_ID1  LOCK_ID2
----------------- ----------------- -------------- ------------ --------- -----------------
151               None
   135            Transaction       Exclusive      Exclusive    524333    13026
   147            Transaction       Exclusive      Exclusive    524333    13026



After commit in session 1, run utllockt.sql again:


WAITING_SESSION   LOCK_TYPE         MODE_REQUESTED MODE_HELD      LOCK_ID1          LOCK_ID2
----------------- ----------------- -------------- -------------- ----------------- -----------------
135               None
   147            Transaction       Exclusive      Exclusive      131072            12916


After commit in session 2, session 3's blocked delete completes:

scott@ORCL> delete from emp where empno=1000;

1 row deleted.

scott@ORCL> rollback;

Rollback complete.

scott@ORCL> select * from emp where empno=1000;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- -------------------- ---------- ---------- ----------
1000 DENNISII

Monday, November 19, 2007

Testing ASM on the Windows platform Using blank files



Ref: http://www.idevelopment.info/data/Oracle/DBA_tips/Automatic_Storage_Management/ASM_22.shtml


Environment: Oracle 10gR2 on Windows XP PC

1. Use DBCA to create an ASM instance

During this process, run the following script as asked:

C:\oraclexe\app\oracle\product\10.2.0\server\BIN>localconfig.bat add
Step 1: creating new OCR repository
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'denis', privgrp ''..
Operation successful.
Step 2: creating new CSS service
successfully created local CSS service
successfully added CSS to home

Notes: An ASM service will be automatically created

2. Set Initialization Parameter

C:\> set ORACLE_SID=+ASM
C:\> sqlplus "/ as sysdba"
SQL> alter system set "_asm_allow_only_raw_disks"=false scope=spfile;
SQL> alter system set asm_diskstring='e:\asmdisks\_file*' scope=both;

3. Bounce the ASM Instance

SQL> shutdown
SQL> startup

- verify the parameter
SQL> show parameters asm

NAME TYPE VALUE
------------------------------------ ----------- -----------------------------
_asm_allow_only_raw_disks boolean FALSE
asm_diskgroups string
asm_diskstring string e:\asmdisks\_file*
asm_power_limit integer 1


4. Create New Directory for Device Files

mkdir e:\asmdisks

5. Create Files for ASM Disks

Make four roughly 100 MB (99 x 1 MB) text files using Perl:

------- creatTextFile.pl -----------------
my $s = '0' x 2**20;    # a 1 MB string of '0' characters

open(DF1,">E:/asmdisks/_file_disk1") or die "Cannot create file - $!\n";
open(DF2,">E:/asmdisks/_file_disk2") or die "Cannot create file - $!\n";
open(DF3,">E:/asmdisks/_file_disk3") or die "Cannot create file - $!\n";
open(DF4,">E:/asmdisks/_file_disk4") or die "Cannot create file - $!\n";

# 99 writes of 1 MB each, so each file ends up just under 100 MB
# (v$asm_diskgroup later reports 4 x 99 = 396 MB for the group)
for (my $i = 1; $i < 100; $i++) {
    print DF1 $s; print DF2 $s; print DF3 $s; print DF4 $s;
}

close(DF1); close(DF2); close(DF3); close(DF4);

Notes:
We can use asmtool to create files
asmtool -create \\server\share\file

E:\asmdisks>asmtool -create e:\asmdisks\_file_disk10 100


6. Confirm that the 'disks' can now be discovered

Log in to the ASM instance and issue:

col path format a30
SELECT group_number, disk_number, mount_status, header_status, state, path
FROM v$asm_disk;

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE PATH
------------ ----------- ------- ------------ -------- ------------------------------
0 0 CLOSED CANDIDATE NORMAL E:\ASMDISKS\_FILE_DISK1
0 3 CLOSED CANDIDATE NORMAL E:\ASMDISKS\_FILE_DISK4
0 2 CLOSED CANDIDATE NORMAL E:\ASMDISKS\_FILE_DISK3
0 1 CLOSED CANDIDATE NORMAL E:\ASMDISKS\_FILE_DISK2



7. Create ASM disk group

Using SQL*Plus, create a disk group with normal redundancy and two failure groups:

set ORACLE_SID=+ASM
sqlplus "/ as sysdba"

SQL> CREATE DISKGROUP orcl_dg1 NORMAL REDUNDANCY
FAILGROUP controller1 DISK 'e:\ASMDISKS\_FILE_DISK1', 'e:\ASMDISKS\_FILE_DISK2'
FAILGROUP controller2 DISK 'e:\ASMDISKS\_FILE_DISK3', 'e:\ASMDISKS\_FILE_DISK4';
Diskgroup created.


sys@ORCL> select group_number, name, total_mb, free_mb, state, type
2 from v$asm_diskgroup;

GROUP_NUMBER NAME TOTAL_MB FREE_MB STATE TYPE
------------ ------------- ---------- ---------- ----------- ------
1 ORCL_DG1 396 290 MOUNTED NORMAL



Rerunning the disk discovery query from step 6 shows the disks are now members:

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE PATH
------------ ----------- ------- ------------ -------- ------------------------------
1 0 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK1
1 1 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK2
1 2 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK3
1 3 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK4

8. Use ASM disk group
SQL> create tablespace users2 datafile '+ORCL_DG1' size 100m;


sys@ORCL> select tablespace_name, file_name from dba_data_files order by 1;

TABLESPACE_NAME FILE_NAME
------------------------------ ----------------------------------------
AUTO_TBS E:\ORADATA\ORCL\AUTO_TBS_DATA01.DBF
EXAMPLE C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\OR
ADATA\ORCL\EXAMPLE01.DBF
HISTORY E:\ORADATA\HISTORY01.DBF
MANUAL_TBS E:\ORADATA\ORCL\MANUAL_TBS_DATA01.DBF
PART1 E:\ORADATA\ORCL\PART1_01.DBF
PART2 E:\ORADATA\ORCL\PART2_01.DBF
PART3 E:\ORADATA\ORCL\PART3_01.DBF
PART4 E:\ORADATA\ORCL\PART4_01.DBF
SEQ E:\ORADATA\ORCL\DATAFILE\O1_MF_SEQ_3MKLG
ORW_.DBF

SYSAUX C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\OR
ADATA\ORCL\SYSAUX01.DBF

SYSTEM C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\OR
ADATA\ORCL\SYSTEM01.DBF

UNDOTBS1 C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\OR
ADATA\ORCL\UNDOTBS01.DBF

UNDO_BATCH E:\ORADATA\ORCL\UNDO_DATCH01.DBF
USERS C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\OR
ADATA\ORCL\USERS01.DBF

USERS2 +ORCL_DG1/orcl/datafile/users2.256.63904
9975

Wednesday, November 14, 2007

RMAN 'spool log to' example

I never knew this could be done at the RMAN prompt:


NTBK-/home/Denis/e_oracle/Practise_Oracle/temp>rman

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Nov 14 11:02:04 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

RMAN> connect target

connected to target database: ORCL (DBID=1151466663)

RMAN> spool log to temp.log;
RMAN> show all;
RMAN> exit;
NTBK-/home/Denis/e_oracle/Practise_Oracle/temp>cat temp.log

Spooling started in log file: temp.log

Recovery Manager10.2.0.1.0

RMAN>
using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\SE

RMAN>

Recovery Manager complete.
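
Notes: spooling can also be turned off without leaving RMAN:

RMAN> spool log off;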

Thursday, November 01, 2007

Transaction isolation level - A test

From Oracle 9i Database Concepts Chapter 20, we can read the following statement:

Oracle generates an error when a serializable transaction tries to update or delete data modified by a transaction that commits after the serializable transaction began:

ORA-08177: Cannot serialize access for this transaction

I tested with the following steps to verify the above statement:


(1) In Session 1, issue:

hr@ORCL> select * from t1;

ID NAME
---------- ----------
4 SMITH
1 SUN
2 SUN
3 JIN

(2) Also in Session 1:

hr@ORCL> alter session set isolation_level=serializable;

Session altered.

Note: Oracle's default isolation level is READ COMMITTED
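
The same setting is also available per transaction rather than per session:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;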

(3) Still in Session 1:
hr@ORCL> update t1 set name='SUN' where id=3;

1 row updated.

hr@ORCL> select * from t1;

ID NAME
---------- ----------
4 SMITH
1 SUN
2 SUN
3 SUN

(4) Now in session 2, execute:

hr@ORCL> update t1 set name='SUN' where id=4;

1 row updated.
hr@ORCL> commit;

Commit complete.

hr@ORCL> select * from t1;

ID NAME
---------- ----------
4 SUN
1 SUN
2 SUN
3 JIN

(5) Now back in session 1, issue:

hr@ORCL> delete from t1 where name='SUN';
delete from t1 where name='SUN'
*
ERROR at line 1:
ORA-08177: can't serialize access for this transaction


hr@ORCL> commit;

Commit complete.

hr@ORCL> select * from t1;

ID NAME
---------- ----------
4 SUN
1 SUN
2 SUN
3 SUN


Note: if the isolation level were the default (read committed), the delete in step (5) would delete all rows in table t1