Tuesday, December 23, 2008

Update a join

I started working for my current client on Dec 15. This post is about the first new thing I've learned at this client site.

Yesterday I was asked to check a "scrub" job that the application team was running in a test environment and that would go into production at night. This "scrub" job essentially updates a column in three tables using the value from a driver table. The developer provided the actual code he used, as follows (the table and column names have been modified):


DECLARE
  counter  NUMBER(10):= 0;
  CURSOR upd_tabs IS SELECT x,  y FROM D;
BEGIN
  FOR rec IN upd_tabs LOOP
    UPDATE T1 SET y= rec.y WHERE x=rec.x;
    UPDATE T2 SET y= rec.y WHERE z=rec.x;
    UPDATE T3 SET y= rec.y WHERE z=rec.x;
    counter:=counter+1;
    IF MOD(counter,1000) = 0 THEN
       INSERT INTO update_log (COMMIT_TIME, COMMIT_COUNTER)
             VALUES (SYSDATE,  counter);
       COMMIT;
    END IF;
    IF upd_tabs%NOTFOUND THEN
      EXIT;
    END IF;
  END LOOP;
COMMIT;
END;



Several days ago, I came across a thread on the AskTom forum about updating one table with values from another. At the time I did not study it thoroughly. I re-visited that thread, and I think Tom's approach - updating a join - can be used in our case. I checked the tables involved on the production database and found:

T1 (x, y ... ) - x is primary key 3M

T2 (z, y ... ) - There is an index on z 14M

T3 (z, y ... ) - Primary key on z and another column, z is the leading column 9M

D (x, y ... ) - no primary key constraint; x is NOT NULL, and most likely x is unique as well


If we can enforce a primary key constraint on D (x), we can write the following SQL:

update
( select
    T1.y T1_y,
    D.y D_y
  from
     T1,
     D
  where T1.x = D.x
)
set
   T1_y = D_y
/
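
For completeness, a minimal sketch of enforcing that constraint first (these statements are not in the original notes and assume that D.x really contains no duplicate or NULL values):

-- check for duplicates first; the constraint creation will fail if any exist
select x, count(*) from D group by x having count(*) > 1;

alter table D add constraint d_pk primary key (x);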



As a rule of thumb, if a problem can be solved with a single SQL statement, that is almost always better than using PL/SQL. If we take a close look at the algorithm of the PL/SQL code above, we can see that it is essentially the same as the execution plan shown in the following test case (this test case simply repeats what Tom Kyte did in the AskTom forum thread):

denis@DB10G>
denis@DB10G> create table t1
2  ( x int constraint t1_pk primary key,
3    y int );

Table created.

denis@DB10G>
denis@DB10G> create table t2
2  ( x int constraint t2_pk primary key,
3    y int );

Table created.

denis@DB10G>
denis@DB10G> insert into t1 values (1,1);

1 row created.

denis@DB10G> insert into t1 values (2,1);

1 row created.

denis@DB10G> insert into t1 values (3,1);

1 row created.

denis@DB10G>
denis@DB10G> insert into t2 values (2,2);

1 row created.

denis@DB10G> insert into t2 values (3,2);

1 row created.

denis@DB10G>
denis@DB10G>
denis@DB10G> -- update a join
denis@DB10G> set autotrace on explain
denis@DB10G> update
2   ( select  /*+ USE_NL(t1) INDEX (t1 t1_pk ) */
3     t1.y t1_y, t2.y t2_y
4       from t1, t2
5     where t1.x = t2.x )
6    set t1_y = t2_y
7  /

2 rows updated.


Execution Plan
----------------------------------------------------------
Plan hash value: 2040170521

---------------------------------------------------------------------------------------
| Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT               |       |     2 |   104 |     5   (0)| 00:00:01 |
|   1 |  UPDATE                        | T1    |       |       |            |          |
|   2 |   NESTED LOOPS                 |       |     2 |   104 |     5   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL           | T2    |     2 |    52 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS BY INDEX ROWID | T1    |     1 |    26 |     1   (0)| 00:00:01 |
|*  5 |     INDEX UNIQUE SCAN          | T1_PK |     1 |       |     0   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - access("T1"."X"="T2"."X")

Note
-----
- dynamic sampling used for this statement

denis@DB10G>
denis@DB10G> set autotrace off;
denis@DB10G>
denis@DB10G> select * from t1;

X          Y
---------- ----------
1          1
2          2
3          2
denis@DB10G>
denis@DB10G> -- for larger tables the hint may not be necessary, Oracle can
denis@DB10G> -- pick up the right plan
denis@DB10G>
denis@DB10G> insert into t1 select rownum + 5 , 1
2  from all_objects
3  /

49841 rows created.

denis@DB10G>
denis@DB10G> analyze table t1 compute statistics;

Table analyzed.

denis@DB10G>
denis@DB10G> analyze table t2 compute statistics;

Table analyzed.

denis@DB10G>
denis@DB10G> set autotrace on explain
denis@DB10G> update
2     ( select t1.y t1_y, t2.y t2_y
3       from t1, t2
4     where t1.x = t2.x )
5    set t1_y = t2_y
6  /

2 rows updated.


Execution Plan
----------------------------------------------------------
Plan hash value: 2040170521

---------------------------------------------------------------------------------------
| Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT               |       |     2 |    20 |     5   (0)| 00:00:01 |
|   1 |  UPDATE                        | T1    |       |       |            |          |
|   2 |   NESTED LOOPS                 |       |     2 |    20 |     5   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL           | T2    |     2 |     8 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS BY INDEX ROWID | T1    |     1 |     6 |     1   (0)| 00:00:01 |
|*  5 |     INDEX UNIQUE SCAN          | T1_PK |     1 |       |     0   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - access("T1"."X"="T2"."X")

denis@DB10G> set autotrace off
denis@DB10G>



Per Tom:
It should be NOTED that in order for the update of a join to work, T2 in the above example must have a primary or unique key on it. If T2 does not, you'll not be able to update the join. The optimizer needs to know that T1 will be joined to AT MOST 1 row in T2 and the way it does this is by seeing that a primary key or unique constraint exists on the join conditions
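
As a side note (not something mentioned in that thread), the same bulk update can also be written as a MERGE; Oracle raises ORA-30926 at run time if the source rows are not unique on the join key:

merge into T1
using D
on (T1.x = D.x)
when matched then
  update set T1.y = D.y;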

Sunday, December 21, 2008

Oracle DML locks - a very simple test


V$LOCK

Column: type
Possible values:
TM - DML or Table Lock
TX - transaction
MR - Media Recovery
ST - Disk Space Transaction

Column: lmode, request
If lmode is not 0 or 1, the session has acquired the lock;
if request is other than 0 or 1, the session is waiting to acquire the lock.
Possible values:
1: null
2: Row Share (SS)
3: Row Exclusive (SX)
4: Share (S)
5: Share Row Exclusive (SSX)
6: Exclusive(X)
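
To see which object a TM lock refers to, V$LOCK can be joined to DBA_OBJECTS on ID1. The query below is a generic sketch of mine, not part of the test that follows:

select s.sid, l.type, l.lmode, l.request, o.owner, o.object_name
  from v$lock l, v$session s, dba_objects o
 where s.sid = l.sid
   and l.type = 'TM'
   and o.object_id = l.id1;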


Steps:

1. Obtain current session SID


SQL> select sid from v$session where audsid=userenv('SESSIONID');

SID
----------
27

2. Insert a row into the table lck

SQL> insert into lck values(30,31);

1 row created.

SQL> select * from lck;

A B
---------- ----------
1 2
2 3
4 5

3. How does step 2 influence v$lock?

SQL> select sid,type,id1,lmode,request from v$lock where sid = 27;

SID TY ID1 LMODE REQUEST
---------- -- ---------- ---------- ----------
27 TX 327746 6 0
27 TM 16147 3 0


Session 27 has acquired two locks.


4. At this time, if another session tries to run DDL against lck, what will happen?

SQL> alter table lck add (c number);
alter table lck add (c number)
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified

Sunday, December 14, 2008

Testing Update Conflict Resolution in a Streams Environment

Before starting this experiment, a two-way replication has already been set up for a table called denis.strm_tab3 between databases TEST10G and TEST02DB.

The structure of the table is as follows:

Name              Null?    Type
----------------- -------- ---------------
ID                NOT NULL NUMBER
FIRST_NAME                 VARCHAR2(20)
LAST_NAME                  VARCHAR2(30)


To test the conflict resolution, I will update the same row of the table with different values at the same time on the two databases. This is achieved through scheduler jobs. First, the following procedure is created:

create or replace procedure update_strm_tab3(l_id in  number, 
l_first_name in varchar2,
l_last_name in varchar2 )
as
begin
update strm_tab3 set first_name = l_first_name, last_name=l_last_name
where id= l_id;
commit;
end;
/



Second, the following scheduler jobs, which have the same start date, are submitted:

At TEST10G:

begin
dbms_scheduler.create_job (
job_name => 'update_strm_tab3_job',
job_type => 'PLSQL_BLOCK',
job_action => 'begin update_strm_tab3(6,''fn6d_10g'', ''ln6d_10g''); end;',
start_date => '14-DEC-2008 06:35:00 PM',
enabled => true,
comments => 'Update strm_tab3 ');
end;
/

At TEST02DB

begin
dbms_scheduler.create_job (
job_name => 'update_strm_tab3_job',
job_type => 'PLSQL_BLOCK',
job_action => 'begin update_strm_tab3(6,''fn6d_02db'', ''ln6d_02db''); end;',
start_date => '14-DEC-2008 06:35:00 PM',
enabled => true,
comments => 'Update strm_tab3 ');
end;
/

Note: to check the scheduler jobs, issue the following statement:

col owner format a15
col next_run_date format a20
select owner, job_name, state, last_run_duration,
next_run_date
from dba_scheduler_jobs
where owner='DENIS';

I have tested the following cases:

Case 1 - No conflict resolution method is set up

Results:

At TEST10G
ID FIRST_NAME           LAST_NAME
---------- -------------------- ------------------------------
6 fn6d_10g             ln6d_10g


At TEST02DB
ID FIRST_NAME           LAST_NAME
---------- -------------------- ------------------------------
6 fn6d_02db            ln6d_02db


The apply process status became 'ABORTED' on both databases; for example, we can see something like:


++  APPLY PROCESS INFORMATION ++
APPLY_NAME                    : STRMADMIN_APPLY_2
MAX_APPLIED_MESSAGE_NUMBER    :
STATUS                        : ABORTED
STATUS_CHANGE_TIME            : 14-dec-2008 15:10:09
ERROR_NUMBER                  : 26714
ERROR_MESSAGE                 : ORA-26714: User error encountered while applying




Case 2 - Conflict resolution - OVERWRITE at TEST02DB

OVERWRITE - When a conflict occurs, the OVERWRITE handler replaces the current value at the destination database with the new value in the LCR from the source database.

According to the Oracle documentation: "You must specify a conditional supplemental log group at the source database for all of the columns in the column_list at the destination database."

I issued the following statement at both TEST10G and TEST02DB:

ALTER TABLE denis.strm_tab3 ADD SUPPLEMENTAL LOG GROUP log_group_jobs_cr (first_name, last_name);
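
To verify that the log group exists, a query against DBA_LOG_GROUPS can be used (a sketch, not part of the original notes):

select log_group_name, table_name, log_group_type, always
  from dba_log_groups
 where owner = 'DENIS'
   and table_name = 'STRM_TAB3';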

I then set an update conflict handler using the SET_UPDATE_CONFLICT_HANDLER procedure in the DBMS_APPLY_ADM package, using the prebuilt method OVERWRITE.

I issued the following statement at TEST02DB:

DECLARE
cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
cols(1) := 'first_name';
cols(2) := 'last_name';
DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
object_name => 'denis.strm_tab3',
method_name => 'OVERWRITE',
resolution_column => 'first_name',
column_list => cols);
END;
/


Note:

The resolution_column is not used for OVERWRITE and DISCARD methods, but one of the columns in the column_list still must be specified.


RESULTS:

1. We can see the following row on both databases:
ID FIRST_NAME           LAST_NAME
---------- -------------------- ------------------------------
6 fn6d_10g             ln6d_10g


2. The apply process is aborted at TEST10G, while the apply process at TEST02DB works fine.

To prepare for the next test, I restarted the apply process at TEST10G and issued the following statement at TEST02DB:

denis@TEST02DB> update strm_tab3 set first_name='first', last_name='last' where id=6;

1 row updated.

denis@TEST02DB> commit;

I verified that this row is updated at both databases.


Case 3 - Conflict resolution - OVERWRITE at TEST02DB, DISCARD at TEST10G

DISCARD - When a conflict occurs, the DISCARD handler ignores the values in the LCR from the source database and retains the value at the destination database.

I issued the following statement at TEST10G to set up the DISCARD handler:

DECLARE
cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
cols(1) := 'first_name';
cols(2) := 'last_name';
DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
object_name => 'denis.strm_tab3',
method_name => 'DISCARD',
resolution_column => 'first_name',
column_list => cols);
END;
/

Then I submitted the scheduler job again.

Results:

This time the row is updated as expected and the apply processes run fine at both databases. So in this configuration, if there are update conflicts, the statement issued at TEST10G takes effect and the statement issued at TEST02DB is ignored.

Note: we can issue the following query to check information about update conflict handlers.

COLUMN OBJECT_OWNER HEADING 'TableOwner' FORMAT A5
COLUMN OBJECT_NAME HEADING 'Table Name' FORMAT A12
COLUMN METHOD_NAME HEADING 'Method' FORMAT A12
COLUMN RESOLUTION_COLUMN HEADING 'ResolutionColumn' FORMAT A13
COLUMN COLUMN_NAME HEADING 'Column Name' FORMAT A30

SELECT OBJECT_OWNER,
OBJECT_NAME,
METHOD_NAME,
RESOLUTION_COLUMN,
COLUMN_NAME
FROM DBA_APPLY_CONFLICT_COLUMNS
ORDER BY OBJECT_OWNER, OBJECT_NAME, RESOLUTION_COLUMN;



The output looks like:

At TEST10G


Table                           Resolution
Owner Table Name   Method       Column        Column Name
----- ------------ ------------ ------------- ------------------------
DENIS STRM_TAB3    DISCARD      FIRST_NAME    LAST_NAME
DENIS STRM_TAB3    DISCARD      FIRST_NAME    FIRST_NAME



At TEST02DB

Table                           Resolution
Owner Table Name   Method       Column        Column Name
----- ------------ ------------ ------------- -----------------
DENIS STRM_TAB3    OVERWRITE    FIRST_NAME    LAST_NAME
DENIS STRM_TAB3    OVERWRITE    FIRST_NAME    FIRST_NAME



To explore conflict resolution further, see Chapter 3, "Streams Conflict Resolution", in the Oracle online documentation.

Friday, December 12, 2008

Creating a Simple Two-Way Streams Replication Environment

Before starting this game, my environment is as follows:

  • Two 10g databases on my PC: TEST10G and TEST02DB
  • One-way Streams replication has been set up for the tables dept and strm_tab2 in the scott schema
  • Replication is from TEST10G to TEST02DB

By the end of the game, my new environment should be:

  • Two databases: TEST10G and TEST02DB
  • One-way Streams replication for table dept and strm_tab2 in scott schema, from TEST10G to TEST02DB
  • Two-way Streams replication for table strm_tab3 in denis schema, between TEST10G and TEST02DB

My experimenting steps are as follows:
(ref: Oracle document: Creating a New Streams Multiple-Source Environment )

Step 0 -- Stop current capture, propagation and apply processes
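
The stop commands themselves are not recorded here; a minimal sketch, assuming the process names of the existing one-way setup (STRMADMIN_CAPTURE and STRMADMIN_PROPAGATE at TEST10G, STRMADMIN_APPLY at TEST02DB), would be:

-- at TEST10G
exec DBMS_CAPTURE_ADM.STOP_CAPTURE(capture_name => 'STRMADMIN_CAPTURE');
exec DBMS_PROPAGATION_ADM.STOP_PROPAGATION(propagation_name => 'STRMADMIN_PROPAGATE');

-- at TEST02DB
exec DBMS_APPLY_ADM.STOP_APPLY(apply_name => 'STRMADMIN_APPLY');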


Step 1 -- Create table denis.strm_tab3 at TEST10G


sys@TEST10G> select * from denis.strm_tab3;

ID FIRST_NAME LAST_NAME
---------- -------------------- ------------
1 Denis Sun
2 Tom Kyte
3 Jack Smith


Step 2 -- Create database link between TEST10G and TEST02DB

connect STRMADMIN/STRMADMIN@test02db

CREATE DATABASE LINK TEST10G.world connect to
STRMADMIN identified by STRMADMIN using 'TEST10G.world';
select * from global_name@TEST10G.world;

Note: DB link from TEST10G to TEST02DB already exists.

Step 3 -- Specifying an Unconditional Supplemental Log Group for Primary Key Column(s) at TEST10G

ALTER TABLE denis.strm_tab3 ADD SUPPLEMENTAL LOG GROUP tab3_id_pk1 (id) ALWAYS;


Step 4 -- Performing the following steps at TEST10G


-- create queues

connect strmadmin/strmadmin

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_name => 'STREAMS_Q_SRC',
queue_table =>'STREAMS_Q_SRC',
queue_user => 'STRMADMIN');
END;
/

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_name => 'STREAMS_Q_DST',
queue_table =>'STREAMS_Q_DST',
queue_user => 'STRMADMIN');
END;
/


-- Add table rules to the capture process


BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'DENIS.STRM_TAB3',
streams_type => 'CAPTURE',
streams_name => 'STRMADMIN_CAPTURE_2',
queue_name => 'STRMADMIN.STREAMS_Q_SRC',
include_dml => true,
include_ddl => true,
source_database => 'TEST10G.world');
END;
/


-- Add rule to propagation

connect strmadmin/strmadmin
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'DENIS.STRM_TAB3',
streams_name => 'STRMADMIN_PROPAGATE_2',
source_queue_name => 'STRMADMIN.STREAMS_Q_SRC',
destination_queue_name => 'STRMADMIN.STREAMS_Q_DST@TEST02DB.world',
include_dml => true,
include_ddl => true,
source_database => 'TEST10G.world');
END;
/


-- Add apply rules for the table

conn strmadmin/strmadmin

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'DENIS.STRM_TAB3',
streams_type => 'APPLY',
streams_name => 'STRMADMIN_APPLY_2',
queue_name => 'STRMADMIN.STREAMS_Q_DST',
include_dml => true,
include_ddl => true,
source_database => 'TEST02DB.world');
END;
/

-- apply user
conn strmadmin/strmadmin

BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'STRMADMIN_APPLY_2',
apply_user => 'DENIS');
END;
/

Step 5 -- Performing the following steps at TEST02DB


-- create q
connect strmadmin/strmadmin

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_name => 'STREAMS_Q_SRC',
queue_table =>'STREAMS_Q_SRC',
queue_user => 'STRMADMIN');
END;
/

BEGIN
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_name => 'STREAMS_Q_DST',
queue_table =>'STREAMS_Q_DST',
queue_user => 'STRMADMIN');
END;
/


-- Add table rules to the capture process

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'DENIS.STRM_TAB3',
streams_type => 'CAPTURE',
streams_name => 'STRMADMIN_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_Q_SRC',
include_dml => true,
include_ddl => true,
source_database => 'TEST02DB.world');
END;
/


-- Add rule to propagation

connect strmadmin/strmadmin
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'DENIS.STRM_TAB3',
streams_name => 'STRMADMIN_PROPAGATE_2',
source_queue_name => 'STRMADMIN.STREAMS_Q_SRC',
destination_queue_name => 'STRMADMIN.STREAMS_Q_DST@TEST10G.world',
include_dml => true,
include_ddl => true,
source_database => 'TEST02DB.world');
END;
/


-- Add apply rules for the table

conn strmadmin/strmadmin

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'DENIS.STRM_TAB3',
streams_type => 'APPLY',
streams_name => 'STRMADMIN_APPLY_2',
queue_name => 'STRMADMIN.STREAMS_Q_DST',
include_dml => true,
include_ddl => true,
source_database => 'TEST10G.world');
END;
/
-- apply user
conn strmadmin/strmadmin

BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'STRMADMIN_APPLY_2',
apply_user => 'DENIS');
END;
/



Step 6 Export and import

$ exp denis/oracle file=tab3.dmp tables=strm_tab3
Export: Release 10.2.0.1.0 - Production on Fri Dec 12 20:45:29 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path ...
. . exporting table STRM_TAB3 3 rows exported
Export terminated successfully without warnings.

---------------------------------------------

$ imp denis/oracle@TEST02DB file=tab3.dmp tables=strm_tab3
Import: Release 10.2.0.1.0 - Production on Fri Dec 12 20:49:10 2008
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export file created by EXPORT:V10.02.01 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing DENIS's objects into DENIS
. importing DENIS's objects into DENIS
. . importing table "STRM_TAB3" 3 rows imported
Import terminated successfully without warnings.



Step 7 -- Setting Instantiation SCNs

At TEST10G
~~~~~~~~~~

connect STRMADMIN/STRMADMIN@TEST02DB
set serveroutput on
DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
END;
/


connect strmadmin/strmadmin@TEST10G
BEGIN
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'DENIS.STRM_TAB3',
source_database_name => 'TEST02DB.world',
instantiation_scn => &iscn);
END;
/



At TEST02DB
~~~~~~~~~

connect STRMADMIN/STRMADMIN@TEST10G
set serveroutput on
DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
END;
/


connect strmadmin/strmadmin@TEST02DB
BEGIN
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'DENIS.STRM_TAB3',
source_database_name => 'TEST10G.world',
instantiation_scn => &iscn);
END;
/



Step 8 -- Configure conflict resolution

(skip this step for future experimenting, check this doc for reference )


Step 9 -- Start apply process, propagation and capture
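
The exact statements are again not recorded here; a sketch using the component names created above (following the order in this step: apply first, then propagation, then capture) is:

-- run at the database where each component was created, for both directions
exec DBMS_APPLY_ADM.START_APPLY(apply_name => 'STRMADMIN_APPLY_2');
exec DBMS_PROPAGATION_ADM.START_PROPAGATION(propagation_name => 'STRMADMIN_PROPAGATE_2');
exec DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => 'STRMADMIN_CAPTURE_2');
-- the one-way components stopped in Step 0 are restarted the same way, using their own names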


Note:

Oracle recommends using only one capture process for each source database. I now have two capture processes at TEST10G.



Step 10 - Verifying the replication.

Working as expected
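
The verification is essentially DML issued at one site and checked at the other; a minimal sketch (illustrative values, not the exact statements I ran):

-- at TEST10G
insert into denis.strm_tab3 values (4, 'Jane', 'Doe');
commit;

-- at TEST02DB, after a short delay
select * from denis.strm_tab3 where id = 4;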

Thursday, December 11, 2008

What are the differences between v$sql and v$sqlarea?

I was never really clear about the differences between the v$sql and v$sqlarea views until I read a chapter in Christian Antognini's book "Troubleshooting Oracle Performance".

The points I have learned are:

  • The result of a parse operation is a parent cursor and a child cursor stored in the library cache.
  • Parent cursor: the text of the SQL statement
  • Child cursor: the execution plan and execution environment
  • v$sql gives information about child cursors
  • v$sqlarea gives information about parent cursors

Below I compare the columns of these two views side by side in a 10g database, which shows the difference quite clearly:





v$sql                                               v$sqlarea
Name                          Type                  Name
----------------------------- --------------------- ---------------------------
[ columns only in v$sql ]
CHILD_ADDRESS                 RAW(4)
CHILD_NUMBER                  NUMBER
SQLTYPE                       NUMBER
SERVICE                       VARCHAR2(64)
SERVICE_HASH                  NUMBER
TYPE_CHK_HEAP                 RAW(4)
-----------------------------------------------------------------------------------
[ columns only in v$sqlarea ]
                                                    VERSION_COUNT
                                                    LAST_ACTIVE_CHILD_ADDRESS
-----------------------------------------------------------------------------------

[ below is same ... ]
SQL_TEXT VARCHAR2(1000) SQL_TEXT
SQL_FULLTEXT CLOB SQL_FULLTEXT
SQL_ID VARCHAR2(13) SQL_ID
SHARABLE_MEM NUMBER SHARABLE_MEM
PERSISTENT_MEM NUMBER PERSISTENT_MEM
RUNTIME_MEM NUMBER RUNTIME_MEM
SORTS NUMBER SORTS
LOADED_VERSIONS NUMBER LOADED_VERSIONS
OPEN_VERSIONS NUMBER OPEN_VERSIONS
USERS_OPENING NUMBER USERS_OPENING
FETCHES NUMBER FETCHES
EXECUTIONS NUMBER EXECUTIONS
PX_SERVERS_EXECUTIONS NUMBER PX_SERVERS_EXECUTIONS
END_OF_FETCH_COUNT NUMBER END_OF_FETCH_COUNT
USERS_EXECUTING NUMBER USERS_EXECUTING
LOADS NUMBER LOADS
FIRST_LOAD_TIME VARCHAR2(19) FIRST_LOAD_TIME
INVALIDATIONS NUMBER INVALIDATIONS
PARSE_CALLS NUMBER PARSE_CALLS
DISK_READS NUMBER DISK_READS
DIRECT_WRITES NUMBER DIRECT_WRITES
BUFFER_GETS NUMBER BUFFER_GETS
APPLICATION_WAIT_TIME NUMBER APPLICATION_WAIT_TIME
CONCURRENCY_WAIT_TIME NUMBER CONCURRENCY_WAIT_TIME
CLUSTER_WAIT_TIME NUMBER CLUSTER_WAIT_TIME
USER_IO_WAIT_TIME NUMBER USER_IO_WAIT_TIME
PLSQL_EXEC_TIME NUMBER PLSQL_EXEC_TIME
JAVA_EXEC_TIME NUMBER JAVA_EXEC_TIME
ROWS_PROCESSED NUMBER ROWS_PROCESSED
COMMAND_TYPE NUMBER COMMAND_TYPE
OPTIMIZER_MODE VARCHAR2(10) OPTIMIZER_MODE
OPTIMIZER_COST NUMBER OPTIMIZER_COST
OPTIMIZER_ENV RAW(797) OPTIMIZER_ENV
OPTIMIZER_ENV_HASH_VALUE NUMBER OPTIMIZER_ENV_HASH_VALUE
PARSING_USER_ID NUMBER PARSING_USER_ID
PARSING_SCHEMA_ID NUMBER PARSING_SCHEMA_ID
PARSING_SCHEMA_NAME VARCHAR2(30) PARSING_SCHEMA_NAME
KEPT_VERSIONS NUMBER KEPT_VERSIONS
ADDRESS RAW(4) ADDRESS
HASH_VALUE NUMBER HASH_VALUE
OLD_HASH_VALUE NUMBER OLD_HASH_VALUE
PLAN_HASH_VALUE NUMBER PLAN_HASH_VALUE
MODULE VARCHAR2(64) MODULE
MODULE_HASH NUMBER MODULE_HASH
ACTION VARCHAR2(64) ACTION
ACTION_HASH NUMBER ACTION_HASH
SERIALIZABLE_ABORTS NUMBER SERIALIZABLE_ABORTS
OUTLINE_CATEGORY VARCHAR2(64) OUTLINE_CATEGORY
CPU_TIME NUMBER CPU_TIME
ELAPSED_TIME NUMBER ELAPSED_TIME
OUTLINE_SID NUMBER OUTLINE_SID
REMOTE VARCHAR2(1) REMOTE
OBJECT_STATUS VARCHAR2(19) OBJECT_STATUS
LITERAL_HASH_VALUE NUMBER LITERAL_HASH_VALUE
LAST_LOAD_TIME VARCHAR2(19) LAST_LOAD_TIME
IS_OBSOLETE VARCHAR2(1) IS_OBSOLETE
CHILD_LATCH NUMBER CHILD_LATCH
SQL_PROFILE VARCHAR2(64) SQL_PROFILE
PROGRAM_ID NUMBER PROGRAM_ID
PROGRAM_LINE# NUMBER PROGRAM_LINE#
EXACT_MATCHING_SIGNATURE NUMBER EXACT_MATCHING_SIGNATURE
FORCE_MATCHING_SIGNATURE NUMBER FORCE_MATCHING_SIGNATURE
LAST_ACTIVE_TIME DATE LAST_ACTIVE_TIME
BIND_DATA RAW(2000) BIND_DATA
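
One quick way to see the parent/child relationship in action is to list parent cursors that currently have more than one child cursor. This query is my own sketch, not from the book:

select a.sql_id, a.version_count, count(*) child_cursors
  from v$sqlarea a, v$sql s
 where s.sql_id = a.sql_id
 group by a.sql_id, a.version_count
having count(*) > 1;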


Sunday, December 07, 2008

Cursor Variables - Implementing Variant Records

The power of cursor variables comes from their ability to point to different cursors.
This test example comes from this doc



denis@TEST10G>
denis@TEST10G> create table emp_tab
2 as
3 select
4 rownum empno,
5 object_name ename,
6 mod(rownum, 10) + 1 deptno ,
7 rownum * 100 sal
8 from all_objects
9 where rownum < 100;

Table created.

denis@TEST10G>
denis@TEST10G>
denis@TEST10G> drop table dept_tab purge;

Table dropped.

denis@TEST10G> create table dept_tab
2 as
3 select
4 rownum deptno,
5 'd_' || rownum deptname
6 from all_objects
7 where rownum <=10;

Table created.

denis@TEST10G>
denis@TEST10G>
denis@TEST10G> -- In the following package example, a discriminant is used to
denis@TEST10G> -- open a cursor variable to point to one of two different cursors:
denis@TEST10G>
denis@TEST10G> create or replace package emp_dept_data as
2 type cv_type is ref cursor;
3 procedure open_cv(cv in out cv_type,
4 discrim in positive);
5 end emp_dept_data;
6 /

Package created.

denis@TEST10G>
denis@TEST10G> create or replace package body emp_dept_data as
2 procedure open_cv (cv in out cv_type,
3 discrim in positive) is
4 begin
5 if discrim = 1 then
6 open cv for select * from emp_tab where sal > 2000;
7 elsif discrim = 2 then
8 open cv for select * from dept_tab;
9 end if;
10 end open_cv;
11 end emp_dept_data;
12 /

Package body created.

denis@TEST10G>
denis@TEST10G> -- You can call the open_cv procedure to open the cursor variable
denis@TEST10G> -- and point it to either a query on the emp_tab table or
denis@TEST10G> -- the dept_tab table.
denis@TEST10G>
denis@TEST10G> declare
2 emp_rec emp_tab%rowtype;
3 dept_rec dept_tab%rowtype;
4 cv emp_dept_data.cv_type;
5
6 begin
7 -- emp_dept_data.open_cv(cv,2); -- open cv for dept_tab fetch
8 emp_dept_data.open_cv(cv,1); -- open cv for emp_tab fetch
9 fetch cv into dept_rec; -- but fetch into dept_tab record
10 -- which raises rowtype_mismatch
11 dbms_output.put(dept_rec.deptno);
12 dbms_output.put_line(' ' || dept_rec.deptname);
13 exception
14 when rowtype_mismatch then
15 begin
16 dbms_output.put_line
17 ('Row type mismatch, fetching emp_tab data ...');
18 fetch cv into emp_rec;
19 dbms_output.put(emp_rec.deptno);
20 dbms_output.put_line(' ' || emp_rec.ename);
21 end;
22 end;
23 /
Row type mismatch, fetching emp_tab data ...
2 C_TS#

PL/SQL procedure successfully completed.


Adding a new table to an existing Oracle Streams Replication environment - second attempt

I briefly read the Oracle documentation on this topic and successfully added a new table to the replication by taking the following steps:

1. Create the table at source and destination database.

create table scott.strm_tab2 (
id number,
name varchar2(20),
constraint id_pk primary key(id)
);


2. Turn on supplemental logging for the table at source

ALTER TABLE scott.strm_tab2 ADD SUPPLEMENTAL LOG GROUP id_pk1 (id) ALWAYS;

3. At source database, stop the capture process and propagation process

++ CAPTURE PROCESSES IN DATABASE ++
CAPTURE_NAME : STRMADMIN_CAPTURE
QUEUE_OWNER : STRMADMIN
QUEUE : STREAMS_QUEUE
CAPTURE_TYPE : LOCAL
STATUS : DISABLED
RULE_SET_OWNER : SYS
RSN : RULESET$_17
NEGATIVE_RULE_SET_OWNER :
RSN2 :
CHECKPOINT_RETENTION_TIME : 60
VERSION : 10.2.0.1.0
LOGFILE_ASSIGNMENT : IMPLICIT
ERROR_NUMBER :
STATUS_CHANGE_TIME : 07-dec-2008 14:18:13


++ PROPAGATIONS IN DATABASE ++
PROPAGATION_NAME : STRMADMIN_PROPAGATE
SOURCE_QUEUE_OWNER : STRMADMIN
SOURCE_QUEUE_NAME : STREAMS_QUEUE
SRC GLOBAL NAME : TEST10G.WORLD
DESTINATION_QUEUE_OWNER : STRMADMIN
DESTINATION_QUEUE_NAME : STREAMS_QUEUE
DESTINATION_DBLINK : TEST02DB.WORLD
QUEUE_TO_QUEUE : FALSE
STATUS : ABORTED
ERROR_DATE :
ERROR_MESSAGE :
-----------------


4. At the destination database, stop the apply process
++ APPLY INFORMATION ++
APPLY_NAME : STRMADMIN_APPLY
QUEUE_OWNER : STRMADMIN
QUEUE_NAME : STREAMS_QUEUE
APPLY_CAPTURED : YES
STATUS : DISABLED
APPLY_USER : SCOTT
APPLY_TAG : 00
RULE_SET_OWNER : STRMADMIN
RULE_SET_NAME : RULESET$_35
NEGATIVE_RULE_SET_OWNER :
NEGATIVE_RULE_SET_NAME :
APPLY_DATABASE_LINK :
-----------------
++ APPLY PROCESS INFORMATION ++
APPLY_NAME : STRMADMIN_APPLY
MAX_APPLIED_MESSAGE_NUMBER : 10138270879363
STATUS : DISABLED
STATUS_CHANGE_TIME : 07-dec-2008 14:20:47
ERROR_NUMBER :
ERROR_MESSAGE :
-----------------


5. At destination database, add rules for an apply process


BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.STRM_TAB2',
streams_type => 'APPLY',
streams_name => 'STRMADMIN_APPLY',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'TEST10G.world');
END;
/

6. At source database, add rules for propagation process

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'SCOTT.STRM_TAB2',
streams_name => 'STRMADMIN_PROPAGATE',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@TEST02DB.world',
include_dml => true,
include_ddl => true,
source_database => 'TEST10G.world');
END;
/

7. At source database, add rules for capture process

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'SCOTT.STRM_TAB2',
streams_type => 'CAPTURE',
streams_name => 'STRMADMIN_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'TEST10G.world');
END;
/

8. At destination database, set the instantiation SCN for the table

8.1 Execute the following to get SCN at source

connect STRMADMIN/STRMADMIN@TEST10G
set serveroutput on
DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
END;
/

8.2 Execute the following with the SCN obtained in 8.1


connect strmadmin/strmadmin@TEST02DB
BEGIN
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'SCOTT.STRM_TAB2',
source_database_name => 'TEST10G.world',
instantiation_scn => &iscn);
END;
/

9. At source database start capture process

10. At source database start propagation process


11. At the destination database, start the apply process

12. Test the replication --- both DML and DDL are OK

It is still not completely clear to me what was wrong with the steps I took yesterday. But it looks like we must stop the capture, propagation and apply processes before adding a rule to the rule sets they are using, and the order matters. From the doc: "If you perform administrative steps in the wrong order, you can lose LCRs".

Saturday, December 06, 2008

Adding a new table to an existing Oracle Streams Replication environment - A failed attempt

Today I assigned myself a task, which is to add a new table to an existing streams replication. The source database is TEST10G and the destination database is TEST02DB. They both reside in my notebook pc.

The following are all the test steps for today; the task did not succeed.

1. Apply create table DDL on both databases:

create table denis.strm_tab1 (
id number,
name varchar2(20),
constraint id_pk primary key(id)
);


2. Check current capture, propagation processes on the source database


++ CAPTURE PROCESSES IN DATABASE ++
CAPTURE_NAME : STRMADMIN_CAPTURE
QUEUE_OWNER : STRMADMIN
QUEUE : STREAMS_QUEUE
CAPTURE_TYPE : LOCAL
STATUS : ENABLED
RULE_SET_OWNER : SYS
RSN : RULESET$_17
NEGATIVE_RULE_SET_OWNER :
RSN2 :
CHECKPOINT_RETENTION_TIME : 60
VERSION : 10.2.0.1.0
LOGFILE_ASSIGNMENT : IMPLICIT
ERROR_NUMBER :
STATUS_CHANGE_TIME : 30-nov-2008 21:07:57
ERROR_MESSAGE :
-----------------

++ CAPTURE PROCESS SOURCE INFORMATION ++
CAPTURE_NAME : STRMADMIN_CAPTURE
CAPTURE_TYPE : LOCAL
SOURCE_DATABASE : TEST10G.WORLD
FIRST_SCN : 10138270589233
START_SCN : 10138270589233
CAPTURED_SCN : 10138270859114
APPLIED_SCN : 10138270859114
LAST_ENQUEUED_SCN : 10138270865525
REQUIRED_CHECKPOINT_SCN : 10138270830473
MAX_CHECKPOINT_SCN : 10138270859114
SOURCE_DBID : 917147433
SOURCE_RESETLOGS_SCN : 534907
SOURCE_RESETLOGS_TIME : 665622764
LOGMINER_ID : 1
-----------------

++ PROPAGATIONS IN DATABASE ++
PROPAGATION_NAME : STRMADMIN_PROPAGATE
SOURCE_QUEUE_OWNER : STRMADMIN
SOURCE_QUEUE_NAME : STREAMS_QUEUE
SRC GLOBAL NAME : TEST10G.WORLD
DESTINATION_QUEUE_OWNER : STRMADMIN
DESTINATION_QUEUE_NAME : STREAMS_QUEUE
DESTINATION_DBLINK : TEST02DB.WORLD
QUEUE_TO_QUEUE : FALSE
STATUS : ENABLED
ERROR_DATE :
ERROR_MESSAGE :



3. Check the current apply process on the destination database


++ APPLY INFORMATION ++
APPLY_NAME : STRMADMIN_APPLY
QUEUE_OWNER : STRMADMIN
QUEUE_NAME : STREAMS_QUEUE
APPLY_CAPTURED : YES
STATUS : ENABLED
APPLY_USER : SCOTT
APPLY_TAG : 00
RULE_SET_OWNER : STRMADMIN
RULE_SET_NAME : RULESET$_35
NEGATIVE_RULE_SET_OWNER :
NEGATIVE_RULE_SET_NAME :
APPLY_DATABASE_LINK :
-----------------

++ APPLY PROCESS INFORMATION ++
APPLY_NAME : STRMADMIN_APPLY
MAX_APPLIED_MESSAGE_NUMBER :
STATUS : ENABLED
STATUS_CHANGE_TIME : 03-dec-2008 15:26:13
ERROR_NUMBER :
ERROR_MESSAGE :
-----------------


4. Execute the following steps at source database
4.1 Turn on supplemental logging for the STRM_TAB1 table
ALTER TABLE denis.strm_tab1 ADD SUPPLEMENTAL LOG GROUP id_pk1 (id) ALWAYS;

4.2 Add capture rules the table STRM_TAB1 at the source database:

conn / as sysdba
BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'DENIS.STRM_TAB1',
streams_type => 'CAPTURE',
streams_name => 'STRMADMIN_CAPTURE',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'TEST10G.world');
END;
/

4.3 Add propagation rules for the table STRM_TAB1 at the source database.

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
table_name => 'DENIS.STRM_TAB1',
streams_name => 'STRMADMIN_PROPAGATE',
source_queue_name => 'STRMADMIN.STREAMS_QUEUE',
destination_queue_name => 'STRMADMIN.STREAMS_QUEUE@TEST02DB.world',
include_dml => true,
include_ddl => true,
source_database => 'TEST10G.world');
END;
/


5. Execute the following steps at the destination database
5.1 Add apply rules for the table at the destination database

BEGIN
DBMS_STREAMS_ADM.ADD_TABLE_RULES(
table_name => 'DENIS.STRM_TAB1',
streams_type => 'APPLY',
streams_name => 'STRMADMIN_APPLY',
queue_name => 'STRMADMIN.STREAMS_QUEUE',
include_dml => true,
include_ddl => true,
source_database => 'TEST10G.world');
END;
/


5.2 Grant privs to scott

Note: The apply user in the existing replication is scott. It may be better to have separate capture, propagation and apply processes for different schemas; I will explore this later.

denis@TEST02DB> grant all on strm_tab1 to scott;

Grant succeeded.


6. Instantiating

6.1 Execute the following to get SCN


connect STRMADMIN/STRMADMIN@TEST10G
set serveroutput on
DECLARE
iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || iscn);
END;
/

6.2 Execute the following with the SCN obtained in 6.1

connect strmadmin/strmadmin@TEST02DB
BEGIN
DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
source_object_name => 'DENIS.STRM_TAB1',
source_database_name => 'TEST10G.world',
instantiation_scn => &iscn);
END;


7. Test to see if the replication works.
Insert a row into denis.strm_tab1 at the source to see what happens at the destination.
Results: At the destination the row did not get inserted.
Verified that the replication for scott.dept still works fine.

8. Troubleshoot:

Executed the health check script at the source and destination and reviewed the reports. Fixed the instantiation error; however, it is still not working.
Tried stopping and starting the capture, propagation and apply processes; that did not help either.


9. Next step: read the documentation and try to fix the problem tomorrow if possible.

Oracle Doc: Adding to a Streams Replication Environment

Performance Tuning Articles on Internet

This post is intended to collect various Oracle performance tuning articles readily available on the internet.


Burleson

Adams, S.

Friday, December 05, 2008

Demonstrate the Usage of Cursor Variable in PL/SQL

Reference: Cursor Variables



denis@TEST10G> @cursor_variables.sql
denis@TEST10G> rem Fetching Data
denis@TEST10G> rem
denis@TEST10G> rem Purpose: demonstrate the usage of cursor variable in PL/SQL
denis@TEST10G> rem
denis@TEST10G> rem A cursor variable is a pointer to a cursor. Because cursor
denis@TEST10G> rem variables are pointers, they can be passed and returned as
denis@TEST10G> rem parameters to procedures and functions. A cursor variable
denis@TEST10G> rem can also refer to different cursors in its life time
denis@TEST10G>
denis@TEST10G>
denis@TEST10G> drop table emp_tab purge;

Table dropped.

denis@TEST10G>
denis@TEST10G> create table emp_tab
2 as
3 select
4 rownum empno,
5 object_name ename,
6 mod(rownum, 10) + 1 deptno ,
7 rownum * 100 sal
8 from all_objects
9 where rownum < 100;

Table created.

denis@TEST10G>
denis@TEST10G>
denis@TEST10G> drop table dept_tab purge;

Table dropped.

denis@TEST10G> create table dept_tab
2 as
3 select
4 rownum deptno,
5 'd_' || rownum deptname
6 from all_objects
7 where rownum <=10;

Table created.

denis@TEST10G>
denis@TEST10G>
denis@TEST10G> -- the following package defines a PL/SQL cursor variable type Emp_Val_cv_type,
denis@TEST10G> -- and two procedures
denis@TEST10G> -- The first procedure, Open_emp_cv, opens the cursor variable using a bind
denis@TEST10G> -- variable in the WHERE clause. The second procedure, Fetch_emp_data, fetches
denis@TEST10G> -- rows from the Emp_tab table using the cursor variable
denis@TEST10G>
denis@TEST10G>
denis@TEST10G> create or replace package emp_data as
2 type emp_val_cv_type is ref cursor return emp_tab%rowtype;
3 procedure open_emp_cv (emp_cv in out emp_val_cv_type,
4 dept_number in integer);
5 procedure fetch_emp_data (emp_cv in emp_val_cv_type,
6 emp_row out emp_tab%rowtype);
7 end emp_data;
8 /

Package created.

denis@TEST10G>
denis@TEST10G> create or replace package body emp_data as
2 procedure open_emp_cv (emp_cv in out emp_val_cv_type,
3 dept_number in integer) is
4 begin
5 open emp_cv for select * from emp_tab where deptno = dept_number;
6 end open_emp_cv;
7 procedure fetch_emp_data (emp_cv in emp_val_cv_type,
8 emp_row out emp_tab%rowtype) is
9 begin
10 fetch emp_cv into emp_row;
11 end fetch_emp_data;
12 end emp_data;
13 /

Package body created.

denis@TEST10G>
denis@TEST10G>
denis@TEST10G> -- The following code shows how to call emp_data package
denis@TEST10G> -- procedures from a PL/SQL bock:
denis@TEST10G>
denis@TEST10G>
denis@TEST10G> declare
2 -- declare a cursor variable
3 emp_curs emp_data.emp_val_cv_type;
4 dept_number dept_tab.deptno%type;
5 emp_row emp_tab%rowtype;
6 begin
7 dept_number := 6;
8 -- open the cursor using a variable
9 emp_data.open_emp_cv(emp_curs, dept_number);
10 -- fetch the data and display it
11 loop
12 emp_data.fetch_emp_data(emp_curs, emp_row);
13 exit when emp_curs%notfound;
14 dbms_output.put_line(rpad(emp_row.ename, 20) || ' ' || lpad(emp_row.sal, 20));
15 end loop;
16 end;
17 /
C_COBJ# 500
I_CON1 1500
I_UNDO2 2500
C_USER# 3500
I_COL3 4500
I_PROXY_ROLE_DATA$_2 5500
I_SUPEROBJ1 6500
IDL_SB4$ 7500
TRIGGERJAVAM$ 8500
PROPS$ 9500

PL/SQL procedure successfully completed.

Wednesday, December 03, 2008

Set up a Simple Oracle Streams Replication

Based on Metalink Note 224255.1, "Steps To Setup Replication Using Oracle Streams", I tried to set up a Streams replication for the scott.dept table from a source database TEST10G to a destination database TEST02DB. Both databases are 10g and on my notebook computer. However, my first attempt was not successful: when I inserted a row into the source table, nothing happened in the destination table.

Oracle Metalink Note 273674.1, "Streams Configuration Report and Health Check Script", provides a script that retrieves all the information related to the Streams configuration and performs a health check. I used this script to generate reports for TEST10G and TEST02DB. As expected, the reports show that the setup of the apply process in the destination database is not correct: the source database should be referenced by its global name, TEST10G.world, but in the report it is specified as just TEST10G.

First, I issued the following statement in the destination database as strmadmin:
BEGIN
DBMS_STREAMS_ADM.remove_streams_configuration;
END;
/

Then I redid steps 2.4, 2.5, 2.6 and 4.2 as described in the Note: 273674.1. After that I inserted a row into the source scott.dept table and saw the same row inserted into the destination table as well. Replication is now working ...

Sunday, November 16, 2008

Test case: flashback table



scott@TEST10G> drop table t;

Table dropped.

scott@TEST10G> create table t(a int);

Table created.

scott@TEST10G> insert into t values(1);

1 row created.

scott@TEST10G> insert into t values(2);

1 row created.

scott@TEST10G> commit;

Commit complete.

scott@TEST10G> select sysdate from dual;

SYSDATE
-------------------
2008-11-16 13:23:44


scott@TEST10G> insert into t values(3);

1 row created.

scott@TEST10G> insert into t values(4);

1 row created.

scott@TEST10G> commit;

Commit complete.

scott@TEST10G> select * from t;

A
----------
1
2
3
4


scott@TEST10G> select flashback_on from v$database;

FLASHBACK_ON
------------------
NO

scott@TEST10G> alter table t enable row movement;

Table altered.

scott@TEST10G> flashback table t to timestamp to_date('2008-11-16 13:23:44');

Flashback complete.

scott@TEST10G> select * from t;

A
----------
1
2



Must enable row movement:

scott@TEST10G> alter table t disable row movement;

Table altered.

scott@TEST10G>
scott@TEST10G> select table_name, row_movement from user_tables where table_name='T';

TABLE_NAME ROW_MOVE
------------------------------ --------
T DISABLED


scott@TEST10G> select * from t;

A
----------
1
2

scott@TEST10G> flashback table t to timestamp to_date('2008-11-16 13:23:44');
flashback table t to timestamp to_date('2008-11-16 13:23:44')
*
ERROR at line 1:
ORA-08189: cannot flashback the table because row movement is not enabled


scott@TEST10G>



Note: to flashback to 15 min ago, we can use the following syntax:

SQL> flashback table hr.employees, hr.departments
to timestamp systimestamp - interval '15' minute;

Saturday, November 15, 2008

Trace an Oracle Scheduler Job

Got a request to trace a scheduler job. I did the following:

1. Find the pl/sql block this job is doing:

select JOB_ACTION from dba_scheduler_jobs where job_name='XXXX_MOBLAL01ANK_CORR_JOB';

JOB_ACTION
----------------------------------------------------------------------------------
BEGIN XXXX_CORRELATION('MOBLAL01ANK', 10000, 60); END;


2. Disable this job after it is stopped (log in as the job owner):

begin
dbms_scheduler.disable(user||'.XXXX_MOBLAL01ANK_CORR_JOB');
end;
/


3. Trace a SQL session that runs the same PL/SQL block:

alter session set tracefile_identifier ='XXXX';
alter session set timed_statistics=true;
alter session set events '10046 trace name context forever, level 12';

BEGIN
XXXX_CORRELATION('MOBLAL01ANK', 10000, 60);
END;
/
exit;
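
To locate the resulting trace file, something like the following helps (not part of the original change steps; the file name will contain the tracefile_identifier value 'XXXX' set above):

select value from v$parameter where name = 'user_dump_dest';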


4. Enable this job after trace is done

begin
dbms_scheduler.enable(user||'.XXXX_MOBLAL01ANK_CORR_JOB');
end;
/

5. Issue the following query to check the job state:
select state from dba_scheduler_jobs where job_name='XXXX_MOBLAL01ANK_CORR_JOB';

STATE
---------------
SCHEDULED

Troubleshooting - ORA-02049: timeout: distributed transaction waiting for lock

The following steps/observations are involved in solving this issue:

1. Application team asked DBA for help to solve the following error:

We are connecting to NCAMASTERCON/xxxxx@PRDDB1



SQL> update au1011
2 set rerate_request_cd = '5',dl_service_code = 'FXLSA', sys_update_date = sysdate
3 where ban in (select ban from TEMP_LSAFF_US1011)
4 and record_type = '1';
commit;
update au1011
*
ERROR at line 1:
ORA-02049: timeout: distributed transaction waiting for lock
ORA-02063: preceding line from NCAUSAGECON.PRDDB2





2. I found out that au1011 on PRDDB1 is a synonym for an object on a remote database PRDDB2

The synonym definition is as follows:
CREATE OR REPLACE SYNONYM "NCAMASTERCON"."AU1011" FOR "NCAUSAGECON"."AU1011"@"NCAUSAGECON.PRDDB2";


3. Then I found out that on PRDDB2, "AU1011" is also a synonym, defined as:
CREATE OR REPLACE SYNONYM "NCAUSAGECON"."AU1011" FOR "NCAUSAGEOWN"."ACCUMULATED_USAGE_10_B";

4. Checked if there are any locks on the table: ACCUMULATED_USAGE_10_B
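
(The script l2/locked_object_by_name.sql is a site-specific helper and is not shown in this post; a generic sketch producing similar information would be:)

select s.sid, s.serial#, o.owner, o.object_name, lo.locked_mode
  from v$locked_object lo, dba_objects o, v$session s
 where o.object_id = lo.object_id
   and s.sid = lo.session_id
   and o.object_name = upper('&object_name');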



system@PRDDB2> @l2/locked_object_by_name.sql
Enter value for object_name: ACCUMULATED_USAGE_10_B
Lock Lock Time Object
SID Type Held Owner Object Name Held (min) Status
------ ---- ------ ---------------- ------------------------------ ---------- ------------
2278 DML Row-X NCAUSAGEOWN ACCUMULATED_USAGE_10_B 1622 VALID




5. The SQL running by session 2278 is:



system@PRDDB2> @sql_cur_sid
Enter value for sid: 2278

SQL_TEXT SQL_ID SID USER_NAME

---------------------------------------- ------------- ------ ------------
UPDATE /*+ PARALLEL ("A1",4) */ "NCAUSAG 93vbp0jv2c7w4 2278 NCAUSAGECON
ECON"."AU1011" "A1" SET "RERATE_REQUEST_
CD" = '5',"DL_SERVICE_CODE" = 'FXLSA',"S
YS_UPDATE_DATE" = SYSDATE@! WHERE ("A1".
"BAN","A1"."SUBSCRIBER_NO")=ANY (SELECT
DISTINCT "A2"."BAN","A1"."SUBSCRIBER_NO"
FROM "TEMP_LSAFF_US1011"@! "A2") AND "A
1"."RECORD_TYPE"='1'



6. Further check revealed that session 2278 on PRDDB2 was started 97784 seconds ago
and the last call it made was 97773 seconds ago



7. The application DBA observed the following:
The session (sid=7265) shows as KILLED on PRDDB1, but the corresponding remote session on PRDDB2(sid=2278) is still active.

8. After confirmation from the application team, we killed session 2278 on PRDDB2.

Friday, November 14, 2008

Disable a scheduler job

To disable a scheduler job that is currently running, it may be necessary to stop it first.

Below is an example from a production change request I handled today
( changed the real database name)


BV_DATA@mydbname> select owner, job_name, state from dba_scheduler_jobs where job_name like 'ANYNAME_%';

OWNER JOB_NAME STATE
-------------------- ------------------------------ ---------------
BV_DATA ANYNAME_TAMQFL02ANK_CORR_JOB DISABLED
BV_DATA ANYNAME_MOBLAL02ANK_CORR_JOB DISABLED
BV_DATA ANYNAME_MOBLAL01ANK_CORR_JOB RUNNING
BV_DATA ANYNAME_LPTPPA01ANK_CORR_JOB DISABLED

4 rows selected.

BV_DATA@mydbname> exec dbms_scheduler.stop_job ('ANYNAME_MOBLAL01ANK_CORR_JOB');

PL/SQL procedure successfully completed.

BV_DATA@mydbname> exec dbms_scheduler.disable(user||'.ANYNAME_MOBLAL01ANK_CORR_JOB',true);

PL/SQL procedure successfully completed.

BV_DATA@mydbname> select owner, job_name, state from dba_scheduler_jobs where job_name like 'ANYNAME_%';

OWNER JOB_NAME STATE
-------------------- ------------------------------ ---------------
BV_DATA ANYNAME_TAMQFL02ANK_CORR_JOB DISABLED
BV_DATA ANYNAME_MOBLAL02ANK_CORR_JOB DISABLED
BV_DATA ANYNAME_MOBLAL01ANK_CORR_JOB DISABLED
BV_DATA ANYNAME_LPTPPA01ANK_CORR_JOB DISABLED



AWK script to analyze statspack report

I have written a small awk script to help analyze statspack reports. The content of the script is as follows:



$ cat printload.awk
{
if ( $0 ~ /Load Profile/ ) {
print "===>" FILENAME
print $0
getline
print $0
getline
print $0
getline
print $0
getline
print $0
}
}


Here is an example of its usage:

$ awk -f printload.awk *_1114*.lst

===>sp_P1ROCCS_1114_00:00_01:00.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: -530,710.25 -39,825,381.33
Logical reads: 5,346.96 401,244.58
===>sp_P1ROCCS_1114_01:00_02:00.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 661,687.44 62,668,765.37
Logical reads: 5,337.23 505,492.05
===>sp_P1ROCCS_1114_04:00_05:00.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: -261,353.97 -16,558.50
Logical reads: 34,913.84 2,212.02
===>sp_P1ROCCS_1114_05:00_06:00.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 679,480.45 635,181.85
Logical reads: 9,648.79 9,019.74
===>sp_P1ROCCS_1114_06:00_07:00.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: -531,582.88 -11,387,897.48
Logical reads: 7,055.82 151,154.21
===>sp_P1ROCCS_1114_07:00_08:00.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 632,586.85 26,531,869.35
Logical reads: 6,800.79 285,237.83
===>sp_P1ROCCS_1114_08:00_09:00.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: -535,177.88 -18,857,150.16
Logical reads: 6,451.35 227,315.36
===>sp_P1ROCCS_1114_09:00_10:00.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 657,449.22 15,364,673.58
Logical reads: 5,490.71 128,318.66
===>sp_P1ROCCS_1114_10:00_11:00.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 542,321.35 648,839.10
Logical reads: 14,599.49 17,466.99
===>sp_P1ROCCS_1114_11:00_12:03.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 268,331.93 319,353.99
Logical reads: 19,272.63 22,937.23
===>sp_P1ROCCS_1114_12:03_13:07.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 316,435.43 342,552.49
Logical reads: 22,151.92 23,980.23
===>sp_P1ROCCS_1114_14:00_15:00.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 149,285.73 391,643.98
Logical reads: 6,620.19 17,367.75
===>sp_P1ROCCS_1114_15:00_16:00.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 158,509.15 256,437.01
Logical reads: 7,857.87 12,712.51
===>sp_P1ROCCS_1114_16:00_16:58.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 197,935.84 243,677.31
Logical reads: 10,099.65 12,433.60
===>sp_P1ROCCS_1114_16:58_17:00.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 799,700.92 948,216.80
Logical reads: 37,202.29 44,111.29
===>sp_P1ROCCS_1114_17:00_18:00.lst
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 541,252.15 441,183.58
Logical reads: 26,538.76 21,632.18



 


I know that under Cygwin I can use the grep -A option to achieve the same goal. I can do something as follows:
grep -A5 "Load Profile" *_1114*.lst

Note:
-A NUM, --after-context=NUM
Print NUM lines of trailing context after matching lines. Places a line
containing -- between contiguous groups of matches.

However, the version of grep on our server does not have this option.

startup open restrict

I was not familiar with this command before. I did a test as follows:


sys@TEST10G> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
sys@TEST10G> startup open restrict
ORACLE instance started.

Total System Global Area 612368384 bytes
Fixed Size 1250428 bytes
Variable Size 163580804 bytes
Database Buffers 440401920 bytes
Redo Buffers 7135232 bytes
Database mounted.
Database opened.

sys@TEST10G> select logins from v$instance;

LOGINS
----------
RESTRICTED

sys@TEST10G> alter system disable restricted session;

System altered.

sys@TEST10G> select logins from v$instance;

LOGINS
----------
ALLOWED
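
A related point, not part of the test above: an already-open instance can also be put into restricted mode without a restart (existing sessions are not affected):

alter system enable restricted session;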

Monday, November 10, 2008

Copy optimizer table statistics between different schema and databases

This test case simulates the situation in which we need to copy a table's statistics from one schema to another schema in a different database.

Summary of Steps:
1. Set up the initial tables and collect stats
2. Export table stats using dbms_stats package in the source database and schema
3. Exp the stats table from source database
4. Imp the stats table to the target database and target schema
5. Import table stats using dbms_stats package in target database

Note: Pay attention to the necessity of updating column c5 in the stats table to replace the source schema name with the target schema name.


1. Set up the initial tables
Note: Table t is in hr; we want to copy its stats to table t in scott

scott@TEST10G> @1_setup
scott@TEST10G> connect hr/hr;
Connected.
hr@TEST10G>
hr@TEST10G> drop table t;

Table dropped.

hr@TEST10G>
hr@TEST10G> create table t
  2  as
  3  select * from all_objects;

Table created.

hr@TEST10G> exec dbms_stats.gather_table_stats(user, 'T');

PL/SQL procedure successfully completed.

hr@TEST10G>
hr@TEST10G> grant select on t to scott;

Grant succeeded.

hr@TEST10G>
hr@TEST10G> connect scott/tiger;
Connected.
scott@TEST10G> drop table t;

Table dropped.

scott@TEST10G> create table t
  2  as
  3  select * from hr.t;

Table created.


2. Export table stats


hr@TEST10G> conn hr/hr
Connected.
hr@TEST10G>
hr@TEST10G> drop table t_stats;

Table dropped.

hr@TEST10G> exec dbms_stats.create_stat_table(user, 'T_STATS');

PL/SQL procedure successfully completed.

hr@TEST10G> exec dbms_stats.export_table_stats(user, 'T', stattab => 'T_STATS');

PL/SQL procedure successfully completed.


3. Exp the stats table: T_STATS

exp hr/hr tables=t_stats file=t_stats.dmp log=3_exp_t_stats.log


$ 3_exp_t_stats.sh

Export: Release 10.2.0.1.0 - Production on Mon Nov 10 18:51:38 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
. . exporting table                        T_STATS         14 rows exported
Export terminated successfully without warnings.



4. Imp the stats table: T_STATS
imp system/oracle fromuser=hr touser=scott tables=t_stats file=t_stats.dmp log=4_imp_t_stats.log
Import: Release 10.2.0.1.0 - Production on Mon Nov 10 18:52:22 2008

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path

Warning: the objects were exported by HR, not by you

import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing HR's objects into SCOTT
. . importing table "T_STATS" 14 rows imported
Import terminated successfully without warnings.




5. Import table stats
Note: updating c5 in the stats table is necessary!


system@TEST10G> @5_imp_tab_stats.sql
system@TEST10G> connect scott/tiger
Connected.
scott@TEST10G>
scott@TEST10G> -- must update column c5
scott@TEST10G>
scott@TEST10G> Prompt c5 stores schema name
c5 stores schema name
scott@TEST10G> select c5 from t_stats;

C5
------------------------------
HR
HR
HR
HR
HR
HR
HR
HR
HR
HR
HR
HR
HR
HR

14 rows selected.

scott@TEST10G>
scott@TEST10G>
scott@TEST10G> --  see what will happen if we don't update c5
scott@TEST10G> exec dbms_stats.import_table_stats(user, 'T', stattab=> 'T_STATS');

PL/SQL procedure successfully completed.

scott@TEST10G>
scott@TEST10G>
scott@TEST10G> select owner,table_name, num_rows, last_analyzed
  2    from dba_tables where table_name='T' and owner in ('HR', 'SCOTT');

OWNER             TABLE_NAME      NUM_ROWS LAST_ANALYZED
----------------- ------------- ---------- -------------------
HR                T                  40733 2008-11-10 18:59:31
SCOTT             T

scott@TEST10G>
scott@TEST10G>
scott@TEST10G> -- update c5
scott@TEST10G>
scott@TEST10G> update t_stats set c5='SCOTT';

14 rows updated.

scott@TEST10G>
scott@TEST10G> exec dbms_stats.import_table_stats(user, 'T', stattab=> 'T_STATS');

PL/SQL procedure successfully completed.

scott@TEST10G>
scott@TEST10G> select owner,table_name, num_rows, last_analyzed
  2    from dba_tables where table_name='T' and owner in ('HR', 'SCOTT');

OWNER          TABLE_NAME       NUM_ROWS LAST_ANALYZED
-------------- -------------- ---------- -------------------
HR             T                   40733 2008-11-10 18:59:31
SCOTT          T                   40733 2008-11-10 18:59:31
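
To summarize, the whole procedure boils down to the sketch below (schema names HR and SCOTT as in the test above; adjust owners, table names and dump file names for a real case):

-- 1. In the source schema (HR): create a stat table and export the table stats into it
exec dbms_stats.create_stat_table(user, 'T_STATS');
exec dbms_stats.export_table_stats(user, 'T', stattab => 'T_STATS');

-- 2. Move T_STATS to the target schema, e.g. with exp/imp from the shell:
--      exp hr/hr tables=t_stats file=t_stats.dmp
--      imp system/oracle fromuser=hr touser=scott tables=t_stats file=t_stats.dmp

-- 3. In the target schema (SCOTT): fix the owner recorded in column C5,
--    then import the stats into the data dictionary
update t_stats set c5 = 'SCOTT';
commit;
exec dbms_stats.import_table_stats(user, 'T', stattab => 'T_STATS');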

Monday, November 03, 2008

Some Basic RAC Concepts

I found that I had previously developed some reading notes about RAC, and I am reviewing them today because I have a technical phone screen tonight that is said to be about an Oracle DBA role with RAC experience. Hopefully reviewing those notes will refresh my memory.

Below are part of the notes about some basic RAC concepts:

13. RAC concepts

13.1 What is RAC ?
The RAC architecture consists of:
(1) Clustered hardware:
- Multiple instances accessing the same database
- One instance per node
- Interconnected nodes act as a single server

(2) Real Application Clusters software:
- Manages coordinated data access and data changes among instances
- Each instance sees a consistent image of the database
- Passes coordination information and data images among the interconnected nodes.

13.2 Why use RAC
- High availability
- Scalability

13.3 RAC specific processes
At the RAC instance level:
* LMON – GES Monitor
* LMD0 – GES Daemon
* LMSx – GCS processes: manage resources and provide inter-instance resource control
* LCK0: lock process
* DIAG: diagnosability process
At the cluster level:
* CRSD and RACGIMON: engines for high-availability operations
* OCSSD: provides access to node membership and group services
* EVMD: scans the callout directory and invokes callouts in reaction to detected events
* OPROCD: process monitor for the cluster (not used on Windows and Linux)
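
A quick way to see some of these background processes on a running instance is to query V$BGPROCESS. Below is a minimal sketch (not from the original notes); process names vary slightly by version:

-- List RAC-related background processes on the current instance
select name, description
  from v$bgprocess
 where paddr <> hextoraw('00')                       -- only started processes
   and (name like 'LM%' or name in ('LCK0', 'DIAG'));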

13.4 Global Resource Coordination

- Global Resource Directory (GRD):
* Each instance maintains a part of GRD
* Each particular resource has a master (the instance that manages all the information about this resource)
- Global Cache Service (GCS)
* buffer cache
* cache coherency – Cache Fusion algorithm
- Global Enqueue Service (GES)
* dictionary cache locks
* library cache locks
* deadlock

13.5 Object Affinity and Dynamic Remastering

- GCS enables the database to automatically adapt and migrate resources in the GRD
- GCS keeps track of the number of GCS requests on a per-instance and per-object basis.
- Masters a buffer cache resource on the instance where it is most frequently accessed

13.6 RAC database files
- Must be on local storage: Oracle Clusterware software (ORA_CRS_HOME)
- Can be on shared storage, but is better kept on local storage: Oracle RDBMS and ASM software
- Must be on shared storage (e.g. ASM disk groups, CFS): the database files listed below

RAW devices:
* OCR files - on raw devices
* Voting files (disks) – on raw devices

ASM Disk group: +DATADG
* Data files
* Temp files
* Control files
* SPFILE
* Undo tablespace files ( each instance has its own dedicated ones)
* Redo log files ( at least two groups for each instance)

ASM Disk group: +FLADG
* Flash recovery area files
* Change tracking file
* Archived log files
* Control file (duplicate)
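
To verify where the files of a RAC database actually live (for example, that they are in the expected ASM disk groups), a query along the following lines can be used (a minimal sketch, not part of the original notes):

-- All control files, data files, temp files and online redo log members
select name   from v$controlfile
union all
select name   from v$datafile
union all
select name   from v$tempfile
union all
select member from v$logfile;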

Sunday, November 02, 2008

KSH expr command integer overflow

Last night we received false alerts about a filesystem space issue. It turned out there is a bug in the script, related to integer overflow in the "expr" command. I found this overflow error can be seen on our HP-UX system, but not on the Solaris or Linux machines. Here are the testing results:

----------- test script: test.ksh begins next line -----------------
#!/bin/ksh
## Bad##
FS=2840644712
FS=`expr $FS / 1024` ;
echo $FS
## Good##
FS=2840644712
FS=`echo $FS/1024 | bc`
echo $FS
----------- test script ends last line -----------------


HP-UX
~~~~~~~~

$ uname -a
HP-UX dhpnbe5 B.11.23 U ia64 3928363565 unlimited-user license
$ ./test.ksh
-1420236
2774067


Linux
~~~~~~

$ uname -a
Linux wrpe2e06.edc.mycompany.net 2.6.9-55.0.9.ELlargesmp #1 SMP Tue Sep 25 02:25:48 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux
oracle@wrpe2e06.edc.mycompany.net(!) ph1etem3 /tmp
$ ./test.ksh
2774067
2774067


Solaris
~~~~~~~

$ uname -a
SunOS wspebp06 5.10 Generic_127111-11 sun4u sparc SUNW,Sun-Fire
$ ./test.ksh
2774067
2774067
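
A couple of alternatives avoid the 32-bit "expr" arithmetic altogether. The sketch below is not from the original script; the integer width of the shell's built-in arithmetic still depends on the ksh build, so test it on each platform:

#!/bin/ksh
FS=2840644712

# ksh built-in arithmetic (64-bit in ksh93; older ksh88 builds may still use 32-bit longs)
echo $(( FS / 1024 ))

# bc works with arbitrary-precision integers
echo "$FS / 1024" | bc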

Friday, October 31, 2008

Standby log gap issue solved

We had a physical standby database that had been behind the primary database for several days. We found that there were gaps, so we restored the archived logs from tape with RMAN to their original destination. Oracle is smart enough to automatically fetch those archived logs to the standby, and we did not need to register them on the standby at all (our environment is 10g).

However, we found the MRP0 process status was always WAIT_FOR_LOG when issuing the following SQL:

SELECT PROCESS, STATUS, SEQUENCE#, BLOCK#, BLOCKS, DELAY_MINS
  FROM V$MANAGED_STANDBY;


Later I found a line in the alert log showing:

Media Recovery Delayed for 479 minute(s) (thread 1 sequence 52361)

I realized we had used the DELAY=480 attribute of the LOG_ARCHIVE_DEST_2 initialization parameter on the primary database to delay applying archived redo log files to the standby database.

So I tried the following command, which I found in the documentation, on the standby:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY;

Oracle started redo apply immediately. In this way we were able to catch up and resolve the missing-log and apply-lag issue.
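
For reference, checking for a gap on the standby and restoring a missing log range with RMAN look roughly like the sketch below. The ending sequence 52361 is the one reported in the alert log; the starting sequence here is only a placeholder, and in our case the logs actually came from a tape backup:

-- On the standby: report any detected archive gap
SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;

# With RMAN connected to the database whose backups hold the missing logs:
RMAN> restore archivelog sequence between 52300 and 52361 thread 1;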

Wednesday, October 29, 2008

PL/SQL Exception - a simple test

Continuing the effort to learn PL/SQL - this is a simple test of EXCEPTION handling.



denis@TEST10G> @exception_1
denis@TEST10G> rem This is the test case to test the following situation:
denis@TEST10G> rem
denis@TEST10G> rem 1. Not using any exception, which would result in an ORA-01403 error
denis@TEST10G> rem 2. Using predefined Oracle Database exception NO_DATA_FOUND
denis@TEST10G> rem 3. Using user-defined exception
denis@TEST10G> rem
denis@TEST10G>
denis@TEST10G>
denis@TEST10G> drop table t;

Table dropped.

denis@TEST10G>
denis@TEST10G> create table t
2 as
3 select
4 rownum empno,
5 a.object_name ename
6 from all_objects a
7 where rownum < 5000;

Table created.

denis@TEST10G>
denis@TEST10G>
denis@TEST10G> -- Case 1: exception not defined
denis@TEST10G>
denis@TEST10G> declare
2 emp_number integer :=5000;
3 emp_name varchar2(50);
4 begin
5 select ename into emp_name from t
6 where empno = emp_number; -- no such number
7 dbms_output.put_line('Employee name is ' || emp_name);
8 end;
9 /
declare
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 5


denis@TEST10G>
denis@TEST10G>
denis@TEST10G> -- Case 2: using predefined Oracle exception
denis@TEST10G>
denis@TEST10G> declare
2 emp_number integer :=5000;
3 emp_name varchar2(50);
4 begin
5 select ename into emp_name from t
6 where empno = emp_number; -- no such number
7 dbms_output.put_line('Employee name is ' || emp_name);
8 exception
9 when no_data_found then
10 dbms_output.put_line('No such employee:' || emp_number);
11 end;
12 /
No such employee:5000

PL/SQL procedure successfully completed.

denis@TEST10G>
denis@TEST10G>
denis@TEST10G> -- Case 3: using user-defined exception
denis@TEST10G> -- declare it in the delcaration part of a block
denis@TEST10G> -- define it in the exception part of the block
denis@TEST10G>
denis@TEST10G> declare
2 emp_number integer;
3 emp_name varchar2(50);
4 empno_out_of_range exception;
5 begin
6 emp_number := 5000;
7 if emp_number > 4999 or emp_number < 1 then
8 raise empno_out_of_range;
9 else
10 select ename into emp_name from t
11 where empno = emp_number;
12 dbms_output.put_line('Employee name is ' || emp_name);
13 end if;
14 exception
15 when empno_out_of_range then
16 dbms_output.put_line('Employee number:' || emp_number ||
17 ' is out of range.');
18 end;
19 /
Employee number:5000 is out of range.

PL/SQL procedure successfully completed.
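
As a follow-up (not part of the test above), a user-defined condition can also be reported to the caller as a real Oracle error with RAISE_APPLICATION_ERROR; a minimal sketch:

declare
  emp_number integer := 5000;
begin
  if emp_number > 4999 or emp_number < 1 then
    -- -20001 is an arbitrary choice from the user-defined error range -20000..-20999
    raise_application_error(-20001,
        'Employee number ' || emp_number || ' is out of range.');
  end if;
end;
/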

Monday, October 27, 2008

Identify and kill a blocking session - enq: TX - row lock contention

This post documents the steps I took to identify and kill a blocking session:

1. Received alert:

DBNAME:critical-Higher wait events:enq: TX - row lock contention-63(threshold 50)

2. Checked the wait events and found that the count for 'enq: TX - row lock contention' was high

  COUNT(*) EVENT                                         MAX(SECONDS_IN_WAIT)
---------- --------------------------------------------- --------------------
      5753 SQL*Net message from client                                6048278
       188 SQL*Net break/reset to client                              33425
       149 enq: TX - row lock contention                              12274
        20 rdbms ipc message                                            41691
         1 Streams AQ: qmn slave idle wait                             556815
         1 Streams AQ: waiting for messages in the queue                    5
         1 Streams AQ: waiting for time management or cl                33106
           eanup tasks
         1 db file parallel write                                           0
         1 direct path write                                                2
         1 Streams AQ: qmn coordinator idle wait                      6184833
         1 jobq slave wait                                                 18
         1 os thread startup                                                0
         1 pmon timer                                                   18795
         1 smon timer                                                    5504
         1 SQL*Net message to client                                        0


3. Checking locks:

It looks like everyone is waiting on sid=7752.

system@DBNAME> @qlocks

                 Lock                Time Held
   SID Lock Type Requested Lock Held (minutes) Block
------ --------- --------- --------- --------- -----
  7752 Trans     None      Exclusive       325 Yes
  3994 Trans     Share     None            324 No
  8305 Trans     Share     None            323 No
  4922 Trans     Share     None            292 No
  3977 Trans     Share     None            285 No
  7668 Trans     Share     None            282 No
  4379 Trans     Share     None            124 No
  5423 Trans     Share     None              1 No
  7611 Trans     Share     None              1 No
  6080 Trans     Share     None              1 No
  3843 Trans     Share     None              0 No
  4731 Trans     Share     None              0 No

( truncated ...  total 116 rows )



system@DBNAME> @blocker

ADDR             KADDR               SID TY        ID1        ID2      LMODE    REQUEST      CTIME   Block
---------------- ---------------- ------ -- ---------- ---------- ---------- ---------- ---------- ----------
0000000516758030 00000005167581B8   7752 TX    3080222     128785          6          0      19544 ##########





4. Check the session sid=7752

system@DBNAME> @sess_by_sid
Enter value for sid: 7752

                                                      Shadow       Remote       Remote
SID-SERIAL   OS User  Username         Module         Process      Process      Machine              Logon Time      Status
------------ -------- ---------------- -------------- ------------ ------------ -------------------- --------------- --------
'7752,55822' doammw27 OLAMLOG                         8786         1234         dspcef07             10/26 13:42:04  INACTIVE





5. The following query shows that the session sid=7752 is 19367 seconds old, and that its last call was made 19366 seconds ago.

system@DBNAME> @trans_long_running.sql

ADDR             Status       UBAREC           START_SCN SYSDATE             START_DATE                SECS     RSSIZE   SID LAST_CALL_ET Status   Username         OS User  SHARABLE_MEM
---------------- -------- ---------- ------------------- ------------------- ------------------- ---------- ---------- ----- ------------ -------- ---------------- -------- ------------
0000000516758030 ACTIVE           13      10172725882034 2008-10-26 19:05:54 2008-10-26 13:43:07      19367    1171456  7752        19366 INACTIVE OLAMLOG          doammw27
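
The blocker was then killed with ALTER SYSTEM KILL SESSION, using the sid and serial# found in step 4 (the exact command was not captured, so this is a sketch):

alter system kill session '7752,55822';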




6. After killing the session sid=7752, the wait events showed as follows:

  COUNT(*) EVENT                                         MAX(SECONDS_IN_WAIT)
---------- --------------------------------------------- --------------------
      5943 SQL*Net message from client                                6049401
       188 SQL*Net break/reset to client                              34548
        22 rdbms ipc message                                            42814
         1 smon timer                                                     276
         1 Streams AQ: waiting for time management or cl                34229
           eanup tasks
         1 jobq slave wait                                                 47
         1 pmon timer                                                     283
         1 Streams AQ: qmn slave idle wait                             557938
         1 Streams AQ: qmn coordinator idle wait                      6185956
         1 SQL*Net message to client                                        0
         1 Streams AQ: waiting for messages in the queue                    5




####---------------------------------------------------------------------------
Appendix - scripts used:

$ cat event.sql
rem file: event.sql
rem
rem  Lists the current waits in the database.
rem  First script to run in the database to debug performance issues.
rem  This query lists the count of sessions grouped by wait event,
rem  sorted in descending order of the count.
rem  Look for the top 5 wait events from this query.
rem  Also listed is the maximum time (in seconds) sessions
rem  have waited on a particular wait event. The count information is
rem  more valuable than the max time information.
rem
column event format a45
select count(*), event, max(seconds_in_wait) from v$session_wait
group by event
order by 1 desc
/

####---------------------------------------------------------------------------

$ cat qlocks.sql
clear columns
clear computes
clear breaks

set linesize 240
set pagesize 40

column sid      format 99999    heading "SID"
column locktype format A9       heading "Lock Type"
column lockreq  format A9       heading "Lock     Requested"
column lockheld format A9       heading "Lock Held"
column timeheld format 9999999  heading "Time Held(minutes)"
column block    format a5       heading "Block"


select /* RULE */ l.sid sid,
decode(l.type,'TM','DML','TX','Trans','UL','User',l.type) locktype,
decode(l.request,0,'None',1,'Null',2,'Row-S',3,'Row-X',4,'Share',5,'S/Row-X',6,'Exclusive') lockreq,
decode(l.lmode,0,'None',1,'Null',2,'Row-S',3,'Row-X',4,'Share',5,'S/Row-X',6,'Exclusive') lockheld,
l.ctime/60 timeheld,
decode(l.block,0,'No',1,'Yes') block
from v$lock l
where l.request != 0 or l.block != 0
order by l.id1, l.lmode desc, l.ctime desc
/

####---------------------------------------------------------------------------
$ cat blocker.sql
select * from v$lock where block > 0
/

####---------------------------------------------------------------------------
$ cat sess_by_sid.sql
set linesize 250
set pagesize 1000
set verify off

column SidSerial        format a12      heading "SID-SERIAL"
column sid      format 9999     heading "SID"
column serial   format 99999    heading "Serial "
column osuser   format a8       heading "OS User"
column username format a16      heading "Username"
column module   format a14      heading "Module"
column spid     format 999999   heading "ShadowProcess"
column process  format 999999   heading "RemoteProcess"
column machine  format a20      heading "RemoteMachine"
column time     format a15      heading "Logon Time"
column status   format a8       heading "Status"


select ''''||s.sid||','||s.serial# ||'''' SidSerial,
--select s.sid sid ,
--s.serial# serial,
s.osuser                 osuser,
s.username               username,
s.module                 module,
p.spid                   spid,
s.process                process,
s.machine                machine,
to_char(s.logon_time, 'mm/dd hh24:mi:ss') time,
s.status                 status
from v$session s, v$process p
where s.paddr = p.addr (+)
and s.sid = &sid
/

####---------------------------------------------------------------------------

$ cat trans_long_running.sql
column start_scn format 999999999999999999
select
t.addr,
t.status,
t.UBAREC,
t.start_scn,
SYSDATE,
t.start_date,
(
(SYSDATE - t.start_date) * 86400
) secs,
r.rssize,
s.sid,
s.last_call_et,
s.status,
s.username,
s.osuser,
sa.sharable_mem
from
v$transaction t
join v$rollstat r on t.xidusn = r.usn
join v$session s on s.saddr = t.ses_addr
left outer join v$sqlarea sa on s.sql_address = sa.address
where s.sid = 7752
/


Wednesday, October 22, 2008

Oracle built-in functions: SUBSTR and INSTR

If we want to restore an RMAN backup to a different host where the paths are different from those on the original host, we have to use the 'set newname for datafile' statement to change the file names. This test case demonstrates the SUBSTR and INSTR functions and how to use them to generate the 'set newname for datafile' statements dynamically.


 

denis@TEST10G> @substr_instr.sql
denis@TEST10G> rem substr_instr.sql
denis@TEST10G> rem ref: http://www.psoug.org/reference/substr_instr.html
denis@TEST10G>
denis@TEST10G> set echo on
denis@TEST10G>
denis@TEST10G> SELECT SUBSTR('Take the first four characters', 1, 4) FIRST_FOUR
2 FROM dual;

FIRS
----
Take

denis@TEST10G>
denis@TEST10G> SELECT SUBSTR('Take the first four characters', 16, 4) MIDDLE_FOUR
2 FROM dual;

MIDD
----
four

denis@TEST10G>
denis@TEST10G>
denis@TEST10G> SELECT SUBSTR('Take the first four characters', 16) SIXTEEN_TO_END
2 FROM dual;

SIXTEEN_TO_END
---------------
four characters

denis@TEST10G>
denis@TEST10G> SELECT SUBSTR('Take the first four characters', -4) FINAL_FOUR
2 FROM dual;

FINA
----
ters

denis@TEST10G>
denis@TEST10G>
denis@TEST10G> SELECT INSTR('Take the first four characters', 'a', 1, 1) FOUND_1
2 FROM dual;

FOUND_1
----------
2

denis@TEST10G>
denis@TEST10G> SELECT INSTR('Take the first four characters', 'a', 1, 2) FOUND_2
2 FROM dual;

FOUND_2
----------
23

denis@TEST10G>
denis@TEST10G> SELECT INSTR('Take the first four characters', 'four', 1, 1) MCHARS
2 FROM dual;

MCHARS
----------
16

denis@TEST10G>
denis@TEST10G> SELECT INSTR('Take the first four characters', 'a', -1, 1) REV_SRCH
2 FROM dual;

REV_SRCH
----------
25

denis@TEST10G>
denis@TEST10G> SELECT INSTR('Take the first four characters', 'a', -1, 2) REV_TWO
2 FROM dual;

REV_TWO
----------
23

denis@TEST10G>
denis@TEST10G> -- Take up to the character before the first comma
denis@TEST10G> SELECT SUBSTR('abc,def,ghi', 1 ,INSTR('abc,def,ghi', ',', 1, 1)-1)
2 FROM dual;

SUB
---
abc

denis@TEST10G>
denis@TEST10G> -- List parsing center value
denis@TEST10G> -- Take the value between the commas
denis@TEST10G> SELECT SUBSTR('abc,def,ghi', INSTR('abc,def,ghi',',', 1, 1)+1,
2 INSTR('abc,def,ghi',',',1,2)-INSTR('abc,def,ghi',',',1,1)-1)
3 FROM dual;

SUB
---
def

denis@TEST10G>
denis@TEST10G> -- List parsing last value
denis@TEST10G> -- Take the value after the last comma
denis@TEST10G> SELECT SUBSTR('abc,def,ghi', INSTR('abc,def,ghi',',', 1, 2)+1)
2 FROM dual;

SUB
---
ghi

denis@TEST10G>
denis@TEST10G>
denis@TEST10G> -- test the case of restoring a database to a different host with different path names
denis@TEST10G> -- using RMAN. Demonstrate how to create 'set newname' statements
denis@TEST10G>
denis@TEST10G> set linesize 400
denis@TEST10G> drop table t;

Table dropped.

denis@TEST10G> create table t(file_name varchar2(100));

Table created.

denis@TEST10G> insert into t values('/opt/app/wcmdb02/ora01/P3CM1D1/dbf/idx500cP0CMD_7.dbf');

1 row created.

denis@TEST10G> insert into t values('/opt/app/wcmdb02/ora03/P3CM1D1/dbf/usersP0CMD_1.dbf');

1 row created.

denis@TEST10G> commit;

Commit complete.

denis@TEST10G>
denis@TEST10G> select 'set newname for datafile ' || ''''||file_name||'''' ||
2 lpad('to ', 60-length(file_name)) || ''''||
3 '/opt/app/cm2cmdb' || substr(file_name,instr(file_name,'/',1,4), 6) ||'/P0CMD' ||
4 substr(file_name, instr(file_name, '/', -1, 2) ) || '''' || ';'
5 from t;

'SETNEWNAMEFORDATAFILE'||''''||FILE_NAME||''''||LPAD('TO',60-LENGTH(FILE_NAME))||''''||'/OPT/APP/CM2CMDB'||SUBSTR(FILE_NAME,INSTR(FILE_NAME,'/',1,4),6
------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------

set newname for datafile '/opt/app/wcmdb02/ora01/P3CM1D1/dbf/idx500cP0CMD_7.dbf' to '/opt/app/cm2cmdb/ora01/P0CMD/dbf/idx500cP0CMD_7.dbf';
set newname for datafile '/opt/app/wcmdb02/ora03/P3CM1D1/dbf/usersP0CMD_1.dbf' to '/opt/app/cm2cmdb/ora03/P0CMD/dbf/usersP0CMD_1.dbf';

denis@TEST10G>
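
In a real restore the file names would come from the source database's data dictionary rather than a scratch table. The same SUBSTR/INSTR expression can be pointed at DBA_DATA_FILES to generate 'set newname' clauses by file number; a sketch that reuses the example target layout above:

select 'set newname for datafile ' || file_id || ' to ''' ||
       '/opt/app/cm2cmdb' || substr(file_name, instr(file_name, '/', 1, 4), 6) ||
       '/P0CMD' || substr(file_name, instr(file_name, '/', -1, 2)) || ''';'
  from dba_data_files
 order by file_id;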