Wednesday, April 29, 2009

Handle ORed Semi-Join conditions

Suppose we have a SQL statement using the EXISTS construct as follows:


select count(*) from t1 cbo
where exists
(select null from t2
where cbo.object_id = t2.object_id
or cbo.object_name = t2.object_name);

 
Here we have two semi-join conditions ORed together. What could be the best way to execute this kind of SQL? Here is my test case to answer this question in a 10g database.


1. Create the objects:


SQL> execute dbms_random.seed(0);

PL/SQL procedure successfully completed.

SQL>
SQL> create table t1
2 as select * from
3 ( select *
4 from all_objects
5 order by dbms_random.value
6 )
7 where rownum <=200;

Table created.

SQL>
SQL>
SQL> create table t2
2 as
3 select * from (
4 select * from all_objects order by object_type)
5 where rownum <= 10000;

Table created.

SQL>
SQL> insert into t2
2 select * from all_objects where rownum <= 10000;

10000 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> create index t2_idx on t2(object_id);

Index created.

SQL>
SQL> begin
2 dbms_stats.gather_table_stats(
3 user,
4 't1',
5 cascade => true
6 );
7 end;
8 /

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> begin
2 dbms_stats.gather_table_stats(
3 user,
4 't2',
5 cascade => true
6 );
7 end;
8 /




 

2. Oracle does not pick a semi-join access path for the SQL in question:



SQL> select count(*) from t1 cbo
2 where exists
3 (select null from t2
4 where cbo.object_id = t2.object_id
5 or cbo.object_name = t2.object_name);

COUNT(*)
----------
103

SQL>
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last outline'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------
SQL_ID cuz85u83buju4, child number 0
-------------------------------------
select count(*) from t1 cbo where exists (select null from t2
where cbo.object_id = t2.object_id or cbo.object_name =
t2.object_name)

Plan hash value: 1280151244

--------------------------------------------------------------------------------------
Id Operation Name Starts E-Rows A-Rows A-Time Buffers
--------------------------------------------------------------------------------------
1 SORT AGGREGATE 1 1 1 00:00:01.95 43590
* 2 FILTER 1 103 00:00:01.95 43590
3 TABLE ACCESS FULL T1 1 200 200 00:00:00.01 5
* 4 TABLE ACCESS FULL T2 200 2 103 00:00:01.95 43585
--------------------------------------------------------------------------------------

Outline Data
-------------

/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('10.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$2")
OUTLINE_LEAF(@"SEL$1")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$1")
FULL(@"SEL$1" "CBO"@"SEL$1")
FULL(@"SEL$2" "T2"@"SEL$2")
END_OUTLINE_DATA
*/

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter( IS NOT NULL)
4 - filter(("T2"."OBJECT_NAME"=:B1 OR "T2"."OBJECT_ID"=:B2))



 

3. How about rewriting the SQL into two EXISTS constructs?
- Not much help, although buffer gets decreased from 43590 to 33834




SQL> select count(*) from t1 cbo
2 where exists (select null from t2 where cbo.object_id = t2.object_id )
3 or exists (select null from t2 where cbo.object_name = t2.object_name);

COUNT(*)
----------
103

SQL>
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last outline'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------
SQL_ID 2qxaf7m9c7z3r, child number 0
-------------------------------------
select count(*) from t1 cbo where exists (select null from t2 where
cbo.object_id = t2.object_id ) or exists (select null from t2 where
cbo.object_name = t2.object_name)

Plan hash value: 2450656000

------------------------------------------------------------------------------
Id Operation Name Starts E-Rows A-Rows A-Time Buffers
--------------------------------------------------------------------------------
1 SORT AGGREGATE 1 1 1 00:00:00.87 33834
* 2 FILTER 1 103 00:00:00.87 33834
3 TABLE ACCESS FULL T1 1 200 200 00:00:00.01 5
* 4 INDEX RANGE SCAN T2_IDX 200 1 70 00:00:00.01 400
* 5 TABLE ACCESS FULL T2 130 1 33 00:00:00.86 33429
-----------------------------------------------------------------------------------------




  

4. Rewrite the SQL into a UNION construct.
This time buffer gets are only 369, because Oracle is able to take advantage of semi-joins:






SQL> select count(*) from (
2 select * from t1 cbo
3 where exists (select null from t2 where cbo.object_id = t2.object_id )
4 union
5 select * from t1 cbo
6 where exists (select null from t2 where cbo.object_name = t2.object_name)
7 );

COUNT(*)
----------
103

SQL>
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'allstats last outline'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------
SQL_ID cmtfjwznktn3c, child number 0
-------------------------------------
select count(*) from ( select * from t1 cbo where exists (select null from t2 where cbo.object_id =
t2.object_id ) union select * from t1 cbo where exists (select null from t2 where cbo.object_name =
t2.object_name) )

Plan hash value: 3775847438

----------------------------------------------------------------------------------------------
Id Operation Name Starts E-Rows A-Rows A-Time Buffers OMem 1Mem Used-Mem
----------------------------------------------------------------------------------------------
1 SORT AGGREGATE 1 1 1 00:00:00.08 369
2 VIEW 1 400 103 00:00:00.08 369
3 SORT UNIQUE 1 400 103 00:00:00.08 369 18432 18432 16384 (0)
4 UNION-ALL 1 173 00:00:00.08 369
* 5 HASH JOIN SEMI 1 200 70 00:00:00.04 54 762K 762K 1184K (0)
6 TABLE ACCESS FULL T1 1 200 200 00:00:00.01 5
7 INDEX FAST FULL SCAN T2_IDX 1 20000 20000 00:00:00.01 49
* 8 HASH JOIN SEMI 1 200 103 00:00:00.04 315 762K 762K 1187K (0)
9 TABLE ACCESS FULL T1 1 200 200 00:00:00.01 5
10 TABLE ACCESS FULL T2 1 20000 20000 00:00:00.01 310
----------------------------------------------------------------------------------------------


Outline Data
-------------

/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('10.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$BE5C8E5F")
UNNEST(@"SEL$3")
OUTLINE_LEAF(@"SEL$385088EC")
UNNEST(@"SEL$5")
OUTLINE_LEAF(@"SET$1")
OUTLINE_LEAF(@"SEL$1")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$3")
OUTLINE(@"SEL$4")
OUTLINE(@"SEL$5")
OUTLINE(@"SET$1")
OUTLINE(@"SEL$1")
NO_ACCESS(@"SEL$1" "from$_subquery$_001"@"SEL$1")
FULL(@"SEL$385088EC" "CBO"@"SEL$4")
FULL(@"SEL$385088EC" "T2"@"SEL$5")
LEADING(@"SEL$385088EC" "CBO"@"SEL$4" "T2"@"SEL$5")
USE_HASH(@"SEL$385088EC" "T2"@"SEL$5")
FULL(@"SEL$BE5C8E5F" "CBO"@"SEL$2")
INDEX_FFS(@"SEL$BE5C8E5F" "T2"@"SEL$3" ("T2"."OBJECT_ID"))
LEADING(@"SEL$BE5C8E5F" "CBO"@"SEL$2" "T2"@"SEL$3")
USE_HASH(@"SEL$BE5C8E5F" "T2"@"SEL$3")
END_OUTLINE_DATA
*/

Predicate Information (identified by operation id):
---------------------------------------------------

5 - access("CBO"."OBJECT_ID"="T2"."OBJECT_ID")
8 - access("CBO"."OBJECT_NAME"="T2"."OBJECT_NAME")
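One caveat worth noting: UNION removes duplicate rows, so the rewrite above is only guaranteed equivalent to the original EXISTS/OR query when the outer rows are distinct. A sketch of an alternative rewrite that preserves duplicates by making the two branches mutually exclusive under UNION ALL (using the same t1/t2 test tables):

```sql
-- Sketch only: branch 1 takes rows matched on object_id; branch 2 takes
-- rows matched on object_name but NOT on object_id, so the union of the
-- two branches is exactly the rows satisfying (A or B), with no dedup needed.
select count(*) from (
  select cbo.* from t1 cbo
  where exists (select null from t2 where cbo.object_id = t2.object_id)
  union all
  select cbo.* from t1 cbo
  where not exists (select null from t2 where cbo.object_id = t2.object_id)
    and exists (select null from t2 where cbo.object_name = t2.object_name)
);
```

Each branch can still be unnested into a semi-join or anti-join, so the optimizer should retain the benefit seen above; the SORT UNIQUE step disappears because no deduplication is required.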

Friday, April 24, 2009

How to invalidate a cursor?

I read somewhere that issuing harmless DDL, for example COMMENT and GRANT statements, could be a good way to invalidate a cursor. I had not had a chance to verify it until today. I set up a test and found that the following methods can invalidate a cursor:

- flush the shared_pool
- grant: i.e. grant select on t to system;
- comment: i.e. comment on table t is 'I am a table';
- dbms_stats.gather_table_stats (no_invalidate should not be true);

Of course if I add a new column to a table I am sure I can invalidate the cursor too, but obviously that is not an option in the middle of the day, even if I did want to add a column; it is not a harmless DDL.
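For the dbms_stats route, a minimal sketch (table name T here is just the test table created below; in 10g no_invalidate defaults to DBMS_STATS.AUTO_INVALIDATE, so it should be set explicitly):

```sql
begin
  dbms_stats.gather_table_stats(
    ownname       => user,
    tabname       => 'T',
    no_invalidate => false  -- false forces dependent cursors to be invalidated immediately
  );
end;
/
```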

Below is the test about 'COMMENT':

(1) I first created a table with one row having value 1 and 1999 rows having value 99 in its column id, and had a histogram collected on that column. So with predicate id=1 an index scan should be used, and with id=99 a full table scan is preferred.


denis@TESTDB920> create table t
  2  as
  3  select
  4   case when rownum=1
  5   then 1
  6   else 99
  7   end id,
  8   a.*
  9    from all_objects a
 10  where rownum <=2000 
 11  / 
Table created. 
denis@TESTDB920> create index t_idx on t(id);

Index created.

denis@TESTDB920> begin
  2   dbms_stats.gather_table_stats
  3   (user,
  4    'T',
  5    method_opt=>'for all indexed columns size 254');
  6  end;
  7  /

PL/SQL procedure successfully completed.

 

(2) I then executed a SQL statement twice with different bind variable values, using value 1 first.

denis@TESTDB920> variable id number
denis@TESTDB920> exec :id := 1

PL/SQL procedure successfully completed.

denis@TESTDB920> select /* denis */ sum(object_id) from t where id= :id;

SUM(OBJECT_ID)
--------------
            97

denis@TESTDB920>
denis@TESTDB920> exec :id := 99

PL/SQL procedure successfully completed.

denis@TESTDB920> select /* denis */ sum(object_id) from t where id= :id;

SUM(OBJECT_ID)
--------------
       6361608

denis@TESTDB920> select hash_value, address, child_number, executions, plan_hash_value, invalidations from
  2  v$sql where sql_text like 'select%denis%' and sql_text not like 'select hash_value%';

HASH_VALUE ADDRESS          CHILD_NUMBER EXECUTIONS PLAN_HASH_VALUE INVALIDATIONS
---------- ---------------- ------------ ---------- --------------- -------------
1403451991 000000039F798070            0          2      3178687684             0
 
(3) Now I invalidated the cursor by executing a COMMENT statement; notice from v$sql: invalidations=1


denis@TESTDB920> comment on table t is 'I am ok';

Comment created.

denis@TESTDB920>
denis@TESTDB920> select hash_value, address, child_number, executions, plan_hash_value, invalidations from
  2  v$sql where sql_text like 'select%denis%' and sql_text not like 'select hash_value%';

HASH_VALUE ADDRESS          CHILD_NUMBER EXECUTIONS PLAN_HASH_VALUE INVALIDATIONS
---------- ---------------- ------------ ---------- --------------- -------------
1403451991 000000039F798070            0          0               0             1
 

(4) Next I executed the same SQL statement 4 times. Since the cursor was invalidated, Oracle needed to re-parse the SQL. As I first used bind value 99 this time, compared to 1 in step 2, the cursor was, as expected, executed with a different plan; notice the plan hash value differs from that in step 2.

denis@TESTDB920>
denis@TESTDB920> exec :id := 99

PL/SQL procedure successfully completed.

denis@TESTDB920> select /* denis */ sum(object_id) from t where id= :id;

SUM(OBJECT_ID)
--------------
       6361608

denis@TESTDB920> select /* denis */ sum(object_id) from t where id= :id;

SUM(OBJECT_ID)
--------------
       6361608

denis@TESTDB920> select /* denis */ sum(object_id) from t where id= :id;

SUM(OBJECT_ID)
--------------
       6361608

denis@TESTDB920> select /* denis */ sum(object_id) from t where id= :id;

SUM(OBJECT_ID)
--------------
       6361608

denis@TESTDB920> select hash_value, address, child_number, executions, plan_hash_value, invalidations from
  2  v$sql where sql_text like 'select%denis%' and sql_text not like 'select hash_value%';

HASH_VALUE ADDRESS          CHILD_NUMBER EXECUTIONS PLAN_HASH_VALUE INVALIDATIONS
---------- ---------------- ------------ ---------- --------------- -------------
1403451991 000000039F798070            0          4      1842905362             1



update 10/21/2013 -

We can use the sys.dbms_shared_pool.purge procedure to purge a particular cursor. For example:

SQL> select address, hash_value from v$sqlarea
  2  where sql_text = 'select count(c2) from skew where c1 = :x';
ADDRESS  HASH_VALUE
-------- ----------
27308318 2934790721

SQL> exec sys.dbms_shared_pool.purge('&address, &hash_value','c')
PL/SQL procedure successfully completed.

see: http://prutser.wordpress.com/2009/04/19/flushing-a-cursor-out-of-the-library-cache/

Thursday, April 23, 2009

Buffer hit ratio in every hour

To provide evidence for my suspicion that db_cache_size is too low on a 10g production database, I wrote a script to display the buffer hit ratio for every hour.


rem buffer hit ratio on hourly basis for the past day
rem -- 10 g
rem
select AA.start_hour,
round(( 1-CC.phys_read/
(AA.consistent_get + BB.db_blk_get))*100,1) buffer_hit_ratio
from (
select
to_char(s.begin_interval_time, 'YYYY-MM-DD HH24') start_hour,
sum(e.value - b.value) consistent_get
from
dba_hist_sysstat b,
dba_hist_sysstat e,
dba_hist_snapshot s
where b.snap_id = s.snap_id
and b.snap_id = e.snap_id -1
and b.stat_id = 4162191256
and b.stat_id = e.stat_id
and s.begin_interval_time >= trunc(sysdate)-1
and s.begin_interval_time < trunc(sysdate)
group by to_char(s.begin_interval_time, 'YYYY-MM-DD HH24')
) AA,
(
select to_char(s.begin_interval_time, 'YYYY-MM-DD HH24') start_hour,
sum(e.value - b.value) db_blk_get
from
dba_hist_sysstat b,
dba_hist_sysstat e,
dba_hist_snapshot s
where b.snap_id = s.snap_id
and b.snap_id = e.snap_id -1
and b.stat_id =1480709069
and b.stat_id = e.stat_id
and s.begin_interval_time >= trunc(sysdate)-1
and s.begin_interval_time < trunc(sysdate)
group by to_char(s.begin_interval_time, 'YYYY-MM-DD HH24')
) BB,
(
select to_char(s.begin_interval_time, 'YYYY-MM-DD HH24') start_hour,
sum(e.value - b.value) phys_read
from
dba_hist_sysstat b,
dba_hist_sysstat e,
dba_hist_snapshot s
where b.snap_id = s.snap_id
and b.snap_id = e.snap_id -1
and b.stat_id = 2263124246
and b.stat_id = e.stat_id
and s.begin_interval_time >= trunc(sysdate)-1
and s.begin_interval_time < trunc(sysdate)
group by to_char(s.begin_interval_time, 'YYYY-MM-DD HH24')
) CC
where AA.start_hour=BB.start_hour
and BB.start_hour=CC.start_hour
order by 1
/
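The hard-coded stat_ids above should correspond to 'consistent gets', 'db block gets' and 'physical reads'. Rather than trusting them blindly, they can be double-checked against v$statname (dba_hist_sysstat also carries a stat_name column) with a query like:

```sql
-- Verify the stat_id values used in the AWR script above
select stat_id, name
from v$statname
where name in ('consistent gets', 'db block gets', 'physical reads');
```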


The output for the database looks like:


system@THEDB> @buffer_hit_hour.sql

START_HOUR BUFFER_HIT_RATIO
------------- ----------------
2009-04-22 00 46.2
2009-04-22 01 42.4
2009-04-22 02 28.9
2009-04-22 03 35.6
2009-04-22 04 34.4
2009-04-22 05 37.9
2009-04-22 06 47.3
2009-04-22 07 58.1
2009-04-22 08 68.7
2009-04-22 09 86.9
2009-04-22 10 62.2
2009-04-22 11 58.3
2009-04-22 12 68.6
2009-04-22 13 67.3
2009-04-22 14 70.9
2009-04-22 15 79.7
2009-04-22 16 66.9
2009-04-22 17 49.5
2009-04-22 18 44
2009-04-22 19 54.7
2009-04-22 20 22.8
2009-04-22 21 55.6
2009-04-22 22 53.7
2009-04-22 23 53.5

Wednesday, April 22, 2009

Find long running SQLs in a reporting database

We have a reporting database (the real SID is changed to 'mydbrpt' in this post) in which there are usually long running SQLs. Usually I check v$session_longops to see them; the problem is that not all operations appear in this view. Today I found a long-running session with ps -ef. I think this is a valid alternative, and I also wrote an AWK script to make it easy to find the long running sessions/SQLs.

If I type the following command at OS level:
ps -ef | grep mydbrpt

I will see hundreds of lines similar to the following:


......
oracle 3106 1 0 12:18:07 console 0:01 oraclemydbrpt (LOCAL=NO)
oracle 19386 1 3 15:00:40 console 20:44 ora_p004_mydbrpt
oracle 2631 4115 0 06:01:03 console 1:08 oraclemydbrpt (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle 10910 14823 0 15:46:27 console 0:00 oraclemydbrpt (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle 4152 4115 0 Apr 14 console 38:49 oraclemydbrpt (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle 21996 1 0 15:06:25 console 0:01 oraclemydbrpt (LOCAL
=NO)
......


Notice field 7 or field 8: it is the TIME column, the accumulated CPU time (minutes:seconds) of the server process. So I wrote a small AWK script to filter out those processes that have consumed more than 100 minutes:



$ cat ps.awk
{
if ( $7 == "console" )
split($8 ,a, ":")
else
split($7 ,a, ":")
if ( a[1] > 100 )
print $0
}
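The filter can be sanity-checked against a couple of canned ps -ef lines (sample data made up for illustration): the long-running background writer should pass, while the young dedicated server should be filtered out.

```shell
# Two sample ps -ef lines: one with ~1883 accumulated CPU minutes, one with ~0.
cat > /tmp/ps_sample.txt <<'EOF'
oracle 18100     1  0   Mar 14 console  1883:49 ora_dbw2_mydbrpt
oracle  3106     1  0 12:18:07 console     0:01 oraclemydbrpt (LOCAL=NO)
EOF

# Same logic as ps.awk, inline: pick the TIME field (field 8 when the start
# date takes two fields, else field 7), then compare its minutes to 100.
awk '{ if ($7 == "console") split($8, a, ":"); else split($7, a, ":")
       if (a[1] > 100) print $0 }' /tmp/ps_sample.txt
```

Only the ora_dbw2_mydbrpt line survives the filter.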



Then I executed the following command and got:


$ ps -ef | grep mydbrpt | awk -f ps.awk
oracle 18100 1 0 Mar 14 console 1883:49 ora_dbw2_mydbrpt
oracle 18088 1 0 Mar 14 console 2218:00 ora_dbw0_mydbrpt
oracle 18102 1 0 Mar 14 console 1831:44 ora_dbw3_mydbrpt
oracle 18108 1 0 Mar 14 console 450:24 ora_ckpt_mydbrpt
oracle 18093 1 0 Mar 14 console 2013:48 ora_dbw1_mydbrpt
oracle 18086 1 0 Mar 14 console 181:00 ora_pmon_mydbrpt
oracle 18118 1 0 Mar 14 console 606:40 ora_arc0_mydbrpt
oracle 18106 1 0 Mar 14 console 4377:12 ora_lgwr_mydbrpt
oracle 11329 1 3 Apr 16 console 8500:06 oraclemydbrpt (LOCAL=NO)
oracle 21487 1 1 07:44:30 console 127:23 oraclemydbrpt (LOCAL=NO)
oracle 2999 2955 1 05:00:02 console 220:51 oraclemydbrpt (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle 6658 6652 0 Mar 27 console 136:19 oraclemydbrpt (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))



 

I found out that pid=11329 is from a user session which may be using an unreasonable execution plan:



-------------------------------------------------------------------------------
Id Operation Name Rows Bytes TempSpc Cost
-------------------------------------------------------------------------------
0 SELECT STATEMENT 33M
1 SORT AGGREGATE 1 28
2 HASH JOIN 638G 16T 199M 33M
3 TABLE ACCESS FULL CP 13M 49M 1598K
4 MERGE JOIN CARTESIAN 24G 539G 19M
5 TABLE ACCESS FULL OF 82 1722 2
6 BUFFER SORT 294M 842M 19M
7 INDEX FAST FULL SCAN BM_FK4 294M 842M 232K
-------------------------------------------------------------------------------





But I cannot see this session in v$session_longops at all.

Check if CBO stats are locked or stale

I was not familiar with DBA_TAB_STATISTICS before. Today I happened to find that we can check whether CBO stats are locked or stale from this view. Here is an example:

SQL> select owner, table_name, last_analyzed,
stattype_locked, stale_stats
from dba_tab_statistics where stale_stats='YES';

OWNER TABLE_NAME LAST_ANALYZED STATT STA
-------------- --------------- ------------------- ----- ---
SYSTEM BIN$ZInWFVYZa0L 2008-11-16 09:11:58 YES
gRAAUT46ylg==$0

SYSTEM BIN$ZJEkpLOwbVj 2009-03-07 18:09:05 YES
gRAAUT46ylg==$0

SYS WRH$_TABLESPACE 2009-04-18 19:35:47 YES
_STAT

SYS WRH$_OSSTAT 2009-04-18 19:34:56 YES
SYS WRH$_SYS_TIME_M 2009-04-18 19:35:46 YES
ODEL

SYS WRH$_SERVICE_WA 2009-04-18 19:35:09 YES
IT_CLASS

SYS WRH$_FILESTATXS 2009-04-18 19:34:38 YES
SYS WRH$_SQLSTAT 2009-04-18 19:35:18 YES
SYS WRH$_SYSTEM_EVE 2009-04-18 19:35:45 YES
NT

SYS WRH$_WAITSTAT 2009-04-18 19:35:51 YES
SYS WRH$_LATCH 2009-04-18 19:34:42 YES
SYS WRH$_LATCH_CHIL 2009-04-18 19:34:48 YES
DREN

SYS WRH$_LATCH_PARE 2009-04-18 19:34:53 YES
NT

SYS WRH$_LATCH_MISS 2009-04-18 19:34:52 YES
ES_SUMMARY

SYS WRH$_DB_CACHE_A 2009-04-18 19:34:33 YES
DVICE

SYS WRH$_ROWCACHE_S 2009-04-18 19:35:01 YES
UMMARY

SYS WRH$_SGASTAT 2009-04-18 19:35:10 YES
SYS WRH$_SYSSTAT 2009-04-18 19:35:44 YES
SYS WRH$_PARAMETER 2009-04-18 19:34:57 YES
SYS WRH$_SEG_STAT 2009-04-18 19:35:06 YES
SYS WRH$_SERVICE_ST 2009-04-18 19:35:09 YES
AT

SYS WRH$_ACTIVE_SES 2009-04-18 19:34:25 YES
SION_HISTORY


22 rows selected.

Monday, April 13, 2009

Column DEGREE of the DBA_INDEXES view is VARCHAR type

I was not aware that DEGREE is of VARCHAR2(40) type, so I was surprised to see the following error. It appears that some values (such as 'DEFAULT') cannot be converted to numbers.



system@DB10G> select index_name, degree from dba_indexes where degree > 2;
select index_name, degree from dba_indexes where degree > 2
*
ERROR at line 1:
ORA-01722: invalid number


system@DB10G> select index_name, degree from all_indexes where degree > 2;
select index_name, degree from all_indexes where degree > 2
*
ERROR at line 1:
ORA-01722: invalid number


system@DB10G> select index_name, degree from user_indexes where degree > 2;




system@DB10G> desc dba_indexes;
Name Null? Type
------------------------------------ -------- ---------------
OWNER NOT NULL VARCHAR2(30)
INDEX_NAME NOT NULL VARCHAR2(30)
INDEX_TYPE VARCHAR2(27)
NEXT_EXTENT NUMBER
MIN_EXTENTS NUMBER
AVG_DATA_BLOCKS_PER_KEY NUMBER
CLUSTERING_FACTOR NUMBER
STATUS VARCHAR2(8)
.....
NUM_ROWS NUMBER
SAMPLE_SIZE NUMBER
LAST_ANALYZED DATE
DEGREE VARCHAR2(40)
INSTANCES VARCHAR2(40)
PARTITIONED VARCHAR2(3)
BUFFER_POOL VARCHAR2(7)
USER_STATS VARCHAR2(3)
DURATION VARCHAR2(15)
.....
FUNCIDX_STATUS VARCHAR2(8)
JOIN_INDEX VARCHAR2(3)
IOT_REDUNDANT_PKEY_ELIM VARCHAR2(3)
DROPPED VARCHAR2(3)

system@DB10G>

Sunday, April 12, 2009

Observations on Parallel Index Rebuild jobs

Tonight I am working on index rebuilds for three databases of one of our major applications. This post is intended to document my observations about parallel execution.

First of all, the parallel_max_servers parameter of the database is 24. I ran two scripts (call them script1 and script2) at the same time. Each script essentially starts with the following two statements:

alter session enable parallel ddl;
alter session force parallel ddl parallel 8;

Each is then followed by a bunch of ALTER INDEX REBUILD statements without specifying the parallel degree at the statement level, for example:

ALTER INDEX SCH1.INDEX_IX1 REBUILD NOLOGGING NOREVERSE TABLESPACE TBS_IDX_2 INITRANS 2 STORAGE (FREELISTS 2 BUFFER_POOL DEFAULT) ;

I ran script1.sql first, then script2.sql two minutes later.

I then checked the v$px_session view from time to time with the following query:

SQL> select sid, qcsid, server#, degree
  2  from v$px_session;

At one time I saw:


SID QCSID SERVER# DEGREE
----- ---------- ---------- ----------
1756 1756
2797 1756 1 6
2538 1756 2 6
302 1756 3 6
1649 1756 4 6
3228 1756 5 6
3469 1756 6 6
2815 1756 5 6
2087 1756 6 6
3508 3654 1 6
3002 1756 1 6
3450 1756 2 6
2827 3654 2 6
2638 1756 3 6
825 3654 3 6
1305 1756 4 6
3374 3654 4 6
3654 3654
1331 3654 5 6
3516 3654 6 6
3136 3654 5 6
3388 3654 6 6
3364 3654 1 6
1521 3654 2 6
3844 3654 3 6
542 3654 4 6

 

I was able to assign QCSID=1756 to script1.sql and QCSID=3654 to script2.sql.

Most of the time, I observed:

SID QCSID SERVER# DEGREE
----- ---------- ---------- ----------
1756 1756
3844 3654 1 8
141 3654 2 8
1771 3654 3 8
921 3654 4 8
3374 3654 5 8
542 3654 6 8
3364 3654 7 8
1331 3654 8 8
2827 3654 1 8
3136 3654 2 8
665 3654 3 8
3516 3654 4 8
3388 3654 5 8
1649 3654 6 8
3508 3654 7 8
825 3654 8 8
3654 3654
3228 1756 1 4
2540 1756 2 4
1305 1756 3 4
2979 1756 4 4
3469 1756 1 4
2538 1756 2 4
302 1756 3 4
2797 1756 4 4

  

So it looks like if we specify parallel degree N, Oracle will need 2xN parallel execution servers. Since we have parallel_max_servers=24, if one job executes with degree 8, the other can only get degree 4 at most.

This is something I did not know before. I usually use the parallel degree option in the CREATE INDEX statement, i.e. at statement level; I have never enabled the parallel option at session level. Could that be the reason? I will find out. However, quickly checking the example in Tom Kyte's Expert Oracle Database Architecture, Chapter 14, p. 622, it looks like with degree=8 the server number is 8 too.

Looking at my v$px_session output again, I see that for QCSID=3654 there are two sets of SERVER# values running from 1 to 8.

I also tested the case of specifying the parallel degree at statement level. It likewise shows that 2xN parallel execution servers are needed:


SQL_TEXT ADDRESS HASH_VALUE SID USER_NAME
---------------------------------------- --------------- ---------- ------ ------------
ALTER INDEX SCH_1.AXXXX_PXXXXXXXXX_PK 000000058B0A50B8 594046221 555 OPS$ORACLE
REBUILD NOLOGGING NOREVERSE TABLESPACE
SCH_IDX_4 INITRANS 10 STORAGE (FREELI
STS 2 BUFFER_POOL DEFAULT) parallel (
degree 5 )


SQL> @px_session

SID QCSID SERVER# DEGREE
------ ---------- ---------- ----------
555 555
103 555 1 5
150 555 2 5
666 555 3 5
689 555 4 5
339 555 5 5
536 555 1 5
646 555 2 5
461 555 3 5
442 555 4 5
123 555 5 5





 


So, at this point, I believe that for index rebuilds at least, Oracle needs 2xN parallel execution servers when parallel degree N is specified.

I also tested a parallel query with a case similar to the one described in Tom Kyte's book, and found that Oracle used the same number of parallel execution servers as the degree specified.
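A likely explanation, which would be worth verifying: a parallel index build typically uses two server sets, one set of producers scanning the table and one set of consumers sorting and building the index, so a degree-N statement allocates 2xN slaves, while a simple parallel scan needs only one set. v$px_session exposes this through its SERVER_SET column; a sketch of a query to confirm:

```sql
-- Count slaves per server set for each query coordinator.
-- An index rebuild of degree N is expected to show two sets of N slaves each.
select qcsid, server_set, degree, count(*) slaves
from v$px_session
where server# is not null      -- exclude the coordinator's own row
group by qcsid, server_set, degree
order by qcsid, server_set;
```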

Wednesday, April 08, 2009

On the order of table being exported

We have a weekly exp job for a production 9.2.0.8 database which runs as a schema-mode export. Last weekend, when it exported the largest table (~200GB) in the schema, it encountered "ORA-01555: snapshot too old" and terminated at that point, so the remaining tables were not exported. This issue motivated me to think about the order in which tables are exported during a schema-mode export.

After testing for a while, I believe that in a traditional schema-mode exp job the tables are exported in alphabetical order of their names, whereas in a Data Pump exp job they are exported in ascending order of their creation time.

Below are the detailed steps and observations.

First I created ten tables with single-letter table names in random order in a 10g database, i.e.

testsrv1:/home/oracle/denis [testdb1] $ grep "create table" expord.sql
create table c
create table b
create table j
create table k
create table m
create table l
create table p
create table d
create table v
create table t



Then I did several tests using exp/imp or datapump exp/imp:

Test 1 schema mode export

I found that during the export using schema mode, the tables are exported in alphabetical order. I issued the following command:

exp testusr rows=n log=exp1.log owner=testusr file=exp1.dmp

In the exp1.log, it can be seen that the table exporting order is as follows:


. about to export TESTUSR's tables via Conventional Path ...
. . exporting table B
. . exporting table C
. . exporting table D
. . exporting table J
. . exporting table K
. . exporting table L
. . exporting table M
. . exporting table P
. . exporting table T
. . exporting table TEST
EXP-00091: Exporting questionable statistics.
EXP-00091: Exporting questionable statistics.
. . exporting table V



 

If I use the above exp dump file to do an imp and generate an index file, I can observe that the CREATE TABLE DDL statements in the indexfile are in the same order as the table export order, shown as follows:


imp testusr ignore=y full=y file=exp1.dmp log=imp1.log indexfile=imp1_index.sql

testsrv1:/home/oracle/denis [testdb1] $ grep "CREATE TABLE" imp1_index.sql
REM CREATE TABLE "TESTUSR"."B" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."C" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."D" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."J" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."K" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."L" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."M" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."P" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."T" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."TEST" ("N" NUMBER) PCTFREE 10 PCTUSED 40
REM CREATE TABLE "TESTUSR"."V" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,



Test 2 table mode export

In this test, I created the following parfile for table-mode export:

testsrv1:/home/oracle/denis [testdb1] $ cat exp.par
rows=n
tables=(
K
,B
,P
,D
,T
,J
,L
,M
,C
)


We can see that the table export order is the same as in the parfile:


testsrv1:/home/oracle/denis [testdb1] $ exp testusr log=exp2.log parfile=exp.par file=exp2.dmp

Export: Release 10.2.0.1.0 - Production on Wed Apr 8 06:40:41 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
Note: table data (rows) will not be exported

About to export specified tables via Conventional Path ...
. . exporting table K
. . exporting table B
. . exporting table P
. . exporting table D
. . exporting table T
. . exporting table J
. . exporting table L
. . exporting table M
. . exporting table C
Export terminated successfully without warnings.



Again, the CREATE TABLE DDL statement order in the indexfile is the same as the table export order.


imp testusr ignore=y full=y file=exp2.dmp log=imp2.log indexfile=imp2_index.sql

testsrv1:/home/oracle/denis [testdb1] $ grep "CREATE TABLE" imp2_index.sql
REM CREATE TABLE "TESTUSR"."K" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."B" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."P" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."D" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."T" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."J" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."L" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."M" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,
REM CREATE TABLE "TESTUSR"."C" ("OWNER" VARCHAR2(30) NOT NULL ENABLE,



  

Test 3 datapump export

In this test, I issued the following command to do schema-mode export using datapump:

expdp testusr directory=dir1 dumpfile=expdp1.dmp schemas=testusr logfile=expdp1.log content=metadata_only

However, the logfile gave no indication of the order in which the tables were exported:


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "TESTUSR"."SYS_EXPORT_SCHEMA_01": testusr/******** directory=dir1 dumpfile=expdp1.dmp schemas=testusr/******** logfile=expdp1.log content=metadata_only
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Master table "TESTUSR"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TESTUSR.SYS_EXPORT_SCHEMA_01 is:
/home/oracle/denis/expdp1.dmp
Job "TESTUSR"."SYS_EXPORT_SCHEMA_01" successfully completed at 06:59:52




Similarly, we can generate the DDL statements by specifying the sqlfile option in the impdp session:

impdp testusr directory=dir1 dumpfile=expdp1.dmp logfile=impdp1.log sqlfile=impdp1_sql.sql

Checking the CREATE TABLE order in the generated impdp1_sql.sql, we can see the order was actually the same as the order in which I created the tables.



testsrv1:/home/oracle/denis [testdb1] $ grep "CREATE TABLE" impdp1_sql.sql
CREATE TABLE "TESTUSR"."TEST"
CREATE TABLE "TESTUSR"."C"
CREATE TABLE "TESTUSR"."B"
CREATE TABLE "TESTUSR"."J"
CREATE TABLE "TESTUSR"."K"
CREATE TABLE "TESTUSR"."M"
CREATE TABLE "TESTUSR"."L"
CREATE TABLE "TESTUSR"."P"
CREATE TABLE "TESTUSR"."D"
CREATE TABLE "TESTUSR"."V"
CREATE TABLE "TESTUSR"."T"



  

I modified the table structure of table B and did the above expdp/impdp again; the CREATE TABLE order did not change. I then dropped table B, re-created it, and repeated the steps. This time 'CREATE TABLE B' moved to the last position. So what I can say is that, most likely, the tables in a schema-mode Data Pump export are ordered by table creation time.

Note: table TEST is an old table which existed before I performed the test.

Tuesday, April 07, 2009

Email from Unix server with attachment and message body

I followed the article "Creating email with a text message and an attachment from standard UNIX" to test this, but for some reason I was not successful; maybe it is because my OS is Sun Solaris, while the article describes AIX.

I tried the following script and found that it gave me what I wanted: the unix_attachment.lst file appeared as an attachment and the content of message.txt appeared in the email body.




#!/bin/ksh
# uuencode the attachment file and pipe it into mailx.
# Note the multi-line -s argument: the first line becomes the subject,
# and the output of the embedded `cat message.txt` ends up in the
# message body (this trick is mailx-version dependent)
uuencode unix_attachment.lst unix_attachment.lst | mailx -s "Message with Attachment -Hello

`cat message.txt`
" denis@email.com

Monday, April 06, 2009

Generate exp parfile for table mode export

We have an exp job that runs in schema mode. Now we want to use table-mode exp instead, and the following script can be used to generate the parfile. We have a release every month, which may add new tables, so generating the parfile with a script is handy. It is obviously possible to modify the query to generate the table list based on some condition.



set echo off heading off feed off pagesize 0
set linesize 200 trimspool on
spool export.par
pro userid=/
pro buffer=52428800
pro compress=n
pro constraints=y
pro indexes=y
pro grants=y
pro direct=Y
pro consistent=n
pro statistics=none
pro tables=(
select decode(rownum, 1, ' ', ',') ||
owner || '.' || table_name
from dba_tables where owner = 'DENIS';
pro )
spool off



  

The generated parfile looks like:


userid=/
buffer=52428800
compress=n
constraints=y
indexes=y
grants=y
direct=Y
consistent=n
statistics=none
tables=(

DENIS.BM
,DENIS.X
,DENIS.MY_LOAD_METRICS
,DENIS.T1
,DENIS.A
,DENIS.BI
,DENIS.T2
,DENIS.T14
,DENIS.PLAN_TABLE
)
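The leading space/comma layout comes from the decode(rownum, 1, ' ', ',') expression. As a sketch of the same formatting logic outside SQL*Plus (the table list is hard-coded here; in practice it would come from a dba_tables query):

```python
# Reproduce the decode(rownum, 1, ' ', ',') formatting used in the
# SQL*Plus script: first table gets a leading space, the rest a comma,
# so the generated tables=( ... ) clause is valid parfile syntax.
def parfile_tables(tables):
    lines = ["tables=("]
    for i, name in enumerate(tables):
        # first row -> space prefix, later rows -> comma prefix
        lines.append((" " if i == 0 else ",") + name)
    lines.append(")")
    return "\n".join(lines)

print(parfile_tables(["DENIS.BM", "DENIS.X", "DENIS.T1"]))
# tables=(
#  DENIS.BM
# ,DENIS.X
# ,DENIS.T1
# )
```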

ORA-14047 and Rename a Table

My manager pinged me today, asking for the syntax to rename a table. He said he was on a Crisis Management Call and was having an issue renaming a table, receiving ORA-14047. I remembered that I had trouble renaming tables when I implemented a partitioning task last year. The trick is that when using an ALTER statement, the new table name should not be prefixed with the schema name.

Below is my test to repeat the ORA-14047 error.


denis@TESTDB> create table t ( a int);

Table created.


denis@TESTDB> alter table denis.t rename to t12;

Table altered.


denis@TESTDB> alter table denis.t12 rename to denis.t13;
alter table denis.t12 rename to denis.t13
*
ERROR at line 1:
ORA-14047: ALTER TABLE|INDEX RENAME may not be combined with other operations


denis@TESTDB> alter table denis.t12 rename to t13;

Table altered.

denis@TESTDB> rename t13 to t14;

Table renamed.

Sunday, April 05, 2009

Display table column statistics

Recently I came across a script (STAT_SANITY.sql) on the RoughSea website that can be used to display table column statistics.

I created an alternative script, which can be used for the same purpose:


rem script: tab_col_stats.sql
rem check table column statistics collected by dbms_stats
rem

rem --- create a function to be used in the sql

create or replace function display_raw_ (rawval raw, type varchar2)
return varchar2 is
cn number;
cv varchar2(32);
cd date;
cnv nvarchar2(32);
cr rowid;
cc char(32);
begin
if (type = 'NUMBER')
then
dbms_stats.convert_raw_value(rawval, cn);
return to_char(cn);
elsif (type = 'VARCHAR2')
then
dbms_stats.convert_raw_value(rawval, cv);
return to_char(cv);
elsif (type = 'DATE')
then
dbms_stats.convert_raw_value(rawval, cd);
return to_char(cd);
elsif (type = 'NVARCHAR2')
then
dbms_stats.convert_raw_value_nvarchar(rawval, cnv);
return to_char(cnv);
elsif (type = 'ROWID')
then
dbms_stats.convert_raw_value_rowid(rawval, cr);
return to_char(cr);
elsif (type = 'CHAR')
then
dbms_stats.convert_raw_value(rawval, cc);
return to_char(cc);
else
return 'UNKNOWN DATATYPE';
end if;
end;
/

rem part 2

set linesize 120
col table_name format a15
col column_name format a20
col low_val format a15
col high_val format a15
select
a.column_name,
a.num_distinct distincts,
a.num_nulls nulls,
a.density,
a.num_buckets bkts,
display_raw_(a.low_value,b.data_type) as low_val,
display_raw_(a.high_value,b.data_type) as high_val
from
dba_tab_col_statistics a,
dba_tab_cols b
where a.table_name = upper('&tabname')
and a.owner=upper('&owner')
and a.table_name = b.table_name
and a.column_name = b.column_name
and a.owner = b.owner
order by 1
/

rem drop the function if necessary

drop function display_raw_
/




Sample output:


COLUMN_NAME DISTINCTS NULLS DENSITY BKTS LOW_VAL HIGH_VAL
-------------------- ---------- ---------- ---------- ---------- --------------- ---------------
ACCOUNT_ID 9101384 0 1.0987E-07 1 0 632471375
ARCHIVE_DATE 0 51654447 0 1
INT_MASTER_ORDER_ID 86208 0 .0000116 1 0 127140720
INT_ORDER_ID 51654447 0 1.9359E-08 1 7 127141566
INT_PQ_ORDER_ID 199175 0 5.0207E-06 1 0 127141560
INT_SUPP_ORDER_ID 5361989 0 5.1838E-07 21 0 334768529
IS_MASTER 2 0 .5 1 N Y
IS_PQ 2 0 .5 1 N Y
IS_VALID 2 0 .5 1 N Y
ORDER_COMP_DATE 13043687 8614054 7.6665E-08 1 2004-03-07 10:0 2009-04-03 20:0
1:20 0:02

ORDER_ID 51456267 0 1.9434E-08 1 CADX102866094 RSSI093199412
ORDER_SAVE_DATE 15392369 260 6.4967E-08 1 2004-03-20 02:5 2009-04-03 20:0
5:17 0:01
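For NUMBER columns, the RAW low_value/high_value can even be decoded by hand: Oracle stores NUMBERs in a base-100 format where the first byte encodes the exponent and each following byte is a base-100 digit plus one. The sketch below handles positive integer values only (no negatives, NULLs, or fractions); inside the database, dbms_stats.convert_raw_value remains the authoritative way to do this:

```python
def decode_positive_number(raw_hex):
    """Decode a positive Oracle NUMBER from its raw hex form.

    First byte: 193 + power-of-100 exponent of the leading digit pair.
    Remaining bytes: base-100 digits, each stored as digit + 1.
    Sketch only: positive values, no negatives or sub-1 decimals.
    """
    b = bytes.fromhex(raw_hex)
    exponent = b[0] - 193            # 0xC1 -> 0, 0xC2 -> 1, ...
    value = 0
    for i, digit_byte in enumerate(b[1:]):
        value += (digit_byte - 1) * 100 ** (exponent - i)
    return value

print(decode_positive_number("C102"))    # NUMBER 1
print(decode_positive_number("C20218"))  # NUMBER 123
```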

Thursday, April 02, 2009

Oracle Net does simple compression

I am supporting a "DR Test" tonight. I have completed my DBA steps and am watching the application guys test their steps. At this moment, I am asking myself: what have I learned about Oracle today? Well, I would like to paste the following example from Oracle-L, in which Tanel Poder demonstrated that Oracle Net does simple compression:


-----Original Message-----
From: Tanel Poder [mailto:tanel@poderc.com]
Sent: Thursday, April 02, 2009 3:25 PM
To: Herring Dave - dherri; oracle-l@freelists.org
Subject: RE: Less bytes when transferring table

Yep, Oracle Net does simple compression - when there are lots of rows with same consecutive column values sent over sqlnet then Oracle can send the column only once plus a count. Anjo Kolk wrote about it in his old blog but I couldn't find his article anymore.

Check the simple example (look into difference in bytes sent via SQL*Net even though the result data is the same):

SQL> select owner from dba_source order by dbms_random.random;

299151 rows selected.


Statistics
----------------------------------------------------------
8 recursive calls
0 db block gets
1994 consistent gets
0 physical reads
0 redo size
2782131 bytes sent via SQL*Net to client
6959 bytes received via SQL*Net from client
600 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
299151 rows processed

SQL>
SQL> select owner from dba_source order by owner;

299151 rows selected.


Statistics
----------------------------------------------------------
8 recursive calls
0 db block gets
1994 consistent gets
0 physical reads
0 redo size
1572261 bytes sent via SQL*Net to client
6959 bytes received via SQL*Net from client
600 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
299151 rows processed

SQL>

Ordered data causes fewer bytes to be transferred thanks to the simple compression.

So you can "tune" dblink bulk transfer over crappy WAN links by ordering the data somewhat. It's not something I'd like to rely on, though.
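The same principle can be demonstrated outside Oracle. The sketch below uses zlib merely as a stand-in for SQL*Net's simple compression (it is not the same algorithm): a stream of repeated consecutive values compresses much better than the identical values in random order, which is why the ORDER BY OWNER query sent fewer bytes:

```python
# Demonstrate why ordered data transfers fewer bytes: the same values,
# sorted vs shuffled, compress very differently. zlib stands in here
# for SQL*Net's (much simpler) repeated-value compression.
import random
import zlib

# same multiset of values either way, like the OWNER column of dba_source
owners = ["SYS"] * 10000 + ["SYSTEM"] * 10000 + ["DENIS"] * 10000
shuffled = owners[:]
random.shuffle(shuffled)

sorted_size = len(zlib.compress("\n".join(owners).encode()))
random_size = len(zlib.compress("\n".join(shuffled).encode()))

print(sorted_size, random_size)
# the ordered stream compresses to a small fraction of the shuffled one
assert sorted_size < random_size
```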