<b>Oracle Study Notes by Denis</b> - What have I learned about Oracle today?<br />
<br />
<b>MariaDB - a surprise dependent subquery execution plan</b> (2016-03-09)<br />
<br />
This post marks my entry into the MySQL world. While working with MariaDB 10.0.21, I came across the two queries shown below. One uses table aliases and the other does not; it seems to me there should be no difference between them
in terms of functionality or performance.<br />
<br />
-- query 1 with alias a and b
<br />
<pre>delete a FROM mydb.v_audit_info a
WHERE a.api_seq in
(SELECT cart_line_seq
FROM mydb.v_shopping_cart_item b
WHERE cart_seq = 127883 );
</pre>
-- query 2 without alias a and b
<br />
<pre>delete FROM mydb.v_audit_info
WHERE api_seq in
(SELECT cart_line_seq
FROM mydb.v_shopping_cart_item
WHERE cart_seq = 127883 );
</pre>
However, when I checked the execution plans, I got quite different results:
-- query 1
<br />
<pre>+------+-------------+-------+------+-----------------------------------------------------------+---------------------------+---------+-----------------------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+------+-----------------------------------------------------------+---------------------------+---------+-----------------------------+------+-------------+
| 1 | PRIMARY | b | ref | PRIMARY,v_shopping_cart_item_udx1,idx2_shopping_cart_item | v_shopping_cart_item_udx1 | 9 | const | 1 | Using index |
| 1 | PRIMARY | a | ref | idx1_audit_info | idx1_audit_info | 9 | mydb.b.cart_line_seq | 3 | |
+------+-------------+-------+------+-----------------------------------------------------------+---------------------------+---------+-----------------------------+------+-------------+
</pre>
-- query 2
<br />
<pre>+------+--------------------+----------------------+-----------------+-----------------------------------------------------------+---------+---------+------+-------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+--------------------+----------------------+-----------------+-----------------------------------------------------------+---------+---------+------+-------+-------------+
| 1 | PRIMARY | v_audit_info | ALL | NULL | NULL | NULL | NULL | 60646 | Using where |
| 2 | DEPENDENT SUBQUERY | v_shopping_cart_item | unique_subquery | PRIMARY,v_shopping_cart_item_udx1,idx2_shopping_cart_item | PRIMARY | 8 | func | 1 | Using where |
+------+--------------------+----------------------+-----------------+-----------------------------------------------------------+---------+---------+------+-------+-------------+
</pre>
In the execution plan for query 1, the subquery is treated as the driving table of a join: MariaDB first reads the matching cart_line_seq values from v_shopping_cart_item (b) via its index, and then accesses v_audit_info (a) through the idx1_audit_info index.<br />
<br />
In the execution plan for query 2, the subquery becomes a DEPENDENT SUBQUERY: MariaDB scans all 60,646 rows of v_audit_info and, for each row, re-evaluates the subquery condition.<br />
<br />
Using profiling, we can clearly see the huge difference in Duration.<br />
<br />
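For reference, a minimal sketch of enabling the session profiler in MariaDB/MySQL (not part of the original commands shown in this post):<br />
<br />
<pre>SET profiling = 1;   -- enable profiling for the current session
-- ... run the two DELETE statements ...
SHOW PROFILES;       -- lists Query_ID, Duration and the query text
</pre>
The resulting timings:<br />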
<pre>+----------+------------+--------------------------------------------------------------------+
| Query_ID | Duration | Query |
+----------+------------+--------------------------------------------------------------------+
| 1 | 0.00331659 | delete a FROM mydb.v_audit_info a
WHERE a.api_seq in
(SELECT cart_line_seq
FROM mydb.v_shopping_cart_item b
WHERE cart_seq = 127883 ) |
....
| 13 | 0.23299025 | delete FROM mydb.v_audit_info
WHERE api_seq in
(SELECT cart_line_seq
FROM mydb.v_shopping_cart_item
WHERE cart_seq = 127883 ) |
+----------+------------+--------------------------------------------------------------------+
</pre>
I don't know exactly why the optimizer treats them differently, but I will certainly suggest using the alias version.<br />
<br />
<b>Monitoring Oracle GoldenGate Latency</b> (2015-04-14)<br />
<br />
Today I implemented an approach to monitor OGG latency. Here I will describe what I did.
<br />
<b>1. Create a table gg_latency in source and target databases:</b>
<br />
<pre>create table gg_latency
(
extr varchar2(10),
pump varchar2(10),
repl varchar2(10),
update_time date
);
alter table gg_latency add constraint gg_latency_pk primary key(extr, pump, repl) using index;
</pre>
<b>2. Create a procedure that is used to update the latency table: </b><br />
<br />
<pre>create or replace procedure proc_update_gg_latency
is
begin
for rec in ( select * from gg_latency)
loop
update gg_latency set update_time=sysdate where extr=rec.extr and pump=rec.pump and repl = rec.repl;
commit;
end loop;
end;
/
</pre>
<b>3. Populate the table with every possible combination of the processing group names:</b><br />
<br />
For example, in my replication environment I have three Extract groups and three Pump groups at the source, and 15 Replicat groups at the target:
<br />
<pre>insert into gg_latency(extr, pump, repl) values('ECRUDR1', 'PCRUDR1', 'CRURDR1');
insert into gg_latency(extr, pump, repl) values('ECRUDR1', 'PCRUDR1', 'CRURDR1A');
insert into gg_latency(extr, pump, repl) values('ECRUDR1', 'PCRUDR1', 'CRURDR1B');
insert into gg_latency(extr, pump, repl) values('ECRUDR1', 'PCRUDR1', 'CRURDR1C');
insert into gg_latency(extr, pump, repl) values('ECRUDR1', 'PCRUDR1', 'CRURDR1D');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2A');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2B');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2C');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2D');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2F');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2G');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2H');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2M');
insert into gg_latency(extr, pump, repl) values('ECRUDR2', 'PCRUDR2', 'CRURDR2N');
insert into gg_latency(extr, pump, repl) values('ECRUDR3', 'PCRUDR3', 'CRURDR3');
</pre>
<b>4. For each EXTRACT group parameter file at source, add the TABLE clause with WHERE option for the GG_LATENCY table</b> , e.g.<br />
<br />
<pre> TABLE DB_ADMIN.GG_LATENCY WHERE ( EXTR="ECRUDR1");
</pre>
Note: do this for all the EXTRACT groups<br />
<br />
<br />
<b>5. For each PUMP group parameter file at source, add the TABLE clause with WHERE option for the GG_LATENCY table</b> , e.g.
<br />
<pre> TABLE DB_ADMIN.GG_LATENCY, WHERE (PUMP="PCRUDR1");
</pre>
Note: add this line before the PASSTHRU parameter if it exists; do this for all the PUMP groups<br />
<br />
<br />
<b> 6. For each REPLICAT group parameter file at target, add MAP clause with WHERE option for the GG_LATENCY table </b>, e.g.<br />
<br />
<pre>MAP DB_ADMIN.GG_LATENCY, TARGET DB_ADMIN.GG_LATENCY, WHERE (REPL='CRURDR1');</pre>
Note: do this for all the REPLICAT groups. In OGG 12c, single quotation marks should be used for literal strings.<br />
<br />
<b>7. Bounce all processes, since the parameter files have been modified </b><br />
<br />
<br />
<b>8. Create a scheduler job to update the latency table every minute
</b><br />
<pre>begin
DBMS_SCHEDULER.create_job (
job_name => 'UPDATE_GG_LATENCY_TABLE',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN db_admin.proc_update_gg_latency; END;',
start_date => trunc(sysdate, 'HH24'),
repeat_interval => 'freq=minutely',
end_date => NULL,
enabled => TRUE
);
end;
/
</pre>
<b>9. Check latency by the following query at target:
</b><br />
<pre>
SQL> select extr, pump, repl, update_time, round((sysdate - update_time) *24*60) latency_mins from gg_latency;
EXTR PUMP REPL UPDATE_TIME LATENCY_MINS
---------- ---------- ---------- -------------------- ------------
ECRUDR1 PCRUDR1 CRURDR1D 14-Apr-2015 12:46:00 1
ECRUDR1 PCRUDR1 CRURDR1B 14-Apr-2015 12:46:00 1
ECRUDR1 PCRUDR1 CRURDR1A 14-Apr-2015 12:46:00 1
ECRUDR2 PCRUDR2 CRURDR2D 14-Apr-2015 12:46:00 1
ECRUDR1 PCRUDR1 CRURDR1C 14-Apr-2015 12:46:00 1
ECRUDR1 PCRUDR1 CRURDR1 14-Apr-2015 12:46:00 1
ECRUDR2 PCRUDR2 CRURDR2H 14-Apr-2015 12:46:00 1
ECRUDR2 PCRUDR2 CRURDR2C 14-Apr-2015 12:46:00 1
ECRUDR2 PCRUDR2 CRURDR2N 14-Apr-2015 12:46:00 1
ECRUDR2 PCRUDR2 CRURDR2B 14-Apr-2015 12:46:00 1
ECRUDR2 PCRUDR2 CRURDR2A 14-Apr-2015 12:46:00 1
ECRUDR2 PCRUDR2 CRURDR2M 14-Apr-2015 12:46:00 1
ECRUDR2 PCRUDR2 CRURDR2G 14-Apr-2015 12:46:00 1
ECRUDR2 PCRUDR2 CRURDR2F 14-Apr-2015 12:46:00 1
ECRUDR3 PCRUDR3 CRURDR3 14-Apr-2015 12:46:00 1
15 rows selected.
</pre>
Note: As we update every minute, the smallest unit for latency is a minute.
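<br />
If you want to turn this check into an alert, a query along the following lines could be scheduled (the 10-minute threshold is just an illustrative value, not part of my actual setup):<br />
<br />
<pre>-- hypothetical alert query: list replication paths lagging by more than 10 minutes
select extr, pump, repl, update_time,
       round((sysdate - update_time) * 24 * 60) latency_mins
  from gg_latency
 where (sysdate - update_time) * 24 * 60 > 10;
</pre>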
<b>Just another example - Writing efficient SQL with analytic functions</b> (2015-03-13)<br />
<br />
I have a monitoring job set up to alert me when the buffer gets per execution of a SQL statement go above a certain threshold.
Today I received one as below:
<br />
<pre>=================================================
start : 13-MAR-15 02.00.23.885 PM
end : 13-MAR-15 03.00.48.168 PM
snapid from 107851 to 107853
=================================================
********************************************************
list sql with buffer gets per execution > 100000
********************************************************
!!!! ########## Expensive SQL found ####### !!!!
instance : 1
sql_id : drdbm833ack3c
Buffer get : 260358
Execs : 1
BG/exec : 260358
Gets/row : 700
SQL TEXT :
SELECT p.rec_id, p.CREATE_DATE,
p.ordered_zzzyyy_type,p.qualified_zzzyyy_type,p.req_type,p.c_transid,p.status,p.
ERR_CODE,p.ERR_DESC,p.SVC_STATE,p.IS_DOWNGRADED,p.ETF_WAIVE_FLAG FROM
BL_XYZ_CHNG_PKG p WHERE p.req_type='R' AND p.status || ''='IN' AND
p.jrs_indicator IN ('1','2','3') and p.c_transid IN (SELECT MAX(C_TRANSID) FROM
BL_XYZ_CHNG_PKG GROUP BY rec_id)
...
</pre>
After checking this SQL, I realized it may be a classic example where using an analytic function can save resources.<br />
<br />
Original query -- full table scan of the same table twice, ~259K consistent gets
<br />
<pre>SQL> SELECT p.rec_id,
2 p.CREATE_DATE,
3 p.ordered_zzzyyy_type,
4 p.qualified_zzzyyy_type,
5 p.req_type ,
6 p.c_transid,p.status,
7 p.ERR_CODE,p.ERR_DESC,p.SVC_STATE,p.IS_DOWNGRADED,p.ETF_WAIVE_FLAG
8 FROM
9 xyzu.BL_XYZ_CHNG_PKG p
10 WHERE p.req_type='R'
11 AND p.status || ''='IN'
12 AND p.jrs_indicator IN ('1','2','3')
13 and p.c_transid IN
14 (SELECT MAX(C_TRANSID) FROM BL_XYZ_CHNG_PKG GROUP BY rec_id)
15 ;
242 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3808252951
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1613 | 127K| | 83112 (3)| 00:16:38 |
|* 1 | HASH JOIN | | 1613 | 127K| | 83112 (3)| 00:16:38 |
|* 2 | TABLE ACCESS FULL | BL_XYZ_CHNG_PKG | 1613 | 107K| | 25463 (3)| 00:05:06 |
| 3 | VIEW | VW_NSO_1 | 6443K| 79M| | 57578 (2)| 00:11:31 |
| 4 | HASH GROUP BY | | 6443K| 86M| 148M| 57578 (2)| 00:11:31 |
| 5 | TABLE ACCESS FULL| BL_XYZ_CHNG_PKG | 6459K| 86M| | 25154 (2)| 00:05:02 |
------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("P"."C_TRANSID"="$nso_col_1")
2 - filter("P"."REQ_TYPE"='R' AND "P"."STATUS"||''='IN' AND ("P"."JRS_INDICATOR"='1'
OR "P"."JRS_INDICATOR"='2' OR "P"."JRS_INDICATOR"='3'))
Statistics
----------------------------------------------------------
95 recursive calls
0 db block gets
259618 consistent gets
267154 physical reads
0 redo size
10348 bytes sent via SQL*Net to client
261 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
242 rows processed
</pre>
Rewritten with the <span style="background-color: red;"><b>rank() over (partition by ...)</b></span> construct -- a single full table scan, ~129K consistent gets
<br />
<pre>
SQL> SELECT rec_id,
2 CREATE_DATE,
3 ordered_zzzyyy_type,
4 qualified_zzzyyy_type,
5 req_type ,
6 c_transid,status,
7 ERR_CODE,ERR_DESC,SVC_STATE,IS_DOWNGRADED,ETF_WAIVE_FLAG
8 from
9 (
10 SELECT p.rec_id,
11 p.CREATE_DATE,
12 p.ordered_zzzyyy_type,
13 p.qualified_zzzyyy_type,
14 p.req_type ,
15 p.c_transid,p.status,
16 p.ERR_CODE,p.ERR_DESC,p.SVC_STATE,p.IS_DOWNGRADED,p.ETF_WAIVE_FLAG,
17 rank() over ( partition by rec_id order by c_transid desc ) rank
18 FROM
19 xyzu.BL_XYZ_CHNG_PKG p
20 WHERE p.req_type='R'
21 AND p.status || ''='IN'
22 AND p.jrs_indicator IN ('1','2','3')
23 ) A
24 where rank=1;
242 rows selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3870177004
--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1613 | 546K| 25464 (3)| 00:05:06 |
|* 1 | VIEW | | 1613 | 546K| 25464 (3)| 00:05:06 |
|* 2 | WINDOW SORT PUSHED RANK| | 1613 | 107K| 25464 (3)| 00:05:06 |
|* 3 | TABLE ACCESS FULL | BL_XYZ_CHNG_PKG | 1613 | 107K| 25463 (3)| 00:05:06 |
--------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("RANK"=1)
2 - filter(RANK() OVER ( PARTITION BY "REC_ID" ORDER BY
INTERNAL_FUNCTION("C_TRANSID") DESC )<=1)
3 - filter("P"."REQ_TYPE"='R' AND "P"."STATUS"||''='IN' AND
("P"."JRS_INDICATOR"='1' OR "P"."JRS_INDICATOR"='2' OR "P"."JRS_INDICATOR"='3'))
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
129827 consistent gets
127794 physical reads
0 redo size
10184 bytes sent via SQL*Net to client
261 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
242 rows processed
</pre>
<b>Killing DBSNMP sessions not easy</b> (2015-03-06)<br />
<br />
We had a performance issue on a production database today. The CPU utilization went up. When I logged into the server and tried to log into
the database as sysdba, I got an ORA-00020 error:
<br />
<pre>$sqlplus "/ as sysdba"
SQL*Plus: Release 11.2.0.2.0 Production on Fri Mar 6 10:55:10 2015
Copyright (c) 1982, 2010, Oracle. All rights reserved.
ERROR:
ORA-00020: maximum number of processes (700) exceeded
</pre>
Another DBA reported that the session count for the DBSNMP user had gone way up. This user belongs to OEM. I stopped the OEM agent and he killed all DBSNMP sessions at the database level.
However, almost an hour later, we still saw a high DBSNMP session count:
<br />
<pre>Username COUNT(*)
---------------- ----------
DBSNMP 352
XXXX100 1
XXXXSELECT 1
XXXXUSER 232
IVIEW2XXXX 25
ORACLE 5
XX09195 1
XXX6621 6
XXX3294 1
XXDBMON 1
36
</pre>
Note: some usernames are masked to protect their innocence.<br />
<br />
Of course, we can see all DBSNMP sessions marked as "KILLED"
<br />
<pre>SQL> select username, status from v$session where username='DBSNMP';
Username Status
---------------- --------
DBSNMP KILLED
DBSNMP KILLED
DBSNMP KILLED
DBSNMP KILLED
DBSNMP KILLED
DBSNMP KILLED
DBSNMP KILLED
DBSNMP KILLED
DBSNMP KILLED
DBSNMP KILLED
DBSNMP KILLED
DBSNMP KILLED
DBSNMP KILLED
DBSNMP KILLED
DBSNMP KILLED
DBSNMP KILLED
...
</pre>
So it appeared to be a good idea to kill the processes at the OS level. I ran the following script, which I usually use to generate the KILL commands:
<br />
<pre>set pages 0
spool /tmp/kill_os
select 'kill -9 '|| p.spid
from v$process p , v$session s
where p.background is null
and s.paddr = p.addr (+)
and p.spid is not null
and s.username='DBSNMP'
order by p.spid
/
</pre>
Guess what: to my surprise, it returned nothing!
I got the count for those "LOCAL=NO" processes as below:
<br />
<pre>$ps -ef |grep "LOCAL=NO" | wc -l
659
</pre>
Now the problem was: how could I get the PIDs of the DBSNMP processes out of those 659 processes?
What I finally did was:<br />
<br />
1. load the PIDs of all those "LOCAL=NO" processes into a table db_admin.pid with one number column called ID:
<br />
<pre>ps -ef | grep LOCAL=NO | awk '{print "insert into db_admin.pid values(" $2 "); }'
</pre>
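For completeness, a hedged sketch of how those generated inserts could be loaded and committed (the staging table db_admin.pid is the one mentioned above; the file name is hypothetical):<br />
<br />
<pre># one-time setup, in SQL*Plus: create table db_admin.pid (id number);
ps -ef | grep "LOCAL=NO" | awk '{print "insert into db_admin.pid values(" $2 ");"}' > /tmp/load_pids.sql
echo "commit;" >> /tmp/load_pids.sql
sqlplus -s "/ as sysdba" @/tmp/load_pids.sql
</pre>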
2. Run the following query to generate the KILL commands<br />
<br />
<pre>select 'kill -9 ' || id from db_admin.pid x
where not exists (
select 1
from v$session s, v$process p
where s.paddr = p.addr (+)
and p.spid = x.id
)
order by id
/
</pre>
This trick worked. To prevent future trouble, we have also set up a profile limit on the number of sessions the DBSNMP user can have (illustrated below). We still don't know what caused the high session count, though.
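<br />
As an illustration, the session limit could look roughly like this (the profile name and the limit of 50 are made up for the example; RESOURCE_LIMIT must be TRUE for the limit to be enforced):<br />
<br />
<pre>-- hypothetical profile limiting concurrent DBSNMP sessions
create profile dbsnmp_prof limit sessions_per_user 50;
alter user dbsnmp profile dbsnmp_prof;
-- resource limits are only enforced when this parameter is TRUE
alter system set resource_limit = true scope=both;
</pre>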
<b>Just Learned: About listener.ora</b> (2015-03-06)<br />
<br />
Recently I have been planning to upgrade a 10.2.0.4 Oracle database to 12.1.0.2. For that, I created a test database called minidb to play with the upgrade steps. At the end, I needed to bring up a listener for that database. This is what I've just learned: <b>did you know we can start a listener without a listener.ora?</b>
The listener.ora is a file that stores listener configuration information. Because the configuration parameters have default values, it is possible to start and use a listener with no configuration.
The output below shows that I don't have a listener.ora file at the normal location ($ORACLE_HOME/network/admin), and that I don't have a listener up and running:
<br />
<pre>exxxxxspd02 /apps/opt/oracle/product/12.1.0/network/admin [minidb] > ls -l
total 7
drwxr-xr-x 2 oracle dba 6 Mar 6 08:41 samples
-rw-r--r-- 1 oracle dba 373 Mar 16 2014 shrept.lst
-rw-r--r-- 1 oracle dba 127 Mar 6 08:18 tnsnames.ora
exxxxxspd02 /apps/opt/oracle/product/12.1.0/network/admin [minidb] > lsnrctl status
LSNRCTL for Solaris: Version 12.1.0.2.0 - Production on 06-MAR-2015 08:42:20
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
TNS-12541: TNS:no listener
TNS-12560: TNS:protocol adapter error
TNS-00511: No listener
Solaris Error: 146: Connection refused
</pre>
Now I started up a listener:
<br />
<pre>exxxxxspd02 /apps/opt/oracle/product/12.1.0/network/admin [minidb] > lsnrctl start
LSNRCTL for Solaris: Version 12.1.0.2.0 - Production on 06-MAR-2015 08:44:59
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Starting /apps/opt/oracle/product/12.1.0/bin/tnslsnr: please wait...
TNSLSNR for Solaris: Version 12.1.0.2.0 - Production
Log messages written to /apps/opt/oracle/diag/tnslsnr/exxxxxspd02/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=exxxxxspd02)(PORT=1521)))
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Solaris: Version 12.1.0.2.0 - Production
Start Date 06-MAR-2015 08:44:59
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /apps/opt/oracle/diag/tnslsnr/exxxxxspd02/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=exxxxxspd02)(PORT=1521)))
The listener supports no services
The command completed successfully
</pre>
Notice that, by default, this listener listens on port 1521 and supports no services. I can register my minidb with this local listener through a mechanism called "Dynamic Service Registration". For that to happen, I need to configure an initialization parameter called "LOCAL_LISTENER":
<br />
<pre>exxxxxspd02 /apps/opt/oracle/product/12.1.0/network/admin [minidb] > sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Fri Mar 6 08:50:13 2015
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SQL> show parameter local_list
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
local_listener string
SQL> alter system set local_listener='(ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))' scope=both;
System altered.
SQL> ho lsnrctl status
LSNRCTL for Solaris: Version 12.1.0.2.0 - Production on 06-MAR-2015 08:51:47
Copyright (c) 1991, 2014, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Solaris: Version 12.1.0.2.0 - Production
Start Date 06-MAR-2015 08:44:59
Uptime 0 days 0 hr. 6 min. 48 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /apps/opt/oracle/diag/tnslsnr/exxxxxspd02/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=exxxxxspd02)(PORT=1521)))
Services Summary...
Service "minidb" has 1 instance(s).
Instance "minidb", status READY, has 1 handler(s) for this service...
The command completed successfully
</pre>
What happened in the background is that, by default, the LREG background process registers service information with its local listener on the default local address of TCP/IP port 1521.
<br />
<pre>exxxxxspd02 /apps/opt/oracle/product/12.1.0/network/admin [minidb] > ps -ef |grep lreg |grep -v grep
oracle 28186 1 0 08:24:15 ? 0:00 ora_lreg_minidb
</pre>
To ensure service registration works properly, the initialization parameter file should contain the following parameters:<br />
<br />
<ul>
<li>SERVICE_NAMES for the database service name </li>
<li>INSTANCE_NAME for the instance name </li>
<li>LOCAL_LISTENER for the local listener </li>
</ul>
<br />
Note the SERVICE_NAMES defaults to the global database name, a name comprising the DB_NAME and DB_DOMAIN parameters in the initialization parameter file.
The value for the INSTANCE_NAME parameter defaults to the Oracle system identifier (SID).
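<br />
As a minimal illustration (the values below simply reflect this minidb example; adjust them to your environment), the relevant initialization parameters could look like:<br />
<br />
<pre># init.ora / spfile entries relevant to dynamic service registration
service_names='minidb'     # defaults to the global database name (DB_NAME.DB_DOMAIN)
instance_name='minidb'     # defaults to the SID
local_listener='(ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))'
</pre>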
<b>OEM 12c Database Monitoring Concepts and Framework</b> (2014-10-11)<br />
<br />
I prepared a concept map to help me understand OEM 12c monitoring concepts and framework:<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWWeVnm7yhTVp5UYzQBjGXhlFm_GQjf4ObtgJfaaXG1tq5x0vdeQpoqnjyD3sZaojoysryJZNOMKqtFxCoDUu6ySKSCSQIs7KtQS-3wCbJY_JxUgr8vfuvjNXzz2zhc3DNoLdd/s1600/OEM_mon_concept.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWWeVnm7yhTVp5UYzQBjGXhlFm_GQjf4ObtgJfaaXG1tq5x0vdeQpoqnjyD3sZaojoysryJZNOMKqtFxCoDUu6ySKSCSQIs7KtQS-3wCbJY_JxUgr8vfuvjNXzz2zhc3DNoLdd/s1600/OEM_mon_concept.png" height="321" width="640" /></a></div>
<br />
<b>Using analytic functions can improve query performance greatly</b> (2014-06-12)<br />
<br />
Let's say we have a table t(x, y, z). How do we output the rows of the table that have the maximum value of z within each group of y?<br />
<br />
For example, if we have (1,1,2), (1,1,3), (2,1,1), we should get (1,1,3).
<br />
<br />
Below is a test case that demonstrates two approaches: approach 1 uses a subquery, resulting in 38 consistent gets; approach 2 uses an analytic function, resulting in 7 consistent gets.
<br />
<pre>SQL>@test
SQL>set echo on
SQL>drop table t;
Table dropped.
SQL>create table t(x number, y number, z number);
Table created.
SQL>
SQL>insert into t values(1,1,2);
1 row created.
SQL>insert into t values(1,1,3);
1 row created.
SQL>insert into t values(2,1,1);
1 row created.
SQL>
SQL>
SQL>set autotrace on
SQL>-- approach 1
SQL>
SQL>select x, y, z
2 from t
3 where
4 exists ( select 1 from
5 ( select y, max(z) mz
6 from t
7 group by y
8 ) zz
9 where t.y=zz.y
10 and t.z=zz.mz
11 )
12 ;
X Y Z
---------- ---------- ----------
1 1 3
1 row selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 3359768323
-----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3 | 117 | 5 (20)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | TABLE ACCESS FULL | T | 3 | 117 | 2 (0)| 00:00:01 |
|* 3 | FILTER | | | | | |
| 4 | HASH GROUP BY | | 1 | 26 | 3 (34)| 00:00:01 |
|* 5 | TABLE ACCESS FULL| T | 1 | 26 | 2 (0)| 00:00:01 |
-----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter( EXISTS (SELECT /*+ */ 0 FROM "T" "T" WHERE "Y"=:B1 GROUP
BY "Y",:B2 HAVING MAX("Z")=:B3))
3 - filter(MAX("Z")=:B1)
5 - filter("Y"=:B1)
Note
-----
- dynamic sampling used for this statement
Statistics
----------------------------------------------------------
78 recursive calls
0 db block gets
38 consistent gets
0 physical reads
0 redo size
270 bytes sent via SQL*Net to client
247 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL>
SQL>-- approach 2
SQL>
SQL>
SQL>select x,y,z from
2 (
3 select x,y, z, max(z) over (partition by y) mz
4 from t
5 )
6 where z=mz
7 ;
X Y Z
---------- ---------- ----------
1 1 3
1 row selected.
Execution Plan
----------------------------------------------------------
Plan hash value: 2206009079
----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3 | 156 | 3 (34)| 00:00:01 |
|* 1 | VIEW | | 3 | 156 | 3 (34)| 00:00:01 |
| 2 | WINDOW SORT | | 3 | 117 | 3 (34)| 00:00:01 |
| 3 | TABLE ACCESS FULL| T | 3 | 117 | 2 (0)| 00:00:01 |
----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("Z"="MZ")
Note
-----
- dynamic sampling used for this statement
Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
7 consistent gets
0 physical reads
0 redo size
272 bytes sent via SQL*Net to client
247 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed
</pre>
Approach 1 is essentially what is used in a query in one of our production databases that causes a performance issue.
I suggested we rewrite the query using approach 2. <a href="https://sites.google.com/site/denissunsite/misc/sqltuning_061214.txt" target="_blank">My test results</a> of the production-like query show that
the bad query needs 2M consistent gets to return 6382 rows, while the rewritten query needs only about 6K.
<b>Bitmap plans can estimate wrong cardinality</b> (2014-06-02)<br />
<br />
There is a problem query involving a 3-table join in one of our production databases. The difference between the good plan and the bad plan is the join order in a hash join execution plan.
The reason Oracle picks the bad join order is that the CBO chooses a bitmap conversion plan, and this plan gives a very wrong cardinality estimate.
What I did was set <em>"_b_tree_bitmap_plans"</em>=false at the system level, which allowed
the CBO to generate the good plan. Then I created a SQL plan baseline for that query from the good plan (using the sqlt/utl/coe_load_sql_baseline.sql script). After that, considering that some queries may benefit from
the bitmap access path, I changed <em>"_b_tree_bitmap_plans"</em> back to true; the sequence is sketched below.
<br />
<br />
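A rough sketch of that sequence (the coe_load_sql_baseline.sql invocation is schematic; the script prompts for the SQL_ID and the plan hash value of the good plan):<br />
<br />
<pre>-- temporarily disable b-tree to bitmap conversion so the CBO produces the good plan
alter system set "_b_tree_bitmap_plans" = false;
-- capture the good plan into a SQL plan baseline with the SQLT utility script
-- SQL> @sqlt/utl/coe_load_sql_baseline.sql
-- re-enable the bitmap access path for other queries
alter system set "_b_tree_bitmap_plans" = true;
</pre>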
The tests below demonstrate how wrong the cardinality estimated by the bitmap plan can be.
<br />
<br />
<ul>
<li>
In the bitmap plan, the estimated number of rows is 578K
</li>
</ul>
<pre>SQL> select * from BOD_OTHR_SERIAL_ACTIVITY_I8400 I8400
2 where
3 (I8400.DVTQ_ACCT_NUM = 'A145779917' OR (
4 I8400.UJTJPM_CUSTOMER_ID = '' AND I8400.UJTJPM_ACCOUNT_ID = ''
5 ))
6 ;
Execution Plan
----------------------------------------------------------
Plan hash value: 972526864
-------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 578K| 170M| 161K (11)| 00:05:17 |
| 1 | TABLE ACCESS BY INDEX ROWID | BOD_OTHR_SERIAL_ACTIVITY_I8400 | 578K| 170M| 161K (11)| 00:05:17 |
| 2 | BITMAP CONVERSION TO ROWIDS | | | | | |
| 3 | BITMAP OR | | | | | |
| 4 | BITMAP CONVERSION FROM ROWIDS| | | | | |
|* 5 | INDEX RANGE SCAN | IX_I8400_04 | | | 1 (0)| 00:00:01 |
| 6 | BITMAP CONVERSION FROM ROWIDS| | | | | |
|* 7 | INDEX RANGE SCAN | IX_I8400_07 | | | 3 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
5 - access("I8400"."UJTJPM_CUSTOMER_ID"='' AND "I8400"."UJTJPM_ACCOUNT_ID"='')
7 - access("I8400"."DVTQ_ACCT_NUM"='A145779917')
</pre>
<ul>
<li>-- disable the bitmap plan
</li>
</ul>
<pre>SQL> alter session set "_b_tree_bitmap_plans"=false;
Session altered.
</pre>
<ul>
<li>-- Now the estimated number of rows is 2
</li>
</ul>
<pre>SQL> select * from BOD_OTHR_SERIAL_ACTIVITY_I8400 I8400
2 where
3 (I8400.DVTQ_ACCT_NUM = 'A145779917' OR (
4 I8400.UJTJPM_CUSTOMER_ID = '' AND I8400.UJTJPM_ACCOUNT_ID = ''
5 ))
6 ;
Execution Plan
----------------------------------------------------------
Plan hash value: 7978316
---------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 618 | 9 (0)| 00:00:01 |
| 1 | CONCATENATION | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| BOD_OTHR_SERIAL_ACTIVITY_I8400 | 1 | 309 | 6 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | IX_I8400_07 | 1 | | 4 (25)| 00:00:01 |
|* 4 | TABLE ACCESS BY INDEX ROWID| BOD_OTHR_SERIAL_ACTIVITY_I8400 | 1 | 309 | 3 (0)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | IX_I8400_04 | 1 | | 2 (50)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("I8400"."DVTQ_ACCT_NUM"='A145779917')
4 - filter(LNNVL("I8400"."DVTQ_ACCT_NUM"='A145779917'))
5 - access("I8400"."UJTJPM_CUSTOMER_ID"='' AND "I8400"."UJTJPM_ACCOUNT_ID"='')
</pre>
<ul>
<li>The actual number of rows is 1:
</li>
</ul>
<pre>SQL> select count(*) from BOD_OTHR_SERIAL_ACTIVITY_I8400 I8400
2 where
3 (I8400.CUST_ACCT_NUM = 'A145779917' OR (
4 I8400.UJTJPM_CUSTOMER_ID = '' AND I8400.UJTJPM_ACCOUNT_ID = ''
5 ))
6 ;
COUNT(*)
----------
1
1 row selected.
</pre>
<b>SQL Tuning</b> (2014-04-13)<br />
<br />
Things I think are important to know for SQL tuning:<br />
<br />
<ul>
<li>Be able to read execution plans </li>
</ul>
- right-most first, from top to bottom <br />
-
<a href="https://www.simple-talk.com/sql/performance/designing-efficient-sql-a-visual-approach/" target="_blank">visual approach</a> to assist understanding <br />
- pay attention to the Predicate
Information section
<br />
<ul>
<li>Obtaining the execution plan </li>
</ul>
- AUTOTRACE <br />
- dbms_xplan.display_cursor
<br />
<ul>
<li>Cardinality </li>
</ul>
- the predicted number of rows generated by an operation <br />
-
cardinality = selectivity * ( number of input rows)
<br />
<ul>
<li>COST</li>
</ul>
- represents the optimizer's best estimate of the time it will take to
execute the statement; the CBO always chooses the execution plan with the minimum cost.
<br />
<ul>
<li><a href="http://oracle-study-notes.blogspot.com/2010/05/tuning-by-cardinality-feedback-and.html" target="_blank">Tuning by cardinality feedback</a> </li>
</ul>
- using dbms_xplan.display_cursor to compare
E-Rows vs A-Rows (set statistics_level=all; or use the /*+ gather_plan_statistics */ hint); see the sketch below
<br />
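For the cardinality-feedback item above, a minimal sketch of obtaining E-Rows vs A-Rows (the table name t is just a placeholder):<br />
<br />
<pre>SQL> select /*+ gather_plan_statistics */ count(*) from t where y = 1;
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
</pre>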
<ul>
<li>AWR, ADDM, ASH and the wait interface - be able to identify the offending SQLs </li>
<li>bind variable peeking and plan instability </li>
<li>10046 trace & tkprof </li>
</ul>
- a rule of thumb to check efficiency: (query + current) / rows < 20<br />
<ul>
<li><a href="http://oracle-study-notes.blogspot.com/search?q=10053" target="_blank">10053 trace</a> </li>
<li><a href="http://docs.oracle.com/cd/E11882_01/server.112/e41573/optimops.htm#i51523" target="_blank">join method and join order</a> </li>
<li>Be able to check table indexes and table column statistics (HISTOGRAM) </li>
<li>Use a top-n query to show the data distribution </li>
</ul>
i.e. select * from ( select
col_of_interest, count(*) from table_A group by col_of_interest order by 2 desc)
where rownum <= 50
<br />
<ul>
<li>Talk to lead developers </li>
</ul>
- tune questions, not just queries
<br />
<ul>
<li>Methods to fix or improve a bad query </li>
</ul>
- index re-design, <br />
- hints, <br />
- gather
stats, <br />
- explore parallelism, <br />
- plan stability: sql profile( sqlt coe* script) and
sql plan baseline (sqlt coe* script)<br />
<br />
<b>Creating a DG Broker Configuration and Performing Switchover Testing</b> (2014-04-11)<br />
<br />
<strong>1. Register a static service</strong>
<br />
<br />
To enable DGMGRL to restart instances during the course of broker operations, a static service must be registered.
In the listener.ora, entries of the following form should be present:
<br />
<pre> LISTENER = (DESCRIPTION =
(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=host_name)
(PORT=port_num))))
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(SID_NAME=sid_name)
(GLOBAL_DBNAME=db_unique_name_DGMGRL.db_domain)
(ORACLE_HOME=oracle_home)
(ENVS="TNS_ADMIN=oracle_home/network/admin")))
</pre>
In my case, for example, the listener.ora includes:
<br />
<pre> SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = etsdb)
      (GLOBAL_DBNAME = etsdb_DGMGRL)
      (ORACLE_HOME = /apps/opt/oracle/product/11.2.0)
      (ENVS="TNS_ADMIN=/apps/opt/oracle/product/11.2.0/network/admin")
    )
  )
</pre>
The LOCAL_LISTENER initialization parameter should also be set, i.e.:<br />
<br />
<pre>SQL> alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=server_name)(PORT=15xx))' scope=both;</pre>
<br />
<br />
For more details, check: Oracle Data Guard Broker and Static Service Registration (Doc ID 1387859.1)
<br />
<br />
<strong>2. Create a broker configuration</strong>
<br />
<br />
Prerequisite: DG_BROKER_START should be set to TRUE in order for the DMON background process to start; see the sketch below.
<br />
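A minimal sketch of enabling the broker (to be run on both the primary and the standby before creating the configuration):<br />
<br />
<pre>SQL> alter system set dg_broker_start=true scope=both;
</pre>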
<pre>xxx.com:/db1/opt/oracle/product/11.2.0/dbs [ettdb] $ dgmgrl
DGMGRL for Solaris: Version 11.2.0.3.0 - 64bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys
Password:
Connected.
DGMGRL> create configuration 'DG_ETS_CONFIG' as
> primary database is 'ettdb'
> connect identifier is ettdb;
Configuration "DG_ETS_CONFIG" created with primary database "ettdb"
DGMGRL> add database 'etsdb' as
> connect identifier is etsdb;
Database "etsdb" added
DGMGRL> show configuration
Configuration - DG_ETS_CONFIG
Protection Mode: MaxPerformance
Databases:
ettdb - Primary database
etsdb - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
DISABLED
</pre>
<strong>3. Enable the configuration</strong>
<br />
<pre>DGMGRL> enable configuration
Enabled.
DGMGRL> show configuration
Configuration - DG_ETS_CONFIG
Protection Mode: MaxPerformance
Databases:
ettdb - Primary database
etsdb - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
</pre>
<strong>4. Test switchover</strong>
<br />
<pre>xxx.com:/home/oracle [ettdb] $ dgmgrl
DGMGRL for Solaris: Version 11.2.0.3.0 - 64bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys
Password:
Connected.
DGMGRL> show configuration
Configuration - DG_ETS_CONFIG
Protection Mode: MaxPerformance
Databases:
ettdb - Primary database
etsdb - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
DGMGRL> switchover to etsdb
Performing switchover NOW, please wait...
New primary database "etsdb" is opening...
Operation requires shutdown of instance "ettdb" on database "ettdb"
Shutting down instance "ettdb"...
ORACLE instance shut down.
Operation requires startup of instance "ettdb" on database "ettdb"
Starting instance "ettdb"...
ORACLE instance started.
Database mounted.
Database opened.
Switchover succeeded, new primary is "etsdb"
DGMGRL> show configuration
DGMGRL> switchover to ettdb
Performing switchover NOW, please wait...
New primary database "ettdb" is opening...
Operation requires shutdown of instance "etsdb" on database "etsdb"
Shutting down instance "etsdb"...
ORACLE instance shut down.
Operation requires startup of instance "etsdb" on database "etsdb"
Starting instance "etsdb"...
ORACLE instance started.
Database mounted.
Database opened.
Switchover succeeded, new primary is "ettdb"
Configuration - DG_ETS_CONFIG
Protection Mode: MaxPerformance
Databases:
etsdb - Primary database
ettdb - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
</pre>
<strong>A few issues encountered during testing:</strong>
<br />
<br />
<em>1. When setting up the configuration in DGMGRL, I received ORA-00088: command cannot be executed by shared server</em>
<br />
<br />
The application this database supports requires shared server. The fix is to add "(server=dedicated)" in the tnsnames.ora,
i.e.:
<br />
<pre>ettdb =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = xxx.xxx.com)(PORT = 1523))
)
(CONNECT_DATA =
(SERVICE_NAME = ettdb )
(server=dedicated)
)
)
</pre>
<em>2. ORA-16664: unable to receive the result from a database</em>
<br />
<br />
It turned out to be due to a typo I made when doing "create configuration":
<br />
<br />
DGMGRL> create configuration 'DG_ETS_CONFIG' as<br />
> primary database is 'ettdb'<br />
> connect identifier is etsdb; <strong><span style="color: blue;">===========> should be ettdb obviously</span></strong><br />
<br />
<br />
<em>3. switchover failed in an attempt</em><br />
<br />
<br />
While I used dgmgrl to connect to the primary db, I also had a dgmgrl session connected to the standby db. This appears to have been the reason.
In the Data Guard Broker log file at the standby site I saw:
<br />
<pre>SQL Execution error=604, sql=[ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WAIT WITH SESSION SHUTDOWN]. See error stack below.
ORA-00604: error occurred at recursive SQL level 1
ORA-01093: ALTER DATABASE CLOSE only permitted with no sessions connected
Sswitchover to primary command failed
</pre>
After disconnecting any sessions connected to the standby db, switchover to the standby succeeded.
<br />
<br />
References:
<a href="http://docs.oracle.com/cd/E11882_01/server.112/e40771/configure.htm#i1021661">http://docs.oracle.com/cd/E11882_01/server.112/e40771/configure.htm#i1021661</a>
<b>Manage Physical Standby Database - Add a Datafile</b> (2014-04-08)<br />
<br />
When a datafile is added to the primary database, what can happen at the standby database? Today I had some hands-on experience with that.
<br />
<br />
<strong>Case 1 - STANDBY_FILE_MANAGEMENT set to MANUAL at standby database</strong>
<br />
<br />
In this case, an UNNAMED file will be created under $ORACLE_HOME/dbs, and we need to issue
'ALTER DATABASE CREATE DATAFILE ... AS' to fix it.
<br />
<pre>(1) At primary:
alter tablespace dbaets_data add datafile '/db2/u01/oradata/ettdb/dbaets_data_02.dbf' size 500M;
(2) At standby, query v$datafile
NAME SIZE_M
------------------------------------------------- -------
/db2st2/u01/oradata/etsdb/system01.dbf 500
/db2st2/u01/oradata/etsdb/sysaux01.dbf 2048
/db2st2/u01/oradata/etsdb/undotbs01.dbf 271
/db2st2/u01/oradata/etsdb/users01.dbf 500
/db2st2/u01/oradata/etsdb/xdb01.dbf 1024
/db2st2/u01/oradata/etsdb/dbaets_data_01.dbf 500
<span style="color: blue;">/apps/opt/oracle/product/11.2.0/dbs/UNNAMED00007</span> 0
(3) At standby, issue 'ALTER DATABASE CREATE DATAFILE ... AS'
ALTER DATABASE CREATE DATAFILE '/apps/opt/oracle/product/11.2.0/dbs/UNNAMED00007' AS '/db2st2/u01/oradata/etsdb/dbaets_data_02.dbf';
(4) At standby, query v$datafile again:
NAME SIZE_M
-------------------------------------------------- ----------
/db2st2/u01/oradata/etsdb/system01.dbf 500
/db2st2/u01/oradata/etsdb/sysaux01.dbf 2048
/db2st2/u01/oradata/etsdb/undotbs01.dbf 271
/db2st2/u01/oradata/etsdb/users01.dbf 500
/db2st2/u01/oradata/etsdb/xdb01.dbf 1024
/db2st2/u01/oradata/etsdb/dbaets_data_01.dbf 500
/db2st2/u01/oradata/etsdb/dbaets_data_02.dbf 500
</pre>
<strong>Case 2 - STANDBY_FILE_MANAGEMENT set to AUTO, but file cannot be created due to other error</strong>
<br />
<br />
At the standby, initially I had DB_FILE_NAME_CONVERT set as follows:<br />
<br />
<br />
<pre>
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert string /db2st2/u01/oradata/etsdb, /db
2/u01/oradata/ettdb
</pre>
This is not in the correct order: at the standby site the path is '/db2st2/u01/oradata/etsdb',
whereas '/db2/u01' does not exist.<br />
<br />
At primary, I added another datafile:
<br />
<br />
<pre>ops$oracle@ETSDB> alter tablespace dbaets_data add datafile '/db2/u01/oradata/ettdb/dbaets_data_03.dbf' size 100M;
Tablespace altered.
</pre>
In the alert log file of the standby database, it can be seen that the file is not created and the managed recovery process (MRP0) is terminated:
<br />
<br />
<pre>Media Recovery Log /db2st2/arch/1_1115_790089239.dbf
WARNING: File being created with same name as in Primary
Existing file may be overwritten
Errors in file /apps/opt/oracle/admin/etsdb/diag/rdbms/etsdb/etsdb/trace/etsdb_pr00_8554.trc:
ORA-01119: error in creating database file '/db2/u01/oradata/ettdb/dbaets_data_03.dbf'
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
SVR4 Error: 13: Permission denied
File #8 added to control file as 'UNNAMED00008'.
Originally created as:
'/db2/u01/oradata/ettdb/dbaets_data_03.dbf'
Recovery was unable to create the file as:
'/db2/u01/oradata/ettdb/dbaets_data_03.dbf'
MRP0: Background Media Recovery terminated with error 1274
Errors in file /apps/opt/oracle/admin/etsdb/diag/rdbms/etsdb/etsdb/trace/etsdb_pr00_8554.trc:
ORA-01274: cannot add datafile '/db2/u01/oradata/ettdb/dbaets_data_03.dbf' - file could not be created
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovery stopped due to failure in applying recovery marker (opcode 17.30).
Datafiles are recovered to a consistent state at change 12910838021570 but controlfile could be ahead of datafiles.
</pre>
<br />
I reset db_file_name_convert to '/db2/u01/oradata/ettdb','/db2st2/u01/oradata/etsdb' and restarted MRP0;
in the alert log I saw:<br />
<br />
<pre>MRP0: Background Managed Standby Recovery process started (etsdb)
started logmerger process
Tue Apr 08 16:43:35 2014
Managed Standby Recovery starting Real Time Apply
MRP0: Background Media Recovery terminated with error 1111
Errors in file /apps/opt/oracle/admin/etsdb/diag/rdbms/etsdb/etsdb/trace/etsdb_pr00_10330.trc:
ORA-01111: name for data file 8 is unknown - rename to correct file
ORA-01110: data file 8: '/apps/opt/oracle/product/11.2.0/dbs/UNNAMED00008'
ORA-01157: cannot identify/lock data file 8 - see DBWR trace file
ORA-01111: name for data file 8 is unknown - rename to correct file
ORA-01110: data file 8: '/apps/opt/oracle/product/11.2.0/dbs/UNNAMED00008'
Managed Standby Recovery not using Real Time Apply
Recovery Slave PR00 previously exited with exception 1111
MRP0: Background Media Recovery process shutdown (etsdb)
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT
</pre>
<br />
So an UNNAMED file is created at $ORACLE_HOME/dbs and MRP0 is stopped again. To fix:<br />
<br />
<pre>SQL> alter system set standby_file_management=manual;
System altered.
SQL> alter database create datafile '/apps/opt/oracle/product/11.2.0/dbs/UNNAMED00008' as '/db2st2/u01/oradata/etsdb/dbaets_data_03.dbf';
Database altered.
SQL> alter system set standby_file_management=auto;
System altered.
</pre>
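After the datafile is created, the managed recovery process has to be started again, using the same command as elsewhere in this post:<br />
<br />
<pre>SQL> alter database recover managed standby database using current logfile disconnect;
</pre>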
<strong>Case 3 STANDBY_FILE_MANAGEMENT set to AUTO and DB_FILE_NAME_CONVERT is correct
</strong><br />
<br />
No issue as expected:
<br />
At primary:
<br />
<br />
<pre>ops$oracle@ETSDB> alter tablespace dbaets_data add datafile '/db2/u01/oradata/ettdb/dbaets_data_04.dbf' size 100M;
Tablespace altered.
</pre>
<br />
At standby, the alert log shows the datafile is added with the converted file name:
<br />
<br />
<pre>Tue Apr 08 16:56:22 2014
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT
Attempt to start background Managed Standby Recovery process (etsdb)
Tue Apr 08 16:56:22 2014
MRP0 started with pid=33, OS id=12229
MRP0: Background Managed Standby Recovery process started (etsdb)
started logmerger process
Tue Apr 08 16:56:27 2014
Managed Standby Recovery starting Real Time Apply
Parallel Media Recovery started with 4 slaves
Waiting for all non-current ORLs to be archived...
All non-current ORLs have been archived.
Media Recovery Log /db2st2/arch/1_1115_790089239.dbf
Media Recovery Log /db2st2/arch/1_1116_790089239.dbf
Media Recovery Log /db2st2/arch/1_1117_790089239.dbf
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT
<span style="color: blue;">Recovery created file /db2st2/u01/oradata/etsdb/dbaets_data_04.dbf</span>
Successfully added datafile 9 to media recovery
Datafile #9: '/db2st2/u01/oradata/etsdb/dbaets_data_04.dbf'
</pre>
<br />
Note: The testing Data Guard environment is the same as the one described in <a href="http://oracle-study-notes.blogspot.com/2014/03/create-physical-standby-database.html" target="_blank">the previous post</a>. However, I did a switchover, so in today's testing ettdb is the primary and etsdb is the standby.<br />
<br />
<b>Create a physical standby database through RMAN duplicate of active data files</b> (2014-03-27)<br />
<br />
I have a very small database (~5GB) called etsdb, which hosts an Oracle APEX application called <a href="https://sites.google.com/site/denissunsite/my-oracle-dba-work/dbaets" target="_blank">DBAETS</a> for DBA team internal use. <br />
<br />
In the past, as a better-than-nothing backup and disaster recovery strategy, I exported it every night and shipped the export dump file to a remote server. Now I have decided to create a physical standby database for it on that same remote server. <br />
<br />
The transaction volume of this application is typically very low, as at any given time only two or three DBAs might be logged in updating some info. So it is ideal to use the RMAN active database duplication technique to create the physical standby. In this method, the RMAN duplicate process copies the active data files of the primary database directly to the remote host over the network. RMAN supports two basic types of duplication: active database duplication and backup-based duplication. <a href="http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmdupdb.htm#BGBDIJJH" target="_blank">The Oracle online documentation</a> has a section about the considerations for choosing a duplication technique.<br />
<div>
<br /></div>
<br />
I describe the steps involved in the following:<br />
<br />
<strong>1. Prepare the production database to be the primary database</strong>
<br />
<br />
<em>a. Ensure that the database is in archivelog mode</em>
<br />
<br />
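A quick way to check, and a minimal sketch of enabling it if needed (switching the mode requires a clean shutdown and a mounted instance):<br />
<br />
<pre>SQL> select log_mode from v$database;
-- if it returns NOARCHIVELOG:
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
</pre>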
<em>b. Enable force logging</em>
<br />
<br />
<pre> SQL> ALTER DATABASE FORCE LOGGING;
</pre>
<br />
<em>c. Create standby redologs</em>
<br />
<br />
<pre> SQL> alter database add standby logfile '/db2st2/u01/oradata/etsdb/stby_redo01.rdo' size 100M;
</pre>
<em></em><br />
<em>d. Modify the primary initialization parameters for Data Guard on the primary</em>
<br />
<pre>alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(etsdb,ettdb)';
alter system set LOG_ARCHIVE_DEST_1='LOCATION=/db2st2/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=etsdb';
alter system set LOG_ARCHIVE_DEST_2='SERVICE=ettdb LGWR ASYNC VALID_FOR=(online_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=ettdb';
alter system set LOG_ARCHIVE_DEST_STATE_1=ENABLE;
alter system set FAL_SERVER=ettdb;
alter system set FAL_CLIENT=etsdb;
alter system set DB_FILE_NAME_CONVERT='/db2st2/u01/oradata/etsdb','/db2/u01/oradata/ettdb' scope=spfile;
alter system set LOG_FILE_NAME_CONVERT='/db2st2/u01/oradata/etsdb','/db2/u01/oradata/ettdb', '/db2st2/u02/oradata/etsdb','/db2/u02/oradata/ettdb' scope=spfile;
</pre>
<strong>2. Ensure that the sql*net connectivity is working fine.</strong>
<br />
<br />
Configure the listener in standby and put the following tns entries in tnsnames.ora:
<br />
<pre> ettdb =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = standby_server)(PORT = 1523))
)
(CONNECT_DATA =
(SERVICE_NAME = ettdb )
)
)
etsdb =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = primary_server)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = etsdb )
)
)
</pre>
Confirm that tnsping ettdb
and tnsping etsdb
work on both primary and standby sites
<br />
<br />
<strong>3. Create the standby database</strong>
<br />
<br />
<em>a. Copy the password file from the primary $ORACLE_HOME/dbs and rename it to the standby database name.</em>
<br />
<br />
The username is required to be SYS and the password needs to be the same on the Primary and Standby.
The best practice for this is to copy the passwordfile as suggested.
The password file name must match the instance name/SID used at the standby site, not the DB_NAME.
<br />
<br />
orapwetsdb ----> orapwettdb
<br />
<br />
<em>b. Create an initialization parameter file for the standby; only DB_NAME is strictly required (here I also set DB_UNIQUE_NAME and DB_BLOCK_SIZE).</em>
<br />
<pre>
-- initettdb.ora
DB_NAME=etsdb
DB_UNIQUE_NAME=ettdb
DB_BLOCK_SIZE=8192
</pre>
<em></em><br />
<em>c</em><em>. Create the necessary directories in the standby location to place the datafiles and the trace files in the $ADR_HOME.</em>
<br />
<br />
<em>d. Set the environment variable ORACLE_SID to the standby service and start the standby-instance.</em>
<br />
<pre> % export ORACLE_SID=ettdb
% sqlplus "/ as sysdba"
SQL> startup nomount pfile=$ORACLE_HOME/dbs/initettdb.ora
</pre>
<br />
<em>e. Verify if the connection 'AS SYSDBA' is working</em>
<br />
sqlplus sys@ettdb as sysdba<br />
sqlplus sys@etsdb as sysdba
<br />
<br />
<em>f. On the primary system invoke the RMAN executable and connect to the primary and the auxiliary database ( i.e., the standby)</em><br />
<br />
$ rman target sys/xxx@etsdb auxiliary sys/xxx@ettdb cmdfile=dup.rmn <br />
<br />
content of dup.rmn
:<br />
<pre>duplicate target database for standby from active database
spfile
parameter_value_convert 'etsdb','ettdb'
set db_unique_name='ettdb'
set db_file_name_convert='/db2st2/u01/oradata/etsdb','/db2/u01/oradata/ettdb','/db2st2','/db2'
set log_file_name_convert='/db2st2/u01/oradata/etsdb','/db2/u01/oradata/ettdb', '/db2st2/u02/oradata/etsdb','/db2/u02/oradata/ettdb'
set control_files='/db2/u01/oradata/ettdb/control01.ctl'
set log_archive_dest_1 = 'LOCATION=/db2/arch VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=ettdb'
set log_archive_max_processes='5'
set fal_client='ettdb'
set fal_server='etsdb'
set standby_file_management='AUTO'
set log_archive_config='dg_config=(etsdb,ettdb)'
set log_archive_dest_2='service=etsdb ASYNC valid_for=(ONLINE_LOGFILE,PRIMARY_ROLE) db_unique_name=etsdb'
;
</pre>
<br />
<strong>4. Connect to standby using SQL*Plus and start the MRP ( Managed Recovery Process)
</strong>
<br />
<br />
-- mount db first if required
<br />
<br />
alter database recover managed standby database disconnect from session;
<br />
<br />
<strong>5. If licensed and you want to use Active Data Guard (ADG), then open the standby database READ ONLY and restart the recovery</strong>
<br />
<br />
Enable Active Data Guard using SQL Plus
:<br />
<br />
<pre>SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
SQL> alter database recover managed standby database disconnect using current logfile;
-- verify
SQL> select name, db_unique_name, database_role, open_mode from v$database;
NAME DB_UNIQUE_NAME DATABASE_ROLE OPEN_MODE
--------- ------------------------------ ---------------- --------------------
ETSDB ettdb PHYSICAL STANDBY READ ONLY WITH APPLY
</pre>
Testing to confirm that the standby db can be read while redo logs are being applied:
<br />
<pre>in standby
create table dbaets.t as select * from all_objects
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-16000: database open for read-only access
in primary
1* select thread#, max(sequence#) from v$log_history group by thread#
SQL> /
THREAD# MAX(SEQUENCE#)
---------- --------------
1 1057
SQL> create table dbaets.t as select * from all_objects;
Table created.
SQL> alter system archive log current;
System altered.
SQL> select thread#, max(sequence#) from v$log_history group by thread#
2 ;
THREAD# MAX(SEQUENCE#)
---------- --------------
1 1058
in standby:
SQL> select thread#, max(sequence#) from v$log_history group by thread#;
THREAD# MAX(SEQUENCE#)
---------- --------------
1 1058
SQL> select count(*) from dbaets.t;
COUNT(*)
----------
17855
</pre>
References:
<br />
<br />
Step by Step Guide on Creating Physical Standby Using RMAN DUPLICATE...FROM ACTIVE DATABASE (Doc ID 1075908.1)
<br />
<br />
RMAN duplicate: <a href="http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmdupdb.htm#BRADV298">http://docs.oracle.com/cd/E11882_01/backup.112/e10642/rcmdupdb.htm#BRADV298</a><br />
<b>Installation of Oracle Big Data Lite Virtual Machine on My New Desktop PC</b> (2014-03-25)<br />
<div style="margin-bottom: 0in;">
I bought a new desktop PC - Lenovo
ThinkCentre M83 with 32 GB RAM and 64-bit Windows 7. The intention
is to use it as a learning environment for something like Oracle
database 12c as I know it requires 64-bit OS, which I did not have at
home previously. While trying to find an Oracle 12c virtual machine
to install, I came across the <a href="http://www.oracle.com/technetwork/database/bigdata-appliance/oracle-bigdatalite-2104726.html" target="_blank">Oracle Big Data Lite Virtual Machine page</a>. I am excited to see that this virtual machine really contains lots
of good stuff I am interested in, including the Oracle 12c database. It could be a great learning tool. So I downloaded and installed
Oracle VirtualBox, downloaded the virtual machine files, and
imported them into VirtualBox. However, when powering on the machine,
I got:<br />
</div>
<div style="margin-bottom: 0in; margin-left: 0.49in;">
<em>Failed to open a
session for the virtual machine xxx</em></div>
<em>
</em><em>VT-X is disabled
in the BIOS</em><br />
<em>
</em><em>(VERR_VMX_MSR_VMXON_DISABLED</em>)<br />
<br />
<div style="margin-bottom: 0in;">
To fix this, I needed to go to the <a href="http://support.lenovo.com/en_US/" target="_blank">vendor's website</a> to download and install the latest BIOS driver and
use a tool provided by the vendor called CFGWIN.exe to change the BIOS
settings.<br />
</div>
<div style="margin-bottom: 0in;">
To flash the BIOS from the operating
system, I needed to issue the following command:<br />
</div>
<div style="margin-bottom: 0in; margin-left: 0.49in;">
<em>c:\SWTOOLS\FLASH\FBJY74USA>flash.cmd</em><em><br /></em><br />
To capture the current BIOS settings
into a file:</div>
<br />
<br />
<em>C:\SWTOOLS\FLASH\FBJW73USA>CFGWIN.exe
/c /path:c:\settings.txt</em><br />
<em>File Path =
"c:\settings.txt".</em><br />
<em>All items have
been captured successfully!</em><br />
<em><br /></em>To change a BIOS setting:<br />
<br />
<br />
<div style="margin-bottom: 0in;">
<em>C:\SWTOOLS\FLASH\FBJW73USA>CFGWIN.exe
/r /path:C:\Users\Denis\Documents\setting</em><em>s_vtx.txt</em></div>
<em>File Path =
"C:\Users\Denis\Documents\settings_vtx.txt".</em><br />
<em>Restore all BIOS settings successfully!</em><br />
<em><br /></em>The settings_vtx.txt contains one line:<br />
<br />
<em>Intel(R) Virtualization
Technology,Enabled;[Optional:Disabled,Enabled]</em><br />
<em><br /></em><br />
<br />
<br />
Basically, after changing the BIOS parameter Intel(R) Virtualization Technology to Enabled, I bypassed the first error shown above. Now I got something different:<br />
<br />
<em>This kernel requires an x86-64 CPU, but only detected an i686 CPU.</em><br />
<br />
To fix this, I changed the OS version from 32-bit to 64-bit in the virtual machine's General tab.<br />
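<br />
The same change can also be made from the command line with VBoxManage (a sketch only; the VM name "BigDataLite" is a placeholder for whatever the imported VM is actually called, and the VM must be powered off):<br />
<br />
<em>VBoxManage modifyvm "BigDataLite" --ostype Oracle_64</em><br />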
<br />
Finally, I have the environment. I call it a day, happy ending. I was a little worried that I might end up unable to run the virtual machine on this new PC. Nowadays, it is easy to set up a learning environment with pre-built virtual machines; just so little time and so much to learn!<br />
<br />
<br />
<br />
<br />
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<br /></div>
<div style="margin-bottom: 0in;">
<br /></div>
ydshttp://www.blogger.com/profile/15884036034395020134noreply@blogger.com0tag:blogger.com,1999:blog-30263044.post-43227986558942427062014-02-14T14:47:00.000-05:002014-02-14T14:47:23.968-05:00Update selected rows of a large table through rowid<br />We have a requirement to update a date column of about 14M rows in a large table of billions of rows. The challenge is that we have limited downtime to perform such a task.<br />
In a Sun SPARC T4-2 machine with 128 virtual CPUs, we tested two different join methods with parallel executions: <br />
<br />-- nested loop took about 8h30 <br />
UPDATE TTQ_DQG.DQG_PRODUCT b<br /> SET billing_end_date = SYSDATE<br /> where exists (select /*+ parallel (a 12) */ 1 from DBSCRUB.ISOC_MAPPING_5M a where b.bo_id=a.old_bo_id);<br />
<br />-- hash join took about 5h<br />
UPDATE /*+ parallel (b 12) */TTQ_DQG.DQG_PRODUCT b<br /> SET billing_end_date = SYSDATE<br /> where exists (select 1 from DBSCRUB.ISOC_MAPPING_5M a where b.bo_id=a.old_bo_id)<br /> ;<br />
<br />The 5h execution time still does not meet our downtime requirement. I proposed to perform the task in two steps.<br />
<br />(1) create a rowid driver table outside downtime window<br />(2) update the table with rowid in the downtime window<br />
<br />
The test on the same machine shows we can do it in 41 minutes!<br />
SQL> <br />SQL> --Step 1. Create a rowid driver table several hours before downtime<br />SQL> <br />SQL> CREATE table dbscrub.dqg_product_5m_rowid nologging<br /> 2 AS<br /> 3 select /*+ full(a) parallel( a 10) */ a.rowid rid from TTQ_DQG.dqg_product a<br /> 4 where exists ( select 1 from dbscrub.ISOC_MAPPING_5M b where a.bo_id=b.old_bo_id);<br />
Table created.<br />
Elapsed: 00:22:01.71<br />SQL> <br />SQL> <br />SQL> <br />SQL> <br />SQL> --Step 2 update table with rowid access in the downtime window<br />SQL> <br />SQL> <br />SQL> UPDATE TTQ_DQG.CPF_PRODUCT b<br /> 2 SET billing_end_date = SYSDATE<br /> 3 where b.rowid in (select /*+ parallel (a, 12) */ rid from DBSCRUB.dqg_product_5m_rowid a)<br /> 4 ;<br />
14431117 rows updated.<br />
Elapsed: 00:41:34.28<br />SQL> <br />SQL> <br />SQL> spool off<br />
<br />
Is this approach safe, any concerns? You may check this AskTom thread: <br />
<a href="http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:53140678334596">http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:53140678334596</a>ydshttp://www.blogger.com/profile/15884036034395020134noreply@blogger.com0tag:blogger.com,1999:blog-30263044.post-14289868678665784782014-01-27T06:36:00.000-05:002014-01-27T06:38:21.804-05:00Move ASM database files from one diskgroup to anotherIn the past week, I have been involved in a migration of a standalone database to RAC with ASM through datab pump export and import. After the migration was done, one of the ASM diskgroup is more than 98% percent used, which triggerred alerts from BMC Patrol, which is a standard monitoring tool in my working environment. Searching My Oracle Support website, I found the following note:<br />
<br />
<a href="https://support.oracle.com/epmos/faces/DocumentDisplay?id=330103.1" target="_blank">How to move ASM database files from one diskgroup to another ? (Doc ID 330103.1)</a><br />
<br />
<br />
The note is pretty straightforward; however, there was one surprise when I tried to offline a datafile:<br />
<br />
<em><strong>ORA-01145: offline immediate disallowed unless media recovery enabled</strong></em><br />
<br />
That was because I had put the db in noarchivelog mode in order to speed up the data pump import jobs. After I configured archivelog mode, things went fine.<br />
<br />
Here are the steps I followed, for future reference:<br />
<br />
<strong>1) Identify the data file to be moved</strong><br />
<br />
<span style="font-size: x-small;">+DATA_01/roeprdsc/datafile/non_ttq_clb_msg_01.dbf NON_TTQ_CLB_MSG</span><br />
<strong>2) Identify the diskgroup on to which the file has to be moved.</strong><br />
<strong></strong><br />
<span style="font-size: x-small;">+DATA_09/roeprdsc/datafile</span><br />
<br />
<strong>3) Take the file offline.</strong><br />
<br />
<span style="font-size: x-small;"> ALTER DATABASE DATAFILE '+DATA_01/roeprdsc/datafile/non_ttq_clb_msg_01.dbf' OFFLINE;</span><br />
<br />
<strong>4. Copy the data file</strong><br />
<strong></strong><br />
<span style="font-size: x-small;">I used DBMS_FILE_TRANSFER package. We can also use RMAN, detail see the note.<br /> <br /> <br />create or replace directory orcl1 as '+DATA_01/roeprdsc/datafile';<br /> <br />create or replace directory orcl2 as '+DATA_09/roeprdsc/datafile';<br /> <br /> Note, if need to create a directory in a diskgroup, syntax looks like</span><br />
<span style="font-size: x-small;"></span><br />
<span style="font-size: x-small;"> <em>ASM> Alter disgroup asmdsk2 add directory '+asmdsk2/test';</em></span> <br />
<br />
<span style="font-size: x-small;">BEGIN<br /> DBMS_FILE_TRANSFER.COPY_FILE(<br /> source_directory_object => 'ORCL1',<br /> source_file_name => 'non_ttq_clb_msg_01.dbf',<br /> destination_directory_object => 'ORCL2',<br /> destination_file_name => 'non_ttq_clb_msg_01.dbf');<br />END; <br />/</span><br />
<br />
<br />
<strong>5) Rename the file to point to new location.</strong> <br />
<br />
<span style="font-size: x-small;">ALTER DATABASE RENAME FILE '+DATA_01/roeprdsc/datafile/non_ttq_clb_msg_01.dbf' TO <br /> '+DATA_09/roeprdsc/datafile/non_ttq_clb_msg_01.dbf' ;</span><br />
<br />
<strong>6) Recover the file.</strong><br />
<strong></strong><br />
<span style="font-size: x-small;"> RECOVER DATAFILE '+DATA_09/roeprdsc/datafile/non_ttq_clb_msg_01.dbf'</span><br />
<strong><br />7) Bring the file online.</strong><br />
<br />
<span style="font-size: x-small;">ALTER DATABASE DATAFILE '+DATA_09/roeprdsc/datafile/non_ttq_clb_msg_01.dbf' online;</span><br />
<br />
<br />
<strong>8) Verify the new file location.</strong><br />
<br />
<span style="font-size: x-small;">SQL> SELECT FILE_NAME FROM DBA_DATA_FILES where file_name like '%clb_msg%';<br /><br />FILE_NAME<br />--------------------------------------------------------------------------------<br />...<br />+DATA_09/roeprdsc/datafile/non_ttq_clb_msg_01.dbf<br />+DATA_01/roeprdsc/datafile/non_ttq_clb_msg_02.dbf<br />+DATA_01/roeprdsc/datafile/non_ttq_clb_msg_03.dbf<br />+DATA_01/roeprdsc/datafile/non_ttq_clb_msg_04.dbf<br />+DATA_01/roeprdsc/datafile/non_ttq_clb_msg_05.dbf</span><br />
..<br />
<br />
<br />
<br />
<span style="font-size: x-small;"><strong><span style="font-size: small;">9) Delete the file from its original location either per SQLPLUS or per ASMCMD</span></strong>:</span><br />
<span style="font-size: x-small;"></span><br />
<span style="font-size: x-small;">Syntax examples:</span><br />
<span style="font-size: x-small;"></span><br />
<span style="font-size: x-small;">SQL:ASM> ALTER DISKGROUP ASMDSK2 DROP FILE users.256.565313879;</span><br />
<span style="font-size: x-small;"></span><br />
<span style="font-size: x-small;">or: ASMCMD> rm -rf <file_name></file_name></span><br />
<span style="font-size: x-small;"></span><br />
<span style="font-size: x-small;">Note:</span><br />
<span style="font-size: x-small;">Most Automatic Storage Management files do not need to be manually deleted because, as Oracle managed files, they are removed automatically when they are no longer needed. ( To not to have the alert come again, I have to do this)</span><br />
<br />
<br />
<br />ydshttp://www.blogger.com/profile/15884036034395020134noreply@blogger.com0tag:blogger.com,1999:blog-30263044.post-68373715600280798862014-01-14T15:54:00.003-05:002014-01-14T16:08:50.411-05:00Reference: Clean up SharePlex QueuesWe use SharePlex replication as our DR solution for a couple of applications. Last Saturday night, we did a DR test for one of the very important applications, but we could not get replication working from the DR site to the other target. As we ran out of time in the window, we just brought the application back up on the production site without fixing the issues. Today I involved vendor support to test the DR configuration (with a dummy configuration) again to make sure that if a real DR situation happens, replication will work. I have learned that the key is that we should clean up orphan or corrupted queues before activating a configuration in DR. (Note: when we issue deactivate config &lt;config_file&gt;, all queues associated with this particular config are supposed to be gone; if not, any leftover queues need to be cleaned up.)<br />
<br />
<br />
Below are the steps to clean up SharePlex queues for future reference:<br />
<br />
<br />
1. shutdown or shutdown force at source and target<br />
2. qview -i<br />
3. qview> qsetup<br />
4. qview> qstatus<br />
5. qview> deleteq p -- for post<br />
qview> deleteq x -- for export<br />
6. On target: truncate splex.shareplex_trans;ydshttp://www.blogger.com/profile/15884036034395020134noreply@blogger.com0tag:blogger.com,1999:blog-30263044.post-86655765133323075732013-10-10T10:33:00.002-04:002014-04-09T23:16:25.422-04:00Can Oracle GoldenGate DDL replication support interval partition?We have a fast-growing table that requires keeping only two months' worth of data.
We plan to take advantage of the interval partition feature in Oracle 11g, in which
Oracle automatically creates an interval partition as data for that partition is inserted.
Therefore, we will not need to worry about data load failures caused by a DBA forgetting to add new partitions.
<br />
<br />
In our environment, this table will also be in Oracle GoldenGate replication configuration.
So I ran tests to confirm whether GG DDL replication supports the following two
operations:
<br />
<br />
1. Create interval partition
<br />
<br />
2. Drop partition <br />
<br />
I used a
test GG environment, in which a two-way GG replication with DDL replication enabled is set up
between two databases: <em>WESTDB </em>and <em>EASTDB</em><span style="background-color: white;">; </span>and the tables are in different schemas
called <em>west</em> and <em>east</em> respectively.
<br />
<br />
The following are the testing steps:
<br />
<br />
1. create a table named interval_tab in the source db
<br />
<pre class="sql" name="code"> denis@WESTDB>> CREATE TABLE west.interval_tab
2 ( prod_id NUMBER(6)
3 , cust_id NUMBER
4 , time_id DATE
5 )
6 PARTITION BY RANGE (time_id)
7 INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'))
8 ( PARTITION p0 VALUES LESS THAN (TO_DATE('1-1-2013', 'MM-DD-YYYY')),
9 PARTITION p1 VALUES LESS THAN (TO_DATE('2-1-2013', 'MM-DD-YYYY')),
10 PARTITION p2 VALUES LESS THAN (TO_DATE('3-1-2013', 'MM-DD-YYYY')),
11 PARTITION p3 VALUES LESS THAN (TO_DATE('4-1-2013', 'MM-DD-YYYY')) );
Table created.
</pre>
Verify it is replicated in target:
<br />
<pre class="sql" name="code"> denis@EASTDB>> desc east.interval_tab;
Name Null? Type
----------------------------------------------------------------------------- -------- ---------------------
PROD_ID NUMBER(6)
CUST_ID NUMBER
TIME_ID DATE
</pre>
2. Create indexes and constraints
At source:
<br />
<pre class="sql" name="code"> denis@WESTDB>> create unique index west.interval_tab_pk on west.interval_tab(prod_id, time_id) local;
Index created.
denis@WESTDB>> create index west.interval_tab_ix1 on west.interval_tab(cust_id) local;
Index created.
denis@WESTDB>> alter table west.interval_tab add constraint interval_tab_pk primary key (prod_id, time_id) using index;
Table altered.
</pre>
Verify that the index and constraint creation is replicated at the target:
<br />
<pre class="sql" name="code"> denis@EASTDB>> @tabix
Enter value for tabowner: east
Enter value for tabname: interval_tab
TABLE_NAME INDEX_NAME COLUMN_NAME COL_POS UNIQUENES
-------------------- -------------------- -------------------- ---------- ---------
INTERVAL_TAB INTERVAL_TAB_IX1 CUST_ID 1 NONUNIQUE
INTERVAL_TAB INTERVAL_TAB_PK PROD_ID 1 UNIQUE
INTERVAL_TAB TIME_ID 2 UNIQUE
denis@EASTDB>> select owner,table_name, constraint_name from dba_constraints where table_name='INTERVAL_TAB';
OWNER TABLE_NAME CONSTRAINT_NAME
--------------- -------------------- --------------------
EAST INTERVAL_TAB INTERVAL_TAB_PK
</pre>
3. Insert data at source that will cause a new interval partition to be created automatically
<br />
<br />
Before the insert, check the current partitions:
<br />
<pre class="sql" name="code"> denis@WESTDB>> select table_owner, table_name, partition_name, high_value from dba_tab_partitions where table_name='INTERVAL_TAB';
TABLE_OWNER TABLE_NAME PARTITION_NAME HIGH_VALUE
---------------- -------------------- -------------------- ---------------------------------------------
WEST INTERVAL_TAB P0 TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
WEST INTERVAL_TAB P1 TO_DATE(' 2013-02-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
WEST INTERVAL_TAB P2 TO_DATE(' 2013-03-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
WEST INTERVAL_TAB P3 TO_DATE(' 2013-04-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
</pre>
Perform the following insert at source and check the partitions:<br />
<br />
insert into interval_tab values (1004,1, sysdate);
<br />
<pre class="sql" name="code"> denis@WESTDB>> insert into west.interval_tab values (1004,1, sysdate);
1 row created.
denis@WESTDB>> commit;
Commit complete.
denis@WESTDB>> select table_owner, table_name, partition_name, high_value from dba_tab_partitions where table_name='INTERVAL_TAB';
TABLE_OWNER TABLE_NAME PARTITION_NAME HIGH_VALUE
------------------------------ -------------------- -------------------- ---------------------------------------------
WEST INTERVAL_TAB P0 TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
WEST INTERVAL_TAB P1 TO_DATE(' 2013-02-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
WEST INTERVAL_TAB P2 TO_DATE(' 2013-03-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
WEST INTERVAL_TAB P3 TO_DATE(' 2013-04-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
WEST INTERVAL_TAB SYS_P81 TO_DATE(' 2013-11-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
5 rows selected.
</pre>
4. Check partitions at target database
<br />
<pre class="sql" name="code"> denis@EASTDB>> select table_owner, table_name, partition_name, high_value from dba_tab_partitions where table_name='INTERVAL_TAB';
TABLE_OWNER TABLE_NAME PARTITION_NAME HIGH_VALUE
---------------------- -------------------- -------------------- ---------------------------------------------
EAST INTERVAL_TAB P0 TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
EAST INTERVAL_TAB P1 TO_DATE(' 2013-02-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
EAST INTERVAL_TAB P2 TO_DATE(' 2013-03-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
EAST INTERVAL_TAB P3 TO_DATE(' 2013-04-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
EAST INTERVAL_TAB SYS_P123 TO_DATE(' 2013-11-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
5 rows selected.
denis@EASTDB>> select index_owner,index_name,partition_name from dba_ind_partitions where index_name='INTERVAL_TAB_IX1';
INDEX_OWNER INDEX_NAME PARTITION_NAME
------------------------------ -------------------- --------------------
EAST INTERVAL_TAB_IX1 P0
EAST P1
EAST P2
EAST P3
EAST SYS_P123
5 rows selected.
</pre>
5. Drop a partition at source
alter table west.interval_tab drop partition P0;
<br />
<pre class="sql" name="code"> denis@WESTDB>> alter table west.interval_tab drop partition P0;
Table altered.
Verified, it is replicable
denis@EASTDB>> /
TABLE_OWNER TABLE_NAME PARTITION_NAME HIGH_VALUE
------------------------------ -------------------- -------------------- ---------------------------------------------
EAST INTERVAL_TAB P1 TO_DATE(' 2013-02-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
EAST INTERVAL_TAB P2 TO_DATE(' 2013-03-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
EAST INTERVAL_TAB P3 TO_DATE(' 2013-04-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
EAST INTERVAL_TAB SYS_P123 TO_DATE(' 2013-11-01 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
4 rows selected.
</pre>
In conclusion, the tests confirmed that GG DDL replication supports the automatic creation of interval partitions and the drop partition operation.
It is worth noting that when we use the DBMS_METADATA package to get the definitions of an interval-partitioned table and its associated
local indexes, we won't see the interval partitions that are created automatically. To verify the existence of the partitions, we should use
the dba_tab_partitions and dba_ind_partitions views. <br />
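For example (a quick sketch, not part of the original test):
<br />
<pre class="sql" name="code">-- the DDL returned here will not list the system-generated interval partitions (e.g. SYS_P81)
select dbms_metadata.get_ddl('TABLE','INTERVAL_TAB','WEST') from dual;
-- but they do show up in the dictionary views
select partition_name, high_value from dba_tab_partitions
where table_owner='WEST' and table_name='INTERVAL_TAB';
</pre>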
<br />
P.S. <br />
<br />
Note from Oracle docs:
<br />
<br />
<em>You cannot explicitly add a partition to an interval-partitioned table unless you first lock the partition,
which triggers the creation of the partition. The database automatically creates a partition for an interval when data for that interval is inserted. In general, you only need to explicitly create interval partitions for a partition exchange load scenario.</em>
<br />
<br />
<em>update Apr 9, 2014 -</em><br />
<em></em><br />
<em>I may have misunderstood the relationship between DDL replication and interval partition creation. DDL replication may have no relationship with interval partition creation at all: even without DDL replication enabled, the interval partition will still be created by the data-insertion DML statement. </em>ydshttp://www.blogger.com/profile/15884036034395020134noreply@blogger.com1tag:blogger.com,1999:blog-30263044.post-41908565282265367242013-09-11T16:10:00.000-04:002013-09-15T09:27:47.578-04:00Avoid Merge Join Cartesian in a SQL Tuning ExerciseEncountered a query that caused high CPU utilization. In a 15-minute AWR report, this query executed 78 times with total Buffer Gets of 949M, contributing 78.8% of the total. I filled in some bind variable values by checking the v$sql_bind_capture view, and I executed the SQL from SQL*Plus with the gather_plan_statistics hint. Below is the execution plan with E-Rows and A-Rows info
:
<br />
<pre class="sql" name="code">Plan hash value: 4161037915
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | Writes | OMem | 1Mem | Used-Mem | Used-Tmp|
-------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------
| 1 | SORT ORDER BY | | 1 | 1 | 113K|00:02:27.54 | 4242K| 12873 | 8580 | 76M| 3013K| 67M (0)| |
| 2 | HASH UNIQUE | | 1 | 1 | 113K|00:02:23.74 | 4242K| 12873 | 8580 | 72M| 7323K| 9538K (1)| 73728 |
|* 3 | FILTER | | 1 | | 114K|00:03:25.41 | 4242K| 4293 | 0 | | | | |
| 4 | NESTED LOOPS | | 1 | 1 | 114K|00:00:15.77 | 4239K| 4291 | 0 | | | | |
| 5 | MERGE JOIN CARTESIAN | | 1 | 1 | 199K|00:00:01.71 | 6001 | 123 | 0 | | | | |
| 6 | MERGE JOIN CARTESIAN | | 1 | 1 | 7112 |00:00:00.44 | 5998 | 123 | 0 | | | | |
|* 7 | HASH JOIN OUTER | | 1 | 1 | 889 |00:00:00.15 | 5995 | 123 | 0 | 985K| 927K| 1229K (0)| |
| 8 | NESTED LOOPS | | 1 | 1 | 889 |00:00:00.11 | 5979 | 123 | 0 | | | | |
| 9 | NESTED LOOPS | | 1 | 1 | 889 |00:00:00.10 | 5088 | 123 | 0 | | | | |
|* 10 | HASH JOIN OUTER | | 1 | 1 | 889 |00:00:00.09 | 4197 | 123 | 0 | 928K| 928K| 1265K (0)| |
| 11 | NESTED LOOPS | | 1 | 1 | 865 |00:00:00.10 | 4193 | 123 | 0 | | | | |
| 12 | NESTED LOOPS | | 1 | 1 | 1839 |00:00:00.07 | 504 | 123 | 0 | | | | |
| 13 | MERGE JOIN CARTESIAN | | 1 | 1 | 1 |00:00:00.01 | 12 | 0 | 0 | | | | |
| 14 | NESTED LOOPS | | 1 | 1 | 1 |00:00:00.01 | 8 | 0 | 0 | | | | |
| 15 | NESTED LOOPS | | 1 | 1 | 1 |00:00:00.01 | 5 | 0 | 0 | | | | |
| 16 | TABLE ACCESS BY INDEX ROWID| OZBJZFDS | 1 | 1 | 1 |00:00:00.01 | 3 | 0 | 0 | | | | |
|* 17 | INDEX UNIQUE SCAN | PK_OZBJZFDS | 1 | 1 | 1 |00:00:00.01 | 2 | 0 | 0 | | | | |
| 18 | TABLE ACCESS BY INDEX ROWID| OZBJZFD_CATEGORY | 1 | 26 | 1 |00:00:00.01 | 2 | 0 | 0 | | | | |
|* 19 | INDEX UNIQUE SCAN | PK_OZBJZFD_CATEGORY | 1 | 1 | 1 |00:00:00.01 | 1 | 0 | 0 | | | | |
|* 20 | TABLE ACCESS BY INDEX ROWID | OZBJZFD_MARKETS | 1 | 1 | 1 |00:00:00.01 | 3 | 0 | 0 | | | | |
|* 21 | INDEX RANGE SCAN | PK_OZBJZFD_MARKETS | 1 | 1 | 1 |00:00:00.01 | 2 | 0 | 0 | | | | |
| 22 | BUFFER SORT | | 1 | 1 | 1 |00:00:00.01 | 4 | 0 | 0 | 2048 | 2048 | 2048 (0)| |
|* 23 | TABLE ACCESS BY INDEX ROWID | MARKETS | 1 | 1 | 1 |00:00:00.01 | 4 | 0 | 0 | | | | |
|* 24 | INDEX RANGE SCAN | PK_MARKETS | 1 | 1 | 1 |00:00:00.01 | 2 | 0 | 0 | | | | |
|* 25 | TABLE ACCESS BY INDEX ROWID | OZBJZFD_OQNCVDSS | 1 | 1 | 1839 |00:00:00.07 | 492 | 123 | 0 | | | | |
|* 26 | INDEX RANGE SCAN | PK_OZBJZFD_OQNCVDSS | 1 | 1 | 1894 |00:00:00.01 | 16 | 13 | 0 | | | | |
|* 27 | TABLE ACCESS BY INDEX ROWID | OQNCVDSS | 1839 | 1 | 865 |00:00:00.03 | 3689 | 0 | 0 | | | | |
|* 28 | INDEX UNIQUE SCAN | PK_OQNCVDSS | 1839 | 1 | 1839 |00:00:00.01 | 1841 | 0 | 0 | | | | |
| 29 | VIEW | | 1 | 29 | 29 |00:00:00.01 | 4 | 0 | 0 | | | | |
| 30 | SORT UNIQUE | | 1 | 29 | 29 |00:00:00.01 | 4 | 0 | 0 | 4096 | 4096 | 4096 (0)| |
| 31 | UNION-ALL | | 1 | | 29 |00:00:00.01 | 4 | 0 | 0 | | | | |
| 32 | INDEX FULL SCAN | PK_SOURCE_TARGET_RULES | 1 | 20 | 20 |00:00:00.01 | 1 | 0 | 0 | | | | |
| 33 | TABLE ACCESS FULL | CARRYOVER_ISOC_MAPPING | 1 | 9 | 9 |00:00:00.01 | 3 | 0 | 0 | | | | |
| 34 | TABLE ACCESS BY INDEX ROWID | SPEED_CODES | 889 | 1 | 889 |00:00:00.01 | 891 | 0 | 0 | | | | |
|* 35 | INDEX UNIQUE SCAN | PK_SPEED_CODES | 889 | 1 | 889 |00:00:00.01 | 2 | 0 | 0 | | | | |
| 36 | TABLE ACCESS BY INDEX ROWID | OQNCVDS_TYPES | 889 | 1 | 889 |00:00:00.01 | 891 | 0 | 0 | | | | |
|* 37 | INDEX UNIQUE SCAN | PK_OQNCVDS_TYPES | 889 | 1 | 889 |00:00:00.01 | 2 | 0 | 0 | | | | |
| 38 | INDEX FAST FULL SCAN | PK_OFFER_PROD | 1 | 1255 | 1255 |00:00:00.01 | 16 | 0 | 0 | | | | |
| 39 | BUFFER SORT | | 889 | 8 | 7112 |00:00:00.01 | 3 | 0 | 0 | 2048 | 2048 | 2048 (0)| |
| 40 | TABLE ACCESS FULL | BILLING_FREQ | 1 | 8 | 8 |00:00:00.01 | 3 | 0 | 0 | | | | |
| 41 | BUFFER SORT | | 7112 | 28 | 199K|00:00:00.20 | 3 | 0 | 0 | 2048 | 2048 | 2048 (0)| |
| 42 | TABLE ACCESS FULL | UNIT_TYPES | 1 | 28 | 28 |00:00:00.01 | 3 | 0 | 0 | | | | |
|* 43 | TABLE ACCESS BY INDEX ROWID | UHCDN_RATES | 199K| 1 | 114K|00:02:16.61 | 4233K| 4168 | 0 | | | | |
|* 44 | INDEX RANGE SCAN | PK_UHCDN_RATES | 199K| 36 | 4879K|00:01:48.28 | 727K| 911 | 0 | | | | |
|* 45 | INDEX RANGE SCAN | PK_OZBJZFD_OQNCVDSS | 976 | 1 | 933 |00:00:00.02 | 2928 | 2 | 0 | | | | |
|* 46 | INDEX UNIQUE SCAN | PK_OQNCVDSS | 9 | 1 | 4 |00:00:00.01 | 18 | 0 | 0 | | | | |
-------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------
</pre>
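For reference, the plan above with E-Rows/A-Rows was produced along these lines (a sketch only; the query shown is just a placeholder for the real SQL text and bind values):
<br />
<pre class="sql" name="code">-- run the statement with the gather_plan_statistics hint, then display the last cursor
select /*+ gather_plan_statistics */ count(*) from dba_objects;
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
</pre>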
The AUTOTRACE statistics are as follows:
<br />
<pre>Statistics
----------------------------------------------------------
72 recursive calls
0 db block gets
4242325 consistent gets
13434 physical reads
0 redo size
9673175 bytes sent via SQL*Net to client
13494 bytes received via SQL*Net from client
1141 SQL*Net roundtrips to/from client
5 sorts (memory)
0 sorts (disk)
113976 rows processed
</pre>
By comparing the E-Rows and A-Rows from the execution, it is easy to identify that the problem starts at operation IDs 25 and 26, where E-Rows=1 but A-Rows are 1839 and 1894.
<br />
<pre class="sql" name="code">|* 25 | TABLE ACCESS BY INDEX ROWID | OZBJZFD_OQNCVDSS | 1 | 1 | 1839 |00:00:00.07 | ...
|* 26 | INDEX RANGE SCAN | PK_OZBJZFD_OQNCVDSS | 1 | 1 | 1894 |00:00:00.01 | ...
</pre>
With E-Rows=1, the Oracle CBO decides to use "MERGE JOIN CARTESIAN". Notice that at the end the CBO estimates only 1 row whereas the actual number of rows is 113K. So the key to tuning this <span style="background-color: yellow;">query </span>is to avoid the Cartesian joins at operation IDs 5 and 6.
I modified the query by adding the following hints, and of course made sure the table order was correct in the FROM clause:<br />
<br />
<span style="color: #3d85c6;">/*+ ordered use_hash(E), use_hash(F) */</span>
<br />
<br />
Here is the execution plan of the modified query:
<br />
<pre class="sql" name="code">---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 2841 | 74 (9)| 00:00:01 |
| 1 | SORT ORDER BY | | 1 | 2841 | 74 (9)| 00:00:01 |
| 2 | HASH UNIQUE | | 1 | 2841 | 73 (7)| 00:00:01 |
|* 3 | FILTER | | | | | |
|* 4 | HASH JOIN OUTER | | 1 | 2841 | 72 (6)| 00:00:01 |
| 5 | NESTED LOOPS | | 1 | 2835 | 68 (5)| 00:00:01 |
| 6 | NESTED LOOPS | | 1 | 2828 | 67 (5)| 00:00:01 |
|* 7 | HASH JOIN | | 1 | 2813 | 66 (5)| 00:00:01 |
| 8 | NESTED LOOPS | | 1 | 2801 | 64 (5)| 00:00:01 |
| 9 | NESTED LOOPS | | 1 | 2781 | 63 (5)| 00:00:01 |
|* 10 | HASH JOIN OUTER | | 1 | 2682 | 17 (18)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 578 | 11 (0)| 00:00:01 |
| 12 | NESTED LOOPS | | 1 | 326 | 10 (0)| 00:00:01 |
| 13 | MERGE JOIN CARTESIAN | | 1 | 263 | 7 (0)| 00:00:01 |
| 14 | NESTED LOOPS | | 1 | 153 | 5 (0)| 00:00:01 |
| 15 | NESTED LOOPS | | 1 | 125 | 3 (0)| 00:00:01 |
| 16 | TABLE ACCESS BY INDEX ROWID| OZBJZFDS | 1 | 120 | 2 (0)| 00:00:01 |
|* 17 | INDEX UNIQUE SCAN | PK_OZBJZFDS | 1 | | 1 (0)| 00:00:01 |
| 18 | TABLE ACCESS BY INDEX ROWID| OZBJZFD_CATEGORY | 1 | 5 | 1 (0)| 00:00:01 |
|* 19 | INDEX UNIQUE SCAN | PK_OZBJZFD_CATEGORY | 1 | | 0 (0)| 00:00:01 |
|* 20 | TABLE ACCESS BY INDEX ROWID | OZBJZFD_MARKETS | 1 | 28 | 2 (0)| 00:00:01 |
|* 21 | INDEX RANGE SCAN | PK_OZBJZFD_MARKETS | 1 | | 1 (0)| 00:00:01 |
| 22 | BUFFER SORT | | 1 | 110 | 5 (0)| 00:00:01 |
|* 23 | TABLE ACCESS BY INDEX ROWID | MARKETS | 1 | 110 | 2 (0)| 00:00:01 |
|* 24 | INDEX RANGE SCAN | PK_MARKETS | 1 | | 1 (0)| 00:00:01 |
|* 25 | TABLE ACCESS BY INDEX ROWID | OZBJZFD_OQNCVDSS | 1 | 63 | 3 (0)| 00:00:01 |
|* 26 | INDEX RANGE SCAN | PK_OZBJZFD_OQNCVDSS | 1 | | 2 (0)| 00:00:01 |
|* 27 | TABLE ACCESS BY INDEX ROWID | OQNCVDSS | 1 | 252 | 1 (0)| 00:00:01 |
|* 28 | INDEX UNIQUE SCAN | PK_OQNCVDSS | 1 | | 0 (0)| 00:00:01 |
| 29 | VIEW | | 29 | 61016 | 5 (40)| 00:00:01 |
| 30 | SORT UNIQUE | | 29 | 1138 | 5 (80)| 00:00:01 |
| 31 | UNION-ALL | | | | | |
| 32 | INDEX FULL SCAN | PK_SOURCE_TARGET_RULES | 20 | 760 | 1 (0)| 00:00:01 |
| 33 | TABLE ACCESS FULL | CARRYOVER_ISOC_MAPPING | 9 | 378 | 2 (0)| 00:00:01 |
|* 34 | TABLE ACCESS BY INDEX ROWID | UHCDN_RATES | 21 | 2079 | 46 (0)| 00:00:01 |
|* 35 | INDEX RANGE SCAN | PK_UHCDN_RATES | 72 | | 2 (0)| 00:00:01 |
| 36 | TABLE ACCESS BY INDEX ROWID | OQNCVDS_TYPES | 1 | 20 | 1 (0)| 00:00:01 |
|* 37 | INDEX UNIQUE SCAN | PK_OQNCVDS_TYPES | 1 | | 0 (0)| 00:00:01 |
| 38 | TABLE ACCESS FULL | BILLING_FREQ | 8 | 96 | 2 (0)| 00:00:01 |
| 39 | TABLE ACCESS BY INDEX ROWID | UNIT_TYPES | 1 | 15 | 1 (0)| 00:00:01 |
|* 40 | INDEX UNIQUE SCAN | PK_UNIT_TYPES | 1 | | 0 (0)| 00:00:01 |
| 41 | TABLE ACCESS BY INDEX ROWID | SPEED_CODES | 1 | 7 | 1 (0)| 00:00:01 |
|* 42 | INDEX UNIQUE SCAN | PK_SPEED_CODES | 1 | | 0 (0)| 00:00:01 |
| 43 | INDEX FAST FULL SCAN | PK_OFFER_PROD | 1255 | 7530 | 3 (0)| 00:00:01 |
|* 44 | INDEX RANGE SCAN | PK_OZBJZFD_OQNCVDSS | 1 | 12 | 3 (0)| 00:00:01 |
|* 45 | INDEX UNIQUE SCAN | PK_OQNCVDSS | 1 | 6 | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------------
</pre>
The AUTOTRACE statistics of the modified query are also shown below:
<br />
<pre>Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
479315 consistent gets
2097 physical reads
0 redo size
9673175 bytes sent via SQL*Net to client
13535 bytes received via SQL*Net from client
1141 SQL*Net roundtrips to/from client
3 sorts (memory)
0 sorts (disk)
113976 rows processed
</pre>
It can be seen that after tuning, the "consistent gets" dropped to 479,315 from 4,242,325.
In the production database, I created a SQL Profile to enforce the better plan. ydshttp://www.blogger.com/profile/15884036034395020134noreply@blogger.com0tag:blogger.com,1999:blog-30263044.post-8235086594872834822013-09-09T12:32:00.000-04:002013-09-09T12:40:10.361-04:00My Experience of Data Pump Export and Import for Database MigrationRecently I have carried out a test of using Data Pump Export and Import utilities to migrate
a 17 TB database from Solaris to Linux platform, also from single-instance, conventional
file system to a two-node RAC on ASM. During the exercise I have learned and become more
familiar with quite a few interesting features or options of Data Pump Export and Import
utilities and encountered a few difficulties as well.
<br />
<br />
The first obstacle I met was that I was unable to do Data Pump Export to the
NFS storage. We have the NFS storage that is shared between the Solaris and Linux server.
At first I tried to perform Data Pump Export to the NFS storage directly.
Unfortunately, the job was just stuck there for a long time. Having researched this
issue for a while and tried the options mentioned in a MOS note (see References [1])
with the help of an SA, I just could not get it to work. Finally we decided
to export to the local file system first then move the dump files to the NFS. I was
able to move about 150 GB dump file per hour. Fortunately, the Data Pump Import
from NFS storage did not pose any problems.
<br />
<br />
Table mode Data Pump Export and Import were used in the test. Large tables were typically
exported or imported individually and smaller tables were grouped together
into several batches. For each large table or a group of smaller tables, I prepared
the corresponding parameter files with a parallel option if appropriate. In the Import job,
I always excluded the indexes, constraints and statistics. I used the SQLFILE option of the impdp command
to extract the DDL commands related to the tables, including create table, index and constraint
statements etc. Those DDLs were executed after the Import with parallel option
for index creation and "ENABLE NOVALIDATE" for constraints typically. Therefore the database migration actually consisted of multiple tables migration jobs. For each tables migration,
I performed and recorded the timing of the following steps: export, move dump file to NFS, import,
DDLs for index etc. Those tables migration jobs were overlapped as two or three of them were executed
at the same time and could be in different steps. <br />
<br />
Shortly after the onset of the test I found out that Data Pump Export was extremely slow on a big table
with a LOB column. The export job was unable to run in parallel regardless of the parallel setting.
The segment sizes of this big table are 237 GB and 382 GB for table and lob respectively.
The table uses a sequence-based PK column called "EMAIL_SEQ_ID". To speed up the export, I started
23 export jobs at the same time, each job exported a particular range of rows based on the PK.
The key is to use the QUERY option for the expdp command. The QUERY option contains a WHERE
clause that specifies the PK range. Finally I was able to export this table in 30 hours,
move the 420 GB dump files in 2 hours, import in about 15 hours and execute DDL for indexes etc
in 2 hours. It is worth noting that although we can perform the lob table export job in parallel
by such an approach, we cannot do the same for import. The import job will hold a TM lock for
the table, so only one import job can run at a time. I used this approach for several big lob
tables. The Appendix shows example shell scripts to prepare the expdp and impdp parfiles.
By the way, I later found that being unable to export a LOB table in parallel is a known issue in
the Oracle community (see Reference [3]). A blog post suggested using a rowid-based approach to
split up the rows of LOB tables (see Reference [4]).
<br />
<br />
Occasionally I made mistakes and needed to terminate an export or import job.
I found it very convenient to use the KILL_JOB command in interactive mode. First, I need to
know the job name, either from the log file or from the dba_datapump_jobs view. Then I can enter the
interactive mode using the ATTACH option:
<br />
<br />
<em>expdp / attach=[job_name]</em>
<br />
<br />
Then I issued "KILL_JOB" command. Note when using KILL_JOB to the expdb, all dump files will be
wiped out.
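<br />
<br />
For example, a minimal sketch (the job name and credentials here are placeholders):
<br />
<pre>SQL> select owner_name, job_name, state from dba_datapump_jobs;
$ expdp system/*** attach=EXP_EMAIL_Q2
Export> KILL_JOB
(Data Pump asks for confirmation and then removes the master table and dump files)
</pre>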
<br />
<br />
During the test, I also learned that Data Pump Import can be done through a db link.
When specifying a source database link for the NETWORK_LINK option of the impdp command, the import job will retrieve
data over the db link and write it directly to the target database. There are no dump files involved.
<br />
<br />
I have mentioned previously that the SQLFILE option can be used to extract DDLs, but this option is only available
for impdp. Suppose I have a table in a database and I want to extract the DDLs related to this table,
including the indexes, constraints, object grants, comments, triggers etc. associated with it; how should I do that?
Previously I always used the DBMS_METADATA package for such tasks. If we create a db link to the database
itself, we can use impdp with the NETWORK_LINK and SQLFILE options to accomplish it very neatly.
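<br />
A minimal sketch of this trick (the loopback database link name, schema, table and TNS alias are all placeholders):
<br />
<pre>SQL> create database link loopback_lnk connect to scott identified by tiger using 'MYDB';
$ impdp scott/tiger directory=dpump_dir network_link=loopback_lnk \
    tables=scott.emp sqlfile=emp_ddl.sql
(emp_ddl.sql will contain the table, index, constraint and grant DDLs; no data is imported)
</pre>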
<br />
<br />
Using export/import to migrate the database has at least one advantage over the Transportable Tablespace
approach: it allows the re-organization of tables and indexes, i.e. it is easy to move
tables and indexes to different tablespaces if desired. <br />
<br />
I completed the 17 TB database migration, which ended up at about 12 TB in the target database,
in about a month. The test was interrupted by other tasks with higher priority. I did a total of 17
table migrations to complete the whole database migration. Ordered by end-to-end time
(including the export, move to NFS, import, and DDL steps), they are referred to as batch 1 to 17 here:<br />
<br />
<pre>batch time (hours)
----- -----------
1 50
2 30
3 25
4 24
5 21
6 16
7 15
8 14
9 11
10 10
11 10
12 9.5
13 5.5
14 5
15 3.5
16 3
17 3
------------
</pre>
<br />
<br />
Based on the above timing, if I perform the migration task as continuously as possible, I may be able to complete it in 5-7 days.
The ultimate goal is to do a zero-downtime migration. We plan to use Oracle GoldenGate to
capture the changes at the source during the database migration. So my next task will be investigating whether
this is possible in practice in our environment. <br />
<br />
<strong>References:</strong>
<br />
[1] 781349.1 - ORA-27054: NFS file system where the file is created or resides is not mounted with correct options [ID 781349.1])
<br />
[2] Sample Export and Import parameter files:
<br />
<pre> -- expdp.par ----
DIRECTORY=dpump_dir3
DUMPFILE=dpump_dir3:email%U_q2.dmp
TABLES=TTQ_GMPX.TTQ_EMAIL_EVENT_DETAILS
LOGFILE=dpump_dir3:TTQ_EMAIL_EVENT_DETAILS_q2.log
#PARALLEL=20
FILESIZE=5G
JOB_NAME=exp_email_q2
QUERY="WHERE email_seq_id >=2*200000000 and email_seq_id < 3*200000000"
-- impdp.par ----
DIRECTORY=dpump_dir
DUMPFILE=dpump_dir:email%U_q2.dmp
TABLES=TTQ_GMPX.TTQ_EMAIL_EVENT_DETAILS
TABLE_EXISTS_ACTION=APPEND
LOGFILE=dpump_dir:TTQ_EMAIL_EVENT_DETAILS_q2_imp.log
PARALLEL=2
JOB_NAME=imp_email_q2
EXCLUDE=index,constraint,statistics
#EXCLUDE=statistics
transform=storage:n,oid:n
CLUSTER=NO
</pre>
[3] Bug 5599947 - Export Data Pump is slow when table has a LOB column - Defect: Bug:5599947 "DATAPUMP EXPORT VERY SLOW"<br />
<br />
[4] <a href="http://jensenmo.blogspot.com/2012/10/optimising-data-pump-export-and-import.html">http://jensenmo.blogspot.com/2012/10/optimising-data-pump-export-and-import.html</a><br />
<br />
[5] Some other references:
<br />
Master Note for Data Pump [ID 1264715.1]
<br />
Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp) [ID 453895.1]
<br />
Parallel Capabilities of Oracle Data Pump [ID 365459.1]
<br />
<br />
<br />
<strong>Appendix - sample shell script to generate expdp/impdp par file for big lob table</strong> <br />
<br />
<pre>#!/bin/ksh
i=21
while [[ $i -le 43 ]];do
((j=i+1))
PARFILE="EMAIL_CONTENT_q$i.par"
echo "DIRECTORY=dpump_dir3" > $PARFILE
echo "DUMPFILE=dpump_dir3:ecntnt_q${i}_%U.dmp" >> $PARFILE
echo "TABLES=TTQ_GMPX.EMAIL_CONTENT" >> $PARFILE
echo "LOGFILE=dpump_dir3:EMAIL_CONTENT_q$i.log" >> $PARFILE
echo "FILESIZE=10G" >> $PARFILE
echo "JOB_NAME=exp_ecntnt_q$i" >> $PARFILE
echo "QUERY=\"WHERE email_seq_id > $i * 2000000 and email_seq_id <= $j * 2000000\"">> $PARFILE
echo "i=$i"
echo "j=$j"
((i=i+1))
done
--- sample script to generate impdp par file for big lob table -----------------
i=6
while [[ $i -le 43 ]];do
((j=i+1))
PARFILE="EMAIL_CONTENT_imp_q$i.par"
echo "DIRECTORY=dpump_dir" > $PARFILE
echo "DUMPFILE=ecntnt_q${i}_%U.dmp" >> $PARFILE
echo "TABLES=TTQ_GMPX.EMAIL_CONTENT" >> $PARFILE
echo "TABLE_EXISTS_ACTION=APPEND" >> $PARFILE
echo "LOGFILE=EMAIL_CONTENT_q${i}_imp.log" >> $PARFILE
echo "JOB_NAME=imp_ecntnt_q${i}" >> $PARFILE
echo "#EXCLUDE=index,constraint,statistics" >> $PARFILE
echo "EXCLUDE=statistics" >> $PARFILE
echo "transform=storage:n,oid:n" >> $PARFILE
echo "CLUSTER=NO" >> $PARFILE
echo "i=$i"
echo "j=$j"
((i=i+1))
done
</pre>
ydshttp://www.blogger.com/profile/15884036034395020134noreply@blogger.com0tag:blogger.com,1999:blog-30263044.post-33089091591858628882013-09-06T14:27:00.001-04:002013-09-07T08:36:46.712-04:00Find just one set of columns that uniquely identify a rowIf a table has PK, we need to show the list of PK columns. If the table does not have PK, we need to see if we can find one set of columns that can uniquely identify the row. One of my colleagues wrote a PL/SQL script to look at dba_constraints to do that. But I have learned that <strong>tables could have unique indexes but don't have unique constraints; on the other hand, if we create a unique constraint, we will have a unique index</strong>. For the following test tables t1,t2,t3,t4,t5:<br />
<br />
<pre name="code" class="sql">
create table t1(c1 number, c2 number, c3 number);
alter table t1 add constraint t1_pk primary key (c1, c2);
create table t2(c1 number, c2 number, c3 number);
alter table t2 add constraint t2_uk unique (c1, c2);
create table t3(c1 number, c2 number, c3 number);
create unique index t3_ix on t3(c1,c2);
create table t4(c1 number, c2 number, c3 number);
create unique index t4_ix1 on t4(c1,c2);
create unique index t4_ix2 on t4(c1,c3);
create table t5(c1 number, c2 number, c3 number);
</pre>
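To see the point about unique indexes vs. unique constraints on these test tables, a quick check (not part of the script) could be:
<br />
<pre name="code" class="sql">
select table_name, index_name, uniqueness from user_indexes
 where table_name in ('T1','T2','T3','T4','T5') order by table_name;
select table_name, constraint_name, constraint_type from user_constraints
 where table_name in ('T1','T2','T3','T4','T5') order by table_name;
</pre>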
The original script will output something like:
<br />
<br />
<pre>T5 does not have PK or any unique cols
T4 does not have PK or any unique cols
T3 does not have PK or any unique cols
T2 does not have PK but has unique cols
T2 does not have PK but has unique cols such as C1,C2
T1 has PK
T1 PK cols : C1,C2
</pre>
You can see that the statements about T3 and T4 are not correct, as they have unique indexes. Below is my modified script, which looks at dba_indexes instead of dba_constraints.
<br />
<pre name="code" class="sql">
-- List PK columns or unique columns
-- find PK columns and if no PK, find one of the unique columns
---
set serveroutput on
declare
cntpk number;
cntuk number;
initial number;
pk varchar2(1000);
uk varchar2(1000);
-- cursor tab_cur is select upper(source_owner) source_owner,upper(source_object_name) source_object_name from dbscrub.config_table;
cursor tab_cur is select upper(owner) source_owner,upper(table_name) source_object_name from dba_tables
where owner='Vxxxx' and table_name in ('T1','T2','T3','T4', 'T5');
begin
for tab_rec in tab_cur
loop
pk :='';
uk :='';
initial := 1;
select count(1) into cntpk from dba_constraints where owner=tab_rec.source_owner and table_name=tab_rec.source_object_name and constraint_type = 'P';
-- select count(1) into cntuk from dba_constraints where owner=tab_rec.source_owner and table_name=tab_rec.source_object_name and constraint_type = 'U';
select count(1) into cntuk from dba_indexes where owner=tab_rec.source_owner and table_name=tab_rec.source_object_name and uniqueness = 'UNIQUE';
if cntpk != 0 then
-- update dbscrub.config_table set has_pk_uk='YES' where source_owner=tab_rec.source_owner and source_object_name=tab_rec.source_object_name;
dbms_output.put_line ( tab_rec.source_object_name || ' has PK' );
for i in (SELECT b.table_name, b.column_name, b.position
FROM dba_constraints a, dba_cons_columns b
WHERE a.owner = b.owner
AND a.constraint_name = b.constraint_name
AND constraint_type = 'P'
AND b.owner = tab_rec.source_owner
AND b.table_name = tab_rec.source_object_name
ORDER BY b.position)
loop
if initial=1 then
pk:=i.column_name;
initial:=initial+1;
else
pk:=pk||','||i.column_name;
end if;
end loop;
-- update dbscrub.config_table set pk_uk_columns=pk where source_owner=tab_rec.source_owner and source_object_name=tab_rec.source_object_name;
dbms_output.put_line ( tab_rec.source_object_name || ' PK cols : ' || pk );
elsif cntpk = 0 and cntuk != 0 then
-- update dbscrub.config_table set has_pk_uk='YES' where source_owner=tab_rec.source_owner and source_object_name=tab_rec.source_object_name;
dbms_output.put_line ( tab_rec.source_object_name || ' does not have PK but has unique cols' );
/* -- seems not correct
for i in (SELECT b.table_name, b.column_name, b.position
FROM dba_constraints a, dba_cons_columns b
WHERE a.owner = b.owner
AND a.constraint_name = b.constraint_name
AND constraint_type = 'U'
AND b.owner = tab_rec.source_owner
AND b.table_name = tab_rec.source_object_name
AND rownum = 1
ORDER BY b.position)
*/
-- replaced by :
/*
for i in (
SELECT b.table_name, b.column_name, b.position
FROM ( select * from dba_constraints a1
where a1.owner=tab_rec.source_owner
and a1.table_name=tab_rec.source_object_name
and a1.constraint_type='U'
and rownum=1
) a,
dba_cons_columns b
WHERE a.owner = b.owner
AND a.constraint_name = b.constraint_name
ORDER BY b.position )
*/
-- check dba_indexes and dba_ind_columns instead
for i in (
SELECT b.table_name, b.column_name, b.column_position
FROM ( select * from dba_indexes a1
where a1.owner=tab_rec.source_owner
and a1.table_name=tab_rec.source_object_name
and a1.UNIQUENESS='UNIQUE'
and rownum=1
) a,
dba_ind_columns b
WHERE a.owner = b.index_owner
AND a.index_name = b.index_name
ORDER BY b.column_position )
loop
if initial=1 then
uk:=i.column_name;
initial:=initial+1;
else
uk:=uk||','||i.column_name;
end if;
end loop;
-- update dbscrub.config_table set pk_uk_columns=uk where source_owner=tab_rec.source_owner and source_object_name=tab_rec.source_object_name;
dbms_output.put_line ( tab_rec.source_object_name || ' does not have PK but has unique cols such as ' || uk );
else
-- update dbscrub.config_table set has_pk_uk='NO' where source_owner=tab_rec.source_owner and source_object_name=tab_rec.source_object_name;
dbms_output.put_line ( tab_rec.source_object_name || ' does not have PK or any unique cols');
end if;
end loop;
commit;
end;
/
</pre>
The modified script's output looks like:
<br />
<br />
<pre>T5 does not have PK or any unique cols
T4 does not have PK but has unique cols
T4 does not have PK but has unique cols such as C1,C2
T3 does not have PK but has unique cols
T3 does not have PK but has unique cols such as C1,C2
T2 does not have PK but has unique cols
T2 does not have PK but has unique cols such as C1,C2
T1 has PK
T1 PK cols : C1,C2
</pre>
BTW, the need for this script comes from our tasks of configuring Oracle GoldenGate.ydshttp://www.blogger.com/profile/15884036034395020134noreply@blogger.com0tag:blogger.com,1999:blog-30263044.post-32650609126823738182013-09-05T13:56:00.000-04:002013-09-05T13:56:04.110-04:00My Oracle Database Troubleshooting Scripts<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9XaY8QB_6qJ33-M91fUgCezjNITolcf1_tQSOPQP5mp_qEUF7-MKoTbblkMw2un6NcWjwk1v30eKU41bS7ud3x9pSXc4Dtwu6OsZ90Mndq7Gv6NJ96hFQ75AzM11LFvlfP_ro/s1600/myscripts.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9XaY8QB_6qJ33-M91fUgCezjNITolcf1_tQSOPQP5mp_qEUF7-MKoTbblkMw2un6NcWjwk1v30eKU41bS7ud3x9pSXc4Dtwu6OsZ90Mndq7Gv6NJ96hFQ75AzM11LFvlfP_ro/s1600/myscripts.PNG" /></a></div>
<br />
<br />
<br />
<br />
<br />
Whenever I troubleshoot Oracle production database performance issues or just need to do a health check, I will start with a script called <em>event.sql</em>, then I will most likely execute several other scripts from the SQL*Plus command line interface. In the above picture, I grouped my often-used scripts into four categories: Wait Events, Sessions, SQLs and Workload. After executing <em>event.sql</em>, the second script I use will probably be among the green ones, and the third among the blue ones. The methodology I adopted is rooted in wait event analysis. All the scripts mentioned are in <a href="https://sites.google.com/site/denissunsite/scripts/trblshtng_scripts.zip" target="_blank">this zip file</a>. Those scripts are intended to be used for pinpointing or narrowing down the problem area in the first 5 or 10 minutes of troubleshooting production database issues, which often have a sense of urgency and require solutions that stabilize the system in a short time.<br />
<br />
In the following I will give short descriptions of the purpose of each script. Notice in <a href="http://oracle-study-notes.blogspot.com/2009/09/my-collection-of-dba-scripts.html" target="_blank">a previous post</a> I already explained about a few scripts.<br />
<br />
<ul>
<li><strong>Wait Events</strong></li>
</ul>
<br />
<em><strong>event.sql</strong></em> - usually the first script to execute; gives the count of each wait event; a quick way to show whether there are any abnormalities; in a typical OLTP-type database, we should see 'db file sequential read' as the most counted event after the idle events. Tanel Poder shared his thoughts about <em>first round session troubleshooting </em><a href="http://blog.tanelpoder.com/2008/01/05/systematic-application-troubleshooting-in-unix/" target="_blank">here</a>.<br />
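<br />
(For illustration only - this is not the actual event.sql from the zip file, just a sketch of the idea:)
<br />
<pre>select event, count(*)
  from v$session
 where wait_class != 'Idle'
 group by event
 order by 2 desc;
</pre>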
<br />
<em><strong>eventashg.sql</strong></em> - show the top 5 wait events for a given interval from gv$active_session_history; similar to the AWR top 5 wait events section; being RAC-aware makes it probably the first script I will use to check RAC database health.<br />
<br />
<em><strong>sw.sql</strong></em> - from <a href="http://tech.e2sn.com/oracle-scripts-and-tools" target="_blank">Tanel Poder</a>; given SID, show current wait event of a session.<br />
<br />
<em><strong>snapper.sql</strong></em> - from Tanel Poder; very famous, <a href="http://tech.e2sn.com/oracle-scripts-and-tools/session-snapper" target="_blank">check this link</a>! In the SQL Server world, <a href="http://sqlblog.com/blogs/adam_machanic/archive/2012/03/22/released-who-is-active-v11-11.aspx" target="_blank">there is something similar.</a> (<em>snapper_dflt.sql in the zip file </em>is a convenience wrapper I wrote).<br />
<br />
<br />
<ul>
<li><strong>Workload</strong></li>
</ul>
<em><strong>logsw.sql</strong> - </em>Display the number of log switches per hour in a tabular format. Very useful for understanding the workload distribution. From <a href="http://www.idevelopment.info/" target="_blank">Jeff Hunter</a>.<br />
<br />
<em><strong>sysmetric.sql</strong> -</em> Display some system metrics from gv$sysmetric_history such as Redo Generated Per Sec, Host CPU Utilization (%) and User Transaction Per Sec etc for the past 60 minutes. Being RAC-aware makes it the first or second script I use to check a RAC database.<br />
<br />
<em><strong>aas_ash.sql and aas_awr.sql</strong></em> - Display average active sessions from ASH view and AWR, respectively. AAS is an indicator for workload or performance changes.<br />
<br />
<ul>
<li><strong>Sessions</strong></li>
</ul>
<strong><em>sesevt.sql</em></strong> - Given wait event name, show the sessions basic information.<br />
<br />
<em><strong>qlocks.sql</strong></em> - Display blockers and waiters based on v$lock view.<br />
<br />
<em><strong>longsql.sql</strong></em> - Display long-running SQLs; a quick way to find candidate "bad" SQLs in the database.<br />
<br />
<em><strong>longops.sql</strong></em> - Display long operations from v$session_longops.<br />
<br />
<strong><em>pxses.sql</em></strong> - Display parallel execution server sessions.<br />
<br />
<em><strong>snapper.sql</strong></em> - Yes again! It is really about sessions statistics and wait events.<br />
<br />
<strong><em>sessid.sql</em></strong> - Given session SID, display the session related information.<br />
<br />
<strong><em>ses*.sql</em></strong> - All those are querying v$session given some inputs such as: machine, server process id, OS user, database user and module etc.<br />
<br />
<strong><em>sess_kill_batch.sql</em></strong> - Generate kill database sessions commands.<br />
<br />
<em><strong>sess_kill_os.sql</strong></em> - Generate 'kill -9' command for killing server processes at OS level.<br />
<br />
<ul>
<li><strong>SQLs</strong></li>
</ul>
<em><strong> xplan.sql</strong></em> - Given sql_id, show the execution plan from cursor through dbms_xplan.display_cursor().<br />
<br />
<strong><em>sqlhistory.sql</em></strong> - by <a href="http://evdbt.com/scripts/" target="_blank">Tim Gorman</a>; query the "history" of a specified SQL statement, using its "SQL ID", across all database instances in a database, using the AWR repository. Shows execution statistics per execution plan.<br />
<br />
<em><strong>tabix.sql</strong></em> - List the indexes of a table and show which columns they are on and in which order. Very useful when tuning a SQL.<br />
<br />
<em><strong>tabcols.sql -</strong></em> Display table column CBO statistics. Very useful when doing SQL tuning. (from: <a href="http://www.roughsea.com/">http://www.roughsea.com</a> ).<br />
<br />
<strong><em>bindvar.sql</em></strong> - When tuning a SQL, I often use this script to find representative bind values.<br />
<br />
<em><strong>get_ddl.sql -</strong></em> based on the dbms_metadata package to obtain the definitions of objects. When tuning a SQL, sometimes we want to know the underlying table structure and index definitions.<br />
<br />
Below are the screenshots of the output of several scripts:<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCaNGrZbu5mWCZ6KyKWNyHgOWPj6uaPZqtnNSaHLCZ7Iowqtilkk1aikpr_6x2YtThLr8jDS0rPwx8c6qm1eRQNu-di5788pSSfxT_NtmFqe9nE2-Ga2zVuQlcsoc1NOv6XdsT/s1600/event.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="142" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCaNGrZbu5mWCZ6KyKWNyHgOWPj6uaPZqtnNSaHLCZ7Iowqtilkk1aikpr_6x2YtThLr8jDS0rPwx8c6qm1eRQNu-di5788pSSfxT_NtmFqe9nE2-Ga2zVuQlcsoc1NOv6XdsT/s320/event.png" width="320" /></a><br />
<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEib4uFI1ks-LkBExfjmOXNrBE4sJwis6zXjQZ8AexeX7tPe-ACABcLn3yQtkuqu-x7SWbCDmOJGhwgxc90IBBITJLhn3CRo7kJ7rOz2UEZjjsTl3kdNvDvNVREKPvFGejguaRgG/s1600/eventashg.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="116" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEib4uFI1ks-LkBExfjmOXNrBE4sJwis6zXjQZ8AexeX7tPe-ACABcLn3yQtkuqu-x7SWbCDmOJGhwgxc90IBBITJLhn3CRo7kJ7rOz2UEZjjsTl3kdNvDvNVREKPvFGejguaRgG/s320/eventashg.png" width="320" /></a></div>
<div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3JPMX0mOIpb7pu7DJFHCw_feiWzg5uzzQc5qgFa_Oe2w8aOpLAsB5T7EdWsDJjbvFirwf7el4FiD5sHeu1MD1At8Fesq20HmsDnk6G_3j8RyNB36RtFm1CiT6k63ewnrAOGJR/s1600/sysmetric.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="265" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3JPMX0mOIpb7pu7DJFHCw_feiWzg5uzzQc5qgFa_Oe2w8aOpLAsB5T7EdWsDJjbvFirwf7el4FiD5sHeu1MD1At8Fesq20HmsDnk6G_3j8RyNB36RtFm1CiT6k63ewnrAOGJR/s320/sysmetric.png" width="320" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEir991lgkojkOYfxES4wQU-wlXgw3nnfAKyXbpvHMce264neDtneRMsmuDHjk0AGWxVoRJiD12JS0LVLrZlZOaSEQVhWc7hBRR8RhT8nTVliLJtzWuuWdEAmDrdB-fTN50aJntI/s1600/aas.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="208" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEir991lgkojkOYfxES4wQU-wlXgw3nnfAKyXbpvHMce264neDtneRMsmuDHjk0AGWxVoRJiD12JS0LVLrZlZOaSEQVhWc7hBRR8RhT8nTVliLJtzWuuWdEAmDrdB-fTN50aJntI/s320/aas.png" width="320" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Additional cool scripts - Collection of DBA Scripts resources on the web:<br />
<br />
<a href="http://tech.e2sn.com/oracle-scripts-and-tools">http://tech.e2sn.com/oracle-scripts-and-tools</a> (Tanel Poder)<br />
<br />
<a href="http://blog.enkitec.com/scripts/">http://blog.enkitec.com/scripts/</a><br />
<br />
<a href="http://evdbt.com/scripts/">http://evdbt.com/scripts/</a> (Tim Gorman)<br />
<br />
<a href="http://www.idevelopment.info/">http://www.idevelopment.info/</a> (Jeffery M. Hunter)<br />
<br />
<a href="http://karlarao.wordpress.com/scripts-resources/">http://karlarao.wordpress.com/scripts-resources/</a><br />
<br />
<a href="http://www.oracle-base.com/dba/scripts.php">http://www.oracle-base.com/dba/scripts.php</a> </div>
ydshttp://www.blogger.com/profile/15884036034395020134noreply@blogger.com0tag:blogger.com,1999:blog-30263044.post-64368065448622116912013-08-29T14:07:00.000-04:002013-08-29T14:08:12.252-04:00Practise Migration Through Transportable Tablespace from non-ASM to ASMReferences:
<br />
<br />
<a href="http://relcon.typepad.com/relatively_speaking/2009/10/migrating-an-oracle-database-from-solaris-to-linux.html">http://relcon.typepad.com/relatively_speaking/2009/10/migrating-an-oracle-database-from-solaris-to-linux.html</a><br />
<br />
<a href="http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-platformmigrationtts-129269.pdf">http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-platformmigrationtts-129269.pdf</a>
<br />
<br />
<br />
The source database runs on Solaris and uses an OS file system. The target database runs on Linux and uses ASM.
<br />
<br />
<strong>Steps:</strong><br />
<br />
<strong>1. Check that platforms are compatible.</strong> <br />
<br />
Run this query on the database you
want to migrate from and it will list the platforms you can migrate to:
<br />
<br />
select * from V$TRANSPORTABLE_PLATFORM;
<br />
<br />
<br />
Check the value of ENDIAN_FORMAT. If you are migrating from one endian format to another you will need to convert the datafiles; for example, going from Solaris (big endian) to Linux x86-64 (little endian) requires conversion.
<br />
<pre>PLATFORM_ID PLATFORM_NAME ENDIAN_FORMAT
----------- ------------------------------------ --------------
6 AIX-Based Systems (64-bit) Big
16 Apple Mac OS Big
21 Apple Mac OS (x86-64) Little
19 HP IA Open VMS Little
15 HP Open VMS Little
5 HP Tru64 UNIX Little
3 HP-UX (64-bit) Big
4 HP-UX IA (64-bit) Big
18 IBM Power Based Linux Big
9 IBM zSeries Based Linux Big
10 Linux IA (32-bit) Little
11 Linux IA (64-bit) Little
13 Linux x86 64-bit Little
7 Microsoft Windows IA (32-bit) Little
8 Microsoft Windows IA (64-bit) Little
12 Microsoft Windows x86 64-bit Little
17 Solaris Operating System (x86) Little
20 Solaris Operating System (x86-64) Little
1 Solaris[tm] OE (32-bit) Big
2 Solaris[tm] OE (64-bit) Big
</pre>
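A shorter check, and only a sketch that is not in the original note, is to query the endian format of the platform each database is currently running on (run this on both the source and the target):
<br />
<pre>select d.platform_name, tp.endian_format
  from v$database d, v$transportable_platform tp
 where tp.platform_name = d.platform_name;
</pre>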
<br />
<strong>2. You can only migrate tablespaces to a database that uses the same character set and national character set, so check these if you are migrating to an existing database, or make sure to use the same values if you are building a new one.</strong>
<br />
<pre> srcdb> SELECT * FROM nls_database_parameters order by 1;
PARAMETER VALUE
------------------------------ ----------------------------------------
NLS_CALENDAR GREGORIAN
NLS_CHARACTERSET WE8ISO8859P1
NLS_COMP BINARY
NLS_CSMIG_SCHEMA_VERSION 2
NLS_CURRENCY $
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_DUAL_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_LANGUAGE AMERICAN
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_NCHAR_CONV_EXCP FALSE
NLS_NUMERIC_CHARACTERS .,
NLS_RDBMS_VERSION 11.2.0.3.0
NLS_SORT BINARY
NLS_TERRITORY AMERICA
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
21 rows selected.
tgtdb> SELECT * FROM nls_database_parameters order by 1;
PARAMETER VALUE
------------------------------ ----------------------------------------
NLS_CALENDAR GREGORIAN
NLS_CHARACTERSET WE8MSWIN1252
NLS_COMP BINARY
NLS_CURRENCY $
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_DUAL_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_LANGUAGE AMERICAN
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_NCHAR_CONV_EXCP FALSE
NLS_NUMERIC_CHARACTERS .,
NLS_RDBMS_VERSION 11.2.0.3.0
NLS_SORT BINARY
NLS_TERRITORY AMERICA
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
20 rows selected.
</pre>
WE8MSWIN1252 is a superset of WE8ISO8859P1:<br />
Difference between WE8MSWIN1252 and WE8ISO8859P1 characterset (Doc ID 341676.1)
<br />
<br />
Some problems reported when going from WE8ISO8859P1 to WE8MSWIN1252: <a href="https://forums.oracle.com/message/9885182#9885182">https://forums.oracle.com/message/9885182#9885182</a><br />
<br />
<br />
<strong>3. Assuming you are good to go, determine which tablespaces hold the data you want to transport and double-check that the set is self-contained. Execute this PL/SQL procedure against the tablespaces you plan to migrate:</strong>
<br />
<pre> sys@srcdb> execute sys.dbms_tts.transport_set_check('tbs1_data,tbs1_indx',true);
PL/SQL procedure successfully completed.
sys@srcdb> select * from sys.transport_set_violations;
no rows selected
Note: these violations must be resolved before the tablespaces can be transported.
</pre>
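It also helps at this point to list the datafiles belonging to these tablespaces, since they are the files that will be copied and converted later. A minimal sketch (not part of the original steps):
<br />
<pre>select tablespace_name, file_name, bytes/1024/1024 as size_mb
  from dba_data_files
 where tablespace_name in ('TBS1_DATA', 'TBS1_INDX')
 order by tablespace_name, file_name;
</pre>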
<strong>4. The tablespaces need to be in READ ONLY mode in order to successfully run a transport tablespace export</strong>:
<br />
<br />
alter tablespace tbs1_data read only; <br />
<br />
alter tablespace tbs1_indx read only;
<br />
<br />
<br />
<strong>5. Export the metadata.</strong> <br />
<br />
<pre>$ exp userid=\'/ as sysdba\' file=tbs_exp.dmp log=tba_exp.log transport_tablespace=y tablespaces=tbs1_data,tbs1_indx
Export: Release 11.2.0.3.0 - Production on Thu Aug 29 14:42:18 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Oracle Label Security, Oracle Database Vault and Real Application Testing options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace TBS1_DATA ...
. exporting cluster definitions
. exporting table definitions
. . exporting table EC
EXP-00091: Exporting questionable statistics.
For tablespace TBS1_INDX ...
. exporting cluster definitions
. exporting table definitions
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully with warnings.
</pre>
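The same metadata export could also be done with Data Pump. A hypothetical sketch of the equivalent expdp call (not what was used in this run, and assuming a DATA_PUMP_DIR directory object exists):
<br />
<pre>$ expdp system directory=DATA_PUMP_DIR dumpfile=tbs_exp_dp.dmp logfile=tbs_exp_dp.log transport_tablespaces=tbs1_data,tbs1_indx
</pre>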
<strong>6. Copy the metadata export, the datafiles, and any schema exports to the new host.</strong>
<br />
<br />
cp tbs1_data_01.dbf /oradbbackup02_NFS/landingpad/dump
<br />
cp tbs1_indx_01.dbf /oradbbackup02_NFS/landingpad/dump<br />
cp tbs_exp.dmp /oradbbackup02_NFS/landingpad/dump
<br />
<br />
Note: /oradbbackup02_NFS is the NFS mount point on the source host; on the target the same storage is mounted as /oradbbackup02.
<br />
<br />
<strong>7. Convert the datafiles to the new platform format using RMAN; in this example we put the converted datafiles directly into an ASM disk group:</strong>
<br />
<pre>rman target /
convert datafile
'/oradbbackup02/landingpad/dump/tbs1_data_01.dbf',
'/oradbbackup02/landingpad/dump/tbs1_indx_01.dbf'
from platform 'Solaris[tm] OE (64-bit)'
format '+DATA_05'
parallelism 4;
Starting conversion at target at 29-AUG-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=118 instance=tgtdb2 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=124 instance=tgtdb2 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=137 instance=tgtdb2 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=151 instance=tgtdb2 device type=DISK
channel ORA_DISK_1: starting datafile conversion
input file name=/oradbbackup02/landingpad/dump/tbs1_data_01.dbf
channel ORA_DISK_2: starting datafile conversion
input file name=/oradbbackup02/landingpad/dump/tbs1_indx_01.dbf
converted datafile=+DATA_05/tgtdb/datafile/tbs1_data.316.824728129
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:04
converted datafile=+DATA_05/tgtdb/datafile/tbs1_indx.317.824728131
channel ORA_DISK_2: datafile conversion complete, elapsed time: 00:00:04
Finished conversion at target at 29-AUG-13
</pre>
<br />
<strong>8. Plug the tablespaces into the target database. Check the ASM instance to get the names of the converted datafiles, then build an import parameter file:</strong><br />
<br />
<br />
<br />
<pre>-- imp.par
file=tbs_exp.dmp
log=tts.log
transport_tablespace=Y
datafiles='+DATA_05/tgtdb/datafile/tbs1_data.316.824728129','+DATA_05/tgtdb/datafile/tbs1_indx.317.824728131'
</pre>
<br />
$ imp userid=\'/ as sysdba\' parfile=imp.par
<br />
<br />
<br />
<pre>
Import: Release 11.2.0.3.0 - Production on Thu Aug 29 12:59:02 2013
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Tes
Export file created by EXPORT:V11.02.00 via conventional path
About to import transportable tablespace(s) metadata...
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8MSWIN1252 character set (possible charset conversion)
. importing SYS's objects into SYS
. importing SYS's objects into SYS
. importing OPS$ORACLE's objects into OPS$ORACLE
. . importing table "EC"
. importing SYS's objects into SYS
Import terminated successfully without warnings.
</pre>
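Had Data Pump been used for the export, the corresponding import would reference the converted files through transport_datafiles. A hypothetical sketch, reusing the converted file names from the RMAN output above:
<br />
<pre>-- imp_dp.par (hypothetical)
directory=DATA_PUMP_DIR
dumpfile=tbs_exp_dp.dmp
logfile=tts_imp_dp.log
transport_datafiles='+DATA_05/tgtdb/datafile/tbs1_data.316.824728129','+DATA_05/tgtdb/datafile/tbs1_indx.317.824728131'
</pre>
<br />
$ impdp system parfile=imp_dp.par
<br />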
<strong>9. Set the imported tablespaces to read write:</strong>
<br />
<br />
alter tablespace tbs1_data read write;
<br />
alter tablespace tbs1_indx read write;
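<br />
<br />
As a final sanity check (not part of the original write-up), confirm that the plugged-in tablespaces are now online and read write:
<br />
<pre>select tablespace_name, status
  from dba_tablespaces
 where tablespace_name in ('TBS1_DATA', 'TBS1_INDX');
</pre>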
ydshttp://www.blogger.com/profile/15884036034395020134noreply@blogger.com0tag:blogger.com,1999:blog-30263044.post-63200021682407742452013-08-28T09:57:00.002-04:002013-09-06T16:07:43.401-04:00Get the row count from index fast full scanThis post shows how the parallel_index hint can enable an index fast full scan access path to obtain the row count of a large table with a primary key faster than a parallel full table scan. BO_ID is the primary key column of the BO_NBQQJOH table.
<br />
<br />
The following summarizes the results:
<br />
<br />
<pre>access path                       elapsed    consistent gets
-------------------------------   --------   ---------------
parallel full table scan          14m11s     8.8M
parallel index fast full scan     3m49s      2.8M
</pre>
<br />
<pre name="code" class="sql">
<span style="font-size: x-small;">SQL> select /*+ parallel (a 6) */ count(bo_id) from ttq_dqg.bo_nbqqjoh_v a;
COUNT(BO_ID)
------------
917384760
1 row selected.
Elapsed: 00:14:11.13
Execution Plan
----------------------------------------------------------
Plan hash value: 1529574786
------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 | 341K (1)| 01:08:17 | | | |
| 1 | SORT AGGREGATE | | 1 | 7 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 7 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 7 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 912M| 6088M| 341K (1)| 01:08:17 | Q1,00 | PCWC | |
| 6 | TABLE ACCESS FULL| BO_NBQQJOH | 912M| 6088M| 341K (1)| 01:08:17 | Q1,00 | PCWP | |
------------------------------------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
158 recursive calls
0 db block gets
8846234 consistent gets
8403973 physical reads
0 redo size
218 bytes sent via SQL*Net to client
252 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
8 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> select /*+ parallel_index ( a, BO_NBQQJOH_NEW_PK, 6) */ count(bo_id) from ttq_dqg.bo_nbqqjoh_v a;
COUNT(BO_ID)
------------
917391460
1 row selected.
Elapsed: 00:03:49.60
Execution Plan
----------------------------------------------------------
Plan hash value: 1771473710
----------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 7 | 487K (3)| 01:37:33 | | | |
| 1 | SORT AGGREGATE | | 1 | 7 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 7 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 7 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 912M| 6088M| 487K (3)| 01:37:33 | Q1,00 | PCWC | |
| 6 | INDEX FAST FULL SCAN| BO_NBQQJOH_NEW_PK | 912M| 6088M| 487K (3)| 01:37:33 | Q1,00 | PCWP | |
----------------------------------------------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
30 recursive calls
0 db block gets
2854834 consistent gets
2199288 physical reads
0 redo size
237 bytes sent via SQL*Net to client
252 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL>
</span></pre>
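If the optimizer does not choose the index fast full scan on its own, the access path can be requested explicitly together with the parallel degree. A minimal sketch, assuming the same view, alias and index names as above:
<br />
<pre>-- Force an index fast full scan on the primary key index and run it in parallel.
select /*+ index_ffs(a BO_NBQQJOH_NEW_PK) parallel_index(a, BO_NBQQJOH_NEW_PK, 6) */
       count(bo_id)
  from ttq_dqg.bo_nbqqjoh_v a;
</pre>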
ydshttp://www.blogger.com/profile/15884036034395020134noreply@blogger.com0tag:blogger.com,1999:blog-30263044.post-34604230605976185512013-07-26T17:12:00.000-04:002013-07-26T17:12:10.036-04:00Find rows that have duplicate values in a columnThis post is to describe my test on finding rows that have duplicate values in a column. The purpose is to understand which method is more efficient: analytic function approach or subquery approach.
First of all, a test table called "big_table" was created with 10,000,000 rows and an ID column of NUMBER data type; the ID values run from 1 to 10,000,000.
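The DDL used to build the test table is not shown; a hypothetical sketch of how such a table could be generated (the exact column list is illustrative) is:
<br />
<pre>-- Hypothetical: build a 10,000,000-row table with a sequential NUMBER id.
create table big_table nologging as
  select rownum as id,
         a.owner, a.object_name, a.object_type, a.created, a.status
    from all_objects a,
         (select level as n from dual connect by level <= 2000)
   where rownum <= 10000000;
</pre>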
Then I created 10 duplicate rows by the following insert statement:
<br />
<pre>SQL> insert into big_table select * from big_table where rownum<=10;
</pre>
I also created an index on the ID column.
<br />
<pre>SQL> create index big_table_ix on big_table(id) nologging;
Index created.
Elapsed: 00:01:34.60
</pre>
The analytic function approach used the following SQL statement to find those rows:
<br />
<pre> SELECT rid
FROM (SELECT ROWID rid,
ROW_NUMBER () OVER (PARTITION BY id ORDER BY ROWID) rn
FROM big_table)
WHERE rn <>1;
</pre>
The execution plan of above SQL is as follows:
<br />
<pre>-----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 10M| 238M| | 90333 (2)| 00:18:04 |
|* 1 | VIEW | | 10M| 238M| | 90333 (2)| 00:18:04 |
| 2 | WINDOW SORT | | 10M| 171M| 268M| 90333 (2)| 00:18:04 |
| 3 | TABLE ACCESS FULL| BIG_TABLE | 10M| 171M| | 32467 (1)| 00:06:30 |
-----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("RN"<>1)
</pre>
The subquery approach used the following SQL statement:
<br />
<pre> select rowid
from big_table a
where a.rowid >
( select min (rowid)
from big_table b
where b.id = a.id) ;
</pre>
And the execution plan looks like:
<br />
<pre>------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 495K| 17M| | 77095 (3)| 00:15:26 |
|* 1 | FILTER | | | | | | |
| 2 | HASH GROUP BY | | 495K| 17M| 425M| 77095 (3)| 00:15:26 |
|* 3 | HASH JOIN | | 10M| 346M| 286M| 39374 (2)| 00:07:53 |
| 4 | INDEX FAST FULL SCAN| BIG_TABLE_IX | 10M| 171M| | 5303 (3)| 00:01:04 |
| 5 | INDEX FAST FULL SCAN| BIG_TABLE_IX | 10M| 171M| | 5303 (3)| 00:01:04 |
------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWID>MIN(ROWID))
3 - access("B"."ID"="A"."ID")
</pre>
If there is no index on the ID column, the execution plan of the subquery approach looks like:
<br />
<pre>------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 495K| 17M| | 131K (2)| 00:26:18 |
|* 1 | FILTER | | | | | | |
| 2 | HASH GROUP BY | | 495K| 17M| 425M| 131K (2)| 00:26:18 |
|* 3 | HASH JOIN | | 10M| 346M| 286M| 93703 (2)| 00:18:45 |
| 4 | TABLE ACCESS FULL| BIG_TABLE | 10M| 171M| | 32467 (1)| 00:06:30 |
| 5 | TABLE ACCESS FULL| BIG_TABLE | 10M| 171M| | 32467 (1)| 00:06:30 |
------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWID>MIN(ROWID))
3 - access("B"."ID"="A"."ID")
</pre>
<br />
I used the AUTOTRACE tool to obtain the consistent gets and SET TIMING ON before executing the SQL statements. I measured each approach three times. Below are the results:
<br />
<pre>approach      consistent gets   execution time
-------------------------------------------
analytic 146613 1m50s
146607 1m48s
146545 1m36s
---------------------------------------------
subquery 47293 2m25s
47228 2m18s
47309 2m23s
---------------------------------------------
subquery 293172 2m27s
(no index) 293146 2m30s
293172 2m31s
-------------------------------------------
</pre>
It can be seen that, compared to the subquery approach (with index), the analytic function approach is faster by ~40 s although it requires more logical I/O. The subquery approach (without index) required double the I/O of the analytic function approach, as it needs to scan big_table twice as shown in the execution plan. The execution time is not determined by I/O; these queries are CPU-bound, and SQL_TRACE/TKPROF shows that clearly. Below is a summary of data from the TKPROF report:
<br />
<pre>approach cpu elapsed query
-------------------------------------------
analytic 109.57 111.35 146631
subquery 172.68 173.99 47314
-------------------------------------------
</pre>
Obviously, CPU time is predominant. That explains the very small difference in execution time between the subquery approach with and without the index, and it suggests that creating an index on the ID column brings little benefit when looking for the duplicate values. However, there is a case where we already know the duplicate values and simply want to remove the extra rows that carry them. This situation actually happened recently in our production environment. We did a 1.5M-row data load into a production table; before the load we dropped all the indexes on this table, but we made a mistake and loaded 50K rows twice. As a result, when we tried to create the unique index on the PK column it failed. I used the analytic function approach to delete the extra rows, and it took 50 min. The delete statement is similar to the following:
<br />
<pre> delete from big_table
where rowid in ( SELECT rid
FROM (SELECT ROWID rid,
ROW_NUMBER () OVER (PARTITION BY id ORDER BY ROWID) rn
FROM big_table)
WHERE rn <>1
);
</pre>
The above approach is not the most efficient one in this case. Because we already know the 50K duplicate PK values, the better approach is to create a non-unique index on the PK column, construct a driver table (ids) containing the 50K values, and use the following SQL:
<br />
<pre> delete from big_table a
where
a.id in ( select id from ids)
and a.rowid >
(select min(rowid)
from big_table b where b.id = a.id);
</pre>
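The post assumes the 50K duplicate key values are already known. When they are not, a hypothetical sketch of how the driver table ids could be populated with a simple GROUP BY / HAVING is:
<br />
<pre>-- Hypothetical: collect the key values that appear more than once.
create table ids nologging as
  select id
    from big_table
   group by id
  having count(*) > 1;
</pre>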
The driver-table approach took about 20 min, including the time to create the index and the driver table. ydshttp://www.blogger.com/profile/15884036034395020134noreply@blogger.com0tag:blogger.com,1999:blog-30263044.post-23796965875045052182013-07-15T01:39:00.000-04:002013-07-15T01:39:07.227-04:00Redo Reduction by Using Global Temporary TableThere are about 1M rows in BIG_TABLE. The goal is to load data (transformed in some way in a real situation) from BIG_TABLE to MY_BIG_TABLE. The following two procedures were compared:<br />
<br />
<ul>
<li>Procedure test_pro</li>
</ul>
<pre>
create or replace procedure test_pro IS
CURSOR BIG_TABLE_CUR
IS
SELECT id ,owner ,object_name ,object_type ,created ,status from big_table;
TYPE BIG_TABLE_TYPE
IS TABLE OF BIG_TABLE_CUR%ROWTYPE INDEX BY PLS_INTEGER;
BIG_TABLE_REC BIG_TABLE_TYPE;
BEGIN
OPEN BIG_TABLE_CUR;
LOOP
FETCH BIG_TABLE_CUR BULK COLLECT
INTO BIG_TABLE_REC LIMIT 2000;
EXIT WHEN BIG_TABLE_REC.COUNT = 0;
FORALL i IN 1 .. BIG_TABLE_REC.count
INSERT INTO my_big_table VALUES BIG_TABLE_REC (i);
COMMIT;
END LOOP;
CLOSE BIG_TABLE_CUR;
END;
</pre>
<ul>
<li>Procedure test_pro2; here my_big_table_gtt is a global temporary table</li>
</ul>
<br />
<pre>
create or replace procedure test_pro2 IS
CURSOR BIG_TABLE_CUR
IS
SELECT
id ,owner ,object_name ,object_type ,created ,status from big_table;
TYPE BIG_TABLE_TYPE
IS TABLE OF BIG_TABLE_CUR%ROWTYPE INDEX BY PLS_INTEGER;
BIG_TABLE_REC BIG_TABLE_TYPE;
BEGIN
OPEN BIG_TABLE_CUR;
LOOP
FETCH BIG_TABLE_CUR BULK COLLECT
INTO BIG_TABLE_REC LIMIT 2000;
EXIT WHEN BIG_TABLE_REC.COUNT = 0;
FORALL i IN 1 .. BIG_TABLE_REC.count
INSERT INTO my_big_table_gtt VALUES BIG_TABLE_REC (i);
COMMIT;
END LOOP;
CLOSE BIG_TABLE_CUR;
execute immediate 'insert /*+ append */ into my_big_table select * from my_big_table_gtt';
commit;
END;
</pre>
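The DDL for my_big_table_gtt is not shown in the post; a hypothetical sketch of how it could be defined is below. ON COMMIT PRESERVE ROWS is needed because the rows must survive the intermediate commits inside the loop.
<br />
<pre>-- Hypothetical: empty global temporary table with the same shape as the cursor.
create global temporary table my_big_table_gtt
  on commit preserve rows
as
  select id, owner, object_name, object_type, created, status
    from big_table
   where 1 = 0;
</pre>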
I measured the redo generation via v$mystat during the test; the results are as follows: <br />
<br />
test_pro redo 71M run time: 4.57 s <br />
test_pro2 redo 6M run time: 4.59 s<br />
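<br />
The redo figures come from v$mystat; a minimal sketch of the query that can be run before and after each procedure (taking the difference between the two readings) is:
<br />
<pre>select n.name, s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'redo size';
</pre>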
<br />
The test database is in noarchivelog mode.ydshttp://www.blogger.com/profile/15884036034395020134noreply@blogger.com0