Monday, December 17, 2007

A Testcase for Transportable Tablespace

To familiarize myself with the transportable tablespace feature, I conducted this test:

Test Steps

1. Create a tablespace called TEST_TTS for the test

connect / as sysdba
create tablespace test_tts datafile='e:\temp\test_tts01.dbf' size 2M;
connect scott/tiger
create table t tablespace test_tts as
select rownum id, a.* from all_objects a where rownum <100;


2. Check if the tablespace is self-contained (optional)

sys@TEST10GR> execute dbms_tts.transport_set_check('test_tts',true);

PL/SQL procedure successfully completed.
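
After the call, any violations are listed in the view SYS.TRANSPORT_SET_VIOLATIONS; no rows returned means the tablespace set is self-contained:

select * from sys.transport_set_violations;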

3. Make the tablespace READ ONLY:

SQL>  alter tablespace test_tts read only;

scott@TEST10GR> select tablespace_name, status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
...
TEST_TTS                       READ ONLY

4. Export the tablespace. At the source host, from the OS prompt, issue:

#> exp tablespaces=test_tts transport_tablespace=y file=exp_test_tts.dmp

Export: Release 10.2.0.1.0 - Production on Mon Dec 17 13:38:45 2007

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Username: sys/oracle as sysdba

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace TEST_TTS ...
. exporting cluster definitions
. exporting table definitions
. . exporting table                              T
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully without warnings


Notes: You must log in as SYSDBA to perform the export.
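
In 10g the same metadata export can also be done with Data Pump; a sketch, assuming a directory object DP_DIR already exists:

expdp system/oracle directory=dp_dir dumpfile=exp_test_tts.dmp transport_tablespaces=test_tts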

5. Copy the files exp_test_tts.dmp and e:\temp\test_tts01.dbf to the directory e:\Denis_Files\temp at the target host

6. Make the tablespace read write at the source database

SQL> alter tablespace test_tts read write;

7. At the target host, plug the tablespace into the target database.
From the OS command prompt, issue:

cd E:\Denis_Files\temp
E:\Denis_Files\temp>ls -lh  *.DBF *.dmp
-rw-r--r-- 1 Denis Sun None 2.1M Dec 17 09:28 TEST_TTS01.DBF
-rw-r--r-- 1 Denis Sun None 4.0K Dec 17 09:40 exp_test_tts.dmp

E:\Denis_Files\temp>imp tablespaces=test_tts transport_tablespace=y file=exp_test_tts.dmp datafiles='E:\Denis_Files\temp\TEST_TTS01.DBF' tts_owners=scott

Import: Release 10.2.0.1.0 - Production on Mon Dec 17 13:45:42 2007

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Username: sys/oracle as sysdba

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
About to import transportable tablespace(s) metadata...
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing SYS's objects into SYS
. importing SYS's objects into SYS
. importing SCOTT's objects into SCOTT
. . importing table                            "T"
. importing SYS's objects into SYS
Import terminated successfully without warnings.


Notes: we can specify fromuser and touser to map different schemas.

8. Verify the target database

SQL> select name from v$datafile;

NAME
-------------------------------------------------------------

D:\ORADATA\DB10G\SYSTEM01.DBF
D:\ORADATA\DB10G\UNDOTBS01.DBF
D:\ORADATA\DB10G\SYSAUX01.DBF
D:\ORADATA\DB10G\USERS01.DBF
E:\DENIS_FILES\TEMP\TEST_TTS01.DBF


9. Log into the source and target database, issue

SQL> drop tablespace test_tts including contents and datafiles;




Notes:
-----

Find the byte order of operating systems

select * from v$transportable_platform order by platform_id;

PLATFORM_ID PLATFORM_NAME                       ENDIAN_FORMAT
----------- ----------------------------------- --------------
1           Solaris[tm] OE (32-bit)             Big
2           Solaris[tm] OE (64-bit)             Big
3           HP-UX (64-bit)                      Big
4           HP-UX IA (64-bit)                   Big
5           HP Tru64 UNIX                       Little
6           AIX-Based Systems (64-bit)          Big
7           Microsoft Windows IA (32-bit)       Little
8           Microsoft Windows IA (64-bit)       Little
9           IBM zSeries Based Linux             Big
10          Linux IA (32-bit)                   Little
11          Linux IA (64-bit)                   Little
12          Microsoft Windows 64-bit for AMD    Little
13          Linux 64-bit for AMD                Little
15          HP Open VMS                         Little
16          Apple Mac OS                        Big
17          Solaris Operating System (x86)      Little
18          IBM Power Based Linux               Big
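
If the ENDIAN_FORMAT of the source and target platforms differs, the datafiles must be converted with RMAN before copying; a sketch run on the source host (the target platform and output path below are only examples):

RMAN> CONVERT TABLESPACE test_tts
      TO PLATFORM 'Linux IA (32-bit)'
      FORMAT 'e:\temp\%U';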

Friday, December 14, 2007

RLS workshop

The purpose of this workshop is to demonstrate the use of RLS (row-level security, aka virtual private database (VPD) or fine-grained access control)


Scenario - 1

Description:
Use RLS to modify the following query:
select * from emp;
transparently to the query:
select * from emp where sal <= 1500;

Steps:

1. Create a function authorized_emps
Logon as SCOTT
Script: f_authorized_emps.sql

2. Set up an RLS policy
Logon as SCOTT
Script: add_policy.sql

3. Test
a. select statement
select * from emp;

Notes: only 7 rows are returned, not 14

b. update statement
scott@TEST10GR> update emp set sal=1600;
7 rows updated.

c. delete statement
scott@TEST10GR> delete from emp;
0 rows deleted.

Notes: because as a result of (b) all the rows have sal > 1500, in (c)
we see 0 rows deleted. If we are not aware of RLS, this can be confusing.
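
To confirm which policies are attached to EMP, a quick dictionary check:

select object_owner, object_name, policy_name, pf_owner, function, enable
from all_policies
where object_name = 'EMP';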

d. issue: rollback

4. Cleanup
Script: cleanup_scnr1.sql



 



Scenario - 2: Using update_check in a policy

Steps:

1. Create a function authorized_emps
Logon as SCOTT
Script: f_authorized_emps.sql

2. Set up an RLS policy
Logon as SCOTT
Script: add_policy_uc.sql

3. Test with update

scott@TEST10GR> update emp set sal=1600;
update emp set sal=1600
*
ERROR at line 1:
ORA-28115: policy with check option violation

Notes:
The ORA-28115 error is raised because the policy now prevents any
updates to the value of columns in a row that will cause a change
in the visibility of those rows with the specified predicate.

4. Clean up
Script: cleanup_scnr1.sql (note: cleanup_scnr2.sql does not exist)



 



Scenario - 3: Problems with static policies

Assume that the predicate depends on a changing value

Steps:
1. Set up the test
Script: setup_scnr3.sql

2. Set up an RLS policy
Logon as SCOTT
Script: add_policy_scnr3.sql

3. Test with dynamic policy (by default)
(a) Issue the following statement once:
select count(*) from emp;
It returns 14.

(b) check trigger_fire

scott@TEST10GR> select * from trigger_fire;

VAL
----------
3

Notes: Because the policy function was called twice (once during
the parse phase and once during the execution phase),
the value was incremented by 2, from 1 to 3.

(c) Run the select count five times:

scott@TEST10GR> select count(*) from emp;

COUNT(*)
----------
14

scott@TEST10GR> select count(*) from emp;

COUNT(*)
----------
0

scott@TEST10GR> select count(*) from emp;

COUNT(*)
----------
0

scott@TEST10GR> select count(*) from emp;

COUNT(*)
----------
0

scott@TEST10GR> select count(*) from emp;

COUNT(*)
----------
7

(d) check trigger_fire
scott@TEST10GR> select * from trigger_fire;

VAL
----------
13
Notes: it can be seen that for each select count statement, the function gets
called twice

4. Test with static policy

(a) Run script: reset_scnr3.sql

(b) Run the following statement multiple times:
select count(*) from emp;

Notes: the returned count is always the same.
(c) check trigger_fire

scott@TEST10GR> select * from trigger_fire;
VAL
----------
2
Notes: By declaring a policy as static, we have effectively instructed
the policy function to execute only once, and the policy to reuse
the predicate originally created, even though the predicate might
have changed in the course of time. This behavior might produce
unexpected results in your application, so you should use static
policies with great caution.
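
If a static policy's cached predicate does go stale, it can be rebuilt on demand with DBMS_RLS.REFRESH_POLICY instead of making the policy dynamic; a minimal sketch:

BEGIN
DBMS_RLS.refresh_policy (
object_schema => 'SCOTT',
object_name => 'EMP',
policy_name => 'EMP_POLICY');
END;
/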



  



Scenario - 4 : Dynamic policy
-----------------------------
Description: A user can only see the information for his own department
in the emp table

Implementation:
Create a special schema named RLSOWNER, which creates all the
RLS policies and owns all the policy functions. Only this user,
and no others, is granted the EXEMPT ACCESS POLICY system privilege.


Test steps:

1. Setup
Script: setup_scnr4.sql

2. Test
Script: test_scnr4.sql

Log on as different users and issue the same statement:
select ename, sal, deptno from emp;

Output:

--- Logon as Martin
Connected.

ENAME SAL DEPTNO
---------- ---------- ----------
ALLEN 1600 30
WARD 1250 30
MARTIN 1250 30
BLAKE 2850 30
TURNER 1500 30
JAMES 950 30

6 rows selected.

--- Logon as Ford
Connected.

ENAME SAL DEPTNO
---------- ---------- ----------
SMITH 840 20
JONES 2975 20
SCOTT 3000 20
ADAMS 1100 20
FORD 3000 20

--- Logon as Miller
Connected.

ENAME SAL DEPTNO
---------- ---------- ----------
CLARK 2450 10
KING 5000 10
MILLER 1300 10

--- Logon as Noexist
Connected.
select ename, sal, deptno from emp
*
ERROR at line 1:
ORA-28112: failed to execute policy function


Notes: if a user does not exist in the scott.emp table, we see the ORA-28112 error
and can find a trace file under user_dump_dest. The policy function
may be modified to give a warning and prevent the trace file in this case, as shown below.
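
One way to modify the policy function as suggested: trap NO_DATA_FOUND and return a false predicate instead of letting the error escape (a sketch, not part of the workshop scripts):

CREATE OR REPLACE FUNCTION authorized_emps (
p_schema_name IN VARCHAR2,
p_object_name IN VARCHAR2
)
RETURN VARCHAR2
IS
l_deptno NUMBER;
BEGIN
IF (p_schema_name = USER)
THEN
RETURN NULL;
END IF;
SELECT deptno INTO l_deptno FROM emp WHERE ename = USER;
RETURN 'DEPTNO = ' || l_deptno;
EXCEPTION
WHEN NO_DATA_FOUND THEN
RETURN '1 = 2'; -- user not in EMP: sees no rows, no ORA-28112, no trace file
END;
/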

Solution: we can create a role called EMP_SELECT, grant select on
emp to this role, and then grant the role only to those users present in the emp table, as sketched below.
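
A sketch of that solution (grant the role only to users present in EMP, and drop the grant to PUBLIC from the setup script):

create role emp_select;
grant select on scott.emp to emp_select;
grant emp_select to martin, miller, ford;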

3. Cleanup
script: cleanup_scnr4.sql




 



Appendix A - Scripts


***** Script: add_policy.sql **************************************************
BEGIN
DBMS_RLS.add_policy (
object_schema => 'SCOTT',
object_name => 'EMP',
policy_name => 'EMP_POLICY',
function_schema => 'SCOTT',
policy_function => 'AUTHORIZED_EMPS',
statement_types => 'INSERT, UPDATE, DELETE, SELECT'
);
END;
/

***** Script: add_policy_uc.sql ***************************************
BEGIN
DBMS_RLS.add_policy (
object_schema => 'SCOTT',
object_name => 'EMP',
policy_name => 'EMP_POLICY',
function_schema => 'SCOTT',
policy_function => 'AUTHORIZED_EMPS',
statement_types => 'INSERT, UPDATE, DELETE, SELECT',
update_check => TRUE
);
END;
/
***** Script: add_policy_scnr3.sql ************************************
BEGIN
DBMS_RLS.add_policy (
object_schema => 'SCOTT',
object_name => 'EMP',
policy_name => 'EMP_POLICY',
function_schema => 'SCOTT',
policy_function => 'AUTHORIZED_EMPS',
statement_types => 'INSERT, UPDATE, DELETE, SELECT',
update_check => TRUE,
static_policy => FALSE
);
END;
/


***** Script: cleanup_scnr1.sql ***************************************

BEGIN
DBMS_RLS.drop_policy (object_name => 'EMP', policy_name => 'EMP_POLICY') ;
END;
/

drop function authorized_emps
/

***** Script: cleanup_scnr3.sql ****************************************

BEGIN
DBMS_RLS.drop_policy (object_name => 'EMP', policy_name => 'EMP_POLICY') ;
END;
/

drop function authorized_emps
/

drop table trigger_fire
/

**** Script: cleanup_scnr4.sql ****************************************

connect / as sysdba
drop user rlsowner cascade;
drop user martin cascade;
drop user miller cascade;
drop user ford cascade;
drop user noexist cascade;
drop public synonym emp;

***** Script: f_authorized_emps.sql ***********************************

CREATE OR REPLACE FUNCTION authorized_emps (
p_schema_name IN VARCHAR2,
p_object_name IN VARCHAR2
)
RETURN VARCHAR2
IS
BEGIN
RETURN 'SAL <= 1500';
END;
/

***** Script: reset_scnr3.sql *****************************************

BEGIN
DBMS_RLS.drop_policy (object_name => 'EMP', policy_name => 'EMP_POLICY');
END;
/


BEGIN
DBMS_RLS.add_policy (
object_name => 'EMP',
policy_name => 'EMP_POLICY',
function_schema => 'SCOTT',
policy_function => 'AUTHORIZED_EMPS',
statement_types => 'INSERT, UPDATE, DELETE, SELECT',
update_check => TRUE,
static_policy => TRUE
);
END;
/

-- Reset the table TRIGGER_FIRE
UPDATE trigger_fire SET val = 1;
COMMIT;


***** Script: setup_scnr3.sql *****************************************

create table trigger_fire(val number);
insert into trigger_fire values(1);
commit;

CREATE OR REPLACE FUNCTION authorized_emps (
p_schema_name IN VARCHAR2,
p_object_name IN VARCHAR2
)
RETURN VARCHAR2
IS
l_return_val VARCHAR2 (2000);
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
l_return_val := 'SAL <= ' || TO_NUMBER (TO_CHAR (SYSDATE, 'ss')) * 100;
UPDATE trigger_fire SET val = val + 1;
COMMIT;
RETURN l_return_val;
END;
/


**** Script: setup_scnr4.sql ***************************************

connect / as sysdba
create user rlsowner identified by rlsowner;
grant connect, resource to rlsowner;
grant exempt access policy to rlsowner;
grant execute on dbms_rls to rlsowner;


REM create users appeared in the emp table for testing purpose
create user martin identified by martin;
create user miller identified by miller;
create user ford identified by ford;
create user noexist identified by noexist;
grant connect to martin;
grant connect to miller;
grant connect to ford;
grant connect to noexist;


create public synonym emp for scott.emp;
grant select on emp to public;

connect rlsowner/rlsowner

CREATE OR REPLACE FUNCTION authorized_emps (
p_schema_name IN VARCHAR2,
p_object_name IN VARCHAR2
)
RETURN VARCHAR2
IS
l_deptno NUMBER;
l_return_val VARCHAR2 (2000);
BEGIN
IF (p_schema_name = USER)
THEN
l_return_val := NULL;
ELSE
SELECT deptno
INTO l_deptno
FROM emp
WHERE ename = USER;
l_return_val := 'DEPTNO = ' || l_deptno;
END IF;

RETURN l_return_val;
END;
/

BEGIN
DBMS_RLS.add_policy (
object_schema => 'SCOTT',
object_name => 'EMP',
policy_name => 'EMP_POLICY',
function_schema => 'RLSOWNER',
policy_function => 'AUTHORIZED_EMPS',
statement_types => 'INSERT, UPDATE, DELETE, SELECT',
update_check => TRUE
);
END;
/

exit;


**** Script: test_scnr4.sql ****************************************

Prompt --- Logon as Martin
connect martin/martin
select ename, sal, deptno from emp;

Prompt --- Logon as Ford
connect ford/ford
select ename, sal, deptno from emp;

Prompt --- Logon as Miller
connect miller/miller
select ename, sal, deptno from emp;

Prompt --- Logon as Noexist
connect noexist/noexist
select ename, sal, deptno from emp;


Tuesday, December 04, 2007

Sort operations are counted the same regardless of the table size

A sort operation is counted the same regardless of whether the table is big or small.

In the following example, one sort is done in memory against a small table and one on disk against a big table, but each counts as one sort operation.



**********************  Case 1  ***************************

scott@ORCL> select * from emp order by 5;

15 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 301026995

----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |   15 |   540 |     4  (25)| 00:00:01 |
|   1 |  SORT ORDER BY       |      |   15 |   540 |     4  (25)| 00:00:01 |
|*  2 |   FILTER             |      |      |       |            |          |
|   3 |    TABLE ACCESS FULL | EMP  |   15 |   540 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(CASE WHEN "SAL">=2000 THEN
SYS_AUDIT('SCOTT','EMP','EMP_SEL',3) ELSE NULL END IS NULL)


Statistics
----------------------------------------------------------
4 recursive calls
15 db block gets
9 consistent gets
0 physical reads
696 redo size
1356 bytes sent via SQL*Net to client
381 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
15 rows processed

********************** Case 2 ***************************

scott@ORCL> select * from big_table order by 5;

1000000 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1472477105

----------------------------------------------------------------------------------------
| Id  | Operation           | Name      | Rows | Bytes | TempSpc | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |           | 998K |   98M |         | 24421   (1)| 00:12:03 |
|   1 |  SORT ORDER BY      |           | 998K |   98M |    260M | 24421   (1)| 00:12:03 |
|   2 |   TABLE ACCESS FULL | BIG_TABLE | 998K |   98M |         |  3383   (2)| 00:01:41 |
----------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
117 recursive calls
43 db block gets
14457 consistent gets
30123 physical reads
520 redo size
21149727 bytes sent via SQL*Net to client
733707 bytes received via SQL*Net from client
66668 SQL*Net roundtrips to/from client
0 sorts (memory)
1 sorts (disk)
1000000 rows processed

Sunday, December 02, 2007

Combine Multiple Scans with CASE Statements

Combining multiple scans into one scan can be done by moving the WHERE condition of each scan into a CASE statement, which filters the data for the aggregation.



hr@ORCL> select count(*) from employees where salary < 10000;
COUNT(*)
----------
87

hr@ORCL> select sum(salary) from employees where salary < 10000;
SUM(SALARY)
-----------
463795.5

hr@ORCL> select avg(salary) from employees where salary < 10000;
AVG(SALARY)
-----------
5330.98276

hr@ORCL> SELECT COUNT (CASE WHEN salary < 10000
2 THEN 1 ELSE null END) count1,
3 SUM (CASE WHEN salary < 10000
4 THEN salary ELSE null END) sum1,
5 AVG (CASE WHEN salary < 10000
6 THEN salary ELSE null END) avg1
7 FROM employees;

COUNT1 SUM1 AVG1
---------- ---------- ----------
87 463795.5 5330.98276


Understanding Oracle Analytical Function: row_number()


Purpose
-------
Understanding Oracle analytical function: row_number()

Syntax
------
row_number() over (order by col_1 [, col_2 ...])
row_number() over (partition by col_n [, col_m ... ]
order by col_1 [, col_2 ...])

row_number() returns an integer greater than or equal to one
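
For example, a common use is top-N per group; this hypothetical query returns the highest-paid employee in each department of scott.emp (on salary ties, row_number picks one arbitrarily):

select deptno, ename, sal
from (select deptno, ename, sal,
row_number() over (partition by deptno order by sal desc) rn
from emp)
where rn = 1;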



Test steps
-----------

1. Without partition

*** Run script: row_number_noprttn.tst from sqlplus

scott@ORCL> ho cat row_number_noprttn.tst

create table row_number_test (
a number,
b varchar2(20)
);

insert into row_number_test values (22, 'twenty two');
insert into row_number_test values ( 1, 'one');
insert into row_number_test values (13, 'thirteen');
insert into row_number_test values ( 5, 'five');
insert into row_number_test values ( 4, 'four');

select a, b, row_number() over (order by b)
from row_number_test
order by a;

drop table row_number_test;

*** Results:

         A B                    ROW_NUMBER()OVER(ORDERBYB)
---------- -------------------- --------------------------
         1 one                                           3
         4 four                                          2
         5 five                                          1
        13 thirteen                                      4
        22 twenty two                                    5


*** Notes: the value of the ROW_NUMBER column is assigned by ORDER BY b,
with 'five' the lowest and 'twenty two' the highest


2. With partition

*** Run script: row_number_prttn.tst from sqlplus

scott@ORCL> ho cat row_number_prttn.tst
create table row_number_test_2 (
a number,
b varchar2(20),
c char(1)
);

insert into row_number_test_2 values (22, 'twenty two', '*');
insert into row_number_test_2 values ( 1, 'one', '+');
insert into row_number_test_2 values (13, 'thirteen', '*');
insert into row_number_test_2 values ( 5, 'five', '+');
insert into row_number_test_2 values ( 4, 'four', '+');

select
a, b, row_number() over (partition by c order by b)
from
row_number_test_2
order
by a;

drop table row_number_test_2;

*** Output

         A B                    ROW_NUMBER()OVER(PARTITIONBYCORDERBYB)
---------- -------------------- --------------------------------------
         1 one                                                       3
         4 four                                                      2
         5 five                                                      1
        13 thirteen                                                  1
        22 twenty two                                                2

*** Notes:
In this case, there are two partitions resulting from PARTITION BY c,
and the row_number value starts from 1 in each partition


Friday, November 30, 2007

Steps in the Emergency Performance Method


(copy from: http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96532/ch3.htm#9798 )

1. Survey the performance problem and collect the symptoms of the performance problem.

This process should include the following:

* User feedback on how the system is underperforming. Is the problem throughput or response time?
* Ask the question, "What has changed since we last had good performance?" This answer can give clues to the problem; however, getting unbiased answers in an escalated situation can be difficult.


2. Sanity-check the hardware utilization of all components of the application system.

Check where the highest CPU utilization is, and check the disk, memory usage, and network performance on all the system components. This quick process identifies which tier is causing the problem. If the problem is in the application, then shift analysis to application debugging. Otherwise, move on to database server analysis.

3. Determine if the database server is constrained on CPU or if it is spending time waiting on wait events.

(a) If the database server is CPU-constrained, then investigate the following:

* Sessions that are consuming large amounts of CPU at the operating system level
* Sessions or statements that perform many buffer gets at the database level (check V$SESSTAT, V$SQL)
* Execution plan changes causing sub-optimal SQL execution (these can be difficult to locate)
* Incorrect setting of initialization parameters
* Algorithmic issues as a result of code changes or upgrades of all components

(b) If the database sessions are waiting on events, then follow the wait events listed in V$SESSION_WAIT to determine what is causing serialization.

In cases of massive contention for the library cache, it might not be possible to logon or submit SQL to the database. In this case, use historical data to determine why there is suddenly contention on this latch. If most waits are for I/O, then sample the SQL being run by the sessions that are performing all of the I/Os.
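
For item 3(a), a hypothetical starting point for spotting heavy buffer-get statements at the database level (V$SQL columns as of 10g; in 9i use HASH_VALUE and ADDRESS instead of SQL_ID):

select sql_id, executions, buffer_gets, substr(sql_text, 1, 60) sql_text
from v$sql
order by buffer_gets desc;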


4. Apply emergency action to stabilize the system.

This could involve actions that take parts of the application off-line or restrict the workload that can be applied to the system. It could also involve a system restart or the termination of jobs in process. These naturally have service level implications.

5. Validate that the system is stable.

Having made changes and restrictions to the system, validate that the system is now stable, and collect a reference set of statistics for the database. Now follow the rigorous performance method described earlier in this book to bring back all functionality and users to the system. This process may require significant application re-engineering before it is complete.

Top Ten Mistakes Found in Oracle Systems

Copied from Oracle9i Database Performance Planning Release 2 (9.2)



  1. Bad Connection Management

    The application connects and disconnects for each database interaction. This problem is common with stateless middleware in application servers. It has over two orders of magnitude impact on performance, and it is totally unscalable.

  2. Bad Use of Cursors and the Shared Pool

    Not using cursors results in repeated parses. If bind variables are not used, then there is hard parsing of all SQL statements. This has an order of magnitude impact in performance, and it is totally unscalable. Use cursors with bind variables that open the cursor and execute it many times. Be suspicious of applications generating dynamic SQL.

  3. Getting Database I/O Wrong

    Many sites lay out their databases poorly over the available disks. Other sites specify the number of disks incorrectly, because they configure disks by disk space and not I/O bandwidth.

  4. Redo Log Setup Problems

    Many sites run with too few redo logs that are too small. Small redo logs cause system checkpoints to continuously put a high load on the buffer cache and I/O system. If there are too few redo logs, then the archive cannot keep up, and the database will wait for the archive process to catch up.

  5. Serialization of data blocks in the buffer cache due to lack of free lists, free list groups, transaction slots (INITRANS), or shortage of rollback segments.

    This is particularly common on INSERT-heavy applications, in applications that have raised the block size to 8K or 16K, or in applications with large numbers of active users and few rollback segments.

  6. Long Full Table Scans

    Long full table scans for high-volume or interactive online operations could indicate poor transaction design, missing indexes, or poor SQL optimization. Long table scans, by nature, are I/O intensive and unscalable.

  7. In Disk Sorting

    In disk sorts for online operations could indicate poor transaction design, missing indexes, or poor SQL optimization. Disk sorts, by nature, are I/O-intensive and unscalable.

  8. High Amounts of Recursive (SYS) SQL

    Large amounts of recursive SQL executed by SYS could indicate space management activities, such as extent allocations, taking place. This is unscalable and impacts user response time. Recursive SQL executed under another user ID is probably SQL and PL/SQL, and this is not a problem.

  9. Schema Errors and Optimizer Problems

    In many cases, an application uses too many resources because the schema owning the tables has not been successfully migrated from the development environment or from an older implementation. Examples of this are missing indexes or incorrect statistics. These errors can lead to sub-optimal execution plans and poor interactive user performance. When migrating applications of known performance, export the schema statistics to maintain plan stability using the DBMS_STATS package.

    Likewise, optimizer parameters set in the initialization parameter file can override proven optimal execution plans. For these reasons, schemas, schema statistics, and optimizer settings should be managed together as a group to ensure consistency of performance.

  10. Use of Nonstandard Initialization Parameters

    These might have been implemented based on poor advice or incorrect assumptions. In particular, parameters associated with SPIN_COUNT on latches and undocumented optimizer features can cause a great deal of problems that can require considerable investigation.

Performance Diagnosis - Understanding the core SQL statements

From:
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96532/ch2.htm


Understanding the core SQL statements in an application is key to performance diagnosis.


For example, a core business transaction is required to run in a subsecond response time. Initial investigation of the transaction shows that this transaction performs 200 logical reads, of which 40 are always obtained from disk. Taking a disk response time of 20 milliseconds, the likely I/O time is 40 x .02 = 0.8 seconds, which probably fails the response time target. The DBA requests that the transaction be rewritten, and the number of logical I/Os is reduced to 80, with an average of five coming from disk.

Thursday, November 29, 2007

Manage ASM - Some tests

Test with ASM
================


1. Login to the ASM instance

NTBK-/home/Denis/e_oracle/Practise_Oracle/asm_practice>export ORACLE_SID=+ASM
NTBK-/home/Denis/e_oracle/Practise_Oracle/asm_practice>sqlplus / as sysdba


2. Disk discovery
select name, header_status, path from v$asm_disk;

NAME HEADER_STATU PATH
------------------------------ ------------ ------------------------------
ORCL_DG1_0000 MEMBER E:\ASMDISKS\_FILE_DISK1
ORCL_DG1_0001 MEMBER E:\ASMDISKS\_FILE_DISK2
ORCL_DG1_0002 MEMBER E:\ASMDISKS\_FILE_DISK3
ORCL_DG1_0003 MEMBER E:\ASMDISKS\_FILE_DISK4

3. Create a diskgroup

idle> ;
1 CREATE DISKGROUP orcl_dg2 external REDUNDANCY
2 DISK 'e:\asmdisks\_file_disk10',
3* 'e:\asmdisks\_file_disk11'

idle> @disk_discover

NAME MOUNT_S HEADER_STATU STATE PATH
------------------------------ ------- ------------ -------- ------------------------------
ORCL_DG1_0000 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK1
ORCL_DG2_0000 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK10
ORCL_DG1_0001 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK2
ORCL_DG1_0002 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK3
ORCL_DG1_0003 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK4
ORCL_DG2_0001 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK11


4. Add a disk

ALTER DISKGROUP orcl_dg2 ADD DISK
'e:\asmdisks\_file_disk12';

idle> @disk_discover.sql

NAME MOUNT_S HEADER_STATU STATE PATH
------------------------------ ------- ------------ -------- -----------------------------
ORCL_DG1_0000 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK1
ORCL_DG1_0001 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK2
ORCL_DG1_0002 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK3
ORCL_DG1_0003 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK4
ORCL_DG2_0000 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK10
ORCL_DG2_0001 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK11
ORCL_DG2_0002 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK12


5. Drop a disk

idle> select name, total_mb - free_mb "USED_MB" from v$asm_disk order by 1;

NAME USED_MB
------------------------------ ----------
ORCL_DG1_0000 82
ORCL_DG1_0001 74
ORCL_DG1_0002 79
ORCL_DG1_0003 77
ORCL_DG2_0000 18
ORCL_DG2_0001 18
ORCL_DG2_0002 18

7 rows selected.

idle> alter diskgroup orcl_dg2 drop disk orcl_dg2_0002;

Diskgroup altered.
idle> select name, total_mb - free_mb "USED_MB" from v$asm_disk order by 1;

NAME USED_MB
------------------------------ ----------
ORCL_DG1_0000 82
ORCL_DG1_0001 74
ORCL_DG1_0002 79
ORCL_DG1_0003 77
ORCL_DG2_0000 26
ORCL_DG2_0001 26
100

6. Resize a diskgroup

idle> select name, total_mb from v$asm_diskgroup;

NAME TOTAL_MB
------------------------------ ----------
ORCL_DG1 396
ORCL_DG2 200


idle> alter diskgroup orcl_dg2 resize all size 80M;

Diskgroup altered.

idle> select name, total_mb from v$asm_diskgroup;

NAME TOTAL_MB
------------------------------ ----------
ORCL_DG1 396
ORCL_DG2 160


7. Mount/Dismount ASM diskgroup

idle> select name, state from v$asm_diskgroup;

NAME STATE
------------------------------ -----------
ORCL_DG1 MOUNTED
ORCL_DG2 MOUNTED


idle> alter diskgroup orcl_dg2 dismount;

Diskgroup altered.

idle> alter diskgroup orcl_dg1 dismount;
alter diskgroup orcl_dg1 dismount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15027: active use of diskgroup "ORCL_DG1" precludes its dismount

Notes: If you try to dismount a disk group that contains open files,
the statement will fail, unless you also specify the FORCE clause.
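
Per the note above, the FORCE clause lets the dismount go through even with open files (a sketch; use with care, since those open files become unusable):

alter diskgroup orcl_dg1 dismount force;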

idle> select name, state from v$asm_diskgroup;

NAME STATE
------------------------------ -----------
ORCL_DG1 MOUNTED
ORCL_DG2 DISMOUNTED

idle> alter diskgroup orcl_dg2 mount;



8. Observing rebalancing when adding a disk

idle> select name, total_mb, free_mb from v$asm_diskgroup;

NAME TOTAL_MB FREE_MB
------------------------------ ---------- ----------
ORCL_DG1 496 131
ORCL_DG2 160 107

SQL> create tablespace asm_users datafile '+ORCL_DG2' size 100m;

idle> select name, total_mb, free_mb from v$asm_diskgroup;

NAME TOTAL_MB FREE_MB
------------------------------ ---------- ----------
ORCL_DG1 496 131
ORCL_DG2 160 5

idle> select name, total_mb - free_mb "USED_MB" from v$asm_disk order by 1;

NAME USED_MB
------------------------------ ----------
ORCL_DG2_0000 78
ORCL_DG2_0001 77


alter diskgroup orcl_dg2 add disk 'e:\asmdisks\_file_disk13';
alter diskgroup orcl_dg2 add disk 'e:\asmdisks\_file_disk14';

idle> select name, total_mb - free_mb "USED_MB" from v$asm_disk order by 1;

NAME USED_MB
------------------------------ ----------
ORCL_DG2_0000 43
ORCL_DG2_0001 44
ORCL_DG2_0002 20
ORCL_DG2_0003 52

Wednesday, November 28, 2007

Turn on and off tracing in another session


Turn on:
exec sys.dbms_system.set_ev(34, 20716, 10046, 12, '');

Turn off:
exec sys.dbms_system.set_ev(34, 20716, 10046, 0, '');

dbms_system.set_ev

PROCEDURE SET_EV
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
SI BINARY_INTEGER IN
SE BINARY_INTEGER IN
EV BINARY_INTEGER IN
LE BINARY_INTEGER IN
NM VARCHAR2 IN

si = sid
se = serial#
ev = event = 10046
le = level = 12
nm = name = ''
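
In 10g, the supported way to achieve the same is DBMS_MONITOR; a sketch using the sid/serial# from the example above (waits plus binds corresponds to level 12):

exec dbms_monitor.session_trace_enable(session_id => 34, serial_num => 20716, waits => TRUE, binds => TRUE);
exec dbms_monitor.session_trace_disable(session_id => 34, serial_num => 20716);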

Sunday, November 25, 2007

Usage Scenario - Transporting and Attaching Partitions for Data Warehousing

Goal: Employ the transportable tablespace feature to load monthly sales data into a data warehouse

Ref: This usage scenario is taken from Oracle9i Database Administrator's Guide

1. Suppose every month you would like to load one month's worth of data into
the data warehouse. There is a large fact table in the data warehouse
called sales, which has the following columns:

CREATE TABLE sales (invoice_no NUMBER,
sale_year INT NOT NULL,
sale_month INT NOT NULL,
sale_day INT NOT NULL)
PARTITION BY RANGE (sale_year, sale_month, sale_day)
(partition jan98 VALUES LESS THAN (1998, 2, 1),
partition feb98 VALUES LESS THAN (1998, 3, 1),
partition mar98 VALUES LESS THAN (1998, 4, 1),
partition apr98 VALUES LESS THAN (1998, 5, 1),
partition may98 VALUES LESS THAN (1998, 6, 1),
partition jun98 VALUES LESS THAN (1998, 7, 1));

2. You create a local nonprefixed index:
CREATE INDEX sales_index ON sales(invoice_no) LOCAL;

3. Initially, all partitions are empty, and are in the same default tablespace.
Each month, you want to create one partition and attach it to the
partitioned sales table.

4. Suppose it is July 1998, and you would like to load the July sales data into
the partitioned table.

5. In a staging database, you create a new tablespace, ts_jul. You also create
a table, jul_sales, in that tablespace with exactly the same column types as
the sales table. You can create the table jul_sales using the
CREATE TABLE ... AS SELECT statement.

6. After creating and populating jul_sales,
you can also create an index, jul_sale_index, for the table, indexing the same
column as the local index in the sales table.
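
A sketch of steps 5 and 6 in the staging database (assuming the sales column definitions are available there; load the July rows after creating the empty shell):

CREATE TABLE jul_sales TABLESPACE ts_jul AS
SELECT * FROM sales WHERE 1 = 2;

CREATE INDEX jul_sale_index ON jul_sales (invoice_no) TABLESPACE ts_jul;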

7. After building the index, transport the tablespace ts_jul to the data warehouse.

8. In the data warehouse, add a partition to the sales table for the July sales data.
This also creates another partition for the local nonprefixed index:

ALTER TABLE sales ADD PARTITION jul98 VALUES LESS THAN (1998, 8, 1);

9. Attach the transported table jul_sales to the table sales by exchanging it
with the new partition:

ALTER TABLE sales EXCHANGE PARTITION jul98 WITH TABLE jul_sales
INCLUDING INDEXES
WITHOUT VALIDATION;

Notes: This statement places the July sales data into the new partition jul98,
attaching the new data to the partitioned table. This statement also converts
the index jul_sale_index into a partition of the local index for the sales table.
This statement should return immediately, because it only operates on the structural
information and it simply switches database pointers.
If you know that the data in the new partition does not overlap with data in
previous partitions, you are advised to specify the WITHOUT VALIDATION option.
Otherwise, the statement goes through all the new data in the new partition in
an attempt to validate the range of that partition.

If all partitions of the sales table came from the same staging database
(the staging database is never destroyed), the exchange statement always succeeds.
In general, however, if data in a partitioned table comes from different databases,
it's possible that the exchange operation may fail. For example, if the jan98 partition
of sales did not come from the same staging database, the above exchange operation
can fail, returning the following error:

ORA-19728: data object number conflict between table JUL_SALES and partition
JAN98 in table SALES

To resolve this conflict, move the offending partition by issuing the following statement:

ALTER TABLE sales MOVE PARTITION jan98;

Then retry the exchange operation.

10. After the exchange succeeds, you can safely drop jul_sales and jul_sale_index
(both are now empty). Thus you have successfully loaded the July sales data
into your data warehouse.

Tuesday, November 20, 2007

Display Session Waiting for Locks Using utllockt.sql script

The utllockt.sql script displays, in tree-structured fashion, the sessions in the system that are waiting for locks and the locks that they are waiting for.

The following test is done with Oracle 10gR2 database:


In session 1:

scott@ORCL> update emp set job=lower(job) where empno=1000;

1 row updated.


In session 2:

scott@ORCL> update emp set ename=upper(ename) where empno = 1000;

In session 3:

scott@ORCL> delete from emp where empno=1000;




cd /cygdrive/c/oraclexe/app/oracle/product/10.2.0/server/RDBMS/ADMIN
sqlplus / as sysdba


sys@ORCL> @utllockt.sql

...

WAITING_SESSION LOCK_TYPE MODE_REQUESTED MODE_HELD LOCK_ID1 LOCK_ID2
----------------- ----------------- -------------- ------------ --------- -----------------
151 None
135 Transaction Exclusive Exclusive 524333 13026
147 Transaction Exclusive Exclusive 524333 13026



After commit in session 1, run utllockt.sql again:


WAITING_SESSION LOCK_TYPE MODE_REQUESTED MODE_HELD LOCK_ID1 LOCK_ID2
----------------- ----------------- -------------- -------------- ----------------- -----------------
135 None
147 Transaction Exclusive Exclusive 131072 12916


After commit in session 2

scott@ORCL> delete from emp where empno=1000;

1 row deleted.

scott@ORCL> rollback;

Rollback complete.

scott@ORCL> select * from emp where empno=1000;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- -------------------- ---------- ---------- ----------
1000 DENNISII

Monday, November 19, 2007

Testing ASM on the Windows platform Using blank files



Ref: http://www.idevelopment.info/data/Oracle/DBA_tips/Automatic_Storage_Management/ASM_22.shtml


Environment: Oracle 10gR2 on Windows XP PC

1. Use DBCA to create an ASM instance

During this process, run the following script as asked:

C:\oraclexe\app\oracle\product\10.2.0\server\BIN>localconfig.bat add
Step 1: creating new OCR repository
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'denis', privgrp ''..
Operation successful.
Step 2: creating new CSS service
successfully created local CSS service
successfully added CSS to home

Notes: An ASM service will be automatically created

2. Set Initialization Parameter

C:\> set ORACLE_SID=+ASM
C:\> sqlplus "/ as sysdba"
SQL> alter system set "_asm_allow_only_raw_disks"=false scope=spfile;
SQL> alter system set asm_diskstring='e:\asmdisks\_file*' scope=both;

3. Bounce the ASM Instance

SQL> shutdown
SQL> startup

- verify the parameter
SQL> show parameters asm

NAME TYPE VALUE
------------------------------------ ----------- -----------------------------
_asm_allow_only_raw_disks boolean FALSE
asm_diskgroups string
asm_diskstring string e:\asmdisks\_file*
asm_power_limit integer 1


4. Create New Directory for Device Files

mkdir e:\asmdisks

5. Create Files for ASM Disks

Make four 100MB text files using Perl:

------- creatTextFile.pl -----------------
my $s='0' x 2**20;                 # 1MB string of zeros

open(DF1,">E:/asmdisks/_file_disk1") || die "Cannot create file - $!\n";
open(DF2,">E:/asmdisks/_file_disk2") || die "Cannot create file - $!\n";
open(DF3,">E:/asmdisks/_file_disk3") || die "Cannot create file - $!\n";
open(DF4,">E:/asmdisks/_file_disk4") || die "Cannot create file - $!\n";

# write the 1MB string 100 times into each file (100MB per file)
for (my $i = 1; $i <= 100; $i++) {
    print DF1 $s; print DF2 $s; print DF3 $s; print DF4 $s;
}
close(DF1); close(DF2); close(DF3); close(DF4);

Notes:
We can use asmtool to create files
asmtool -create \\server\share\file

E:\asmdisks>asmtool -create e:\asmdisks\_file_disk10 100


6. Confirm we can discover the 'disk' now

Log into the ASM instance, issue

col path format a30
SELECT group_number, disk_number, mount_status, header_status, state, path
FROM v$asm_disk;

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE PATH
------------ ----------- ------- ------------ -------- ------------------------------
0 0 CLOSED CANDIDATE NORMAL E:\ASMDISKS\_FILE_DISK1
0 3 CLOSED CANDIDATE NORMAL E:\ASMDISKS\_FILE_DISK4
0 2 CLOSED CANDIDATE NORMAL E:\ASMDISKS\_FILE_DISK3
0 1 CLOSED CANDIDATE NORMAL E:\ASMDISKS\_FILE_DISK2



7. Create ASM disk group

Using SQL*Plus, create a disk group with normal redundancy and two failure groups:

set ORACLE_SID=+ASM
sqlplus "/ as sysdba"

SQL> CREATE DISKGROUP orcl_dg1 NORMAL REDUNDANCY
FAILGROUP controller1 DISK 'e:\ASMDISKS\_FILE_DISK1', 'e:\ASMDISKS\_FILE_DISK2'
FAILGROUP controller2 DISK 'e:\ASMDISKS\_FILE_DISK3', 'e:\ASMDISKS\_FILE_DISK4';
Diskgroup created.


sys@ORCL> select group_number, name, total_mb, free_mb, state, type
2 from v$asm_diskgroup;

GROUP_NUMBER NAME TOTAL_MB FREE_MB STATE TYPE
------------ ------------- ---------- ---------- ----------- ------
1 ORCL_DG1 396 290 MOUNTED NORMAL



GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE PATH
------------ ----------- ------- ------------ -------- ------------------------------
1 0 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK1
1 1 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK2
1 2 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK3
1 3 CACHED MEMBER NORMAL E:\ASMDISKS\_FILE_DISK4

8. Use ASM disk group
SQL> create tablespace users2 datafile '+ORCL_DG1' size 100m;


sys@ORCL> select tablespace_name, file_name from dba_data_files order by 1;

TABLESPACE_NAME FILE_NAME
------------------------------ ----------------------------------------
AUTO_TBS E:\ORADATA\ORCL\AUTO_TBS_DATA01.DBF
EXAMPLE C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\OR
ADATA\ORCL\EXAMPLE01.DBF
HISTORY E:\ORADATA\HISTORY01.DBF
MANUAL_TBS E:\ORADATA\ORCL\MANUAL_TBS_DATA01.DBF
PART1 E:\ORADATA\ORCL\PART1_01.DBF
PART2 E:\ORADATA\ORCL\PART2_01.DBF
PART3 E:\ORADATA\ORCL\PART3_01.DBF
PART4 E:\ORADATA\ORCL\PART4_01.DBF
SEQ E:\ORADATA\ORCL\DATAFILE\O1_MF_SEQ_3MKLG
ORW_.DBF

SYSAUX C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\OR
ADATA\ORCL\SYSAUX01.DBF

SYSTEM C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\OR
ADATA\ORCL\SYSTEM01.DBF

UNDOTBS1 C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\OR
ADATA\ORCL\UNDOTBS01.DBF

UNDO_BATCH E:\ORADATA\ORCL\UNDO_DATCH01.DBF
USERS C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\OR
ADATA\ORCL\USERS01.DBF

USERS2 +ORCL_DG1/orcl/datafile/users2.256.63904
9975

Wednesday, November 14, 2007

RMAN spool log to example

I never knew you could do this at the RMAN prompt before:


NTBK-/home/Denis/e_oracle/Practise_Oracle/temp>rman

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Nov 14 11:02:04 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

RMAN> connect target

connected to target database: ORCL (DBID=1151466663)

RMAN> spool log to temp.log;
RMAN> show all;
RMAN> exit;
NTBK-/home/Denis/e_oracle/Practise_Oracle/temp>cat temp.log

Spooling started in log file: temp.log

Recovery Manager10.2.0.1.0

RMAN>
using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\SE

RMAN>

Recovery Manager complete.

Thursday, November 01, 2007

Transaction isolation level - A test

From Oracle 9i Database Concepts Chapter 20, we can read the following statement:

Oracle generates an error when a serializable transaction tries to update or delete data modified by a transaction that commits after the serializable transaction began:

ORA-08177: Cannot serialize access for this transaction

I tested with the following steps to verify the above statement:


(1) In Session 1, issue:

hr@ORCL> select * from t1;

ID NAME
---------- ----------
4 SMITH
1 SUN
2 SUN
3 JIN

(2) Also in Session 1:

hr@ORCL> alter session set isolation_level=serializable;

Session altered.

Note: Oracle's default isolation level is READ COMMITTED
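
The same effect can be had for a single transaction, without altering the session:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;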

(3) Still in Session 1:
hr@ORCL> update t1 set name='SUN' where id=3;

1 row updated.

hr@ORCL> select * from t1;

ID NAME
---------- ----------
4 SMITH
1 SUN
2 SUN
3 SUN

(4) Now in session 2, execute:

hr@ORCL> update t1 set name='SUN' where id=4;

1 row updated.
hr@ORCL> commit;

Commit complete.

hr@ORCL> select * from t1;

ID NAME
---------- ----------
4 SUN
1 SUN
2 SUN
3 JIN

(5) Now back to the session 1, issue:

hr@ORCL> delete from t1 where name='SUN';
delete from t1 where name='SUN'
*
ERROR at line 1:
ORA-08177: can't serialize access for this transaction


hr@ORCL> commit;

Commit complete.

hr@ORCL> select * from t1;

ID NAME
---------- ----------
4 SUN
1 SUN
2 SUN
3 SUN


Note: if the isolation level is the default (read committed), the delete in this step will delete all rows in the table t1.

Monday, October 22, 2007

Definer rights vs Invoker rights

Definer rights and invoker rights are described in the Oracle docs as shown in
http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96624/08_subs.htm#15186

I took the following notes from the above docs to help me understand these concepts:

1. Assume both Scott and Blake have a table called dept.
2. Scott created a procedure as follows:

CREATE PROCEDURE create_dept (
my_deptno NUMBER,
my_dname VARCHAR2,
my_loc VARCHAR2) AS
BEGIN
INSERT INTO dept VALUES (my_deptno, my_dname, my_loc);
END;

3. Scott grants the execute privilege to Blake, e.g.
scott@mydb> grant execute on create_dept to Blake;

4. Blake executes the procedure create_dept, e.g.
blake@mydb> execute create_dept(10, 'sales', 'New York');

5. Blake will create a department 'sales' in the table dept residing in Scott's schema, not in Blake's.

This is due to the definer's rights - the procedure gets executed with the privileges and schema context of the owner.

6. If, in step 2, Scott creates an invoker-rights procedure as follows:

CREATE PROCEDURE create_dept (
my_deptno NUMBER,
my_dname VARCHAR2,
my_loc VARCHAR2) AUTHID CURRENT_USER AS
BEGIN
INSERT INTO dept VALUES (my_deptno, my_dname, my_loc);
END;

7. Then, if Blake repeats step 4, he will create a department 'sales' in the table dept residing in Blake's schema, not in Scott's.

This is due to invoker's rights - the procedure gets executed with the privileges and schema context of the current user.
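
A quick way to verify which mode a procedure was compiled with (a 10g dictionary check; the query is only illustrative):

select object_name, authid from user_procedures where object_name = 'CREATE_DEPT';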

Monday, October 08, 2007

Execute OS command in RMAN

Here are some examples; the key is to use single quotation marks:

RMAN> host 'ls -l';

total 12
-rw-r--r-- 1 Denis None 801 Oct 2 15:24 README
-rw-r--r-- 1 Denis None 176 Sep 23 20:45 flsharea_chk.sql
-rw-r--r-- 1 Denis None 64 Sep 23 13:32 lvl_0_bkup.cmd
-rw-r--r-- 1 Denis None 37 Sep 23 13:42 lvl_0_bkup_nocomp.cmd
-rw-r--r-- 1 Denis None 64 Sep 23 20:02 lvl_1_bkup.cmd
-rwxr-xr-x 1 Denis None 1704 Oct 2 15:14 sqlnet.log
host command complete

RMAN> host 'pwd';

/cygdrive/e/Denis_Files/ORACLE/Practise_Oracle/flash_recv_area_rman10g
host command complete

Sunday, October 07, 2007

RMAN : tablespace point-in-time recovery

I have been asked how to recover a table to a point in the past using RMAN. I responded by saying that in Oracle 10g we can use Flashback technology, but if using RMAN is the only option, we can do tablespace point-in-time recovery (TSPITR). However, I was unable to describe clear conceptual steps for TSPITR, so I did a test based on the documentation.

The test case assumes there is only one table: AUTO_TABLE in the tablespace AUTO_TBS.

The situation could be more complicated than what will be shown below if there are other tables in the tablespace that have been updated after the target time to which the tablespace will be recovered.

ref: http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmtspit002.htm#i1010246
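
For reference, the Flashback route mentioned above handles DML mishaps but cannot undo a TRUNCATE (it is DDL), which is exactly why TSPITR is used here; a sketch of the Flashback form for a DML case:

ALTER TABLE scott.auto_table ENABLE ROW MOVEMENT;
FLASHBACK TABLE scott.auto_table TO TIMESTAMP
TO_TIMESTAMP('07-OCT-2007 19:43:11', 'DD-MON-YYYY HH24:MI:SS');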

1. Information about the auto_table

a. auto_table resides in the AUTO_TBS tablespace

TABLE_NAME TABLESPACE_NAME
------------------------------ ------------------------------
.....
AUTO_TABLE AUTO_TBS


b. Number of rows

scott@ORCL> select count(*) from auto_table;

COUNT(*)
----------
66065

c. Timestamp just before truncate

scott@ORCL> select sysdate from dual;

SYSDATE
--------------------
07-OCT-2007 19:43:11

scott@ORCL> truncate table auto_table;

Table truncated.

The objective is to recover the auto_table to the point before the truncate.

2. Planning and Preparing for TSPITR


2.1 Choosing the Right Target Time for TSPITR
07-OCT-2007 19:43:11


2.2 Determining the Recovery Set: Analyzing Data Relationships

SELECT *
FROM SYS.TS_PITR_CHECK
WHERE (
TS1_NAME IN ('AUTO_TBS')
AND TS2_NAME NOT IN ('AUTO_TBS')
)
OR (
TS1_NAME NOT IN ('AUTO_TBS')
AND TS2_NAME IN ('AUTO_TBS')
);

Note: There are no complications for this case - no dependencies of auto_table on other objects in auto_tbs or other tablespaces

2.3 Identifying and Preserving Objects That Will Be Lost After TSPITR

SELECT OWNER, NAME, TABLESPACE_NAME,
TO_CHAR(CREATION_TIME, 'YYYY-MM-DD:HH24:MI:SS')
FROM TS_PITR_OBJECTS_TO_BE_DROPPED
WHERE TABLESPACE_NAME IN ('AUTO_TBS')
AND CREATION_TIME > TO_DATE('07-OCT-2007 19:43:11','DD-MON-YYYY HH24:MI:SS')
ORDER BY TABLESPACE_NAME, CREATION_TIME;

Note:
You can preserve such objects, once they are identified, by exporting them before TSPITR using an Oracle export utility (Data Pump Export or Original Export) and re-importing them afterwards using the corresponding import utility.

In this case, we don't have any such objects.

3. Performing Basic RMAN TSPITR

Three options:

A. Fully automated TSPITR
B. Customized TSPITR with an automatic auxiliary instance
C. TSPITR with your own auxiliary instance

3.1 Take the tablespace AUTO_TBS offline

sys@ORCL> alter tablespace auto_tbs offline;

Tablespace altered.

3.2 Recover using 'Fully automated TSPITR'

Note: Using an Auxiliary Destination for Automated RMAN TSPITR
e:\auxdest

Start the RMAN client, connecting to the target database and, if applicable, a recovery catalog.

The following is the RMAN command:

RMAN> RECOVER TABLESPACE auto_tbs
2> UNTIL TIME '07-OCT-2007 19:43:11'
3> AUXILIARY DESTINATION 'e:\auxdest'
4> ;


Appendix A shows the full RMAN screen output.

3.3 Backing Up Recovered Tablespaces After TSPITR
RMAN> BACKUP TABLESPACE AUTO_TBS;

3.4 Bring the tablespaces online, as follows:

RMAN> SQL "ALTER TABLESPACE AUTO_TBS ONLINE";


Appendix A:

===========

RMAN> RECOVER TABLESPACE auto_tbs
2> UNTIL TIME '07-OCT-2007 19:43:11'
3> AUXILIARY DESTINATION 'e:\auxdest'
4> ;
Starting recover at 07-OCT-2007 20:10:36
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=158 devtype=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point in time

List of tablespaces expected to have UNDO segments
tablespace SYSTEM
tablespace UNDOTBS1
tablespace UNDO_BATCH

Creating automatic instance, with SID='aFbc'

initialization parameters used for automatic instance:
db_name=ORCL
compatible=10.2.0.1.0
db_block_size=8192
db_files=200
db_unique_name=tspitr_ORCL_aFbc
large_pool_size=1M
shared_pool_size=110M
#No auxiliary parameter file used
db_create_file_dest=e:\auxdest
control_files=e:\auxdest/cntrl_tspitr_ORCL_aFbc.f


starting up automatic instance ORCL

Oracle instance started

Total System Global Area 201326592 bytes

Fixed Size 1248092 bytes
Variable Size 146801828 bytes
Database Buffers 50331648 bytes
Redo Buffers 2945024 bytes
Automatic instance created

contents of Memory Script:
{
# set the until clause
set until time "07-OCT-2007 19:43:11";
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log for tspitr to a resent until time
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
}
executing Memory Script

executing command: SET until clause

Starting restore at 07-OCT-2007 20:12:09
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=36 devtype=DISK

channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece E:\ORAFLASH\ORCL\BACKUPSET\2007_10_07\O1_MF_NCSN0_TAG2007100
LND6K2_.BKP
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=E:\ORAFLASH\ORCL\BACKUPSET\2007_10_07\O1_MF_NCSN0_TAG20071007T173142_3JLND6K2_.BKP tag=TAG2007100
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:05
output filename=E:\AUXDEST\CNTRL_TSPITR_ORCL_AFBC.F
Finished restore at 07-OCT-2007 20:12:48

sql statement: alter database mount clone database

sql statement: alter system archive log current

sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;
released channel: ORA_DISK_1
released channel: ORA_AUX_DISK_1

contents of Memory Script:
{
# generated tablespace point-in-time recovery script
# set the until clause
set until time "07-OCT-2007 19:43:11";
# set an omf destination filename for restore
set newname for clone datafile 1 to new;
# set an omf destination filename for restore
set newname for clone datafile 2 to new;
# set an omf destination filename for restore
set newname for clone datafile 8 to new;
# set an omf destination tempfile
set newname for clone tempfile 1 to new;
# set a destination filename for restore
set newname for datafile 6 to
"E:\ORADATA\ORCL\AUTO_TBS_DATA01.DBF";
# rename all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set plus the auxilliary tablespaces
restore clone datafile 1, 2, 8, 6;
switch clone datafile all;
#online the datafiles restored or flipped
sql clone "alter database datafile 1 online";
#online the datafiles restored or flipped
sql clone "alter database datafile 2 online";
#online the datafiles restored or flipped
sql clone "alter database datafile 8 online";
#online the datafiles restored or flipped
sql clone "alter database datafile 6 online";
# make the controlfile point at the restored datafiles, then recover them
recover clone database tablespace "AUTO_TBS", "SYSTEM", "UNDOTBS1", "UNDO_BATCH" delete archivelog;
alter clone database open resetlogs;
# PLUG HERE the creation of a temporary tablespace if export fails due to lack
# of temporary space.
# For example in Unix these two lines would do that:
#sql clone "create tablespace aux_tspitr_tmp
# datafile ''/tmp/aux_tspitr_tmp.dbf'' size 500K";
}
executing Memory Script
executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed temporary file 1 to E:\AUXDEST\TSPITR_O\DATAFILE\O1_MF_TEMP_%U_.TMP in control file

Starting restore at 07-OCT-2007 20:16:10
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: sid=39 devtype=DISK

channel ORA_AUX_DISK_1: starting datafile backupset restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to E:\AUXDEST\TSPITR_O\DATAFILE\O1_MF_SYSTEM_%U_.DBF
restoring datafile 00002 to E:\AUXDEST\TSPITR_O\DATAFILE\O1_MF_UNDOTBS1_%U_.DBF
restoring datafile 00008 to E:\AUXDEST\TSPITR_O\DATAFILE\O1_MF_UNDO_BAT_%U_.DBF
restoring datafile 00006 to E:\ORADATA\ORCL\AUTO_TBS_DATA01.DBF
channel ORA_AUX_DISK_1: reading from backup piece E:\ORAFLASH\ORCL\BACKUPSET\2007_10_07\O1_MF_NNND0_TAG20071007T173142_3J
LN26F5_.BKP
channel ORA_AUX_DISK_1: restored backup piece 1
piece handle=E:\ORAFLASH\ORCL\BACKUPSET\2007_10_07\O1_MF_NNND0_TAG20071007T173142_3JLN26F5_.BKP tag=TAG20071007T173142
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:06:26
Finished restore at 07-OCT-2007 20:23:10

datafile 1 switched to datafile copy
input datafile copy recid=9 stamp=635372617 filename=E:\AUXDEST\TSPITR_O\DATAFILE\O1_MF_SYSTEM_3JLXQD89_.DBF
datafile 2 switched to datafile copy
input datafile copy recid=10 stamp=635372617 filename=E:\AUXDEST\TSPITR_O\DATAFILE\O1_MF_UNDOTBS1_3JLXQDRW_.DBF
datafile 8 switched to datafile copy
input datafile copy recid=11 stamp=635372618 filename=E:\AUXDEST\TSPITR_O\DATAFILE\O1_MF_UNDO_BAT_3JLXQF00_.DBF

sql statement: alter database datafile 1 online

sql statement: alter database datafile 2 online

sql statement: alter database datafile 8 online

sql statement: alter database datafile 6 online

Starting recover at 07-OCT-2007 20:24:23
using channel ORA_AUX_DISK_1
using channel ORA_AUX_DISK_1

starting media recovery

archive log thread 1 sequence 9 is already on disk as file E:\ORAFLASH\ORCL\ARCHIVELOG\2007_10_07\O1_MF_1_9_3JLOS56T_.ARC

archive log thread 1 sequence 10 is already on disk as file E:\ORAFLASH\ORCL\ARCHIVELOG\2007_10_07\O1_MF_1_10_3JLXKK6T_.ARC
archive log filename=E:\ORAFLASH\ORCL\ARCHIVELOG\2007_10_07\O1_MF_1_9_3JLOS56T_.ARC thread=1 sequence=9
archive log filename=E:\ORAFLASH\ORCL\ARCHIVELOG\2007_10_07\O1_MF_1_10_3JLXKK6T_.ARC thread=1 sequence=10
media recovery complete, elapsed time: 00:00:13
Finished recover at 07-OCT-2007 20:25:37

database opened

contents of Memory Script:
{
# export the tablespaces in the recovery set
host 'exp userid =\"/@(DESCRIPTION=(ADDRESS=(PROTOCOL=beq)(PROGRAM=oracle)(ARGV0=oracleaFbc)(ARGS=^'(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))^')(ENVS=^'ORACLE_SID=aFbc^'))(CONNECT_DATA=(SID=aFbc))) as sysdba\" point_in_time_recover=y
tablespaces=
AUTO_TBS file=
tspitr_a.dmp';
# shutdown clone before import
shutdown clone immediate
# import the tablespaces in the recovery set
host 'imp userid =\"/@ as sysdba\" point_in_time_recover=y file=
tspitr_a.dmp';
# online/offline the tablespace imported
sql "alter tablespace AUTO_TBS online";
sql "alter tablespace AUTO_TBS offline";
# enable autobackups in case user does open resetlogs from RMAN after TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(TRUE); end;';
}
executing Memory Script


Export: Release 10.2.0.1.0 - Production on Sun Oct 7 20:26:49 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Note: table data (rows) will not be exported

About to export Tablespace Point-in-time Recovery objects...
For tablespace AUTO_TBS ...
. exporting cluster definitions
. exporting table definitions
. . exporting table AUTO_TABLE
. exporting referential integrity constraints
. exporting triggers
. end point-in-time recovery
Export terminated successfully without warnings.
host command complete

database closed
database dismounted
Oracle instance shut down


Import: Release 10.2.0.1.0 - Production on Sun Oct 7 20:30:59 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
About to import Tablespace Point-in-time Recovery objects...
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing SYS's objects into SYS
. importing SCOTT's objects into SCOTT
. . importing table "AUTO_TABLE"
. importing SYS's objects into SYS
Import terminated successfully without warnings.
host command complete

sql statement: alter tablespace AUTO_TBS online

sql statement: alter tablespace AUTO_TBS offline

sql statement: begin dbms_backup_restore.AutoBackupFlag(TRUE); end;

Removing automatic instance
Automatic instance removed
auxiliary instance file E:\AUXDEST\CNTRL_TSPITR_ORCL_AFBC.F deleted
auxiliary instance file E:\AUXDEST\TSPITR_O\DATAFILE\O1_MF_SYSTEM_3JLXQD89_.DBF deleted
auxiliary instance file E:\AUXDEST\TSPITR_O\DATAFILE\O1_MF_UNDOTBS1_3JLXQDRW_.DBF deleted
auxiliary instance file E:\AUXDEST\TSPITR_O\DATAFILE\O1_MF_UNDO_BAT_3JLXQF00_.DBF deleted
auxiliary instance file E:\AUXDEST\TSPITR_O\DATAFILE\O1_MF_TEMP_3JLY97MK_.TMP deleted
auxiliary instance file E:\AUXDEST\TSPITR_O\ONLINELOG\O1_MF_1_3JLY8G4W_.LOG deleted
auxiliary instance file E:\AUXDEST\TSPITR_O\ONLINELOG\O1_MF_2_3JLY8OX9_.LOG deleted
auxiliary instance file E:\AUXDEST\TSPITR_O\ONLINELOG\O1_MF_3_3JLY8WJM_.LOG deleted
Finished recover at 07-OCT-2007 20:32:10

RMAN>
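
Note that the TSPITR run leaves AUTO_TBS offline (see the "alter tablespace AUTO_TBS offline" statement at the end of the memory script). Backups of the tablespace taken before the TSPITR can no longer be used to recover it past the recovery point, so the usual follow-up is to take a fresh backup and only then bring it online. A minimal sketch of those two steps:

RMAN> backup tablespace auto_tbs;
RMAN> sql 'alter tablespace auto_tbs online';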

Tuesday, October 02, 2007

OCP Oracle 10g Admin II - Ch 6: Recovering from User Errors

Chapter 6: Recovering from User Errors

Review Questions

1. Which underlying database technology is used by Flashback Drop, Flashback Table, and Flashback Versions Query to recover data?
A. Redo logs
B. Rollback segments
C. Undo data
D. Archive logs
---
Ans: C.
All the Flashback technologies, with the exception of Flashback Database,
use undo data to recover objects.

2. Which of the following statements is true regarding the Recycle Bin? (Choose all that apply.)
A. The Recycle Bin is a physical storage area of flashback dropped objects.
B. The Recycle Bin is a logical container area of flashback dropped objects.
C. The objects in the Recycle Bin are stored in the UNDO tablespace.
D. The objects in the Recycle Bin are stored in the tablespace they were created in.
---
Ans: B, D.
The Recycle Bin is a logical container of flashback dropped objects. The objects in the Recycle Bin are stored in the tablespace they were created in.
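
This is easy to see in a quick test: drop a table and its segment stays in the tablespace it was created in, only renamed to a BIN$ name. A minimal sketch (the table and tablespace names here are just for illustration):

SQL> create table tq (x number) tablespace users;
SQL> drop table tq;
SQL> select object_name, original_name, ts_name from recyclebin;
-- OBJECT_NAME comes back as a BIN$...==$0 name, with TS_NAME still USERS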

3. What actions does the Flashback Drop process perform?
A. Back up table only
B. Back up table and indexes only
C. Back up table and referential constraints only
D. Back up table and dependent objects
---
Ans: D.
The Flashback Drop process is responsible for backing up tables and their dependent objects.

4. Which activity can occur with the Recycle Bin?
A. All index-organized tables are protected by the Recycle Bin.
B. System- and dictionary-managed tablespaces are stored in the Recycle Bin.
C. Dependent objects of stored tables, including referential constraints, are stored in the Recycle Bin.
D. Data Manipulation Language (DML) and Data Definition Language (DDL) can be run against objects in the Recycle Bin.
E. None of the above.
---
Ans: E.
Only non-partitioned index-organized tables are protected by the Recycle Bin.
Only objects in non-system, locally managed tablespaces go into the Recycle Bin.
Referential constraints are not stored in the Recycle Bin, and neither DML nor
DDL can be performed against objects in the Recycle Bin.

5. One method of dropping objects and bypassing the Recycle Bin is to perform which command?
A. DROP USER user CASCADE
B. DROP TABLE
C. DROP TABLE INCLUDING CONTENTS
D. DROP USER user
--
Ans: A.
The command DROP USER user CASCADE drops the user and the database objects
without recording a backup in the Recycle Bin. Objects in the Recycle Bin owned by
the user will be removed from the Recycle Bin.

6. Which command is responsible for removing the objects in multiple users from the Recycle Bin?
A. PURGE RECYCLEBIN
B. PURGE TABLESPACE user
C. PURGE DBA_RECYCLEBIN
D. PURGE TABLES user
---
Ans: C.
The PURGE DBA_RECYCLEBIN command purges the complete Recycle Bin for all users.
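
For reference, the main PURGE variants look like this (the tablespace and user names are only examples; PURGE DBA_RECYCLEBIN needs SYSDBA-level privileges):

SQL> purge table tq;                      -- one dropped table
SQL> purge recyclebin;                    -- the current user's Recycle Bin
SQL> purge tablespace users user scott;   -- scott's dropped objects in USERS
SQL> purge dba_recyclebin;                -- the whole Recycle Bin, all users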

7. What is the naming convention of a Recycle Bin object?
A. BIN$globalUID$version
B. BIN$global$UIDversion
C. BIN$globalUIDversion
D. BINglobalUIDversion
---
Ans: A.
The naming convention of objects in the Recycle Bin consists of a globalUID and
version number assigned by the database with BIN$ prefixed, and a $ between the
globalUID and the version number. The formatting looks like
BIN$globalUID$version.
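
Because the data stays under the BIN$ name, undoing the drop is a single statement (a sketch; the RENAME TO form is handy when the original name has been reused):

SQL> flashback table tq to before drop;
SQL> flashback table tq to before drop rename to tq_restored;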

8. What two methods can be used to view the Recycle Bin?
A. Run the SHOW RECYCLEBIN command.
B. Query the view DBA_RECYCLEBIN.
C. Query the view V$RECYCLEBIN.
D. Query the view RECYCLEBIN.
----
Ans: A, D.
The SHOW RECYCLEBIN command and querying the RECYCLEBIN view are two methods of viewing the contents of the Recycle Bin.

Note: the answer should be A, B. RECYCLEBIN is a public synonym for USER_RECYCLEBIN, as the following query shows:

select owner, synonym_name, table_owner, table_name
from dba_synonyms where synonym_name = 'RECYCLEBIN';

OWNER      SYNONYM_NAME     TABLE_OWNER    TABLE_NAME
---------- ---------------- -------------- --------------------
PUBLIC     RECYCLEBIN       SYS            USER_RECYCLEBIN


9. What best describes the space pressure in the Recycle Bin?
A. No free extents in the Recycle Bin, and objects being removed from the Recycle Bin to free up extents for non-Recycle Bin objects.
B. No free extents in a tablespace, and objects being removed from the Recycle Bin to free up extents for non-Recycle Bin objects.
C. No free extents in a tablespace, and objects being removed from the Recycle Bin to free up extents in a tablespace for non-Recycle Bin objects on a first in, first out (FIFO) basis.
D. No free extents in the Recycle Bin tablespace and objects being removed from Recycle Bin tablespace to free up extents for non- Recycle Bin objects on a first in, first out (FIFO) basis.
---
Ans: C.
The Recycle Bin is not stored in a Recycle Bin tablespace. It is a logical container pointing to objects in the tablespace where the objects were originally created. Objects are removed from the Recycle Bin when there is a need for available extents in a non-system tablespace. The objects are removed on a first in, first out (FIFO) basis.


10. If a tablespace is configured for AUTO EXTEND, what will occur when there are no free extents and there are objects in the AUTO EXTEND tablespace?
A. The tablespace will autoextend to make more free space and Oracle
will not remove Recycle Bin objects associated with the tablespace.
B. The tablespace will not autoextend, and objects in the Recycle Bin will
be removed to make space in the tablespace.
C. The tablespace will autoextend to make more free space and Oracle
will remove Recycle Bin objects associated with the tablespace.
D. The tablespace will not autoextend, and objects in the Recycle Bin will
be compressed to make space in the tablespace.
---
Ans: B.
A tablespace with AUTO EXTEND enabled will not autoextend to make space for the Recycle Bin objects. The objects will be removed from the Recycle Bin on the standard first in, first out (FIFO) basis.

11. Which of the following statements best describes Flashback Versions Query?
A. Flashback Versions Query is a query to perform diagnostics on version
changes in row data on rows that existed between the times the query
was executed to a determined point-in-time in the past.
B. Flashback Versions Query is a method of querying all version changes
on rows that existed between the times the query was executed to a
determined point-in-time in the past.
C. Flashback Versions Query is a query to perform diagnostics on table
changes in row data on rows that existed between the times the query
was executed to a determined point-in-time in the past.
D. Flashback Versions Query is a method of querying all version changes
on rows that existed between the times the query was executed to a
determined point-in-time in the future.
---
Ans: B.
Flashback Versions Query is a method of querying all version changes on rows. The rows must exist from the time the query was executed back to a point-in-time in the past. The query will not display data committed after the query was executed.

12. What view can be used to query diagnostic information about transactional changes in the database?
A. FLASHBACK_TRANSACTION_QUERY
B. DBA_TRANSACTION_QUERY
C. V$TRANSACTION_QUERY
D. V$FLASHBACK_TRANSACTION_QUERY
----
Ans: A.
The view FLASHBACK_TRANSACTION_QUERY is used as a diagnostic tool to identify version information about transactional changes to the database. This view can be used to view the DML statements that were executed against a row in a specific table.

13. What are the methods of performing a Flashback Versions Query? (Choose all that apply.)
A. Flashback Versions Query can be performed by minimum and maximum SCN value.
B. Flashback Versions Query can be performed by minimum and maximum sequence number.
C. Flashback Versions Query can be performed by starting and ending timestamp.
D. Flashback Versions Query can be performed by minimum and maximum undo value.
----
Ans: A, C.
The Flashback Versions Query can be performed by either MINVALUE and MAXVALUE SCN or starting and ending TO_TIMESTAMP value.
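
Both forms in one sketch, against the test table T (the one-hour window is arbitrary, and the lower bound must still be covered by available undo):

SQL> select id from t
     versions between timestamp systimestamp - interval '1' hour and systimestamp;

SQL> select id from t
     versions between scn minvalue and maxvalue;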

14. Which of the following statements is true regarding the VERSIONS BETWEEN clause?
A. The VERSIONS BETWEEN clause only supports SCN.
B. The VERSIONS BETWEEN clause only supports log sequences.
C. The VERSIONS BETWEEN clause cannot produce versions past modifications to the table structure.
D. The VERSIONS BETWEEN clause can produce versions past Data Definition Language (DDL) changes to an object.
---
Ans: C.
The VERSIONS BETWEEN clause of the Flashback Versions Query cannot query past table modifications or DDL changes to a table.

15. Which pseudocolumn do you use to identify a unique row in a Flashback Versions Query?
A. VERSIONS_XID
B. VERSIONS_OPERATION
C. VERSIONS_ENDTIME
D. VERSIONS_STARTTIME
----
Ans: A.
VERSIONS_XID uniquely identifies a row in the Flashback Versions Query. The other pseudocolumns can narrow down the selection criteria and may identify multiple rows in a timeframe or SCN range.
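
All the version pseudocolumns can be selected together; VERSIONS_XID is the one that ties each row version back to its transaction (a sketch against the same table T):

SQL> select versions_xid, versions_operation,
            versions_starttime, versions_endtime, id
     from t versions between scn minvalue and maxvalue
     order by versions_starttime;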

16. Which of the following statements are true regarding the VERSIONS BETWEEN clause? (Choose all that apply.)
A. The VERSIONS BETWEEN clause may be used in DML statements.
B. The VERSIONS BETWEEN clause may be used in DDL statements.
C. The VERSIONS BETWEEN clause may be used to query past DDL changes to tables.
D. The VERSIONS BETWEEN clause may not be used to query past DML statements to tables.
----
Ans: A, B.
The VERSIONS BETWEEN clause can be used in DML and DDL statements to identify data.


17. Which of the following statements is true regarding implementing a Flashback Table recovery?
A. SCN is never needed to perform a Flashback Table recovery.
B. ROW MOVEMENT must be enabled to perform a Flashback Table recovery.
C. Only one table may be recovered to perform a Flashback Table recovery.
D. Flashback Table recovery does not use undo data to perform a Flashback Table recovery.
---
Ans: B.
The command ALTER TABLE table_name ENABLE ROW MOVEMENT must be enabled to allow Flashback Table recovery.
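
The two statements together, with an arbitrary 15-minute window (T is just the test table):

SQL> alter table t enable row movement;
SQL> flashback table t to timestamp systimestamp - interval '15' minute;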

18. What happens to enabled triggers on a table after a FLASHBACK TABLE command is performed? (Choose all that apply.)
A. The triggers are disabled by default.
B. The triggers are enabled by default.
C. Disabled triggers remain disabled with the default FLASHBACK TABLE command.
D. All triggers are enabled if the ENABLE TRIGGER clause is used.
---
Ans: A, C.
The default action for the FLASHBACK TABLE command is that the triggers will be disabled. If you disable the triggers on a table, the triggers will stay disabled after the FLASHBACK TABLE command as well. Also, if you use the ENABLE TRIGGER clause, all enabled triggers on the table prior to the Flashback Table operation will be enabled and all disabled triggers will be disabled.
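
The clause in question looks like this (the SCN value is illustrative):

SQL> flashback table t to scn 1234567 enable triggers;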

19. What method can you use to identify a DML operation and the SQL statement
that has been performed against a row in a specific table for a schema owner?
(Choose all that apply.)
A. Query DBA_TRANSACTION_QUERY for TABLE_NAME, OPERATION, and
UNDO_SQL. Limit rows by START_SCN and TABLE_OWNER.
B. Query FLASHBACK_TRANSACTION_QUERY for TABLE_NAME, OPERATION,
and UNDO_SQL. Limit rows by START_SCN and TABLE_OWNER.
C. Query FLASHBACK_TRANSACTION_QUERY for TABLE_NAME, OPERATION,
and UNDO_SQL. Limit rows by START_TIMESTAMP and TABLE_OWNER.
D. Query DBA_TRANSACTION_QUERY for TABLE_NAME, OPERATION, and
UNDO_SQL. Limit rows by START_TIMESTAMP and TABLE_OWNER.
----
Ans: B, C.
The proper method is to query the FLASHBACK_TRANSACTION_QUERY view. The TABLE_NAME, OPERATION, and UNDO_SQL columns should be queried to display the transactional information, which should then be narrowed down by START_TIMESTAMP or START_SCN and further by TABLE_OWNER. Alternatively, the XID, if known, can be queried to identify the exact transaction, but that was not an option in this example.
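
A query along those lines might look like this (the owner, table, and SCN are illustrative; the SELECT ANY TRANSACTION privilege is needed to see the view's contents):

SQL> select operation, undo_sql
     from flashback_transaction_query
     where table_owner = 'SCOTT'
       and table_name = 'T'
       and start_scn >= 1234567;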

20. How can you protect the amount of time you can query information from the Flashback Transaction Query?
A. Add UNDO GUARANTEE to the UNDO tablespace.
B. Add RETENTION GUARANTEE to the UNDO tablespace.
C. Add RETENTION GUARANTEE to the Recycle Bin logical storage container.
D. Add UNDO GUARANTEE to the Recycle Bin logical storage container.
---
Ans: B.
Adding RETENTION GUARANTEE to the UNDO tablespace or during database creation to the UNDO tablespace will protect unexpired undo data in the UNDO tablespace. This is where the Flashback Transaction Query retrieves information about transaction changes to rows in the database.
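
The statement itself is a one-liner (UNDOTBS1 is just the common default undo tablespace name):

SQL> alter tablespace undotbs1 retention guarantee;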

OCP Oracle 10g Admin II - Ch 5: Understanding the Flashback Database

Review Questions
1. What type of recovery is the Flashback Database best suited for? (Choose all that apply.)
A. User error
B. Physical corruption
C. Logical corruption
D. Media failure
---
Ans: A, C.
A Flashback Database recovery is best suited for a user error such as a truncated table or a logical corruption like an incomplete batch transaction affecting many tables. Media recovery situations cannot be performed with the Flashback Database recovery method.

2. Flashback Database recovery can recover from which of the following failure scenarios?
A. Loss of control file
B. Dropping a tablespace through RESETLOGS
C. A user error that resized datafiles to a smaller size
D. A large truncated table or group of tables
--
Ans: D.
A Flashback Database recovery can recover a large truncated table or group of tables.

3. What new background process is responsible for writing before block images
and recovering from the Flashback Database log?
A. RWVR
B. RVWR
C. RWRV
D. RVRW
---
Ans: B.
The RVWR process is responsible for writing the "before" image information to the Flashback Database log. The Flashback Database log is read to perform the Flashback Database recovery.

4. What are the benefits of the flash recovery area in the recovery and backup process?
A. Recovery efficiency is improved because all files are stored on tape media for fast access.
B. Recovery efficiency is improved because the files are stored in multiple locations.
C. Recovery efficiency is improved because the files are stored in one location on tape.
D. Recovery efficiency is improved because the files are stored in one location on disk.
---
Ans: D.
The flash recovery area is a centralized storage area for backups on disk. This allows for a more efficient recovery process because the required files are in one location and are stored on disk instead of tape.

5. Where is information about the status of the flash recovery area displayed?
(Choose all that apply.)
A. Alert log
B. Background trace files
C. V$OUTSTANDING_ALERTS
D. DBA_OUTSTANDING_ALERTS
----
Ans: A, D.
The alert log reports space usage and other information about the flash recovery area. The DBA_OUTSTANDING_ALERTS view also shows the state of the flash recovery area.


6. How is the size of the flash recovery area determined? (Choose all that apply.)
A. The size is automatically allocated at 2 gigabytes.
B. Using the ALTER SYSTEM command to dynamically set the size.
C. With the initialization parameter DB_RECOVERY_FILE_DEST_SIZE.
D. Using the ALTER TABLESPACE command.
---
Ans: B, C.
The size of the flash recovery area can be set either with the initialization
parameter DB_RECOVERY_FILE_DEST_SIZE or dynamically with the ALTER SYSTEM command.
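
For example (the size and path are arbitrary; DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST):

SQL> alter system set db_recovery_file_dest_size = 4g;
SQL> alter system set db_recovery_file_dest = 'E:\oraflash';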

7. What type of backup commands can the flash recovery area be used for?
(Choose all that apply.)
A. BACKUP COPY
B. BACKUP IMAGE
C. BACKUP
D. BACKUPSET
------
Ans: A, C.
The flash recovery area supports both the BACKUP and BACKUP COPY commands, which perform backup sets and image copies.

8. The flash recovery area space utilization and stored files can be identified by what method?
A. DBA_OUTSTANDING_ALERTS
B. V$OUTSTANDING_ALERTS
C. V$RECOVERY_FILE_DEST
D. DBA_RECOVERY_FILE_DEST
---
Ans: C.
The dynamic view V$RECOVERY_FILE_DEST displays the space utilization and the
number of files that make up the flash recovery area.
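
A typical look at it (sketch):

SQL> select name, space_limit, space_used, space_reclaimable, number_of_files
     from v$recovery_file_dest;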

9. What parameter determines the length of time that the Flashback Database will
store "before" images that can be used in the recovery process?
A. DB_FLASHBACK_RETENTION_POLICY
B. DB_FLASHBACK_RETENTION_TIME
C. DB_FLASHBACK_RETENTION_STORE
D. DB_FLASHBACK_RETENTION_TARGET
---
Ans: D.
The DB_FLASHBACK_RETENTION_TARGET parameter determines how far back in time the database can be flashed back.

10. How is the DB_FLASHBACK_RETENTION_TARGET parameter measured?
A. By SCN
B. By redo log sequences
C. By time in minutes
D. By redo log sequence and threads
----
Ans: C.
DB_FLASHBACK_RETENTION_TARGET is a parameter that is measured in minutes. This value determines how many minutes' worth of flashback data the database retains before the data can be overwritten.
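
For example, to keep roughly a day of flashback data (1440 minutes, which is also the default):

SQL> alter system set db_flashback_retention_target = 1440;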

11. To enable the Flashback Database, what must be done to the database?
(Choose all that apply.)
A. It must be mounted.
B. It must be opened with RESETLOGS.
C. The flash recovery area must be created.
D. The database must be in ARCHIVELOG mode.
----
Ans: A, C, D.
To enable the Flashback Database, the flash recovery area must be created. The database must be mounted but not opened to turn on the Flashback Database. The database must also be in ARCHIVELOG mode.
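
The whole sequence, assuming the flash recovery area parameters are already set:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;     -- only if not already in ARCHIVELOG mode
SQL> alter database flashback on;
SQL> alter database open;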

12. When using the Flashback Database in a recovery situation, what information would be useful to know? (Choose all that apply.)
A. Information about the smallest SCN number that is stored in the Flashback Database log
B. Information about the earliest timestamp that is stored in the Flashback Database log
C. Information about the greatest SCN number that is stored in the Flashback Database log
D. Information about the latest timestamp that is stored in the Flashback Database log
---
Ans: A, B.
The earliest timestamp and smallest SCN will tell you how far back you can recover the database. These values can be queried by the V$FLASHBACK_DATABASE_LOG dynamic view.
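
Both values come from one query:

SQL> select oldest_flashback_scn, oldest_flashback_time
     from v$flashback_database_log;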

13. How can you determine if the Flashback Database is turned on?
A. Query the DBA_FLASHBACK_DATABASE view.
B. Query the V$DATABASE dynamic view.
C. Check the initialization parameters.
D. Check the alert log.
---
Ans: B.
The V$DATABASE dynamic view has a new column called FLASHBACK_ON, which contains a value of YES or NO.
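
That is:

SQL> select flashback_on from v$database;   -- returns YES or NO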


14. Which of the following views can determine how much space you may need in the future in the Flashback Database logs?
A. V$DATABASE
B. V$FLASHBACK_DATABASE_STAT
C. V$FLASHBACK_DATABASE_LOG
D. DBA_FLASHBACK_LOG
---
Ans: B.
The V$FLASHBACK_DATABASE_STAT dynamic view shows the daily growth and utilization of the Flashback Database log. You can match daily activities to the daily utilization of the Flashback Database log.

15. What is the default size of a redo log file created in the flash recovery area?
A. 100MB
B. 150MB
C. 10MB
D. 50MB
---
Ans: A.
The default size of a redo log file created in the flash recovery area is 100MB.

16. Which initialization parameter will create archive logs to the flash recovery area?
A. ARCHIVE_LOG_DEST
B. ARCHIVE_DUPLEX_DEST
C. ARCHIVE_LOG_DEST_n
D. LOG_ARCHIVE_DEST_n
---
Ans: D.
LOG_ARCHIVE_DEST_n is the only initialization parameter that will create or write archive logs to the flash recovery area.
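
For example, to send archive logs to the flash recovery area (the destination number is arbitrary):

SQL> alter system set log_archive_dest_1 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST';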

17. Which database files are permanently stored in the flash recovery area?
(Choose all that apply.)
A. Datafiles
B. RMAN files
C. Control files
D. Current online redo logs
E. Archive logs
-----
Ans: C, D.
Control files and current online redo logs are considered permanent files in the flash recovery area. These files are not made obsolete and deleted, even when backed up to tape.

18. Which files will not be backed up in the flash recovery area when you're using
the BACKUP RECOVERY AREA and BACKUP RECOVERY FILES commands?
(Choose all that apply.)
A. Control files
B. Redo logs
C. Datafiles
D. Permanent files
E. Flashback logs
----
Ans: A, B, D, E.
The flash recovery area will not back up redo log files, control files, or
flashback logs with the BACKUP RECOVERY AREA and BACKUP RECOVERY FILES
commands. Permanent files are considered to be current online redo logs and control
files.

19. What is responsible for applying the "before" images to the database during a Flashback Database recovery?
A. LGWR
B. SMON
C. DBWR
D. RVWR
---
Ans: D.
The RVWR process is responsible for writing the "before" images to the
Flashback Database log. This process is also responsible for applying these to the
database during a recovery.

20. What administrative database activity cannot be undone with the Flashback Database recovery?
A. Dropped table
B. Dropped user
C. Resized datafiles to smaller size
D. Dropped tablespace
--
Ans: C.
Resizing of a tablespace or datafiles to a smaller size cannot be undone with Flashback Database recovery.