Tuesday, May 29, 2007

OCA 10g Ch10 - Implementing Database Backups


Review Questions

1. Which of the following is the most serious type of redo log file
failure?
A. The loss of an entire redo log file group, but no loss in any other
group
B. The loss of one member of each redo log file group
C. The failure of the ARC0 background process
D. The failure of the LGWR background process
---
Ans: A.
Loss of an entire redo log file group can result in the loss of committed
transactions that have not yet been written to the datafiles. Losing all
members of a redo log file group except one does not affect database
operation and does not result in data loss; a message is simply placed in
the alert log file. The failure of LGWR or ARC0 causes an instance
failure, but you do not lose any committed transaction data.

2. Which of the following tools or options does not ensure that either
no committed transactions
are lost or the MTBF is minimized?
A. Streams
B. Real Application Clusters
C. LogMiner
D. Archived redo logs
E. Standby databases
---
Ans: C.
The LogMiner tool helps you see which transactions have occurred in the
database along
with the DML to reverse the changes, but it does not minimize the MTBF
or reduce the possibility
of committed transaction loss.

3. When the database is in ARCHIVELOG mode, database recovery is
possible up to which event or time?
A. The last redo log file switch
B. The last checkpoint position
C. The last commit
D. The last incremental backup using RMAN
---
Ans: C.
In ARCHIVELOG mode, recovery of the database is possible up to the last
COMMIT statement;
in other words, no uncommitted transactions are lost in ARCHIVELOG mode.

4. From the following, select the true statement regarding image copies
and backup sets.
A. An image copy stores one datafile per image copy, and a backup set
can store all datafiles
in a single file.
B. An image copy stores one datafile per image copy, and a backup set
consists of one file per
datafile backed up.
C. Both image copies and backup sets use a single file to store all
objects to be backed up.
D. A backup set stores each datafile in its own backup file, but an
image copy places all datafiles
into a single output file.
---
Ans: A.
In addition to storing multiple datafiles in a single output file,
backup sets do not contain
unused blocks and can be compressed.


5. The option on the EM Database Control backup scheduling options
screen that allows you to
refresh an image copy on disk with an incremental backup is known as
which new RMAN feature?
A. Incrementally updated backups
B. Incremental level 0 backups
C. Compressed image copy refresh
D. Compressed incremental backups
---
Ans: A.
Incrementally updated backups save time during a recovery operation
because fewer incremental
backups need to be applied to the restored image copy.

6. When should the DBA make a trace copy of the control file using ALTER
DATABASE BACKUP
CONTROLFILE TO TRACE?
A. After every backup
B. After multiplexing the control files
C. Whenever restarting the instance
D. Whenever the physical structure of the database changes
---
Ans: D.
In the rare event that all multiplexed copies of the control file are
lost, having a trace copy
of the control file reduces the possibility of data loss and reduces
downtime during a recovery
operation.
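As a sketch, the trace backup is taken immediately after a structural change such as adding a datafile or a redo log group:

```sql
-- Writes a readable CREATE CONTROLFILE script to USER_DUMP_DEST
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
```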

7. Which of the following is not a step in configuring your database to
archive redo log files?
A. Place the database in ARCHIVELOG mode.
B. Multiplex the online redo log files.
C. Specify a destination for archived redo log files.
D. Specify a naming convention for your archived redo log files.
---
Ans: B.
Although it is recommended that you multiplex your online redo log
files, setting up the archival
of your redo log files is not required.
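The other three steps can be sketched as follows; the destination path here is only an example:

```sql
-- Specify a destination for the archived redo log files
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u01/arch' SCOPE=BOTH;
-- A naming convention is set via LOG_ARCHIVE_FORMAT (see question 20),
-- and the database is then placed in ARCHIVELOG mode (see questions 18 and 19).
```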


8. Why are online backups known as inconsistent backups?
A. Because not all control files are synchronized to the same SCN until
the database is shut down
B. Because both committed and uncommitted transactions are included in a
backup when the
database is online
C. Because a database failure while an online backup is in progress can
leave the database in an
inconsistent state
D. Because online backups make copies of datafiles while they are not
consistent with the
control files
---
Ans: D.
During an online backup, even if all datafiles are backed up at the same
time, they are rarely,
if ever, in synch with the control file.

9. In the Disk Settings section of the EM Database Control Configure
Backup Settings page, which
of the following backup settings is not configurable?
A. Disk Backup Type
B. Control File Autobackup Format
C. Disk Backup Location
D. Parallelism
---
Ans: B.
Settings such as the control file autobackup filename format and the
snapshot control file destination
filename must be configured using the RMAN command-line interface.

10. Which of the following initialization parameters specifies the
location where the control file
trace backup is sent?
A. USER_DUMP_DEST
B. BACKGROUND_DUMP_DEST
C. LOG_ARCHIVE_DEST
D. CORE_DUMP_DEST
---
Ans: A.
The trace backup is created in the location specified by USER_DUMP_DEST,
and its format is
sid_ora_pid.trc.

11. Which of the following pieces of information is not available in the
control file?
A. Instance name
B. Database name
C. Tablespace names
D. Log sequence number
---
Ans: A.
The instance name is not in the control file. The control file has
information about the physical
database structure.

12. Which data dictionary view shows that the database is in ARCHIVELOG
mode?
A. V$INSTANCE
B. V$LOG
C. V$DATABASE
D. V$THREAD
---
Ans: C.
The V$DATABASE view shows whether the database is in ARCHIVELOG mode or
in
NOARCHIVELOG mode.
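For example, you can check the current log mode from SQL*Plus:

```sql
SELECT log_mode FROM v$database;
-- Returns ARCHIVELOG or NOARCHIVELOG
```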

13. Which file records all changes made to the database and is used only
when recovering
an instance?
A. Archive log file
B. Redo log file
C. Control file
D. Alert log file
---
Ans: B.
The redo log file records all changes made to the database. The LGWR
process writes the redo
log buffer entries to the redo log files. These entries are used to roll
forward, or to update, the datafiles
during an instance recovery. Archive log files are used for media
recovery.

14. Which initialization parameter contains the value used as the
default for archived log file
destination 10?
A. LOG_ARCHIVE_DEST
B. STANDBY_ARCHIVE_DEST
C. LOG_ARCHIVE_DUPLEX_DEST
D. DB_RECOVERY_FILE_DEST
E. USE_DB_RECOVERY_FILE_DEST
---
Ans: D.
DB_RECOVERY_FILE_DEST points to the Flash Recovery area, and this is the
default for
archived log file destination number 10.

15. Which of the following commands is a key step in multiplexing
control files using an SPFILE?
A. ALTER SYSTEM SET CONTROL_FILES= '/u01/oradata/PRD/cntrl01.ctl',
'/u01/
oradata/PRD/cntrl02.ctl' SCOPE=SPFILE;
B. ALTER SYSTEM SET CONTROL_FILES= '/u01/oradata/PRD/cntrl01.ctl',
'/u01/
oradata/PRD/cntrl02.ctl' SCOPE=MEMORY;
C. ALTER SYSTEM SET CONTROL_FILES= '/u01/oradata/PRD/cntrl01.ctl',
'/u01/
oradata/PRD/cntrl02.ctl' SCOPE=BOTH;
D. The number of control files is fixed when the database is created.
---
Ans: A.
The location of the new control files is not valid until an operating
system copy is made of the current
control file to the new location(s) and the instance is restarted. The
SCOPE=SPFILE option specifies
that the parameter change will not take place until a restart.
Specifying either MEMORY or BOTH
causes an error, because the new control file does not yet exist.
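The complete procedure can be sketched like this, using the paths from the question (not a real system):

```sql
ALTER SYSTEM SET control_files =
  '/u01/oradata/PRD/cntrl01.ctl',
  '/u01/oradata/PRD/cntrl02.ctl' SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
-- At the OS prompt, copy the existing control file to the new location:
-- $ cp /u01/oradata/PRD/cntrl01.ctl /u01/oradata/PRD/cntrl02.ctl
STARTUP
```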

16. Which statement adds a member /logs/redo22.log to redo log file
group 2?
A. ALTER DATABASE ADD LOGFILE '/logs/redo22.log' TO GROUP 2;
B. ALTER DATABASE ADD LOGFILE MEMBER '/logs/redo22.log' TO GROUP 2;
C. ALTER DATABASE ADD MEMBER '/logs/redo22.log' TO GROUP 2;
D. ALTER DATABASE ADD LOGFILE '/logs/redo22.log';
---
Ans: B.
When adding log file members, specify the group number, or specify all
the existing group
members.

17. What is the biggest advantage of having the control files on
different disks?
A. Database performance.
B. Guards against failure.
C. Faster archiving.
D. Writes are concurrent, so having control files on different disks
speeds up control file writes.
---
Ans: B.
Having the control files on different disks ensures that even if you
lose one disk, you lose only
one control file. If you lose one of the control files, you can shut
down the database, copy a control
file, or change the CONTROL_FILES parameter and restart the database.

18. To place the database into ARCHIVELOG mode, in which state must you
start the database?
A. MOUNT
B. NOMOUNT
C. OPEN
D. SHUTDOWN
E. Any of the above
---
Ans: A.
To put the database into ARCHIVELOG mode, the database must be in the
MOUNT state; the control
files and all datafiles that are not offline must be available to change
the database to
ARCHIVELOG mode.

19. Which of the following commands places the database in ARCHIVELOG
mode?
A. ALTER SYSTEM ARCHIVELOG;
B. ALTER DATABASE ARCHIVELOG;
C. ALTER SYSTEM SET ARCHIVELOG=TRUE;
D. ALTER DATABASE ENABLE ARCHIVELOG MODE;
E. ALTER DATABASE ARCHIVELOG MODE;
---
Ans: B.
You use the ALTER DATABASE ARCHIVELOG command while the database is in
the MOUNT state
to enable archiving of online redo log files.
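The full sequence, assuming the archive destinations are already configured, is:

```sql
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```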

20. Which of the following substitution variables formats are always
required for specifying the
names of the archived redo log files? (Choose all that apply.)
A. %d
B. %s
C. %r
D. %t
---
Ans: B, C, D.
The %s (log sequence number), %t (thread number), and %r (resetlogs ID)
variables are always required in the archived redo log file name. The
substitution variable %d, which represents the database ID, is required
only if multiple databases share the same archive log destination.
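A format string that includes the three required variables might look like this; the name itself is only an example:

```sql
-- LOG_ARCHIVE_FORMAT is a static parameter, so SCOPE=SPFILE is required
ALTER SYSTEM SET log_archive_format = 'arch_%t_%s_%r.arc' SCOPE=SPFILE;
```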

OCA 10g -Ch9 - Proactive Database Maintenance

Review Questions

1. Which of the following components of Oracle architecture stores the statistics gathered by the
MMON process?
A. ADDM
B. AWR
C. Data dictionary views
D. Dynamic performance views
----
Ans: B.
The MMON process gathers statistics from the SGA and stores them in the AWR. The
ADDM process then uses these statistics to compare the current state of the database with baseline and historical performance metrics before summarizing the results on the EM Database
Control screens.


2. Which of the following options for the PFILE/SPFILE parameter STATISTICS_LEVEL turns off AWR statistics gathering and ADDM advisory services?
A. OFF
B. TYPICAL
C. ALL
D. BASIC
---
Ans: D.
Setting STATISTICS_LEVEL = BASIC disables the collection and analysis of AWR statistics.
TYPICAL is the default setting, and ALL gathers information for the execution plan and operating system timing. OFF is not a valid value for this parameter.
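To check or change the setting from SQL*Plus (TYPICAL shown here because it is the default):

```sql
SHOW PARAMETER statistics_level
ALTER SYSTEM SET statistics_level = TYPICAL SCOPE=BOTH;
```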


3. The following graphic is from the Sessions: Waiting And Working section of the EM Database Control Performance screen. Using this output, which of the following is the primary source of
user wait activity?
A. CPU Used
B. User I/O
C. System I/O
D. Configuration
----
Ans: B.

The I/O caused by user activity is the primary source of user waits because it is listed first in
the graph's legend. Clicking the User I/O link opens a screen in which you can examine which
SQL statements are contributing the most to the user waits.



4. The following graphic shows the SQL statements that are having the greatest impact on overall DB Time. Which statement has had the greatest impact?
A. 9d87jmt7vo6nb (2%)
B. 8acv8js8kr574 (24%)
C. b6usrq82hwsa3 (73%)
D. None of the above was highest.
---
Ans: C.
The pie graph shows that the SQL statement assigned the identifier
b6usrq82hwsa3 contributed 73 percent of the total time spent servicing
the top three SQL statements.

5. Suppose you have used EM Database Control to drill down into ADDM findings and have found that a single SQL statement is causing the majority of I/O on your system. Which of the following advisors is best suited to troubleshoot this SQL statement?
A. SQL Tuning Advisor
B. SQL Access Advisor
C. Both A and B
D. Neither A nor B
----
Ans: C.
You can use the SQL Tuning Advisor and SQL Access Advisor together to determine if I/O
can be minimized and overall DB Time reduced to the targeted SQL statement.

6. Nearly all the advisors submit their analysis activities to the database in the form of a job. When the analysis job is submitted, which option for job scope adds the least overhead to the system?
A. Limited
B. Restricted
C. Comprehensive
D. Thorough
----
Ans: A.
The Limited scope has the least impact on the system. The Comprehensive scope is the most
detailed, but also makes the greatest demands on the system. There are no job scope options
called Restricted or Thorough.


7. Using the Top SQL link of the EM Database Control Performance screen produces the output shown in the following graphic. Approximately which time interval produced the highest activity for this monitored event?
A. 9:45 to 9:50
B. 10:00 to 10:45
C. 9:55 to 10:10
D. 10:00 to 10:05
---
Ans: D.
The shaded area shows that the time interval from approximately 10:00 to 10:05 will be analyzed for Top SQL statements.


8. Which data dictionary view contains information explaining why ADDM made its recommendations?
A. DBA_ADVISOR_FINDINGS
B. DBA_ADVISOR_OBJECTS
C. DBA_ADVISOR_RECOMMENDATIONS
D. DBA_ADVISOR_RATIONALE
---
Ans: D.
DBA_ADVISOR_RATIONALE provides the rationale for each ADDM recommendation. The ADDM findings are stored in DBA_ADVISOR_FINDINGS. The objects related to the findings are shown in DBA_ADVISOR_OBJECTS. The actual ADDM recommendations are found in DBA_ADVISOR_RECOMMENDATIONS.
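A query along these lines joins a finding to its rationale; the task name and the join are illustrative only:

```sql
SELECT f.message AS finding, r.message AS rationale
FROM   dba_advisor_findings  f,
       dba_advisor_rationale r
WHERE  r.task_id   = f.task_id
AND    f.task_name = 'MY_ADDM_TASK';  -- hypothetical task name
```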

9. Which of the following advisors determines if the space allocated to the Shared Pool, Large Pool, or Buffer Cache are adequate?
A. SQL Tuning Advisor
B. SGA Tuning Advisor
C. Memory Advisor
D. Pool Advisor
---
Ans: C.
The Memory Advisor can help determine whether the overall size of the SGA is appropriate
and whether memory is properly allocated to the SGA components.


10. Which of the following advisors determines if the estimated instance recovery duration is within the expected service-level agreements?
A. Undo Management Advisor
B. SQL Access Advisor
C. SQL Tuning Advisor
D. MTTR Advisor
---
Ans: D.
The Mean Time To Recover (MTTR) Advisor provides recommendations that you can use
to configure the database so that the instance recovery time fits within the service levels that
you specified.


11. If no e-mail address is specified, where will alert information be displayed?
A. In the DBA_ALERTS data dictionary view
B. In the V$ALERTS dynamic performance view
C. In the EM Database Control main screen
D. No alert information is sent or displayed.
-----
Ans: C.
By default, alerts are displayed in the Alerts section of the EM Database
Control main screen, even when e-mail notifications are not configured.


12. When you configure an alert, which of the following types of alert thresholds can you use to monitor a tablespace for diminishing free space?
A. Warning threshold
B. Critical threshold
C. Both A and B
D. Neither A nor B
---
Ans: C.
You can specify both warning and critical thresholds for monitoring the available free space in a tablespace. In this situation, the warning threshold is generally a lower number than the critical
threshold.

13. Multiple baseline metrics can be gathered and stored in the AWR. Why might you want more than one metrics baseline?
A. You might want a separate baseline metric for each user.
B. You might want a separate baseline metric for daytime usage versus off-hours usage.
C. You might want a separate baseline metric for each schema.
D. You would never want more than one baseline metric, even though it is possible to gather
and store them.
---
Ans: B.
Because many transactional systems run batch processing during off-hours, having a relevant
baseline for each type of usage pattern yields better results in terms of alerts and ADDM recommendations.


14. Using EM Database Control, you discover that two application PL/SQL functions and a view are currently invalid. Which of the following might you use to fix these objects? (Choose two.)
A. Shut down and restart the database.
B. Use EM Database Control to recompile the object.
C. Export the invalid objects, drop them, and then import them.
D. Use the ALTER FUNCTION ... COMPILE and ALTER VIEW ... COMPILE
commands.
---
Ans: B, D.
After fixing the issue that originally caused the invalid status, you can use both EM Database
Control and SQL to compile an invalid object. Starting and stopping the database will not fix
invalid objects. Export/import is also not an appropriate technique for recompiling invalid objects.
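In SQL the recompilation looks like this; the object names are hypothetical:

```sql
ALTER FUNCTION app_owner.calc_balance COMPILE;
ALTER VIEW     app_owner.customer_v   COMPILE;
```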


15. You have just created a database using scripts that you wrote. Now
you notice that the automatic collection of database statistics by the EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS procedure is not running. What might be the cause?
A. The PFILE/SPFILE parameter GATHER_STATS=FALSE.
B. Only databases created using Database Configuration Assistant have automatic statistics
collection enabled.
C. The SYSMAN user who owns the AWR is not logged in.
D. The operating system does not support automatic statistics collection.
----
Ans: B.

Automatic statistics collection can be started on databases created outside the Database Configuration Assistant by using the Automatic Workload Repository link in the EM
Database Control Performance screen.

16. Which of the following is a performance metric that could be defined as 'the amount of work that a system can perform in a given amount of time'?
A. Response time
B. Uptime
C. Throughput
D. Runtime
---
Ans: C.
Throughput is an important performance metric because it is an overall measure of performance
that can be compared against similar measures taken before and after tuning changes are
implemented.

17. Which of the following is not one of the three primary sources of performance metric information in the EM Database Control Performance screen?
A. Host
B. Session
C. Instance
D. Network
---
Ans: D.
Network information may be contained in the Session Information section of the EM Database
Control Performance screen, but only if network issues contributed to session wait times.

18. By default, how long will database statistics be retained in the AWR?
A. 7 days
B. 30 days
C. 7 hours
D. Indefinitely
----
Ans: A.
By default, database statistics are retained in the AWR for seven days. You can change the
default duration using the EM Database Control Automatic Workload Repository link in
the Performance screen or using the DBMS_WORKLOAD_REPOSITORY PL/SQL package.
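For example, to keep 14 days of statistics with hourly snapshots (both arguments are in minutes):

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    retention => 14*24*60,   -- minutes: 14 days
    interval  => 60);        -- minutes between snapshots
END;
/
```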

19. Your users have called to complain that system performance has suddenly decreased markedly. Where would be the most likely place to look for the cause of the problem in the EM
Database Control?
A. Main screen
B. Performance screen
C. Administration screen
D. Maintenance screen
---
Ans: B.
The Performance screen of the EM Database Control provides a quick overview of how the
host system, user sessions, and throughput are impacted by the system slowdown. You can also
drill down into any of these three areas to take a look at details about this slowdown.

20. Using EM Database Control, you've identified that the following SQL
statement is the source of a high amount of disk I/O:
SELECT NAME, LOCATION, CREDIT_LIMIT FROM CUSTOMERS
What might you do first to try to improve performance?
A. Run the SQL Tuning Advisor.
B. Run the SQL Access Advisor.
C. Check the EM Database Control main screen for alerts.
D. Click the Alert Log Content link in the EM Database Control main
screen.
----
Ans: A.
Running the SQL Tuning Advisor provides the most information about how the performance
of this SQL statement might be improved. The SQL Access Advisor is run only after the output
from the SQL Tuning Advisor indicates that it will be useful. EM Database Control does not
store detailed information about I/O activity in either its alerts or the Alert log.

Monday, May 28, 2007

Upgrade a 9.2.0 Database TOY9I to 10.2

These are my notes on the exercise of upgrading a 9.2 toy database to 10.2. The steps follow MetaLink Note 316889.1.

Step 1:
~~~~~~~
Copy the following files from the 10gR2 ORACLE_HOME/rdbms/admin directory to a directory outside of the Oracle home, such as the /tmp directory on your system:

ORACLE_HOME/rdbms/admin/utlu102i.sql
ORACLE_HOME/rdbms/admin/utltzuv2.sql

$ cp utlu102i.sql utltzuv2.sql /tmp

Step 2:
~~~~~~~
Log in as SYSDBA, then run and spool the utlu102i.sql script.
sys@TOY9I> spool database_info.log
sys@TOY9I> @utlu102i.sql
sys@TOY9I> spool off


---------------------------------
- Contents of database_info.log -
---------------------------------
sys@TOY9I> @utlu102i.sql
Oracle Database 10.2 Upgrade Information Utility 04-07-2007 12:34:59
.
**********************************************************************
Database:
**********************************************************************
--> name: TOY9I
--> version: 9.2.0.4.0
--> compatible: 9.2.0
--> blocksize: 8192
.
**********************************************************************
Logfiles: [make adjustments in the current environment]
**********************************************************************
--> The existing log files are adequate. No changes are required.
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 245 MB
WARNING: --> TEMP tablespace is not large enough for the upgrade.
.... currently allocated size: 20 MB
.... minimum required size: 58 MB
.... increase current size by: 38 MB
.... tablespace is NOT AUTOEXTEND ENABLED.
.
**********************************************************************
Update Parameters: [Update Oracle Database 10.2 init.ora or spfile]
**********************************************************************
WARNING: --> "shared_pool_size" needs to be increased to at least
197337907
WARNING: --> "java_pool_size" needs to be increased to at least 67108864
WARNING: --> "streams_pool_size" is not currently defined and needs a
value of at least 50331648
WARNING: --> "large_pool_size" needs to be increased to at least 8388608
WARNING: --> "session_max_open_files" needs to be increased to at least
20


**********************************************************************
Renamed Parameters: [Update Oracle Database 10.2 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 10.2 init.ora or
spfile]
**********************************************************************
-- No obsolete parameters found. No changes are required
.
**********************************************************************
Components: [The following database components will be upgraded or
installed]
**********************************************************************
--> Oracle Catalog Views [upgrade] VALID
--> Oracle Packages and Types [upgrade] VALID
.
**********************************************************************
Miscellaneous Warnings
**********************************************************************
WARNING: --> Deprecated CONNECT role granted to some user/roles.
.... CONNECT role after upgrade has only CREATE SESSION privilege.
WARNING: --> Database contains stale optimizer statistics.
.... Refer to the 10g Upgrade Guide for instructions to update
.... statistics prior to upgrading the database.
.... Component Schemas with stale statistics:
.... SYS
.
**********************************************************************
SYSAUX Tablespace:
[Create tablespace in the Oracle Database 10.2 environment]
**********************************************************************
--> New "SYSAUX" tablespace
.... minimum required size for database upgrade: 500 MB
.

PL/SQL procedure successfully completed.


-------------------------------
- End of database_info.log -
-------------------------------

Action:
1. WARNING: --> TEMP tablespace is not large enough for the upgrade.
Increase temp tablespace to 60M

sys@TOY9I> select name, bytes/1024/1024 "Size(M)" from v$tempfile;

NAME                                                             Size(M)
------------------------------------------------------------------------
/ora01/orabkup/GENQ/.temp_toy10g/ora01/oradata/TOY9I/temp01.dbf       20

sys@TOY9I> alter database tempfile
'/ora01/orabkup/GENQ/.temp_toy10g/ora01/oradata/TOY9I/temp01.dbf'
resize 60M;

sys@TOY9I> select name, bytes/1024/1024 "Size(M)" from v$tempfile;

NAME                                                             Size(M)
------------------------------------------------------------------------
/ora01/orabkup/GENQ/.temp_toy10g/ora01/oradata/TOY9I/temp01.dbf       60

2. Modify the following two parameters in the init.ora:
WARNING: --> "shared_pool_size" needs to be increased to at least
197337907
WARNING: --> "session_max_open_files" needs to be increased to at
least 20

Step 3:
~~~~~~~
Check for the deprecated CONNECT Role

sys@TOY9I> SELECT grantee FROM dba_role_privs
           WHERE granted_role = 'CONNECT' and
           grantee NOT IN (
           'SYS', 'OUTLN', 'SYSTEM', 'CTXSYS', 'DBSNMP',
           'LOGSTDBY_ADMINISTRATOR', 'ORDSYS',
           'ORDPLUGINS', 'OEM_MONITOR', 'WKSYS', 'WKPROXY',
           'WK_TEST', 'WKUSER', 'MDSYS', 'LBACSYS', 'DMSYS',
           'WMSYS', 'OLAPDBA', 'OLAPSVR', 'OLAP_USER',
           'OLAPSYS', 'EXFSYS', 'SYSMAN', 'MDDATA',
           'SI_INFORMTN_SCHEMA', 'XDB', 'ODM');

GRANTEE
------------------------------
SCOTT

In Oracle 10.2 the CONNECT role includes only the CREATE SESSION
privilege, so if SCOTT needs other privileges after the upgrade, we must
grant them as necessary.


Step 4:
~~~~~~~
Create the script for dblink in case of downgrade of the database.

sys@TOY9I> SELECT
  'create '||DECODE(U.NAME,'PUBLIC','public ')||'database link '||CHR(10)
  ||DECODE(U.NAME,'PUBLIC',NULL, U.NAME||'.')||L.NAME||CHR(10)
  ||'connect to '||L.USERID||' identified by '''
  ||L.PASSWORD||''' using '''||L.HOST||''''
  ||CHR(10)||';' TEXT
  FROM sys.link$ L,
       sys.user$ U
  WHERE L.OWNER# = U.USER#;

no rows selected

Step 5:
~~~~~~~
Check for the TIMESTAMP WITH TIMEZONE datatype. This step is required only for upgrades to 10gR1.

sys@TOY9I> spool timezone_info.log
sys@TOY9I> @utltzuv2.sql
DROP TABLE sys.sys_tzuv2_temptab
*
ERROR at line 1:
ORA-00942: table or view does not exist



Table created.

Query sys.sys_tzuv2_temptab Table to see if any TIMEZONE data is
affected by version 2 transition
rules

PL/SQL procedure successfully completed.


Commit complete.

sys@TOY9I> spool off;

Step 6:
~~~~~~~
about NLS_NCHAR_CHARACTERSET

sys@TOY9I> select distinct OWNER, TABLE_NAME from DBA_TAB_COLUMNS where
  DATA_TYPE in ('NCHAR','NVARCHAR2', 'NCLOB') and OWNER not in
  ('SYS','SYSTEM');

no rows selected

If no rows are selected, go to the next step.


Step 7:
~~~~~~~
Statistics

==> Backup the existing statistics as follow

sys@TOY9I> spool sdict
sys@TOY9I> grant analyze any to sys;
Grant succeeded.
sys@TOY9I> exec dbms_stats.create_stat_table('SYS','dictstattab');
PL/SQL procedure successfully completed.

sys@TOY9I> exec
dbms_stats.export_schema_stats('DBSNMP','dictstattab',statown => 'SYS');
sys@TOY9I> exec
dbms_stats.export_schema_stats('OUTLN','dictstattab',statown => 'SYS');
sys@TOY9I> exec
dbms_stats.export_schema_stats('SCOTT','dictstattab',statown => 'SYS');
sys@TOY9I> exec
dbms_stats.export_schema_stats('SYSTEM','dictstattab',statown => 'SYS');
sys@TOY9I> exec
dbms_stats.export_schema_stats('SYS','dictstattab',statown => 'SYS');

Note:

This data is useful if you want to revert back the statistics. For example, the following PL/SQL subprograms import the statistics for the SYS schema after deleting the existing statistics:

exec dbms_stats.delete_schema_stats('SYS');
exec dbms_stats.import_schema_stats('SYS','dictstattab');


==> To gather statistics

exec dbms_stats.gather_schema_stats('SCOTT', options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', -
cascade => TRUE)
/

exec dbms_stats.gather_schema_stats('OUTLN',options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE)
/

exec dbms_stats.gather_schema_stats('DBSNMP', -
options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', -
cascade => TRUE)
/

exec dbms_stats.gather_schema_stats('SYSTEM', -
options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', -
cascade => TRUE)
/

exec dbms_stats.gather_schema_stats('SYS', -
options=>'GATHER', -
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, -
method_opt => 'FOR ALL COLUMNS SIZE AUTO', -
cascade => TRUE)

Step 8:
~~~~~~~
Check for invalid objects

Step 9:
~~~~~~~~
Check for corruption in the dictionary, use the following commands in sqlplus
connected as sys:


Set verify off
Set space 0
Set line 120
Set heading off
Set feedback off
Set pages 1000
Spool analyze.sql

Select 'Analyze cluster "'||cluster_name||'" validate structure cascade;'
from dba_clusters
where owner='SYS'
union
Select 'Analyze table "'||table_name||'" validate structure cascade;'
from dba_tables
where owner='SYS' and partitioned='NO' and (iot_type='IOT' or iot_type
is NULL)
union
Select 'Analyze table "'||table_name||'" validate structure cascade
into invalid_rows;'
from dba_tables
where owner='SYS' and partitioned='YES';

spool off


This creates a script called analyze.sql.
Now execute the following steps.

$ sqlplus '/as sysdba'
SQL> @$ORACLE_HOME/rdbms/admin/utlvalid.sql
SQL> @analyze.sql

This script (analyze.sql) should not return any errors.

Step 10:
~~~~~~~~
Ensure that all Snapshot refreshes are successfully completed, and that replication is stopped.

$ sqlplus '/as sysdba'
SQL> select distinct(trunc(last_refresh)) from
dba_snapshot_refresh_times;

Step 11:
~~~~~~~~
Stop the listener for the database:

$ lsnrctl
LSNRCTL> stop

Ensure no files need media recovery:

$ sqlplus '/ as sysdba'
SQL> select * from v$recover_file;

This should return no rows.

Step 12:
~~~~~~~~
Ensure no files are in backup mode:

SQL> select * from v$backup where status!='NOT ACTIVE';

This should return no rows.

Step 13:
~~~~~~~~
Resolve any pending distributed transactions:

SQL> select * from dba_2pc_pending;

If this returns rows you should do the following:

SQL> select local_tran_id from dba_2pc_pending;
SQL> execute dbms_transaction.purge_lost_db_entry('');
SQL> commit;

Step 14:
~~~~~~~~
Disable all batch and cron jobs.

Step 15:
~~~~~~~~
Ensure that the users SYS and SYSTEM have SYSTEM as their default tablespace.

sys@TOY9I> select username, default_tablespace from dba_users
  where username in ('SYS','SYSTEM');

SYS        SYSTEM
SYSTEM     SYSTEM

Step 16:
~~~~~~~~
If auditing is enabled, optionally ensure that the AUD$ table is in the SYSTEM tablespace.

sys@TOY9I> select tablespace_name from dba_tables where
table_name='AUD$';
TABLESPACE_NAME
------------------------------
SYSTEM

Step 17:
~~~~~~~~
Note down where all control files are located.
sys@TOY9I> select name from v$controlfile;

NAME
------------------------------------------------------------------------
/ora01/orabkup/GENQ/.temp_toy10g/ora01/oradata/TOY9I/control.001.dbf
/ora01/orabkup/GENQ/.temp_toy10g/ora01/oradata/TOY9I/control.002.dbf
/ora01/orabkup/GENQ/.temp_toy10g/ora01/oradata/TOY9I/control.003.dbf

Step 18:
~~~~~~~~
Shut down the database.

$ sqlplus '/as sysdba'
SQL> shutdown immediate;


Step 19: (skip)
~~~~~~~~
Perform a full cold backup!

You can either do this by manually copying the files or
sign on to RMAN:

$ rman target / nocatalog

And issue the following RMAN commands:

RUN
{
ALLOCATE CHANNEL chan_name TYPE DISK;
BACKUP DATABASE FORMAT
'/ora01/orabkup/GENQ/.temp_toy10g/ora01/orabkup/TOY9I/%U' TAG
before_upgrade;
backup current controlfile format
'/ora01/orabkup/GENQ/.temp_toy10g/ora01/orabkup/TOY9I/%U';
}

Upgrading to the New Oracle Database 10g Release 2
--------------------------------------------------


Step 20:
~~~~~~~~
Update the init.ora file:

- Make a backup of the init.ora file.

- Comment out obsolete parameters (list in appendix A).

- Change all deprecated parameters (list in appendix B).

- Set the COMPATIBLE initialization parameter to an appropriate value.
If you are upgrading from 8.1.7.4, set the COMPATIBLE parameter to 9.2.0
until after the upgrade has been completed successfully. If you are
upgrading from 9.2.0 or 10.1.0, leave the COMPATIBLE parameter set to its
current value until the upgrade has been completed successfully. This
avoids unnecessary ORA-942 errors being reported in SMON trace files
during the upgrade (because the upgrade looks for 10.2 objects that have
not yet been created).

- If you have set the parameter NLS_LENGTH_SEMANTICS to CHAR, change the
value to BYTE during the upgrade.

- Verify that the parameter DB_DOMAIN is set properly.

- Make sure the PGA_AGGREGATE_TARGET initialization parameter is set to at least 24 MB.

- Ensure that the SHARED_POOL_SIZE and the LARGE_POOL_SIZE are at least 150 MB.
Please also check the "KNOWN ISSUES" section.

- Make sure the JAVA_POOL_SIZE initialization parameter is set to at least 150 MB.

- Ensure there is a value for DB_BLOCK_SIZE

- On Windows operating systems, change the BACKGROUND_DUMP_DEST and USER_DUMP_DEST initialization parameters that point to RDBMS80 or any other
environment variable to point to the following directories instead:

BACKGROUND_DUMP_DEST to ORACLE_BASE\oradata\DB_NAME

and

USER_DUMP_DEST to ORACLE_BASE\oradata\DB_NAME\archive

- Comment out any existing AQ_TM_PROCESSES parameter setting, and enter
a new one
that explicitly sets AQ_TM_PROCESSES=0 for the duration of the upgrade

- Make sure all path names in the parameter file are fully specified.
You should not
have relative path names in the parameter file.

- If you are using a cluster database, set the parameter
CLUSTER_DATABASE=FALSE
during the upgrade.

- If you are upgrading a cluster database, then modify the
initdb_name.ora
file in the same way that you modified the parameter file.

Step 21 :
~~~~~~~~~
Check for adequate freespace on archive log destination file systems.

Step 22 :
~~~~~~~~~
Ensure the NLS_LANG variable is set correctly:

$ env | grep NLS_LANG
sys@TOY9I> ho echo $NLS_LANG
american_america.we8dec

Note: what should we expect?


Step 23:
~~~~~~~~
If needed, copy the SQL*Net files (listener.ora, tnsnames.ora, etc.) to the new location (when no TNS_ADMIN environment variable is used):

$ cp $OLD_ORACLE_HOME/network/admin/*.ora $ORACLE_HOME/network/admin

Step 24: (ignore)
~~~~~~~~
If your operating system is Windows NT, delete your services with the ORADIM utility of your old Oracle version.

Step 25:
~~~~~~~~
Copy the spfile to the new location:


tx2qdfsdb1:TOY9I: /oracle/product/9.2.0/dbs > pwd
/oracle/product/9.2.0/dbs
tx2qdfsdb1:TOY9I: /oracle/product/9.2.0/dbs > cp spfileTOY9I.ora
/oracle/product/10.2.0/db_1/dbs/
tx2qdfsdb1:TOY9I: /oracle/product/9.2.0/dbs > ls
/oracle/product/10.2.0/db_1/dbs/
hc_BIODSQ.dat init.ora initdw.ora lkTOY10G
spfileBIODSQ.ora
hc_TOY10G.dat initBIODSQ.ora lkBIODSQ orapwBIODSQ
spfileTOY9I.ora


Step 26:
~~~~~~~~
Update the oratab entry, to set the new ORACLE_HOME and disable
automatic startup:

::N
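The oratab edit can be scripted instead of done by hand; this sketch works on a copy of the file (the SID, homes, and file locations are assumptions; the real oratab lives in /etc/oratab or /var/opt/oracle/oratab depending on platform):

```shell
#!/bin/sh
# Hypothetical oratab copy for illustration
cat > /tmp/oratab <<'EOF'
TOY9I:/oracle/product/9.2.0:Y
EOF

SID=TOY9I
NEW_HOME=/oracle/product/10.2.0/db_1

# Point the entry at the new home and disable automatic startup (:N)
sed "s|^${SID}:.*|${SID}:${NEW_HOME}:N|" /tmp/oratab > /tmp/oratab.new

cat /tmp/oratab.new
```

After the upgrade completes, the same edit with :Y restores automatic startup (step 45).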

Step 27:
~~~~~~~~
Update the environment variables like ORACLE_HOME and PATH

$ . oraenv

Step 28:
~~~~~~~~
Make sure the following environment variables point to the new
Release directories:
- ORACLE_HOME
- ORACLE_BASE
- PATH
- ORA_NLS10
- LD_LIBRARY_PATH
- LD_LIBRARY_PATH_64 (Solaris only)
- LIBPATH (AIX only)
- SHLIB_PATH (HPUX only)
- ORACLE_PATH

tx2qdfsdb1:TOY9I: /oracle > env | grep ORACLE
ORACLE_BASE=/oracle
ORACLE_SID=TOY9I
PS1=${SNAME}:${ORACLE_SID}: ${PWD} >
ORACLE_TERM=vt100
ORACLE_HOME=/oracle/product/10.2.0/db_1

tx2qdfsdb1:TOY9I: /oracle > env | grep PATH
MANPATH=/usr/share/man:/opt/ssh2/man
SHLIB_PATH=/oracle/product/10.2.0/db_1/lib:/usr/lib
SQLPATH=/oracle/product/8.1.7.4/sqlplus/admin
PATH=/bin:/oracle/dba/local:/usr/local/bin:/usr/bin:/usr/ccs/bin:/usr/uc
b:/etc:/usr/sbin:/sbin:/usr/dt/bin:/usr/lpp/X11/bin:/opt/bin:/usr/lbin:/
usr/bin/X11:/usr/X11R6/bin:/usr/openwin/bin:/opt/STSssh/bin:/oracle/dba/
standard:/oracle/dba/standard/rman:/opt/pb/bin:/opt/UDSssh/bin:/banktool
s:/banktools/local/bin:/opt/SDSssh/bin/scp:/oracle/dba/foglight/script:/
usr/openv/netbackup/bin:/opt/VRTSvmsa/bin/vmsa:/banktools/pbrun/3.2/bin:
/oracle/product/10.2.0/db_1/bin:/oracle/product/10.2.0/db_1/oracm/bin:/o
racle/product/10.2.0/db_1/OPatch:.
CLASSPATH=/oracle/product/10.2.0/db_1/jdbc/lib/classes111.zip:/oracle/pr
oduct/10.2.0/db_1/ord/jlib:/oracle/product/10.2.0/db_1/JRE:/oracle/produ
ct/10.2.0/db_1/jlib:/oracle/product/10.2.0/db_1/rdbms/jlib:/oracle/produ
ct/10.2.0/db_1/network/jlib:
LIBPATH=/oracle/product/10.2.0/db_1/lib32:/oracle/product/10.2.0/db_1/li
b:/oracle/product/10.2.0/db_1/db_1/lib:/oracle/product/10.2.0/db_1/jdbc/
lib:/usr/lib:/usr/ccs/lib:/opt/SUNWspro/lib:/usr/openwin/lib:/usr/ucblib
:/usr/dt/lib:/oracle/product/10.2.0/db_1/oracm/lib:
LD_RUN_PATH=/opt/SUNWspro/lib
LD_LIBRARY_PATH=/oracle/product/10.2.0/db_1/lib32:/oracle/product/10.2.0
/db_1/lib:/oracle/product/10.2.0/db_1/db_1/lib:/oracle/product/10.2.0/db
_1/jdbc/lib:/usr/lib:/usr/ccs/lib:/opt/SUNWspro/lib:/usr/openwin/lib:/us
r/ucblib:/usr/dt/lib:/oracle/product/10.2.0/db_1/oracm/lib:

tx2qdfsdb1:TOY9I: /oracle > env | grep ORA_NLS10
ORA_NLS10=/oracle/product/10.2.0/db_1/nls/data


$ env | grep LD_LIBRARY_PATH
$ env | grep ORACLE_PATH

(ORACLE_PATH is not set; what is it for?)

Note that the ORA_NLS10 environment variable replaces the ORA_NLS33
environment
variable, so you may need to unset ORA_NLS33 and set ORA_NLS10.

Step 29:
~~~~~~~~
Startup upgrade the database:

$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus / as sysdba
Use Startup with the UPGRADE option:
SQL> startup upgrade


idle> startup upgrade
ORACLE instance started.

Total System Global Area 562036736 bytes
Fixed Size 2031648 bytes
Variable Size 520101856 bytes
Database Buffers 33554432 bytes
Redo Buffers 6348800 bytes
Database mounted.
Database opened.

Step 30:
~~~~~~~~

Create a SYSAUX tablespace. In Oracle Database 10g, the SYSAUX tablespace is used to consolidate data from a number of tablespaces that were separate in previous releases.

The SYSAUX tablespace must be created with the following mandatory attributes:

- ONLINE
- PERMANENT
- READ WRITE
- EXTENT MANAGEMENT LOCAL
- SEGMENT SPACE MANAGEMENT AUTO

The Upgrade Information Tool (utlu102i.sql in step 4) provides an estimate of the minimum required size for the SYSAUX tablespace in the SYSAUX Tablespace section.

The following SQL statement would create a 500 MB SYSAUX tablespace for the database:

CREATE TABLESPACE sysaux DATAFILE
'/ora01/orabkup/GENQ/.temp_toy10g/ora01/oradata/TOY9I/sysaux01.dbf'
SIZE 500M REUSE
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;

Step 31:
~~~~~~~~
If table XDB.MIGR9202STATUS exists in the database, drop it before upgrading
the database (to avoid the issue described in )

Step 32:
~~~~~~~~
Spool the output so you can take a look at possible errors after the
upgrade:

idle> spool /tmp/upgrade.log
idle> @catupgrd.sql


Turn off the spooling of script results to the log file:

SQL> SPOOL OFF

Then, check the spool file and verify that the packages and procedures
compiled successfully. You named the spool file earlier in this step; the suggested name was upgrade.log. Correct any problems you find in this file and rerun the appropriate upgrade script if necessary. You can rerun any of the scripts described in this note as many times as necessary.
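Scanning the spool file for errors is easy to script with grep. A sketch; the sample log contents and file names are hypothetical, and the error prefixes shown are common ones rather than an exhaustive list:

```shell
#!/bin/sh
# Hypothetical sample upgrade log for illustration
cat > /tmp/upgrade_sample.log <<'EOF'
Package created.
ORA-00942: table or view does not exist
Package body created.
PLS-00201: identifier 'FOO' must be declared
EOF

# Pull out ORA-, PLS- and SP2- lines (with line numbers) for review;
# some, such as "does not exist" errors on a first run, may be ignorable
grep -n -E 'ORA-|PLS-|SP2-' /tmp/upgrade_sample.log > /tmp/upgrade_errors.lst

cat /tmp/upgrade_errors.lst
```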

-------------------------------------
- last portion of the upgrade.log -
-------------------------------------

Oracle Database 10.2 Upgrade Status Utility          04-07-2007 21:17:46
.
Component                 Status    Version      HH:MM:SS
Oracle Database Server    VALID     10.2.0.3.0   00:42:06
.
Total Upgrade Time: 00:43:27

PL/SQL procedure successfully completed.

idle>
idle> SET SERVEROUTPUT OFF
idle> SET VERIFY ON
idle>
idle> DOC
DOC>#######################################################################
DOC>#######################################################################
DOC>
DOC> The above PL/SQL lists the SERVER components in the upgraded
DOC> database, along with their current version and status.
DOC>
DOC> Please review the status and version columns and look for
DOC> any errors in the spool log file. If there are errors in the spool
DOC> file, or any components are not VALID or not the current version,
DOC> consult the Oracle Database Upgrade Guide for troubleshooting
DOC> recommendations.
DOC>
DOC> Next shutdown immediate, restart for normal operation, and then
DOC> run utlrp.sql to recompile any invalid application objects.
DOC>
DOC>#######################################################################
DOC>#######################################################################


Step 33:
~~~~~~~~
Run utlu102s.sql, specifying the TEXT option:

SQL> @utlu102s.sql TEXT

The Post-upgrade Status Tool displays the status of the database
components in the upgraded database.

Step 34:
~~~~~~~~
Restart the database:
SQL> shutdown immediate (DO NOT USE SHUTDOWN ABORT!!!!!!!!!)
SQL> startup restrict

Executing this clean shutdown flushes all caches, clears buffers, and
performs other database housekeeping tasks, which is needed if you want
to upgrade specific components.

Step 35: (skip)
~~~~~~~~
Run olstrig.sql to re-create DML triggers on tables with Oracle Label
Security policies.
This step is only necessary if Oracle Label Security is in your database.
(Check from Step 33).

SQL> @olstrig.sql

Step 36:
~~~~~~~~
Run utlrp.sql to recompile any remaining stored PL/SQL and Java code.

SQL> @utlrp.sql

Verify that all expected packages and classes are valid:

After Upgrading a Database
--------------------------

Step 37:
~~~~~~~~
Shut down and restart the database:

$ sqlplus '/as sysdba'
SQL> shutdown
SQL> startup restrict

Step 38:
~~~~~~~~
Complete Step 38 only if you upgraded your database from release 8.1.7;
otherwise skip to Step 40.

Step 39: (skipped)
~~~~~~~~


Step 40:
~~~~~~~~
Now edit the init.ora:
- If you changed the value of NLS_LENGTH_SEMANTICS prior to the upgrade,
set it back to CHAR.
- If you changed the CLUSTER_DATABASE parameter prior to the upgrade,
set it back to TRUE.


Step 41:
~~~~~~~~
Startup the database:
SQL> startup

Create a server parameter file from the initialization parameter file:
SQL> create spfile from pfile;

This will create an spfile as a copy of the init.ora file in the
$ORACLE_HOME/dbs directory.


Step 42:
~~~~~~~~
Modify the listener.ora file:
For the upgraded instance(s), modify the ORACLE_HOME parameter
to point to the new ORACLE_HOME.

Step 43:
~~~~~~~~
Start the listener
$ lsnrctl
LSNRCTL> start

Step 44:
~~~~~~~~
Enable cron and batch jobs

Step 45:
~~~~~~~~
Change oratab entry to use automatic startup
SID:ORACLE_HOME:Y

Step 46:
~~~~~~~~
Upgrade the Oracle Cluster Registry (OCR) Configuration
If you are using Oracle Cluster Services, then you must upgrade the
Oracle Cluster Registry (OCR) keys for the database.

* Use srvconfig from the 10g ORACLE_HOME. For example:

% srvconfig -upgrade -dbname db_name -orahome pre-10g_Oracle_home

Thursday, May 24, 2007

Rebuild/Move Replication Base Tables

1. Check deferred transactions.

system - chk_def_tbls.sql


----
spool chk_def_tbls.log

select * from defcall;
select * from deftran;
select * from deferror;
select * from dba_repcatlog;

spool off
----

2. Check base tables.

system - chk_base_def_tbls.sql
----
spool chk_base_def_tbls.log

select * from system.def$_calldest;
select * from system.def$_defaultdest;
select * from system.def$_error;
select * from system.def$_origin;
select * from system.repcat$_repschema;
select * from system.def$_destination;

spool off
-----


3. Remove replication if it exists.

sys - $ORACLE_HOME/rdbms/admin/catrepr.sql

4. Drop the base deferred transaction tables.

system - dr_base_def_tbls.sql

----------------
spool dr_base_tbls.log

drop table system.def$_calldest;
drop table system.def$_defaultdest;
drop table system.def$_error;
drop table system.def$_origin;
--drop table system.repcat$_repschema; This is dropped in catrepr.sql
drop table system.def$_destination;

execute dbms_aqadm.drop_queue_table('SYSTEM.DEF$_AQCALL', TRUE);
execute dbms_aqadm.drop_queue_table('SYSTEM.DEF$_AQERROR', TRUE);

commit;

spool off
-------------


5. Create the replication tablespace and
set system's default tablespace to the rep tblsp.

system - cr_rep_tblsp.sql

6. Create the base deferred transaction tables.

system - $ORACLE_HOME/rdbms/admin/catdefrt

7. Create replication

sys - $ORACLE_HOME/rdbms/admin/catrep

8. Change system's default tablespace back to what it was.

Wednesday, May 23, 2007

RMAN -- Set DBID

Issues -- target database name is ambiguous
=============================================

tx2qdfsdb1> /oracle/admin/GENQA/rman/shell [GENQA] rman target / catalog
rman/xxx@cataprd6

Recovery Manager: Release 8.1.7.4.0 - Production

RMAN-06006: connected to target database: GENQA (not mounted)
RMAN-06008: connected to recovery catalog database

RMAN> list backup;

RMAN-03022: compiling command: list
RMAN-03026: error recovery releasing channel resources
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure during compilation of command
RMAN-03013: command type: list
RMAN-06004: ORACLE error from recovery catalog database: RMAN-20005:
target database name is ambiguous

Solution -- Set DBID -- Must be done when database is down
===========================================================
rman@CATAPRD6> select * from rc_database where name='GENQA';

DB_KEY     DBINC_KEY  DBID       NAME     RESETLOGS_CHANGE# RESETLOGS_TIME
---------- ---------- ---------- -------- ----------------- --------------------
4829855    4829856    525219448  GENQA    3.6278E+10        21-MAY-2007 13:24:35
4766609    4766610    515653788  GENQA    3.6123E+10        27-MAR-2007 23:17:18

==

RMAN> set dbid=525219448;

RMAN-03022: compiling command: CSET
RMAN-03026: error recovery releasing channel resources
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure during compilation of command
RMAN-03013: command type: CSET
RMAN-06188: cannot use command when connected to target database

RMAN> shutdown;

RMAN-06402: Oracle instance shut down

RMAN> set dbid=525219448;

RMAN-03022: compiling command: CSET

RMAN> startup nomount;

RMAN-06193: connected to target database (not started)
RMAN-06196: Oracle instance started

Total System Global Area 2064646304 bytes

Fixed Size 73888 bytes
Variable Size 1005494272 bytes
Database Buffers 1048576000 bytes
Redo Buffers 10502144 bytes

Tuesday, May 22, 2007

Post RMAN Refresh tasks -- Remove Replication

Post RMAN Refresh tasks -- Remove Replication
==================================================================

Issues:
-------

When using the RMAN duplicate command to create a test database from a
production database that is a replication master definition site, we
want to drop all replication objects in the test database.

Post-refresh task
--------------------

1. drop replication packages
login as system
------ gen_drop_rep_packages.sql ---
set feedback off;
set echo off;
set pagesize 1000;
set linesize 200;
set heading off;
set echo off;
spool drop_replication_packages.sql;
select 'spool drop_replication_packages.lst;' from dual;

select 'DROP PACKAGE ' || owner || '.' || object_name || ';' from
dba_objects
where object_type ='PACKAGE' and (object_name like '%$RP' or
object_name like '%$RL') ;

select 'spool off;' from dual;
select 'exit;' from dual;
spool off;
-- exit;
---------------

select 'DROP PUBLIC SYNONYM ' || object_name || ';' from dba_objects
where object_type ='SYNONYM' and (object_name like '%$RP' or
object_name like '%$RL') ;

2. login as sys, run catrepr.sql (removes replication-related catalog
views)
@?/rdbms/admin/catrepr.sql

3. remove replication related jobs

login as repadmin
select 'exec dbms_job.remove(' || job || ');' from user_jobs;

'EXECDBMS_JOB.REMOVE('||JOB||');'
---------------------------------------------------------------
exec dbms_job.remove(98);
exec dbms_job.remove(78);
exec dbms_job.remove(44);
exec dbms_job.remove(79);
exec dbms_job.remove(80);
exec dbms_job.remove(81);
exec dbms_job.remove(82);
exec dbms_job.remove(83);
exec dbms_job.remove(84);
exec dbms_job.remove(85);
exec dbms_job.remove(86);
exec dbms_job.remove(43);
exec dbms_job.remove(87);
exec dbms_job.remove(88);
exec dbms_job.remove(89);
exec dbms_job.remove(90);
exec dbms_job.remove(141);
exec dbms_job.remove(142);
exec dbms_job.remove(144);
exec dbms_job.remove(145);

4. drop database links
repadmin@GENQB> drop database link GENB.WORLD
2 /
Database link dropped.

repadmin@GENQB> @conn system/xxx

system@GENQB> drop public database link GENB.WORLD;
Database link dropped.

drop GENARCHP.WORLD public link
drop public database link GENARCHP.WORLD;

5. Remove the REPADMIN user
DROP USER repadmin CASCADE;

Note: Before re-creating replication, you may need to execute the following:
EXECUTE Dbms_Defer_Sys.Unregister_Propagator(username=>'REPADMIN');
EXECUTE Dbms_Repcat_Admin.Revoke_Admin_Any_Schema(username=>'REPADMIN');

6. change password for all users

7. fix global_name
alter database rename global_name to genqb.world;

8. remove statspack job too
conn perfstat/xxx
execute dbms_job.remove(119);

9. Add tempfile if not done so

10. Drop gen_rep_data tablespace:

rm the following files after dropping the tablespace:
system@GENQB> select file_name from dba_data_files where tablespace_name
='GEN_REP_DATA';

FILE_NAME
------------------------------------------------------------------------
----------------------------
/ora02/oradata/GENXQB/gen_rep_data_05.dbf
/ora01/oradata/GENQB/gen_rep_data_01.dbf
/ora02/oradata/GENQB/gen_rep_data_02.dbf
/ora01/oradata/GENQB/gen_rep_data_03.dbf
/ora02/oradata/GENQB/gen_rep_data_04.dbf

Work Plan -- Apply Changes to TSTARCHP/TSTP

Apply changes to TSTARCHP/TSTP

1. Preparation of scripts

2. RMAN cold backup TSTARCHP

3. Apply scripts (In TSTARCHP)

cd /u01/app/oracle/admin/TSTARCHP/create/chg2007-05-22/db_r147_combined_tstarchp/ddl
Login as SYSTEM, execute the following control script to alter table
columns:
alter_arch_unix.sql

Note: some errors found in the log
------------------
ORA-00942: table or view does not exist
ORA-00942: table or view does not exist
ORA-02443: Cannot drop constraint - nonexistent constraint
ORA-00942: table or view does not exist
ORA-00942: table or view does not exist
ORA-02443: Cannot drop constraint - nonexistent constraint
ORA-00904: invalid column name
ORA-00904: invalid column name
ORA-02262: ORA-%05d occurs while type-checking column default value
expression
ORA-01418: specified index does not exist
ORA-01418: specified index does not exist
ORA-02443: Cannot drop constraint - nonexistent constraint
-------------------

4. Apply this latest version of a trigger (In TSTP)
cd /oracle/admin/TSTP/create/chg2007-05-22
log in as DEV, execute
DEV.ARCH_CF_INVOICE_LINE_ITEM_TRG_ddl.sql

Problem:
--------

SQL> @DEV.ARCH_CF_INVOICE_LINE_ITEM_TRG_ddl.sql
CREATE OR REPLACE TRIGGER arch_cf_invoice_line_item_trg
*
ERROR at line 1:
ORA-04054: database link TSTARCH does not exist

Fix
---
Change TSTARCH to TSTARCHP

5. Recompile triggers starting with ARCH_*. (In TSTP)

Sunday, May 20, 2007

RMAN: delete backupset on disk -- sample script

#!/bin/ksh
# Filename: del_backupset.ksh
# Purpose : delete backupsets completed from five days ago to one day ago
#           from disk and catalog
# Author  : Denis Sun
# Created : 10-MAY-2007
# Usage   : del_backupset.ksh
# Note    : This is for TST3Q only (DBID=4050733714, registered in catp92a)


. /oracle/.profile_old
. /oracle/local/shell/oraenv TST3Q


CUR_DATE=`date '+%Y%m%d'`
RMAN_CMD=/oracle/admin/TST3Q/rman/cmd
RMAN_LOG=/oracle/admin/TST3Q/rman/log


sqlplus -s rman/xxx@catp92a <<EOF
set heading off
set feedback off
spool ${RMAN_CMD}/del_backupset.cmd
select 'allocate channel for maintenance type disk;' from dual;
select 'change backupset ' || bs_key || ' delete;' || ' # ' || completion_time
from rc_backup_set
where db_id=4050733714
and completion_time > sysdate - 5
and completion_time < sysdate - 1
order by completion_time;
select 'release channel;' from dual;
spool off
EOF

rman target / catalog rman/xxx@catp92a \
    cmdfile=${RMAN_CMD}/del_backupset.cmd \
    log=${RMAN_LOG}/del_backupset_${CUR_DATE}.log
mailx -s "TST3Q: delete backupset" denis.sun@xxx.com <
${RMAN_LOG}/del_backupset_${CUR_DATE}.log

Saturday, May 19, 2007

Set up a two-site multimaster replication environment including databases MYDBQA and MYDBQB

Note: I had a task to create a testing replication environment. This is my note about this effort.

Purpose:
Create a test replication environment that is similar to the production environment for MYDBP and MYDBB

References:
MetaLink Note:117434. Initial Steps Required to Create a Multi Master Replication Environment v8.1/v9.x

Procedure:

1. Create MYDBQA and MYDBQB databases

a. Shutdown the MYDBQA
b. Delete all the datafiles, controlfiles, redologfiles
c. Recreate the password file at $ORACLE_HOME/dbs
orapwd file=orapwMYDBQA.ora password=xxxx
d. Edit a simple initMYDBQA.ora (see note)
e. Start sqlplus, startup the instance in nomount mode (using the simple initMYDBQA.ora)
f. Create the database
Run the script cr_mydbqa.sql (it creates only the SYSTEM tablespace).
After it succeeds, shut down and restart with the original initMYDBQA.ora.
Create some additional tablespaces and users.
g. Add rollback segment
h. Run catalog.sql and catproc.sql ( as SYS)
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql

As system

@?/sqlplus/admin/pupbld.sql

Note: keep an old copy of the initMYDBQA.ora file. First create a simple initMYDBQA.ora containing only the necessary parameters (dump directories, db_name, db_block_size, etc.). After the instance starts successfully with this simple init.ora, run the create database script. If the old initMYDBQA.ora is used directly to start the instance, the CREATE DATABASE stage fails with "ORA-02084: database name is missing a component"; not sure why this happens. After that, shut down the database and check that it can be started up and shut down normally using the old initMYDBQA.ora.

2. Preparations for replication environment:

2.1 Check init.ora parameter of MYDBQA, MYDBQB database, compared with MYDBP, MYDBB
2.2 Check tablespace requirements

3. Install the Replication Catalogue

Note: Create a new tablespace mydb_rep_data and change the default tablespace for user SYSTEM to mydb_rep_data. After installing the replication catalogue, change user SYSTEM's default tablespace back to SYSTEM.

3.1 Connect to the database as sys (mydb_rep_data is the tablespace used)
a. Change user SYSTEM’s default tablespace to MYDB_REP_DATA

ALTER USER SYSTEM DEFAULT TABLESPACE MYDB_REP_DATA;

b. Change the SYS and SYSTEM users' temporary tablespace to temp (optional; not critical for setting up replication)

alter user sys temporary tablespace temp;

alter user system temporary tablespace temp;


3.2. Execute the script CATREP.SQL once the database has started. ( as SYS)

a. execute the script catdefer.sql ( as system, not sure if necessary, it does not hurt)
@?/rdbms/admin/catdefer
b. execute catrep.sql as sys
@?/rdbms/admin/catrep

Before running CATREP.SQL, spool the output:
SQL> SPOOL output.log
Once CATREP.SQL has completed, turn off the output spooling.
SQL> SPOOL OFF
Check OUTPUT.LOG for errors before proceeding.

3.3 Confirm that CATREP.SQL ran correctly by running a query on ALL_OBJECTS
where STATUS = 'INVALID'. For example,

SELECT OWNER, OBJECT_NAME, OBJECT_TYPE FROM ALL_OBJECTS
WHERE STATUS = 'INVALID' and OWNER='SYS';
OWNER OBJECT_NAME OBJECT_TYPE
------------------------------ ------------------------------ ------------------
SYS DBMS_OBFUSCATION_TOOLKIT PACKAGE BODY
SYS DBMS_PSP PACKAGE BODY
SYS DBMS_SNAP_INTERNAL PACKAGE BODY
SYS DBMS_UTILITY PACKAGE BODY
SYS UTL_TCP PACKAGE BODY

If you find that any of the SYS or SYSTEM package bodies have compiled
incorrectly, recompile them manually.

SQL> ALTER PACKAGE <package_name> COMPILE BODY;

alter package DBMS_OBFUSCATION_TOOLKIT compile body;
alter package DBMS_PSP compile body;
alter package DBMS_SNAP_INTERNAL compile body;
alter package DBMS_UTILITY compile body;
alter package UTL_TCP compile body;

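The recompile statements above can also be generated from the spooled output of the invalid-objects query instead of typed by hand. A sketch; the sample rows and file names are hypothetical:

```shell
#!/bin/sh
# Hypothetical spooled output of the invalid-objects query
# (OWNER OBJECT_NAME OBJECT_TYPE) for illustration
cat > /tmp/invalid.lst <<'EOF'
SYS DBMS_OBFUSCATION_TOOLKIT PACKAGE BODY
SYS UTL_TCP PACKAGE BODY
EOF

# Emit an ALTER PACKAGE ... COMPILE BODY for each PACKAGE BODY row
awk '$3 == "PACKAGE" && $4 == "BODY" { printf "alter package %s.%s compile body;\n", $1, $2 }' \
    /tmp/invalid.lst > /tmp/recompile.sql

cat /tmp/recompile.sql
```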
If CATREP.SQL has run successfully, a number of replication catalog tables are created in the MYDB_REP_DATA tablespace. The database is now setup for advanced replication.

3.4 Change the default tablespace of the user ‘SYSTEM’ back to system.
alter user system default tablespace system;

Note: Oracle9i creates the replication catalog as part of the CATALOG.SQL script,
so customers running Oracle9i 9.0 / 9.2 can skip this section. However it might be useful to check the catalog is valid. ( In this case we may want to move the replication packages from SYSTEM tablespace to MYDB_REP_DATA tablespace)


4. Configure Oracle Net
Configuration files listener.ora and tnsnames.ora already exist; no changes needed.

5. Using OEM to setup a sample replication environment
( just to get a feeling about replication; clean up the objects created in this step once done. Skip this step normally!)

5.1 Create a user: nbk9lsj on both MYDBQA and MYDBQB database. Create table t1 and t2.

5.2 Using OEM to create a replication group
nbk9lsj_grp, t1, t2 are replicated objects in this group
In OEM, Click: Distributed -> Advanced Replication -> right click: Multimaster replication, then follow the wizard to set up multimaster replication environment

Click Setup Master Sites... to launch a wizard that helps you set up a multimaster replication environment.

5.3. Create Master Group.

Login as repadmin, follow the wizard.

Upon completion, the new group will appear in the navigator, and a request will be submitted to start the replication process.

From the navigator, select the master group you just created. Multimaster replication is running if the master group status is Started.

( undo everything in this step)

6. Import schema objects to the newly created MYDBQA and MYDBQB.

After importing, create public synonyms, issue grants, compile invalid objects, analyze, etc.

Schema: MYDB, DEV, GWH, MYDBRSV

/oracle/admin/MYDBQA/create/scripts/impmydb/MYDBQA/REFRESH_ALL/reload_obj.sh

Note: when importing from the MYDBP backup, many replication packages in the mydb schema remain invalid; ignore them. (Drop those replication packages as well in the future.)

7. Implement the replication using script:
( script directory: …/create/rep)


1) Create the replication administrator (edit and execute the following at /create/rep/admin as system).[1]
cr_admin.sql

Note: the following should be issued first to avoid some error:
system@MYDBQA> @?/sqlplus/admin/pupbld.sql

2) Create directories (rg_repgroupname) under the …/create/rep directory for each replication group.[2]

3) Copy the files from the rg_x directory to each rg_repgroupname directory.[2]

4) Edit the files in each of the rg_repgroupname directories to correspond to the replication group.[2]

5) Synchronize the data in all databases, if necessary.

6) Execute each of these scripts for each replication group (as repadmin):[2]
While running these, check dba_repcatlog for admin requests.
Make sure that all admin requests have completed before executing the next script.

cr_rg - after running this one for a group, change the job interval!
mydb_cr_ro2
cr_ro
mydb_cr_rs2
cr_rs
cr_ms
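Waiting for the admin requests in dba_repcatlog to drain between scripts can be wrapped in a generic poll loop. A sketch; in real use the command passed in would be a sqlplus call querying dba_repcatlog, while here a stub stands in for it so the loop can be demonstrated standalone:

```shell
#!/bin/sh
# Poll until a command produces no output, or give up after N tries.
wait_until_empty() {
    tries=$1; shift
    while [ "$tries" -gt 0 ]; do
        out=$("$@")
        [ -z "$out" ] && return 0   # no pending rows: done
        tries=$((tries - 1))
        sleep 1
    done
    return 1                        # still pending after all tries
}

# Demonstration with a stub "query" that drains after one poll
echo pending > /tmp/repcat_state
check_stub() {
    cat /tmp/repcat_state
    : > /tmp/repcat_state           # next poll sees it empty
}

wait_until_empty 5 check_stub && echo "admin requests complete"
```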

Note:
(1) mydb_cr_ro2 and mydb_cr_rs2 are created to replace the original mydb_cr_ro and mydb_cr_rs. They use a db link to obtain the replicated table names from MYDBP.

As repadmin, create the db link as follows:
SQL> create database link MYDBP.WORLD
connect to system identified by xxx using 'MYDBP.WORLD';

(2) The script doitall.sql contains all the above scripts for convenience. (It is not guaranteed that the rep group will be in normal status after applying this script; the status could still be quiesced. But using OEM to adjust after running doitall is usually easy.)

(3) Using OEM to change the interval of the dbms_repcat.do_deferred_repcat_admin job.
If you want to use command line, follow these steps:
a. Login as repadmin
b. Find the job id for the replication group, e.g.
SQL> select job from dba_jobs where what like '%RG_MYDB_CF_AC%';

JOB
----------
5


c. Change the interval

BEGIN DBMS_JOB.CHANGE(
job => 5,
next_date => sysdate,
interval => '/*10:Secs*/ sysdate + 10/(60*60*24)',
what => NULL);
END;
/

d. Run the job (after the replication has been created):
execute dbms_job.run(5);

7) Schedule the purge (as repadmin).[1]
sch_purge
8) Schedule the push (as repadmin).[1]
sch_push

(Note: 7,8 only need to execute once for the first group)

9) Resume master activity (as repadmin).[2]
rsum_master_act

10) Schedule jobs to monitor the dba jobs and deferred transactions.[1]

[1]= Execute the step on all sites.
[2] = Execute the step on the master definition site only.

Friday, May 18, 2007

RMAN -- Restore from cold backup procedure:

Restore from cold backup procedure:
===================================

1. Obtain the datafile, redo log file, control file info

select name from v$datafile;
select member from v$logfile;
select name from v$controlfile;

2. Shut down the database and start up in nomount mode (need to restore
the controlfile)

3. Remove the current datafiles, redo log files, and control files found
in step 1:

rm /ora02/oradata/GEN3Q/gen3q_system01.dbf
rm /ora02/oradata/GEN3Q/GEN3Q_audit_rdbs01.dbf
rm /ora01/oradata/GEN3Q/gen_data_01.dbf
rm /ora02/oradata/GEN3Q/gen_data_02.dbf
rm /ora01/oradata/GEN3Q/gen_data_03.dbf
rm /ora02/oradata/GEN3Q/gen_data_04.dbf
rm /ora01/oradata/GEN3Q/gen_data_05.dbf
rm /ora02/oradata/GEN3Q/gen_data_06.dbf
rm /ora01/oradata/GEN3Q/gen_data_07.dbf
rm /ora02/oradata/GEN3Q/gen_data_08.dbf
rm /ora01/oradata/GEN3Q/gen_data_09.dbf
rm /ora02/oradata/GEN3Q/gen_data_10.dbf
rm /ora01/oradata/GEN3Q/gen_data_11.dbf
rm /ora01/oradata/GEN3Q/gen_data_12.dbf
rm /ora01/oradata/GEN3Q/gen_indx_01.dbf
rm /ora02/oradata/GEN3Q/gen_indx_02.dbf
rm /ora01/oradata/GEN3Q/gen_indx_03.dbf
rm /ora02/oradata/GEN3Q/gen_indx_04.dbf
rm /ora01/oradata/GEN3Q/gen_indx_05.dbf
rm /ora02/oradata/GEN3Q/gen_indx_06.dbf
rm /ora01/oradata/GEN3Q/gen_indx_07.dbf
rm /ora02/oradata/GEN3Q/gen_indx_08.dbf
rm /ora01/oradata/GEN3Q/genp_indx01.dbf
rm /ora02/oradata/GEN3Q/genp_rbs01.dbf
rm /ora01/oradata/GEN3Q/genp_rbs02.dbf
rm /ora01/oradata/GEN3Q/genp_temp01.dbf
rm /ora02/oradata/GEN3Q/genp_temp02.dbf
rm /ora01/oradata/GEN3Q/genp_tools01.dbf
rm /ora02/oradata/GEN3Q/genp_users01.dbf
rm /ora01/oradata/GEN3Q/gen_rep_data_01.dbf
rm /ora02/oradata/GEN3Q/gen_rep_data_02.dbf
rm /ora01/oradata/GEN3Q/gen_rep_data_03.dbf
rm /ora02/oradata/GEN3Q/gen_rep_data_04.dbf
rm /ora02/oradata/GEN3Q/foglight_01.dbf
rm /ora03/oradata/GEN3Q/genp_temp03.dbf
rm /ora01/oraredo/GEN3Q/gen3q_redo01a.log
rm /ora02/oraredo/GEN3Q/gen3q_redo01b.log
rm /ora01/oraredo/GEN3Q/gen3q_redo02a.log
rm /ora02/oraredo/GEN3Q/gen3q_redo02b.log
rm /ora01/oraredo/GEN3Q/gen3q_redo03a.log
rm /ora02/oraredo/GEN3Q/gen3q_redo03b.log
rm /ora01/oraredo/GEN3Q/gen3q_redo04a.log
rm /ora02/oraredo/GEN3Q/gen3q_redo04b.log
rm /ora01/oraredo/GEN3Q/gen3q_redo05a.log
rm /ora02/oraredo/GEN3Q/gen3q_redo05b.log
rm /ora01/oraredo/GEN3Q/gen3q_redo06a.log
rm /ora02/oraredo/GEN3Q/gen3q_redo06b.log
rm /ora01/oradata/GEN3Q/gen3q_control01.ctl
rm /ora02/oradata/GEN3Q/gen3q_control02.ctl
rm /ora01/oradata/GEN3Q/gen3q_control03.ctl


4. Connect to target database and catalog database
set ORACLE env
rman target / catalog rman/xxx@cataprd6

5. Reset database to the correct incarnation if necessary
(not in this case, we have only one incarnation)

RMAN> list incarnations;
RMAN> reset database to incarnation <inc_key>;

6. Run the following command:

run {
# set until time if necessary
# set until time '06-Mar-2007 23:00:00';
allocate channel d1 type disk;
allocate channel d2 type disk;
restore controlfile;
# mount database after restoring control file
alter database mount;
restore database;
alter database open resetlogs;
}

7. Add temp files if needed

====== Actual Screen Output on Mar 31, 2007 ===========

RMAN>
2> run {
3> allocate channel d1 type disk;
4> allocate channel d2 type disk;
5> restore controlfile;
6> alter database mount;
7> restore database;
8> alter database open resetlogs;
9> }
RMAN-03022: compiling command: allocate
RMAN-03023: executing command: allocate
RMAN-08030: allocated channel: d1
RMAN-08500: channel d1: sid=13 devtype=DISK

RMAN-03022: compiling command: allocate
RMAN-03023: executing command: allocate
RMAN-08030: allocated channel: d2
RMAN-08500: channel d2: sid=14 devtype=DISK

RMAN-03022: compiling command: restore

RMAN-03022: compiling command: IRESTORE
RMAN-03023: executing command: IRESTORE
RMAN-08016: channel d1: starting datafile backupset restore
RMAN-08502: set_count=4 set_stamp=618620009 creation_time=30-MAR-2007
22:53:29
RMAN-08021: channel d1: restoring controlfile
RMAN-08505: output filename=/ora01/oradata/GEN3Q/gen3q_control01.ctl
RMAN-08023: channel d1: restored backup piece 1
RMAN-08511: piece
handle=/oracle/admin/GEN3Q/backup/cold_GEN3Q_1_4_task6mar30.dbf
tag=TASK6MAR30 params=NULL
RMAN-08024: channel d1: restore complete
RMAN-08058: replicating controlfile
RMAN-08506: input filename=/ora01/oradata/GEN3Q/gen3q_control01.ctl
RMAN-08505: output filename=/ora02/oradata/GEN3Q/gen3q_control02.ctl
RMAN-08505: output filename=/ora01/oradata/GEN3Q/gen3q_control03.ctl

RMAN-03022: compiling command: alter db
RMAN-06199: database mounted

RMAN-03022: compiling command: restore
RMAN-03025: performing implicit partial resync of recovery catalog
RMAN-03023: executing command: partial resync
RMAN-08003: starting partial resync of recovery catalog
RMAN-08005: partial resync complete

RMAN-03022: compiling command: IRESTORE
RMAN-03023: executing command: IRESTORE
RMAN-08016: channel d2: starting datafile backupset restore
RMAN-08502: set_count=2 set_stamp=618619679 creation_time=30-MAR-2007
22:47:59
RMAN-08089: channel d2: specifying datafile(s) to restore from backup
set
RMAN-08523: restoring datafile 00003 to
/ora01/oradata/GEN3Q/gen_data_01.dbf
RMAN-08523: restoring datafile 00004 to
/ora02/oradata/GEN3Q/gen_data_02.dbf
RMAN-08523: restoring datafile 00005 to
/ora01/oradata/GEN3Q/gen_data_03.dbf
RMAN-08016: channel d1: starting datafile backupset restore
RMAN-08502: set_count=1 set_stamp=618619679 creation_time=30-MAR-2007
22:47:59
RMAN-08089: channel d1: specifying datafile(s) to restore from backup
set
RMAN-08523: restoring datafile 00002 to
/ora02/oradata/GEN3Q/GEN3Q_audit_rdbs01.dbf
RMAN-08523: restoring datafile 00012 to
/ora02/oradata/GEN3Q/gen_data_10.dbf
RMAN-08523: restoring datafile 00013 to
/ora01/oradata/GEN3Q/gen_data_11.dbf
RMAN-08523: restoring datafile 00014 to
/ora01/oradata/GEN3Q/gen_data_12.dbf
RMAN-08523: restoring datafile 00034 to
/ora02/oradata/GEN3Q/foglight_01.dbf
RMAN-08023: channel d2: restored backup piece 1
RMAN-08511: piece
handle=/oracle/admin/GEN3Q/backup/cold_GEN3Q_1_2_task6mar30.dbf
tag=TASK6MAR30 params=NULL
RMAN-08024: channel d2: restore complete
RMAN-08016: channel d2: starting datafile backupset restore
RMAN-08502: set_count=3 set_stamp=618619944 creation_time=30-MAR-2007
22:52:24
RMAN-08089: channel d2: specifying datafile(s) to restore from backup
set
RMAN-08523: restoring datafile 00006 to
/ora02/oradata/GEN3Q/gen_data_04.dbf
RMAN-08523: restoring datafile 00007 to
/ora01/oradata/GEN3Q/gen_data_05.dbf
RMAN-08523: restoring datafile 00021 to
/ora01/oradata/GEN3Q/gen_indx_07.dbf
RMAN-08023: channel d1: restored backup piece 1
RMAN-08511: piece
handle=/oracle/admin/GEN3Q/backup/cold_GEN3Q_1_1_task6mar30.dbf
tag=TASK6MAR30 params=NULL
RMAN-08024: channel d1: restore complete
RMAN-08016: channel d1: starting datafile backupset restore
RMAN-08502: set_count=4 set_stamp=618620009 creation_time=30-MAR-2007
22:53:29
RMAN-08089: channel d1: specifying datafile(s) to restore from backup
set
RMAN-08523: restoring datafile 00001 to
/ora02/oradata/GEN3Q/gen3q_system01.dbf
RMAN-08523: restoring datafile 00008 to
/ora02/oradata/GEN3Q/gen_data_06.dbf
RMAN-08523: restoring datafile 00010 to
/ora02/oradata/GEN3Q/gen_data_08.dbf
RMAN-08523: restoring datafile 00016 to
/ora02/oradata/GEN3Q/gen_indx_02.dbf
RMAN-08523: restoring datafile 00018 to
/ora02/oradata/GEN3Q/gen_indx_04.dbf
RMAN-08523: restoring datafile 00020 to
/ora02/oradata/GEN3Q/gen_indx_06.dbf
RMAN-08523: restoring datafile 00022 to
/ora02/oradata/GEN3Q/gen_indx_08.dbf
RMAN-08523: restoring datafile 00023 to
/ora01/oradata/GEN3Q/genp_indx01.dbf
RMAN-08523: restoring datafile 00024 to
/ora02/oradata/GEN3Q/genp_rbs01.dbf
RMAN-08523: restoring datafile 00025 to
/ora01/oradata/GEN3Q/genp_rbs02.dbf
RMAN-08523: restoring datafile 00027 to
/ora02/oradata/GEN3Q/genp_temp02.dbf
RMAN-08523: restoring datafile 00029 to
/ora02/oradata/GEN3Q/genp_users01.dbf
RMAN-08523: restoring datafile 00031 to
/ora02/oradata/GEN3Q/gen_rep_data_02.dbf
RMAN-08523: restoring datafile 00033 to
/ora02/oradata/GEN3Q/gen_rep_data_04.dbf
RMAN-08023: channel d2: restored backup piece 1
RMAN-08511: piece
handle=/oracle/admin/GEN3Q/backup/cold_GEN3Q_1_3_task6mar30.dbf
tag=TASK6MAR30 params=NULL
RMAN-08024: channel d2: restore complete
RMAN-08016: channel d2: starting datafile backupset restore
RMAN-08502: set_count=5 set_stamp=618620185 creation_time=30-MAR-2007
22:56:25
RMAN-08089: channel d2: specifying datafile(s) to restore from backup
set
RMAN-08523: restoring datafile 00009 to
/ora01/oradata/GEN3Q/gen_data_07.dbf
RMAN-08523: restoring datafile 00026 to
/ora01/oradata/GEN3Q/genp_temp01.dbf
RMAN-08023: channel d2: restored backup piece 1
RMAN-08511: piece
handle=/oracle/admin/GEN3Q/backup/cold_GEN3Q_1_5_task6mar30.dbf
tag=TASK6MAR30 params=NULL
RMAN-08024: channel d2: restore complete
RMAN-08016: channel d2: starting datafile backupset restore
RMAN-08502: set_count=6 set_stamp=618620411 creation_time=30-MAR-2007
23:00:11
RMAN-08089: channel d2: specifying datafile(s) to restore from backup
set
RMAN-08523: restoring datafile 00011 to
/ora01/oradata/GEN3Q/gen_data_09.dbf
RMAN-08523: restoring datafile 00015 to
/ora01/oradata/GEN3Q/gen_indx_01.dbf
RMAN-08523: restoring datafile 00017 to
/ora01/oradata/GEN3Q/gen_indx_03.dbf
RMAN-08523: restoring datafile 00019 to
/ora01/oradata/GEN3Q/gen_indx_05.dbf
RMAN-08523: restoring datafile 00028 to
/ora01/oradata/GEN3Q/genp_tools01.dbf
RMAN-08523: restoring datafile 00030 to
/ora01/oradata/GEN3Q/gen_rep_data_01.dbf
RMAN-08523: restoring datafile 00032 to
/ora01/oradata/GEN3Q/gen_rep_data_03.dbf
RMAN-08523: restoring datafile 00035 to
/ora03/oradata/GEN3Q/genp_temp03.dbf
RMAN-08023: channel d1: restored backup piece 1
RMAN-08511: piece
handle=/oracle/admin/GEN3Q/backup/cold_GEN3Q_1_4_task6mar30.dbf
tag=TASK6MAR30 params=NULL
RMAN-08024: channel d1: restore complete
RMAN-08023: channel d2: restored backup piece 1
RMAN-08511: piece
handle=/oracle/admin/GEN3Q/backup/cold_GEN3Q_1_6_task6mar30.dbf
tag=TASK6MAR30 params=NULL
RMAN-08024: channel d2: restore complete
RMAN-03023: executing command: partial resync
RMAN-08003: starting partial resync of recovery catalog
RMAN-08005: partial resync complete

RMAN-03022: compiling command: alter db
RMAN-06400: database opened
RMAN-03023: executing command: alter db
RMAN-08006: database registered in recovery catalog
RMAN-03023: executing command: full resync
RMAN-08002: starting full resync of recovery catalog
RMAN-08004: full resync complete
RMAN-08031: released channel: d1
RMAN-08031: released channel: d2

RMAN>
RMAN> **end-of-file**

RMAN> exit


Recovery Manager complete.
tx2qdfsdb1> /oracle/admin/GEN3Q/rman/shell [GEN3Q] sqlplus system/xxx

SQL*Plus: Release 8.1.7.0.0 - Production on Sat Mar 31 12:22:46 2007

(c) Copyright 2000 Oracle Corporation. All rights reserved.


Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production

system@GEN3Q> alter system set fixed_date ='2007-03-30 11:59:00';
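FIXED_DATE freezes the value SYSDATE returns, which is useful after restoring a cold backup so that date-driven logic still sees the pre-restore date. A quick check, sketched here (the output is not part of the original log):

```sql
-- Verify SYSDATE is now frozen at the fixed time
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') FROM dual;

-- Return SYSDATE to normal operation later with:
-- ALTER SYSTEM SET FIXED_DATE = NONE;
```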

Wednesday, May 16, 2007

EXP/IMP to modify storage properties

EXP/IMP to modify storage properties
====================================


1. Current storage properties of t1

gen@GENQB> select table_name, initial_extent, next_extent from
user_tables where table_name='T1';

TABLE_NAME INITIAL_EXTENT NEXT_EXTENT
------------------------------ -------------- -----------
T1 131072 4194304

2. Export the table:

tx2ddfsdb1> /oracle/admin/GENQB/create [GENQB] exp system/xxx
file=t1.dmp tables=GEN.T1


3. Drop the table

4. Re-create table t1 with modified storage properties

CREATE TABLE GEN.T1
(
OWNER VARCHAR2(30) NOT NULL,
OBJECT_NAME VARCHAR2(30) NOT NULL,
SUBOBJECT_NAME VARCHAR2(30) NULL,
OBJECT_ID NUMBER NOT NULL,
DATA_OBJECT_ID NUMBER NULL,
OBJECT_TYPE VARCHAR2(18) NULL,
CREATED DATE NOT NULL,
LAST_DDL_TIME DATE NOT NULL,
TIMESTAMP VARCHAR2(19) NULL,
STATUS VARCHAR2(7) NULL,
TEMPORARY VARCHAR2(1) NULL,
GENERATED VARCHAR2(1) NULL,
SECONDARY VARCHAR2(1) NULL
)
TABLESPACE GEN_DATA
LOGGING
PCTFREE 10
PCTUSED 40
INITRANS 1
MAXTRANS 255
STORAGE(INITIAL 1M
NEXT 2M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT)
NOPARALLEL
NOCACHE
/

gen@GENQB> select table_name, initial_extent, next_extent from
user_tables where table_name='T1';

TABLE_NAME INITIAL_EXTENT NEXT_EXTENT
------------------------------ -------------- -----------
T1 1048576 2097152

5. Import the data:

imp system/xxx file=t1.dmp tables=T1 fromuser=GEN touser=GEN ignore=y


gen@GENQB> select table_name, initial_extent, next_extent from
user_tables where table_name='T1';

TABLE_NAME INITIAL_EXTENT NEXT_EXTENT
------------------------------ -------------- -----------
T1 1048576 2097152
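For a dictionary-managed tablespace, the same storage change can also be sketched without exp/imp using ALTER TABLE ... MOVE; this assumes downtime is acceptable and that indexes are rebuilt afterwards, since a move marks them UNUSABLE. (On a locally managed tablespace with AUTOALLOCATE or UNIFORM extents, the INITIAL/NEXT settings are largely ignored anyway.)

```sql
-- Sketch: change storage in place instead of exp/drop/create/imp.
ALTER TABLE gen.t1 MOVE
  STORAGE (INITIAL 1M NEXT 2M);

-- Indexes on T1 become UNUSABLE after the move; rebuild each one
-- (index name below is hypothetical):
-- ALTER INDEX gen.t1_pk REBUILD;
```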

Sunday, May 13, 2007

Oracle Multi-Master Replication - Remove Replication Support


1. To remove replication, perform the following on the Master Definition Site:

CONNECT repadmin/repadmin@tsh1
-- Stop replication
EXECUTE Dbms_Repcat.Suspend_Master_Activity(gname=>'MYREPGRP');

-- Delete replication groups
EXECUTE Dbms_Repcat.Drop_Master_Repobject('SCOTT', 'EMP', 'TABLE');
EXECUTE Dbms_Repcat.Remove_Master_Databases('MYREPGRP', 'TSH2.WORLD');

2. Next do the following on all Master Sites:
CONNECT repadmin/repadmin@tsh1

-- Remove private database links to other master databases
EXECUTE Dbms_Repcat.Drop_Master_Repgroup('MYREPGRP');
DROP DATABASE LINK TSH2.WORLD;

-- Remove any leftover jobs (see DBA_JOBS for job numbers)
EXECUTE Dbms_Job.Remove(62);
EXECUTE Dbms_Job.Remove(63);

CONNECT sys@tsh1

-- Remove the REPADMIN user
EXECUTE Dbms_Defer_Sys.Unregister_Propagator(username=>'REPADMIN');
EXECUTE
Dbms_Repcat_Admin.Revoke_Admin_Any_Schema(username=>'REPADMIN');
DROP USER repadmin CASCADE;

-- Drop public database links to other master databases
DROP PUBLIC DATABASE LINK TSH2.WORLD;
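After the cleanup, it is worth confirming that no replication metadata remains. A sketch of the checks, run as a DBA (the expected "no rows" results are assumptions, not captured output):

```sql
-- Confirm replication support is gone on each former master site.
SELECT gname FROM dba_repgroup;    -- expect no rows
SELECT dblink FROM dba_repsites;   -- expect no rows
```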

Monday, May 07, 2007

SYSDBA and v$pwfile_user

This link explains how to check which users have the SYSDBA privilege:

http://www.adp-gmbh.ch/ora/admin/password_file.html#adding_user

SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP
------------------------------ ----- -----
INTERNAL                       TRUE  TRUE
SYS                            TRUE  TRUE

SQL> grant sysdba to system;

Grant succeeded.

SQL> select * from v$pwfile_users;

USERNAME                       SYSDB SYSOP
------------------------------ ----- -----
INTERNAL                       TRUE  TRUE
SYS                            TRUE  TRUE
SYSTEM                         TRUE  FALSE
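The counterpart operations, sketched with hedged assumptions (the password filename follows the usual convention but is hypothetical for this instance):

```sql
-- Remove a user's entry from the password file:
REVOKE SYSDBA FROM system;

-- The password file itself is created with the orapwd utility and enabled
-- by REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE in the init.ora, e.g.:
-- $ orapwd file=$ORACLE_HOME/dbs/orapwGEN3Q password=xxx entries=5
```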