Thursday, August 29, 2013

Practice Migration Through Transportable Tablespaces from non-ASM to ASM

References:

http://relcon.typepad.com/relatively_speaking/2009/10/migrating-an-oracle-database-from-solaris-to-linux.html

http://www.oracle.com/technetwork/database/features/availability/maa-wp-11g-platformmigrationtts-129269.pdf


The source database runs on Solaris using an OS file system. The target database runs on Linux using ASM.

Steps:

1. Check that platforms are compatible.

Run this query on the database you want to migrate from and it will list the platforms you can migrate to:

select * from V$TRANSPORTABLE_PLATFORM;


Check the value of ENDIAN_FORMAT. If the source and target endian formats differ, the datafiles will need to be converted; for example, going from Solaris (SPARC, big endian) to Linux x86-64 (little endian) requires conversion.
PLATFORM_ID PLATFORM_NAME                        ENDIAN_FORMAT
----------- ------------------------------------ --------------
          6 AIX-Based Systems (64-bit)           Big
         16 Apple Mac OS                         Big
         21 Apple Mac OS (x86-64)                Little
         19 HP IA Open VMS                       Little
         15 HP Open VMS                          Little
          5 HP Tru64 UNIX                        Little
          3 HP-UX (64-bit)                       Big
          4 HP-UX IA (64-bit)                    Big
         18 IBM Power Based Linux                Big
          9 IBM zSeries Based Linux              Big
         10 Linux IA (32-bit)                    Little
         11 Linux IA (64-bit)                    Little
         13 Linux x86 64-bit                     Little
          7 Microsoft Windows IA (32-bit)        Little
          8 Microsoft Windows IA (64-bit)        Little
         12 Microsoft Windows x86 64-bit         Little
         17 Solaris Operating System (x86)       Little
         20 Solaris Operating System (x86-64)    Little
          1 Solaris[tm] OE (32-bit)              Big
          2 Solaris[tm] OE (64-bit)              Big
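
As a byte-level illustration of why conversion is needed between big- and little-endian platforms (just a sketch of byte ordering, not Oracle's actual datafile format):

```python
import struct

# The same 32-bit integer laid out in big-endian (SPARC Solaris)
# and little-endian (x86 Linux) byte order.
value = 0x01020304
big = struct.pack(">I", value)     # byte layout on a big-endian platform
little = struct.pack("<I", value)  # byte layout on a little-endian platform

print(big.hex())     # 01020304
print(little.hex())  # 04030201

# Reading big-endian bytes with little-endian rules yields a different
# number, which is why datafiles must be converted when endianness differs.
print(struct.unpack("<I", big)[0] == value)  # False
```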
   

2. You can only migrate tablespaces to a database which uses the same character set and national character set, so check whether the target matches if you are migrating to an existing database, or make sure to use the same values if you are building a new one.
 srcdb> SELECT * FROM nls_database_parameters order by 1;

 PARAMETER                      VALUE
 ------------------------------ ----------------------------------------
 NLS_CALENDAR                   GREGORIAN
 NLS_CHARACTERSET               WE8ISO8859P1
 NLS_COMP                       BINARY
 NLS_CSMIG_SCHEMA_VERSION       2
 NLS_CURRENCY                   $
 NLS_DATE_FORMAT                DD-MON-RR
 NLS_DATE_LANGUAGE              AMERICAN
 NLS_DUAL_CURRENCY              $
 NLS_ISO_CURRENCY               AMERICA
 NLS_LANGUAGE                   AMERICAN
 NLS_LENGTH_SEMANTICS           BYTE
 NLS_NCHAR_CHARACTERSET         AL16UTF16
 NLS_NCHAR_CONV_EXCP            FALSE
 NLS_NUMERIC_CHARACTERS         .,
 NLS_RDBMS_VERSION              11.2.0.3.0
 NLS_SORT                       BINARY
 NLS_TERRITORY                  AMERICA
 NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
 NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
 NLS_TIME_FORMAT                HH.MI.SSXFF AM
 NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR

 21 rows selected.



 tgtdb> SELECT * FROM nls_database_parameters order by 1;
  
 PARAMETER                      VALUE
 ------------------------------ ----------------------------------------
 NLS_CALENDAR                   GREGORIAN
 NLS_CHARACTERSET               WE8MSWIN1252
 NLS_COMP                       BINARY
 NLS_CURRENCY                   $
 NLS_DATE_FORMAT                DD-MON-RR
 NLS_DATE_LANGUAGE              AMERICAN
 NLS_DUAL_CURRENCY              $
 NLS_ISO_CURRENCY               AMERICA
 NLS_LANGUAGE                   AMERICAN
 NLS_LENGTH_SEMANTICS           BYTE
 NLS_NCHAR_CHARACTERSET         AL16UTF16
 NLS_NCHAR_CONV_EXCP            FALSE
 NLS_NUMERIC_CHARACTERS         .,
 NLS_RDBMS_VERSION              11.2.0.3.0
 NLS_SORT                       BINARY
 NLS_TERRITORY                  AMERICA
 NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
 NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
 NLS_TIME_FORMAT                HH.MI.SSXFF AM
 NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
  
 20 rows selected.

WE8MSWIN1252 is a superset of WE8ISO8859P1:
Difference between WE8MSWIN1252 and WE8ISO8859P1 characterset (Doc ID 341676.1)

Some problems reported when going from WE8ISO8859P1 to WE8MSWIN1252: https://forums.oracle.com/message/9885182#9885182


3. Assuming you are good to go, determine which tablespaces hold the data you want to transport, then double-check that the set is self-contained. Execute this PL/SQL procedure against the tablespaces you plan to migrate:
 sys@srcdb> execute sys.dbms_tts.transport_set_check('tbs1_data,tbs1_indx',true);
  
 PL/SQL procedure successfully completed.
  
 sys@srcdb> select * from sys.transport_set_violations;
  
 no rows selected

    
   Note: any violations listed here must be resolved before the tablespaces can be transported (a typical example is an index inside the set whose base table lives in a tablespace outside the set).

4. The tablespaces need to be in READ ONLY mode in order to successfully run a transportable tablespace export:

alter tablespace tbs1_data read only;

alter tablespace tbs1_indx read only;


5. Export the metadata.
 
$ exp userid=\'/ as sysdba\'  file=tbs_exp.dmp log=tba_exp.log transport_tablespace=y tablespaces=tbs1_data,tbs1_indx 
 
Export: Release 11.2.0.3.0 - Production on Thu Aug 29 14:42:18 2013
 
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
 
 
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Oracle Label Security, Oracle Database Vault and Real Application Testing options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace TBS1_DATA ...
. exporting cluster definitions
. exporting table definitions
. . exporting table                             EC
EXP-00091: Exporting questionable statistics.
For tablespace TBS1_INDX ...
. exporting cluster definitions
. exporting table definitions
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully with warnings.

Note: the EXP-00091 warning means the optimizer statistics were gathered under NLS settings that differ from the export session's; it is harmless here, and statistics can be re-gathered after the import if desired.


6. Copy the metadata export, the datafiles, and any schema exports to the new host:

cp tbs1_data_01.dbf /oradbbackup02_NFS/landingpad/dump
cp tbs1_indx_01.dbf /oradbbackup02_NFS/landingpad/dump
cp tbs_exp.dmp /oradbbackup02_NFS/landingpad/dump

Note: /oradbbackup02_NFS is the NFS mount point on the source host; the same share is mounted on the target host as /oradbbackup02.

7. Convert the datafiles to the new platform format using RMAN. In this example the converted datafiles are written into an ASM disk group:

rman target /

convert datafile 
  '/oradbbackup02/landingpad/dump/tbs1_data_01.dbf', 
  '/oradbbackup02/landingpad/dump/tbs1_indx_01.dbf'
  from platform 'Solaris[tm] OE (64-bit)' 
  format '+DATA_05' 
  parallelism 4;


Starting conversion at target at 29-AUG-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=118 instance=tgtdb2 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=124 instance=tgtdb2 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=137 instance=tgtdb2 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=151 instance=tgtdb2 device type=DISK
channel ORA_DISK_1: starting datafile conversion
input file name=/oradbbackup02/landingpad/dump/tbs1_data_01.dbf
channel ORA_DISK_2: starting datafile conversion
input file name=/oradbbackup02/landingpad/dump/tbs1_indx_01.dbf
converted datafile=+DATA_05/tgtdb/datafile/tbs1_data.316.824728129
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:04
converted datafile=+DATA_05/tgtdb/datafile/tbs1_indx.317.824728131
channel ORA_DISK_2: datafile conversion complete, elapsed time: 00:00:04
Finished conversion at target at 29-AUG-13
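
As an aside, RMAN can also do the conversion on the source host before the copy, while the tablespaces are still read only, using CONVERT TABLESPACE ... TO PLATFORM; a sketch (the destination format string below is hypothetical):

```
rman target /

convert tablespace tbs1_data, tbs1_indx
  to platform 'Linux x86 64-bit'
  format '/oradbbackup02_NFS/landingpad/dump/%U';
```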



8. Plug the tablespaces into the target database; check the ASM instance (or the RMAN output above) for the names of the converted datafiles:



  -- imp.par
file=tbs_exp.dmp
log=tts.log
transport_tablespace=Y
datafiles= '+DATA_05/tgtdb/datafile/tbs1_data.316.824728129'
,'+DATA_05/tgtdb/datafile/tbs1_indx.317.824728131'


$ imp userid=\'/ as sysdba\' parfile=imp.par


 
Import: Release 11.2.0.3.0 - Production on Thu Aug 29 12:59:02 2013
 
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
 
 
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Tes
 
Export file created by EXPORT:V11.02.00 via conventional path
About to import transportable tablespace(s) metadata...
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8MSWIN1252 character set (possible charset conversion)
. importing SYS's objects into SYS
. importing SYS's objects into SYS
. importing OPS$ORACLE's objects into OPS$ORACLE
. . importing table                           "EC"
. importing SYS's objects into SYS
Import terminated successfully without warnings.

9. Set the imported tablespaces back to read write:

alter tablespace tbs1_data read write;
alter tablespace tbs1_indx read write;
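
A quick sanity check that the plugged-in tablespaces are online and writable (a simple dictionary query; output omitted here):

```sql
select tablespace_name, status
  from dba_tablespaces
 where tablespace_name in ('TBS1_DATA', 'TBS1_INDX');
```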

Wednesday, August 28, 2013

Get the row count from index fast full scan

This post shows how the parallel_index hint can enable an index fast full scan access path to get the row count of a large table faster. BO_ID is the primary key column of the BO_NBQQJOH table; since a primary key column is NOT NULL, count(bo_id) can be satisfied entirely from the index.

The following summarizes the results (the two counts differ slightly because the table was being modified between the runs):

parallel full table scan:       14m11s, 8.8M consistent gets

parallel index fast full scan:   3m49s, 2.8M consistent gets


SQL> select /*+ parallel (a 6) */ count(bo_id) from ttq_dqg.bo_nbqqjoh_v a;

COUNT(BO_ID)
------------
   917384760

1 row selected.

Elapsed: 00:14:11.13

Execution Plan
----------------------------------------------------------
Plan hash value: 1529574786

------------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name       | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |            |     1 |     7 |   341K  (1)| 01:08:17 |        |      |            |
|   1 |  SORT AGGREGATE        |            |     1 |     7 |            |          |        |      |            |
|   2 |   PX COORDINATOR       |            |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000   |     1 |     7 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |            |     1 |     7 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |            |   912M|  6088M|   341K  (1)| 01:08:17 |  Q1,00 | PCWC |            |
|   6 |       TABLE ACCESS FULL| BO_NBQQJOH |   912M|  6088M|   341K  (1)| 01:08:17 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        158  recursive calls
          0  db block gets
    8846234  consistent gets
    8403973  physical reads
          0  redo size
        218  bytes sent via SQL*Net to client
        252  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          8  sorts (memory)
          0  sorts (disk)
          1  rows processed




SQL> select /*+ parallel_index ( a, BO_NBQQJOH_NEW_PK, 6) */ count(bo_id) from ttq_dqg.bo_nbqqjoh_v a;

COUNT(BO_ID)
------------
   917391460

1 row selected.

Elapsed: 00:03:49.60

Execution Plan
----------------------------------------------------------
Plan hash value: 1771473710

----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name              | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                   |     1 |     7 |   487K  (3)| 01:37:33 |        |      |            |
|   1 |  SORT AGGREGATE           |                   |     1 |     7 |            |          |        |      |            |
|   2 |   PX COORDINATOR          |                   |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)    | :TQ10000          |     1 |     7 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE        |                   |     1 |     7 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR    |                   |   912M|  6088M|   487K  (3)| 01:37:33 |  Q1,00 | PCWC |            |
|   6 |       INDEX FAST FULL SCAN| BO_NBQQJOH_NEW_PK |   912M|  6088M|   487K  (3)| 01:37:33 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
         30  recursive calls
          0  db block gets
    2854834  consistent gets
    2199288  physical reads
          0  redo size
        237  bytes sent via SQL*Net to client
        252  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>
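
The saving largely reflects the index segment being much smaller than the table segment. A quick way to compare the two (assuming the segments belong to the TTQ_DQG schema, as the view name suggests):

```sql
select segment_name, segment_type,
       round(bytes/1024/1024/1024, 1) as gb
  from dba_segments
 where owner = 'TTQ_DQG'
   and segment_name in ('BO_NBQQJOH', 'BO_NBQQJOH_NEW_PK');
```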