Wednesday, December 16, 2015

EXADATA Smart Scan Overview

• Table and index scans: scans are performed inside Exadata storage rather than transporting all data to the database server.
• Predicate filtering: only requested rows are returned to the database server rather than all rows in a table.
• Column filtering: only requested columns are returned to the database server rather than all table columns.
• Join filtering: join processing using Bloom filters is offloaded to the Exadata storage servers.
• Smart Scan can also be applied to encrypted and compressed data.

Smart Scan Requirements:
Smart Scan optimization is a run-time decision. Smart Scan is not governed by the optimizer, but it is influenced by the results of query optimization.

• Query-specific requirements
---- Smart Scan is only possible for full table scans and index fast full scans.
---- Can be used only for direct path reads. (A direct read is a physical I/O from a data file that bypasses the buffer cache and reads the data block directly into process-private memory.)
• Additional general requirements
---- Smart Scan must be enabled within the database. The CELL_OFFLOAD_PROCESSING parameter controls Smart Scan; its default value is TRUE, and it can be changed dynamically using ALTER SESSION or ALTER SYSTEM.
---- Each segment being scanned must be on a disk group that is completely stored on Exadata cells.
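
As a quick sketch of the CELL_OFFLOAD_PROCESSING setting described above (session- and system-level forms):

SQL> -- Disable Smart Scan for this session only, e.g. to compare timings
SQL> ALTER SESSION SET cell_offload_processing = FALSE;

SQL> -- Re-enable it (TRUE is the default)
SQL> ALTER SESSION SET cell_offload_processing = TRUE;

SQL> -- Change it database-wide without a restart
SQL> ALTER SYSTEM SET cell_offload_processing = TRUE SCOPE=BOTH;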


Situations Preventing Smart Scan

• Scan on a clustered table
• Scan on an index-organized table
• Fast full scan on a compressed index
• Fast full scan on a reverse key index
• More than 255 columns referenced by the query
• Optimizer wants the scan to return rows in ROWID order
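
To check whether Smart Scan actually took place for a workload, the cell offload statistics in V$SYSSTAT can be compared. This is only a rough sanity check, not a tuning procedure; statistic names are as they appear in 11gR2/12c:

SQL> SELECT name, ROUND(value/1024/1024) AS mb
     FROM   v$sysstat
     WHERE  name IN ('cell physical IO bytes eligible for predicate offload',
                     'cell physical IO interconnect bytes returned by smart scan');

If the "returned by smart scan" value is much smaller than the "eligible for predicate offload" value, offload filtering is doing useful work.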

Thursday, October 29, 2015

Upgrading from Oracle 11gR2 to Oracle 12c using Full Transportable Export/Import

********************************************************************************
**************** Oracle Database 12c: Full Transportable Export/Import *********
********************************************************************************

-- Full transportable export is available starting with Oracle Database 11g Release 2 (11.2.0.3). Full transportable import is available starting with Oracle Database 12c.

-- The Full Transportable Export and Import feature combines the best features of TTS/XTTS and Data Pump. 

-- Full transportable export/import moves all of the system, user, and application metadata needed for a database migration.

--  Parameters to initiate "Full Transportable Export". These parameter values tell Data Pump to use full transportable rather than conventional export methods.

TRANSPORTABLE=ALWAYS and FULL=Y

-- In addition, there is a special consideration if the COMPATIBLE database initialization parameter of the source database is not set to a value of at least 12.0. This would be true, for example, if you perform a full transportable export from Oracle Database 11g Release 2 (11.2.0.3). In this case, you must also specify the Data Pump parameter VERSION=12 to denote the fact that the result of the export will be imported into an Oracle Database 12c Release 1 (12.1) or later database.

-- Invoking Full Transportable Import

When using a dump file, a file-based full transportable import requires only that you specify the dump file name and the list of user tablespace data files to be transported, using the TRANSPORT_DATAFILES=<datafile_names> parameter. No FULL or TRANSPORTABLE parameters are needed on import, because Oracle Data Pump can determine from the dump file whether it was produced by a conventional or a full transportable export.

--- Without a dump file (importing over NETWORK_LINK)

• FULL=Y TRANSPORTABLE=ALWAYS as specified for a full transportable export
• TRANSPORT_DATAFILES=<datafile_names> as used in a file-based full transportable import
• VERSION=12 if the COMPATIBLE setting for the source database is lower than 12.0. Note that the source database must be Oracle Database 11g Release 2 (11.2.0.3) or higher for a full transportable import
• NETWORK_LINK=<dblink_name> to specify the database link over which the data and metadata
are transferred from the source database.
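
Combining the parameters above, a network-mode full transportable import might look like the following sketch (system/manager, the hr_pdb service, and the source_db database link are placeholder names for illustration):

$ impdp system/manager@hr_pdb \
    network_link=source_db \
    full=y transportable=always version=12 \
    transport_datafiles='/u01/app/oracle/oradata/hr_pdb/hr_101.dbf',\
    '/u01/app/oracle/oradata/hr_pdb/hr_201.dbf' \
    metrics=y logfile=full_tts_imp.log

In network mode no dump file is written; metadata is extracted from the source over the database link while the data files themselves are still copied separately.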


************* Endian Conversion ***********

If the source platform and the target platform are of different endianness, then the data files being transported must be converted to the target platform format using either the RMAN CONVERT command or the GET_FILE or PUT_FILE procedures in the DBMS_FILE_TRANSFER package.
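
For illustration, if the source had been a big-endian platform such as Solaris SPARC, the copied data files could be converted on the destination with RMAN before the import. The staging path, destination path, and platform name below are placeholders for this sketch:

RMAN> CONVERT DATAFILE '/u01/stage/hr_101.dbf'
      FROM PLATFORM 'Solaris[tm] OE (64-bit)'
      FORMAT '/u01/app/oracle/oradata/hr_pdb/%N_%f.dbf';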

Note that encrypted tablespaces cannot be transported to a platform with different endianness.

Example: Using Full Transportable to Move a Non-CDB into a CDB

Source - Oracle 11.2.0.3 running on Oracle Solaris x86
Target - Oracle 12c PDB inside of a CDB running on Oracle Linux

TABLE 1. SOURCE DATABASE TABLESPACES

TABLESPACE NAME   ENCRYPTED?   DATAFILE NAME
HR_1              Yes          /u01/app/oracle/oradata/hr_db/hr_101.dbf
HR_2              No           /u01/app/oracle/oradata/hr_db/hr_201.dbf

For this example we assume that the Oracle SID of the target database is HR_PDB, and that the data files for the target PDB are stored in the directory /u01/app/oracle/oradata/hr_pdb/ on the destination server.

STEP 1: Check the endianness of both platforms

SQL> SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_ID = d.PLATFORM_ID;

Oracle Solaris x86 and Oracle Enterprise Linux have little endian format, so no
endian conversion is necessary.

STEP 2: Verify that the set of tablespaces to be transported is self-contained 

SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('hr_1,hr_2', TRUE);

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

STEP 3: Create a directory object in the source database

SQL> CREATE DIRECTORY dp_dir AS '/u01/app/datafiles';

STEP 4: Place the hr_1 and hr_2 tablespaces in read-only mode 

SQL> ALTER TABLESPACE hr_1 READ ONLY;
SQL> ALTER TABLESPACE hr_2 READ ONLY;
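
The read-only status can be confirmed before running the export (a quick sanity check, not required by the procedure):

SQL> SELECT tablespace_name, status
     FROM   dba_tablespaces
     WHERE  tablespace_name IN ('HR_1','HR_2');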

STEP 5: Invoke full transportable export on the source database

Invoke the Data Pump export utility as a user with the DATAPUMP_EXP_FULL_DATABASE role.

$ expdp system/manager full=y transportable=always version=12 \
directory=dp_dir dumpfile=full_tts.dmp \
metrics=y exclude=statistics \
encryption_password=secret123word456 \
logfile=full_tts_export.log


STEP 6: Transport the tablespace data files and the export dump file from source to target

$ cd /u01/app/oracle/oradata/hr_pdb/
$ cp /net/<source-server>/u01/app/oracle/oradata/hr_db/hr_101.dbf .
$ cp /net/<source-server>/u01/app/oracle/oradata/hr_db/hr_201.dbf .
$ cp /net/<source-server>/u01/app/datafiles/full_tts.dmp .

STEP 7: Create a directory object on the destination database 

SQL> CREATE DIRECTORY dp_dir AS '/u01/app/oracle/oradata/hr_pdb';
SQL> GRANT read, write on directory dp_dir to system;


STEP 8: Invoke full transportable import on the destination database
Invoke the Data Pump import utility as a user with DATAPUMP_IMP_FULL_DATABASE role.

$ impdp system/manager@hr_pdb directory=dp_dir \
dumpfile=full_tts.dmp logfile=full_tts_imp.log \
metrics=y \
encryption_password=secret123word456 \
transport_datafiles='/u01/app/oracle/oradata/hr_pdb/hr_101.dbf',\
'/u01/app/oracle/oradata/hr_pdb/hr_201.dbf'
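
After a transportable import the transported tablespaces remain read-only on the target, so a common follow-up step (assuming the same tablespace names as above) is to return them to read/write:

SQL> ALTER TABLESPACE hr_1 READ WRITE;
SQL> ALTER TABLESPACE hr_2 READ WRITE;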