Wednesday, December 16, 2015

EXADATA Smart Scan Overview

•  Table and index scans: Scans are performed inside Exadata storage rather than transporting all data to the database server.
•  Predicate filtering: Only requested rows are returned to the database server rather than all rows in a table.
•  Column filtering: Only requested columns are returned to the database server rather than all table columns.
•  Join filtering: Join processing using Bloom filters is offloaded to the Exadata storage servers.
•  Smart Scan can also be applied to encrypted and compressed data.

Smart Scan Requirements:
Smart Scan optimization is a run-time decision. Smart Scan is not governed by the optimizer, but it is influenced by the results of query optimization.

•  Query-specific requirements
   --- Smart Scan is only possible for full table scans and index fast full scans.
   --- Can be used only for direct path reads. (A direct read is a physical I/O from a data file that bypasses the buffer cache and reads the data block directly into process-private memory.)
•  Additional general requirements
   --- Smart Scan must be enabled within the database. The CELL_OFFLOAD_PROCESSING parameter controls Smart Scan; its default value is TRUE, and it can be set dynamically using ALTER SESSION or ALTER SYSTEM (see the example after this list).
   --- Each segment being scanned must be on a disk group that is completely stored on Exadata cells.
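For example, Smart Scan can be toggled at the session level and the current setting verified; a minimal sketch:

SQL> show parameter cell_offload_processing
SQL> ALTER SESSION SET cell_offload_processing = FALSE;   -- disable Smart Scan for this session
SQL> ALTER SESSION SET cell_offload_processing = TRUE;    -- re-enable it (the default)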


Situations Preventing Smart Scan

•  Scan on a clustered table
•  Scan on an index-organized table
•  Fast full scan on a compressed index
•  Fast full scan on a reverse key index
•  More than 255 columns are referenced by the query
•  The optimizer wants the scan to return rows in ROWID order
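To confirm whether a given statement was actually eligible for offload, the offload-related columns of V$SQL can be checked after running it on Exadata. A minimal sketch only; the SQL_ID value below is a placeholder:

SQL> SELECT sql_id,
            io_cell_offload_eligible_bytes,   -- bytes eligible for predicate offload
            io_cell_offload_returned_bytes    -- bytes actually returned by Smart Scan
     FROM v$sql
     WHERE sql_id = 'abcd1234efgh5';          -- placeholder SQL_ID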

Thursday, October 29, 2015

Upgrading from Oracle 11gR2 to Oracle 12c using Full Transportable Export/Import

********************************************************************************
**************** Oracle Database 12c: Full Transportable Export/Import *********
********************************************************************************

-- Full transportable export is available starting with Oracle Database 11g Release 2 (11.2.0.3). Full transportable import is available starting with Oracle Database 12c.

-- The Full Transportable Export and Import feature combines the best features of TTS/XTTS and Data Pump. 

-- Full transportable export/import moves all of the system, user, and application metadata needed for a database migration.

--  Parameters to initiate "Full Transportable Export". These parameter values tell Data Pump to use full transportable rather than conventional export methods.

TRANSPORTABLE=ALWAYS and FULL=Y

-- In addition, there is a special consideration if the COMPATIBLE database initialization parameter of the source database is not set to a value of at least 12.0. This would be true, for example, if you perform a full transportable export from Oracle Database 11g Release 2 (11.2.0.3). In this case, you must also specify the Data Pump parameter VERSION=12 to denote the fact that the result of the export will be imported into an Oracle Database 12c Release 1 (12.1) or later database.

-- Invoking Full Transportable Import

When using a dump file, a file-based full transportable import requires only that you specify the dump file name and the list of user tablespace data files to be transported, using the TRANSPORT_DATAFILES=<datafile_names> parameter.
This is because, while importing a dump file, Oracle Data Pump can determine whether the dump file was produced by a conventional or a full transportable export.

--- Without a dump file (importing over NETWORK_LINK), specify the following parameters; a sketch of the resulting command follows this list:

• FULL=Y TRANSPORTABLE=ALWAYS as specified for a full transportable export
• TRANSPORT_DATAFILES=<datafile_names> as used in a file-based full transportable import
• VERSION=12 if the COMPATIBLE setting for the source database is lower than 12.0. Note that the source database must be Oracle Database 11g Release 2 (11.2.0.3) or higher for a full transportable import
• NETWORK_LINK=<dblink_name> to specify the database link over which the data and metadata
are transferred from the source database.
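Putting these together, a network-mode full transportable import might look like the sketch below. The database link name (src_db_link), the connection details, and the data file paths are placeholders for illustration, not values from a real system:

SQL> CREATE DATABASE LINK src_db_link CONNECT TO system IDENTIFIED BY manager USING 'source_db';

$ impdp system/manager full=y transportable=always version=12 \
network_link=src_db_link metrics=y \
transport_datafiles='/u01/app/oracle/oradata/target_db/users01.dbf',\
'/u01/app/oracle/oradata/target_db/users02.dbf' \
directory=dp_dir logfile=full_tts_net_imp.log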


************* Endian Conversion ***********

If the source platform and the target platform are of different endianness, then the data files being transported must be converted to the target platform format using either the RMAN CONVERT command or the GET_FILE or PUT_FILE procedures in the DBMS_FILE_TRANSFER package.

Note that encrypted tablespaces cannot be transported to a platform with different endianness
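For example, if the source were a big-endian platform, the copied data files could be converted on the destination with RMAN CONVERT before the import. This is a sketch only; the platform name string and the file paths are placeholders and must match the values reported by V$TRANSPORTABLE_PLATFORM on your system:

RMAN> CONVERT DATAFILE '/u01/stage/users01.dbf'
      FROM PLATFORM 'Solaris[tm] OE (64-bit)'
      FORMAT '/u01/app/oracle/oradata/target_db/users01.dbf';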

Example: Using Full Transportable to Move a Non-CDB into a CDB

Source - Oracle 11.2.0.3 running on Oracle Solaris x86
Target - Oracle 12c PDB inside of a CDB running on Oracle Linux

TABLE 1. SOURCE DATABASE TABLESPACES

TABLESPACE NAME   ENCRYPTED?   DATAFILE NAME
HR_1              Yes          /u01/app/oracle/oradata/hr_db/hr_101.dbf
HR_2              No           /u01/app/oracle/oradata/hr_db/hr_201.dbf

For this example we assume that the Oracle SID of the target database is HR_PDB, and that the data files for the target PDB are stored in the directory /u01/app/oracle/oradata/hr_pdb/ on the destination server.

STEP 1: Check the endianness of both platforms

SQL> SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_ID = d.PLATFORM_ID;

Oracle Solaris x86 and Oracle Enterprise Linux have little endian format, so no
endian conversion is necessary.

STEP 2: Verify that the set of tablespaces to be transported is self-contained 

SQL> EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('hr_1,hr_2', TRUE);

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

STEP 3: Create a directory object in the source database

SQL> CREATE DIRECTORY dp_dir AS '/u01/app/datafiles';

STEP 4: Place the hr_1 and hr_2 tablespaces in read-only mode 

SQL> ALTER TABLESPACE hr_1 READ ONLY;
SQL> ALTER TABLESPACE hr_2 READ ONLY;

STEP 5: Invoke full transportable export on the source database

Invoke the Data Pump export utility as a user with the DATAPUMP_EXP_FULL_DATABASE role.

$ expdp system/manager full=y transportable=always version=12 \
directory=dp_dir dumpfile=full_tts.dmp \
metrics=y exclude=statistics \
encryption_password=secret123word456 \
logfile=full_tts_export.log


STEP 6: Transport the tablespace data files and the export dump file from source to target

$ cd /u01/app/oracle/oradata/hr_pdb/
$ cp /net/<source-server>/u01/app/oracle/oradata/hr_db/hr_101.dbf .
$ cp /net/<source-server>/u01/app/oracle/oradata/hr_db/hr_201.dbf .
$ cp /net/<source-server>/u01/app/datafiles/full_tts.dmp .

STEP 7: Create a directory object on the destination database 

SQL> CREATE DIRECTORY dp_dir AS '/u01/app/oracle/oradata/hr_pdb';
SQL> GRANT read, write on directory dp_dir to system;


STEP 8: Invoke full transportable import on the destination database
Invoke the Data Pump import utility as a user with the DATAPUMP_IMP_FULL_DATABASE role.

$ impdp system/manager@hr_pdb directory=dp_dir \
dumpfile=full_tts.dmp logfile=full_tts_imp.log \
metrics=y \
encryption_password=secret123word456 \
transport_datafiles='/u01/app/oracle/oradata/hr_pdb/hr_101.dbf',\
'/u01/app/oracle/oradata/hr_pdb/hr_201.dbf' 





Friday, December 5, 2014

Checking Optimizer Compatibility

Sometimes the database version and its COMPATIBLE setting do not match. This is usually the result of an in-place upgrade or patch, and it can cause the production server to behave differently from the development and test servers.

To check the compatibility, you can run the PL/SQL block below.

set serveroutput on
DECLARE
  ver    VARCHAR2(30);
  compat VARCHAR2(30);
BEGIN
  dbms_utility.db_version(ver, compat);
  dbms_output.put_line('Version: ' || ver || ' compatible: ' || compat);
END;
/
Version: 11.2.0.4.0 compatible: 11.2.0.0.0

PL/SQL procedure successfully completed.
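It can also help to compare the optimizer-related settings directly between environments; if OPTIMIZER_FEATURES_ENABLE has been pinned to an older release on one server, plans can differ even when COMPATIBLE matches. A quick check:

SQL> show parameter compatible
SQL> show parameter optimizer_features_enable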


Thursday, November 27, 2014

Troubleshooting Log File Sync Waits

If you notice a very large number of waits with a short average wait time per wait, that’s an indication that too many commit statements are being issued by the database. You must change the commit behavior by batching the commits.
Instead of committing after each row, for example, you can specify that the commits occur after every 500 rows.
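As an illustration of batching, the sketch below commits once per 500 rows instead of after every row. The table and columns (order_audit) are made up for the example:

BEGIN
  FOR i IN 1 .. 100000 LOOP
    INSERT INTO order_audit (id, created) VALUES (i, SYSDATE);
    IF MOD(i, 500) = 0 THEN
      COMMIT;      -- commit once per 500 rows instead of per row
    END IF;
  END LOOP;
  COMMIT;          -- final commit for any remaining rows
END;
/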

If you notice that the large amount of wait time accumulated due to the redo log file sync event was caused by long waits for writing to the redo log file (high average time waited for this event), it’s more a matter of how fast your I/O subsystem is. You can alternate the redo log files on various disks to reduce contention. You can also see if you can dedicate disks entirely for the redo logs instead of allowing other files on those disks—this will reduce I/O contention when the LGWR is writing the buffers to disk. Finally, as a long-term solution, you can look into placing redo logs on faster devices, say, by moving them from a RAID 5 to a RAID 1 device.

How It Works

Oracle (actually the LGWR background process) automatically flushes a session’s redo information to the redo log file whenever a session issues a COMMIT statement. The database writes commit records to disk before it returns control to the client. The server process thus waits for the completion of the write to the redo log. 


The log file sync wait event includes the wait during the writing of the log buffer to disk by LGWR and the posting of that information to the session. The server process will have to wait until it gets confirmation that the LGWR process has completed writing the log buffer contents out to the redo log file.

The log file sync events are caused by contention during the writing of the log buffer contents to the redo log files. Check the V$SESSION_WAIT view to ascertain whether Oracle is incrementing the SEQ# column for the waiting session. If the SEQ# column is not incrementing, the wait is stuck and the LGWR process is the likely culprit, as it may be hung.
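For example, the waiting sessions can be sampled as shown below; if SEQ# stays the same across repeated runs for a session on this event, the wait is stuck and LGWR is the likely suspect:

SQL> SELECT sid, seq#, event, state, seconds_in_wait
     FROM v$session_wait
     WHERE event = 'log file sync';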

P1 = buffer#
All changes up to this buffer number (in the log buffer) must be flushed to disk and the writes confirmed to ensure that the transaction is committed and will be kept on an instance crash. The wait is for LGWR to flush the log buffer up to this buffer#.

Because the log file sync wait event is caused by contention around the LGWR process, see if you can use the NOLOGGING option to get rid of these waits. Of course, in a production system you can't use the NOLOGGING option while the database is processing user requests, so this option is of limited use in most cases.

Troubleshooting Buffer Busy Wait Events

The buffer busy waits event occurs when an Oracle session needs to access a block in the buffer cache but cannot, because the buffer copy of the data block is locked.

What causes the buffer busy wait event 

1. The block is being read into the buffer by another session, so the waiting session must wait for the block read to complete.

2. Another session has the buffer block locked in a mode that is incompatible with the waiting session's request.

While the block is being changed, the block is marked as unreadable by others. The changes that are being made should last under a few hundredths of a second, for example, a disk read should be under 20 milliseconds and a block modification should be under one millisecond. Therefore it will take a lot of buffer busy waits to cause a problem, but some examples of this are:

1. Hot block issue, such as the first block on the free list of a table, with high concurrent inserts. All users will insert into that block at the same time, until it fills up, then users start inserting into the next free block on the list, and so on.
2. Multiple users running an inefficient SQL statement performing a full table scan on the same large table at the same time. One user will read the block off disk, and the other users will wait on buffer busy waits (or read by other session in 10g and higher) for the physical I/O to complete.

To find more information about buffer busy waits, query V$SESSION_WAIT:

col event for a30
select event,p1 file#,p2 block#
from v$session_wait
where sid in
(select sid from v$session where username='&username')
/

To get more information about the SQL statement being executed and the block being waited for, trace the session or gather data from V$SESSION and V$SESSION_WAIT:

SELECT s.sql_hash_value, sw.p1 file#, sw.p2 block#, sw.p3 reason
FROM v$session_wait sw, v$session s
WHERE sw.event = 'buffer busy waits'
AND sw.sid = s.sid
/

P1 stands for File id 
P2 stands for Block id
P3 stands for reason code


Now determine the object being waited for, using the P1 (file_number) and P2 (block_number) values from the query above:

col owner for a10
col segment_name for a15
select owner, segment_name
from dba_extents
where file_id = &p1
and &p2 between block_id and block_id+blocks-1
/

Troubleshooting buffer busy waits

Now that you know the database object for this wait, consider the following causes of contention and their solutions.

Undo Header – If using Automatic Undo Management, increase the size of the undo tablespace. 

Data Block- Data blocks are the blocks that hold the row data in a table or index. The typical problem is that multiple sessions are requesting a block that is either not in cache or in an incompatible mode (read by other session in 10g and higher).

Tune inefficient queries that read too many blocks into the buffer cache. These queries could flush out blocks that may be useful for other sessions in the buffer cache. By tuning queries, the number of blocks that need to be read into the cache is minimized, reducing aging out of the existing “good” blocks in the cache.

Resolve Hot Blocks – If the queries above consistently return the same block or set of blocks, this is considered a hot block scenario. Delete some of the hot rows and insert them back into the table. Most of the time, the rows will be placed in a different block. The DBA may need to adjust PCTFREE and/or PCTUSED to ensure the rows are placed into a different block. Also talk with the developers to understand why a set of blocks is hot.
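A hedged sketch of such an adjustment is shown below; the table name and the PCTFREE value are illustrative only, and the new setting applies to blocks used for future inserts:

SQL> ALTER TABLE scott.hot_orders PCTFREE 40;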

Place Table in Memory – Cache the table or keep the table in the KEEP pool. When multiple sessions request blocks that reside on disk, it takes too much time for a session to read them into the buffer cache, and other sessions that need the same block will register 'buffer busy wait'. If the block is already in the buffer cache, however, this possibility is eliminated. Another alternative is to increase the buffer cache size; a larger buffer cache means less I/O from disk, which reduces situations where one session is reading a block from the disk subsystem and other sessions are waiting for the block.
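For example, assuming a KEEP pool has been sized with DB_KEEP_CACHE_SIZE, a hot table can be assigned to it (the table name and the size are illustrative only):

SQL> ALTER SYSTEM SET db_keep_cache_size = 256M;
SQL> ALTER TABLE scott.hot_orders STORAGE (BUFFER_POOL KEEP);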

Fix Low Cardinality Indexes – Look for ways to reduce the number of low cardinality indexes, i.e. indexes with a low number of unique values, which can result in excessive block reads. This can be especially problematic when concurrent DML operates on a table with low cardinality indexes and causes contention on a few index blocks.

Data Segment Header- Each segment has one header block that contains segment information, e.g. free and available block details and the highwater mark. At times, this block can be a point of contention when multiple sessions are attempting to insert/delete into/from the same table.

Friday, August 23, 2013

Using DBMS_STATS to backup and restore statistics

1) Check last analyzed date of scott schema tables

SQL> select table_name, last_analyzed from DBA_TABLES WHERE OWNER='SCOTT';

TABLE_NAME                     LAST_ANAL
------------------------------ ---------
TEMP                           18-AUG-13
SALGRADE                       18-AUG-13
BONUS                          18-AUG-13
EMP                            18-AUG-13
DEPT                           18-AUG-13


2) Create a stat table in the USERS tablespace

SQL> exec dbms_stats.create_stat_table(ownname => 'SCOTT', stattab => 'stats_bkp_scott', tblspace => 'USERS');

PL/SQL procedure successfully completed.


3) Take a backup of the SCOTT schema statistics into the stat table

SQL> exec dbms_stats.export_schema_stats(ownname => 'SCOTT', stattab => 'stats_bkp_scott');

PL/SQL procedure successfully completed.


4) Gather the statistics

SQL> exec dbms_stats.gather_table_stats(ownname=>'SCOTT', tabname=>'EMP', cascade=>true, method_opt => 'for all indexed columns',granularity =>'all',estimate_percent=> 30,degree=>12);

PL/SQL procedure successfully completed.

5) Check the last analyzed date of tables

SQL> select table_name, last_analyzed from DBA_TABLES WHERE OWNER='SCOTT';

TABLE_NAME                     LAST_ANAL
------------------------------ ---------
STATS_BKP_SCOTT
DEPT                           23-AUG-13
EMP                            23-AUG-13
BONUS                          18-AUG-13
SALGRADE                       18-AUG-13
TEMP                           18-AUG-13

6 rows selected.

6) Import/revert the statistics of one or two tables from the backup

SQL> exec dbms_stats.import_table_stats(ownname=>'SCOTT', tabname=>'EMP', statown=>'SCOTT', stattab=>'stats_bkp_scott', cascade=>true);

PL/SQL procedure successfully completed.

7) Check the last analyzed date of the tables

SQL>  select table_name, last_analyzed from DBA_TABLES WHERE OWNER='SCOTT';

TABLE_NAME                     LAST_ANAL
------------------------------ ---------
STATS_BKP_SCOTT
TEMP                           18-AUG-13
SALGRADE                       18-AUG-13
BONUS                          18-AUG-13
EMP                            18-AUG-13
DEPT                           23-AUG-13

6 rows selected.

8) Revert the statistics of the whole schema from the backup

SQL> EXECUTE DBMS_STATS.IMPORT_SCHEMA_STATS ('SCOTT','stats_bkp_scott');

PL/SQL procedure successfully completed.


9) Check the last analyzed date of the tables

SQL> select table_name, last_analyzed from DBA_TABLES WHERE OWNER='SCOTT';

TABLE_NAME                     LAST_ANAL
------------------------------ ---------
STATS_BKP_SCOTT
DEPT                           18-AUG-13
EMP                            18-AUG-13
BONUS                          18-AUG-13
SALGRADE                       18-AUG-13
TEMP                           18-AUG-13

6 rows selected.
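Once the restored statistics have been verified, the backup statistics table can optionally be dropped, for example:

SQL> exec dbms_stats.drop_stat_table(ownname => 'SCOTT', stattab => 'stats_bkp_scott');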




Tuesday, March 12, 2013

Using RMAN Incremental Backups to Roll Forward a Physical Standby Database

In this article, we will show the steps that can be used to resolve problems if a physical standby database has lost or corrupted archived redo data or has an unresolvable archive gap.



1). Check the CURRENT_SCN on Primary and Standby

--On Primary
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
     871303

--On Standby
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
     859798

2). On the standby database, stop the Managed Recovery Process.
SQL> alter database recover managed standby database cancel;

3). Take "FROM SCN" backup of Primary Database
RMAN> backup incremental from scn 859790 database format '/u01/stnd_backp_%U.bak';

4). Create Standby Controlfile at Primary:
SQL> alter database create standby controlfile as '/u01/stnd_cntl.ctl';

5). Copy the backup files and the standby control file to the standby server.

scp oracle@192.168.195.128:/u01/stnd* /u01/

On the standby, copy the control file to its locations:

cp stnd_cntl.ctl /u01/app/oracle/oradata/HCLDB/control01.ctl
cp stnd_cntl.ctl /u01/app/oracle/flash_recovery_area/HCLDB/control02.ctl

6). Mount the standby database using backup controlfile.
SQL> startup nomount
SQL> alter database mount standby database;

7). Catalog the backup pieces that you copied from the primary server to the standby server, using the commands below on the standby server.

RMAN> catalog start with '/u01/stnd_backp_0fo4apn7_1_1.bak';
RMAN> catalog start with '/u01/stnd_backp_0go4apov_1_1.bak';

8). Recover the standby database with the cataloged incremental backup.
RMAN> recover database noredo;

**You must also specify NOREDO if the online logs are available but the redo cannot be applied to the incrementals.
**If you do not specify NOREDO, then RMAN searches for redo logs after applying the incremental backup, and issues an error message when it does not find them.

9). Now check whether the standby database is properly synced.
- On Primary database
SQL> select max(sequence#) from v$archived_log;

- On Standby database
SQL> select max(sequence#) from v$log_history;

10) On the standby database, start the MRP process.
SQL> alter database recover managed standby database disconnect from session;
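To confirm that managed recovery has resumed and redo is being applied, the MRP status can be checked on the standby, for example:

SQL> select process, status, sequence# from v$managed_standby where process like 'MRP%';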